\section{Introduction} \label{sec:intro} Fast radio bursts (FRBs) are luminous millisecond radio pulses with large dispersion measures (DMs), well in excess of the Milky Way contribution. More than one hundred FRBs have been reported since \citet{2007Sci...318..777L} discovered the first one in archival data. Among them, only thirteen FRBs have been localized \citep{2017Natur.541...58C, 2019Sci...365..565B, 2019Natur.572..352R, 2019Sci...366..231P, 2020Natur.577..190M, 2020Natur.581..391M}. FRBs can be divided into two types, repeating and non-repeating, and their host galaxy properties may differ. The DM is defined as the column density of free electrons along a given line of sight (LoS). The observed DM is usually divided into several parts \begin{equation} \label{eq:DM} DM = DM_{\rm{MW}} + DM_{\rm{halo}} + DM_{\rm{IGM}} + \frac{DM_{\rm{host}} + DM_{\rm{source}}}{1 + z}. \end{equation} In the above equation, $DM_{\rm MW}$ is the contribution of the interstellar medium in the Milky Way, which can be derived from the NE2001 Galactic free electron density model \citep{2002astro.ph..7156C} or the YMW model \citep{2017ApJ...835...29Y}. $DM_{\mathrm{halo}}$ is contributed by the free electrons in the Galactic halo. \citet{2019MNRAS.485..648P} estimated that 50 pc cm$^{-3}$ \textless \ $DM_{\mathrm{halo}}$ \textless \ 80 pc cm$^{-3}$. Recently, \citet{2020ApJ...888..105Y} estimated that the mean $DM_{\rm halo}$ is 43 pc cm$^{-3}$, with a full range of 30--245 pc cm$^{-3}$. \citet{2019zhang} derived the host contribution $DM_{\rm host}$ distributions of repeating and non-repeating FRBs with the IllustrisTNG simulation. They found that the distributions of $DM_{\mathrm{host}}$ can be well fitted by a log-normal function. For non-repeating FRBs, the median of $DM_{\mathrm{host}}$ is about $30-70$ pc cm$^{-3}$ in the redshift range $z=0.1-1.5$. The $DM_{\mathrm{source}}$ depends on the central engine of FRBs.
If FRBs are generated by mergers of binary neutron stars \citep{2016ApJ...822L...7W, 2020ApJ...890L..24Z}, the value of $DM_{\mathrm{source}}$ is small \citep{2020ApJ...891...72W,Zhao2020}. The DM contributed by the intergalactic medium (IGM) is an important cosmological probe \citep{2014PhRvD..89j7303Z,2014ApJ...788..189G, 2016PhRvL.117i1301M, 2017A&A...606A...3Y, 2018A&A...614A..50W, 2018ApJ...856...65W, 2019MNRAS.484.1637J, 2019JCAP...09..039W, 2019ApJ...876..146L,2020ApJ...895...33W,Zhao2020a}. By assuming a cosmic reionization history, the value of $DM_{\mathrm{IGM}}$ can be derived theoretically from \citep{2003ApJ...598L..79I, 2004MNRAS.348..999I, 2014ApJ...783L..35D} \begin{equation}\label{DM_{igm}} DM_{\rm IGM}(z) = \frac{3c\Omega_{\rm b}H_0}{8\pi Gm_{\rm p}} \int^z_0\frac{ (1+z^\prime)f_{\rm IGM}(z^\prime)f_{\rm e}(z^\prime)}{E(z^\prime)}{\rm d}z^\prime, \end{equation} where $E(z)=H(z)/H_{0}$, $H(z)$ is the Hubble parameter, $H_0$ is the Hubble constant, $m_{\rm p}$ is the proton mass, $\Omega_{\rm b}=0.0486$ is the baryon density parameter, and $f_{\rm IGM}$ is the fraction of baryon mass in the IGM. The electron fraction is $f_{\rm e} = Y_{\rm H}X_{\rm e,H}(z)+\frac{1}{2}Y_{\rm He}X_{\rm e,He}(z)$, where $Y_{\rm H}=3/4$ and $Y_{\rm He}=1/4$ are the mass fractions of hydrogen and helium, respectively, and $X_{\rm e,H}$ and $X_{\rm e,He}$ are the ionization fractions of intergalactic hydrogen and helium. Different from previous theoretical investigations, which derived the extragalactic DM assuming a homogeneous universe \citep{2003ApJ...598L..79I,2004MNRAS.348..999I}, \citet{2014ApJ...780L..33M} considered the effect of inhomogeneity with three models for the halo gas profile of the ionized baryons. \citet{2015MNRAS.451.4277D}, \citet{2019ApJ...886..135P} and \citet{Zhu2020} studied $DM_{\rm IGM}$ with different cosmological simulations in the low-redshift (\textit {z} \textless \ 2) universe.
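As a consistency check, equation (\ref{DM_{igm}}) can be integrated numerically. Below is a minimal Python sketch (assuming a constant $f_{\rm IGM}=0.82$, a fully ionized IGM with $f_{\rm e}=7/8$, and the cosmological parameters quoted in this paper; the function name is ours):

```python
import numpy as np

# Physical constants (SI) and the Planck cosmology adopted by IllustrisTNG
c, G, m_p = 2.998e8, 6.674e-11, 1.673e-27
H0 = 67.74e3 / 3.086e22                  # 67.74 km/s/Mpc in s^-1
Omega_b, Omega_m, Omega_L = 0.0486, 0.3089, 0.6911

def E(z):
    return np.sqrt(Omega_m * (1 + z)**3 + Omega_L)

def dm_igm_theory(z_s, f_igm=0.82, f_e=7.0 / 8.0, n=10000):
    """Mean DM_IGM(z_s) from the equation above, assuming constant f_IGM
    and a fully ionized IGM; returns pc cm^-3."""
    z = np.linspace(0.0, z_s, n)
    g = (1 + z) * f_igm * f_e / E(z)
    integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(z))    # trapezoid rule
    prefactor = 3 * c * Omega_b * H0 / (8 * np.pi * G * m_p)  # in m^-2
    return prefactor * integral * 1e-6 / 3.086e16             # m^-2 -> pc cm^-3
```

With these assumptions the sketch gives $DM_{\rm IGM}(z=1)\approx 886$ pc cm$^{-3}$, of the same order as the values quoted for $z=1$ elsewhere in this paper.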
\citet{2019MNRAS.484.1637J} used the Illustris simulation to estimate $DM_{\rm IGM}$ and its scatter in the \textit {z} \textless \ 5 universe. However, the simulation accuracy can be improved with the latest IllustrisTNG simulation. Another advantage of IllustrisTNG is that it provides the electron density directly, instead of converting the dark matter particle number density to the baryonic matter density as in \citet{2019ApJ...886..135P}; this complicated conversion may introduce extra uncertainties. Besides, IllustrisTNG covers a wide redshift range. Considering these advantages, we choose the IllustrisTNG simulation (the successor of Illustris), which offers both high accuracy and large-scale structures in the range 0 \textless \ \textit {z} \textless \ 20. Such a broad redshift range enables us to examine the prospect of constraining the cosmic reionization history with high-redshift FRBs. It also gives us a chance to check whether FRBs can serve as a new type of standard candle besides supernovae, which is crucial for distance measurement. In this paper, we use the IllustrisTNG simulation to study $DM_{\rm IGM}$ and its cosmological applications, especially in the high-redshift universe. The outline is as follows. An introduction to the IllustrisTNG simulation and our method to derive $DM_{\rm IGM}$ from the simulation are given in Section \ref{sec:method}. We present the results in Section \ref{sec:result}. The scenario of constraining the cosmic reionization history with FRBs is discussed in Section \ref{sec:dete}. We estimate the redshifts of non-localized FRBs with the $DM_{\rm IGM}-z$ relation in Section \ref{sec:est}. Conclusions are given in Section \ref{sec:con}. \section{Methods} \label{sec:method} \subsection{Data access to IllustrisTNG} \begin{figure*}[htb!] \centering \includegraphics[width=0.8\linewidth]{pipe.pdf} \caption{A schematic diagram of choosing LoS.
There are 5125 pipes at each redshift, as shown in this figure.} \label{pipe} \end{figure*} The IllustrisTNG project, a successor of Illustris, consists of three large-volume, cosmological, gravo-magnetohydrodynamical simulations \citep{2018MNRAS.475..648P,2018MNRAS.475..676S, 2018MNRAS.475..624N, 2018MNRAS.480.5113M, 2018MNRAS.477.1206N}, namely TNG50, TNG100, and TNG300. The number represents the box size of each simulation in units of cMpc (`c' for comoving, similarly hereinafter). Each simulation contains several runs with different resolutions, and the final results are stored in 600 HDF5 files. According to the cosmological principle, TNG300 is the best choice for our research, and TNG300-1 is chosen among the three runs for its best resolution. There are 100 snapshots at different redshifts stored in each run, and each snapshot has 15,625,000,000 Voronoi gas cells in total. Each cell corresponds to a particle, and the physical parameters given by IllustrisTNG for that particle represent the whole cell. Among the 100 snapshots, 20 are `full' and 80 are `mini', which only lack some particle fields. We use all the full snapshots at \textit{z} \textless \ 10 and several `mini' snapshots for better accuracy. The web-based JupyterLab Workspace and high-performance computing resources provided by IllustrisTNG are used in this work. \subsection{Dispersion of IGM} For an FRB at redshift \textit {z}$_{s}$, the $DM_{\rm IGM}$ can be written as \begin{equation}\label{...} DM_{\rm IGM}(z_s)=\int_{0}^{z_{s}} \frac{n_{\rm e}(z)}{1+z} {\rm d}l_{\rm prop}, \end{equation} where $n_{\rm e}(z)$ is the proper electron number density and ${\rm d}l_{\rm prop}$ is the differential proper distance. Then we use the redshift differential ${\rm d}z$ to express ${\rm d}l_{\rm prop}$ as \begin{equation}\label{conv} {\rm d}l_{\rm prop}=\frac {c}{H_{\rm 0}(1+z)E(z)}{\rm d}z, \end{equation} where \begin{equation}\label{...} E(z)=\sqrt{\Omega_{\rm m}(1+z)^3+\Omega_{\rm k}(1+z)^2+\Omega_{\rm \Lambda}}.
\end{equation} So the $DM_{\rm IGM}$ can be rewritten as \begin{equation}\label{dm_final} DM_{\rm IGM}(z_{\rm s})=\frac {c}{H_0}\int_{0}^{z_{s}} \frac{n_{\rm e}(z)}{(1+z)^2E(z)}{\rm d}z. \end{equation} The cosmological parameters are taken as $\Omega_{\rm m}=0.3089$, $\Omega_{\Lambda}=0.6911$, and $H_0=67.74\ \rm{km\ s^{-1}\ Mpc^{-1}}$, the same as those used by the IllustrisTNG simulation \citep{2018MNRAS.475..648P}. No cosmological simulation provides a continuous evolution of the universe; the output is stored as snapshots at discrete redshifts. Therefore, equation (\ref{dm_final}) cannot be applied directly. In practice, \citet{2019MNRAS.484.1637J} and \citet{2019ApJ...886..135P} clipped and stacked the snapshots to construct the LoS. Here we solve the problem in another way. The basic idea is to convert the integral into a summation. If $n_{\rm e}(z_{\rm i})$ is available, the ${\rm d}DM_{\rm IGM}/{\rm d}z$ at a specific redshift ${z_{\rm i}}$ (${z_{\rm i}}$ = 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 1, $\cdots$) can be derived from \begin{equation}\label{dd} \frac{{\rm d}DM_{\rm IGM}}{{\rm d}z}\bigg|_{z={z_{\rm i}}}= \frac {c}{H_0} \frac{n_{\rm e}(z_{\rm i})}{(1+z_{\rm i})^2E(z_{\rm i})}. \end{equation} Then we use \begin{small} \begin{eqnarray}\label{} &&DM_{\rm IGM}(z_{\rm i+1})= DM_{\rm IGM}(z_{\rm i})+\nonumber\\ &&\frac{1}{2}(\frac{{\rm d}DM_{\rm IGM}}{{\rm d}z}\bigg|_{{z_{\rm i}}}+\frac{{\rm d}DM_{\rm IGM}}{{\rm d}z}\bigg|_{{z_{\rm i+1}}})(z_{\rm i+1}-z_{\rm i}) \end{eqnarray} \end{small} to calculate $DM_{\rm IGM}$ with the initial condition $DM_{\rm IGM}(z)\bigg|_{z=0}=0$. The next subsection describes how we obtain the electron density. \subsection{Calculations of $DM_{\rm IGM}$} The IllustrisTNG snapshot data are not organized by spatial position. In order to obtain the average electron density along a given LoS, we traverse all 15,625,000,000 particles and select the particles belonging to the given LoS.
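The trapezoidal accumulation described above is simple to implement; a minimal sketch (assuming the ${\rm d}DM_{\rm IGM}/{\rm d}z$ samples are already evaluated on the snapshot redshift grid):

```python
import numpy as np

def accumulate_dm(z_grid, ddm_dz):
    """Accumulate dDM_IGM/dz samples into DM_IGM(z_i) with the trapezoid
    rule, starting from the initial condition DM_IGM(0) = 0."""
    dm = np.zeros(len(z_grid))
    for i in range(1, len(z_grid)):
        dm[i] = dm[i - 1] + 0.5 * (ddm_dz[i - 1] + ddm_dz[i]) \
                          * (z_grid[i] - z_grid[i - 1])
    return dm
```

The rule is exact whenever ${\rm d}DM_{\rm IGM}/{\rm d}z$ varies linearly between neighbouring snapshots.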
For computational simplicity, the LoS is chosen to be parallel to the \textit{x} axis, similar to \citet{2019MNRAS.484.1637J}. Then we construct 5125 square pipes with a side length of 200 ckpc/$h$ ($H_{\rm 0}= 100\ h\ \rm km\ s^{-1}\ Mpc^{-1}$) in each snapshot and find the particles as well as the necessary parameters (including \textit{Coordinates, Density, ElectronAbundance, GFM\_Metals and StarFormationRate}) in the pipes (see Figure \ref{pipe}). The 5125 pipes are chosen randomly from different locations at 24 snapshots, and a Kolmogorov--Smirnov test shows that the sample size is representative. The electron density can be calculated by \begin{equation} n_{\rm e}(z)_{\rm prop}=\eta_{\rm e} X_{\rm H} \frac {\rho }{m_{\rm p}} (1+z)^3, \label{ne} \end{equation} where $\eta_{\rm e}$ is the electron abundance, $X_{\rm H}$ is the hydrogen mass abundance, $\rho$ is the gas density, and $m_{\rm p}$ is the proton mass. The particle coordinates are used to select the particles in each pipe. The factor $(1+z)^3$ converts the density from the simulation comoving units into proper units. \begin{figure*}[htb!] \centering \includegraphics[width=1\linewidth]{fig1.pdf} \caption{$DM_{\rm IGM}$ distributions from $z= 0.1$ to $9$. Dashed lines are $DM_{\rm IGM}$ distributions derived from the IllustrisTNG simulation and solid lines are the fitting results using equation (\ref{pIGM}). The best-fitting parameters are shown in Table \ref{para}. At $z>6$, the distributions overlap as shown in panel (e), which indicates that the universe is not yet ionized at such high redshifts.} \label{dis} \end{figure*} Equation (\ref{ne}) cannot be used for star-forming gas because the calculation is based on the `effective' temperature of the equation of state, which is not a physical temperature\footnote{\url{https://www.tng-project.org/data/docs/specifications/}}. Therefore, we use the parameter \textit{StarFormationRate} to exclude the star-forming gas.
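Equation (\ref{ne}) combined with a nearest-particle average along a pipe can be sketched as follows (a simplified stand-in for our procedure: inputs are assumed to be already converted to cgs units, and SciPy's \texttt{cKDTree} replaces the brute-force distance computation):

```python
import numpy as np
from scipy.spatial import cKDTree

M_P = 1.673e-24   # proton mass [g]

def electron_density_proper(eta_e, x_h, rho_comoving, z):
    """Proper electron number density, equation (9); rho_comoving in g/cm^3."""
    return eta_e * x_h * rho_comoving / M_P * (1 + z)**3

def mean_ne_along_los(coords, n_e, x_min, x_max, y0, z0, n_bins=10000):
    """Average n_e along a LoS parallel to the x axis at (y0, z0): divide
    the pipe into n_bins bins and let each bin adopt the n_e of its nearest
    particle (a nearest-neighbour approximation to the Voronoi tessellation)."""
    x = x_min + (np.arange(n_bins) + 0.5) * (x_max - x_min) / n_bins
    pts = np.column_stack([x, np.full(n_bins, y0), np.full(n_bins, z0)])
    _, idx = cKDTree(coords).query(pts)
    return n_e[idx].mean()
```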
For comparison, star-forming particles are also excluded in \citet{2015MNRAS.451.4277D}, and \citet{2019MNRAS.484.1637J} excluded gas cells belonging to haloes where the dark matter density is 15 times larger than the cosmic critical density. Obtaining the electron density field along the LoS analytically is hard, since it requires calculating the boundaries of all the cells; instead, we determine the cell to which any point on a LoS belongs. We divide the pipe into 10,000 bins along the \textit x axis and take the geometric center coordinates to represent each bin. The distances between each bin and each particle in the pipe are calculated; we choose the nearest particle of each bin and assume the bin belongs to the cell of that particle. We average the electron density over the 10,000 bins and substitute it into equation (\ref{dd}). As a result, 5125 values of ${\rm d}DM_{\rm IGM}/{\rm d}z$ are obtained at each redshift. Ten million $DM_{\rm IGM}-z$ relations are built by randomly selecting ${\rm d}DM_{\rm IGM}/{\rm d}z$ values from $z=0.1$ to 9. \section{Result} \label{sec:result} The redshifts of the snapshots other than $z=0$ are shown in Table \ref{para}. The distributions of $DM_{\rm IGM}$ (from 0 to $z$, similarly hereafter) at different redshifts are shown in Figure \ref{dis}. The DM distributions are fitted with \citep{2014ApJ...780L..33M, 2019MNRAS.485..648P, 2020Natur.581..391M}: \begin{equation}\label{pIGM} p_{\rm IGM}(\Delta)=A\Delta^{-\beta}\exp[-\frac{(\Delta^{-\alpha}-C_0)^2}{2\alpha^2\sigma_{\rm DM}^2}], \quad \Delta \textgreater 0, \end{equation} where $\Delta = DM_{\rm IGM}/\langle DM_{\rm IGM}\rangle$ and $\beta$ is related to the inner density profile of the gas in halos. We take $\alpha = \beta =3$, the same as \citet{2020Natur.581..391M}. $\sigma_{\rm DM}$ is an effective standard deviation. $C_0$, which affects the horizontal position, is the remaining parameter to be fitted. The fitting results are shown in Table \ref{para}. \begin{table}[htp!]
\begin{tabular}{cccc}
\tableline
$z$ & $A$ & $C_0$ & $\sigma_{\rm DM}$ \\
\tableline
0.1 & 0.04721 & -13.17 & 2.554 \\
0.2 & 0.005693 & -1.008 & 1.118 \\
0.3 & 0.003584 & 0.596 & 0.7043 \\
0.4 & 0.002876 & 1.010 & 0.5158 \\
0.5 & 0.002423 & 1.127 & 0.4306 \\
0.7 & 0.001880 & 1.170 & 0.3595 \\
1 & 0.001456 & 1.189 & 0.3044 \\
1.5 & 0.001098 & 1.163 & 0.2609 \\
2 & 0.0009672 & 1.162 & 0.2160 \\
2.4 & 0.0009220 & 1.142 & 0.1857 \\
3 & 0.0008968 & 1.119 & 0.1566 \\
3.5 & 0.0008862 & 1.104 & 0.1385 \\
4 & 0.0008826 & 1.092 & 0.1233 \\
4.4 & 0.0008827 & 1.084 & 0.1134 \\
5 & 0.0008834 & 1.076 & 0.1029 \\
5.2 & 0.0008846 & 1.073 & 0.09918 \\
5.5 & 0.0008863 & 1.070 & 0.09481 \\
5.8 & 0.0008878 & 1.067 & 0.09072 \\
6 & 0.0008881 & 1.066 & 0.08971 \\
6.5 & 0.0008881 & 1.066 & 0.08960 \\
7 & 0.0008881 & 1.066 & 0.08952 \\
8 & 0.0008881 & 1.066 & 0.08944 \\
9 & 0.0008881 & 1.066 & 0.08941 \\
\tableline
\end{tabular}
\caption{Redshifts of the snapshots and fitting parameters of the $DM_{\rm IGM}$ distributions in Figure \ref{dis}. \label{para}}
\end{table}
The asymmetric distributions clearly have long tails at high $DM_{\rm IGM}$, so we choose the most probable value for analysis, as also done by \citet{2015MNRAS.451.4277D} and \citet{2019ApJ...886..135P}. We find $DM_{\rm IGM}(z=1) = 892^{+721}_{-270}$ pc cm$^{-3}$ (errors represent the 95\% confidence level). \citet{2003ApJ...598L..79I} and \citet{2004MNRAS.348..999I} predicted $DM_{\rm IGM}(z=1) \sim 1200$ pc cm$^{-3}$. \citet{2018ApJ...867L..21Z} predicted $DM_{\rm IGM}(z=1) \sim 855 \pm 345$ pc cm$^{-3}$. \citet{2019MNRAS.484.1637J} found $DM_{\rm IGM}(z=1) \sim 905 \pm 115$ pc cm$^{-3}$ (errors represent the $1\sigma$ standard deviation). \citet{2019ApJ...886..135P} derived $DM_{\rm IGM}(z=1) \sim 800^{+7000}_{-170}$ pc cm$^{-3}$ with uniform weighting and $DM_{\rm IGM}(z=1) \sim 960^{+350}_{-160}$ pc cm$^{-3}$ with weighting by the matter distribution (errors represent the 95\% confidence level).
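The fit of equation (\ref{pIGM}) can be reproduced with \texttt{scipy.optimize.curve\_fit}. The sketch below fixes $\alpha=\beta=3$ and fits mock lognormal $\Delta$ samples standing in for the simulated distributions (the mock data and starting values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

ALPHA = BETA = 3.0   # fixed, following Macquart et al. (2020)

def p_igm(delta, A, C0, sigma_dm):
    """Equation (10), with Delta = DM_IGM / <DM_IGM>."""
    return A * delta**(-BETA) * np.exp(
        -(delta**(-ALPHA) - C0)**2 / (2 * ALPHA**2 * sigma_dm**2))

# Mock Delta samples standing in for the simulated DM_IGM distribution
rng = np.random.default_rng(0)
delta = rng.lognormal(0.0, 0.2, 100000)
hist, edges = np.histogram(delta, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
(A, C0, sigma_dm), _ = curve_fit(p_igm, centers, hist, p0=[0.5, 1.0, 0.2])
```

For this mock distribution the fit recovers $C_0$ close to 1 and $\sigma_{\rm DM}$ close to the lognormal width.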
In Figure \ref{c_other}, we show the above $DM_{\rm IGM}-z$ relations. Our result is shown as a blue solid line with its 95\% confidence region (blue region). The $DM_{\rm IGM}$ estimated in our work is consistent with the others at the 95\% confidence level, including that of \citet{2019ApJ...886..135P}. The difference between \citet{2019ApJ...886..135P} and our result may be caused by the conversion from the dark matter number density to the free electron density in the MareNostrum Instituto de Ciencias del Espacio Onion Universe simulation. A non-negligible systematic error may come from the different cosmological parameters used by these simulations. For example, Illustris uses the cosmological parameters from the WMAP-9 measurements \citep{2014Natur.509..177V}, while IllustrisTNG uses those from \textit{Planck} \citep{2018MNRAS.475..648P}. \begin{figure*}[htb!] \centering \includegraphics[width=0.8\linewidth]{fig2.pdf} \caption{$DM_{\rm IGM}-z$ relation at low redshift ($z<1.5$). The blue line is our result from the IllustrisTNG simulation and the blue shaded region is the 95\% confidence region. Other results are shown for comparison. The pink line with inverted-triangle markers is taken from \citet{2003ApJ...598L..79I} and \citet{2004MNRAS.348..999I}. The dotted yellow line is taken from \citet{2015MNRAS.451.4277D}, the orange line with star markers is taken from \citet{2018ApJ...867L..21Z}, and the green line with cross markers shows the uniform-weighting result of \citet{2019ApJ...886..135P}. Our result is consistent with the other works except for that of \citet{2019ApJ...886..135P}; however, when the large confidence interval of \citet{2019ApJ...886..135P} (not shown in this figure) is considered, our result is consistent with theirs as well.} \label{c_other} \end{figure*} We also compare our result with several localized FRBs \citep{2019Sci...365..565B, 2019Natur.572..352R, 2019Sci...366..231P, 2020Natur.581..391M} in Figure \ref{c_obs}.
The blue line is our result derived from TNG300 and the blue shaded region is the 95\% confidence region. The theoretical relation from equation (\ref{DM_{igm}}) is shown as a dashed line for the cosmological parameters from \textit{Planck} \citep{2016A&A...594A..13P}. The $DM_{\rm host}$ value is adopted from \citet{2019zhang}. Based on the host galaxy observations, \citet{2019zhang} calculated the $DM_{\rm host}$ of repeating and non-repeating FRBs from the IllustrisTNG simulation. They found $DM_{\rm host}=32.97(1+z)^{0.84}$ pc cm$^{-3}$ for non-repeating FRBs. For FRB 190608 we adopt its observed $DM_{\rm host}=137\pm43$ pc cm$^{-3}$ \citep{2020arXiv200513158C}. The $DM_{\rm MW}$ value is derived with the NE2001 model \citep{2002astro.ph..7156C}. We take $DM_{\rm halo}=50$ pc cm$^{-3}$ for all FRBs. The contributions from FRB sources are ignored. All the derived $DM_{\rm IGM}$ values are compatible with our result. \begin{figure*}[htb!] \centering \includegraphics[width=0.8\linewidth]{new.pdf} \caption{$DM_{\rm IGM}-z$ (Macquart) relation from the IllustrisTNG simulation and several localized FRBs. The orange dashed line is the model of equation (\ref{DM_{igm}}). We take $f_{\rm IGM}=0.82+0.08z/0.9$, $f_{\rm e} = 7/8$, and $X_{\rm e,H}=X_{\rm e,He}=1$. The blue line is our result derived from TNG300 and the blue shaded region is the 95\% confidence region. The two lines differ because the ionized fraction and the IGM baryon fraction differ between the model and the simulation. We also show the $DM_{\rm IGM}$ values of several localized FRBs. In the calculation, the $DM_{\rm MW}$ value is derived with the NE2001 model and $DM_{\rm host}$ is taken from \citet{2019zhang}, except for FRB 190608 (taken from \citet{2020arXiv200513158C}).} \label{c_obs} \end{figure*} \section{The scenario of constraining the cosmic reionization history} \label{sec:dete} High-redshift FRBs can be used to probe the cosmic reionization history, and we test this expectation with the TNG300 simulation.
The universe is mostly neutral and opaque after recombination \citep{1968ApJ...153....1P,1969JETP...28..146Z,2000ApJS..128..407S}. The transition from a neutral to an ionized universe, called reionization, offers the first chance to trace the evolution of the universe after the end of the cosmic dark ages. Reionization history is a frontier and challenging area in cosmology \citep{2013fgu..book.....L}, because it is hard to probe observationally. It is usually believed that the reionization of hydrogen occurs between $z \sim 12$ and $z \sim 6$, while for helium the range is between $z \sim 6$ and $z \sim 2$ \citep{2016A&A...596A.108P}. Some works have proposed to use FRBs to constrain the reionization history \citep{Zheng2014,2016JCAP...05..004F,2020PhRvD.101j3019L,DaiXia2020,Bhattacharya2020}. It is still difficult to localize non-repeating FRBs even at low redshifts, but the capability of localization is improving. \citet{2020arXiv200811738L} demonstrated that blind interferometric detection and localization of non-repeating FRBs by CHIME and the CHIME Pathfinder can reach about milliarcsecond precision. In the future, they will observe thousands of single FRB events to this precision. However, considering the offsets that have been observed in FRBs, spectroscopy should be applied to high-redshift FRBs in addition to radio interferometry. To confirm the redshift, optical/infrared observations are also required. Finding high-redshift quasars is a challenging task at present, let alone high-redshift FRB hosts, which may be faint. Given the high burst rate and the construction of next-generation telescopes, it is promising to constrain the reionization history with FRBs. The estimated rate of $\gtrsim 10^3$ FRBs day$^{-1}$ \citep{2019A&ARv..27....4P} would provide abundant chances for future detections.
Although the redshift distribution of FRBs is unknown, it is realistic to expect sufficient high-redshift FRBs from the Five-hundred-meter Aperture Spherical radio Telescope (FAST) \citep{2018ApJ...867L..21Z} and the future Square Kilometre Array (SKA) \citep{2016JCAP...05..004F}. If high-redshift FRBs are observed with accurate redshift information from the optical/infrared band, we can extend the $DM_{\rm IGM}-z$ relation to the epoch of reionization (EoR). This relation can precisely constrain the parameters in reionization models and the ionized fraction. The wide redshift range of IllustrisTNG gives us a chance to constrain the cosmic reionization history. We use the \textit{tanh} model given by \citet{2008PhRvD..78b3002L} (also applied by \citet{2016A&A...596A.108P}): \begin{equation}\label{mod} x_{\rm e}(z)=\frac{f}{2}[1+\tanh(\frac {y_{\rm re}-y}{\Delta y})], \end{equation} where $x_{\rm e}$ is the ionized fraction. Here \textit{f} is a normalization parameter accounting for both hydrogen and helium, expressed as $f=1+f_{\rm He}=1+n_{\rm He}/n_{\rm H}$, with typically $f \sim 1.08$; $y=(1+z)^{3/2}$, $y_{\rm re}=(1+z_{\rm re})^{3/2}$, and $z_{\rm re}$ is defined as the redshift at which $x_{\rm e}=f/2$. $\Delta y=1.5\sqrt{1+z_{\rm re}}\Delta z$, where $\Delta z$ reflects the duration of reionization. In the model of \citet{2008PhRvD..78b3002L}, $\Delta z$ is a fixed value; it was estimated with an upper limit of 1.3 at the 95\% confidence level in the redshift-symmetric case of \citet{2016A&A...596A.108P}. Then we calculate $n_{\rm e}(z)$ from \begin{equation} n_{\rm e}(z)=\overline n_{\rm b}(z) X_{\rm H} x_{\rm e}(z) \end{equation} and \begin{equation} \overline n_{\rm b}(z) = \frac {\Omega_{\rm b} \rho_{\rm cr,0}(1+z)^3}{m_{\rm p}}, \end{equation} where $\rho_{\rm cr,0}$ is the critical density of the universe. After substituting $n_{\rm e}(z)$ into equation (\ref{dm_final}), the theoretical value of $DM_{\rm IGM}$ can be obtained for this cosmic reionization model.
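The \textit{tanh} history of equation (\ref{mod}) is straightforward to evaluate; a minimal sketch with $f = 1.08$:

```python
import numpy as np

F = 1.08   # f = 1 + n_He/n_H, the typical value quoted in the text

def x_e(z, z_re, delta_z):
    """tanh ionization history, equation (11)."""
    y = (1 + np.asarray(z, dtype=float))**1.5
    y_re = (1 + z_re)**1.5
    dy = 1.5 * np.sqrt(1 + z_re) * delta_z
    return F / 2 * (1 + np.tanh((y_re - y) / dy))
```

By construction $x_{\rm e}(z_{\rm re}) = f/2$, with $x_{\rm e} \rightarrow f$ at low redshift and $x_{\rm e} \rightarrow 0$ well before reionization.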
Figure \ref{dm-z} gives the value of $DM_{\rm IGM}$ as a function of redshift up to $z\sim9$. The blue line shows the $DM_{\rm IGM}$ derived from the TNG300 simulation with its 95\% confidence region (blue region). Cosmic reionization affects both the CMB power spectrum and the kinematic Sunyaev-Zel'dovich (kSZ) effect. For comparison, we also display the results from \textit{Planck} data combined with the kSZ effect \citep{2016A&A...596A.108P} using the \textit{tanh} model (see equation (\ref{mod})). The green line takes $z_{\rm re} = 7.2$, which is the result from a uniform prior on the redshift at which reionization ends ($z_{\rm end}$). The orange line is the result for the prior $z_{\rm end}\ \textgreater \ 6$, which gives $z_{\rm re} = 7.8$. The least-squares fit (red line) indicates a very fast reionization process ($\Delta z = 0.05$) at $z_{\rm re}=5.95$ with the \textit{tanh} model, which is also compatible with the model used by IllustrisTNG \citep{2009ApJ...703.1416F}\footnote{Dec 2011 version, \url{https://galaxies.northwestern.edu/uvb-fg09/}}. Therefore, high-redshift FRBs are promising probes of the cosmic reionization history. \begin{figure}[htb!] \centering \includegraphics[width=0.5\textwidth]{t.pdf} \caption{The blue line is our result and the red line is the best fit with the \textit{tanh} model. For comparison, the values derived from \textit{Planck} data are also shown \citep{2016A&A...596A.108P}. The green line takes $z_{\rm re} = 7.2$, which is the result from a uniform prior on the redshift at which reionization ends. The orange line is the result for the prior $z_{\rm end}\ \textgreater \ 6$, which gives $z_{\rm re} = 7.8$.} \label{dm-z} \end{figure} Meanwhile, we calculate the optical depth $\tau(z)$ of the CMB contributed by the IGM from \begin{equation} \tau(z)=\int_{0}^{z} \sigma_{\rm T} n_{\rm e}(z^\prime) {{\rm d}l_{\rm prop}}, \end{equation} where $\sigma_{\rm T}=6.65\times 10^{-25}$ cm$^2$ is the Thomson scattering cross section.
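A numerical sketch of this optical-depth integral for a fully ionized IGM (counting hydrogen electrons only, a simplifying assumption; constants in cgs):

```python
import numpy as np

SIGMA_T = 6.652e-25                   # Thomson cross section [cm^2]
C_CM = 2.998e10                       # speed of light [cm/s]
H0 = 67.74e5 / 3.086e24               # Hubble constant [s^-1]
OM, OL, OB = 0.3089, 0.6911, 0.0486
M_P = 1.673e-24                       # proton mass [g]
RHO_CR0 = 3 * H0**2 / (8 * np.pi * 6.674e-8)   # critical density [g/cm^3]
X_H = 0.76                            # hydrogen mass fraction

def tau(z_max, x_e=lambda z: np.ones_like(z), n=20000):
    """CMB optical depth out to z_max for an ionization history x_e(z)."""
    z = np.linspace(0.0, z_max, n)
    E = np.sqrt(OM * (1 + z)**3 + OL)
    n_e = OB * RHO_CR0 * (1 + z)**3 / M_P * X_H * x_e(z)
    g = SIGMA_T * n_e * C_CM / (H0 * (1 + z) * E)
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(z))   # trapezoid rule
```

Full ionization out to $z=6$ gives $\tau \approx 0.035$ here, of the same order as the values discussed in the text.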
The measured value of $\tau$ differs among instruments, and we explore whether FRBs may help. Using the electron number density derived from the TNG300 simulation, we find a total optical depth $\tau(z>6)=0.037^{+0.006}_{-0.004}$ at the 95\% confidence level. Figure \ref{tau} shows the value of $\tau(z)$ as a function of redshift from the TNG300 simulation (blue line). The ionized electron fraction drops to zero before reionization, so the optical depth saturates at high redshifts \citep{2016JCAP...05..004F}. The saturation value can be measured by high-redshift FRBs, which are independent of the CMB. The optical depth $\tau=0.058 \pm 0.023$ (95\% confidence level) from the \textit{Planck} result is shown as the orange region \citep{2016A&A...596A.108P}. Considering the uncertainties, we find that the two results are consistent with each other. Therefore, the DM of high-redshift FRBs provides an independent way to measure the optical depth of the CMB, which can tightly constrain the cosmic reionization history. However, we must state that the resolution of \textit{Planck} is 12 degrees, which is much larger than that of FRBs. Directly comparing two results with different resolutions may be somewhat inappropriate; however, there is no need to degrade the resolution of FRBs. \begin{figure}[htb!] \centering \includegraphics[width=0.5\textwidth]{tau.pdf} \caption{The blue line and shaded region show our result with its 95\% confidence region. The orange line and region show the \textit{Planck} result with its 95\% confidence region \citep{2016A&A...596A.108P}, which gives $\tau=0.058 \pm 0.023$.} \label{tau} \end{figure} \section{Estimating the redshifts of non-localized FRBs} \label{sec:est} As mentioned before, only a handful of FRBs have been localized \citep{2017Natur.541...58C, 2019Sci...365..565B, 2019Natur.572..352R, 2019Sci...366..231P, 2020Natur.577..190M, 2020Natur.581..391M}, which means that the redshifts of most FRBs are unknown.
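The pseudo-redshift estimate developed in this section amounts to binning simulated $(DM_{\rm IGM}, z)$ points in $DM_{\rm IGM}$ and recording the mean and scatter of $z$ per bin; a minimal sketch (the function name is ours):

```python
import numpy as np

def pseudo_redshift_table(dm, z, dm_max=6000.0, bin_width=40.0):
    """Bin (DM_IGM, z) points by DM_IGM; return the bin edges plus the mean
    and standard deviation of z in each bin (a pseudo-redshift look-up table)."""
    edges = np.arange(0.0, dm_max + bin_width, bin_width)
    idx = np.digitize(dm, edges) - 1
    means = np.full(len(edges) - 1, np.nan)
    stds = np.full(len(edges) - 1, np.nan)
    for i in range(len(edges) - 1):
        z_i = z[idx == i]
        if z_i.size:
            means[i], stds[i] = z_i.mean(), z_i.std()
    return edges, means, stds
```

A non-localized FRB with a known $DM_{\rm IGM}$ is then assigned the mean redshift of its bin, with the bin's standard deviation as the uncertainty.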
Assuming that the accurate $DM_{\rm IGM}$ is known, we check whether the redshift of an FRB can be precisely derived from the $z-DM_{\rm IGM}$ relation. As mentioned in Section \ref{sec:method} and Section \ref{sec:result}, we obtain $23 \times 10,000,000$ $DM_{\rm IGM}-z$ data points (FRBs) in total (the 24 snapshots give 23 redshifts at $z>0$, each with 10,000,000 combinations). We divide the data points whose $DM_{\rm IGM}$ is between 0 and 6000 pc cm$^{-3}$ into 150 bins (40 pc cm$^{-3}$ per bin) and calculate the mean redshift of the FRBs in each bin. The $z-DM_{\rm IGM}$ relation is shown in Figure \ref{eor}. The standard deviation of the derived redshift is 0.74 at the pseudo-redshift 4.61 ($DM_{\rm IGM}=4000$ pc cm$^{-3}$), corresponding to a relative error of 16.1\%. The small redshift standard deviations for FRBs whose $DM_{\rm IGM}$ is less than 4000 pc cm$^{-3}$ show a good prospect for calculating pseudo-redshifts of non-localized FRBs. We only take full snapshots at low redshifts, which means that the redshifts of our simulated FRBs are restricted to fixed values such as 0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 1 and 1.5. However, the standard deviation should not increase when mini snapshots are included, because it does not depend on the sampling method. Moreover, the 40 pc cm$^{-3}$ bin width can be taken as the systematic error in the estimation of $DM_{\rm halo}$, $DM_{\rm host}$ and $DM_{\rm source}$, which means the pseudo-redshift can still be derived from a `not very accurate' $DM_{\rm IGM}$. \begin{figure}[htb!] \centering \includegraphics[width=0.5\textwidth]{te.pdf} \caption{The $z-DM_{\rm IGM}$ relation. The short blue lines show one standard deviation in each bin.} \label{eor} \end{figure} \section{Conclusions}\label{sec:con} In this work, we derive the $DM_{\rm IGM}$ of FRBs in the redshift range $0\ \textless \ z\ \textless \ 9$ from the IllustrisTNG simulation. We obtain $DM_{\rm IGM} = 892^{+721}_{-270}$ pc cm$^{-3}$ at $z=1$.
The $DM_{\rm IGM}$ values of localized FRBs are consistent with the derived $DM_{\rm IGM}-z$ relation. At high redshifts ($z>5$), we present the scenario of probing the cosmic reionization history with FRBs. The \textit{tanh} reionization model is used to fit the derived $DM_{\rm IGM}-z$ relation at high redshifts. We find that the reionization of the IllustrisTNG universe occurs rapidly at $z = 5.95$. This reionization model is compatible with the theoretical model used by the IllustrisTNG simulation \citep{2009ApJ...703.1416F}. The optical depth of the CMB is also derived from the IllustrisTNG simulation, and it is consistent with that from \citet{2016A&A...594A..13P}. We also attempt to constrain the redshifts of non-localized FRBs with their $DM_{\rm IGM}$. The relative standard deviation of the pseudo-redshift for non-localized FRBs is 16.1\% at $DM_{\rm IGM}=4000$ pc cm$^{-3}$. The highest DM among observed FRBs is $2596.1 \pm 0.3$ pc cm$^{-3}$, with a nominal redshift of 2.1 \citep{2018MNRAS.475.1427B,2018MNRAS.478.2046C}. The number of detected FRBs is expected to grow by two orders of magnitude in the next few years \citep{2018NatAs...2..865K}. According to \citet{2016JCAP...05..004F} and \citet{2018ApJ...867L..21Z}, FAST and SKA will be capable of detecting FRBs out to $z = 14-15$. Therefore, FRBs will be a powerful and independent probe of the universe during the epoch of reionization, complementary to the hydrogen 21-cm line. A future large sample of FRBs can be used to test our result on estimating redshifts of non-localized FRBs. Moreover, most previous works considered only the mean value of $DM_{\rm IGM}$ (equation \ref{DM_{igm}}) for cosmological constraints, while its scatter (Figure \ref{dis}), which can degrade the constraints, was not handled properly.
Since IllustrisTNG characterizes the shape of this scatter, more realistic distributions of $DM_{\rm IGM}$ and $DM_{\rm host}$ can be used to test the capability of FRBs to constrain cosmological parameters such as $\Omega_{\rm m}$ and $\Omega_{\rm \Lambda}$. \acknowledgements We thank the anonymous referee for valuable comments. We thank Zhimei Tan, Lingrui Lin and Yichen Sun for helpful discussions, and thank Dylan Nelson for his theoretical and technical help. This work is supported by the National Natural Science Foundation of China (grant U1831207).
\section{Introduction} Image denoising is an ill-posed inverse problem: recovering a clean signal $y$ from a corrupted noisy image $x$, \begin{equation} \label{eq.x} x=y+n, \end{equation} where $n$ is the noise component we would like to remove. In many imaging systems \cite{lee1999polarimetric, eo2018kiki}, image noise comes from multiple sources, such as the capturing instrument, the medium of data transmission, and subsequent postprocessing. This complex generation process leads to complex noise distributions and variable noise levels, which makes denoising a challenging problem. Recently, the field of image denoising has become dominated by supervised deep convolutional neural networks (CNNs), for which a noisy input and the corresponding ground truth are required. Many CNNs \cite{zhang2018ffdnet, yue2019variational} show impressive denoising performance on synthetic datasets. However, the synthesized noise usually deviates severely from the real noise distribution, often resulting in poor generalization. In addition, for many imaging systems, such as medical imaging, paired data is difficult to obtain, further limiting the application of these supervised techniques. \begin{figure} \centering \includegraphics[width=0.73in]{fig1_2-77_A_gaus.png} \includegraphics[width=0.73in]{fig1_2-77_B.jpg} \includegraphics[width=0.73in]{fig1_2-77_our.png} \includegraphics[width=0.73in]{fig1_2-77_LIR.png}\\ \includegraphics[width=0.73in]{fig1_2-86_A_spk.png} \includegraphics[width=0.73in]{fig1_2-86_B.jpg} \includegraphics[width=0.73in]{fig1_2-86_our.png} \includegraphics[width=0.73in]{fig1_2-86_LIR.png}\\ \vspace{-0.06in} \subfigure[Noise]{\includegraphics[width=0.73in]{fig1_2-607_A.png}} \subfigure[Clean]{\includegraphics[width=0.73in]{fig1_2-607_B.jpg}} \subfigure[ADANI]{\includegraphics[width=0.73in]{fig1_2-607_our.png}} \subfigure[LIR \cite{du2020learning}]{\includegraphics[width=0.73in]{fig1_2-607_pos_LIR.png}} \caption{(a) and (b): unpaired data for noise generation.
(c): the image produced by ADANI for supervision has noise similar to (a), while the background is (b). (d): LIR is a GAN-based method that also uses unpaired data (a) and (b) to generate a noisy image. From top to bottom, the noise types are Gaussian, speckle and Poisson.} \label{fig1} \end{figure} To relax data constraints, training denoising CNNs without pre-collected paired data has become a central topic. Some ``self-supervised'' methods, such as Noise2Void \cite{krull2019noise2void} and Self2Self \cite{quan2020self2self}, show that individual noisy images can be used to train denoising networks via the so-called blind spot strategy. Despite great success, the effectiveness of these self-supervised methods relies on some pre-defined statistical assumptions, for example, that the noise $n$ is zero-mean, that $n$ is independent of the clean signal $y$, and that $n$ is pixel-independent. When the noise distribution violates these assumptions (\textit{e.g.} speckle noise), the denoising performance drops significantly. Another elegant strategy is to use unpaired noisy/clean images to learn denoising. To generate the necessary supervision, methods along this line usually integrate noise modeling and denoising into a deep learning framework. For instance, the authors in \cite{chen2018image, du2020learning, kaneko2020noise} use generative adversarial networks (GAN) \cite{goodfellow2014generative} to synthesize noisy images corresponding to accessible clean images for supervision. Due to its strong generative ability, the GAN is currently the most popular tool for unpaired denoising. However, GANs cannot guarantee the quality of the generated data, so the generated noise is often unrealistic (see Figure \ref{fig1}). In addition, GANs suffer from mode collapse \cite{arjovsky2017wasserstein}, resulting in a lack of diversity in the generated data. In Figure \ref{fig_level}, we see that the noisy images generated by the GAN-based method LIR exhibit monotonous noise levels.
These unrealistic and monotonous noisy images will lead to poor denoising. \begin{figure} \centering \subfigure[Gaussian noise]{\includegraphics[width=1.05in]{level-gaus.png}} \subfigure[Speckle noise]{\includegraphics[width=1.05in]{level-spk.png}} \subfigure[Poisson noise]{\includegraphics[width=1.05in]{level-pos.png}} \caption{The noise level statistical histograms for 10,000 images generated by ADANI and LIR. (a)-(c) represent the noisy data generated in three experiments. The noise levels produced by LIR are nearly identical. In contrast, the output of ADANI has a wider distributional coverage. The noise level $z_1$ is provided by a pre-trained noise level estimator.} \label{fig_level} \end{figure} Motivated by the practical value of this open problem, we develop an efficient denoising method that relies neither on pre-defined noise statistics nor on pre-collected paired data. Given that collecting unpaired data is relatively easy in most applications, the focus of our work is unpaired learning. Similar to previous methods \cite{chen2018image, du2020learning, kaneko2020noise, yan2019unsupervised}, our strategy is to use unpaired data to synthesize new noisy images from which a denoising model can be learned. We also use a GAN as part of our model to distinguish the types of noise. However, the key to our method is generating realistic noise with an adjustable noise level by imitating a guided noise. In this way, we can simply change the guided noisy image to generate noise at a variety of levels, thereby expanding the distributional coverage of the noise. Specifically, the generated noise is forced to be similar to a guided real noise by comparing their gradients, thus avoiding unrealistic noise patterns. Then, a pre-trained noise/clean classification network is introduced to estimate the noise level. To achieve the same noise level, the noise generator is encouraged to imitate the guided noise and refine its output accordingly.
The background of the generated image is kept consistent with an accessible clean image by a background consistency module. Since the generator can adaptively generate noise similar to the input guided noise, we call our method the adaptive noise imitation (ADANI) algorithm. Next, by pairing the generated noisy image with the corresponding ground-truth, we can train a denoising CNN in a supervised manner. To demonstrate the effectiveness of ADANI, we conduct experiments on several synthetic and real-world datasets. The noisy images produced by ADANI are visually and statistically indistinguishable from real ones. Consequently, the performance of our denoising CNN is close to that of networks trained with pre-collected paired data and better than that of other self-supervised and unpaired denoising methods. Overall, our contributions are summarized as follows: \begin{itemize} \item We propose an adaptive noise imitation algorithm for the generation of various noisy images, which only requires some unpaired data. \item We observe that the class logit (the input to the final softmax) from the noise/clean classification network is positively correlated with the noise level of the image. We use it as an indicator of the noise level. \item We show the application of the data generated by ADANI in various denoising tasks, where the noise statistics can be unknown. \end{itemize} \begin{figure*} \centering \includegraphics[width=5.6in]{fw_new2.jpg} \caption{Illustration of our adaptive noise imitation algorithm. The generated $x^g$ has noise similar to $x^r$, and its background is $y^r$. } \label{fig_frame} \end{figure*} \section{Related Work} We present a brief review of image denoising methods related to our work, including model-based methods and learning-based methods.
\emph{\textbf{Model-based methods.}} Most traditional image denoising algorithms use hand-crafted priors \cite{xu2018trilateral, buades2005non, meng2013robust, zhao2014robust} to simplify the denoising problem. One widely-used prior in image denoising is non-local self-similarity (NSS) \cite{hou2020nlh, mairal2009non, dong2012nonlocally}, which assumes that many patches in a non-local image area share a similar appearance. Some popular NSS-based algorithms, such as BM3D \cite{dabov2007image} and WNNM \cite{gu2014weighted}, have become benchmarks of image denoising. Other prominent techniques, such as total variation \cite{rudin1992nonlinear, beck2009fast, selesnick2017total}, wavelet coring \cite{simoncelli1996noise} and low-rank assumptions \cite{zhu2016noise, chang2017hyper}, have also been proven effective in some image denoising tasks. \emph{\textbf{Learning with paired data.}} Due to powerful nonlinear modeling capabilities, deep learning has become the dominant method for image denoising \cite{chang2020spatial, kim2020transfer, liu2020joint, zhang2020memory}. Typically, supervised denoising CNNs require a large number of pairs of noisy/clean images for supervision. A state-of-the-art supervised approach is DnCNN \cite{zhang2017beyond}, which exploits residual learning for blind denoising. Following DnCNN, many different network architectures have been designed to obtain better visual results after denoising, including FFDNet \cite{zhang2018ffdnet}, DANet \cite{yue2020dual}, VDN \cite{yue2019variational}, RIDNet \cite{anwar2019real} and NLRN \cite{liu2018non}. When clean targets are unavailable, Lehtinen \emph{et al.} \cite{lehtinen2018noise2noise} suggest learning a Noise2Noise (N2N) model from pairs of two noisy images of the same scene. The performance of N2N is on par with other networks trained using noisy/clean image pairs. Nevertheless, it is not always feasible to capture two independent noise realizations of the same scene.
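The intuition behind N2N can be checked with a minimal numeric sketch (pure Python with illustrative values, not code from any of the cited works): under an L2 loss, the optimal constant prediction for a pixel is the mean of its noisy targets, which for zero-mean noise approaches the clean value.

```python
import random

random.seed(0)
y = 0.5        # true clean value of one pixel (illustrative)
sigma = 0.1    # zero-mean Gaussian noise level

# Many independent noisy targets for the same underlying pixel.
targets = [y + random.gauss(0.0, sigma) for _ in range(100_000)]

# Under an L2 loss, the optimal constant prediction is the mean of the
# targets, which approaches the clean value -- the reason noisy/noisy
# pairs can substitute for noisy/clean pairs.
pred = sum(targets) / len(targets)
assert abs(pred - y) < 0.01
```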
\emph{\textbf{Learning without paired data.}} It is sometimes useful to develop methods that do not rely on paired data as inputs. The blind-spot mechanism proposed in Noise2Void (N2V) \cite{krull2019noise2void} allows the denoiser to be trained using individual noisy images without paired data. This implementation is based on the assumption that the noise is zero-mean and spatially uncorrelated, so that each pixel can be restored from its surrounding pixels. Due to its practical value, the blind-spot mechanism has been further improved in \cite{laine2019high, batson2019noise2self, wu2020unpaired, quan2020self2self} to obtain higher-quality denoising. However, these self-supervised methods cannot handle noise that violates their assumptions, such as spatially correlated noise. In contrast, our ADANI does not rely on assumptions about the statistical characteristics or patterns of noise. Another strategy is to train denoising CNNs with unpaired noisy/clean images, which is also the focus of our work. Since unpaired data cannot directly guide the denoising for CNNs, methods in this group usually learn to synthesize noise before denoising. In particular, the GAN is widely used for noise modeling and has shown potential for blind image denoising \cite{chen2018image, kaneko2020noise, yan2019unsupervised}. Furthermore, cycle-consistency \cite{du2020learning} has been utilized to aid the GAN in learning the invariant representation between the noisy and clean domains. However, the discriminant information from the GAN is too ambiguous to permit the generation of complex and diverse noise. Therefore, these GAN-based methods are prone to mode collapse, resulting in a lack of diversity or unrealistic noise. \section{Methodology} Given some unpaired noisy images $\mathcal{D}^{noise}=\{x_i \}^N_{i=1}$ and clean images $\mathcal{D}^{clean}=\{y_j \}^M_{j=1}$, our goal is to learn denoising with these unpaired data. We denote the data distributions as $x^r \sim p^r(x)$ and $y^r \sim p^r(y)$.
Hereafter, we use superscripts $r$ and $g$ to represent the real distribution and generative distribution, respectively. To reconstruct high-quality images, supervised methods \cite{zhang2017beyond, yue2019variational} incorporate pixel-level constraints to inform the denoising CNN to carefully restore each pixel during denoising. Unfortunately, unpaired data cannot directly form pixel-level supervision due to different image content. To solve this problem, we propose an adaptive noise imitation (ADANI) algorithm, which uses a CNN to learn to synthesize noise with these unpaired data (see Figure \ref{fig_frame}). In doing so, the ground-truth corresponding to the newly generated noisy image provides strong supervision for denoising. \subsection{GAN-based noise generation} GANs have recently demonstrated the potential to generate certain distribution types of noise \cite{chen2018image, du2020learning, kaneko2020noise}. We also adopt a GAN as a component of our method to guide noise generation. The process of noise generation is performed by a generator that takes a clean background image $y^r$ and a guided noisy image $x^r$ as input, \begin{equation} \label{eq.g} \begin{aligned} x^g&=G(y^r,x^r)\\ &=y^r+n^g, \end{aligned} \end{equation} where $x^g \sim p^g(x)$ and $n^g$ is the noise generated by the generator. We use the extra input $x^r$ to guide the generation of realistic noise. \emph{\textbf{Background consistency.}} In Eq.(\ref{eq.g}), we want to generate a noisy $x^g$ with the same image background as $y^r$, so that $x^g$ and $y^r$ can be paired to train denoising CNNs. To this end, we first build a background consistency module (BCM) to preserve the background consistency between $x^g$ and $y^r$. BCM is a pre-trained network related to image filters (\textit{e.g.} the Gaussian filter, the median filter). It is based on the assumption that paired noisy and clean images share the same low-frequency content.
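This assumption can be illustrated with a minimal 1-D sketch (pure Python; the sliding median filter and the signal values here are illustrative, not the actual BCM network): filtering a noisy and a clean version of the same signal yields nearly the same low-frequency content.

```python
import random
import statistics

def median_filter(signal, k=7):
    """Sliding-window median; edges use a truncated window."""
    half = k // 2
    return [statistics.median(signal[max(0, i - half): i + half + 1])
            for i in range(len(signal))]

random.seed(1)
clean = [1.0 if 20 <= i < 40 else 0.0 for i in range(60)]    # a step edge
noisy = [v + random.gauss(0.0, 0.2) for v in clean]          # zero-mean noise

# Filtering both signals keeps the shared low-frequency step while
# suppressing the high-frequency noise, so B(noisy) stays close to B(clean).
gap = sum(abs(a - b) for a, b in
          zip(median_filter(noisy), median_filter(clean))) / len(clean)
assert gap < 0.25
```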
To pre-train BCM, a mixture of $\mathcal{D}^{noise}$ and $\mathcal{D}^{clean}$ is adopted as the training set $\mathcal{D}^{mix}$, and the blurred targets corresponding to $\mathcal{D}^{mix}$ are produced by image filtering (we use a median filter with a kernel size of 31). After pre-training, BCM acts as an image filter, which can filter out high-frequency parts, including noise, from the input image. We use BCM to provide the background consistency constraint, \begin{equation} \label{eq.Lbc} \mathcal{L}_{BC}=\mathbb{E}_{y^r \sim p^r(y),\hfill\atop x^g \sim p^g(x) \hfill} \left[ \left\| {{B(x^g)} - B(y^r)} \right\|_1 \right], \end{equation} where $B(\cdot)$ denotes BCM and we adopt the L1 loss. To generate noise, the generator in Eq.(\ref{eq.g}) is supervised by a noise discriminator. Following adversarial training, the discriminator is responsible for distinguishing a real noisy image $x^r$ from the generated noisy image $x^g$. The purpose of the generator is to fool the discriminator, which means that $p^g(x)$ gets close to $p^r(x)$. This corresponds to the following GAN loss, \begin{equation} \scriptsize \label{eq.gan} \mathcal{L}_{GAN}=\mathbb{E}_{x^r \sim p^r(x)} \left[\log D(x^r)\right]+\mathbb{E}_{x^g \sim p^g(x)} \left[\log \left(1-D(x^g) \right) \right]. \end{equation} Eq.(\ref{eq.gan}) allows the generation of noise of a certain distribution type, but does not impose constraints on the quality of the noise. Therefore, unrealistic noise is often generated. In addition, Eq.(\ref{eq.gan}) does not indicate the level of noise, which leads to mode collapse. Suppose, for example, that Eq.(\ref{eq.g}) is used to synthesize Gaussian noise with different variances (\textit{e.g.} $\sigma \in \left( 0, 50 \right]$). The generator can easily fool the discriminator by always producing the same level (\textit{e.g.} $\sigma = 25$) of noise, resulting in a lack of variety.
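The mode-collapse argument can be made concrete with a minimal sketch of the GAN objective in Eq.(\ref{eq.gan}) evaluated pointwise (pure Python; the discriminator outputs are hypothetical scalars): a generator that always emits one convincing noise level still drives the discriminator to the standard equilibrium value.

```python
import math

def d_objective(d_real, d_fake):
    """Pointwise GAN objective: log D(x^r) + log(1 - D(x^g)).
    d_real / d_fake are hypothetical discriminator probabilities
    on a real and a generated noisy image."""
    return math.log(d_real) + math.log(1.0 - d_fake)

# If the generated noise is convincing (even at a single, fixed level),
# the discriminator is maximally confused, D = 0.5 on both inputs, and
# the objective sits at the equilibrium value log(1/4) -- the loss never
# demands that the generator cover the full range of real noise levels.
assert abs(d_objective(0.5, 0.5) - math.log(0.25)) < 1e-12
```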
To solve the above problems, our strategy is to generate $x^g$ similar to the guided noise $x^r$ in both noise type and level by imitating $x^r$. Since the noise of $x^g$ is similar to that of $x^r$, unrealistic noise patterns are avoided. Moreover, we can obtain various noisy data by simply changing $x^r$. We now introduce some constraints on the noise similarity between $x^r$ and $x^g$. \begin{figure} \centering \includegraphics[width=0.78in]{house_g5.png} \includegraphics[width=0.78in]{house_g20.png} \includegraphics[width=0.78in]{house_g35.png} \includegraphics[width=0.78in]{house_g50.png}\\ \vspace{-0.04in} \subfigure[{$\sigma=5$, \color{white}{aa} \color{black} $ z_1=1.09$,\color{white}{aaa} \color{black} $q_1=0.91$.}]{\includegraphics[width=0.78in]{house_g5_grad.png}} \subfigure[$\sigma=20$, \color{white}{aa} \color{black} $ z_1=10.6$,\color{white}{aaa} \color{black} $q_1=1.00$.]{\includegraphics[width=0.78in]{house_g20_grad.png}} \subfigure[$\sigma=35$, \color{white}{aa} \color{black} $ z_1=21.6$,\color{white}{aaa} \color{black} $q_1=1.00$.]{\includegraphics[width=0.78in]{house_g35_grad.png}} \subfigure[$\sigma=50$, \color{white}{aa} \color{black} $ z_1=30.0$,\color{white}{aaa} \color{black} $q_1=1.00$.]{\includegraphics[width=0.78in]{house_g50_grad.png}} \caption{Images with different levels of Gaussian noise. In the second row are the gradient maps corresponding to the first row. $\sigma$: the standard deviation of Gaussian noise. $z_1$ and $q_1$: the logit and probability output by the noise level estimator Eq.(\ref{eq.softmax}). } \label{fig_house} \end{figure} \emph{\textbf{Noise similarity.}} We note that image noise is a random variation in pixel brightness, which dramatically increases the gradient magnitude around noisy pixels. The image gradient reflects the high-frequency information (\textit{e.g.} noise, edges), while excluding the low-frequency image content. In general, the noisier the image, the noisier its gradient map.
This motivates us to achieve noise similarity by matching the gradient distributions of $x^r$ and $x^g$. We compute the image gradient $\nabla x$ by combining the horizontal and vertical deviations of adjacent pixels. Then, we impose an L1 penalty on the gradient gap between $x^r$ and $x^g$, \begin{equation} \begin{aligned} \label{eq.Lgrad} \mathcal{L}_{gradient}=&\mathbb{E}_{x^r \sim p^r(x),\hfill\atop x^g \sim p^g(x) \hfill} [\left\| { {{\nabla }x^g}-{{\nabla }x^r}} \right\|_1 ]. \end{aligned} \end{equation} Since the gradient of a noisy image is dominated by noise, $\mathcal{L}_{gradient}$ forces the noise of $x^g$ to be similar to the real $x^r$. Combining Eq.(\ref{eq.Lbc}), Eq.(\ref{eq.gan}) and Eq.(\ref{eq.Lgrad}), our GAN-based noise generation strategy can be briefly expressed as \begin{equation} \label{eq.L_all} \mathop {\min }\limits_G \mathop {\max }\limits_D {\mathcal{L}_{GAN}} + \alpha \mathcal{L}_{gradient}+ \beta \mathcal{L}_{BC}, \end{equation} where $\alpha$ and $\beta$ are trade-off parameters. \subsection{Adaptive noise imitation} In Eq.(\ref{eq.L_all}), $\mathcal{L}_{GAN}$, $\mathcal{L}_{gradient}$ and $\mathcal{L}_{BC}$ are the constraints on noise type, gradient and image background, respectively. Among them, $\mathcal{L}_{gradient}$ can be regarded as an indicator of the similarity between the generated noise and the guided noise. However, a flaw of $\mathcal{L}_{gradient}$ is that it gives equal importance to all guided noisy images regardless of their noise level. For a guided $x^r$ with a high level of noise, the primary component of its gradient map is noise, which can provide effective guidance for noise generation. On the contrary, for $x^r$ with weak noise, its gradient map is mainly composed of edges (see Figure \ref{fig_house}), which may pollute the generated noise. These observations suggest that different $x^r$ have different effects on noise generation.
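The gradient operator and the L1 penalty of Eq.(\ref{eq.Lgrad}) can be sketched as follows (pure Python on a tiny 2-D array; forward differences are one common choice and an assumption here, as the paper does not specify the exact stencil). The sketch also shows why the loss sees only high-frequency content:

```python
def grad(img):
    """Horizontal and vertical forward differences of a 2-D image."""
    h, w = len(img), len(img[0])
    gx = [[img[i][j + 1] - img[i][j] for j in range(w - 1)] for i in range(h)]
    gy = [[img[i + 1][j] - img[i][j] for j in range(w)] for i in range(h - 1)]
    return gx, gy

def l1_gradient_gap(a, b):
    """L1 distance between the gradient maps of two images."""
    (ax, ay), (bx, by) = grad(a), grad(b)
    flat = lambda m: [v for row in m for v in row]
    return sum(abs(p - q)
               for p, q in zip(flat(ax) + flat(ay), flat(bx) + flat(by)))

# Adding a constant offset changes the image but not its gradients, so the
# penalty ignores low-frequency background and reacts only to
# high-frequency content such as noise.
a = [[0.0, 1.0], [2.0, 3.0]]
b = [[5.0, 6.0], [7.0, 8.0]]   # same image shifted by a constant
assert l1_gradient_gap(a, b) == 0.0
```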
Therefore, rather than using a fixed value, we let the hyperparameter $\alpha$ for $\mathcal{L}_{gradient}$ in Eq.(\ref{eq.L_all}) adaptively change according to the noise level of $x^r$: the noisier $x^r$ is, the larger $\alpha$ becomes, while a small $\alpha$ for weakly noisy $x^r$ suppresses the polluting effect of $\mathcal{L}_{gradient}$. To do this, we use a pre-trained noise/clean binary classification network as a noise level estimator, where a noisy image is class 1 and a clean image is class 0. The dataset for pre-training the classification network is also $\mathcal{D}^{mix}$. Class probabilities are produced by the softmax activation layer, which converts the logit $z_i$ computed for each class $i$ into a probability $q_i$ by comparing $z_i$ with the other logits, \begin{equation} \begin{aligned} \label{eq.softmax} &{q_i} = \frac{{\exp ({z_i})}}{{\sum\nolimits_j {\exp ({z_j})} }},\\ &z_i=C(x)_i, \end{aligned} \end{equation} where $j\in \{0,1 \}$ and $C(\cdot)$ denotes the classification network without the softmax layer. After pre-training, the classification probability $q_i$ represents the network's confidence that its input belongs to class $i$. This means that for noise/clean classification, $q_1$ reflects the noise level to a certain extent. However, since $q_1 \in [0,1]$, it can hardly cover a wide range of noise levels. In addition, the early saturation behavior \cite{chen2017noisy} of softmax makes most noisy images easily classified into class 1 with high confidence (\textit{i.e.} $q_1 \rightarrow 1$). Therefore, the $q_1$ produced by softmax cannot accurately characterize the noise level. Based on the above analysis, when learning to generate noise, we remove the softmax layer from the noise level estimator and use the logit $z_1$ as the estimate of the noise level. The value of $z_1$ is not bounded, so it can match a wide range of noise. More importantly, $z_1$ is positively correlated with the noise level of the input, as shown in Figure \ref{fig_house}.
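The saturation argument can be checked numerically with a minimal sketch of Eq.(\ref{eq.softmax}) (pure Python; fixing the clean-class logit $z_0$ to 0 is an assumption for illustration, while the $z_1$ values are those reported in Figure \ref{fig_house}):

```python
import math

def softmax_q1(z0, z1):
    """Probability of the 'noisy' class from the two logits."""
    e0, e1 = math.exp(z0), math.exp(z1)
    return e1 / (e0 + e1)

# Very different noise levels (sigma = 20 vs. sigma = 50 in the figure)
# produce logits z_1 = 10.6 and z_1 = 30.0, yet both probabilities
# saturate to ~1.00 -- q_1 cannot separate them, while the raw logit can.
assert round(softmax_q1(0.0, 10.6), 2) == 1.0
assert round(softmax_q1(0.0, 30.0), 2) == 1.0
```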
Therefore, in each iteration, the level of guided noise is estimated by, \begin{equation} \label{eq.zr} {z^r_1} = C(x^r)_1. \end{equation} Then, we use $z^r_1$ to replace $\alpha$ in Eq.(\ref{eq.L_all}). Following such a dynamic objective, the noise generator can identify the components of interest (\textit{i.e.} noise) in the guided $x^r$, and adaptively generate realistic noise similar to $x^r$. Finally, we construct a logit consistency loss to further promote the noise similarity between $x^g$ and $x^r$, \textit{i.e.} \begin{equation} \begin{aligned} \label{eq.Llogit} \mathcal{L}_{logit}=&\mathbb{E}_{x^r \sim p^r(x),\hfill\atop x^g \sim p^g(x) \hfill} [\left\| z^g_1-z^r_1 \right\|_2 ], \end{aligned} \end{equation} where ${z^g_1} = C(x^g)_1$. Combining Eq.(\ref{eq.L_all}), Eq.(\ref{eq.Llogit}) and ${z^r_1}$, we aim to solve, \begin{equation} \label{eq.L_all_final} \mathop {\min }\limits_G \mathop {\max }\limits_D {\mathcal{L}_{GAN}} + \alpha \mathcal{L}_{gradient}+ \beta \mathcal{L}_{BC}+ \gamma \mathcal{L}_{logit}, \end{equation} where $\alpha=z^r_1$, $\beta$ and $\gamma$ are the trade-off parameters. 
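Putting the pieces together, the generator objective of Eq.(\ref{eq.L_all_final}) with its adaptive weight can be sketched as follows (pure Python; $\beta=300$ and $\gamma=0.1$ follow our training settings, while the individual loss values and the $z^r_1$ inputs are placeholders):

```python
def generator_objective(l_gan, l_gradient, l_bc, l_logit, z_r1,
                        beta=300.0, gamma=0.1):
    """Total generator loss: alpha is not a fixed hyperparameter but the
    estimated noise level z^r_1 of the guided image (adaptive weighting).
    The loss arguments are placeholder scalars for illustration."""
    alpha = z_r1
    return l_gan + alpha * l_gradient + beta * l_bc + gamma * l_logit

# A noisier guide (larger z^r_1) weights the gradient-imitation term harder;
# the z^r_1 values 1.09 and 30.0 correspond to sigma = 5 and sigma = 50
# in Figure fig_house.
weak = generator_objective(1.0, 0.5, 0.01, 0.2, z_r1=1.09)
strong = generator_objective(1.0, 0.5, 0.01, 0.2, z_r1=30.0)
assert strong > weak
```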
\begin{figure*}[t] \centering \subfigure[ {Clean $|$ SSIM, PSNR}]{\includegraphics[width=1.6in]{gaus2-gaus_gt.jpg}} \subfigure[ {Input $|$ 0.637, 24.06}]{\includegraphics[width=1.6in]{gaus2-gaus_noise.png}} \subfigure[ {BM3D $|$ 0.880, 28.56}]{\includegraphics[width=1.6in]{gaus2-gaus_bm3d.jpg}} \subfigure[ {N2V $|$ {0.889}, {28.43}}]{\includegraphics[width=1.6in]{gaus2-gaus_n2v.jpg}} \\ \vspace{-0.1in} \subfigure[ {S2S $|$ 0.813, 26.69}]{\includegraphics[width=1.6in]{gaus2-gaus_s2s.jpg}} \subfigure[ {LIR $|$ 0.867, 25.60}]{\includegraphics[width=1.6in]{gaus2-gaus_LIR.jpg}} \subfigure[ {U-Net $|$ \textbf{0.900}, \textbf{30.01}}]{\includegraphics[width=1.6in]{gaus2-gaus_full.jpg}} \subfigure[ {Our $|$ 0.889, 29.38}]{\includegraphics[width=1.6in]{gaus2-gaus_our.jpg}} \caption{Example results for Gaussian denoising, $\sigma=25$.} \label{fig_gaus} \end{figure*} \begin{table*}[t] \caption{PSNR results (dB) from BSD300 dataset for Gaussian, Speckle and Poisson noise. \textbf{Bold}: best. \textcolor{red}{Red}: second. 
\textcolor{blue}{Blue}: third.} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline &Test noise level& BM3D & WNNM&N2V&S2S&LIR&N2N&U-Net&ADANI\\ \hline \multirow{2}*{Gaussian }&$\sigma=25$&\textcolor{blue}{30.90}&29.96&{30.51}&29.13&{26.91}&\textcolor{red}{31.32}&\textbf{31.45}&30.68\\ \cline{2-10} &$\sigma \in (0, 50]$&27.89&31.16&{31.67}&27.06&{26.38}&\textcolor{red}{32.82}&\textbf{33.14}&\textcolor{blue}{31.85}\\ \hline \multirow{2}*{Speckle }&$v=0.1$&26.64&25.13&{28.40}&27.41&{25.66}&\textcolor{red}{31.12}&\textbf{31.18}&\textcolor{blue}{29.96}\\ \cline{2-10} &$v \in (0, 0.2]$&26.70&25.39&{28.77}&27.23&{25.44}&\textcolor{red}{31.50}&\textbf{31.55}&\textcolor{blue}{30.34}\\ \hline \multirow{2}*{Poisson }&$\lambda =30$&27.70&28.09&29.70&28.75 &{26.15}&\textcolor{red}{30.44}&\textbf{30.81}&\textcolor{blue}{29.85}\\ \cline{2-10} &$\lambda \in [5, 50]$&27.23&27.36&28.72&27.71&{25.62}&\textcolor{red}{29.65}& \textbf{30.14}&\textcolor{blue}{28.87} \\ \hline \end{tabular} \label{table1} \end{table*} \subsection{Architecture and training details} The implementation of ADANI is based on CNNs. For simplicity, both the noise generator and the BCM adopt the ResNet \cite{he2016identity} architecture. The discriminator is a general ``PatchGAN'' classifier \cite{li2016precomputed, isola2017image}. The noise level estimator is a simple four-layer network. \textit{Pre-training.} The BCM and the noise level estimator are pre-trained with $\mathcal{D}^{mix}$, and their weights are fixed when learning noise generation. The input $128\times128$ patches are randomly cropped from the training set, and the training ends at the 200th epoch. We use Adam with a batch size of 1 to train the networks. The learning rate is initialized to 0.0002 and is linearly decayed to 0 over the training process. \textit{Training for noise generation and denoising.} The unpaired patches $x^r$ and $y^r$ are randomly cropped from $\mathcal{D}^{noise}$ and $\mathcal{D}^{clean}$.
The hyper-parameters in Eq.(\ref{eq.L_all_final}) are set to $\beta=300$, $\gamma=0.1$. In each iteration, the noise generator produces a pair of data ($x^g$, $y^r$), which is directly used to guide a U-Net\footnote{More details for network architectures are shown in the supplement.} \cite{ronneberger2015u} to learn denoising according to the L1 loss. Training ends at the 1000th epoch. Other parameters are the same as those of pre-training. \section{Experiments} In this section, we evaluate the performance of ADANI on several denoising tasks. \subsection{Synthetic noise} \label{synthetic} To prepare the unpaired training data, we use the 4744 clean images provided in \cite{ma2016waterloo} to synthesize noisy images (\textit{i.e.} $\mathcal{D}^{noise}$) with Matlab. Besides, we collect another 5000 clean images from the Internet as the clean set $\mathcal{D}^{clean}$. The compared methods are the state-of-the-art model-based methods BM3D \cite{dabov2007image} and WNNM \cite{gu2014weighted}, the self-supervised methods Noise2Void (N2V) \cite{krull2019noise2void} and Self2Self (S2S) \cite{quan2020self2self}, an unpaired learning method LIR \cite{du2020learning}, and other deep learning methods, including Noise2Noise (N2N) \cite{lehtinen2018noise2noise} and a common fully-supervised U-Net. For a fair comparison, N2N, the U-Net and our ADANI adopt the same architecture to perform denoising. For BM3D, we set its hyperparameter to $\sigma=25$ when removing Gaussian noise with a standard deviation of 25, while in other cases, BM3D keeps the default settings (\textit{i.e.} $\sigma = 50$). Our test set is the widely used BSD300 \cite{martin2001database}.
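The Matlab synthesis of the three noise types used in these experiments can be sketched as follows (pure Python; pixel values in $[0,1]$ and the exact sampling details are assumptions, not the actual Matlab calls):

```python
import math
import random

rng = random.Random(0)

def gaussian_noisy(y, sigma):
    """Additive Gaussian noise on a pixel list y; sigma is on the same
    [0,1] scale, e.g. a level of 25 in 8-bit units becomes 25/255."""
    return [p + rng.gauss(0.0, sigma) for p in y]

def speckle_noisy(y, v):
    """Multiplicative speckle x = y + y*n with n uniform, zero-mean,
    variance v; such an n spans [-sqrt(3v), sqrt(3v)]."""
    half = math.sqrt(3.0 * v)
    return [p + p * rng.uniform(-half, half) for p in y]

def poisson_noisy(y, lam):
    """Signal-dependent photon noise x = Poisson(lam * y) / lam,
    sampled here with Knuth's algorithm."""
    def knuth(mean):
        if mean <= 0.0:
            return 0
        l, k, p = math.exp(-mean), 0, 1.0
        while p > l:
            k += 1
            p *= rng.random()
        return k - 1
    return [knuth(lam * p) / lam for p in y]

# During blind training the level is re-drawn per image,
# e.g. sigma in (0, 50] (8-bit scale) for the Gaussian experiment.
y = [0.2, 0.5, 0.8]
x = gaussian_noisy(y, sigma=rng.uniform(1e-6, 50.0) / 255.0)
assert len(x) == len(y)
```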
\begin{figure*}[t] \centering \subfigure[ {Clean $|$ SSIM, PSNR}]{\includegraphics[width=1.5in]{speckle-spk_gt.jpg}} \subfigure[ {Input $|$ 0.585, 22.11}]{\includegraphics[width=1.5in]{speckle-spk_noise.png}} \subfigure[ {BM3D $|$ 0.742, 25.98}]{\includegraphics[width=1.5in]{speckle-spk_bm3d.jpg}} \subfigure[ {N2V $|$ {0.868}, {28.00}}]{\includegraphics[width=1.5in]{speckle-spk_n2v.jpg}} \\ \vspace{-0.1in} \subfigure[ {S2S $|$ 0.810, 26.80}]{\includegraphics[width=1.5in]{speckle-spk_s2s.jpg}} \subfigure[ {LIR $|$ 0.835, 25.83}]{\includegraphics[width=1.5in]{speckle-spk_LIR.jpg}} \subfigure[ {U-Net $|$ \textbf{0.907}, \textbf{29.97}}]{\includegraphics[width=1.5in]{speckle-spk_full.jpg}} \subfigure[ {Our $|$ 0.881, 29.16}]{\includegraphics[width=1.5in]{speckle-spk_our.jpg}} \caption{Example results for Speckle denoising, $v=0.1$.} \label{fig_spk} \end{figure*} \begin{figure*}[t] \centering \subfigure[ {Clean $|$ SSIM, PSNR}]{\includegraphics[width=1.5in]{poisson-pos_gt.jpg}} \subfigure[ {Input $|$ 0.508, 21.42}]{\includegraphics[width=1.5in]{poisson-pos_noise.png}} \subfigure[ {BM3D $|$ 0.721, 25.45}]{\includegraphics[width=1.5in]{poisson-pos_bm3d.jpg}} \subfigure[ {N2V $|$ {0.847}, {27.56}}]{\includegraphics[width=1.5in]{poisson-pos_n2v.jpg}} \\ \vspace{-0.1in} \subfigure[ {S2S $|$ 0.773, 26.23}]{\includegraphics[width=1.5in]{poisson-pos_s2s.jpg}} \subfigure[ {LIR $|$ 0.810, 27.38}]{\includegraphics[width=1.5in]{poisson-pos_LIR.jpg}} \subfigure[ {U-Net $|$ \textbf{0.864}, \textbf{28.63}}]{\includegraphics[width=1.5in]{poisson-pos_full.jpg}} \subfigure[ {Our $|$ 0.861, 27.78}]{\includegraphics[width=1.5in]{poisson-pos_our.jpg}} \caption{Example results for Poisson denoising, $\lambda=30$.} \label{fig_pos} \end{figure*} \begin{figure} \centering \subfigure[Gaussian $\sigma = 25$]{\includegraphics[width=1.05in]{histgram-gaus22.png}} \subfigure[Speckle $v=0.1$]{\includegraphics[width=1.05in]{histgram-speckle22.png}} \subfigure[Poisson $\lambda = 
30$]{\includegraphics[width=1.05in]{histgram-pos.png}} \caption{Statistical histograms for noise generated by Matlab, ADANI and LIR. } \label{fig_hist} \end{figure} \begin{table*}[t] \caption{Quantitative results on the SIDD benchmark dataset. (CBDNet, VDN and U-Net are fully-supervised networks.)} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline {} & {BM3D } &{WNNM}&{NLM}&{KSVD\cite{aharon2006k}}& {EPLL\cite{zoran2011learning}}& {CBDNet\cite{guo2019toward}} &VDN \cite{yue2019variational}&N2V&U-Net& {ADANI}\\ \hline {SSIM}&{0.685}&0.809&0.699&0.842&0.870&0.868&\textbf{0.955}&0.507&0.951&0.944\\ \hline {PSNR}&{25.65}&25.78&26.75&26.88&27.11&33.28&\textbf{39.26}&22.41&38.68&37.64\\ \hline \end{tabular} \label{table_real} \end{table*} \textbf{\textit{Gaussian noise. }} The first experiment is blind Gaussian denoising. Each training image is corrupted by Gaussian noise with a random standard deviation $\sigma \in \left( 0,50\right]$. For testing, we synthesize noisy images according to two strategies: a fixed noise level $\sigma = 25$ and a variable $\sigma \in \left( 0,50\right]$. Quantitative results are shown in Table \ref{table1}. Our method significantly outperforms the other unpaired or self-supervised methods, and is close to the supervised networks (U-Net and N2N). Although our method is inferior to BM3D on the test set with Gaussian noise $\sigma = 25$, the effectiveness of BM3D relies on accurate noise priors; for noise with an unknown distribution, the performance of BM3D is poor. In contrast, our method can be adapted to various noises. Figure \ref{fig_gaus} shows the denoising results of the different competing methods. Our denoising network achieves promising results in removing noise and enhancing image quality. \textbf{\textit{Speckle noise. }} To demonstrate the wide applicability of ADANI, we conduct experiments on speckle noise. Speckle noise mostly arises in medical and radar imaging.
It is typically modeled as multiplicative noise on the latent signal $y$, via the equation $x=y+y\cdot n$. In this equation, $n$ is noise sampled from a uniform distribution with a mean of 0 and a variance of $v$. The noisy images for training are synthesized by varying the noise variance $v \in \left(0,0.2 \right]$. We report the comparison results in Table \ref{table1} and Figure \ref{fig_spk}. As can be seen, our ADANI consistently shows encouraging performance. \textbf{\textit{Poisson noise. }} Poisson noise is usually used to model the photon noise of imaging sensors. Its expected magnitude is signal-dependent, so it is harder to remove than signal-independent noise. Following the setting in \cite{laine2019high}, we vary the noise magnitude $\lambda \in \left[5,50\right]$ during training. Comparison results are presented in Table \ref{table1} and Figure \ref{fig_pos}. \textbf{\textit{Discussion.}} These experiments on synthetic noise show the effectiveness and wide applicability of ADANI. It can generate realistic noisy images, as previously shown in Figure \ref{fig1}, to learn denoising, and the denoising performance is close to that of networks trained with external paired data (U-Net and N2N). For practical applications where paired data is not available and the noise statistics are unknown, our ADANI adapts better than supervised methods. \begin{figure}[t] \centering \subfigure[ {Noise}]{\includegraphics[width=1.3in]{sidd3-sidd_noise.png}} \subfigure[ {N2V \cite{krull2019noise2void}}]{\includegraphics[width=1.3in]{sidd3-sidd_n2v.jpg}}\\ \vspace{-0.1in} \subfigure[ {U-Net}]{\includegraphics[width=1.3in]{sidd3-sidd_full.jpg}} \subfigure[ {Our }]{\includegraphics[width=1.3in]{sidd3-sidd_our.jpg}} \caption{Denoising results for the SIDD dataset.} \label{fig_sidd} \end{figure} \textbf{\textit{Noise statistics.}} We now demonstrate the ability of ADANI to model noise over a wide range of distributions.
We randomly crop 10,000 image patches with a size of $128\times 128$ from $\mathcal{D}^{clean}$. These clean image patches are input into the above three noise generators together with noisy patches randomly sampled from $\mathcal{D}^{noise}$. The noise level estimates of the generators' outputs are provided by the corresponding noise level estimators. Since LIR \cite{du2020learning} also uses unpaired data to generate noisy images, we compare ADANI with LIR. The noise level statistical histograms are shown in Figure \ref{fig_level}. As observed, the distributional coverage of the data output by ADANI is much wider than that of LIR. This shows that our adaptive noise imitation strategy can avoid mode collapse. We further evaluate the quality of the noise generated by ADANI and LIR. To do this, we use the above 10,000 clean image patches to synthesize three noisy datasets with Matlab (\textit{i.e.} Gaussian noise $\sigma =25$, speckle noise $v=0.1$, Poisson noise $\lambda =30$). These noisy and clean image patches are randomly shuffled to form unpaired inputs for ADANI and LIR. To eliminate the influence of the background, the image background is subtracted from the noisy patch to obtain the noise component. Figure \ref{fig_hist} shows the statistical histograms of the noise generated by the three methods. As observed, the noise distribution produced by ADANI is similar to that of the guided noise (Matlab), while the noise generated by LIR is obviously distorted. This experiment further demonstrates that ADANI can produce realistic noise by noise imitation. \subsection{Real-world noise} In this part, we evaluate the performance of ADANI on a real-world noise dataset, the Smartphone Image Denoising Dataset (SIDD) \cite{abdelhamed2018high}. SIDD contains thousands of images with various noise levels, dynamic ranges, and brightnesses. For each noisy image, the ground-truth is obtained via some statistical methods \cite{abdelhamed2018high}.
For fast training, 320 pairs of high-resolution noisy/clean images are selected as the medium version of SIDD, called SIDD-Medium. We employ the SIDD-Medium dataset to train CNNs. To prepare unpaired data, SIDD-Medium is randomly divided into two parts, each with 160 pairs of images. We use the 160 noisy images from the first part and the 160 clean images from the second part to train ADANI. Quantitative results are listed in Table \ref{table_real}. As observed, ADANI achieves PSNR and SSIM comparable to fully-supervised networks. Visual results are presented in Figure \ref{fig_sidd}. Since the noise in SIDD data is spatially correlated, which violates the assumption of N2V, N2V fails to remove this noise, unlike our proposed method. \begin{figure} \centering \subfigure[Clean{\color{white} aaaaaa} SSIM, PSNR]{\includegraphics[width=0.75in]{mri-mri_clean.jpg}} \subfigure[Input {\color{white} aaaaaa} 0.129, 19.67 ]{\includegraphics[width=0.75in]{mri-mri_noise.png}} \subfigure[U-Net {\color{white} aaaaa} 0.938, 37.02]{\includegraphics[width=0.75in]{mri-mri_full.jpg}} \subfigure[Ours {\color{white} aaaaaa} 0.928, 35.86]{\includegraphics[width=0.75in]{mri-mri_our.jpg}} \caption{MRI denoising example. } \label{fig_mri} \end{figure} \begin{figure}[t] \centering \subfigure[ {Clean $|$ SSIM, PSNR}]{\includegraphics[width=1.4in]{text-gt.jpg}} \subfigure[ {Noise $|$ 0.746, 18.46}]{\includegraphics[width=1.4in]{text-noise.jpg}}\\ \vspace{-0.1in} \subfigure[ {U-Net $|$ 0.957, 31.31}]{\includegraphics[width=1.4in]{text-full.jpg}} \subfigure[ {Ours $|$ 0.937, 28.95}]{\includegraphics[width=1.4in]{text-our.jpg}} \caption{Example results for text inpainting, $p=0.15$.} \label{fig_text} \end{figure} \subsection{MRI denoising} Magnetic resonance imaging (MRI) is a non-invasive medical imaging technology that provides high-resolution images of human tissues and organs. The quality of MR images, however, is easily degraded by noise during image acquisition.
The noise in MR images follows the Rician distribution, which is much more complex than traditional additive noise. Here, we show the ability of ADANI to denoise MR images. We conduct experiments on the liver images of the CHAOS\footnote{https://chaos.grand-challenge.org/} dataset. We randomly sample half of the clean images from the training set, and each image is degraded by a different level ($1\% - 13\%$ of maximum intensity) of Rician noise \cite{ran2019denoising}. The remaining clean images belong to $D^{clean}$. The 600 images in the test set are corrupted by noise at a level of $8\%$. ADANI is compared with the fully-supervised U-Net. Results are shown in Figure \ref{fig_mri}. Our denoising network cleanly removes noise and restores high-quality images. In addition, U-Net gives an average of 0.905/35.63 dB in terms of SSIM and PSNR, slightly better than ADANI's 0.896/34.77 dB. \subsection{Blind image inpainting} ADANI can be applied to other image restoration tasks. Here, we show the application of ADANI to image inpainting. As with denoising, ADANI requires neither prior knowledge of the image degradation process nor paired data for inpainting. We use the clean images in subsection \ref{synthetic} to synthesize text-degraded data. This degradation consists of a variety of random strings with random font sizes, colors, and locations. Each pixel in the training samples is degraded with a variable probability $p\in \left(0,0.3\right]$. For test images, $p$ is fixed to 0.15. ADANI is compared with a fully-supervised U-Net. Subjective comparisons are presented in Figure \ref{fig_text}. ADANI gives 0.942/30.78 dB in terms of SSIM and PSNR for the BSD300 test set, close to U-Net's 0.964/33.75 dB.
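For reference, the Rician degradation used in the MRI experiments above can be sketched as follows. This is a minimal NumPy version; the convention of specifying the Gaussian width as a fraction of the maximum intensity is our reading of the noise-level percentages in the text.

```python
import numpy as np

def add_rician(img, level, rng):
    # Rician noise: magnitude of a complex signal whose real and imaginary
    # channels each acquire i.i.d. Gaussian noise of width sigma
    sigma = level * img.max()
    real = img + rng.normal(0.0, sigma, img.shape)
    imag = rng.normal(0.0, sigma, img.shape)
    return np.sqrt(real ** 2 + imag ** 2)

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, size=(64, 64))   # stand-in MR slice
noisy = add_rician(img, 0.08, rng)           # the 8% test-set level
```

Training levels would be drawn per image from $[0.01, 0.13]$, matching the $1\%-13\%$ range in the text.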
\begin{table}[t] \caption{Ablation study on the effect of $\alpha \mathcal{L}_{gradient}$ and $\gamma \mathcal{L}_{logit}$.} \centering \scriptsize \begin{tabular} {|c|p{0.05\columnwidth}|p{0.09\columnwidth}|p{0.11\columnwidth}|p{0.12\columnwidth}|p{0.11\columnwidth}|p{0.11\columnwidth}|} \hline \multicolumn{2}{|c|}{} & $\alpha=0$ $\gamma=0$&$\alpha=10$ $\gamma=0$&$\alpha=100$ $\gamma=0$&$\alpha=z^r_1$ $\gamma=0$&$\alpha=z^r_1$ $\gamma=0.1$\\ \hline Speckle &SSIM&0.819&0.856&0.830&0.868&\textbf{0.872}\\ \cline{2-7} ($v=0.1$)&PSNR&27.37&29.06&28.15&29.70&\textbf{29.96}\\ \hline \end{tabular} \label{table_ablation} \end{table} \subsection{Ablation study} The core of ADANI is the noise similarity loss $\alpha \mathcal{L}_{gradient}$ and the logit consistency loss $\gamma \mathcal{L}_{logit}$ in Eq.(\ref{eq.L_all_final}). To verify the importance of the proposed adaptive noise imitation strategy, we compare ADANI with four variants. In the first one, we set the hyperparameter $\gamma$ for $\mathcal{L}_{logit}$ to 0. For the remaining three variants, we further set $\alpha$ to a value sampled from $\{0,10,100 \}$ instead of the dynamic $z^r_1$. Similar to subsection \ref{synthetic}, we conduct the experiments on the speckle noise dataset with $v \in \left(0, 0.2\right]$. The comparison is reported in Table \ref{table_ablation}. $\mathcal{L}_{logit}$ encourages the generation of diverse noise, which improves the denoising performance of our method. On the other hand, setting the hyperparameter $\alpha$ for $\mathcal{L}_{gradient}$ to a fixed 10 or 100 compromises our denoising network. This is because a fixed $\alpha$ assigns the same importance to all noisy images, which makes it difficult for ADANI to distinguish noise from the image edges in the gradient map ${\nabla }x^r$. If both $\alpha$ and $\gamma$ are 0, our ADANI turns into a normal GAN. Since the ambiguous GAN loss $\mathcal{L}_{GAN}$ cannot guarantee high-quality noise generation, the denoising results are poor.
In contrast, our adaptive noise imitation strategy achieves high-quality noise synthesis and denoising. \section{Conclusion} We proposed a novel adaptive noise imitation (ADANI) algorithm, which enables the training of denoising CNNs without pre-collected paired data. ADANI generates new noisy data for learning denoising by observing its unpaired noisy/clean input. The noisy data produced by ADANI is visually and statistically similar to real data, which leads to encouraging denoising performance. We demonstrated the effectiveness and wide applicability of ADANI on multiple denoising and image restoration tasks. Since ADANI requires neither pre-collected paired data nor a pre-defined image degradation process, it is a promising solution for many practical applications.
\section{Introduction} \label{Sec:Introduction} The long-conjectured phenomenon of many-body localization (MBL) \cite{Anderson:1958p7531} has been put on a much firmer basis by the work of Basko, Aleiner and Altshuler (BAA) \cite{Basko:2006hh,Basko:2006ly} and a series of subsequent investigations \cite{Oganesyan:2007uq,Pal:2010zr,Bardarson:2012qa,Iyer:2013kl,Huse:2013tx,Huse:2013ys,Bahri:2013vn, Vosk:2013fu,Rigol:2008bh, Bauer:2013ghi,Serbyn:2013abcd}. For a range of energy densities above the ground state, the highly excited eigenstates of sufficiently disordered quantum Hamiltonians with locally bounded Hilbert spaces exhibit a set of interlinked properties: (i) these states fail to satisfy the eigenstate thermalization hypothesis (ETH) \cite{Deutsch:1991ve,Srednicki:1994qf} so that the canonical ensemble and temperature are no longer meaningful; (ii) the long wavelength thermal conductivity vanishes within this energy range; (iii) neighboring eigenstates in the many-body spectrum differ significantly in their local properties; and (iv) the entanglement entropy of macroscopic domains is sub-extensive. This last point should be contrasted with usual extended states, wherein the entanglement entropy scales with the volume of the domain and ought to agree with the canonical thermodynamic entropy. While the above complex of properties is generally applicable to any MBL phase\footnote{Here, a `phase' corresponds to a region in the energy density and parameter space, which mimics the terminology for equilibrium systems. Readers will keep in mind that the system is by definition {\it not} in equilibrium in an MBL phase.}, recently Huse et al.\ \cite{Huse:2013tx} observed that the eigenstates could be more finely classified with reference to various measures of order. MBL eigenstates can spontaneously break or preserve global symmetries and exhibit or fail to exhibit topological order.
These phenomena could violate the naive expectation from Peierls-Mermin-Wagner type arguments. Essentially, the localization of defects allows order to persist at energy densities where equilibrium arguments predict destruction of order. In this article we extend the analysis of Ref.~\onlinecite{Huse:2013tx} to a case intermediate between symmetry-breaking and topological order. This is the case of symmetry protected topological order (SPT) \cite{Gu:2009dq,Chen:2011cr,Fidkowski:2011tg,Schuch:2011kl}, wherein a symmetry is needed for the phase to exist but the order itself is topological in nature and cannot be characterized by a local order parameter. Clean zero temperature SPT phases have a bulk gap to well-defined excitations whose quantum numbers are not fractional. Furthermore, SPT ground states cannot be continuously connected to trivial product states without either breaking the protecting symmetry or closing the energy gap; however, such a continuous path must exist if the protecting symmetry is explicitly broken. The canonical example of an SPT phase is the Haldane phase in $d=1$ \cite{Haldane1983464,PhysRevLett.50.1153} and the most celebrated one is by now surely the $Z_2$ topological insulator in $d=3$ (reviewed in Ref.~\onlinecite{Hasan:2010hc}). With this background, we can now state our central question: Can highly excited eigenstates exhibit SPT order in the presence of MBL? We take such order to generalize the cluster of properties listed above. Specifically, we wish to examine Hamiltonians invariant under a protecting symmetry with highly excited eigenstates that lie in a mobility gap. We will require an eigenstate phase transition (at which the properties of the eigenstates change in some singular fashion) between the SPT region and the trivial region, which is well captured by product states as long as the protecting symmetry is intact. 
Further, there should be a path along which such a phase transition is absent when the symmetry is explicitly broken. In the following, we address this question via two examples. The first is the Haldane phase protected by a discrete symmetry. We present strong evidence that the SPT order extends in an MBL version to highly excited eigenstates even though equilibrium considerations preclude such order. We do so by introducing an appropriate generalization of the AKLT model of Affleck, Kennedy, Lieb and Tasaki \cite{PhysRevLett.59.799,affleck1988vbg} that allows the arguments of BAA to be brought to bear on highly excited states. We discuss various diagnostics of the Haldane phase that extend to this regime. We also note that the Haldane phase with continuous $SU(2)$ symmetry does not obviously extend to an MBL version and explain the obstacles involved in settling this question. Our second example is the topological Ising paramagnet in two dimensions \cite{Levin:2012jq,Chen:2011ij}. Here again we adapt the BAA arguments to establish MBL and discuss the diagnostics needed to establish SPT order. We conclude with some comments on generalizations and open questions. As we were finishing this article, we became aware of the preprint, Ref.~\onlinecite{Bahri:2013vn}. In this preprint, the authors study MBL in a one dimensional spin-1/2 model related to our first example, the Haldane phase, from the perspective of edge modes, the entanglement spectrum and string order. We discuss the precise connection between our work and theirs at the end of Sec.~\ref{Sec:Haldane}. \section{Haldane phase} \label{Sec:Haldane} \subsection{Review of low energy physics} We begin with the Haldane phase of the spin-1 antiferromagnetic chain. 
Although usually understood in the context of continuous rotational symmetry \footnote{It is interesting to note that Haldane discovered the phase that bears his name studying a dihedral-symmetric perturbation of the SU(2) invariant spin chains.}, the Haldane phase is an SPT which may be protected by any one of the following discrete symmetries: inversion, time reversal or the dihedral group $D$ of $\pi$-rotations around the $x,y,z$-spin axes \cite{Berg:2008fk, Pollmann:2010ih,Pollmann:2012nx}. At zero temperature, the clean phase is a gapped quantum spin liquid which breaks none of these symmetries. It has several defining characteristics. First, the bulk exhibits simultaneous long-range ``string'' order \cite{denNijs:1989dp, PhysRevB.45.304} in the operators $(\alpha = x,y,z)$ \begin{align} \label{eq:stringorderparameter} \sigma^{\alpha}_{ij} = - S_i^\alpha \left(\prod_{k=i+1}^{j-1} R_k^\alpha \right) S_j^\alpha \end{align} where $R_j^\alpha = e^{i\pi S^\alpha_j}$ represents a rotation by $\pi$ around the $\alpha$ spin axis of site $j$ and $S^\alpha_i$ are the usual spin-1 operators. Second, the boundary exhibits protected spin-$1/2$ edge modes as a consequence of which the ground state is four-fold degenerate on open chains. Third, the presence of the protected spin-$1/2$ edge modes implies a two-fold degeneracy in the entanglement spectrum for virtual (Schmidt) cuts in the bulk of the chain. Further, the underlying spin-1 degrees of freedom do not fractionalize in the bulk, in consonance with the definition of an SPT. The low energy excitations are gapped spin-1 bosons called `triplons', discussed later in the text. In contrast, in the trivial phase with the same discrete symmetries, the ground state can always be smoothly connected to a product state through a symmetric path\cite{Pollmann:2010ih}. The trivial phase has no string order, boundary modes or degenerate entanglement spectra; hence these properties signal the SPT order of the Haldane phase. 
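For concreteness, the string operator of Eq.~\eqref{eq:stringorderparameter} can be assembled numerically on a small chain. This is our own sketch (shown for $\alpha=z$), not code from the paper:

```python
import numpy as np
from scipy.linalg import expm

# spin-1 operators
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Sp = np.sqrt(2.0) * np.diag([1.0, 1.0], 1).astype(complex)  # raising operator
Rz = expm(1j * np.pi * Sz)                                  # pi-rotation: diag(-1, 1, -1)

def string_operator(i, j, L):
    # sigma^z_{ij} = -S_i^z (prod_{k=i+1}^{j-1} R_k^z) S_j^z on an L-site chain
    ops = [np.eye(3, dtype=complex)] * L
    ops[i], ops[j] = Sz, Sz
    for k in range(i + 1, j):
        ops[k] = Rz
    out = np.array([[1.0 + 0.0j]])
    for o in ops:
        out = np.kron(out, o)
    return -out
```

At the AKLT point the ground-state expectation of this operator saturates to $4/9$ in this sign convention (the den Nijs-Rommelse string order).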
\subsection{Ergodicity and localization in highly excited states} In the following, we will review how these signatures of the Haldane phase disappear at $T>0$ in clean systems as a consequence of the delocalization of the triplons in highly excited states. On the other hand, in the presence of sufficient disorder, we will argue that individual triplons Anderson localize. At sufficiently small, but non-zero, energy density, the dilute gas of localized triplons interacts only weakly so that the perturbative arguments of BAA apply and the system is many-body localized. Finally, we will discuss how various defining characteristics of the Haldane phase persist to finite energy density in a suitably modified form in this MBL phase. To be concrete, we introduce a frustration-free model for the Haldane phase. As the SPT order requires only the dihedral group $D = \{\mathbf{1},R^x,R^y,R^z\} \equiv \mathbb{Z}_2 \times \mathbb{Z}_2$ to protect it, our model has precisely this symmetry, but is otherwise very closely related to the celebrated $O(3)$-symmetric AKLT model \cite{PhysRevLett.59.799,affleck1988vbg}. The Hamiltonian, which we refer to as the BKLT Hamiltonian, is \begin{widetext} \begin{align} \label{eq:bklt_ham} H_{BKLT} = \sum_{i,\alpha} P^{(2)}_{i,i+1} \left( J_i + c_i^\alpha (S_i^\alpha + S_{i+1}^\alpha)^2 + d_i^\alpha (S_i^\alpha + S_{i+1}^\alpha)^4 \right) P^{(2)}_{i,i+1} \end{align} \end{widetext} where $P^{(2)}_{i,j}$ projects onto the spin-2 representation of the spins $i$ and $j$, and $J_i , c_i^\alpha, d_i^\alpha>0$ are coupling constants \footnote{If \unexpanded{$c_i^\alpha, d_i^\alpha>0$}, then each term is strictly positive. Note that in the spin-$2$ representation, the coupling constants are not all independent as $(S_i^\alpha + S_{i+1}^\alpha)^2=6$. Taking \unexpanded{$c_i^\alpha, d_i^\alpha$} to zero reduces BKLT to the traditional AKLT model. }. 
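The projector $P^{(2)}_{i,i+1}$ entering $H_{BKLT}$ can be written as a polynomial in the bond Casimir $(\vec S_i + \vec S_{i+1})^2$, whose eigenvalues on two spin-1s are $0$, $2$, and $6$. A quick numerical sketch of this construction (ours, not the paper's):

```python
import numpy as np

# spin-1 operators
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Sp = np.sqrt(2.0) * np.diag([1.0, 1.0], 1).astype(complex)
Sx = (Sp + Sp.conj().T) / 2.0
Sy = (Sp - Sp.conj().T) / 2.0j

I3 = np.eye(3, dtype=complex)
# total spin on a bond of two spin-1 sites
Stot = [np.kron(S, I3) + np.kron(I3, S) for S in (Sx, Sy, Sz)]
S2 = sum(S @ S for S in Stot)               # eigenvalues s(s+1) = 0, 2, 6

# P annihilates the s=0 and s=1 sectors and acts as 1 on the s=2 sector
P2 = S2 @ (S2 - 2.0 * np.eye(9)) / 24.0
```

Each BKLT term is a positive operator sandwiched between two copies of $P^{(2)}$, which is why states annihilated by every projector, such as the AKLT-type ground states, are exact zero-energy eigenstates.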
The ground state space of $H_{BKLT}$ is identical to that of the AKLT model: there are four ground states on open chains, each of which possesses an explicit, compact matrix product state (MPS) representation simultaneously annihilated by all $P^{(2)}_{i,i+1}$ and therefore by $H_{BKLT}$. The excitation gap is of order $J_i$ and the eigenstates may be labeled by the one-dimensional representations of $\mathbb{Z}_2\times\mathbb{Z}_2$. Even though the ground states are exactly known, $H_{BKLT}$ is not fully integrable. Its excited states should therefore be generic with respect to thermalization and many-body localization. The A/BKLT ground states can be constructed by splitting each spin $1$ site into two virtual spin $1/2$ degrees of freedom. Pictorially, \begin{align} \ket{A;v_L, v_R} = \vcenter{\hbox{\includegraphics[scale=0.8]{eqfig_akltstate.pdf}}} \end{align} where each small circle represents a virtual spin $1/2$, the solid lines denote singlet pairings and the ovals the symmetrization to reproduce a spin $1$ physical degree of freedom. Here, $v_L$ and $v_R$ are the state vectors for the boundary spins that label the four-dimensional ground state space on the open chain. This picture immediately reveals the physical origin of the spin $1/2$ boundary modes -- they correspond to the unpaired virtual degrees of freedom left on either end of the open chain. The picture also suggests the origin of the $2$-fold degeneracy in the entanglement spectrum as the cutting of the virtual Bell pair shared by a link. The virtual spin structure of the A/BKLT state suggests a natural candidate for the low energy bulk excitations, \begin{align} \label{eq:single_triplon} \ket{j, \alpha} = \vcenter{\hbox{\includegraphics[scale=0.8]{eqfig_triplonstate.pdf}}} \end{align} where the double line at bond $j$ indicates a virtual pair in triplet state $\alpha$. Note that we have suppressed the explicit boundary spin states $v_L, v_R$. 
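The compact MPS structure can be probed directly. With one standard normalization of the AKLT tensors (an assumption on our part, not the paper's convention), the transfer matrix has spectrum $\{1, -\tfrac13, -\tfrac13, -\tfrac13\}$, so connected correlations decay as $(1/3)^r$:

```python
import numpy as np

# a standard normalization of the AKLT tensors A^{s}_{ab};
# s labels the physical spin-1 state, a,b the virtual spin-1/2 indices
A = [
    np.sqrt(2.0 / 3.0) * np.array([[0.0, 1.0], [0.0, 0.0]]),    # s = +1
    -np.sqrt(1.0 / 3.0) * np.array([[1.0, 0.0], [0.0, -1.0]]),  # s =  0
    -np.sqrt(2.0 / 3.0) * np.array([[0.0, 0.0], [1.0, 0.0]]),   # s = -1
]

# transfer matrix E = sum_s A^s (x) conj(A^s)
E = sum(np.kron(a, a.conj()) for a in A)
evals = np.sort(np.linalg.eigvals(E).real)
```

The unit eigenvalue encodes normalization; the three $-1/3$ eigenvalues set the bulk correlation length $1/\log 3$, the same scale that appears in the triplon overlap matrix.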
The single `triplon' states $\ket{j,\alpha}$ are non-orthogonal but linearly independent. They span the manifold studied in the single-mode approximation (SMA) provided by $S^\alpha_j$ operators acting on $\ket{A}$ \cite{PhysRevLett.60.531}. \footnote{On a periodic chain of length $L$, there are $L$ linearly independent bond triplons as we have defined them. The spin operators $S^{\alpha}_j$ create superpositions of the form {$S^{\alpha}_j \ket{A} \propto \ket{j,\alpha} - \ket{j-1,\alpha}$} and thus there are only $L-1$ linearly independent states in the traditional SMA calculation.} These states are believed to be good variational approximations to the local excitations of $H_{AKLT}$, in part because the SMA calculations produce a single triplon band quantitatively in good agreement with numerical studies \cite{PhysRevLett.60.531}. We note in passing that the bond triplon states provide a superior framework for the study of excitations in higher dimensional valence-bond solid states as well, where the SMA is inadequate. In the $O(3)$ symmetric AKLT case, the three triplon states $\ket{j,\alpha}$ are strictly degenerate. Breaking the $O(3)$ symmetry down to the dihedral subgroup, as in BKLT, lifts the degeneracy and selects the dihedral-symmetric states $\ket{x}, \ket{y}, \ket{z}$ as an appropriate basis. In terms of virtual spins, \begin{align*} \ket{x} &= (\ket{\uparrow\up} - \ket{\downarrow\down})/\sqrt{2} \\ \ket{y} &= (\ket{\uparrow\up} + \ket{\downarrow\down})/\sqrt{2} \\ \ket{z} &= (\ket{\uparrow\downarrow} + \ket{\downarrow\uparrow})/\sqrt{2} \end{align*} where $\ket{x}$ has eigenvalues $+1, -1, -1$ under $R^x, R^y, R^z$, $\ket{y}$ has $-1, +1, -1$ and $\ket{z}$ has $-1, -1, +1$. The reader should recognize that dihedral symmetry has picked out the maximally entangled Bell states! Consider now the three diagnostics of the Haldane phase in the presence of a maximally localized triplon. 
(i) As the virtual spins in $\ket{j,\alpha}$ form a Bell state across every bond, the entanglement spectrum exhibits two-fold degeneracy across any real space cut. It is straightforward to confirm this using the explicit MPS representation of $\ket{j,\alpha}$ following from Eq.~\eqref{eq:single_triplon}. (ii) The triplon excitation produces a topological defect in the string order parameter $\sigma^\beta_{ik}$. Explicitly, \begin{align} \label{Eq:OneTripStringOrder} \bra{j, \alpha} \sigma^\beta_{ik}\ket{j,\alpha} = \left\{ \begin{array}{ll} -(-1)^{\delta_{\alpha\beta}}\frac{4}{9} & i \le j < k \\ \frac{4}{9} & \textrm{else} \end{array}\right. \end{align} That is, if the string operator crosses the triplon, it picks up a minus sign unless the flavors of the string and the triplon agree. (iii) On open chains, there remain four degenerate, linearly-independent variational states corresponding to the choice of boundary conditions ($v_L, v_R$) for the localized triplon state $\ket{j,\alpha}$\footnote{The issue of linear independence for bond triplon states is somewhat delicate. On an open chain of length $L$, there are naively $12(L-1)$ triplon states corresponding to the 4 boundary states, 3 triplon flavors and $L-1$ positions. These span only a $12(L-1) - 4$ dimensional space. On a closed chain, there are $3L$ linearly independent states.}. The demise of the Haldane phase at finite energy density in the clean system is now apparent. Diagonalizing $H_{BKLT}$ in the variational single triplon manifold gives rise to three delocalized bands of triplons corresponding to each of the flavors $\alpha$. This follows from solving the generalized eigenvalue problem where $H_{BKLT}$ is purely diagonal in the localized triplon basis while the overlap matrix $\braket{j,\alpha}{k,\beta} \sim \delta_{\alpha\beta} (1/3)^{|j-k|}$ produces the off-diagonal dispersion. At low energy densities, we expect a dilute gas of these delocalized triplons in the eigenstates of $H_{BKLT}$.
This fluctuating gas (i) produces an extensive entanglement entropy for macroscopic domains which precludes an MPS representation for the highly excited eigenstates and washes out the two-fold entanglement degeneracy. (ii) As the triplons act as defects in the string order Eq.~\eqref{eq:stringorderparameter}, their spatial fluctuations suppress this order on the length scale of the inverse density. Finally, (iii) the spin-1/2 boundary modes decohere due to interaction with the delocalized bulk triplons on a time scale set by the density of triplons. This is all consistent with the expectation that there is no order, topological or otherwise, at finite temperature in one dimension. \begin{figure}[tbp] \centering \includegraphics{fig-anderson-500.pdf} \caption{Seven typical eigenmodes of the Anderson problem in the single $\alpha=z$ triplon manifold in a $500$-site chain with periodic boundary conditions. The coupling constants $J_i$ are drawn uniformly from the interval $\left(0,1\right)$. } \label{fig:anderson_WF} \end{figure} The presence of sufficient disorder leads to an entirely different picture of the highly excited eigenstates -- that they may many-body localize and thus retain their SPT character. Consider the introduction of disorder in the couplings of $H_{BKLT}$. So long as $J_i> 0$, the ground state is completely unperturbed by this variation, which is an extreme manifestation of the insensitivity of gapped phases to weak spatial disorder. The excitation spectrum, on the other hand, changes dramatically. Even for weak variations $\delta K \ll K$ for $K=J, c, d$, we expect the single triplon eigenstates to Anderson localize. This follows from analyzing the generalized eigenvalue problem described in the paragraph above with spatially varying diagonal matrix elements. Fig.~\ref{fig:anderson_WF} shows the typical localized triplon wavefunctions found by this analysis. We now make the case for MBL following BAA. 
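The single-triplon Anderson problem described above reduces to a generalized eigenvalue problem: the energies $J_j$ are diagonal in the localized bond-triplon basis, and the dispersion comes entirely from the non-orthogonality $(1/3)^{|j-k|}$ of that basis. A minimal sketch of the calculation behind Fig.~\ref{fig:anderson_WF}, with our simplified parameters rather than the paper's:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
L = 200
J = rng.uniform(0.0, 1.0, size=L)       # disordered couplings, as in the figure caption
H = np.diag(J)                          # H is diagonal in the bond-triplon basis
idx = np.arange(L)
S = (1.0 / 3.0) ** np.abs(idx[:, None] - idx[None, :])   # overlap matrix

eps, psi = eigh(H, S)                   # generalized problem  H psi = eps S psi

# inverse participation ratio as a localization diagnostic (near 1 = one bond)
w = psi ** 2 / np.sum(psi ** 2, axis=0)
ipr = np.sum(w ** 2, axis=0)
```

With disorder of order the bandwidth, the resulting modes should be strongly localized, in line with the wavefunctions shown in Fig.~\ref{fig:anderson_WF}.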
Consider the excited states with a low density of localized triplons. The interaction $U$ between two triplons separated by a distance $l$ scales as $J e^{-l/\xi}$, where $\xi$ is the longer of the triplon overlap decay length ($1/\log(3)$) and the localization length. When the typical spacing $l$ between excitations is sufficiently large that the typical energy splitting between nearby states (of order 1) is much larger than the interactions, $U \sim \pm J e^{-l / \xi}$, the perturbative BAA arguments protect triplon localization. That is, the system remains many-body localized up to a finite energy density $\epsilon$ at which the typical separation $J/\epsilon$ becomes comparable to the scale $\xi$. \begin{figure}[tbp] \centering \includegraphics{fig-entanglement-quad-obc.pdf} \caption{The entanglement spectra of four consecutive excited states, starting with the $60$th state above the ground state, for a 12-site open BKLT chain with disorder in all coupling constants. Each state is decomposed into two equal halves.} \label{fig:entspec_4} \end{figure} The naive application of the same argument fails as one approaches the $O(3)$ symmetric AKLT point by taking $c_i, d_i$ to zero. In this limit, the local fields splitting the triplet degeneracy vanish, so that there is no regime where the typical interaction strength $U$ is smaller than the typical local level spacing. Rather, the localized triplons carry spin-1 and the system of a dilute random array of non-interacting triplons is highly degenerate. From this point of view, the interactions (still of order $U \sim J e^{-l / \xi}$) split this large degeneracy according to a disordered system of both ferro- and antiferromagnetic exchanges. Whether such an effective $O(3)$-symmetric random spin-1 chain can exhibit an MBL phase is an intriguing open question.
The application of the real space renormalization group to such a system suggests that the system ought to grow large effective moments \cite{Westerberg:1995pi, Hyman:1997cu}, which, if they behave classically as one expects of large spins, would lead to thermal conduction and equilibration \cite{Oganesyan:2009ff,Basko:2011lh}. Finally, we consider how the three signatures of the Haldane phase persist to finite energy density in the MBL regime. First, take the `caricatures' of the excited states at low energy density given by the MPS with a low density of double lines at prescribed bonds as in Eq.~\eqref{eq:single_triplon}. We have already noted that (i) the entanglement spectrum is doubly degenerate, (ii) the string order is `glassy' (Eq.~\eqref{Eq:OneTripStringOrder}), and (iii) the expectation value of $H_{A/BKLT}$ is independent of the virtual spins $v_L$, $v_R$ on the boundary. Thus, if the `caricature' states were the true excited eigenstates in the presence of disorder, all of the characteristics of the Haldane phase would persist to low energy density. Of course, the simple caricatures neglect the `fuzziness' in the position of the triplons in Anderson localized single particle wavefunctions such as in Fig.~\ref{fig:anderson_WF}. To construct multi-triplon `filled' Anderson localized states, we define the bond triplon creation operators: \begin{align} t_j^\alpha = \prod_{i \le j} R^\alpha_i \end{align} These commuting, self-adjoint, unitary operators place triplons of type $\alpha$ at bond $j$ when acting upon the A/BKLT ground state space. The single triplon localized states are then created by \begin{align} \label{Eq:AndersonTrip} t_\psi^\alpha = \sum_j \psi^\alpha_j t_j^\alpha \end{align} acting on the A/BKLT vacuum, where $\psi^\alpha_j$ are the eigenmodes in the single triplon problem.
We caution that the mode functions $\psi^\alpha_j$ are not orthonormal, as they are coefficients with respect to a non-orthogonal basis, and neither do the $t_j^\alpha$ satisfy a canonical algebra. Nonetheless, for sufficiently dilute collections of triplons, we expect the Fock states $\ket{\Psi} = t_{\psi_1}^{\alpha_1}t_{\psi_2}^{\alpha_2}\cdots t_{\psi_N}^{\alpha_N}\ket{A}$ to be good approximate representations of the MBL eigenstates. Just as localized Fock states of normal bosons and fermions have entanglement entropy satisfying an area law, $\ket{\Psi}$ has an area law for localized $\psi^\alpha_j$. Thus, such states can be recast to exponential accuracy as finite dimensional MPS states, which in turn fall into the two-fold SPT classification of dihedral symmetric states \cite{Pollmann:2010ih,Turner:2011fj}. We recapitulate this argument in more detail in Appendix~\ref{Sec:AppES} for non-translation invariant states. In the same appendix, we argue that the fuzzy Fock states above are in the same non-trivial class as the A/BKLT ground state, that is, they exhibit two-fold degenerate entanglement spectra in the bulk for a single spatial cut. Numerical exact diagonalization results are consistent with this prediction. In Fig.~\ref{fig:entspec_4}, we plot the entanglement spectra of a few excited states of the 12-site open BKLT chain with disorder. Dihedral symmetry forces the physical spin halves at the two boundaries to be maximally entangled; thus the spectrum should be 4-fold degenerate if the excited state has SPT order. There is evidence of this degeneracy in Fig.~\ref{fig:entspec_4}. In conclusion, states such as $\ket{\Psi}$ exhibit (i) two-fold degenerate entanglement spectra in the bulk, (ii) long-range string glass order with softened, frozen-in domain walls, and (iii) spin-1/2 boundary modes associated with the projective representation of the corresponding finite dimensional MPS.
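The claimed algebra of the bond-triplon operators (commuting, self-adjoint, unitary, even between different flavors $\alpha$, since the dihedral group acts linearly on integer spin) is easy to check numerically on a short chain. A sketch with our own conventions:

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

L = 4
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Sp = np.sqrt(2.0) * np.diag([1.0, 1.0], 1).astype(complex)
Sx = (Sp + Sp.conj().T) / 2.0
R = {"x": expm(1j * np.pi * Sx), "z": expm(1j * np.pi * Sz)}  # pi-rotations

def t(alpha, j):
    # t_j^alpha = prod_{i <= j} R_i^alpha on an L-site spin-1 chain
    ops = [R[alpha] if i <= j else np.eye(3, dtype=complex) for i in range(L)]
    return reduce(np.kron, ops)
```

For half-integer spins the same $\pi$-rotations would anticommute, which is precisely the projective action responsible for the spin-1/2 boundary modes.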
In the presence of dihedral symmetry, the string glass order diagnoses the non-analyticity of the eigenstates at the transitions between the SPT MBL phase and the trivial MBL phase (or the ergodic phase). On the other hand, without dihedral symmetry, such an order parameter distinction disappears. For example, turning on a local N\'{e}el field induces a N\'{e}el magnetization, and as shown in Ref.~\onlinecite{PhysRevB.45.304}, string order. Thus, the non-analyticity associated with the loss of the long range string glass order will be lost and the eigenstates in both MBL phases can be smoothly connected. We end with a few comments. First, the recent numerical study in Ref.~\onlinecite{Bahri:2013vn} probed the boundary modes of excited MBL states in a related one-dimensional model using spin-echo. Such numerical experiments are unavailable in the disordered BKLT model due to the large intrinsic correlation lengths as compared to accessible system sizes. Second, a consequence of the existence of boundary modes is a `pairing' regime in the many-body energy spectrum of open chains. In this regime, the four boundary states can be identified by their small splitting relative to the exponentially small many-body spacing \cite{Huse:2013tx}. However, there is evidence from perturbative and numerical calculations in the non-integrable Majorana chain that this pairing may persist to the clean limit \cite{Laumann:fu}. The relationship between pairing and coherent boundary modes is thus not settled and requires further study. Finally, the entire discussion in this section is not special to the A/BKLT point. The MBL phase at low energy densities continues away from these points. \section{Topological Ising paramagnet in d=2} \subsection{Review of low energy physics} We now turn to discrete SPT phases in higher dimension. 
In particular, we consider two dimensional spin systems with $\mathbb{Z}_2$ symmetry, where there is a two-fold classification of SPTs: the trivial and the topological Ising paramagnets \cite{Levin:2012jq,Chen:2011ij,Chen:2013bs}. We work near an exactly solvable model in the topological SPT phase, first constructed by Levin and Gu \cite{Levin:2012jq}: \begin{align} H_{LG} = - \sum_{s} \Lambda_s B_s, \qquad B_s = -\sigma_s^x\prod_{\langle s q q' \rangle} i^{\frac{1-\sigma_q^z\sigma_{q'}^z}{2}}. \label{eq:LGHam} \end{align} Here, $\Lambda_s$ are coupling constants, $\sigma_s$ are Pauli spin-1/2 operators living on the sites $s$ of a triangular lattice, and the product in $B_s$ runs over the six triangles $\langle s q q' \rangle$ intersecting the site $s$ (see Fig.~\ref{fig:LGHam}). \begin{figure} \includegraphics[width=\columnwidth]{Ham.pdf} \caption{ (Left) Trivial paramagnet $H_0$ defined in \eqref{Eq: H_0} is a sum of $\sigma_s^x$ terms on sites $s$ of the triangular lattice. (Right) Topological paramagnet $H_{LG}$ defined in \eqref{eq:LGHam} is a sum of seven-spin terms involving a product of $\sigma_s^x$ and a phase factor $\prod_{\langle s q q'\rangle} i^ {\frac{1- \sigma_q^z \sigma_{q'}^z}{2}}$ from the six spins surrounding $s$. } \label{fig:LGHam} \end{figure} The Hamiltonian is invariant under the protecting Ising symmetry $S = \prod_s \sigma_s^x$. The $B_s$ operators on different sites commute with each other, and the gapped paramagnetic ground state is the simultaneous $B_s = +1$ eigenstate $\forall s$. On closed manifolds, this ground state is unique and can be written explicitly in the $\sigma^z$ basis as a superposition of all spin configurations, each with amplitude $(-1)^{N_{dw}}$, where $N_{dw}$ is the number of domain walls in the configuration. These non-trivial phase factors reflect the topological nature of the ground state.
The topological paramagnet (TPM) is to be contrasted with the better-known trivial paramagnet (TrPM) with exactly solvable model Hamiltonian \begin{align} H_0 = -\sum_s \Gamma_s \sigma_s^x, \label{Eq: H_0} \end{align} where $\Gamma_s$ are coupling constants. The ground state of the trivial paramagnet is clearly a simple product state which, in the $\sigma^z$ basis, corresponds to a uniform superposition of all spin configurations with amplitude $1$. The excitations in both models correspond to `spin flips' which are sites $s$ with either $B_s = -1$ or $\sigma^x_s = -1$, respectively. At the exactly solvable points, such spin flips are static and thus the highly excited eigenstates are already many-body localized analogous to the `caricature' states of the previous section. Absent disorder, this form of MBL is non-generic: any non-commuting perturbations to the model Hamiltonians induce dispersion of the spin flips, which in turn destroys many-body localization. For specificity, we add a ferromagnetic coupling term to make the spin excitations dynamical and consider Hamiltonians of the form: \begin{align} \tilde{H}_{0/LG} = H_{0/LG} - J \sum_{\langle s s' \rangle} \sigma_s^z \sigma_{s'}^z. \label{eq:Htilde} \end{align} For $J$ large enough, the ferromagnetic term drives a transition out of either paramagnet into a symmetry broken ferromagnetic phase. \subsection{Ergodicity and localization in highly excited states} Now include randomness in the couplings $\Lambda_s$ and $\Gamma_s$. For simplicity, keep $\Lambda_s, \Gamma_s > 0$ to preserve the exact ground state. In this regime, the individual spin flip manifold remains Anderson localized even with small `hopping' $J$. BAA arguments suggest that dilute gases of these weakly interacting point particles remain many-body localized. It is intuitively clear that both paramagnets continue into MBL versions at finite energy density as the defects that would destroy the SPT order are localized. 
In the following, we will consider the extension of various SPT diagnostics to finite energy density MBL states to substantiate this intuition. We first distinguish the MBL topological and trivial paramagnets from the extended thermal paramagnetic phase, and then turn to diagnostics that differentiate the two MBL paramagnets. The MBL paramagnets can be easily distinguished from their thermal counterparts at nonzero energy densities using the behavior of certain Wilson loops \footnote{The two paramagnets are smoothly connected when thermalized, so no distinction is necessary.}. Recall that in 2+1 dimensions magnetic systems with site variables are dual to gauge theories with bond variables \cite{RevModPhys.51.659}. The spin models $H_0/ H_{LG}$ are respectively dual to the perturbed toric-code (t.c.)/ doubled-semion (d.s.) $\mathbb{Z}_2$ gauge theories, with the t.c./d.s. theories restricted to a static matter sector. These dual gauge theories live on the honeycomb lattice; their topologically ordered deconfined phases map to the paramagnetic phases of the spin models, while their confined phase maps to the ferromagnetic phase. The doubled semion model is discussed in \cite{levin:045110}. As the dual models are pure gauge, their respective deconfined phases may be diagnosed by the celebrated perimeter law of equal-time Wilson loops. Each of the two deconfined phases has a (different) canonical Wilson loop which minimally probes the confinement of charges without further exciting the gauge sector \cite{levin:045110}. The Wilson loops of the dual gauge theories correspond to the following operators in the original spin variables $\sigma_s$: \begin{align} W_0[C] = \left\langle\prod_{s\in A[C]} \sigma_s^x \right\rangle \label{eq:Wilson_0}\\ W_{LG} [C] = \left\langle \prod_{s \in A[C]} B_s \right\rangle \label{eq:Wilson_LG} \end{align} where the product is over all sites $s$ lying within $A[C]$, the area enclosed by the curve $C$.
These Wilson loops exhibit the ``zero-law'' $W_{0/LG} [C] = 1$ exactly at the pure trivial/topological paramagnetic points. The zero-law continues to a perimeter law $W[C] \propto e^{- c |C|}$ on perturbing away from the exactly solvable points. On the other hand, the Wilson loops exhibit an area law $W[C] \propto e^{-c'|A[C]|}$ in the ferromagnetic phase. For clean, ergodic systems, both Wilson loops exhibit an area law at any finite temperature $T >0$. This reflects the presence of a finite density of delocalized vortex excitations in the dual gauge theories. The problem with disorder was recently discussed for the standard $\mathbb{Z}_2$ gauge theory by Huse et al.\ \cite{Huse:2013ys}. In the presence of sufficient randomness in the couplings of the dual gauge theory, there exists an MBL topologically ordered phase for the $\mathbb{Z}_2$ gauge theory at finite energy density. The excited MBL eigenstates have a finite density of localized vortices, whence the Wilson loop $W$ exhibits a ``spin-glass'' version of the perimeter law --- the magnitude of $W$ decays as the perimeter of $C$, but with a sign that depends on the number of localized vortices enclosed by $C$ \cite{Huse:2013ys}. An analogous story holds for the doubled-semion gauge theory as well. By duality, the MBL highly excited eigenstates of the trivial and topological paramagnets exhibit a spin-glass perimeter law for $W_0[C]$ and $W_{LG}[C]$ respectively. By contrast, excited eigenstates for the thermal paramagnet exhibit area laws for these quantities just as in the clean limit. Thus, a sharp distinction exists between the MBL and thermal phases for the two paramagnets, diagnosed by the behavior of the Wilson loop operator. We now turn to the question of diagnosing the two MBL paramagnets as distinct phases.
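As a minimal illustration of this spin-glass sign structure, consider the trivial paramagnet exactly at its solvable point: in any highly excited eigenstate the $\sigma_s^x$ are frozen to $\pm 1$, so $W_0[C]$ has unit magnitude and a sign counting the enclosed spin flips. The sketch below uses random illustrative flip positions standing in for the localized excitations of one eigenstate (toy data, not from any simulation):

```python
import random

def wilson_sign(flipped, region):
    """W_0[C] at the solvable TrPM point: the product of frozen
    sigma^x eigenvalues over the enclosed area A[C].  The magnitude
    is 1; the sign is (-1)^(number of enclosed spin flips)."""
    return (-1) ** len(flipped & region)

random.seed(1)
L = 20
sites = {(x, y) for x in range(L) for y in range(L)}
# Illustrative frozen excitations of one MBL-like eigenstate.
flipped = {s for s in sites if random.random() < 0.1}

# Nested loops C_r enclosing square areas A[C_r] of growing size:
# |W_0| stays 1 while the sign fluctuates with the enclosed flips.
for r in range(2, 11):
    region = {(x, y) for x in range(r) for y in range(r)}
    w = wilson_sign(flipped, region)
    assert abs(w) == 1
    print(r, w)
```

Away from the solvable point the magnitude would instead decay with the perimeter of $C$, leaving this eigenstate-dependent sign as the spin-glass fingerprint.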
One's first instinct might be to use the Wilson loops and, for the ideal Hamiltonians, they work: $W_0[C] = 1$ for the TrPM and vanishes for the TPM, while $W_{LG}[C] = 1$ for the TPM and vanishes for the TrPM. Unfortunately this does not hold more generally; both Wilson loops exhibit a perimeter law in both paramagnetic phases. Possibly the ``correct'' one is always dominant, but this is a topic for future work. Instead, let us consider other possible diagnostics to separate the MBL TrPM and TPM phases. (i) At $T=0$ in the ground state, the edge of the TPM must either be gapless or break the $\mathbb{Z}_2$ symmetry. (ii) If we gauge the models, the gauged TPM exhibits vortices with semionic statistics, which, (iii) in the presence of time reversal symmetry, bind Kramers doublets \cite{Zaletel:2013ud}\footnote{We thank M. Zaletel for bringing this to our attention.}. We expect each of these properties to extend to the MBL phase, as we explore below. The gaplessness of the symmetric edge is not a sharp diagnostic of the TPM, even at $T=0$, as already alluded to by Levin and Gu in the clean case. The edges can always spontaneously gap by breaking Ising symmetry for arbitrarily weak perturbations; of course, gapped symmetry-broken edges can also be present in the TrPM. With disorder at finite energy the situation is even worse --- the many-body spectrum is always gapless although local operators may exhibit a `mobility' gap in localized states. Thus, we might expect `mobility gaplessness' in the absence of symmetry breaking, but this is a delicate diagnostic at best. At $T=0$, Levin and Gu proposed a sharp distinction between the two paramagnets based on a different diagnostic. They coupled both paramagnets to a static gauge field and then considered the statistics of braiding $\pi$ flux vortex insertions. For the TrPM the statistics are bosonic while for the TPM they are semionic, as the gauged models are dual to the toric code and doubled semion theories, respectively.
In a putative MBL state, a slow physical process of inserting fluxes, braiding and annihilating them should accumulate the same semionic statistical phase (on top of a `spin glass'-like Aharonov-Bohm contribution from each of the encircled localized charges). The definition of `slow' is subtle as the many-body spectrum is gapless, but again we expect a local $O(1)$ mobility gap. The exact mathematical operators which characterize this process in the exactly solvable models do not have simple extensions to the general MBL state. If the gauged paramagnet additionally has time reversal symmetry, then each vortex of the TPM binds a Kramers doublet (the semion and the anti-semion states). This can be seen in the exactly solvable model by defining a local charge operator on an area $A$, $Q[A] = \prod_{p\in A} B_p$, gauging it and noting that the gauged $Q$ is time-reversal odd (even) if $A$ encloses an odd (even) number of vortices. This implies an exact degeneracy for the entire spectrum. On the other hand, the TrPM vortices are bosonic and do not bind Kramers doublets (the gauged charge operators are always time-reversal even in the exact model). The degeneracy lifts exponentially in the separation between the vortices on perturbing away from the exactly solvable point, and we expect this exponential degeneracy to persist into the MBL phase. The careful reader might note that the typical many-body level spacing for highly excited states is exponentially small in the system volume, and thus smaller than the separation between paired states. This is reminiscent of the paired MBL regime discussed by Huse et al.\ \cite{Huse:2013ys}, and the `paired' states share all their local properties unlike typical MBL eigenstates close in energy. A different but related diagnostic comes from measuring coherent `anyon oscillations' between the semion and anti-semion states in the localized background with a timescale set by their separation.
We leave the detailed mathematical understanding of these last questions as open problems for future work. Finally, we comment briefly on the requirement that there be a continuous path connecting the MBL phases of the TPM and the TrPM if Ising symmetry is broken along the path. Levin and Gu explicitly construct a local Ising symmetry-breaking unitary operator $U(\theta)$ which transforms $H_0$ into $H_{LG}$ (with $\Lambda_s = \Gamma_s$) along a path in Hamiltonian space parameterized by the continuous variable $\theta$; the same unitary can also be used for random couplings $\Lambda_s$. The many-body energy spectrum, and hence the level statistics, of $H(\theta)$ are identical everywhere along the path, which strongly indicates the absence of an MBL-to-ergodic phase transition, in accordance with work done by Huse et al.\ \cite{Oganesyan:2007uq}. More strongly, each localized excited eigenstate of $H_0$ continues to a localized eigenstate of $H(\theta)$ under the action of the local unitary, and there is a continuous mapping between MBL eigenstates everywhere along the path. This is to be contrasted with the eigenstate phase transition that we expect between the TPM and TrPM highly excited eigenstates when Ising symmetry is preserved. \section{Concluding remarks} Traditionally, the destruction of order and the proliferation of defects are closely intertwined in statistical mechanics. This has previously led to the idea that the localization of defects can improve order, e.g. in the case of superconductors in a magnetic field \cite{Huse:1992dz} and the quantum Hall effect away from the center of the plateau \cite{Jain:2007fk}. The work of Huse et al.\ has generated the interesting possibility that this mechanism can also operate in many-body localized quantum systems where statistical mechanics does not apply even for highly excited eigenstates.
In this setting the sought-after order has to be identified for individual many-body eigenstates and has a ``spin glass'' form or at least a spin glass component which is eigenstate specific. In this paper we have considered whether SPT order can exist in highly excited eigenstates in the MBL setting by examining two specific models. In both cases it is not hard to see that thermal states differ qualitatively from the ground states exhibiting SPT order, while MBL states qualitatively resemble the ground states thanks to the localization of defects. This is strong evidence for the existence of an eigenstate phase transition that must separate the trivial and SPT regions at nonzero energy density. For the case of the Haldane phase in $d=1$ we are able to go further and argue that highly excited MBL eigenstates in the SPT region can be directly distinguished from highly excited MBL eigenstates in the topologically trivial region. For the topological Ising paramagnet in $d=2$ this last step still needs to be taken. In both cases we have argued the absence of an eigenstate phase transition separating the regions when the protecting symmetry is allowed to be broken. Evidently it would be interesting to extend this investigation to the larger zoo of SPT phases identified in recent work, including in $d=3$ where SPT order can presumably survive to non-zero temperatures when the disordering defects have the topology of vortex lines. One immediate restriction suggested by our analysis is that we found it necessary to protect the Haldane phase via a discrete symmetry to invoke MBL. If that restriction is fundamental, it may be that SPT order is strengthened by MBL only if the protecting symmetry is discrete. \section{Acknowledgements} We thank David Huse for very useful discussions on the question of many-body localization of SPT phases with continuous symmetries, and for his valuable comments on a draft of this article.
We would also like to acknowledge him and Vadim Oganesyan more broadly for multiple enlightening discussions on MBL over several years. We also thank M. Zaletel for drawing our attention to the effects of time reversal symmetry on the TPM phase. SLS would also like to acknowledge these gentlemen along with Rahul Nandkishore and Arijeet Pal for previous collaborative work that served as the inspiration for this project and Ehud Altman for a related discussion. We acknowledge support by NSF Grant Numbers DMR 10-06608, PHY-1005429 (AC, VK and SLS), the John Templeton Foundation (SLS), the Lawrence Golub Fellowship (CRL), the NSF through a grant for ITAMP at Harvard University (CRL) and the Perimeter Institute for Theoretical Physics (AC and CRL). Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. \begin{appendix} \section{Entanglement spectrum of dihedral symmetric MPS without translational symmetry} \label{Sec:AppES} In Ref.~\onlinecite{Pollmann:2010ih}, Pollmann and co-authors demonstrated that the entanglement spectrum of a spatial cut diagnoses the two dihedral symmetric translationally invariant phases of integer spins in one dimension. In the topological/Haldane phase, they showed that the entanglement spectrum is exactly doubly degenerate in the thermodynamic limit, while in the trivial phase, it is not. The two phases persist in the absence of translational invariance. In this appendix, we show that the classification of the entanglement spectrum of the MPS also holds without translational symmetry. Our approach and notation closely follow those in Ref.~\onlinecite{Pollmann:2010ih}. Consider an open chain of a spin system with integer spin $S$ in the thermodynamic limit. Let the wavefunction of the system have an MPS representation (as is the case for the ground state of the clean system or the highly excited MBL states in the dirty system).
The canonical form of such an MPS in the standard pictorial notation is: \begin{align} \label{Eq:MPSrep} \includegraphics{fig-mps-defn.pdf} \end{align} where $i$ is the site label, $\sigma_i$ is the physical spin index taking values $-S, -S+1, \ldots S$, $\Gamma_i$ is a matrix of dimension $\chi$ and $\Lambda_i$ is a real, diagonal matrix, also of dimension $\chi$, with non-negative values. $\chi$ is interpreted as the dimension of the virtual spins that make up the spin $S$. \footnote{The proof may easily be extended to site-dependent $\chi$ ($\tilde{\chi}_i$). Then, the $\chi$ defined in the text is $\chi = \textrm{max} \tilde{\chi}_i$.} For a more detailed introduction to MPS, see \cite{Schollwock:2011qf, Verstraete:2008ko}. An important property of the canonical representation is that the transfer matrix at site $i$, defined as the tensor in the dashed box below, has a unique left (and right) eigenvector of eigenvalue one: \begin{align} \label{Eq:TMatEig} \includegraphics{fig-mps-tmat-eig.pdf} \end{align} The diagonal elements of $\Lambda_i$ are the Schmidt numbers for a spatial cut between bonds $i$ and $i+1$; the entanglement energies are the negative logarithms of these diagonal elements. Properties of the entanglement spectrum therefore follow from the structure of $\Lambda_i, \Gamma_i$. To prove that there is a two-fold classification of the entanglement spectrum, we proceed as follows: \begin{enumerate} \item Identify the action of the dihedral symmetry on the physical spins as a site-dependent gauge transformation of the virtual spins \item Show that the gauge transformation is the identity up to a site-dependent phase \item Determine that the smallest irreducible representation for the gauge transformation is either of dimension one, or two \end{enumerate} When the smallest irreducible representation has dimension two, the Schmidt values are forced to come in degenerate pairs and the entanglement spectrum is doubly degenerate. 
When the dimension is one, there is no constraint on the entanglement spectrum. This then is the required classification. We now go through the steps in turn. Consider the action of the dihedral group on the state $\ket{\Psi}$. The matrix $\Gamma_i$ in the MPS representation in Eq.~\eqref{Eq:MPSrep} becomes: \begin{align} \tilde{\Gamma}_i^{\sigma} = (R_i^\alpha)^{\sigma\sigma'} \Gamma_i^{\sigma'}, \, \alpha=x,y,z \end{align} By definition, under the action of $\prod_i R_i^x$, $\prod_i R_i^y$ and $\prod_i R_i^z$, the given state goes back to itself, up to boundary effects that are not relevant in the thermodynamic limit. Thus, $\tilde{\Gamma}$ should be related to $\Gamma$ by a gauge transformation: \begin{align} \tilde{\Gamma}_i^{\sigma} = e^{i\theta_{i}^\alpha} (U^\alpha_{i-1})^\dagger \Gamma_i^{\sigma} U^\alpha_{i}, \end{align} where $U_i^\alpha$ is a unitary matrix commuting with $\Lambda_i$ and $\theta_i^\alpha$ is real. Physically, the $U$ matrices implement the action of the symmetry on the virtual spins. They form a $\chi$-dimensional projective representation of the symmetry group of the wave function $\ket{\psi}$. Note that the MPS with matrices ($\tilde{\Gamma}_i^{\sigma}, \Lambda_i$) is also in the canonical representation. As the dihedral operators square to identity, another action of the dihedral group provides a relation for $\Gamma_i$: \begin{align} \label{Eq:GammaRel} \Gamma_i^{\sigma} = e^{i2\theta_{i}^\alpha}\, (U^\alpha_{i-1})^\dagger (U^\alpha_{i-1})^\dagger \,\Gamma_i^{\sigma} \, U^\alpha_{i} U^\alpha_{i}. \end{align} Substituting Eq.~\eqref{Eq:GammaRel} in Eq.~\eqref{Eq:TMatEig}, it is easily seen that: \begin{align} \label{Eq:TmatEigVec} \includegraphics[width=8cm]{fig-mps-utransfer.pdf} \end{align} The input left vector to the transfer matrix and the output vector are different. However, the norm of both vectors is $\chi$, equal to the norm of the unimodular eigenvector.
As the transfer matrix has a unique unimodular eigenvector, both vectors have to be proportional to the identity eigenvector in Eq.~\eqref{Eq:TMatEig} up to a phase. Thus, \begin{align} (U_{i-1}^{\alpha\dagger})^2 = e^{i\phi_{i-1}^\alpha} \mathbf{1} \end{align} This gets us to the second step in the list above. Further, as the eigenvalue is one, we obtain a relationship between $\theta^\alpha_i$, $\phi^\alpha_i$ and $\phi^\alpha_{i-1}$. Finally, after a few steps of algebra, we find that: \begin{align} (U_{i}^x)^\dagger (U_{i}^z)^\dagger &= \kappa (U_{i}^z)^\dagger(U_{i}^x)^\dagger \\ \kappa &= \pm 1 \end{align} That is, on every site $i$, $U_i^x$ and $U_i^z$ either commute or anti-commute. If $U_i^x$ and $U_i^z$ commute (anti-commute), the smallest irreducible representation has dimension one (two). Up to accidental degeneracies, $U_i^x, U_i^z$ can then be expressed as direct sums of matrices with dimension one (two). Recall however that $U_i^\alpha$ and the diagonal matrix with the Schmidt numbers, $\Lambda_i$, commute. Thus, in the former case, there is no constraint on the entanglement spectrum, while in the latter, the entire entanglement spectrum (ES) has to be doubly degenerate. In the ground states of the clean/disordered A/BKLT chains, $\kappa=-1$ and the ES is two-fold degenerate. Consider now the fuzzy Fock states defined below Eq.~\eqref{Eq:AndersonTrip} using the localized single triplon wavefunctions, $\ket{\Psi} = t_{\psi_1}^{\alpha_1}t_{\psi_2}^{\alpha_2}\cdots t_{\psi_N}^{\alpha_N}\ket{A}$. In the extremely dilute limit, pick a bond $m$ where the weight of all the single triplon states occupied in $\ket{\Psi}$ is small. The local action of the dihedral group on this bond is the same as in the ground state and $\kappa=-1$ on this bond. As $\kappa$ is site-independent, $U_i^x$ and $U_i^z$ anti-commute for all $i$ and the entanglement spectrum will be doubly degenerate for any spatial cut.
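The step from anticommuting $U_i^x$, $U_i^z$ to pairwise degenerate Schmidt values can be checked in the smallest nontrivial case. The sketch below assumes the two-dimensional anticommuting representation $U^x=\sigma^x$, $U^z=\sigma^z$ (an illustrative choice; the helper names are ours) and verifies that a diagonal $\Lambda$ commuting with both must be proportional to the identity:

```python
def matmul(A, B):
    """2x2 matrix product (plain lists, no dependencies)."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

Ux = [[0, 1], [1, 0]]    # sigma^x
Uz = [[1, 0], [0, -1]]   # sigma^z

# kappa = -1: the two symmetry matrices anticommute.
UxUz, UzUx = matmul(Ux, Uz), matmul(Uz, Ux)
assert all(UxUz[i][j] == -UzUx[i][j] for i in range(2) for j in range(2))

def Lam(a, b):
    """Diagonal matrix of Schmidt values on a virtual bond."""
    return [[a, 0], [0, b]]

zero = [[0, 0], [0, 0]]
# Any diagonal Lambda commutes with Uz, but commuting with Ux as well
# forces its two Schmidt values to be equal: within each such
# two-dimensional block the entanglement spectrum is doubly degenerate.
assert commutator(Lam(0.7, 0.3), Ux) != zero
assert commutator(Lam(0.5, 0.5), Ux) == zero
```

Since $U_i^x$ and $U_i^z$ decompose into direct sums of such blocks when $\kappa=-1$, the same conclusion holds for the full spectrum.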
Thus, these approximate MBL states have the topological order of the Haldane phase. \end{appendix}
\section{Introduction} Radiation feedback from stars and black holes is a ubiquitous physical process important for understanding the evolution of galaxies and the growth of black holes (BHs). Radiation-driven galactic winds may generate powerful gas outflows regulating the co-evolution of galaxies and the supermassive black holes at their centers \citep{King:03, Proga:00, DiMatteo:05, MurrayQT:05, MurrayMT:11, HopkinsQM:12}. Rayleigh–Taylor instability (RTI) can occur in these winds because a high-density shell is supported by a low-density fluid against a gravitational field or because the shell is accelerating outward \citep[{\it e.g.},][]{Chandrasekhar:61, StoneG:07}. Renewed interest in the growth of the RTI has followed these studies because the fragmentation of dense galactic supershells, accelerated by thermal or radiation pressure, can significantly reduce the wind power, making this feedback loop less effective \citep{KrumholzT:12, JiangDS:13}. This problem has also prevented a definitive identification of the driving mechanisms for winds observed in a variety of systems \citep[{\it e.g.},][]{FaucherQ:12}. Among various stabilization mechanisms of the RTI, the presence of a D-type (density) ionization front (I-front) has recently been considered \citep{Parketal:13}. That I-fronts are stable to longitudinal perturbations has long been known \citep{Kahn:58, Axford:64}. The stabilizing mechanism is the gas opacity to ionizing radiation, which is typically due to hydrogen recombinations in the {\ion{H}{ii} } region. Let's consider a perturbation at the I-front displacing the front further from the ionizing source. The resulting increase of the column density of ionized gas between the radiation source and the I-front increases the gas opacity to ionizing radiation (due to the increased number of recombinations per unit time along the ray), and thus decreases the ionizing flux at the I-front, acting as a restoring mechanism for the perturbation.
Similarly, for perturbations displacing the I-front toward the ionizing source, the increase of the ionizing flux at the I-front acts as a restoring mechanism for the perturbation. Based on a simple dimensional analysis, it is therefore expected that the growth rate of the RTI is reduced for perturbations with wavenumber $k>k_{cr} \sim \nu_{\rm rec}^2/g$. Here, $\nu_{\rm rec}$ is the hydrogen recombination rate inside the {\ion{H}{ii} } region and $g$ is the effective acceleration in the frame of reference of the I-front. However, a formal perturbation analysis of the I-front is necessary to derive the linear growth rate of the RTI including the effect of recombinations. This paper presents a linear stability analysis of accelerating I-fronts and I-fronts in an external gravitational field. As far as we know, this is the first published analytical perturbation analysis of accelerating I-fronts in which the RTI growth rate is derived including the stabilizing effect of the gas opacity to ionizing radiation ({\it i.e.}, recombinations). This work builds on previous pioneering analytical studies, dating back about 50 years, on the stability of non-accelerating I-fronts. A brief historical perspective is useful to summarize what is already known on the stability of I-fronts. \cite{Vandervoort:62} was the first to present a quantitative perturbation analysis of a non-accelerating plane I-front subject to appropriate jump and boundary conditions, assuming that the gas is compressible and isothermal. For simplicity the author derived the growth rate of the front perturbations in the limiting case when the temperature of the neutral gas is negligible with respect to the temperature of the ionized gas. The author considered the general case of incident radiation on the I-front with arbitrary inclination, but neglected hydrogen recombinations inside the {\ion{H}{ii} } region. With these assumptions weak-D and D-critical type fronts were found to be unstable.
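To give a feel for the numbers entering the dimensional estimate $k_{cr} \sim \nu_{\rm rec}^2/g$ above, the short sketch below evaluates it for fiducial values of the electron density, case-B recombination coefficient and effective acceleration (these values are illustrative assumptions for the example, not taken from the text):

```python
import math

# Illustrative fiducial values (assumptions for this sketch only).
n_e = 10.0          # electron density in the HII region [cm^-3]
alpha_B = 2.6e-13   # case-B recombination coefficient at ~10^4 K [cm^3 s^-1]
g = 1e-8            # effective acceleration at the I-front [cm s^-2]

nu_rec = n_e * alpha_B           # hydrogen recombination rate [s^-1]
k_cr = nu_rec ** 2 / g           # critical wavenumber k_cr ~ nu_rec^2 / g [cm^-1]
lambda_cr = 2 * math.pi / k_cr   # corresponding critical wavelength [cm]

print(f"nu_rec    = {nu_rec:.2e} s^-1")
print(f"k_cr      = {k_cr:.2e} cm^-1")
print(f"lambda_cr = {lambda_cr:.2e} cm")
```

Since $k_{cr}$ scales as $\nu_{\rm rec}^2 \propto n_e^2$, the affected range of wavenumbers is very sensitive to the density of the ionized gas.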
\cite{Kahn:58} had previously argued that absorption of the incoming radiation by neutral hydrogen atoms produced by recombinations in the ablation outflow could have a stabilizing effect because of the stronger absorption near the dimples of the front surface compared to the bumps. \cite{Axford:64}, building on the work of \cite{Vandervoort:62} but assuming incident radiation perpendicular to the I-front, presented a quantitative study that showed that this stabilization mechanism is effective for perturbations with wavelengths larger than the recombination length. \cite{Newman:67} extended this analysis to strong D-type and D-critical fronts. \cite{Saaf:66} extended previous analyses by relaxing the simplifying assumption of a neutral gas much colder than the ionized gas and found a stronger suppression of the instabilities for the short-wavelength solutions previously found unstable. \cite{Sysoev:97} provided a more complete analysis and found numerical solutions in which modes that are marginally stable in the limit of cold neutral gas become unstable at longer wavelengths for normally incident radiation. \cite{Giuliani:79} and \cite{Williams:02} presented analytical calculations and numerical simulations for the case of inclined incident radiation on the front showing that, in the limit of neutral gas much colder than the ionized gas, essentially all wavelength modes are unstable. There is less work considering the RTI at accelerating I-fronts, and it is entirely based on numerical simulations. \cite{Mizuta:05, Mizuta:07} studied the hydrodynamic instability of accelerating I-fronts using two-dimensional hydrodynamic simulations in which recombinations are either turned off or included. The authors conclude that the RTI can only grow when recombinations are turned off. In a series of papers, \cite{ParkR:11, ParkR:12, ParkR:13} focused on the problem of gas accretion onto BHs from galactic scales regulated by radiation feedback.
In these simulations the ionizing radiation emitted by the BH produces a hot ionized bubble preceded by a denser neutral shell centered around the BH. The formation of the {\ion{H}{ii} } region temporarily stops the feeding of the BH. However, the gas density inside the ionized bubble decreases with time and eventually the dense shell collapses onto the BH producing a burst of accretion and luminosity. These events repeat cyclically. The D-type I-front should be unstable to RTI, with perturbations at the smallest scale resolved in the simulations growing on timescales shorter than the period between two bursts, but instead the front appears remarkably stable at all scales. \cite{Parketal:13} have shown that the growth of the RTI at the I-front is stabilized by recombinations in the ionized gas. The RTI can grow only during a short phase of the cycle when the I-front is accelerating during a BH luminosity burst. The authors speculate that the RTI is suppressed for perturbations with wavenumber $k>k_{cr}$; however, a derivation of the growth rate of the RTI at I-fronts is not present in the literature. The goal of this paper is to fill the gap and provide such a derivation. This paper is organized as follows. In \S~\ref{sec:lsa} a perturbation analysis of an accelerating I-front is presented, and the characteristic equation for the growth rate of longitudinal perturbations is derived. In \S~\ref{sec:res} analytical solutions of the characteristic equation in the relevant limits are presented, providing the dispersion relation for unstable surface modes. A summary of the results and a brief discussion of astrophysical applications is presented in \S~\ref{sec:conc}. \section{Linear Stability Analysis}\label{sec:lsa} Let's consider a plane-parallel I-front and work in the frame of reference comoving with the I-front.
The formalism adopted in this paper follows closely the one by \cite{Vandervoort:62, Axford:64} for non-accelerating I-fronts with two important modifications: (i) terms describing the acceleration ${\bf g}\equiv g {\bf n}_{0}$ in the frame of reference of the I-front are introduced, where ${\bf n}_{0}$ is the unit vector normal to the unperturbed front; (ii) an incompressible equation of state for the gas is assumed (${\bf \nabla \cdot u}=0$). The second assumption is justified for the sake of simplicity, given the increased complexity of the equations due to the additional terms arising because of the front acceleration. This approximation also allows us to easily check the results against the well-known growth rate of the incompressible RTI ({\it i.e.}, $n=\sqrt{gk}$). The methodology of the analysis can be summarized as follows. The governing hydrodynamic equations are linearized to first order for the neutral (quantities with subscripts ``1'') and ionized gas (quantities with subscripts ``2'') separately. These equations can be solved to derive the pressure perturbations as a function of the velocity and density perturbations. We consider perturbations on the surface of the I-front in the form ${\bf \xi}={\bf n}_{0} \xi$, where $\xi=\xi_0 \exp(nt+ikx)$ and $\xi_0$ is the amplitude of the front deformation. The unperturbed ($U$ velocity, $\rho$ density and $P$ pressure) and perturbed ($u$ velocity, $\delta \rho$ density and $\delta P$ pressure) physical quantities must obey the jump conditions at the I-front derived from mass and momentum conservation, and the perturbed quantities must vanish at large distance from the front ({\it i.e.}, must obey the boundary conditions).
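For reference, the classical incompressible RTI growth rate quoted above can be written with the Atwood number, reducing to $n=\sqrt{gk}$ when the supporting light fluid has negligible density (a standard textbook result, sketched here as a quick sanity check; the function name is illustrative):

```python
import math

def rti_growth_rate(g, k, rho_heavy, rho_light):
    """Classical incompressible RTI growth rate n = sqrt(A g k),
    with Atwood number A = (rho_h - rho_l) / (rho_h + rho_l)."""
    atwood = (rho_heavy - rho_light) / (rho_heavy + rho_light)
    return math.sqrt(atwood * g * k)

g, k = 10.0, 3.0
# In the limit rho_light -> 0, A -> 1 and n -> sqrt(g k),
# the limiting growth rate quoted in the text.
n_limit = rti_growth_rate(g, k, 1.0, 1e-12)
assert abs(n_limit - math.sqrt(g * k)) < 1e-6
```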
Imposing these jump/boundary conditions results in a system of four linear equations in four unknowns: $u_1$, $u_2$, $\delta \rho_2$ and $\xi_0$ (it will be shown later that $\delta \rho_1=0$ in order to obey the boundary conditions at $z \rightarrow \infty$ and $\delta P$ can be expressed as a function of $u$ and $\delta \rho$). By setting the determinant of the linear system to zero (to ensure non-trivial solutions), the characteristic equation for the growth rate of the perturbations, $n$, is derived. Positive real solutions of the characteristic equation give the unstable growing modes of the system in the linear regime. \subsection{Governing Equations} A moving coordinate frame is chosen so that the I-front appears stationary, with the z-axis perpendicular to the front and photons incident from $z<0$ onto the I-front placed at $z=0$ in the unperturbed state. The region $z>0$ contains the neutral gas and the region $z<0$ the ionized gas. The unperturbed velocity, density and pressure in the neutral (quantities with subscript ``1'') and ionized regions (subscript ``2'') are the constants $U_1$, $U_2$, $\rho_1$, $\rho_2$, $P_1$, $P_2$, respectively. An incident radiation field $J$ directed along the z-axis is assumed. Given the symmetry of the system we can consider longitudinal perturbations of the I-front aligned along the x-axis without loss of generality, making the problem two-dimensional. Perturbations of the velocity (${\bf U} \rightarrow {\bf U}+{\bf u}$), density ($\rho \rightarrow \rho+\delta \rho$), and pressure ($P \rightarrow P+\delta P$) fields will be considered on each side of the front. The following governing equations will allow us to write $\delta P$ as a function of the other perturbed quantities.
The linearized equations of continuity and motion for an inviscid gas with an external or fictitious acceleration ${\bf g}$ in the z-direction ({\it i.e.}, orthogonal to the unperturbed I-front) pointing toward the ionized gas, are \begin{align} \left(\frac{\partial}{\partial t}+U\frac{\partial}{\partial z}\right)\delta \rho+\rho(\nabla \cdot {\bf u})+{\bf u} \cdot \nabla \rho &= 0,\label{eq:euler1}\\ \rho \left(\frac{\partial}{\partial t}+U\frac{\partial}{\partial z}\right){\bf u} &= -\nabla \delta P - \delta \rho g {\bf n}_{0} . \label{eq:euler2} \end{align} As usual, solutions of these equations are sought in the form $A(x,z,t)=A_s(z)\exp{(nt+ i k x)}$, where $A$ is the perturbed quantity, $n$ is the growth rate and $k$ is the wavenumber of the perturbations along the x-axis. Equation~(\ref{eq:euler1}), assuming incompressible gas (${\bf \nabla \cdot u}=0$), can be written as \begin{align} (n+UD)\delta \rho=-u D \rho &= 0,\label{eq:euler1a}\\ D u+ik u_x &= 0, \label{eq:euler1b} \end{align} where $u$ and $u_x$ are the z- and x-components of the perturbed velocity, respectively. The abbreviation $D \equiv \partial/\partial z$ has been adopted, and the last equality in Equation~(\ref{eq:euler1a}) holds because the unperturbed density is uniform on each side of the front ($D\rho=0$). Similarly, Equation~(\ref{eq:euler2}) becomes \begin{align} \rho(n+UD)u &= -D \delta P -\delta \rho g,\label{eq:euler2a}\\ \rho n u_x &= -ik \delta P. \label{eq:euler2b} \end{align} Combining Equations~(\ref{eq:euler1b}) and (\ref{eq:euler2b}) an expression for the pressure perturbation is easily obtained: \begin{equation} \delta P= -\frac{\rho n Du}{k^2}.\label{eq:dp} \end{equation} Equations~(\ref{eq:euler1a}) and (\ref{eq:euler2a}) must satisfy the boundary conditions $\delta \rho \rightarrow 0$ and $u \rightarrow 0$ as $|z| \rightarrow \infty$. Since $U<0$ and unstable solutions with real part of $n$ positive are sought, the solution for the perturbed density is \begin{equation} \delta \rho= \begin{cases} 0 & \mbox{if }z>0,\\ \delta \rho_2 \exp\left(-\frac{n}{U_2}z\right) \exp{(nt+ikx)} & \mbox{if } z<0.
\end{cases} \label{eq:drho} \end{equation} Hence, the assumption of incompressible gas leads to perturbations of the density in the ionized component while the neutral gas density remains unperturbed ($\delta \rho_1=0$). Note that this is not the case when an isothermal gas is considered instead. Finally, Equation~(\ref{eq:euler2a}) can be solved in the neutral and ionized gas separately. For $z>0$ (neutral gas) where $\delta \rho=0$, the term proportional to $g$ vanishes, thus the governing equation is \begin{equation} D^2u - \frac{k^2U_1}{n}Du - k^2 u=0. \end{equation} The general solution of this equation is $u=u_1\exp{(p_1 z)}\exp{(nt+ikx)}$, where \begin{equation} p_1=-\frac{k^2 U_1}{2n}\left(-1-\sqrt{1+\left(\frac{2n}{kU_1}\right)^2}\right) =-\epsilon\frac{k}{\tilde{n}}f_1(\tilde{n},\epsilon). \end{equation} Here, since $U_2<0$, the dimensionless growth rate $\tilde{n} \equiv -n/k U_2$ has the same sign as $n$, hence $p_1 \le 0$ for unstable modes ({\it i.e.}, $\tilde{n}>0$). The parameter $\epsilon \equiv U_1/U_2 = \rho_2/\rho_1 \equiv \delta^{-1}$ is the inverse of the density contrast between the neutral and ionized gas, and $f_1(\tilde{n},\epsilon) \equiv [1+(1+4(\tilde{n}/\epsilon)^2)^{1/2}]/2$. In the limit $\tilde{n} \gg \epsilon$, {\it i.e.}, for large density contrasts $\delta \gg 1$, $f_1 \rightarrow \tilde{n}/\epsilon$ (thus, $p_1=-k$). In the limit $\tilde{n} \ll \epsilon$, $f_1 \rightarrow 1$. Thus, from Equation~(\ref{eq:dp}) the pressure perturbation in the neutral gas is $\delta P=\delta P_1 \exp{(nt+ikx)}$ with \begin{equation} \delta P_1=-\frac{n p_1 \rho_1 u_1}{k^2}= -\rho_1 U_1 u_1 f_1(\tilde{n},\epsilon).
\label{eq:dp1} \end{equation} For $z<0$ (ionized gas) where $\delta \rho \not= 0$, the governing equation is \begin{equation} D^2u - \frac{k^2U_2}{n}Du - k^2 u - \frac{gk^2}{n} \frac{\delta \rho_2}{\rho_2}\exp\left(-\frac{n}{U_2}z\right)=0. \end{equation} The general solution of this equation is $u=u(z)\exp{(nt+ikx)}$ with $u(z)=g_2 \exp{(p_2 z)}+\delta \rho_2 g/(\rho_2 n \tilde{n}^2)\exp{(-nz/U_2)}$, where \begin{equation} p_2=-\frac{k^2 U_2}{2n}\left(-1+\sqrt{1+\left(\frac{2n}{k U_2}\right)^2}\right)=\frac{k}{\tilde{n}}f_2(\tilde{n}). \end{equation} Here, $f_2(\tilde{n})\equiv [-1+(1+4\tilde{n}^2)^{1/2}]/2$, and $p_2 \ge 0$ for unstable modes with $\tilde{n}>0$. In the limit $\tilde{n} \gg 1$ ({\it i.e.}, when $n \gg k|U_2|$, always valid for unstable modes when $U_2 \rightarrow 0$), $f_2 \rightarrow \tilde{n}$ ($p_2=k$). In the limit $\tilde{n} \ll 1$, $f_2 \rightarrow \tilde{n}^2$. The constant $g_2$ can be expressed in terms of $u_2=u(z=0)$ as $g_2=u_2-\delta \rho_2g/(\rho_2 n \tilde{n}^2)$ and using Equation~(\ref{eq:dp}) the pressure perturbation in the ionized gas is $\delta P=\delta P_2 \exp{(nt+ikx)}$ with \begin{equation} \delta P_2=\rho_2 U_2 \left[u_2 f_2(\tilde{n})+ \frac{g}{n}\frac{\delta \rho_2}{\rho_2}\left(1-\frac{f_2(\tilde{n})}{2\tilde{n}^2}\right)\right]. \label{eq:dp2} \end{equation} \subsection{Jump Conditions at the Front} Four additional equations are needed to close the system as the normalization constants $u_1$, $u_2$, $\delta \rho_2$ and $\xi_0$ are still undetermined. The equations that can be used to close the system are the jump conditions at the I-front: two momentum conservation equations (for the x and z directions) and two mass conservation equations for the neutral and ionized gas. The momentum conservation jump condition for the unperturbed front is, \begin{equation} \Delta[\rho U^2 + P]=0, \label{eq:jump0a} \end{equation} where the notation $\Delta[A] \equiv A_1 -A_2$ is adopted.
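Before proceeding, the expressions for $p_1$ and $p_2$ derived above can be spot-checked numerically: both must be roots of the characteristic quadratic $p^2-(k^2U/n)\,p-k^2=0$ of the governing equation on the respective side of the front, with the signs required by the boundary conditions. A minimal check (the parameter values are illustrative assumptions):

```python
import numpy as np

# illustrative parameters obeying the paper's sign conventions: U1, U2 < 0, n > 0
k, n, U2, eps = 1.0, 0.7, -1.0, 0.1     # eps = U1/U2 = rho2/rho1
U1 = eps * U2
nt = -n / (k * U2)                      # dimensionless growth rate (here 0.7 > 0)

f1 = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * (nt / eps)**2))
f2 = 0.5 * (-1.0 + np.sqrt(1.0 + 4.0 * nt**2))
p1 = -eps * (k / nt) * f1               # neutral side: must decay for z > 0
p2 = (k / nt) * f2                      # ionized side: must decay for z < 0

def quad(p, U):
    """Characteristic polynomial of D^2 u - (k^2 U / n) D u - k^2 u = 0."""
    return p**2 - (k**2 * U / n) * p - k**2

print(quad(p1, U1), quad(p2, U2))       # both residuals ~ 0
print(p1 < 0 < p2)                      # True: correct decaying branches
```

Both residuals vanish to machine precision, and the signs $p_1<0<p_2$ confirm that the selected branches decay away from the front on each side.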
The mass conservation equation for the unperturbed quantities is: \begin{equation} \rho_1 U_1=\rho_2 U_2 = -\mu J_0, \label{eq:jump0b} \end{equation} where $J_0$ is the number of ionizing photons that reach the unperturbed I-front per unit area and time, and $\mu$ is the mean molecular mass. Let's consider, as mentioned above, the deformation of the ionization front of the form ${\bf \xi}={\bf n}_{0} \xi_0 \exp(nt+ikx)$, representing the displacement of the front with respect to the $z=0$ steady state position. The perturbation of the velocity of the front is ${\bf \delta V} \equiv \partial {\bf \xi}/\partial t=n {\bf \xi}$, and the unit vector normal to the front is ${\bf n}={\bf n}_{0}+{\bf \delta n}$, where ${\bf \delta n}= - \nabla {\bf \xi} = -i {\bf k} {\bf \xi}$. The flux at the I-front is also perturbed due to the front deformation and the density perturbations in the ionized gas: $J(\xi)=J_0+\delta J(\xi)\exp{(nt+ikx)}$. It is important to consider the effects of absorption of ionizing radiation in the {\ion{H}{ii} } region, as it is known to stabilize the perturbations. The absorption of ionizing photons is described by the equation \begin{equation} \frac{dJ}{dz}= - \nu_{\rm rec} \frac{\rho_2}{\mu},\label{eq:j0} \end{equation} where $\nu_{\rm rec} \equiv \rho_2 \alpha^{(2)}/\mu$ is the hydrogen recombination rate in the ionized region, with $\alpha^{(2)} \approx 2.6 \times 10^{-13}{\rm cm}^3{\rm s}^{-1} (T/10^4~K)^{-0.8}$ \citep{Spitzer:62}.
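For a sense of the scales involved, the recombination rate $\nu_{\rm rec}$ can be evaluated for fiducial {\ion{H}{ii} }-region conditions. In the sketch below the density and temperature are assumed for illustration (they are not values from the paper); only the coefficient $\alpha^{(2)}$ and its temperature scaling come from the text:

```python
# nu_rec = (rho_2/mu) * alpha^(2) = n_e * alpha^(2), in cgs units
alpha2_1e4 = 2.6e-13                     # cm^3 s^-1 at T = 1e4 K (value quoted in the text)
T = 2.0e4                                # K, assumed temperature
n_e = 1.0e2                              # cm^-3, assumed ionized gas density

alpha2 = alpha2_1e4 * (T / 1.0e4)**-0.8  # temperature scaling from the text
nu_rec = n_e * alpha2                    # recombination rate in s^-1
t_rec_yr = 1.0 / nu_rec / 3.156e7        # recombination timescale in years
print(nu_rec, t_rec_yr)                  # ~ 1.5e-11 s^-1, ~ 2e3 yr
```

The resulting recombination timescale of a few thousand years sets the clock against which the front acceleration competes in the results below.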
Allowing for perturbations of the front position and density in the ionized gas, Equation~(\ref{eq:j0}) can be integrated to give \begin{equation} \delta J(\xi) = - \nu_{\rm rec} \frac{\rho_2}{\mu} \left(\xi_0 + 2\int_{-\infty}^0 \frac{\delta \rho_2}{\rho_2} dz\right)+O(\delta \rho_2^2).\label{eq:dj} \end{equation} The first term on the right hand side of Equation~(\ref{eq:dj}) describes the front stabilizing mechanism proposed by \cite{Kahn:58}, while it is not clear a priori whether the second term due to perturbations of the density in the ionized gas is stabilizing or destabilizing (it will be found to be a stabilizing term). The linearized perturbed Equations~(\ref{eq:jump0a}) and (\ref{eq:jump0b}) are \begin{align} \Delta[\rho({\bf \delta n} \cdot {\bf U}){\bf U}]+\Delta[\delta \rho({\bf n}_{0} \cdot {\bf U}){\bf U}]+\Delta[\rho {\bf n}_{0} \cdot ({\bf u}-{\bf \delta V}){\bf U}] + \Delta[\rho({\bf n}_{0} \cdot {\bf U})({\bf u}-{\bf \delta V})] & \nonumber\\ {}+ {\bf \delta n}\Delta[P]+{\bf n}_{0}\Delta(\delta P)+{\bf n}_{0} g \int_{2}^{1} dz \delta \rho &= 0,\label{eq:jump1b}\\ \rho ({\bf \delta n} \cdot {\bf U})+\delta \rho {\bf n}_{0} \cdot {\bf U} +\rho{\bf n}_{0} \cdot ({\bf u}-{\bf \delta V}) &= -\mu \delta J\exp{(nt+ikx)}.\label{eq:jump1c} \end{align} Substituting the expressions for ${\bf \delta V}$ and ${\bf \delta n}$ two jump conditions at the front and two continuity equations for the neutral and ionized gas are obtained: \begin{align} \Delta[\delta \rho U^2]+2\Delta[\rho U (u-n\xi)]+\Delta(\delta P) + g \int_{2}^{1} dz \delta \rho &= 0,\label{eq:jump1bis}\\ \Delta[U \delta P] + n \xi \Delta[P] &= 0,\\ \delta \rho U +\rho (u-n\xi) &= -\mu \delta J\exp{(nt+ikx)}.\label{eq:jump1bisc} \end{align} The integral in Equations~(\ref{eq:jump1b}) and (\ref{eq:jump1bis}) can be easily evaluated across the front discontinuity: \begin{equation} \int_2^1 dz \delta \rho = -\frac{1}{n}(- U_2 \delta \rho_2 + u_1 \rho_1 - u_2 \rho_2), \end{equation} while the
ionization flux perturbation at the front location is obtained from Equation~(\ref{eq:dj}) using Equation~(\ref{eq:drho}) for $\delta \rho$: \begin{equation} -\mu \delta J=\rho_2 \nu_{\rm rec} \left(\xi_0 -2 \frac{U_2}{n}\frac{\delta \rho_2}{\rho_2} \right). \end{equation} A few manipulations of Equations~(\ref{eq:jump1bis})-(\ref{eq:jump1bisc}) give the following system of four equations in four unknowns: \begin{align} \delta \rho_2 U_2^2 - 2\mu \delta J \Delta[U] + \Delta(\delta P) - g \Delta[\rho] \xi_0 &= 0, \label{eq:jump2a}\\ \Delta[U \delta P] - n\rho_1 U_1 \Delta[U] \xi_0 &= 0, \label{eq:jump2b}\\ \rho_1 u_1 &= -\mu \delta J + n \rho_1 \xi_0, \label{eq:jump2c}\\ \rho_2 u_2 &= -\mu \delta J + n \rho_2 \xi_0 - U_2 \delta \rho_2, \label{eq:jump2d} \end{align} where the relationship $\Delta[P]=-\Delta [\rho U^2]=-\rho_1 U_1 \Delta [U]$ has been used. Finally, Equations~(\ref{eq:dp1}) and (\ref{eq:dp2}) allow us to express the terms in $\delta P$ as: \begin{align} \Delta [\delta P] &= -(\rho_1 u_1 U_1 f_1 + \rho_2 u_2 U_2 f_2)-\frac{g}{n} U_2 \delta \rho_2 \left(1-\frac{f_2(\tilde{n})}{2\tilde{n}^2}\right),\\ \Delta [U\delta P] &= -(\rho_1 u_1 U_1^2 f_1 + \rho_2 u_2 U_2^2 f_2)-\frac{g}{n} U_2^2 \delta \rho_2 \left(1-\frac{f_2(\tilde{n})}{2\tilde{n}^2}\right). \end{align} In order to keep the equations concise, it is convenient to work using the dimensionless variables $\tilde{n} \equiv -n/(kU_2)$, $\tilde{\nu} \equiv -\nu_{\rm rec}/(kU_2)$ and $\tilde{g} \equiv g/(kU_2^2)$ and define the functions: \begin{alignat}{2} F_0 &\equiv f_1+f_2 & (\lim_{\epsilon \to 0} F_0 &= \tilde{n}/\epsilon), \\ F_1 &\equiv \epsilon f_1 + f_2 & (\lim_{\epsilon \to 0} F_1 &= 2\tilde{n}),\\ F_2 &\equiv \epsilon^2 f_1 + f_2 & (\lim_{\epsilon \to 0} F_2 &= \tilde{n}),\\ G_0 &\equiv \tilde{g}\left(1-\frac{f_2}{2\tilde{n}^2}\right) \quad & (\lim_{\epsilon \to 0} G_0 &= \tilde{g}).
\end{alignat} For convenience the limits of these functions for high density contrast $\delta \gg 1$ ({\it i.e.}, $\epsilon \rightarrow 0$) are shown as well. With these definitions, substituting $u_1$ and $u_2$ from Equations~(\ref{eq:jump2c})-(\ref{eq:jump2d}) into Equations~(\ref{eq:jump2a})-(\ref{eq:jump2b}) gives the following system of two equations in two unknowns ($\delta \rho_2$ and $\xi_0$): \begin{align} \left\{1+f_2+\frac{G_0}{\tilde{n}}+2\frac{\tilde{\nu}}{\tilde{n}}[F_1+2(1-\epsilon)]\right\}\delta \rho_2 + k\rho_2\left\{\tilde{n} F_0 + \tilde{\nu}[F_1+2(1-\epsilon)]-\tilde{g}\frac{\Delta\rho}{\rho_2}\right\}\xi_0 &= 0,\\ \left\{f_2+\frac{G_0}{\tilde{n}}+2\frac{\tilde{\nu}}{\tilde{n}}F_2\right\}\delta \rho_2 + k\rho_2 \left\{\tilde{n}(F_1 -1+\epsilon)+\tilde{\nu} F_2\right\} \xi_0 &= 0. \end{align} Non-trivial solutions for the dimensionless growth rate $\tilde{n}$ are found by setting the determinant of the linear system to zero: \begin{equation} \begin{split} \left[(1+f_2)(F_1-1+\epsilon)-f_2 F_0\right]\tilde{n}^2\\ +\left\{[(F_1+2-2\epsilon)(2F_1-2+2\epsilon-f_2)+F_2(1+f_2-2 F_0)]\tilde{\nu} - \left[G_0(F_0+F_1-1+\epsilon)-f_2\tilde{g}\left(\frac{1}{\epsilon}-1\right)\right]\right\}\tilde{n} \\ - \left[2F_2 \tilde{g} \left(\frac{1}{\epsilon}-1\right)+G_0(F_1+2-2\epsilon-F_2)\right]\tilde{\nu}+G_0\tilde{g}\left(\frac{1}{\epsilon}-1\right)=0. \label{eq:car} \end{split} \end{equation} \section{Results}\label{sec:res} The general solution of the characteristic Equation~(\ref{eq:car}) can be found numerically as a function of $g$, $\nu_{\rm rec}$ and $\epsilon$. However, in the limit of high density contrast across the I-front ({\it i.e.}, $\delta \gg 1$), Equation~(\ref{eq:car}) simplifies significantly and has an analytical solution.
The special cases of neglecting the effective gravity ({\it i.e.}, stability of non-accelerating I-fronts) and neglecting recombinations will be considered separately to check whether known results on the classical RTI growth rate are recovered. Only density contrasts $\delta>2$ ({\it i.e.}, $\epsilon < 0.5$) should be considered as this condition is necessary for a D-type front to exist. \subsection{Limit 1: Neglecting Gravity} Setting both $g=0$ and $\nu_{\rm rec}=0$, Equation~(\ref{eq:car}) becomes \begin{equation} (1+f_2)(F_1-1+\epsilon)-f_2 F_0 =0. \end{equation} The real roots of this equation are negative for any value $\epsilon < 0.4$, meaning that the I-front is stable for density contrast $\delta >2.5$. Including recombinations ({\it i.e.}, $\nu_{\rm rec} >0$) has minor effects on the stability. This is contrary to the results of \cite{Vandervoort:62} who found unstable solutions for weak D-type fronts when neglecting recombinations. However, \cite{Vandervoort:62} assumed isentropic perturbations while the present result is derived in the limit of incompressible perturbations. Strong D-type fronts, that have $|U_2|>c_{s2}$ where $c_{s2}$ is the sound speed in the ionized gas, are instead stable even neglecting recombinations \citep{Newman:67}. It is likely that the unstable modes found neglecting recombinations appear because of the effects of gas compressibility. \subsection{Limit 2: Neglecting Recombinations} In this subsection it is shown that neglecting recombinations but including non-zero effective gravity the RTI for incompressible gas is recovered in the limit of large density contrast ($\epsilon \rightarrow 0$). Setting $\nu_{\rm rec}=0$, Equation~(\ref{eq:car}) becomes \begin{equation} [(1+f_2)(F_1-1+\epsilon)-f_2 F_0]\tilde{n}^2-\left[G_0(F_0+F_1-1+\epsilon)-f_2\tilde{g}\left(\frac{1}{\epsilon}-1\right)\right]\tilde{n} +G_0\tilde{g}\left(\frac{1}{\epsilon}-1\right)=0.
\label{eq:car1} \end{equation} This equation can be solved for $\tilde{n}$ only numerically, however in the limit of large density contrast $\delta \gg 1$ ($\epsilon \rightarrow 0$) and $\tilde{n} \gg 1$, the expressions for the functions $f_1$ and $f_2$ become: $f_1 \rightarrow \tilde{n}/\epsilon$ and $f_2 \rightarrow \tilde{n}$. Hence: $F_0 \rightarrow \tilde{n}/\epsilon$, $F_1 \rightarrow 2\tilde{n}$, $F_2 \rightarrow \tilde{n}$. Substituting these limits in Equation~(\ref{eq:car}) the following 4th order equation is obtained: \begin{equation} \tilde{n}^4=\tilde{g}^2. \end{equation} This equation has two real solutions $\tilde{n}=\pm \sqrt{\tilde{g}}$. Hence the growing mode in physical units is $n=\sqrt{gk}$, which is indeed the well known dispersion relation for the incompressible RTI in the limit $\delta \gg 1$. A numerical inspection of Equation~(\ref{eq:car1}) shows that for values of $\epsilon\not=0$ a better approximation to the solution is $n=\sqrt{gk A}$, where $A \equiv (\rho_1-\rho_2)/(\rho_1+\rho_2)=(1-\epsilon)/(1+\epsilon)$ is the Atwood number. \subsection{General Solution} The general solution of Equation~(\ref{eq:car}) can only be found numerically. However, in the limit $\epsilon \rightarrow 0$ ($\delta \gg 1$) and $\tilde{n} \gg 1$ the following simplified 4th order equation is obtained: \begin{equation} \tilde{n}^4 + 2 \tilde{\nu} \tilde{n}^3 + 2\tilde{\nu} \tilde{g} \tilde{n} - \tilde{g}^2=0. \end{equation} This equation has two imaginary solutions ($\pm \sqrt{-\tilde{g}}$) and two real solutions: \begin{equation} \tilde{n}=-\tilde{\nu} \pm \sqrt{\tilde{\nu}^2+\tilde{g}}. \end{equation} Hence, in physical units the growing mode has a rate \begin{equation} n=-\nu_{\rm rec} + \sqrt{\nu_{\rm rec}^2+g k}. \end{equation} Therefore, at large scales $k \ll k_{cr} \equiv \nu_{\rm rec}^2/g$ the growth of the RTI is $n \approx \sqrt{gk}(gk/4\nu_{\rm rec}^2)^{1/2}$, suppressed by a factor $(k/4k_{cr})^{1/2}$ with respect to the classical RTI.
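The simplified quartic factors as $(\tilde{n}^2+\tilde{g})(\tilde{n}^2+2\tilde{\nu}\tilde{n}-\tilde{g})=0$, which is easy to confirm numerically (the parameter values below are arbitrary):

```python
import numpy as np

nu_t, g_t = 0.8, 2.5    # arbitrary positive values of tilde{nu} and tilde{g}

# roots of n^4 + 2*nu*n^3 + 2*nu*g*n - g^2 = 0
roots = np.roots([1.0, 2.0 * nu_t, 0.0, 2.0 * nu_t * g_t, -g_t**2])
real_roots = sorted(r.real for r in roots if abs(r.imag) < 1e-9)

growing = real_roots[-1]                   # the positive (growing) real root
analytic = -nu_t + np.sqrt(nu_t**2 + g_t)  # -nu + sqrt(nu^2 + g)
print(growing, analytic)                   # agree to machine precision
```

The positive real root of the quartic coincides with the analytic expression, confirming the factorization.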
A numerical inspection of Equation~(\ref{eq:car}) shows that for values of $\epsilon\not=0$ a better approximation of the solution is $n=-\nu_{\rm rec}+(\nu_{\rm rec}^2+ gk A)^{1/2}$, where $A$ is the Atwood number. \subsection{Discussion on the Effects of Gas Compressibility} Let's now discuss how the assumption of gas incompressibility adopted in this work may affect the results. The discussion will be guided by previous results on the stability of Rayleigh-Taylor modes and non-accelerating I-fronts assuming isothermal or isentropic perturbations. The effect of compressibility on the linear growth of the RTI was considered by several authors assuming either isothermal or isentropic equilibrium states and perturbations \citep[see ][for a review]{Gauthier:10}. Compressibility modifies the RTI growth rate with respect to the incompressible case as follows: the growth rate decreases as the ``stratification'' parameter, $g/kc_{s2}^2$, increases and the adiabatic indices decrease. Stratification and compressibility effects are more important at small wavenumbers and the growth rates are larger when the light fluid is more compressible than the heavy one. Compressibility effects are larger at small Atwood numbers ({\it i.e.}, low density contrast). For the isothermal case, compressibility stabilizes the RTI \citep[{\it e.g.},][]{Blake:72, Mathews:77, Shivamoggi:83, Ribeyre:04}. However, the departure from incompressible type behavior is small in most circumstances. The correction term, $g/kc_{s2}^2$, is much smaller than unity unless the effective acceleration at the I-front is $g_{eff} \gg GM/R_s^2$. It is easy to show that for $g=GM/R_s^2$ the stabilizing term is of the order $R_{b,in}/(kR_s^2)$, that is much smaller than unity at wavelengths of interest $\lambda < 2\pi R_s$, where $R_{b,in}\ll R_s$ is the Bondi radius inside the {\ion{H}{ii} } region.
The stability of isentropic non-accelerating I-fronts depends on the recombination rate but also on the Mach number of the gas downstream of the front. Supersonic flows (strong D-type and weak R-type) are stable even neglecting recombinations \citep{Newman:67}. D-critical and weak D-type fronts with Mach number in the ionized gas ${\cal M}_2 \ll 1$ ($|U_2| \ll c_{s2}$) are also stable \citep{Axford:64}. The stability of the front found in this paper may be understood in this context taking the limit $c_{s2} \rightarrow \infty$, that gives ${\cal M}_2 \ll 1$. Compressibility effects destabilize the front for weak D-type fronts with ${\cal M}_2 \lower.5ex\hbox{\ltsima} 1$, but only on scales smaller than one tenth of the recombination scale in the ionized gas. In light of the discussion above, the effects of compressibility on the stability of accelerating I-fronts should be small in many circumstances. Compressibility effects should be most relevant for weak D-type fronts with ${\cal M}_2 \sim 0.5$. In this regime $\tilde{g} \sim 4g/(kc_{s2}^2) \sim 1$, and $\tilde{n} \lower.5ex\hbox{\ltsima} 1$. However, it is unclear whether the instability growth rate is increased or reduced because Rayleigh-Taylor modes are typically stabilized while the I-front is slightly destabilized by compressibility effects. \section{Summary and Conclusions}\label{sec:conc} The growth of the RTI in D-type ionization fronts has been investigated via perturbation analysis assuming incompressible gas. In the limit of a large density contrast at the ionization front the RTI growth rate has the simple analytical solution $n=-\nu_{\rm rec}+(\nu_{\rm rec}^2+gk)^{1/2}$. Therefore, recombinations in the ionized gas stabilize the RTI on scales larger than \begin{equation} \lambda_{cr}=2\pi \frac{g}{\nu_{\rm rec}^2}, \label{eq:crit} \end{equation} suppressing their growth for $\lambda \gg \lambda_{cr}$ by a factor $\sim (\lambda_{cr}/4\lambda)^{1/2}$ with respect to the non-radiative case.
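As a concrete illustration of Equation~(\ref{eq:crit}) and of the suppression at the critical scale, the sketch below uses assumed (not paper-specific) numbers for $g$ and $\nu_{\rm rec}$:

```python
import numpy as np

nu_rec = 2.6e-11     # s^-1, e.g. n_e = 100 cm^-3 at 1e4 K (assumed)
g = 1.0e-6           # cm s^-2, assumed effective acceleration of the front

lam_cr = 2.0 * np.pi * g / nu_rec**2     # critical wavelength, Eq. (crit)
k_cr = nu_rec**2 / g                     # corresponding critical wavenumber

def growth(k):
    """RTI rate with recombinations (high density-contrast limit)."""
    return -nu_rec + np.sqrt(nu_rec**2 + g * k)

# at k = k_cr the rate is (sqrt(2)-1)*nu_rec, already below the classical sqrt(g*k) = nu_rec
print(lam_cr / 3.086e18)                 # critical wavelength in pc
print(growth(k_cr) / np.sqrt(g * k_cr))  # suppression factor ~ 0.414
```

At exactly the critical scale the growth rate is already a factor $\sqrt{2}-1\approx0.41$ below the classical value, and the suppression strengthens as $(\lambda_{cr}/4\lambda)^{1/2}$ at larger scales.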
Contrary to previous analyses of isothermal non-accelerating I-fronts, the front is stable when gravity and recombinations are set to zero, suggesting that the assumption of gas incompressibility has a stabilizing effect. The stabilization is very effective because in most problems $\lambda_{cr}$ is much smaller than the scales of interest. For instance, for non-accelerating fronts in a gravitational field produced by a point mass $M$ with $g=GM/r_s^2$, where $r_s$ is the location of the I-front, \begin{equation} \theta_{cr} \equiv \frac{\lambda_{cr}}{2\pi r_s}=\frac{GM}{\nu_{\rm rec}^2 r_s^3} \sim 4 \times 10^{-10} \frac{X}{l}, \label{eq:thcrit} \end{equation} where $r_s=(3S_0/4\pi n^2 \alpha^{(2)})^{1/3}$ is the Str\"omgren radius and $S_0=5.7 \times 10^{48}~{\rm s}^{-1} (M/M_{\odot}) l/X$ is the number of hydrogen ionizing photons per unit time, expressed as a function of the Eddington ratio $l \equiv L/L_{Ed}$ and the mean ionizing photon energy $X$ in Rydbergs. In this example, at scales $\theta \lower.5ex\hbox{\gtsima} 10^{-3}$, the front becomes unstable to RTI only for very small Eddington ratios: $l \lower.5ex\hbox{\ltsima} 4 \times 10^{-7} X$ \citep[see,][]{Parketal:13}. However, during a burst of luminosity when the front acceleration timescale $t_{acc}=(dv/dt/r)^{-1/2}$ is comparable to or shorter than the recombination timescale, the RTI can develop even for Eddington ratios of the order of unity. Another case in which the front is accelerating and the RTI may develop is when the front propagates toward regions of lower density, for instance if the ionizing source is at the center of a halo with a gas density profile $\rho \propto r^{-2}$ \citep{WhalenN:08}. This analytical work has been inspired by the results of numerical simulations \citep{ParkR:11, ParkR:12, ParkR:13} showing that recombinations in {\ion{H}{ii} } regions produced by BHs accreting from a neutral medium have a stabilizing effect on the growth of RTI at the I-front.
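The numerical coefficient in Equation~(\ref{eq:thcrit}) follows because the mass cancels: with $\nu_{\rm rec}=n\alpha^{(2)}$ and $r_s^3=3S_0/(4\pi n^2\alpha^{(2)})$ one gets $\theta_{cr}=4\pi GM/(3S_0\alpha^{(2)})$, and $S_0\propto M$. A quick check in cgs units (constants rounded):

```python
import numpy as np

G = 6.674e-8        # cm^3 g^-1 s^-2
Msun = 1.989e33     # g
alpha2 = 2.6e-13    # cm^3 s^-1

# theta_cr = 4*pi*G*M / (3*S0*alpha2) with S0 = 5.7e48 s^-1 (M/Msun) l/X,
# so theta_cr = coeff * X / l, independent of the mass M
coeff = 4.0 * np.pi * G * Msun / (3.0 * 5.7e48 * alpha2)
print(coeff)        # ~ 4e-10, the coefficient quoted in Eq. (thcrit)
```

This reproduces the quoted $\theta_{cr}\sim 4\times10^{-10}\,X/l$ and hence the critical Eddington ratio $l\sim 4\times10^{-7}X$ at $\theta\sim10^{-3}$.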
In these simulations the I-front is unstable only during two phases of the duty cycle: (i) during bursts of accretion onto the BH, when the outward acceleration of the front increases the effective gravity and (ii) just before a burst when the accretion luminosity reaches a minimum value triggering the collapse of the shell and the {\ion{H}{ii} } region onto the BH. During the burst the front fragments on the smallest resolved scales into knots that are optically thick, casting a shadow before dissolving (self-gravity is not included in the simulations). The phase when the I-front collapses onto the BH is characterized by denser fingers of gas protruding toward the BH. The results of the present study are in agreement with the observed phenomenology as discussed in detail in \cite{Parketal:13}. Simulations of moving intermediate mass BHs with radiation feedback \citep{ParkR:13} also show the formation of a dense shell upstream of the BH supported against gravity by less dense hot gas in a cometary-shaped {\ion{H}{ii} } region. Also in this case RTI at the front appears to be stabilized by recombinations. The results of this work may also be relevant for the stability of supershells in AGN or galactic winds. In general, outflows or winds can be produced by pressure gradients as a result of thermal energy injection or transfer of momentum from the radiation field to the gas. Momentum driven winds can be produced by Compton scattering of photons on electrons, by photons scattered or absorbed by dust grains, or by photo-ionization of hydrogen and heavy ions by UV and X-rays. For instance, radiation pressure on dust has been suggested as an important feedback mechanism in regulating star formation on galactic scales.
However, for optically thick media and Eddington ratios close to unity (typical of many galaxies and star clusters), the gas and the radiation field are coupled producing phenomena such as photon bubbles and radiation RTI \citep{Turneretal:05, JaquetK:11, JiangDS:13}. In this regime \cite{KrumholzT:12, JiangDS:13} showed that the transfer of IR radiation through the neutral gas triggers radiative RTI, driving turbulence but not a wind. Although these models do not include ionized bubbles, a realistic description of the wind involves supershells produced by OB associations near the galactic disk that may provide some stabilization during the initial phases of the wind launching process. A more direct application of the results of this work regards a possible driving mechanism of AGN winds based on photo-ionization of metal ions within 0.1 pc from the supermassive BH \citep{Proga:00, DebuhrQM:12, Novak:12}. Ionization fronts of metal ions of different elements are located at various distances near the AGN, and the ions may (or may not) be coupled to the hydrogen gas through Coulomb collisions \citep{BaskinL:12}. Momentum is deposited in shells at each ionization front, producing an accelerating wind subject to RTI. Ion recombinations should stabilize these shells at least on scales larger than a critical value that can be estimated using a modification of Equation~(\ref{eq:crit}) in which $\nu_{\rm rec}$ is calculated for the appropriate metal ion. \section*{Acknowledgments} MR's research is supported by NASA grant NNX10AH10G and NSF CMMI1125285. This work, made in the ILP LABEX (under reference ANR-10-LABX-63), was supported by French state funds managed by the ANR within the Investissements d'Avenir programme under reference ANR-11-IDEX-0004-02. Many thanks to KH Park, J. Drake and C. Reynolds for inspiring the paper. Many thanks to S. Falle for refereeing the paper. \bibliographystyle{mn2e}
\section{Introduction} \noindent Consider all the possible states of a system in \emph{Boltzmannian statistical mechanics}. A measure is defined over these states. The main aim of statistical mechanics is to work as a bridge between the microscopic and macroscopic levels of description of a system, and the measure is a crucial ``level-bridging'' ingredient. An important question in the foundations of Boltzmannian statistical mechanics is how to interpret this measure. A recent proposal is that it is best interpreted as a \emph{typicality measure}: it represents the \emph{relative size of sets of states}, and typical states show a certain property if the measure of the set that corresponds to this property is one or close to one. In Boltzmannian statistical mechanics the same measure is appealed to in the equilibrium and non-equilibrium context. Hence the interpretation as a typicality measure is relevant for both contexts. This approach enjoys particular prominence among contemporary physicists who endorse Boltzmannian statistical mechanics (D\"{u}rr 1998; Goldstein 2001; Goldstein and Lebowitz 2004; Lebowitz 1993a; Lebowitz 1993b; Lebowitz 1999) but has also been advocated by philosophers (Maudlin 2007; Volchan 2007).\\ \noindent Not only in statistical mechanics but also for \emph{measure-theoretic dynamical systems} an interpretation needs to be found for the standard measures used, and one suggestion is to interpret them as typicality measures. Unlike statistical mechanics, measure-theoretic dynamical systems theory is not concerned with bridging scales of description. Moreover, while statistical mechanics is a physical theory, dynamical systems theory is a set of mathematical tools with applications in various sciences (among others, in physics, chemistry, biology, meteorology and climate science). Furthermore, understanding the behaviour of ``most'' initial conditions was always a central concern in dynamical systems theory. 
Hence the notion of typicality does not cause as much controversy here as in statistical mechanics.\\ \noindent What is missing is a thorough treatment of the question of why these measures are a good choice of typicality measures.\footnote{Physicists sometimes seem to hold that it is fine to take the standard measures used in statistical mechanics as a postulate. They might hold such a pragmatic viewpoint because it does not lead to wrong conclusions for the practical problems they are interested in, and they do not have time to engage in more thorough foundational debates. Still, philosophers and some physicists want to arrive at a conceptual understanding of statistical mechanics, and hence are interested in a thorough conceptual motivation of typicality measures.} Witness Volchan (2007, 13): ``Many tough questions are still to be addressed. For example, as typicality is relative to the measure used, how one justifies a particular choice? Under what criteria?''. This paper attempts to contribute to filling this gap by presenting \emph{a first tentative proposal of how to justify typicality measures in statistical mechanics and in measure-theoretic dynamical systems theory more generally}. An argument will be called a justification of typicality measures exactly when (i) the premises imply that there is a unique typicality measure or a set of typicality measures which agree on which sets are typical/atypical (here any measure of this set can be used as typicality measure); and (ii) the premises are at least potentially more plausible than just postulating that certain measures are a good choice of typicality measures.\\ \noindent This paper proceeds as follows. Section~\ref{DynamicalSystems1} introduces Boltzmannian statistical mechanics and Section~\ref{DynamicalSystems2} introduces measure-theoretic dynamical systems. Section~\ref{Typicality} discusses the notion of typicality.
Section~\ref{Pitowsky} outlines Pitowsky's (2012) justification of typicality measures, which is the only paper known to the author which advances a justification of typicality measures of statistical mechanics and measure-theoretic dynamical systems. Section~\ref{PitowskyCriticism} argues that Pitowsky's argument does not fit the bill. Sections~\ref{New1} and \ref{New2} present a first proposal of how to justify typicality measures of Boltzmannian statistical mechanics and measure-theoretic dynamical systems. Finally, Section~\ref{Uniqueness} shows that if systems are ergodic/epsilon-ergodic, one obtains uniqueness results about the typicality measures. The paper concludes in Section~\ref{Conclusion}. \section{Boltzmannian Statistical Mechanics}\label{DynamicalSystems1} \noindent The concern in this paper is Boltzmannian statistical mechanics\footnote{For a detailed discussion of Boltzmannian SM see Frigg (2008) and Uffink (2007).} (SM) and not Gibbsian SM. (I adopt the common distinction that while Gibbsian SM is about ensembles, Boltzmannian SM is about a single system. So, e.g., equilibrium is associated with a single state in Boltzmannian SM and with a probability distribution in Gibbsian SM (cf.\ Frigg 2008; Uffink 1996)).\footnote{How Boltzmannian SM relates to Gibbsian SM, and why the very same measures are used in both frameworks, are deep and interesting foundational questions, which are beyond the scope of this paper (for more on this issue see Frigg 2008 and Uffink 2007).} In Boltzmannian statistical mechanics the object of study is a system consisting of $n$ classical particles. The boundary conditions assumed here are that the system is in a bounded container and is isolated from the environment. The \emph{microscopic description} is as follows.
The \emph{microstate} of the system is represented by a point $x=(p_{1},\ldots p_{n},q_{1},\ldots q_{n})$, where $q_{i}$ is the (three-dimensional) position of the $i$-th particle and $p_{i}$ is the (three-dimensional) momentum of the $i$-th particle ($1\leq i \leq n$). The microstates $x$ are elements of the $6n$-dimensional state space $\Gamma$ (where $\Gamma$ represents all possible position and momentum coordinates of all the particles). Because the energy is conserved, the motion of the system is confined to a $(6n-1)$-dimensional energy hypersurface $\Gamma_{E}$, which can be shown to be compact ($E$ is the value of the energy of the system). The system starts in a certain initial condition -- an initial microstate $x$. The evolution of the system is governed by Hamilton's equations, whose solutions are the phase flow $\phi_{t}$ on the energy hypersurface $\Gamma_{E}$. That is, $\phi_{t}(x)$ gives the microstate of the system that started in initial condition $x$ after $t$ time steps. \\ \noindent The \emph{macroscopic description} is as follows. \emph{Macrostates} $M_{i}$, $1\leq i\leq k$ $(k\in\field{N})$, are characterised by the values of macroscopic parameters such as local pressure and local temperature. To each macrostate $M_{i}$, $i=1,\ldots, k$, there corresponds a \emph{macroregion} $\Gamma_{M_{i}}$ which consists of all $x\in \Gamma_{E}$ for which the macroscopic parameters take values which are characteristic for $M_{i}$. The $\Gamma_{M_{i}}$, $1\leq i \leq k$, do not overlap and jointly cover $\Gamma_{E}$.\footnote{That is, technically the $\Gamma_{M_{i}}$ are a \emph{partition}, meaning that $\Gamma_{M_{i}}\in\Sigma_{\Gamma_{E}}$ for all $i$, $\Gamma_{M_{i}}\cap \Gamma_{M_{j}}=\emptyset$ for $i\neq j$ and $\cup_{i=1}^{k}\Gamma_{M_{i}}=\Gamma_{E}$.} Two macrostates are of particular importance: the equilibrium state $M_{eq}$ and the macrostate at the beginning of the process $M_{p}$.
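The idea that a single macroregion can dominate the state space (made precise for the equilibrium macrostate below) can be illustrated with a toy example that is not from the paper: take $n$ two-state ``particles'', let the macrostate be the number of particles in one of the two states, and count microstates with the uniform (counting) measure:

```python
from math import comb

n = 100                                    # number of two-state "particles"
total = 2**n                               # number of microstates

# "equilibrium" macroregion: occupation number within 10 of the symmetric value n/2
eq_states = sum(comb(n, h) for h in range(40, 61))
print(eq_states / total)                   # ~ 0.96: one macroregion dominates
```

Already for $n=100$ the near-symmetric macroregion contains about 96\% of all microstates, and the fraction approaches one as $n$ grows.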
The macroscopic evolution of the system with initial microstate $x$ is given by $M_{x}(t)=M(\phi_{t}(x))$, where $M(y)$ is the macrostate corresponding to microstate $y$.\\ \noindent Regarding the measures of interest in Boltzmannian SM, note that for Hamiltonian systems the uniform measure $\mu$ defined on $\Gamma$ (the Lebesgue measure) is invariant under the dynamics; this result is known as Liouville's theorem (Petersen 1983, 5). $\mu$ can be restricted to a normalized measure $\mu_{E}$ on $\Gamma_{E}$, which is also invariant under the dynamics. $\mu_{E}$ is called the \emph{microcanonical measure} and is the standard measure used in Boltzmannian SM. Hence the question is how to justify $\mu_{E}$ as a typicality measure. \\ \noindent Because the same measures are employed in Boltzmannian SM for questions regarding equilibrium and non-equilibrium, the interpretation as a typicality measure is relevant in both contexts.\footnote{For instance, it is often argued that typical initial states show thermodynamic-like behaviour (a claim about non-equilibrium) or that typical states are in equilibrium (a claim about equilibrium).} In general, much more is known about equilibrium SM than non-equilibrium SM.\footnote{In practice physicists usually employ Gibbsian SM, and here the standard measures (microcanonical measures, canonical measures, and grand-canonical measures) are very successful in deriving predictions about macroscopic behaviour. The relationship between Boltzmannian and Gibbsian SM is a controversial theme, which goes beyond the scope of this paper (for more see Frigg 2008 and Uffink 2007).} In particular, it is a central aim of non-equilibrium SM to explain \emph{thermodynamic-like behaviour}, but this is extremely difficult. To characterise thermodynamic-like behaviour, the Boltzmann entropy first needs to be introduced.
The \emph{Boltzmann entropy of a macrostate} $M_{i}$ is $S_{B}(M_{i})= k_B \log(\mu_{E}(\Gamma_{M_{i}}))$, where $k_B$ is the Boltzmann constant; and the \emph{Boltzmann entropy of a system} at time $t$ is $S_{B}(t)= S_{B}(M_{x}(t))$. It can be shown that for gases the equilibrium macroregion covers nearly all of $\Gamma_{E}$, and hence the Boltzmann entropy is highest in equilibrium. The macrostate at the beginning of the process $M_{p}$ is, by assumption, a low-entropy state. Thermodynamic-like behaviour is characterised as follows (Lavis 2005): the general tendency is that the entropy of a system that is initially in a low-entropy state increases until it comes close to its maximum value and then stays there, but frequent small and very rare large downward fluctuations (contra irreversibility) are allowed. Now proponents of the typicality approach argue that typical initial states show thermodynamic-like behaviour, and hence that \emph{an explanation of thermodynamic-like behaviour can be given in terms of typicality}.\footnote{For a list of proponents of the typicality approach, see the references given in the introduction.} \section{Measure-Theoretic Dynamical Systems}\label{DynamicalSystems2} \noindent Unlike statistical mechanics, measure-theoretic dynamical systems theory is not concerned with bridging scales of description. Moreover, while statistical mechanics is a physical theory, dynamical systems theory is a set of mathematical tools with applications in various sciences (among others, in physics, chemistry, biology, meteorology and climate science). Furthermore, understanding the behaviour of ``most'' initial conditions has always been a central concern in dynamical systems theory.
Hence the notion of typicality does not cause as much controversy here as in statistical mechanics.\\ \noindent Dynamical systems theory can be split into measure-theoretic dynamical systems theory (where there is a measure and statistical properties are studied) and topological dynamical systems theory (where there is no measure and topological properties are studied).\footnote{Dynamical systems have been studied from both the measure-theoretic and the topological perspective, and the interrelations between these perspectives can be very complex (cf.\ Petersen 1983).} \emph{This paper is only concerned with measure-theoretic dynamical systems theory}. There are also interesting foundational questions in topological dynamical systems theory, where a topological notion of typicality plays a role, but this topic is beyond the scope of this paper (see, e.g., Frigg and Werndl, 2012, for some thoughts on topological typicality).\\ \noindent Formally, measure-theoretic dynamical systems (mt-dynamical systems) are defined as follows (Petersen 1983). 
$(X,\Sigma_{X},\mu_{X},f_{t})$ is a \emph{mt-dynamical system} iff (if and only if) $X$ is a set (the \emph{phase space}), where in this paper $X$ is an interval\footnote{An interval in $\field{R}^{n}$ is any set $A_{1}\times \ldots \times A_{n}$ with $A_{i}=(a_{i},b_{i})$ ($a_{i},b_{i}\in\field{R}\cup\{-\infty,\infty\}$), $(a_{i},b_{i}]$ ($a_{i}\in\field{R}\cup\{-\infty,\infty\}$, $b_{i}\in\field{R}$), $[a_{i},b_{i})$ ($a_{i}\in\field{R}$, $b_{i}\in\field{R}\cup\{-\infty,\infty\}$), or $[a_{i},b_{i}]$ ($a_{i},b_{i}\in\field{R}$), where always $a_{i}<b_{i}$, $1\leq i \leq n$.} in $\field{R}^{n}$; $\Sigma_{X}$ is the Lebesgue $\sigma$-algebra of subsets of $X$ (elements of $\Sigma_{X}$ are called \emph{measurable} sets); $\mu_{X}:\Sigma_{X}\rightarrow [0,1] $ is a normalized measure on $X$; and $f_{t}:X\rightarrow X$ (the \emph{evolution equations}, $f_{t}(x)$ denotes the state of the system that started in initial state $x$ after $t$ time steps), where $t\in\field{R}^{+}_{0}$ (continuous time) or $t\in\field{N}_{0}$ (discrete time), are surjective measurable mappings such that $f_{t+s}=f_{t}\circ f_{s}$ for all $t,s\in\field{R}^{+}_{0}$ or $\field{N}_{0}$, $f_{t}(x)$ is jointly measurable in $(x,t)$, and the measure is \emph{invariant under the dynamics}, i.e., \begin{equation}\label{invariance} \mu_{X}(A)=\mu_{X}(f_{t}^{-1}(A))\,\,\textnormal{for all measurable}\,\,A\subseteq X\,\,\textnormal{and all}\,\,t\in\field{R}^{+}_{0}\,\,\textnormal{or}\,\, \field{N}_{0}.\end{equation} Statistical mechanical systems $(\Gamma_{E}, \Sigma_{\Gamma_{E}}, \mu_{E},\phi_{t})$ are (continuous-time) mt-dynamical systems (where $\Sigma_{\Gamma_{E}}$ is the Lebesgue $\sigma$-algebra of $\Gamma_{E}$). This just means that the $(\Gamma_{E}, \Sigma_{\Gamma_{E}}, \mu_{E},\phi_{t})$ satisfy the formal definition of a mt-dynamical system.
It does \emph{not} imply that SM is reducible to dynamical systems theory in any sense (the physical postulates of SM are not part of the mathematical framework of dynamical systems theory). \\ \noindent Let me introduce two (discrete-time) mt-dynamical systems which will serve as examples. They are very simple and hence particularly suited for illustration purposes.\\ \noindent \emph{Example 1.} The first example is the \emph{tent map} $(Y,\Sigma_{Y},\mu_{Y},s_{t})$ where $Y=[0,1]$ is the unit interval, $s_{t}(x)=s^{t}(x)$ (i.e., the $t$-th iterate\footnote{The $0$-th iterate is the identity function, i.e., $s^{0}(x)=x$ for all $x\in X$.} of $s(x)$), $t\in\field{N}_{0}$, with \begin{equation} s(x)= \begin{cases} 2x & \text{if $0\leq x<\frac{1}{2}$,} \\ 2(1-x) &\text{if $\frac{1}{2}\leq x\leq 1$,} \end{cases} \end{equation} and $\mu_{Y}$ is the uniform measure (Lebesgue measure). Figure 1(a) shows the density of the measure $\mu_{Y}$ of the tent map. \\ \begin{figure} \centering \includegraphics{TentLogMeasures2} \caption{(a) the measure of the tent map; (b) the measure of the logistic map}\label{billiard} \end{figure} \noindent \emph{Example 2.} The second example is the \emph{logistic map} $(Z,\Sigma_{Z},\mu_{Z},r_{t})$ where $Z=[0,1]$, $r_{t}(x)=r^{t}(x)$, $t\in\field{N}_{0}$, with \begin{equation} r(x)=4x(1-x), \end{equation} and the standard measure is given by \begin{equation} \mu_{Z}(A)=\int_{A}\frac{1}{\pi\sqrt{x(1-x)}}d\lambda, \end{equation} where $\lambda$ is the Lebesgue measure. Figure 1(b) shows the density of the measure $\mu_{Z}$ of the logistic map. The measure of the logistic map is particularly interesting because, unlike that of the tent map, it is \emph{not} uniform.
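The invariance of $\mu_{Y}$ under the tent map and of $\mu_{Z}$ under the logistic map can be checked by hand on sets of the form $[0,a]$, using the closed-form CDF of $\mu_{Z}$, namely $F(x)=(2/\pi)\arcsin(\sqrt{x})$. The following sketch (my own illustration, not part of the text; all function names are hypothetical) confirms the computation numerically:

```python
import math

def tent_preimage_measure(a):
    # s^{-1}([0, a]) = [0, a/2] ∪ [1 - a/2, 1]; its uniform measure is
    # a/2 + a/2 = a, so the uniform measure mu_Y is invariant under s.
    return a / 2 + (1 - (1 - a / 2))

def F(x):
    # CDF of mu_Z: F(x) = ∫_0^x dt / (π √(t(1-t))) = (2/π) arcsin(√x).
    return (2 / math.pi) * math.asin(math.sqrt(x))

def logistic_preimage(a):
    # r^{-1}([0, a]) = [0, x1] ∪ [x2, 1] for r(x) = 4x(1-x),
    # where x1 and x2 are the roots (1 ∓ √(1-a)) / 2 of 4x(1-x) = a.
    s = math.sqrt(1 - a)
    return (1 - s) / 2, (1 + s) / 2

for a in (0.1, 0.25, 0.5, 0.9):
    x1, x2 = logistic_preimage(a)
    # mu_Y(s^{-1}([0,a])) = mu_Y([0,a]) = a: mu_Y is tent-invariant.
    assert abs(tent_preimage_measure(a) - a) < 1e-12
    # mu_Z(r^{-1}([0,a])) = mu_Z([0,a]): mu_Z is logistic-invariant.
    assert abs(F(x1) + (1 - F(x2)) - F(a)) < 1e-12
    # By contrast, the Lebesgue measure of r^{-1}([0,a]) is 1 - √(1-a),
    # which differs from a in general: the uniform measure is not
    # invariant under the logistic map.
```

The last comment already hints at why the choice of measure matters: the same dynamics preserves $\mu_{Z}$ but not the uniform measure.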
\\ \noindent The definition of mt-dynamical systems immediately implies that all dynamical systems are \emph{forward deterministic}: the state of the system at one time determines the state of the system at all future times. If, additionally, the state of the system at one time determines the state of the system at all past times, then the system is \emph{deterministic} (i.e., \emph{forward and backward deterministic}). Some mt-dynamical systems such as Example 1 (the tent map) or Example 2 (the logistic map) are only forward deterministic. Other mt-dynamical systems such as those in SM are deterministic. \section{Typicality}\label{Typicality} \noindent A typicality measure represents the \emph{relative size of sets of states}. Intuitively speaking, any function $\mu_{T}$ describing the size of sets of states should satisfy the standard axioms of a \emph{measure}: $\mu_{T}(\emptyset)=0$, $\mu_{T}(A)\geq 0$ for any measurable set $A$, $\mu_{T}(A\cup B)=\mu_{T}(A)+\mu_{T}(B)$ whenever $A\cap B=\emptyset$ where $A, B$ are measurable sets, and \label{sad} $\mu_{T}(\cup_{i=1}^{\infty}A_{i})=\sum_{i=1}^{\infty}\mu_{T}(A_{i})$ whenever $A_{i}\cap A_{j}=\emptyset$ for all $i,j,i\neq j$, where all $A_{i}$ are measurable sets. One might also argue that, because typicality measures represent the \emph{relative} size of sets of states, they should be normalized, i.e., $\mu_{T}(C)=1$, where $C$ is the set of all elements. Normalized measures are called \emph{probability measures}. In what follows, I assume that typicality measures are normalized. 
However, this assumption is irrelevant for the main results of this paper (Theorems 1 and 2), which deliver exactly the same conclusion if the measures are not normalized.\footnote{It is easy to see that the proofs of Theorems 1 and 2 also go through when the measures are not normalized.} The only results which depend on the assumption that typicality measures are normalized are the uniqueness results (Theorems 3 and 4), which invoke the notions of ergodicity and epsilon-ergodicity.\\ \noindent Intuitively speaking, something is typical if it happens in the vast majority of cases. For instance, typically, when throwing dice, at some point the number six will show up; or typical lottery tickets are blanks. Formally, given a basic set of elements $C$, one says that \emph{typical elements have property P} relative to a typicality measure $\mu_{T}$ iff \emph{$\mu_{T}(C\setminus D)=0$}, where $D\subseteq C$ consists of those elements in $C$ that have property $P$ (this will be called \emph{typicality I}). Derivatively, $D$ is called the \emph{typical set} and $C\setminus D$ the \emph{atypical set}. A less stringent notion of typicality arises when one only requires that the relative size of atypical sets is close to zero (and not zero). Formally, given a basic set of elements $C$, one then says that \emph{typical elements have property P} relative to a typicality measure $\mu_{T}$ iff \emph{$\mu_{T}(C\setminus D)\leq \beta$}, where $D\subseteq C$ consists of those elements in $C$ that have property $P$ and $\beta$ is a very small real number (this will be called \emph{typicality II}). Then, again, $D$ is called the \emph{typical set} and $C\setminus D$ the \emph{atypical set}.\\ \noindent For example, in SM it is claimed that $\mu_{E}$ is the typicality measure and that typical initial states show thermodynamic-like behaviour.
That is, $C=\Gamma_{E}, \mu_{T}=\mu_E$, $P$ is the property of showing thermodynamic-like behaviour and $D$ is the set of initial states showing thermodynamic-like behaviour. Measures of mt-dynamical systems, such as the uniform measure $\mu_{Y}$ of Example 1 of the tent map (see Figure 1(a)) and the measure $\mu_Z$ of Example 2 of the logistic map (see Figure 1(b)), are also candidates for typicality measures.\\ \noindent Part and parcel of the notion of typicality measures is that sets of states cannot become larger or smaller when they are evolved under the dynamics of the system. This means that typicality measures should be \emph{invariant under the dynamics: the typicality measure of a set has to equal (i.e., cannot be greater or smaller than) the typicality measure of the evolved set.} Formally, for evolution equations such as those in SM, which are invertible (and hence forward and backward deterministic), this amounts to the requirement that the size of a set of states $A$ should be the same as the size of the image or pre-image of $A$: \begin{equation}\label{I2} \mu_{T}(A)\!=\!\mu_{T}(f_{t}(A))\!=\!\mu_{T}(f_{t}^{-1}(A))\,\,\textnormal{for all}\,\,t\!\in\! \field{R}^{+}_{0}\,\,\textnormal{or}\,\,t\in\!\field{N}_{0}\,\,\textnormal{for all measurable}\,\,A. \end{equation} For evolution equations such as those of Example 1 (the tent map) or Example 2 (the logistic map), which are not invertible (hence only forward deterministic), the requirement is that the size of a set of states $A$ should be the same as the size of the pre-image of $A$: \begin{equation}\label{I1} \mu_{T}(A)=\mu_{T}(f_{t}^{-1}(A))\,\,\textnormal{for all}\,\,t\in \field{R}^{+}_{0}\,\,\textnormal{or}\,\,t\in\field{N}_{0}\,\,\textnormal{for all measurable}\,\,A.
\end{equation} Proponents of the typicality approach also require this (e.g., Goldstein 2001, 15; Lebowitz 1993b, 8).\footnote{This invariance condition is so important in measure-theoretic dynamical systems theory that for a mt-dynamical system it is required that the measure is invariant (see equation~(\ref{invariance})).}\\ \noindent When discussing the invariance condition, it is important to mention \emph{Liouville's equation}, which is a central equation in SM that describes the time-evolution of a probability density $\rho(x)=\rho(p_{1},\ldots,p_{n},q_{1},\ldots,q_{n})$ in phase space: \begin{equation} \frac{\partial \rho}{\partial t}=-\sum_{i=1}^{n}\left(\frac{\partial \rho}{\partial q_{i}}\dot q_{i}+\frac{\partial\rho}{\partial p_{i}}\dot p_{i}\right), \end{equation} where the time-derivatives are denoted by dots. The notion of absolute continuity will be important in what follows. Formally, for measures $\mu_1$ and $\mu_2$ defined on $(X,\Sigma_{X})$, $\mu_2$ is \emph{absolutely continuous w.r.t.\ $\mu_1$} $(\mu_2\ll\mu_1)$ iff \begin{equation}\label{ac} \textnormal{if}\,\, \mu_{1}(A)=0\,\,\textnormal{for a measurable set}\,\,A,\,\,\textnormal{then}\,\,\mu_{2}(A)=0. \end{equation} The stationary solutions of the Liouville equation (i.e., when $\partial \rho/\partial t=0$) give the probability densities which do not change in time and hence the invariant measures absolutely continuous w.r.t.\ the Lebesgue measure.\footnote{According to the Radon-Nikodym theorem, a measure is absolutely continuous w.r.t.\ the Lebesgue measure iff there is a probability density (Nielsen, 1997).} Finding the stationary solutions of the Liouville equation for systems in SM is extremely hard. In general, one does not know the class of all its stationary solutions.
In Section~\ref{New1}, where a justification of typicality measures for typicality I is proposed, it is argued that typicality measures are invariant and absolutely continuous w.r.t.\ the Lebesgue measure.\footnote{The latter claim follows from Premises 2 and 3.} Therefore, the stationary solutions of the Liouville equation are of interest because they are potential candidates for typicality measures.\\ \noindent The next section will outline Pitowsky's (2012) argument, which is the only justification of typicality measures of SM and measure-theoretic dynamical systems known to the author. \section{Pitowsky's Justification of Typicality Measures}\label{Pitowsky} \noindent Pitowsky's argument goes as follows. Consider $\{0,1\}^{n}$ -- the set of all sequences of length $n$ consisting of zeros and ones ($n\in\field{N}$). When the phase space consists of a \emph{finite} number of $N$ elements ($N\in\field{N}$), Pitowsky claims that the typicality measure is easy to find: just count the number of elements of a set and divide it by $N$. Hence, for an arbitrary subset $A$ of $\{0,1\}^{n}$, the typicality measure is $\mu_{n}(A)=|A|/2^{n}$, where $|A|$ is the number of elements of $A$. Pitowsky states that the real difficulty lies in the question of how to determine typicality measures for phase spaces with an \emph{infinite} number of elements (as in SM and measure-theoretic dynamical systems theory). Pitowsky does not provide any argument for why the uniform measure is the correct typicality measure for phase spaces with a finite number of elements. \emph{It is important to point out that this claim can be questioned}. I will argue that even if this claim is accepted, Pitowsky's argument fails.\\ \noindent To tackle the question of how to justify typicality measures for phase spaces with an infinite number of elements, Pitowsky starts with the special case of $\{0,1\}^{\infty}$ -- the set of all infinite sequences of zeros and ones.
A unique measure $\mu_{\infty}$ on $\{0,1\}^{\infty}$ is obtained by approximating a set $B$ in $\{0,1\}^{\infty}$ by sets $A$ in $\{0,1\}^{n}$, $n\in\field{N}$, and then defining the measure of $B$ to be the limit of the measure of the approximating sets $A$. The measure $\mu_{\infty}$, Pitowsky claims, is the correct typicality measure on $\{0,1\}^{\infty}$. \\ \noindent How can one arrive at typicality measures on phase spaces of SM and measure-theoretic dynamical systems theory, which are usually quite different from $\{0,1\}^{\infty}$? Pitowsky (2012) concentrates on the case where \emph{the phase space is the unit interval $[0,1]$}. He claims that all other cases can be reduced to this case because any relevant\footnote{Technically, the condition is that the measure space is a Lebesgue space (for a definition see Petersen 1983).} phase space with a measure on it is isomorphic to the measure space of the unit interval $[0,1]$ with the uniform measure on it. To spell out my criticism of Pitowsky (2012), only a discussion of this special case of the unit interval will be needed. Thus, for simplicity, what follows concentrates on it. (However, I will briefly comment on Pitowsky's appeal to the isomorphism result in footnote~\ref{isom}).\\ \noindent To arrive at a typicality measure on $[0,1]$, Pitowsky first notes that the points of $\{0,1\}^{\infty}$ can be put into one-to-one correspondence with the points of $[0,1]$ by assigning to each sequence $\omega=(\omega_{1},\omega_{2},\ldots)$, $\omega\in \{0,1\}^{\infty}$, the number\footnote{Strictly speaking, the function only establishes a one-to-one correspondence of $\{0,1\}^{\infty}\setminus Q$ and $[0,1]$, where $Q$ is the set of all sequences ending with an infinite number of ones excluding the sequence $(1,1,\ldots)$ consisting only of ones. That $Q$ is excluded is irrelevant because $\mu_{\infty}(Q)=0$ (cf.\ Pitowsky 2012).} \begin{equation} \phi(\omega)=\sum_{i=1}^{\infty}\omega_{i}/2^{i}.
\end{equation} $\phi$ maps the measure $\mu_{\infty}$ on $\{0,1\}^{\infty}$ to a measure on $[0,1]$. Namely, one obtains the uniform measure on $[0,1]$. Consequently, Pitowsky concludes, the uniform measure is the uniquely correct typicality measure on $[0,1]$. \section{Criticism of Pitowsky's Justification}\label{PitowskyCriticism} \noindent Unfortunately, problems arise with Pitowsky's justification. First of all, Pitowsky arrives at a measure on $[0,1]$ by using $\phi$ to map the measure $\mu_{\infty}$ on $\{0,1\}^{\infty}$ to a measure on $[0,1]$. However, \emph{functions other than $\phi$ which provide a one-to-one correspondence between the points of $\{0,1\}^{\infty}$ and $[0,1]$ could be used to map the measure on $\{0,1\}^{\infty}$ to a measure on $[0,1]$.} Indeed, one can arrive at infinitely many different measures on $[0,1]$ by relating $[0,1]$ to the set of sequences of zeros and ones. For example, suppose that to each sequence $\omega=(\omega_{1},\omega_{2},\ldots)$, $\omega\in \{0,1\}^{\infty}$, is assigned the number\footnotemark[\value{footnote}] \begin{equation}\label{logisticC} \psi(\omega)=\sin^{2}\!\left(\frac{\pi}{2}\sum_{i=1}^{\infty}\omega_{i}/2^{i}\right). \end{equation} Then, when $\psi$ is used to map the measure $\mu_{\infty}$ to a measure on $[0,1]$, one obtains the invariant measure of the logistic map shown in Figure 1(b) (Alligood et al.\ 1996). \emph{To sum up, it is unclear which map should be used to arrive at a typicality measure on $[0,1]$.} Pitowsky does not address why $\phi$ should be preferred over other maps, and there seems to be no easy answer to the question which map to prefer. Thus Pitowsky's argument is wanting.\\ \noindent Second, Pitowsky (2012, 17-18) explicitly states that his argument not only applies to statistical mechanical systems but more generally to many other mt-dynamical systems. However, as I will show now, here his argument leads to undesirable conclusions.
According to Pitowsky, his argument applies to the logistic map (Example 2)\footnote{Pitowsky (2012) states that his argument applies to two-symbol Bernoulli systems, and the logistic map is a two-symbol Bernoulli system (cf.\ footnote~\ref{BernoulliSystem}).}, which is defined on the unit interval. Hence, by his argument, the uniform measure would be the correct typicality measure for the logistic map. However, \emph{the measure which dynamical systems theorists choose (i.e., the correct measure according to practice in dynamical systems theory) is different}, namely $\mu_{Z}$ (as shown in Figure 1(b)). Furthermore, recall that, as argued in Section~\ref{Typicality}, typicality measures should be invariant under the dynamics. However, for the logistic map the uniform measure is \emph{not} invariant and hence cannot be a typicality measure. Instead, the standard measure $\mu_{Z}$ \emph{is} invariant. (I will argue in Sections~\ref{New1}, \ref{New2} and~\ref{Uniqueness} that $\mu_{Z}$ is the correct typicality measure).\\ \noindent The general picture is that for some mt-dynamical systems with phase space $[0,1]$ the uniform measure is the correct measure according to practice in dynamical systems theory and is invariant under the dynamics. The tent map (Example 1) is a case in point. \emph{However, for many other mt-dynamical systems with phase space $[0,1]$, such as the logistic map (Example 2), the correct measure according to practice in dynamical systems theory is not the uniform measure and the uniform measure is not invariant}. Yet Pitowsky's argument implies that the uniform measure is always the correct typicality measure. Hence his argument is flawed. This point suggests that when justifying typicality measures not only the phase space (as in Pitowsky's justification) but also the \emph{dynamics} needs to be taken into consideration: as Example 1 (the tent map) and Example 2 (the logistic map) illustrate, different dynamics on $[0,1]$ will lead to different typicality measures.
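The dependence on the coding map can also be made vivid numerically. The following sketch (my own illustration, not part of the argument; the truncation at 53 bits, the sample size and all names are assumptions of the example) samples truncated fair-coin sequences from $\{0,1\}^{\infty}$ and compares the empirical distributions of their images under $\phi$ and under $\psi$: the former is approximately uniform, while the latter matches the arcsine-type CDF $(2/\pi)\arcsin(\sqrt{a})$ of the logistic map's invariant measure $\mu_{Z}$.

```python
import math
import random

random.seed(0)

def phi(bits):
    """Pitowsky's coding map: phi(omega) = sum_i omega_i / 2^i."""
    return sum(b / 2 ** (i + 1) for i, b in enumerate(bits))

def psi(bits):
    """The alternative coding map psi(omega) = sin^2((pi/2) phi(omega))."""
    return math.sin(math.pi / 2 * phi(bits)) ** 2

N = 20_000
# Truncated fair-coin sequences standing in for elements of {0,1}^infinity.
seqs = [[random.randint(0, 1) for _ in range(53)] for _ in range(N)]
xs = [phi(s) for s in seqs]
ys = [psi(s) for s in seqs]

# phi pushes mu_infinity to the uniform measure: P(x <= a) ≈ a.
frac_phi = sum(x <= 0.25 for x in xs) / N
# psi pushes mu_infinity to mu_Z: P(y <= a) ≈ (2/pi) arcsin(sqrt(a)).
frac_psi = sum(y <= 0.25 for y in ys) / N

assert abs(frac_phi - 0.25) < 0.02
assert abs(frac_psi - (2 / math.pi) * math.asin(0.5)) < 0.02
```

The same source measure $\mu_{\infty}$ thus yields two quite different measures on $[0,1]$, depending solely on which one-to-one coding is chosen.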
\\ \noindent Indeed, Pitowsky (2012, 17-18) gives examples of mt-dynamical systems which highlight these problems. Intuitively speaking, \emph{two-symbol Bernoulli systems} are dynamical systems whose solutions can be put into one-to-one correspondence with the set of all infinite sequences arising from independent trials of tossing a coin (i.e., the set $\{0,1\}^{\infty}$).\footnote{ Formally, let $\Omega=\{0,1\}^{\infty}$. \label{BernoulliSystem} That is, $\Omega$ is the set of all infinite sequences $(\omega_{1}, \omega_{2},\ldots)$ with $\omega_{i} \in \{0,1\}$ corresponding to the possible outcomes of an infinite sequence of independent trials of a (possibly biased) coin. Let $\Sigma_{\Omega}$ be the set of all subsets of infinite sequences to which probabilities can be assigned, let $\mu_{\Omega}$ be the normalized measure on $\Omega$ arising from the probability measure of tossing the coin, and let $S:\Omega \rightarrow \Omega$, $S((\omega_{1}, \omega_{2}, \ldots))=(\omega_{2}, \omega_{3}, \ldots)$. $(\Omega,\Sigma_{\Omega},\mu_{\Omega},S)$ is called a \emph{two-symbol Bernoulli shift}. Finally, \emph{two-symbol Bernoulli systems} are defined to be mt-dynamical systems which are isomorphic (from a measure-theoretic perspective) to two-symbol Bernoulli shifts (cf.\ Werndl 2009a).} Because the solutions of two-symbol Bernoulli systems correspond to the set of all infinite sequences $\{0,1\}^{\infty}$, Pitowsky argues that \emph{two-symbol Bernoulli systems and their measures illustrate the correctness of his justification.} However, contrary to Pitowsky, the measure of many two-symbol Bernoulli systems with phase space $[0,1]$ is \emph{not} the uniform measure. For instance, the logistic map (Example 2) is a two-symbol Bernoulli system and its solutions can be put into one-to-one correspondence with $\{0,1\}^{\infty}$ (see equation (\ref{logisticC})) (Werndl 2009a). Yet $\mu_{Z}$ -- the measure of the logistic map -- is not uniform. 
Consequently, \emph{rather than illustrating the correctness of Pitowsky's argument, two-symbol Bernoulli systems highlight its problems}.\footnote{As mentioned above, Pitowsky (2012) claims that cases where the phase space is not $[0,1]$ can be reduced to the case where the phase space is $[0,1]$ because any relevant phase space with a measure on it (technically: any Lebesgue space) is isomorphic to the measure space of $[0,1]$ with the uniform measure on it. I do not think that Pitowsky's appeal to this isomorphism result is successful. Pitowsky's concern is to find, given a phase space $S$ such as $\Gamma_{E}$, the correct typicality measure. But then \emph{this isomorphism result is of no help because it presupposes that there is already a measure defined on $S$}. If just the phase space $S$ is given (this is the problem of concern), then there are usually infinitely many ways of mapping $S$ to the unit interval. Consequently, even if one assumes that the correct typicality measure on $[0,1]$ is the uniform measure, there are infinitely many different possible measures on $S$ arising from mapping the uniform measure on $[0,1]$ to $S$ (and hence infinitely many different possible measures on $S$ such that the resulting measure space is isomorphic to $[0,1]$ with the uniform measure on it). Hence \emph{there is no hope that the isomorphism result can justify that a certain measure such as $\mu_{E}$ on $\Gamma_{E}$ is the correct typicality measure.} \label{isom}}\\ \noindent To conclude, Pitowsky's justification of typicality measures does not fit the bill. The next two sections aim to provide a first tentative proposal of how to justify typicality measures in SM and measure-theoretic dynamical systems theory. Section~\ref{New1} presents the argument for typicality I and Section~\ref{New2} the argument for typicality II (see Section~\ref{Typicality} for a definition of typicality I and typicality II).
The guiding idea is that typicality measures are invariant and are related to the initial probability distributions of interest (in particular, typicality measures should serve as a shortcut to make claims about any initial probability distribution of interest). \section{A New Proposal: Typicality I}\label{New1} First of all, as argued in Section 4, typicality measures are invariant:\vspace{2mm} \\ \noindent\textbf{Premise 1: Typicality measures are invariant under the dynamics.}\vspace{2mm} \\ \noindent It is important to note that \emph{the requirement of invariance alone cannot justify typicality measures.} To stress the point, consider the phase space $W=[0,1]$ with the evolution equation $f_{t}(w)=w$ for all $t\in\field{N}_{0}$. Clearly, for this phase space and evolution equation \emph{all} measures fulfill equation (\ref{I2}). Thus they are all invariant under the dynamics, but they do not agree on which sets are typical/atypical. Hence the condition of invariance alone cannot justify typicality measures. \\ \noindent An \emph{initial probability distribution} gives the probability that a system is prepared in a certain microstate (e.g., that an experimenter prepares a gas in a certain microstate). This paper will assume that these initial probability distributions $p$ are \emph{translation-continuous}, i.e., that \begin{equation} \lim_{\left|\left|\tau\right|\right|\rightarrow 0}p(Tr(A,\tau))=p(A)\,\,\textnormal{for all open sets}\,\,A, \end{equation} where $\left|\left|\tau\right|\right|$ is the standard Euclidean norm in $\field{R}^{n}$ (i.e., the standard Euclidean metric between the vector $\tau$ and the zero-vector in $\field{R}^{n}$) and $Tr(A,\tau)$ is the set $A$ translated by $\tau$, i.e., \begin{equation} Tr(A,\tau)=X\cap\{x+\tau\,\,|\,\,x\in A\}. 
\end{equation} Malament and Zabell (1980) have provided a nice motivation of this condition: they have argued that when one prepares a system in SM, one does not have sufficient accuracy to create any probability measure other than a translation-continuous probability measure.\footnote{ Leeds (1989) has reconstructed Malament and Zabell's argument for translation-continuity as the claim that the microstate of a system in SM is a continuous function of the parameters of the preparation of the system. As van Lith (2001) and Vranas (1998) have pointed out, this claim is problematic: clearly, the microstate of a system not only depends on the parameters of preparation but also on the microstate prior to preparation. Hence similar values of the preparation parameters may well lead to quite different microstates. However, Malament and Zabell seem to have had a different claim in mind, namely that we do not have sufficient accuracy to create probability measures that are not translation-continuous.} Malament and Zabell (1980) have also shown that a probability measure $p$ is translation-continuous iff $p\ll\lambda$, i.e., $p$ is absolutely continuous w.r.t.\ the Lebesgue measure $\lambda$ (see equation (\ref{ac})).\footnote{They proved this result only for the phase space $\field{R}^{n}$. Yet it is clear that it also holds for any arbitrary interval of $\field{R}^{n}$. Although the phase spaces of systems in SM are not intervals, Malament and Zabell (1980) and Vranas (1998) believe that the equivalence of translation-continuity and absolute continuity also holds for the phase spaces in SM, and I follow them in assuming this.} That $p\ll \lambda$ is often endorsed in the physics community (cf.\ Leeds 1989; Maudlin 2007).\\ \noindent Now, different initial probability distributions might arise in SM: systems can be prepared in different macrostates, and different scientists might prepare systems differently.
Thus there is a \emph{class $P$ of initial probability distributions of interest}. Open sets are extended regions of phase space. This motivates the only additional requirement on this class, namely that for any arbitrary open region of phase space one cannot exclude that there might be some way of preparing the system such that there is a positive probability that one ends up in this region. So:\vspace{2mm}\\ \noindent \textbf{Premise 2. The initial probability distributions of interest are a class $P$ of translation-continuous probability distributions where for every open set $A$ there is a $p\in P$ with $p(A)>0$.}\vspace{2mm} \\ Davey (2008), Hemmo and Shenker (2012) and Leeds (1989) have criticised the extant foundational literature on Boltzmannian SM for assuming that the initial probability distribution is $\mu_{E}$ and has to be invariant under the dynamics. They rightly argue that the initial probability distribution can be different from $\mu_{E}$ and may well not be invariant under the dynamics. For instance, Hemmo and Shenker (2012, 11) complain: ``In particular the measure need not be the Lebesgue measure, and may not even be conserved under the dynamics''.\footnote{Note that this remark about the initial probability distribution cannot be made about the typicality measure. The typicality measure is not the initial probability distribution over the states in an experiment. The typicality measure represents the size of sets of states and, as argued, it is part and parcel of such a measure that it is invariant (because sets of states cannot become larger or smaller when they are evolved under the dynamics).} So \emph{the great flexibility regarding the initial probability distribution in SM allowed by Premise 2 should be welcome}.\\ \noindent Concerning dynamical systems theory, note that in certain applications initial probability distributions over the states play a role. The justification of typicality measures advanced here will be relevant whenever Premise 2 holds.
For instance, consider the dynamical system of a frictionless \emph{pendulum}, consisting of a mass $m$ that experiences a single restoring force $F$ proportional to the displacement of the mass (this is the harmonic oscillator). Suppose that one does not have sufficient accuracy to create any other initial probability distribution than a translation-continuous one (which has some plausibility). Further, since in experiments one has the freedom to prepare the system in all kinds of different initial positions and initial velocities, it seems reasonable that for any arbitrary open region of phase space there is a probability distribution of interest which assigns positive probability to this region. Then the uniform measure of the pendulum can be justified as typicality measure.\\ \noindent The next two premises are motivated by the idea that \emph{typicality measures should be related to the initial probability distributions of interest}. More specifically, first, it seems reasonable to require that if a set of states has probability zero for all initial probability distributions of interest, the set is atypical: \vspace{2mm} \\ \noindent \textbf{Premise 3. If $p(A)=0$ for all probability distributions of interest for some measurable $A$, then $T(A)=0$.}\\ (Equivalent formulation: If $p(A)=1$ for all probability distributions of interest for some measurable $A$, then $T(A)=1$).\\ \noindent Second, the typicality measure should serve as a shortcut to make claims about the likely behaviour of the system for any possible initial probability distribution. That is, it will be required that whenever a set of states is typical, it has probability one for all initial probability distributions of interest: \vspace{2mm} \\ \noindent \textbf{Premise 4.
If $T(A)=1$ for some measurable $A$, then $p(A)=1$ for all probability distributions of interest.} \\ (Equivalent formulation: If $T(A)=0$ for some measurable $A$, then $p(A)=0$ for all probability distributions of interest).\\ \noindent Also, the following premise is adopted:\vspace{2mm} \\ \textbf{Premise 5. Whenever a measure fulfills Premises 1, 3 and 4 (for the probability distributions as characterised by Premise 2), then it is a typicality measure}.\vspace{2mm} \\ It will be shown\footnote{This is shown when proving Theorem 1 (see Appendix 1).} that measures satisfying Premises 1, 3 and 4 agree on which sets are typical or there is a unique measure satisfying these premises. Hence it seems reasonable to take these conditions as sufficient for typicality measures.\\ \noindent Finally, let me specify the systems under consideration:\vspace{2mm} \\ \textbf{Premise 6. Let a statistical mechanical system $(\Gamma_{E},\Sigma_{\Gamma_{E}},\phi_{t},\mu_{E})$ be given, or let a dynamical system $(X,\Sigma_{X},\phi_{t},\mu_{X})$ be given where $X$ is an interval in $\field{R}^{n}$, $\mu_{X}\ll \lambda$ and $\lambda\ll \mu_{X}$.}\vspace{2mm} \\ For many standardly used measures $\mu_{X}$ in measure-theoretic dynamical systems theory $\mu_{X}\ll \lambda$ and $\lambda\ll \mu_{X}$. For instance, it is clear that this condition holds for the measure $\mu_{Y}$ of the tent map (Example 1) and the measure $\mu_{Z}$ of the logistic map (Example 2).\\ \noindent The following result is derivable from these premises (for the proof see Appendix 1):\vspace{2mm} \\ \textbf{ Conclusion (Theorem 1). The measure $\mu_{E}$ of systems in statistical mechanics/the measure $\mu_{X}$ of dynamical systems is a typicality measure. 
Furthermore, any other typicality measure will agree with $\mu_{E}$/$\mu_{X}$ on which sets are typical.} \vspace{2mm} \\ This result aims to justify typicality measures $\mu_{E}$ in SM and typicality measures $\mu_{X}$ of dynamical systems such as the tent map (Example 1) or the logistic map (Example 2). $\mu_{E}$ and $\mu_{X}$ are typicality measures. They may not be the unique typicality measures, but this does not matter because any other typicality measure agrees on which sets are typical. Note that in my argument typicality is different from probability: the typicality measures do not refer to probabilities in the philosophical sense (they do not refer to, e.g., ontic probabilities describing the distributions which arise in experiments or to degrees of belief).\footnote{Typicality measures are what are formally called ``probability measures'' (see Section~\ref{Typicality} for a formal definition). However, just because they are formally probability measures, this does not mean that they refer to probabilities in the philosophical sense. To give an example: the Lebesgue measure on [0,1] is formally a probability measure, but it is often interpreted as ``length'', which is not probabilistic in the philosophical sense. Similarly, typicality measures which count states do not refer to probabilities. When one says, e.g., that 999/1000 of the balls are red, this statement does not refer to probabilities in the philosophical sense.}\\ \noindent Let me now present a \emph{cost-benefit analysis} of the argument. As outlined when introducing Premise $2$, it can be motivated that the initial probability distributions in SM are translation-continuous. Also, it seems plausible that for any arbitrary open region of phase space one cannot exclude that it might be possible to prepare the system in such a way that there is a positive probability of ending up in this region. 
Still, these two requirements are not incontestable because the knowledge about these initial probability distributions is limited. Premise 5 (that measures fulfilling Premises 1, 3 and 4 are typicality measures) seems reasonable because all such typicality measures agree on which sets are typical. Still, some might think that these premises are only necessary and not sufficient for typicality measures. Thus they might want to add further requirements (in this case my argument is still relevant: it shows that any typicality measure fulfilling these additional requirements will agree with $\mu_{E}$/$\mu_{X}$ about judgements of typicality).\\ \noindent Concerning the benefits, first, there is a conceptual gain from knowing that the choice of the standard typicality measures can be motivated from Premises $1$-$6$. Second, there is an empirical gain because the typicality measures are related to the initial probability distributions of interest (Premises 3 and 4). In particular, claims about typical behaviour translate into claims about the likely behaviour of the system for any initial probability distribution of interest. Without such a connection, the question would inevitably arise of how typicality measures are connected to experiments. Third, according to my argument, a wide range of initial probability distributions are allowed (see Premise 2). This is a strength because, as argued above, different probability distributions may arise in different contexts, and my argument is consistent with this (this is another empirical gain). The empirical gains in particular might make the argument attractive for physicists as well. \section{A New Proposal: Typicality II}\label{New2} Let me now outline the analogous argument for the less stringent notion of typicality II. The first premise remains the same:\vspace{2mm} \\ \noindent\textbf{Premise 1$^{*}$.
Typicality measures are invariant under the dynamics.}\\ \noindent Recall that the preparation of a system in SM is described by an initial probability distribution. Here we assume that these initial probability distributions $p$ are $\delta*\kappa$-\emph{translation-close}, i.e., that for all open sets $A$ in $X$: \begin{equation}\label{transclose} \left|\left|\tau\right|\right|<\delta\rightarrow |p(Tr(A,\tau))-p(A)|<\kappa, \end{equation} where $\delta> 0$ and $0<\kappa<1$ are very small real numbers. The condition of translation-closeness can be motivated in a similar fashion as translation-continuity. That is, when one prepares a system in SM, one does not have sufficient accuracy to create any other probability measure than a translation-close probability measure.\footnote{Vranas (1998) advances another argument for translation-closeness. According to Vranas, measurements correspond to coarse-graining the phase space into finitely many sets (called phase cells) whose states lead to the same measured value. The idea is that in practice all that matters are coarse-grained probability measures (where a measure is assigned only to unions of phase cells) and all that is needed is an argument that any coarse-grained probability measure is translation-close. Vranas's approach does not work because \emph{there are irresolvable technical difficulties}. When the phase space is coarse-grained, then $f_{t}(A)$, where $A$ is a phase cell and $t\in\field{R}^{+}$ or $\field{N}$, will often not be a phase cell again (cf.\ Werndl 2009a). However, the definition of mt-dynamical systems requires that $f_{t}(A)$ is an element of the phase space for all phase cells $A$ and all $t$ (because otherwise the $f_{t}$s are not functions from the phase space to the phase space). Also, because $f_{t}(A)$ and $f^{-1}_{t}(A)$ may not be phase cells again, the conditions of invariance (equations (\ref{I2}) and (\ref{I1})) cease to be applicable. 
Furthermore, $Tr(A,\tau)$ will usually not be an element of the coarse-grained phase space, implying that the notion of translation-continuity ceases to be applicable. Problems arise even if these technical difficulties are set aside: Vranas claims that as the measurement precision increases, the maximum measure of a phase cell will become smaller. From this he concludes that sufficiently small displacements of sets only lead to a very small change of the coarse-grained measure (implying translation-closeness). However, as van Lith (2001) has pointed out, this conclusion is unwarranted because the decreasing cell size is accompanied by a compensating increase of the number of cells that are added or deleted because of the displacement (hence it is unclear whether the total size of the added or deleted phase cells will become smaller).} Vranas (1998) has shown a result analogous to Malament and Zabell's equivalence of the conditions of absolute continuity and translation-continuity. To state it, I first need to introduce a definition. The measure \emph{$\nu$ is $\varepsilon_{1}/\varepsilon_{2}$-continuous w.r.t.\ the measure $\mu$} (where $\nu$ and $\mu$ are both defined on $(X,\Sigma_{X})$) $(\nu_{\varepsilon_{2}}\!\!\ll\!\mu_{\varepsilon_{1}})$ iff: \begin{equation} \textnormal{if}\,\,\mu(A) \leq \varepsilon_{1}\,\,\textnormal{for a measurable set}\,\,A, \,\,\textnormal{then}\,\,\nu(A)\leq\varepsilon_{2}. \end{equation} The result shown by Vranas is that if $p$ is $\delta*\kappa$-translation-close, $p$ is $\varepsilon_{1}/\varepsilon_{2}$-continuous w.r.t.\ the Lebesgue measure for $\kappa<\varepsilon_{2}<1$ and $\varepsilon_{1}<(\varepsilon_{2}-\kappa)\delta_{n}$, where $\delta_{n}$ is the volume of the $n$-sphere with radius $\delta$.\footnote{Vranas proved this result only for the phase space $\field{R}^{n}$, but it is clear from the proof that it also holds for any arbitrary interval of $\field{R}^{n}$.
Although the phase spaces in SM are not intervals, I follow Malament and Zabell (1980) and Vranas (1998) in assuming that this result also holds for systems in SM.}\\ \noindent As before, different initial probability distributions might arise in SM, and thus there is a \emph{class $P$ of initial probability distributions of interest}. Open sets (and especially open sets whose Lebesgue measure is not very small) are extended regions of phase space. This motivates the only additional requirement on this class, namely that for an open region which is not very small, one cannot exclude that there might be some way of preparing the system such that the probability of ending up in this region is not very small. Formally, if $\lambda(A)>\Psi$ for an open set $A$ for a very small $\Psi>0$, then $P$ includes an initial probability distribution $p$ with $p(A)>\Psi$. This condition will be required to hold for $\Psi=\varepsilon_{2}$ and $\Psi=\varepsilon_{3}$. So: \vspace{2mm}\\ \noindent \textbf{Premise 2$^{*}$. The initial probability distributions of interest are a class $P$ of translation-close probability distributions where if $\lambda(A)>\Psi$ for an open set $A$, then there is a $p\in P$ with $p(A)>\Psi$ (where $\Psi=\varepsilon_{2}$ or $\varepsilon_{3}$).}\vspace{2mm} \\ Note that \emph{this great flexibility regarding the initial probability distribution in SM allowed by Premise 2$^{*}$ should be welcome}. \\ \noindent Concerning dynamical systems theory, initial probability distributions over the states also play a role in certain applications. The justification of typicality measures advanced here is relevant whenever Premise 2$^{*}$ holds. For instance, consider again the dynamical system of a frictionless \emph{pendulum}, consisting of a mass $m$ that experiences a single restoring force $F$ proportional to the displacement of the mass.
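In symbols (my gloss; these are standard harmonic-oscillator facts rather than anything specific to the argument):

```latex
% Harmonic-oscillator phase space (illustrative sketch).
\begin{equation*}
F(x)=-kx, \qquad H(x,p)=\frac{p^{2}}{2m}+\frac{kx^{2}}{2}, \qquad
\Gamma_{E}=\{(x,p)\,:\,H(x,p)=E\},
\end{equation*}
```

so the energy surface $\Gamma_{E}$ is an ellipse in the two-dimensional phase space, and the uniform (microcanonical) measure on it is the candidate typicality measure at issue.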
Suppose that one does not have sufficient accuracy to create any other initial probability distribution than a translation-close one (which has some plausibility). Further, since in experiments one has the freedom to prepare the system in all kinds of different initial positions and initial velocities, it seems plausible that for an open region of phase space that is not very small, there is an initial probability distribution which assigns a non-negligible probability to this region. Then the uniform measure can be justified as typicality measure.\\ \noindent The next two premises \emph{relate typicality to the initial probability distributions of interest}. First, it seems reasonable to require that if a set of states has very small probability for all initial probabilities of interest, then the set is atypical: \vspace{2mm} \\ \noindent \textbf{Premise 3$^{*}$. If $p(A)\leq\varepsilon_{2}$ for all probability distributions of interest for some measurable $A$, then $T(A)\leq\beta$ (for very small $\varepsilon_{2}, \beta$; $0\leq\varepsilon_{2}, \beta$).}\vspace{2mm} \\ \noindent Second, because the typicality measure should serve as a shortcut, it is required that if a set of states is typical, it is of very high probability for all initial probability distributions of interest: \vspace{2mm} \\ \noindent \textbf{Premise 4$^{*}$. If $T(A)> 1-\beta$ for some measurable $A$, then $p(A)> 1-\varepsilon_{3}$ for all probability distributions of interest (for very small $\varepsilon_{3}, \beta$; $0\leq\beta<\varepsilon_{3}$).}\\ \noindent As before, the following assumption is made:\vspace{2mm} \\ \textbf{Premise 5$^{*}$. 
Whenever a measure fulfills Premises 1$^{*}$, 3$^{*}$ and 4$^{*}$ (for the probability distributions as characterised by Premise 2$^{*}$), it is a typicality measure}.\vspace{2mm} \\ It will later be shown that measures satisfying Premises 1$^{*}$, 3$^{*}$ and 4$^{*}$ can be interchangeably used as typicality measures.\footnote{This is shown when proving Theorem 2 (see Appendix 2).} Hence it seems reasonable to take these conditions as sufficient for typicality measures.\\ \noindent Let me specify the systems under consideration:\vspace{2mm} \\ \textbf{Premise 6$^{*}$. Let a statistical mechanical system $(\Gamma_{E},\Sigma_{\Gamma_{E}},\phi_{t},\mu_{E})$ be given, or let a dynamical system $(X,\Sigma_{X},\phi_{t},\mu_{X})$ be given where $X$ is an interval in $\field{R}^{n}$, $\mu_{X \beta}\!\!\ll\!\lambda_{\varepsilon_{2}}$ and $\lambda_{\varepsilon_{4}}\!\!\ll\!\mu_{X \beta}, (\varepsilon_{2}<\beta; \beta<\varepsilon_{4}; \varepsilon_{4}\leq\varepsilon_{3}+\varepsilon_{1}-\varepsilon_{2})$.}\vspace{2mm} \\ For many of the standard measures $\mu_{X}$ in measure-theoretic dynamical systems theory $\mu_{X \beta}\!\!\ll\!\lambda_{\varepsilon_{2}}$ and $\lambda_{\varepsilon_{4}}\!\!\ll\!\mu_{X \beta}$ for very small $\varepsilon_{2},\beta$ and $\varepsilon_{4}$. For instance, this trivially holds for the measure $\mu_{Y}$ of the tent map (Example 1), and it is easy to see that it holds for the measure $\mu_{Z}$ of the logistic map (Example 2).\footnote{Let $\mu_{X}(A)\leq\beta$ for an arbitrary $\beta$. Then $\beta\geq\mu_{X}(A)\geq \int_{A}\omega^{-1} d\lambda=\omega^{-1}\lambda(A)$, where $\omega^{-1}=\min_{x}\frac{1}{\pi\sqrt{x(1-x)}}$. Hence $\lambda_{\omega\beta}\!\!\ll\!\mu_{X \beta}$. Conversely, let $\lambda(A)\leq\beta$ for an arbitrary $\beta$. Then $\mu_{X}(A)\leq\rho$, where $\rho=\int_{0}^{\beta/2}\frac{1}{\pi\sqrt{x(1-x)}}d\lambda+ \int_{1-\beta/2}^{1}\frac{1}{\pi\sqrt{x(1-x)}}d\lambda$ (among sets with $\lambda(A)\leq\beta$, the $\mu_{X}$-measure is maximal for sets concentrated near the endpoints, where the density $\frac{1}{\pi\sqrt{x(1-x)}}$ is largest). Thus $\mu_{X\rho}\!\!\ll\!\lambda_{\beta}$.}\\ \noindent Finally, I also adopt the following premise:\vspace{2mm} \\ \textbf{Premise 7$^{*}$. Assume that $\nu_{\beta_{2}}\!\!\ll\!\mu_{\beta_{1}}$ and $\mu_{\beta_{4}}\!\!\ll\!\nu_{\beta_{3}}$ (for very small $\beta_{i}$, $0<\beta_{i}; 1\leq i\leq 4$) and that $\mu$ and $\nu$ both satisfy Premises 1$^{*}$, 3$^{*}$ and 4$^{*}$. Then, pragmatically speaking, $\mu$ and $\nu$ can be interchangeably used as typicality measures.}\vspace{2mm} \\ This can be motivated as follows. The antecedent of Premise 7$^{*}$ implies that claims about typicality in terms of $\mu$ are translatable into claims about typicality in terms of $\nu$, and vice versa: If a set $D$ is typical with respect to $\mu$, i.e.\ $\mu(D)> 1-\beta_{1}$, $D$ is typical w.r.t.\ $\nu$ because $\nu(D)\geq 1-\beta_{2}$. Conversely, if a set $D$ is typical with respect to $\nu$, i.e.\ $\nu(D)> 1-\beta_{3}$, $D$ is typical w.r.t.\ $\mu$ because $\mu(D)\geq 1-\beta_{4}$. That such measures can, pragmatically speaking, both be used as typicality measures seems to be widely endorsed by proponents of the typicality approach, e.g., Goldstein (2001) or Maudlin (2007).\\ \noindent The following result is derivable from these premises:\vspace{2mm} \\ \textbf{ Conclusion (Theorem 2). The measure $\mu_{E}$ of systems in statistical mechanics/the measure $\mu_{X}$ of dynamical systems is a typicality measure. Furthermore, for any other typicality measure $T$, pragmatically speaking, $\mu_{E}$ and $T$/$\mu_{X}$ and $T$ can be interchangeably used as typicality measures.}\vspace{2mm} \\ This result aims to justify typicality measures $\mu_{E}$ in SM and typicality measures $\mu_{X}$ of dynamical systems such as the tent map (Example 1) or the logistic map (Example 2). $\mu_{E}$ and $\mu_{X}$ are typicality measures. They may not be the unique typicality measures, but this does not matter because any other typicality measure can be interchangeably used as typicality measure.
Note that, again, the typicality measures do not refer to probabilities in the philosophical sense.\\ \noindent The cost-benefit analysis is similar to that for typicality I. Concerning Premise 2$^{*}$, it seems plausible that the initial probability distributions are translation-close and that for an open region which is not very small, one cannot exclude that there might be some way of preparing the system such that the probability of ending up in this region is not very small. Still, these two requirements are not incontestable because the knowledge about these initial probability distributions is limited. Premise 5$^{*}$ seems reasonable because it follows that all such typicality measures can, pragmatically speaking, be interchangeably used as typicality measures. Still, some might want to add further requirements for something to count as a typicality measure (then my argument is still relevant: it shows that measures fulfilling these additional requirements can, pragmatically speaking, be interchangeably used with $\mu_{E}$/$\mu_{X}$ as typicality measures). The motivation of Premise $7^{*}$ is that claims about typicality in terms of $\mu$ are translatable into claims about typicality in terms of $\nu$, and vice versa. Still, $\mu$ and $\nu$ can disagree on the size assigned to sets, and some might not want to allow this.\\ \noindent Regarding the benefits, first, there is a conceptual gain from knowing that the choice of the standard typicality measures follows from Premises $1^{*}$-$7^{*}$. Second, there is an empirical gain because the typicality measures are related to the initial probability distributions of interest (Premises $3^{*}$ and $4^{*}$). Third, it is a strength that a wide range of probability distributions are allowed (see Premise 2$^{*}$), and this is another empirical gain. These empirical gains might make the argument attractive for physicists.
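The condition of translation-closeness (equation (\ref{transclose})) can also be probed numerically. The following sketch (my own illustration; the two densities and the test interval are hypothetical choices, not taken from the text) estimates $|p(Tr(A,\tau))-p(A)|$ on $[0,1]$ for a smooth density and for a near-point-mass: the former changes only on the order of $\tau$, the latter by almost $1$.

```python
# Numerical probe of translation-closeness on [0, 1].
# Hypothetical illustration: a smooth density vs. a near-point-mass.

def prob(density, a, b, n=20_000):
    """p([a, b]) under `density`, by midpoint-rule integration."""
    a, b = max(a, 0.0), min(b, 1.0)
    if b <= a:
        return 0.0
    h = (b - a) / n
    return sum(density(a + (i + 0.5) * h) for i in range(n)) * h

def shift_change(density, a, b, tau):
    """|p(Tr(A, tau)) - p(A)| for the interval A = [a, b]."""
    return abs(prob(density, a + tau, b + tau) - prob(density, a, b))

def smooth(x):
    """Smooth density p(x) = 2x: small shifts barely change probabilities."""
    return 2.0 * x

def spike(x):
    """Near-point-mass: all probability mass sits on [0.45, 0.451]."""
    return 1000.0 if 0.45 <= x < 0.451 else 0.0

A = (0.445, 0.452)   # a small interval containing the spike
tau = 0.01           # a small displacement

print(shift_change(smooth, *A, tau))  # tiny: of order 2 * tau * (b - a)
print(shift_change(spike, *A, tau))   # almost 1: the spike is pushed out of A
```

This is, of course, only a heuristic: translation-closeness quantifies over all open sets, while the sketch checks a single interval.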
\section{Uniqueness Results}\label{Uniqueness} \noindent In this section I show that if a further assumption is added to the argument, then the typicality measure is unique (typicality I)/unique from a pragmatic perspective (typicality II). This further assumption is the dynamical condition of ergodicity (typicality I)/epsilon-ergodicity (typicality II). An mt-dynamical system $(X,\Sigma_{X},\mu_{X},f_{t})$ is \emph{ergodic} iff there is no measurable set $A$ in $X$, $0<\mu_{X}(A)<1$, such that $f_{t}(A)=A$ for all $t\in\field{R}^{+}$ or $\field{N}$. An mt-dynamical system $(X,\Sigma_{X},\mu_{X},f_{t})$ is epsilon-ergodic iff it is ergodic on a set of measure $1-\varepsilon_{0}$, for a very small $\varepsilon_{0}\geq 0$.\footnote{Here it is always assumed that $\varepsilon_{0}$ is negligible compared to the measure of any of the macroregions, i.e., that $\varepsilon_{0}/\min_{i}(\mu_{E}(\Gamma_{M_{i}}))=\Theta$, for a very small $\Theta\geq 0$.\label{complication}} Formally: $(X,\Sigma_{X},\mu_{X},f_{t})$ is \emph{$\varepsilon_{0}$-ergodic}, where $0\leq\varepsilon_{0}<1$ is a real number, iff there is a measurable set $V\subset X$, $\mu_{X}(V)=1-\varepsilon_{0}$, with $f_{t}(V)\subseteq V$ for all $t\in\field{R}^{+}$ or $\field{N}$ such that the mt-dynamical system $(V,\Sigma_{V},\mu_{V},f_{t}^{V})$ is ergodic where $\Sigma_{V}=\{V\cap A; A\in\Sigma_{X}\}$, $\mu_{V}(\cdot)=\mu_{X}(\cdot)/\mu_{X}(V)$ for any set in $V$, and $f_{t}^{V}$ is $f_{t}$ restricted to $V$. $(X,\Sigma_{X},\mu_{X},f_{t})$ is \emph{epsilon-ergodic} iff it is $\varepsilon_{0}$-ergodic for a very small $\varepsilon_{0}\geq 0$.\\ \noindent To state the uniqueness results, one more definition is needed. The measures $\mu_{X}$ and $T$ both defined on $(X,\Sigma_{X})$ are \emph{$\theta$-close}, where $0<\theta< 1$ is a very small real number, iff: \begin{equation}\label{eclose} |\mu_{X}(A)-T(A)|<\theta\,\,\,\textnormal{for all measurable sets}\,\,A\,\,\,\textnormal{in}\,\,X.
\end{equation} Note that for $\theta$-close measures $\mu$ and $\nu$, $\nu_{\beta_{1}+\theta}\ll\mu_{\beta_{1}}$ and $\mu_{\beta_{3}+\theta}\ll\nu_{\beta_{3}}$ for any $\beta_{1},\beta_{3}\geq 0$. Hence, pragmatically speaking, they can be interchangeably used as typicality measures (cf.~Premise~7$^{*}$). Assuming that one does not care about differences of typicality measures of size $\theta$, $\theta$-close measures are \emph{identical from a pragmatic perspective}. \vspace{2mm} \\ \noindent \textbf{Theorem 3. Suppose that Premises 1-6 hold and that the statistical mechanical system $(\Gamma_{E},\Sigma_{\Gamma_{E}},\phi_{t},\mu_{E})$/the dynamical system $(X,\Sigma_{X},\phi_{t},\mu_{X})$ is ergodic. Then $\mu_{E}$/$\mu_{X}$ is the unique typicality measure.} \vspace{2mm} \\ For a proof of Theorem 3, see Appendix 3. \vspace{2mm} \\ \noindent \textbf{Theorem 4. Suppose that Premises 1$^{*}$-7$^{*}$ hold and that the statistical mechanical system $(\Gamma_{E},\Sigma_{\Gamma_{E}},\phi_{t},\mu_{E})$/the dynamical system $(X,\Sigma_{X},\phi_{t},\mu_{X})$ is $\beta$-ergodic. Then any other typicality measure $T$ is $\varepsilon_{6}$-close to $\mu_{E}$/$\mu_{X}$, where $\varepsilon_{6}=2(\beta+\varepsilon_{4}-\varepsilon_{1})+\beta/(1-\beta)$.} \vspace{2mm} \\ For a proof of Theorem 4, see Appendix 4. There are many dynamical systems, including the tent map (Example 1) and the logistic map (Example 2) and generally all chaotic systems, which are ergodic/epsilon-ergodic (cf.\ Alligood, Sauer and Yorke 1996; Werndl 2009b). Hence for these systems Theorem 3/Theorem 4 shows that the typicality measure is unique/unique from a pragmatic perspective.\\ \noindent In SM ergodicity and epsilon-ergodicity have a long and notorious history. Boltzmann already invoked the notion of ergodicity in some of his arguments (Uffink 2007).
In the contemporary literature on the foundations of SM several papers have defended the claim, or have assumed in their arguments, that systems in SM are ergodic or epsilon-ergodic (e.g., Frigg and Werndl 2012; Malament and Zabell 1980; Pitowsky 2012; Vranas 1998). One of the most important mathematical results so far about ergodicity is the proof of the \emph{Boltzmann-Sinai hypothesis} -- that the motion of $n$ hard spheres on the two or three-dimensional torus is ergodic (Sim\'{a}nyi 2010). The relevant mathematics is so difficult that for more realistic systems than hard spheres the knowledge is limited and largely based on numerical simulations. Because of this, some argue that one simply does not know yet whether more realistic systems in SM are ergodic or epsilon-ergodic (cf.~Uffink 2007). In this paper no commitment to ergodicity or epsilon-ergodicity is needed. Here it is just important that for systems such as hard spheres Theorem 3/Theorem 4 demonstrates that the typicality measure is unique/unique from a pragmatic perspective. Moreover, if some more realistic systems in SM turn out to be ergodic/epsilon-ergodic (which is certainly possible), Theorem 3/Theorem 4 shows that the typicality measure is unique/unique from a pragmatic perspective.\\ \noindent As a side remark, it should be noted that some have argued that the condition of ergodicity or epsilon-ergodicity guarantees thermodynamic-like behaviour (cf.\ Frigg and Werndl 2011). First, consider ergodicity, which is equivalent to the condition that the portion of time an arbitrary solution stays in $A$ equals the measure of $A$. Formally: $L_{A}(x)=\mu_{E}(A)$ for all initial conditions $x\in\Gamma_{E}$ except, perhaps, a set $B$ with $\mu_{E}(B)=0$, where \begin{equation} L_{A}(x)=\lim_{t\rightarrow \infty}\frac{1}{t}\int_{0}^{t}\chi_{A}(\phi_{\tau}(x))d\tau \end{equation} (here $\chi_{A}(x)$ is the characteristic function\footnote{That is, $\chi_{A}(x)=1$ for $x\in A$ and $0$ otherwise.} of $A$).
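The time-average condition can be checked numerically for the logistic map (Example 2). The following sketch (my own illustration, with the usual floating-point caveats; the initial condition and iteration count are arbitrary choices) estimates the fraction of time an orbit spends in $A=[0,1/4]$; it comes out close to $\mu_{Z}(A)=\frac{2}{\pi}\arcsin\frac{1}{2}=\frac{1}{3}$ rather than the Lebesgue value $\frac{1}{4}$.

```python
import math

def time_fraction(x0, n_steps, a=0.0, b=0.25):
    """Fraction of time the logistic-map orbit of x0 spends in [a, b]."""
    x, hits = x0, 0
    for _ in range(n_steps):
        if a <= x <= b:
            hits += 1
        x = 4.0 * x * (1.0 - x)   # the logistic map of Example 2
    return hits / n_steps

# mu_Z([0, t]) = (2/pi) * arcsin(sqrt(t)) for the density 1/(pi*sqrt(x(1-x)))
mu_A = (2.0 / math.pi) * math.asin(math.sqrt(0.25))   # = 1/3

frac = time_fraction(0.2, 1_000_000)
print(frac, mu_A)   # the time average tracks mu_Z(A) = 1/3, not lambda(A) = 1/4
```

For a generic initial condition the Birkhoff average converges to $\mu_{Z}(A)$, which is exactly what ergodicity w.r.t.\ $\mu_{Z}$ predicts.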
Consider an initial condition $x\in\Gamma_{E}\setminus B$. Then the dynamics will carry $x$ to $M_{eq}$ and will keep it there most of the time. The system will move out of the equilibrium region every now and then and visit non-equilibrium regions. Yet since these regions are small compared to the equilibrium region, it will only spend a small fraction of time there. Therefore, the entropy is close to its maximum most of the time and fluctuates away from it only occasionally. Hence if $\mu_{E}$ is interpreted as probability/typicality measure, thermodynamic-like behaviour is of probability $1$/typical (typicality I). Concerning $\varepsilon$-ergodic systems, note that such a system is ergodic on $V$. Consequently, it shows thermodynamic-like behaviour for the initial conditions in $V$. Then, by the same moves as explained above for ergodicity, one finds that thermodynamic-like behaviour is of probability $1-\varepsilon$/typical (typicality II). \section{Conclusion}\label{Conclusion} A popular view in contemporary Boltzmannian statistical mechanics is to interpret the measures as typicality measures, i.e.\ as representing the relative size of sets of states. In measure-theoretic dynamical systems theory measures can similarly be interpreted as typicality measures. However, a justification of why these measures are a good choice of typicality measures is missing, and this paper attempted to help fill this gap.\\ \noindent The paper first criticised Pitowsky (2012) -- the only justification of typicality measures known to the author. Pitowsky's argument goes as follows. Consider the set $\{0,1\}^{\infty}$ of all infinite sequences of zeros and ones. By approximation with the measures defined on the sets $\{0,1\}^{n}$ of finite sequences of zeros and ones, a unique measure $\mu_{\infty}$ is obtained on $\{0,1\}^{\infty}$.
Let $\phi$ be the map which assigns to each infinite sequence $\omega$ of zeros and ones the number in the unit interval whose binary expansion is $\omega$. When $\phi$ is used to map the measure $\mu_{\infty}$ on $\{0,1\}^{\infty}$ to a measure on the unit interval, one obtains the uniform measure. Hence, Pitowsky concludes, the uniform measure is the uniquely correct typicality measure. This paper argued that Pitowsky's argument is problematic. It is unclear why $\phi$ and not another function is used to map the measure $\mu_{\infty}$ to the unit interval. Furthermore, there are counterexamples because for many systems on the unit interval the uniform measure is not the standard measure used and is not invariant.\\ \noindent The paper then provided a first tentative proposal of how to justify typicality measures for two notions, namely typicality I (atypical means measure zero) and typicality II (atypical means very small measure). The main premises of the argument are as follows. The initial probability distributions of interest are translation-continuous (for typicality I) or translation-close (for typicality II). A typicality measure should be related to these initial probability distributions in two ways: First, if a set is of probability zero (for typicality I) or of very small probability (for typicality II) for all probability distributions, then it is atypical. Second, if a set is typical, it is of probability one (for typicality I) or of very high probability (for typicality II) for all probability distributions. Furthermore, typicality measures should be invariant. The conclusions are two theorems which show that the standard measures of statistical mechanics and dynamical systems theory are typicality measures. There may be other typicality measures, but these agree on which sets are typical (for typicality I) or can be interchangeably used as typicality measures (for typicality II).
Finally, two theorems were presented, showing that if systems are ergodic (for typicality I) or epsilon-ergodic (for typicality II), the typicality measure is unique (for typicality I) or unique from a pragmatic perspective (for typicality II). \section{Appendix} \subsection{Proof of Theorem 1} First of all, consider: \begin{equation}\label{raus} \textnormal{if}\,\,\lambda(A)>0,\,\,\textnormal{then there is a }\,\,p\in P\,\,\textnormal{with}\,\,p(A)>0. \end{equation} Later in the proof the result is needed that condition~(\ref{raus}) is implied by the condition that if $A$ is an open set, there is a $p\in P$ with $p(A)>0$. This follows because if $\lambda(A)>0$, $A$ contains an open subset $O$ with $\lambda(O)>0$ (since $\lambda$ is regular). Second, note that $(\Gamma_{E},\Sigma_{\Gamma_{E}},\phi_{t},\mu_{E})$ is a dynamical system. $\mu_E$ is defined as: \begin{equation}\label{MM} \mu_{E}(A)=\int_{A}\mid\nabla_{x} H\mid^{-1}d\lambda/\int_{\Gamma_{E}}\mid\nabla_{x} H\mid^{-1}d\lambda. \end{equation} Hence it follows (from the Radon-Nikodym theorem) that $\mu_{E}\ll\lambda$. Let $\alpha=\min_{x}(\mid\!\!\nabla_{x}H\!\!\mid^{-1})/\int_{\Gamma_{E}}\mid\!\!\nabla_{x} H\!\!\mid^{-1}d\lambda$, $\alpha>0$ (the minimum exists because $\Gamma_{E}$ is compact). Then if $\mu_{E}(A)=0$, $0=\mu_{E}(A)\geq \alpha \int_{A}d\lambda=\alpha\lambda(A)$, and hence $\lambda(A)=0$. Therefore, also $\lambda\ll\mu_{E}$. Thus it suffices to focus on the case where a dynamical system $(X,\Sigma_{X},\phi_{t},\mu_{X})$ is given where $\mu_{X}\ll \lambda$ and $\lambda\ll\mu_{X}$. \\ \noindent The definition of a dynamical system requires that $\mu_{X}$ is invariant, and hence $\mu_{X}$ satisfies Premise 1. Next, assume that $p(A)=0$ for all probability distributions of interest for some measurable $A$. Then also $\lambda(A)=0$ (from Premise 2 and condition~(\ref{raus})). Because $\mu_{X}\ll\lambda$ (from Premise 6), $\mu_{X}(A)=0$. Consequently, $\mu_{X}$ satisfies Premise 3. 
Note that Premise 4 is equivalent to the claim that $p\ll T$ for all probability distributions $p$ of interest. Because $\lambda\ll\mu_{X}$ (from Premise 6), $p\ll\mu_{X}$ for all probability distributions of interest (from Premise 2). Thus $\mu_{X}$ satisfies Premise 4. Because $\mu_{X}$ satisfies Premises 1, 3 and 4, it is a typicality measure (from Premise 5). \\ \noindent Let $T$ be another typicality measure. Then $\lambda\ll T$ (from Premises 2 and 4 and condition~(\ref{raus})). If $\lambda(A)=0$ for some measurable $A$, then $p(A)=0$ for all probability distributions of interest (from Premise 2), and if $p(A)=0$ for all probability distributions of interest, $T(A)=0$ (from Premise 3). Hence $T\ll \lambda$. Therefore, $T(A)=0$ iff $\lambda(A)=0$. Because $\lambda(A)=0$ iff $\mu_{X}(A)=0$ (from Premise 6), $T(A)=0$ iff $\mu_{X}(A)=0$. Hence any other typicality measure $T$ will agree with $\mu_{X}$ about which sets are typical. \subsection{Proof of Theorem 2} First of all, consider: \begin{equation}\label{raus2} \textnormal{if}\,\,\lambda(A)>\Psi,\,\,\textnormal{then there is a }\,\,p\in P\,\,\textnormal{with}\,\,p(A)>\Psi. \end{equation} Later in the proof the result is needed that condition (\ref{raus2}) is implied by the condition that if $\lambda(A)>\Psi$ for an open set $A$, then $P$ includes an initial probability distribution $p$ with $p(A)>\Psi$. This follows because if $\lambda(A)>\Psi$, then there is an open subset $O$ of $A$ with $\lambda(O)>\Psi$ (since $\lambda$ is regular). Second, note that $(\Gamma_{E},\Sigma_{\Gamma_{E}},\phi_{t},\mu_{E})$ is a dynamical system. Vranas (1998) has already shown that $\lambda$ is $\varepsilon/\xi\varepsilon$-continuous w.r.t.\ $\mu_{E}$ for any $\varepsilon\in (0,1)$ and for $\xi=\max_{x}(\mid\!\!\nabla_{x}H\!\!\mid)*\int_{\Gamma_{E}}\mid\!\!\nabla_{x} H\!\!\mid^{-1}d\lambda$ (the maximum exists because $\Gamma_{E}$ is compact). Let $\lambda(A)\leq\varepsilon$, $\varepsilon\in (0,1)$. 
Then from equation~(\ref{MM}) it follows that $\mu_{E}(A)\leq\chi\int_{A}d\lambda=\chi\lambda(A)\leq\chi\varepsilon$ for $\chi\!\!=\!\!\max_{x}(\mid\!\!\nabla_{x}H\!\!\mid^{-1})/\int_{\Gamma_{E}}\mid\!\!\nabla_{x} H\!\!\mid^{-1}d\lambda$ (the maximum exists because $\Gamma_{E}$ is compact). Hence $\mu_{E}$ is $\varepsilon/\chi\varepsilon$-continuous w.r.t.\ $\lambda$. Thus it suffices to focus on the case where a dynamical system $(X,\Sigma_{X},\phi_{t},\mu_{X})$ is given with $\mu_{X\beta}\!\!\ll\!\lambda_{\varepsilon_{1}}$ and $\lambda_{\varepsilon_{4}}\!\!\ll\!\mu_{X \beta}$.\\ \noindent Premise 1$^{*}$ holds because for a dynamical system $\mu_{X}$ is invariant. Next, assume that $p(A)\leq\varepsilon_{2}$ for all probability distributions of interest for some measurable $A$. Then $\lambda(A)\leq\varepsilon_{2}$ (from Premise 2$^{*}$ and condition~(\ref{raus2})). Hence $\mu_{X}(A)\leq\beta$ (from Premise 6$^{*}$), and $\mu_{X}$ satisfies Premise 3$^{*}$. Premise 4$^{*}$ is equivalent to the claim that $p_{\varepsilon_{3}}\ll T_{\beta}$ for all probability distributions of interest. Suppose that $\mu_{X}(A)\leq\beta$. Then $\lambda(A)\leq\varepsilon_{4}$ (from Premise 6$^{*}$) and $p(A)\leq\varepsilon_{2}+(\varepsilon_{4}-\varepsilon_{1})\leq\varepsilon_{3}$ (from Premise 2$^{*}$). Therefore, $\mu_{X}$ satisfies Premise 4$^{*}$. Because $\mu_{X}$ satisfies Premises 1$^{*}$, 3$^{*}$ and 4$^{*}$, it is a typicality measure (from Premise 5$^{*}$).\\ \noindent Let $T$ be another typicality measure. If $T(A)\leq\beta$, then $\lambda(A)\leq\varepsilon_{3}$ (from Premises 2$^{*}$ and 4$^{*}$ and condition~(\ref{raus2})). Thus, $\mu_{X}(A)\leq\beta+(\varepsilon_{3}-\varepsilon_{2})$ (from Premise 6$^{*}$). Conversely, if $\mu_{X}(A)\leq\beta$, $\lambda(A)\leq\varepsilon_{4}$ (from Premise 6$^{*}$). 
Hence $p(A)\leq\varepsilon_{2}+(\varepsilon_{4}-\varepsilon_{1})$ for all probability distributions of interest (from Premise 2$^{*}$), and $T(A)\leq\beta+(\varepsilon_{4}-\varepsilon_{1})$ (from Premise 3$^{*}$). Consequently, pragmatically speaking, $\mu_{X}$ and $T$ can be interchangeably used as typicality measures. \subsection{Proof of Theorem 3} For the reasons given in the proof of Theorem 1, it suffices to focus on the case where a dynamical system $(X,\Sigma_{X},\phi_{t},\mu_{X})$ is given where $\mu_{X}\ll \lambda$ and $\lambda\ll\mu_{X}$. According to a theorem in ergodic theory, if $(X,\Sigma_{X},\phi_{t},\mu_{X})$ is ergodic and $T$ is a measure invariant under the dynamics with $T\ll \mu_{X}$, then $T=\mu_{X}$ (cf.\ Cornfeld et al.\ 1982). Any other typicality measure $T$ is invariant (from Premise 1). Also, $T\ll \lambda$ (from Premises 2 and 3), $\lambda\ll\mu_{X}$ (from Premise 6), and hence $T\ll\mu_{X}$. Consequently, $T=\mu_{X}$. \subsection{Proof of Theorem 4} For the reasons given in the proof of Theorem 2, it suffices to focus on the case where a dynamical system $(X,\Sigma_{X},\phi_{t},\mu_{X})$ is given where $\mu_{X\beta}\!\!\ll\!\lambda_{\varepsilon_{1}}$ and $\lambda_{\varepsilon_{4}}\!\!\ll\!\mu_{X \beta}$. According to a theorem by Vranas (1998), if $(X,\Sigma_{X},\phi_{t},\mu_{X})$ is $\beta$-ergodic, $T$ is invariant and $T_{\beta+\varepsilon_{4}-\varepsilon_{1}}\ll\mu_{X\beta}$, then $\mu_{X}$ and $T$ are $\theta$-close with $\theta=2(\beta+\varepsilon_{4}-\varepsilon_{1})+\beta/(1-\beta)$. Let $T$ be any other typicality measure. Then $T$ is invariant (from Premise $1^{*}$). Also, $\lambda_{\varepsilon_{4}}\ll \mu_{X \beta}$ (from Premise $6^{*}$), $T_{\beta+(\varepsilon_{4}-\varepsilon_{1})}\ll\lambda_{\varepsilon_{4}}$ (from Premises 2$^{*}$ and 3$^{*}$), and hence $T_{\beta+(\varepsilon_{4}-\varepsilon_{1})}\ll\mu_{X \beta}$. Consequently, $\mu_{X}$ and $T$ are $\theta$-close. 
\section*{References} \addcontentsline{toc}{section}{References} \begin{list}{}{ \setlength{\labelwidth}{0pt} \setlength{\labelsep}{0pt} \setlength{\leftmargin}{24pt} \setlength{\itemindent}{-24pt} } \item Alligood, K., Sauer, T. and Yorke, J. (1996), {\em Chaos: An Introduction to Dynamical Systems}, New York: Springer. \item Cornfeld, I.\ P., Fomin, S.\ V.\ and Sinai, Ya.\ G.\ (1982), \emph{Ergodic Theory}, Berlin et al.: Springer. \item Davey, K. (2008), `The justification of probability measures in statistical mechanics', {\em Philosophy of Science} {\bf 75},~28--44. \item D\"{u}rr, D. (1998), `\"{U}ber den {Zufall} in der {Physik}', paper given at the 1998 {Leopoldina Meeting, Halle}.\\ http://www.mathematik.uni-muenchen.de/~duerr/zufall/zufall.html. \item Frigg, R. (2008), A field guide to recent work on the foundations of statistical mechanics, {\em in} D.~Rickles, ed., `The Ashgate Companion to Contemporary Philosophy of Physics', Ashgate, London, pp.~99--196. \item Frigg, R. and Werndl, C. (2011), `Explaining thermodynamic-like behaviour in terms of epsilon-ergodicity', {\em Philosophy of Science} {\bf 78},~628--652. \item Frigg, R. and Werndl, C. (2012), `Demystifying typicality', \emph{Philosophy of Science} \textbf{79}, 917--929. \item Goldstein, S. (2001), {Boltzmann's} approach to statistical mechanics, {\em in} J.~Bricmont, D.~D\"{u}rr, M.~Galavotti, G.~Ghirardi, F.~Pettrucione and N.~Zanghi, eds, `Chance in Physics: Foundations and Perspectives', Springer, Berlin and New York, pp.~39--54. \item Goldstein, S. and Lebowitz, J. (2004), `On the ({Boltzmann}) entropy of nonequilibrium systems', {\em Physica D} {\bf 193},~53--66. \item Hemmo, M. and Shenker, O. (2012), Measures over initial conditions, {\em in} J.~Ben-Menahem and M.~Hemmo, eds, `Probability in Physics', The Frontiers Collection, Springer, Berlin and New York, pp.~87--98. \item Lavis, D. 
(2005), `Boltzmann and {Gibbs}: An attempted reconciliation', {\em Studies in History and Philosophy of Modern Physics} {\bf 36},~245--273. \item Lebowitz, J.~L. (1993a), `Boltzmann's entropy and time's arrow', {\em Physics Today} {\bf September Issue},~32--38. \item Lebowitz, J.~L. (1993b), `Macroscopic laws, microscopic dynamics, time's arrow and {Boltzmann's} entropy', {\em Physica A} {\bf 194},~1--27. \item Lebowitz, J.~L. (1999), `Statistical mechanics: A selective review of two central issues', {\em Reviews of Modern Physics} {\bf 71},~346--357. \item Leeds, S. (1989), `Malament and {Zabell} on {Gibbs} phase averaging', {\em Philosophy of Science} {\bf 56},~325--340. \item Malament, D. and Zabell, S. (1980), `Why {Gibbs} phase averages work -- the role of ergodic theory', {\em Philosophy of Science} {\bf 47},~339--349. \item Maudlin, T. (2007), `What could be objective about probabilities?', {\em Studies in History and Philosophy of Modern Physics} {\bf 38},~275--291. \item Nielsen, O.\ A.\ (1997), \emph{An Introduction to Integration and Measure Theory}, Wiley-Interscience, New York. \item Petersen, K. (1983), {\em Ergodic Theory}, Cambridge University Press, Cambridge. \item Pitowsky, I. (2012), Typicality and the role of the {Lebesgue} measure in statistical mechanics, {\em in} J.~Ben-Menahem and M.~Hemmo, eds, `Probability in Physics', The Frontiers Collection, Springer, Berlin and New York, pp.~51--58. \item Sim\'{a}nyi, N. (2010), `The Boltzmann--Sinai Ergodic Hypothesis In Full Generality', arXiv:1007.1206v2 [math.DS]. \item Uffink, J. (1996), `Nought but Molecules in Motion, a review essay of L. Sklar's ``Physics and Chance''', \emph{Studies in History and Philosophy of Modern Physics} \textbf{27}, 373--378. \item Uffink, J. (2007), Compendium to the foundations of classical statistical physics, {\em in} J.~Butterfield and J.~Earman, eds, `Philosophy of Physics', North-Holland, Amsterdam, pp.~923--1074. \item van Lith, J. 
(2001), Stir in Stillness, PhD thesis, University of Utrecht. \item Volchan, S.~B. (2007), `Probability as typicality', {\em Studies in History and Philosophy of Modern Physics} {\bf 38},~801--814. \item Vranas, P.~B. (1998), `Epsilon-ergodicity and the success of equilibrium statistical mechanics', {\em Philosophy of Science} {\bf 65},~688--708. \item Werndl, C. (2009a), `Are deterministic descriptions and indeterministic descriptions observationally equivalent?', {\em Studies in History and Philosophy of Modern Physics} {\bf 40},~232--242. \item Werndl, C. (2009b), `What are the new implications of chaos for unpredictability?', {\em The British Journal for the Philosophy of Science} {\bf 60},~195--220. \end{list} \end{document}
\section{Introduction} \label{sec:intro} The formation of a Solar-type planetary system starts with the collapse of a cold ($\leq$10 K) and dense ($\geq$10$^{5}$ cm$^{-3}$) core, called a prestellar core, in a molecular cloud. The evolution of the prestellar core into a protostar, a protoplanetary disk and, eventually, a planetary system, is also accompanied by the evolution of its chemical composition (e.g. \citealt{Caselli2012}). Cyanopolyynes are a class of molecules composed of a long chain of carbon atoms with a hydrogen atom at one end and a cyanide (CN) group at the other end. They are widespread in the interstellar medium (ISM) and they have been detected at all stages of the star formation process, from dark clouds \citep[e.g.][]{Walmsley1980} to protoplanetary disks \citep{Chapillon2012}, and comets \citep[e.g.][]{Bock-Morvan2000}. While small cyanopolyynes such as HC$_{3}$N and HC$_{5}$N are regularly observed with (sub-)millimeter telescopes, much less is known about the presence and evolution of heavy species (e.g. chains with more than seven carbon atoms), which have their peak of emission at longer wavelengths. Even the relatively simple and abundant cyanotriacetylene (HC$_{7}$N) has only been detected in a handful of Solar-type prestellar cores and protostars (e.g. \citealt{Cernicharo1986,Gupta2009, Cordiner2012, Friesen2013, Jaber2017}). Yet, large carbon chains might have a crucial role in the heritage of organic material from the pre- and proto- stellar phase to the objects of the newly formed planetary system, such as asteroids and comets (e.g. \citealt{Mumma2011}). Despite the importance of large carbon species in the astrobiological context and their potential diagnostic power, only the starless core TMC-1 has been extensively explored so far \citep[e.g.][]{Cernicharo2021a}. 
This object has been the target of two deep surveys: GOTHAM \citep[GBT Observations of TMC-1: Hunting Aromatic Molecules,][]{McGuire2020} and the QUIJOTE \citep[Q-band Ultrasensitive Inspection Journey to the Obscure TMC-1 Environment,][]{Cernicharo2021a} projects, which extensively investigated the cyanopolyyne chemistry in this source. In particular, they revealed for the first time several cyanopolyyne isotopologues and isomers (such as HC$_4$NC and HC$_6$NC) \citep{Cernicharo2020}, as well as the presence of HC$_{11}$N \citep{Loomis2021}, the largest cyanopolyyne so far discovered in the ISM. Yet, TMC-1 is a starless core showing no sign of collapse, and it is therefore not expected to eventually form a planetary system. In this respect, the study of large cyanopolyynes in a prestellar core, which is believed to eventually form a Solar-type planetary system, is particularly important. L1544, in the Taurus molecular cloud complex at a distance of 170 pc \citep[e.g.][]{Galli2019}, is considered the prototype of prestellar cores, being on the verge of gravitational collapse (e.g. \citealt{Caselli2012}). Its high central density ($\sim$10$^{6}$ cm$^{-3}$) and very low temperature ($\sim$ 7 K) result in the peculiar chemistry typical of cold and CO-depleted gas, namely a very high deuteration of species \citep[e.g.][]{Caselli1999,Crapsi2007,Ceccarelli2007,caselli2022}. In the external layers, however, a variety of rich chemical processes take place which lead to the formation of interstellar Complex Organic Molecules (iCOMs) and carbon chains \citep[e.g.][]{Bizzocchi2014, Vastel2016,Jimenez2016, Punanova2018,Ceccarelli2022}. Indeed, recent (single-dish) IRAM 30m observations in the mm-window show the presence of small carbon chains such as HC$_{3}$N, c-C$_{3}$H$_{2}$, C$_{3}$H, C$_{4}$H, C$_{2}$O and C$_{3}$O over extended portions of L1544 \citep{Vastel2014, Spezzano2017, Urso2019}. 
We carried out a pilot line survey in the X (8.0-11.6 GHz) and Ku (13.5-14.4 GHz) bands with the GBT towards L1544. The goal of the project is to obtain the first census of molecular lines in this radio spectral range towards a well-known and established analogue of the Solar System precursor. In this first article, we report the study of the cyanopolyynes. The article is organised as follows. We report the observations and the detected lines in Sec. \ref{sec:obs-results}. In Sec. \ref{sec:modeling}, we analyse the detected cyanopolyyne lines to derive constraints on the physical conditions of the emitting gas. To this end, we carried out new computations to derive the collisional coefficients of HC$_5$N and HC$_7$N with H$_2$ in Sec. \ref{sec:nonLTE-coll-coef}. Section \ref{sec:discussion} reports the implications of our new observation for the understanding of the cyanopolyyne chemistry in the earliest phases of a Solar-type planetary system and Sec. \ref{sec:conclusions} concludes the article. \section{Observations and results} \label{sec:obs-results} \subsection{Observations} \label{subsec:observations} The observations presented here were carried out between 2019 June and 2020 February, on the GBT, under project codes AGBT19A$\_$048 and AGBT20A$\_$135. The target source L1544 was observed at the coordinates $\alpha_{\rm J2000}$ = 05$^{\rm h}$ 04$^{\rm m}$ 16$\fs$60, $\delta_{\rm J2000}$ = +25$\degr$ 10$\arcmin$ 48$\farcs$0. The source calibrator 0530+1331 was used to perform pointing and focus. Observations were performed in position-switching mode with an ON-OFF throw position of 1$\degr$. Two receivers were used to cover the X-band (8.0-11.6 GHz) and the Ku-band (13.5-14.4 GHz) in combination with the VEGAS spectrometer in high-resolution mode. The bandwidth per spectral window was 187.5 MHz with 131,072 channels, corresponding to a resolution of 1.4 kHz (0.05 km s$^{-1}$ at 9 GHz). The r.m.s. is typically $\sim$ 5 mK in a channel of $\sim$ 1.4 kHz. 
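As a consistency check on the quoted setup, the channel width and the corresponding velocity resolution follow directly from the bandwidth and channel count via the standard non-relativistic Doppler relation $\Delta v = c\,\Delta\nu/\nu$ (a sketch, using only numbers stated in the text):

```python
C_KM_S = 2.99792458e5   # speed of light [km/s]

bandwidth_hz = 187.5e6  # bandwidth per spectral window [Hz]
n_channels = 131072
channel_width_hz = bandwidth_hz / n_channels   # ~1.43 kHz per channel

def velocity_resolution(nu_hz):
    """Velocity width of one channel at sky frequency nu_hz [km/s],
    from the non-relativistic Doppler relation dv = c * dnu / nu."""
    return C_KM_S * channel_width_hz / nu_hz

dv_x = velocity_resolution(9.0e9)    # ~0.048 km/s: the quoted 0.05 km/s at 9 GHz
dv_ku = velocity_resolution(14.0e9)  # ~0.031 km/s: the ~30 m/s quoted at 14 GHz
```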
The telescope half-power beam width (HPBW) varies between $\sim$ 54$\arcsec$ in Ku-band and 1.4$\arcmin$ in X-band, corresponding to $\sim$ 9180 au and $\sim$ 14280 au at the source distance. The calibration was performed using GBTIDL. The spectra are first inspected by eye and cleaned of any artifacts. Subsequently, each scan is corrected for Doppler tracking and calibrated in flux using the internal noise diodes from the receiver. The calibrated scans for a single observing session are then averaged with a noise weighting. The baseline subtraction is performed by automatically identifying the line-free channels and performing a polynomial fit. The spectra were finally corrected for the GBT telescope efficiencies\footnote{https://www.gb.nrao.edu/scienceDocs/GBTpg.pdf}. Calibration uncertainties are estimated to be $\sim$ 20$\%$. \subsection{Results} \label{subsec:results} We detected several bright emission lines from cyanopolyynes in L1544. More specifically, we detected 3 emission lines from HC$_3$N, 9 lines from HC$_5$N, 5 lines from HC$_7$N, and 9 lines from HC$_9$N (see Table \ref{Tab:lines}). The threshold for detection is a signal-to-noise ratio larger than 3 at the line peak. Line identification has been performed using the Cologne Database for Molecular Spectroscopy\footnote{\url{http://www.astro.uni-koeln.de/cdms/}} \citep{Muller2001, Muller2005}. Table \ref{Tab:lines} reports the list of the detected transitions with their spectroscopic and derived line parameters, namely the line frequency, $\nu$, the upper level energy, E$_{\rm up}$, the line strength, S$\mu^2$, the line peak intensity in main beam temperature scale, T$_{\rm peak}$, the root mean square noise, rms, and the integrated intensity I$_{\rm int}$. The upper level energy of the detected lines is low, E$_{\rm up}$ $\leq$ 10 K. The line analysis has been performed using the GILDAS\footnote{http://www.iram.fr/IRAMFR/GILDAS} CLASS package. 
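The quoted physical scales follow from the small-angle relation between beam size and source distance (a sketch; exact for the adopted distance of 170 pc):

```python
def beam_to_au(hpbw_arcsec, distance_pc):
    """Physical scale subtended by the beam: the small-angle relation
    gives size[au] = theta[arcsec] * d[pc]."""
    return hpbw_arcsec * distance_pc

D_L1544_PC = 170.0                          # adopted distance of L1544

size_ku_au = beam_to_au(54.0, D_L1544_PC)   # Ku-band HPBW ~54" -> 9180 au
size_x_au = beam_to_au(84.0, D_L1544_PC)    # X-band HPBW 1.4' = 84" -> 14280 au
```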
The observed spectra are reported in Figures \ref{fig:spectra-HC3N} -- \ref{fig:spectra-HC9N}. \startlongtable \begin{deluxetable*}{lcccccc} \tablecaption{List of transitions and line properties (in T$_{\rm MB}$ scale) of the cyanopolyyne emission. The columns report the transition and their frequency (GHz), the upper level energy E$_{\rm up}$ (K), the line strength $S\mu^2$ (D$^2$), the line peak temperature (mK), the rms (mK), and the velocity integrated line intensity I$_{\rm int}$ (mK km s$^{-1}$). \label{Tab:lines}} \tablewidth{0pt} \tablehead{ \colhead{Transition} & \colhead{$\nu$$^{\rm a}$} & \colhead{E$_{\rm up}$$^{\rm a}$} & \colhead{$S\mu^2$$^{\rm a}$} & \colhead{T$_{\rm peak}$} & \colhead{rms} & \colhead{I$_{\rm int}$$^{\rm b}$} \\ \nocolhead{} & \colhead{(GHz)} & \colhead{(K)} & \colhead{(D$^2$)} & \colhead{(mK)} & \colhead{(mK)} & \colhead{(mK km s$^{-1}$)}\\} \startdata \hline \hline \multicolumn7c{HC$_{\rm 3}$N} \\ \hline HC$_{\rm 3}$N 1--0, F= 1--1 & 9.0970 & 0.44 & 4.6 & 941 & 5 & 247 (1) \\ HC$_{\rm 3}$N 1--0, F= 2--1 & 9.0983 & 0.44 & 7.7 & 1485 & 5 & 397 (1) \\ HC$_{\rm 3}$N 1--0, F= 0--1 & 9.1003 & 0.44 & 1.5 & 333 & 5 & 90 (1) \\ \hline \multicolumn7c{HC$_{\rm 5}$N} \\ \hline HC$_{\rm 5}$N 3--2, F= 2--1 &7.98778 & 0.77 & 33.8 & 121 & 8 & 30 (1)\\ HC$_{\rm 5}$N 3--2, F= 3--2 & 7.98799 & 0.77 & 50.0 & 229 & 8 & 47 (1)\\ HC$_{\rm 5}$N 3--2, F= 4--3 &7.98804 & 0.77 & 72.3 & 317 & 9 & 67 (1)\\ HC$_{\rm 5}$N 3--2, F= 2--2 &7.98992 & 0.77 & 6.2 & 43 & 6 & 7 (1)\\ HC$_{\rm 5}$N 4--3, F= 4--4 & 10.64923 & 1.28 & 4.7 & 54 & 6 & 9 (1)\\ HC$_{\rm 5}$N 4--3, F= 3--2 & 10.65056 & 1.28 & 53.6 & 567 & 6 & 108 (1)\\ HC$_{\rm 5}$N 4--3, F= 4--3 & 10.65065 & 1.28 & 70.3 & 700 & 7 & 141 (1) \\ HC$_{\rm 5}$N 4--3, F= 5--4 & 10.65068 & 1.28 & 91.7 & 896 & 6 & 179 (1)\\ HC$_{\rm 5}$N 4--3, F= 3--3 & 10.65249 & 1.28 & 4.7 & 61 & 6 & 13 (1)\\ \hline \multicolumn7c{HC$_{\rm 7}$N} \\ \hline \vspace{0.15cm} HC$_{\rm 7}$N 8--7, F= 7--6 & 9.02399 & 1.95 & 161.1 & 92 & 5 & 
20 (1) \\ HC$_{\rm 7}$N 8--7, F= 8--7 & 9.02401 & \multirow{2}{*}{1.95} & 183.0 & \multirow{2}{*}{109} & \multirow{2}{*}{5} & \multirow{2}{*}{46 (1)}\\ \vspace{1cm} HC$_{\rm 7}$N 8--7, F= 9--8 & 9.02402 & & 207.7 & & & \\[+1cm] HC$_{\rm 7}$N 9--8, F= 8--7 & 10.15200 & \multirow{3}{*}{2.44 }& 184.5 & \multirow{3}{*}{217} & \multirow{3}{*}{4} & \multirow{3}{*}{93 (1)} \\ HC$_{\rm 7}$N 9--8, F= 9--8 & 10.15201 & & 206.5& & & \\ \vspace{0.15cm} HC$_{\rm 7}$N 9--8, F= 10--9 & 10.15202 & & 231.1& & & \\ HC$_{\rm 7}$N 10--9, F= 10--10 & 11.28900 & \multirow{3}{*}{3.00 }& 207.9 & \multirow{3}{*}{323} & \multirow{3}{*}{8} & \multirow{3}{*}{133 (2)} \\ HC$_{\rm 7}$N 10--9, F= 10--9 & 11.28001 & & 230.0 & & & \\ \vspace{0.15cm} HC$_{\rm 7}$N 10--9, F= 11--10 & 11.28001 & & 254.4 & & &\\ HC$_{\rm 7}$N 13--12, F= 12--11 & 14.66399 & \multirow{3}{*}{4.93 }& 277.8 & \multirow{3}{*}{443} & \multirow{3}{*}{7} & \multirow{3}{*}{151 (1)} \\ HC$_{\rm 7}$N 13--12, F= 13--12 & 14.66399 & & 300.3 & & & \\ HC$_{\rm 7}$N 13--12, F= 14--13 & 14.66400 & & 324.4 & & & \\ \hline \multicolumn7c{HC$_{\rm 9}$N} \\ \hline HC$_{\rm 9}$N 14--13, F= 13--12 & 8.13450 & \multirow{3}{*}{2.93 }& 350.5 & \multirow{3}{*}{32} & \multirow{3}{*}{5} &\multirow{3}{*}{14 (1)} \\ HC$_{\rm 9}$N 14--13, F= 14--13 & 8.13450 & & 376.7 & & & \\ \vspace{0.15cm} HC$_{\rm 9}$N 14--13, F= 15--14 & 8.13451 & & 404.6 & & & \\ HC$_{\rm 9}$N 15--14, F= 14--13 & 8.71553 & \multirow{3}{*}{3.35 }& 377.6 & \multirow{3}{*}{41}& \multirow{3}{*}{5}& \multirow{3}{*}{13 (1)} \\ HC$_{\rm 9}$N 15--14, F= 15--14 & 8.71554 & & 376.7 & & & \\ \vspace{0.15cm} HC$_{\rm 9}$N 15--14, F= 16--15 &8.71554 & & 431.7 & & & \\ HC$_{\rm 9}$N 16--15, F= 15--14 & 9.29657 & \multirow{3}{*}{3.79 }& 404.8 & \multirow{3}{*}{42} & \multirow{3}{*}{5} & \multirow{3}{*}{15 (1) } \\ HC$_{\rm 9}$N 16--15, F= 16--15 & 9.29657 & & 430.9 & & & \\ \vspace{0.15cm} HC$_{\rm 9}$N 16--15, F= 17--16 & 9.29657 & & 458.9 & & & \\ HC$_{\rm 9}$N 17--16, F= 16--15 & 9.87760 & 
\multirow{3}{*}{4.27 }& 431.8 & \multirow{3}{*}{57} & \multirow{3}{*}{5} & \multirow{3}{*}{19 (1) } \\ HC$_{\rm 9}$N 17--16, F= 17--16 & 9.87761 & & 458.1 & \\ \vspace{0.15cm} HC$_{\rm 9}$N 17--16, F= 18--17 & 9.87761 & & 486.0 & & & \\ HC$_{\rm 9}$N 18--17, F= 17--16 & 10.45864 & \multirow{3}{*}{4.77 }& 458.9 & \multirow{3}{*}{55}& \multirow{3}{*}{5}&\multirow{3}{*}{18 (1) } \\ HC$_{\rm 9}$N 18--17, F= 18--17 & 10.45864 & & 485.2 & & & \\ \vspace{0.15cm} HC$_{\rm 9}$N 18--17, F= 19--18 & 10.45864 & & 513.0 \\ HC$_{\rm 9}$N 19--18, F= 18--17 & 11.03967 & \multirow{3}{*}{5.30 }& 486.0 & \multirow{3}{*}{95} & \multirow{3}{*}{7} &\multirow{3}{*}{22 (1) } \\ HC$_{\rm 9}$N 19--18, F= 19--18 & 11.03967 & & 512.3 & & & \\ \vspace{0.15cm} HC$_{\rm 9}$N 19--18, F= 20--19 & 11.03967 & & 540.1 & & & \\ HC$_{\rm 9}$N 24--23 & 13.94483 & 8.37 &1947.0 & 152 & 6 & 24 (1) \\ HC$_{\rm 9}$N 25--24 & 14.52586 & 9.06 & 2028.1 & 157 & 6 & 25 (1) \\ HC$_{\rm 9}$N 26--25 & 15.10689 & 9.79 & 2108.9 & 166 & 7 & 26 (1) \\ \hline \enddata \tablecomments{$^{\rm a}$ Frequencies and spectroscopic parameters have been provided by \citet{deZafra1971} for HC$_{\rm 3}$N, \citet{Giesen2020} for HC$_{\rm 5}$N and HC$_{\rm 7}$N, and \citet{McCarthy2000} for HC$_{\rm 9}$N, respectively, and retrieved from the Cologne Database for Molecular Spectroscopy \citep{Muller2005}. $^{\rm b}$ Errors on the integrated intensity do not include 20$\%$ of calibration.\\ } \end{deluxetable*} \begin{figure}[ht] \includegraphics[scale=0.7]{spettri-L1544-HC3N.eps} \caption{HC$_{\rm 3}$N transitions observed towards L1544 with the GBT. The vertical dashed lines mark the ambient LSR velocity (+ 7.2 km s$^{-1}$, \citealt{Tafalla1998}). The upper level energy of each transition is reported on the right inside the top panel.} \label{fig:spectra-HC3N} \end{figure} \begin{figure*}[ht] \includegraphics[scale=0.7]{spettri-L1544-HC5N.eps} \caption{HC$_{\rm 5}$N transitions observed towards L1544 with the GBT. 
The vertical dashed lines mark the ambient LSR velocity (+ 7.2 km s$^{-1}$, \citealt{Tafalla1998}). The upper level energy of each transition is reported on the right inside the top panels.} \label{fig:spectra-HC5N} \end{figure*} \begin{figure}[ht] \includegraphics[scale=0.7]{spettri-L1544-HC7N.eps} \caption{HC$_{\rm 7}$N transitions observed towards L1544 with the GBT. The vertical dashed lines mark the ambient LSR velocity (+ 7.2 km s$^{-1}$, \citealt{Tafalla1998}). The upper level energy of each transition is reported on the right inside each panel.} \label{fig:spectra-HC7N} \end{figure} \begin{figure*}[ht] \includegraphics[scale=0.7]{spettri-L1544-HC9N.eps} \caption{HC$_{\rm 9}$N transitions observed towards L1544 with the GBT. The vertical dashed lines mark the ambient LSR velocity (+ 7.2 km s$^{-1}$, \citealt{Tafalla1998}). The upper level energy of each transition is reported on the right inside each panel. The blue lines indicate the position of the hyperfine components.} \label{fig:spectra-HC9N} \end{figure*} The observed cyanopolyyne lines show a double-peaked profile, revealed thanks to the very high spectral resolution ($\sim$ 1.4 kHz, corresponding to $\sim$ 30 m s$^{-1}$ at 14 GHz) provided by the GBT. The two emission peaks are located at +7.1 km s$^{-1}$ and +7.3 km s$^{-1}$ with a dip located at the systemic source velocity (+7.2 km s$^{-1}$). The red-shifted peak is brighter than the blue-shifted one by a factor between 3 and 5, depending on the line. We report, for the first time towards this source, the heavier cyanopolyyne HC$_9$N, while HC$_3$N, HC$_5$N, and HC$_7$N have been previously observed in L1544 \citep{Snell1981, Cernicharo1986,Quenard17, Hily2018}. The HC$_3$N spectra previously observed at the IRAM 30m have a spectral resolution of $\sim$ 0.2 km s$^{-1}$, and they do not reveal the double-peaked profile. 
Interestingly, the velocities of the two peaks are consistent with the blue- and red-shifted velocities revealed by the peak velocity distribution derived by \citet{Spezzano2016} in c-C$_3$H$_2$ using the IRAM 30m. In addition, the spectral profiles of the present data set are fully consistent with those at high spectral resolution (down to $\simeq$ 0.04 km s$^{-1}$) previously observed with the GBT towards L1544 in other complex C-bearing species such as C$_4$H, C$_6$H and C$_6$H$^{-}$ \citep{Gupta2009}. \section{Physical conditions and abundance ratios} \label{sec:modeling} \subsection{Full radiative transfer modeling}\label{subsec:model-loc} In order to interpret the line profiles, we use the full radiative transfer code LOC \citep{Juvela2020} with a 1D model that assumes a spherically symmetric physical structure, characterised by the volume density, $\rho(r)$, the kinetic temperature, $T(r)$, and the velocity field, including both micro-turbulence, $\sigma_{\mathrm{turb}}$, and radial velocity, $V(r)$. We adopted the parameterised forms of $\rho(r)$, $T(r)$ and $V(r)$ following \citet{Keto2010} for L1544. The core radius is assumed to be 0.3 pc. In the modeling, linear discretization is used for the grids with a physical resolution of 60 au. We then convolved the output spectral cube with the corresponding observational beam for each frequency. We tested several abundance profiles for HC$_3$N, both a constant abundance profile and a constant abundance profile with depletion in the inner few thousand au of the core. While we are able to reproduce the overall intensity of the lines with abundances of a few 10$^{-9}$, we cannot reproduce the observed line profiles. Figure \ref{fig:LOC-HC3N} shows the results of one of our tests, run using a constant abundance profile of 5$\times$10$^{-9}$ with respect to H$_2$ and complete freeze-out towards the inner 5000 au of the core. 
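The ingredients of such a test can be sketched as follows. The Plummer-like functional form and its parameter values are hypothetical placeholders, not the actual Keto (2010) profile; only the 60 au grid resolution, the 0.3 pc core radius, the 5$\times$10$^{-9}$ abundance and the 5000 au depletion radius are taken from the text:

```python
import numpy as np

AU_PER_PC = 206265.0

def density_profile(r_au, n0=1.0e6, r0_au=3000.0, alpha=2.5):
    """Plummer-like profile n(r) = n0 / (1 + (r/r0)^alpha) [cm^-3].
    The functional form is a common prestellar-core parameterisation;
    n0, r0 and alpha here are hypothetical, not the Keto (2010) fit."""
    return n0 / (1.0 + (r_au / r0_au) ** alpha)

def hc3n_abundance(r_au, x_out=5.0e-9, r_depletion_au=5000.0):
    """Constant abundance with complete freeze-out inside 5000 au,
    as in the HC3N test described in the text."""
    return np.where(r_au < r_depletion_au, 0.0, x_out)

# Linear 1D grid at 60 au resolution out to the adopted 0.3 pc core radius.
r = np.arange(60.0, 0.3 * AU_PER_PC, 60.0)
n_h2 = density_profile(r)
x_hc3n = hc3n_abundance(r)
```

A radiative transfer code such as LOC would then take $n(r)$, $T(r)$, $V(r)$ and the abundance profile as input and return the beam-convolved line profiles.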
The 1-0 transition of HC$_3$N, given the very low critical density ($\sim$10$^3$ cm$^{-3}$), traces the outer layers of the core, as also found by e.g. \citet{Liu2022} in cold Planck clumps, and it is likely that the line profile that we observe is showing us the asymmetry in these outer layers. As a consequence, by using a spherical model we cannot reproduce the line profiles. The profiles of many emission lines have been successfully reproduced with the Keto \& Caselli physical structure of L1544 using transitions with larger critical densities (10$^4$-10$^5$ cm$^{-3}$) with respect to the 1-0 transition of HC$_3$N. The spherical symmetry approximation is thus likely not realistic for transitions tracing low-density gas. \begin{figure*}[ht] \includegraphics[scale=0.5]{Modelling-HC3N.png} \caption{Comparison between the observed HC$_3$N line profiles and the theoretical profiles predicted using the radiative transfer code LOC (see Sect. \ref{subsec:model-loc}). The model, which assumes a spherically symmetric distribution, reproduces the line intensities but not the shape of the profiles. This suggests an asymmetric distribution of cyanopolyynes across the core, with the red-shifted component brighter than the blue-shifted one.} \label{fig:LOC-HC3N} \end{figure*} \subsection{Large Velocity Gradient modelling} \label{subsec:model-LVG} In order to understand the nature and the spatial origin of the gas emitting in cyanopolyynes, we analysed the observed HC$_5$N, HC$_7$N and HC$_9$N lines via a non-Local Thermodynamic Equilibrium (non-LTE) Large Velocity Gradient (LVG) approach. To this end, we used the code \textsc{grelvg}, initially developed by \citet{Ceccarelli2003}. We used the collisional coefficients of HC$_5$N, HC$_7$N and HC$_9$N with para-H$_2$ between 10 and 100 K, computed from the HC$_3$N collisional coefficients as described in Sec. \ref{sec:nonLTE-coll-coef}. 
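The low critical density invoked above for HC$_3$N 1--0 can be estimated from the Einstein $A$ coefficient of a linear rotor and an assumed generic downward collision rate coefficient ($\sim$10$^{-10}$ cm$^3$ s$^{-1}$, a typical order of magnitude for heavy linear rotors at low temperature; this is an assumption for illustration, not the coefficient computed in Sec. \ref{sec:nonLTE-coll-coef}):

```python
import math

H_CGS = 6.62607015e-27    # Planck constant [erg s]
C_CGS = 2.99792458e10     # speed of light [cm/s]
MU_HC3N = 3.732e-18       # HC3N dipole moment, 3.732 Debye, in esu cm

def einstein_A(nu_hz, mu_esu_cm, j_up):
    """Einstein A coefficient of a linear rotor for J -> J-1 (CGS units):
    A = (64 pi^4 nu^3)/(3 h c^3) * mu^2 * J/(2J+1)."""
    return (64.0 * math.pi ** 4 * nu_hz ** 3) / (3.0 * H_CGS * C_CGS ** 3) \
        * mu_esu_cm ** 2 * j_up / (2.0 * j_up + 1.0)

A_10 = einstein_A(9.0983e9, MU_HC3N, 1)   # ~4e-8 s^-1 for HC3N 1-0

# Assumed generic downward collision rate coefficient (order of magnitude
# only; NOT the actual coefficient used in the paper).
GAMMA_ASSUMED = 1.0e-10                   # cm^3 s^-1

n_crit = A_10 / GAMMA_ASSUMED             # a few 10^2 cm^-3, i.e. <~10^3
```

The resulting $n_{\rm crit}$ of a few 10$^2$--10$^3$ cm$^{-3}$ is why the 1--0 line remains thermalised even in the tenuous outer layers of the core.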
The first 50 levels of each species are included, which means transitions with upper level energies up to 160, 70 and 35 K for HC$_5$N, HC$_7$N and HC$_9$N, respectively. We will discuss the impact of these limits when analysing the results of the modeling. To compute the line escape probability as a function of the line optical depth we adopted a semi-infinite expanding slab geometry \citep{Scoville1974} and a line width equal to 0.4 km~s$^{-1}$, following the observations. We ran several grids of models to sample the $\chi^2$ surface in the parameter space. Specifically, we varied the HC$_5$N column density N(HC$_5$N) from $1\times 10^{12}$ to $2\times 10^{15}$ cm$^{-2}$, the HC$_5$N/HC$_7$N abundance ratio $f_{5-7}$ from 3 to 9, the HC$_5$N/HC$_9$N abundance ratio $f_{5-9}$ from 9 to 49, the H$_2$ density n$_{H2}$ from 100 to $10^{6}$ cm$^{-3}$ and the temperature T$_{gas}$ from 5 to 100 K. We then fitted the measured HC$_5$N, HC$_7$N and HC$_9$N velocity-integrated line intensities by comparing them with those predicted by the model, leaving N(HC$_5$N), $f_{5-7}$, $f_{5-9}$, n$_{H2}$, $T_{gas}$ and the emitting size $\theta$ as free parameters. We proceeded in two steps. In a first step, we considered the three species independently and obtained constraints on their column densities and the emitting gas properties ($\theta$, n$_{H2}$, T$_{gas}$). In a second step, we fitted the lines from the three species simultaneously, obtaining constraints on all parameters: N(HC$_5$N), $f_{5-7}$, $f_{5-9}$, $\theta$, n$_{H2}$ and T$_{gas}$. \subsubsection{Step 1: HC$_5$N, HC$_7$N and HC$_9$N separate fit}\label{subsubsec:Step1} \textit{HC$_5$N:} Given the limited number of rotational transitions (two) and the narrow range of upper level energies, only a lower limit to N(HC$_5$N) is obtained, $\geq 4\times$ 10$^{13}$ cm$^{-2}$, with an emitting size of about 100$''$. \vspace{0.2cm} \textit{HC$_7$N:} With four transitions, the HC$_7$N fit constrains the gas parameters better than the HC$_5$N one. 
The column density N(HC$_7$N) is constrained at 1 $\sigma$ between 0.6 and 3 $\times$ 10$^{13}$ cm$^{-2}$, with emitting sizes from 100$''$ to 38$''$, respectively. Both the gas temperature and density are difficult to constrain as they depend on the HC$_7$N column density (see \S ~\ref{subsubsec:Step2} below). \vspace{0.2cm} \textit{HC$_9$N:} The column density N(HC$_9$N) is, at 1 $\sigma$, between 0.1 and 3 $\times$ 10$^{13}$ cm$^{-2}$, with emitting sizes from 150$''$ to 38$''$, respectively. Again, the gas temperature and density are difficult to constrain as they depend on the HC$_9$N column density (see \S ~\ref{subsubsec:Step2} below). \subsubsection{Step 2: HC$_5$N, HC$_7$N and HC$_9$N simultaneous fit}\label{subsubsec:Step2} We assumed that the three cyanopolyynes originate in the same gas. This assumption is based on the similarity of the line shapes of the three species. We considered all the lines detected in the observations presented here, namely a total of 18 lines, which leads to 12 degrees of freedom in the fit. \begin{figure*} \centering \includegraphics[width=1\columnwidth,trim=0 0.5cm 0 1.2cm]{chi2-c5c7c9.png} \includegraphics[width=1\columnwidth,trim=0 0.5cm 0 1.2cm]{hc5-7-9n_v25f6f25-chi2Nx.png} \includegraphics[width=1\columnwidth,trim=0 0.5cm 0 1.2cm]{hc5-7-9n_v25f6f25-chi2Tn.png} \includegraphics[width=1\columnwidth,trim=0 0.5cm 0 1.2cm]{hc5-7-9n_v25f6f25-model.png} \caption{Results from the $\chi^2$ minimisation obtained by simultaneously fitting the HC$_5$N, HC$_7$N and HC$_9$N velocity-integrated line intensities. \textit{Top left panel}: Reduced $\chi^2$ as a function of the HC$_5$N/HC$_7$N and HC$_5$N/HC$_9$N abundance ratios. The cross shows the best fit and the green area the 1$\sigma$ uncertainty parameter space. The dashed lines show the HC$_7$N/HC$_9$N ratio. 
\textit{Top right panel}: Minimum $\chi^2$ (minimised with respect to the gas temperature and density) as a function of the HC$_5$N column density N(HC$_5$N). The dashed horizontal line shows the 1$\sigma$ interval. \textit{Bottom left panel}: Contour plot of the $\chi^2$ as a function of the density and temperature, obtained for the best fit HC$_5$N column density of $4.3\times$ 10$^{13}$ cm$^{-2}$ and source size of 80$^{\prime\prime}$ (see upper panel). The red cross indicates the best fit gas density and temperature and the blue curve the 1$\sigma$ interval. \textit{Bottom right panel}: Observed velocity-integrated line intensities of HC$_5$N (circles), HC$_7$N (stars) and HC$_9$N (triangles) and the modelled ones (red curves) as a function of the upper J of the transitions, computed at the best fit (see text).} \label{fig:nonLTE-analysis} \end{figure*} We started by exploring the $\chi^2$ surface over large ranges and gradually zoomed to smaller ones to be sure not to miss local minima. In practice, each grid consisted of about 10$^4$ models and we ran a dozen grids. The results of the analysis are shown in Fig. \ref{fig:nonLTE-analysis} and reported in Table \ref{tab:LVG-results}. \begin{table}[] \centering \begin{tabular}{c|c|c} \hline Parameter [units] & Best-fit & 1$\sigma$ range \\ \hline $\theta$ [$^{\prime\prime}$] & 80 & 32--100 \\ N(HC$_5$N) [$\times 10^{13}$ cm$^{-2}$]& 4.3 & 3--20 \\ N(HC$_5$N)/N(HC$_7$N) & 6 & 4--7 \\ N(HC$_7$N)/N(HC$_9$N) & 4 & 3--5 \\ H$_2$ density [cm$^{-3}$] & 100 & $\geq 100$ \\ Gas temperature [K] & 7.5 & 5--12\\ \hline \end{tabular} \caption{Results of the non-LTE LVG analysis. Best fit emitting region size (row 1), HC$_5$N column density (row 2), HC$_5$N/HC$_7$N and HC$_7$N/HC$_9$N column density ratios (rows 3 and 4, respectively), H$_2$ density (row 5), and gas temperature (row 6).
For each parameter the 1$\sigma$ range is reported in column 3.} \label{tab:LVG-results} \end{table} First, the top left panel of Fig. \ref{fig:nonLTE-analysis} shows the reduced-$\chi^2$ as a function of the HC$_5$N/HC$_7$N and HC$_5$N/HC$_9$N abundance ratios. The best fit (reduced-$\chi^2$= 0.31) is obtained for $f_{5-7}=6^{+1}_{-2}$ and $f_{5-9}=25 \pm10$, along a banana-like curve with HC$_7$N/HC$_9$N=$4\pm1$; the minimum lies at HC$_5$N/HC$_7$N=6 and HC$_7$N/HC$_9$N=4. With these values, the HC$_5$N column density N(HC$_5$N) is equal to $\sim$3--20 $\times 10^{13}$ cm$^{-2}$ and $\theta$ is $\sim$100--32$\arcsec$ (the larger N(HC$_5$N) the smaller $\theta$), as shown in Fig. \ref{fig:nonLTE-analysis}. The lowest $\chi^2$ is obtained for N(HC$_5$N)=$4.3\times 10^{13}$ cm$^{-2}$ and $\theta$=80$\arcsec$. With these values, the lowest $\chi^2$ is obtained for $T_{gas}$=7.5 K and n$_{H2}$=100 cm$^{-3}$. At the 1$\sigma$ level the temperature is between 5 and 12 K and the density remains unconstrained (Fig. \ref{fig:nonLTE-analysis}). The observed and predicted intensities of all the lines are shown in the bottom right panel of Fig. \ref{fig:nonLTE-analysis}. The HC$_5$N, HC$_7$N and HC$_9$N lines are all optically thin (the highest $\tau$ is 0.6 for the HC$_5$N J=4 line) and the lines are moderately subthermally populated (T$_{ex} \sim$6--7 K). \subsubsection{Impact of the limited number of considered levels} Finally, we comment on the possible impact of the limited number of levels (only 50 for each of the three cyanopolyynes) on the derived gas temperature. While the number of levels of HC$_5$N is sufficient for a reasonable analysis at temperatures below 30 K, the limited number of levels of HC$_7$N and HC$_9$N may, in principle, be problematic. However, since the gas temperature derived from the previous LVG analysis is lower than 12 K, the analysis is probably not greatly impacted by the range of energy levels probed by the observations.
For example, the excitation temperatures T$_{ex}$ of the HC$_7$N \textit{J}=13-12 and HC$_9$N \textit{J}=26-25 lines are 7.3 and 6.95 K, respectively, when the gas kinetic temperature is 7.5 K. Therefore, the highest levels of HC$_7$N and HC$_9$N are probably not significantly populated, which makes the above analysis reliable. \section{Collisional coefficients}\label{sec:nonLTE-coll-coef} To the best of our knowledge, no collisional data are available for the HC$_5$N and HC$_7$N molecules. However, as already suggested by \cite{Snell:81}, HC$_5$N--H$_2$ and HC$_7$N--H$_2$ rate coefficients may be estimated from the HCN--H$_2$ and HC$_3$N--H$_2$ rate coefficients, assuming that the rate coefficients scale with the size of the molecule. We considered the HC$_3$N--H$_2$ rate coefficients as a reference for the estimation of the HC$_5$N--H$_2$ and HC$_7$N--H$_2$ rate coefficients. As a crude approximation, HC$_5$N and HC$_7$N are respectively 1.5 and 2 times longer than HC$_3$N, whereas HCN is 2 times shorter than HC$_3$N. \cite{Snell:81} scaled the HC$_3$N rate coefficients by factors of 1.5 and 2 to obtain the HC$_5$N and HC$_7$N rate coefficients, respectively. They also checked that the HCN and HC$_3$N rate coefficients obey the same scaling. However, as noticed by \cite{Snell:81}, the scaling factors are averages, and significant deviations occur depending on the transition and the temperature considered. In this work, in order to improve the accuracy of the estimation, we consider scaling factors that depend not only on the size of the molecule (1.5 for HC$_5$N and 2 for HC$_7$N) but also on the transition and on the temperature.
Indeed, the ratio of the HC$_3$N--H$_2$ over the HCN--H$_2$ rate coefficients was also used to evaluate the HC$_5$N--H$_2$ and HC$_7$N--H$_2$ rate coefficients, as follows: \begin{eqnarray} k^{\rm{HC_7N-H_2}}_{J \to J'} (T) & = & k^{\rm{HC_3N-H_2}}_{J \to J'} (T) \frac{k^{\rm{HC_3N-H_2}}_{J \to J'} (T)}{k^{\rm{HCN-H_2}}_{J \to J'} (T)} \end{eqnarray} and \begin{eqnarray} k^{\rm{HC_5N-H_2}}_{J \to J'} (T) & = & \frac{1}{2} (k^{\rm{HC_3N-H_2}}_{J \to J'} (T)+k^{\rm{HC_7N-H_2}}_{J \to J'} (T)) \end{eqnarray} The HCN--H$_2$ rate coefficients of \cite{benabdallah:12} and the HC$_3$N--H$_2$ rate coefficients of \cite{Wernli:07} were used in the above formulae. With this approach, we expect a better description of collisionally induced transitions with large $\Delta J$ ($J'-J \gg 1$) and of the very low temperature regime. In the astrophysical models, the HC$_9$N--H$_2$ rate coefficients were taken to be the same as the HC$_7$N--H$_2$ ones. It should be noted that the present rate coefficients are expected to be accurate only to within about an order of magnitude. Future calculations of the rate coefficients for the HC$_5$N and HC$_7$N molecules will have to be performed with a more reliable approach. \section{Discussion} \label{sec:discussion} \subsection{Origin of the cyanopolyyne emission in L1544} \label{subsec:discussion-origin} The high spectral resolution provided by the present observations allows us to resolve in detail the shapes of the observed cyanopolyyne lines (see Sec. \ref{subsec:results}). Specifically, the lines present a double peak, with the red-shifted intensity brighter than the blue-shifted one (Sec. \ref{subsec:results}). Previous maps by \cite{Spezzano2016} and \cite{Spezzano2017} show that the emission of carbon chains is concentrated in the south-east region of L1544, in contrast with that of methanol, whose lines are bright in the north.
The velocity in the two regions is slightly different, with the C-chain peak red-shifted with respect to the velocity of the methanol peak. In the present observations, the velocity of the cyanopolyyne red-shifted peak is consistent with that of the carbon-chain emitting region. Therefore, the fact that the cyanopolyyne lines are brighter in the red-shifted peak implies that they mainly originate in the south-east region of the L1544 core. These results support the idea that the carbon-chain abundance is enhanced towards the external part of the core, where material is more exposed to the interstellar radiation field. The southern part of the source is particularly exposed since it is located at the edge of the cloud \citep{Andre2010}, and this would increase the number of free carbon atoms available to form long chains (see also \citealt{Spezzano2016}). The line profiles observed with the GBT are consistent with those observed by \citet{Gupta2009} in other carbon chains, but clearly different from those observed in high-density gas tracers towards L1544 (e.g., N$_2$H$^+$ and N$_2$D$^+$; \citealt{Caselli2002,Caselli2017}), supporting this interpretation. The non-LTE LVG analysis (Sec. \ref{subsec:model-LVG}) indicates that the cyanopolyyne emission originates predominantly from an extended region ($\sim$ 13600 au in radius) at low temperature (5--12 K). Unfortunately, the gas density is unconstrained, other than being larger than $\sim$ 100 cm$^{-3}$. \begin{figure} \centering \includegraphics[width=1\columnwidth]{Herschel-L1544-Td.eps} \caption{Dust temperature calculated from Herschel/SPIRE observations at 250, 350, and 500 $\mu$m towards L1544, presented in \citet[][]{Spezzano2016}. The GBT HPBWs in the two observed bands are superposed in black.} \label{fig:herschel} \end{figure} The derived temperature is in agreement with the dust temperature measured with Herschel and shown in Fig. \ref{fig:herschel}.
SPIRE is only sensitive to the extended emission, and the core regions probed by the GBT have a dust temperature ranging between 11.5 K and 15 K. On the other hand, other molecular tracers, such as NH$_3$ and its isotopologues, show that the gas temperature decreases towards the central part of the core, down to $\sim$ 7 K \citep{Crapsi2007}. However, the analysis has several limitations which should be taken into account. First, the physical model of the source used in our analysis of Sec. \ref{subsec:model-loc}, i.e. the one by \citet{Keto2010}, assumes spherical symmetry within the observed region. In contrast, the non-LTE LVG analysis in Sec. \ref{subsec:model-LVG} assumes a semi-infinite slab. Moreover, the spectral profiles suggest that the cyanopolyyne emission is non-homogeneously distributed, and the modelling does not distinguish the different emitting components. The results of the modelling would then be representative of the physical conditions of the main emitting component, but the conditions could be locally different. Another assumption of the Sec. \ref{subsec:model-LVG} non-LTE LVG modelling is that all the cyanopolyynes are cospatial and trace the same gas. Although this hypothesis is supported by the similar line profiles, it cannot be verified using single-dish GBT observations alone. Finally, recent observations have shown the presence of deuterated carbon chains (c-C$_3$HD and c-C$_3$D$_2$) towards the dust peak at the center of the core \citep{Giers2022}. This would suggest that, even if the bulk of the emission comes from the external layers, a significant fraction of carbon-rich molecular species can still be present in the densest regions of the core, where free carbon atoms are produced by the destruction of CO by ions created by cosmic rays (such as He$^{+}$), as first proposed by \citet{Ruffle1999}. Further interferometric observations are needed to map the cyanopolyyne distribution across the core.
In this respect, the next generation of radio interferometers, such as SKA\footnote{https://www.skao.int/} and the ngVLA\footnote{https://ngvla.nrao.edu/}, will represent a major step forward. \subsection{Comparison of L1544 with other sources} \label{subsec:discussion-comparison} Cyanopolyyne emission is ubiquitous in the ISM. The simplest cyanopolyyne, HC$_3$N, was one of the first molecular species detected outside of our Galaxy \citep{Mauersberger1990}. Small cyanopolyynes, up to HC$_7$N, have been detected in several starless and protostellar cores in different star forming regions. However, few measurements exist so far of more complex cyanopolyynes such as HC$_9$N and HC$_{11}$N. Figure \ref{fig:cyano-comparison} (left panel) and Table \ref{Tab:cyano-other-sources} report the column densities measured in starless cores for cyanopolyynes from HC$_5$N to HC$_9$N. The only source for which all the cyanopolyynes from HC$_3$N to HC$_{11}$N have been measured so far is TMC-1. Figure \ref{fig:cyano-comparison} (right panel) shows the comparison between L1544 (this work) and TMC-1 for cyanopolyynes from HC$_3$N to HC$_{11}$N. The deep QUIJOTE and GOTHAM surveys, performed on the source with the Yebes 40m telescope and the GBT, respectively, led to precise column density measurements. The major uncertainties are introduced by the assumption on the source size, which is highly covariant with the column density. Moreover, since the cyanopolyynes are very abundant in the source, the column densities of some species, such as HC$_5$N and HC$_9$N, could be affected by line opacity effects \citep[][and private communication]{Cernicharo2020}. For all these reasons, every comparison between the column densities measured in different sources has to be taken with caution. In TMC-1 all the cyanopolyynes have higher column densities than in L1544. In particular, N(HC$_3$N) is higher by a factor of 2--3, N(HC$_5$N) by a factor of 2--4, N(HC$_7$N) by a factor of 3--5 and N(HC$_9$N) by a factor of 12.
The associated errors are quite large, and further constraints on the cyanopolyyne spatial distribution are needed in order to derive reliable abundance measurements and to effectively constrain the formation routes. However, the measurements in L1544 confirm that TMC-1 is not a unique source: an efficient carbon-chain chemistry is active in other sources as well. The comparison between the HC$_5$N, HC$_7$N and HC$_9$N column densities, reported in Figure \ref{fig:cyano-comparison} (left panel), suggests that the same chemistry could be active in other cold cores too. The fact that the measured column densities are similar within one order of magnitude, considering the different instruments and the large errors, may suggest that the formation of heavy cyanopolyynes (with more than 5 carbon atoms) proceeds similarly in different star forming regions, once free gaseous carbon is available (see Subsection \ref{subsec:discussion-tmc1}). \begin{figure*} \centering \includegraphics[width=1\columnwidth]{Cyano-star-forming-regions.pdf} \includegraphics[width=1\columnwidth]{L1544-TMC1-comparison2.pdf} \caption{Comparison between the cyanopolyyne column densities measured in different star forming regions. \textit{Left panel}: Comparison of the column densities of HC$_5$N, HC$_7$N and HC$_9$N. We indicate with the same color the cold cores located in the same star forming region. Cyanopolyyne emission is commonly detected in several star forming regions, and the column densities agree within one order of magnitude, considering the errors and the different instruments. The measurements are taken from the references in Table \ref{Tab:cyano-other-sources}. \textit{Right panel}: Comparison between the cyanopolyyne abundances from HC$_3$N to HC$_{11}$N measured in TMC-1 and L1544 (this work).
For TMC-1, the filled circles are from \citet{Loomis2021}, while the empty ones refer to \citet{Cernicharo2020} and \citet{Cabezas2022}.} \label{fig:cyano-comparison} \end{figure*} \begin{table*}[] \centering \begin{tabular}{cc|ccccc} \hline Star forming region & Source & T$_{ex}$ & N(HC$_5$N) & N(HC$_7$N) & N(HC$_9$N) & References \\ & & K & 10$^{12}$ cm$^{-2}$ & 10$^{12}$ cm$^{-2}$ & 10$^{12}$ cm$^{-2}$ & \\ \hline Taurus & TMC-1 & 8 & 66.9 (1.3) & 36.5 (1.3) & 22 (2) & [1]\\ Taurus & TMC-1 & 8.6 (0.2) -- 7.6 (0.2) & 180 (20) & 21 (2) & -- & [2,3]\\ Taurus & L1527 IRAS 04368+2557 & 12.3 & 5.4 & 1.5 & 0.14 & [4]\\ Lupus & Lupus-1A & 10.0 (0.2) & 12.1 (0.6) & 3.5 (0.2) & 2.2 (0.6) & [5,6,7]\\ Serpens & Serpens South 1a & 7 & 12 (1) & 6.0 (0.2) & 3.1 (0.2) & [8]\\ Lupus & LupusI-2 & 11.5 (0.2) & 22 (1) & 6.1 (0.3) & -- & [7]\\ Lupus & LupusI-5 & 11.2 (0.1) & 39 (1) & 11.0 (0.4) & -- & [7]\\ Lupus & LupusI-7/8/9 & 10.2 (0.1) & 17.7 (0.6) & 4.7 (0.2) & -- & [7]\\ Lupus & LupusI-11 & 11.9 (0.8) & 25.6 (0.8) & 8.0 (0.3) & -- & [7]\\ Chameleon & Cha-MMS1 & 7 (1) & 4.5 (1.6) & 3 (8) & -- & [9]\\ Taurus-Auriga & L1512 & 8.7 (0.7) & 4.9 (0.1) & 1.9 (0.1) & -- & [10]\\ Cepheus & L1251A & 6.2 (0.3) & 7.5 (0.2) & 4.7 (0.4) & -- & [10]\\ Aquila Rift & L492 & 6.5--10 & 41 & 3.2 & -- & [11]\\ \hline \end{tabular} \caption{Cyanopolyyne column densities in starless cores. [1] \citealt{Loomis2021}; [2] \citealt{Cernicharo2020}; [3] \citealt{Cabezas2022}; [4] \citealt{Sakai2008}; [5] \citealt{Sakai2009}; [6] \citealt{Sakai2010}; [7] \citealt{Wu2019}; [8] \citealt{Li2016}; [9] \citealt{Cordiner2012}; [10] \citealt{Cordiner2011}; [11] \citealt{Hirota2006}. \label{Tab:cyano-other-sources}} \end{table*} \subsection{Chemistry of cyanopolyynes} \label{subsec:discussion-chemistry} There is ample consensus that the formation of cyanopolyynes is dominated by gas-phase reactions, in contrast with other large (e.g.
with more than 5 atoms) species, where dust-grain surface chemistry can be at work \citep[e.g.][]{Ceccarelli2022}. The reason is that unsaturated carbon chains, such as the cyanopolyynes, would rapidly be hydrogenated on the dust-grain surfaces, so that, in order to produce the large amount of observed cyanopolyynes and to have them in the gas phase at the low temperatures where they are observed, grain-surface chemistry does not seem a viable solution. Several gas-phase formation routes have been invoked in the literature. In general, for cyanopolyynes with more than five carbon atoms the following reactions are believed to be the most important \citep[for a review see e.g.][]{Fukuzawa1998}: \begin{tabular}{clcl} 1 & C$_{2n+2}$H + N & $\rightarrow$ & HC$_{2n+1}$N + C \\ 2 & C$_{2n}$H$_2$ + CN & $\rightarrow$ & HC$_{2n+1}$N + H \\ 3 & H$_3$C$_{2n+1}$N$^+$ + e$^-$ & $\rightarrow$ & HC$_{2n+1}$N + H$_2$ \\ 4 & H$_2$C$_{2n+1}$N$^+$ + e$^-$ & $\rightarrow$ & HC$_{2n+1}$N + H \\ \end{tabular} where $n\geq2$. Destruction routes are dominated by reactions with ions such as C$^+$, H$^+$, H$_3^+$ and HCO$^+$. \cite{Loison2014} published a critical and general review of the reactions involving carbon chains, including cyanopolyynes up to HC$_9$N. They only list reactions (1), for which they roughly evaluate the rates and branching ratios based on capture theory and on the exothermicity of the products, respectively. Since the rate constants of reactions (1) are considered the same for HC$_7$N and HC$_9$N, and the destruction also occurs at the same rate, the HC$_5$N:HC$_7$N:HC$_9$N ratios would reflect the parent species ratios, namely HC$_6$:HC$_8$:HC$_{10}$. In other words, the HC$_n$N/HC$_{n+2}$N ratios are inherited from the HC$_n$/HC$_{n+2}$ ones, as no reaction directly links HC$_n$N to HC$_{n+2}$N in the \cite{Loison2014} scheme.
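The steady-state argument above can be made concrete with a toy calculation (all rate constants and parent abundances below are arbitrary illustrative numbers, not values from the literature): if each cyanopolyyne is formed from its parent chain with the same rate constant and destroyed with the same rate constant, the cyanopolyyne ratios simply reproduce the parent ratios.

```python
# Toy steady-state check: with identical formation (k_f) and destruction
# (k_d) rate constants, n(HCnN) = k_f * n(parent) * n(N) / (k_d * n(ion)),
# so the HC5N:HC7N:HC9N ratios equal the parent-chain ratios.
k_f, k_d = 1.0e-10, 2.0e-9        # cm^3 s^-1, arbitrary illustrative values
n_N, n_ion = 1.0e-4, 1.0e-8       # atomic N and destroying-ion abundances
parents = {"C6H": 1.0e-9, "C8H": 2.5e-10, "C10H": 6.0e-11}  # arbitrary

# Steady-state abundances of the corresponding cyanopolyynes.
cyanopolyynes = {p: k_f * n * n_N / (k_d * n_ion) for p, n in parents.items()}

ratio_parents = parents["C6H"] / parents["C8H"]
ratio_products = cyanopolyynes["C6H"] / cyanopolyynes["C8H"]
print(ratio_parents, ratio_products)  # equal: the ratio is inherited
```

Every common factor cancels in the ratio, which is why, in this scheme, constraining the cyanopolyyne ratios constrains the parent-chain ratios directly.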
In any case, \cite{Loison2014} modeled the TMC-1 case and predicted the HC$_5$N:HC$_7$N:HC$_9$N ratios to be 1:0.14--0.3:0.10--0.13 at around $1\times10^5$ yr, the age at which the model predictions best agree with the observations (of several carbon chains). The low and high values of each ratio are obtained assuming an elemental C/O abundance ratio equal to 0.7 and 0.95, respectively. In general, the HC$_7$N:HC$_9$N ratio observed toward TMC-1 is larger than the predicted one, even though the error bars are relatively large. Likewise, the ratios that we measured in L1544, HC$_5$N/HC$_7$N $\simeq 6$ and HC$_7$N/HC$_9$N $\simeq 4$ (see Fig. \ref{fig:nonLTE-analysis}), are not consistent with the \cite{Loison2014} model predictions. Reaction (2) was first mentioned and theoretically studied, via ab initio calculations, by \cite{Fukuzawa1998} for $n$=1--4. These authors found that reactions (2) with $n\geq2$ are exothermic and that the transition state barriers are all submerged. However, to the best of our knowledge, \cite{Fukuzawa1998} did not obtain the kinetics of the reactions. Reactions (3) and (4) were first introduced by \cite{Herbst1989} and successively studied by \cite{Loomis2016}. These authors obtained rough estimates of the rate constants via educated guesses (extrapolation from similar reactions with smaller carbon chains or via the Langevin rate). However, again to the best of our knowledge, no specific experimental or theoretical ab initio studies of the rate constants and branching ratios of reactions (3) and (4) exist in the literature. That said, \cite{Loomis2016} developed an astrochemical model to reproduce the observations of TMC-1 and predicted HC$_5$N:HC$_7$N:HC$_9$N ratios equal to about 1:4:4, which is in relatively good agreement with our measured ratios towards L1544. Finally, comparing observations of the $^{12}$C/$^{13}$C cyanopolyyne ratios towards TMC-1 with model predictions, \citet{Burkhardt2018} found that reaction (4) best reproduces the observations.
Other processes could account for the formation of HC$_7$N and HC$_9$N. For instance, the reaction C$_3$N + C$_2$H$_2$ was shown by CRESU experiments to be very fast even at very low temperatures \citep{Fournier2014}. Therefore, the reactions of the C$_3$N radical with C$_4$H$_2$ and C$_6$H$_2$ are expected to be at least as fast, because of the larger size of the molecular partner. In addition, reactions of C$_4$H (detected in this object) with HC$_3$N or HC$_5$N are also expected to be very fast, as the reaction of C$_4$H with acetylene proceeds at the gas-kinetics limit at very low temperatures \citep{Bertheloite2010}. \subsection{Age or UV illumination?} \label{subsec:discussion-tmc1} As already mentioned above, the crucial point of the cyanopolyyne chemistry is the presence of gaseous carbon atoms, which, under standard conditions, are predominantly locked in CO molecules. Two general cases are possible: (a) either the object is very young and the locking of carbon atoms into CO is not yet complete, or (b) carbon atoms are liberated from CO by processes, such as intense UV illumination or cosmic-ray irradiation, that are able to destroy a fraction of the CO. The first case has been advocated for TMC-1 \citep[e.g.][]{agundez2013}. Following the discussion of Sec. \ref{subsec:discussion-origin}, the second case, specifically intense UV illumination, seems to apply to L1544 \citep[e.g.][]{Spezzano2016,Spezzano2017}. In the case of UV illumination, the column density of the region where carbon is atomic is dictated by the penetration of the UV field, which is approximately given by a dust column density with a visual extinction of about 2--3 mag \citep[e.g.][]{Hollenbach1997,Snow2006}, corresponding to N(H$_2$)$\sim$4--6$\times 10^{21}$ cm$^{-2}$. For example, the abundance of HC$_7$N (with respect to H$_2$) in L1544 would be $\sim 2\times10^{-9}$, where we used N(HC$_7$N)$\sim 8\times 10^{12}$ cm$^{-2}$ (\S~\ref{subsec:model-LVG}).
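The abundance estimate just quoted is straightforward arithmetic; a minimal check using the column densities given in the text (the N(H$_2$) range is the one corresponding to the $A_V\sim$2--3 mag UV-penetration layer):

```python
# HC7N abundance in L1544 relative to H2, using the quoted column densities.
N_HC7N = 8e12                       # cm^-2, from the LVG analysis
N_H2_low, N_H2_high = 4e21, 6e21    # cm^-2, A_V ~ 2-3 mag layer

x_high = N_HC7N / N_H2_low          # upper end of the abundance range
x_low = N_HC7N / N_H2_high          # lower end
print(f"x(HC7N) = {x_low:.1e} - {x_high:.1e}")  # ~1.3e-9 to 2e-9
```

The upper end of the range, $2\times10^{-9}$, is the value quoted in the text.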
In the case of a very young cloud, the column density of cyanopolyynes is instead determined by the H$_2$ column density of the cloud itself. For the TMC-1 cloud, the latest estimates indicate N(H$_2$) $\sim 3 \times 10^{22}$ cm$^{-2}$, which corresponds to a visual extinction of about 15 mag \citep{Fuente2019}. Assuming N(HC$_7$N)$\sim 2\times 10^{13}$ cm$^{-2}$ (Sec. \ref{subsec:discussion-comparison}), the HC$_7$N abundance in TMC-1 is $\sim 7\times10^{-10}$, similar to the L1544 value within a factor of $\sim$3, which is the uncertainty of these estimates. Therefore, based on these simple estimates of the $n\geq 5$ cyanopolyyne abundances in TMC-1 and L1544, we confirm the general claim that the cyanopolyyne chemistry depends only on the gaseous carbon abundance. In other words, the $n\geq 5$ cyanopolyyne abundance ratios are the same regardless of the cause of the presence of gaseous carbon atoms. On the other hand, the $n\geq 5$ cyanopolyyne column density is a proxy for that cause. For example, a column density of HC$_7$N $\sim 8\times 10^{12}$ cm$^{-2}$ would be a strong indication of UV-illuminated gas. \section{Summary and Conclusions} \label{sec:conclusions} We performed new observations with the GBT towards the prestellar core L1544. We detected several emission lines from cyanopolyynes, from HC$_3$N to HC$_9$N, detected for the first time towards this source. The spectrally resolved lines show a double-peak profile, suggesting that the bulk of the cyanopolyyne emission is associated with the southern region of the core, where other smaller carbon chains also peak. This supports the idea that cyanopolyynes are mainly formed in the external part of the core, where the interstellar radiation field increases the number of free carbon atoms available to form long chains. We performed a large velocity gradient analysis of the observed HC$_5$N, HC$_7$N, and HC$_9$N lines, thanks to a new estimate of the collisional coefficients.
The simultaneous fitting of the three species indicates a gas kinetic temperature of 5--12 K, a source size of 80$\arcsec$ and a gas density larger than 100 cm$^{-3}$. The abundance ratios measured in L1544 are HC$_5$N/HC$_7$N $\simeq 6$ and HC$_7$N/HC$_9$N $\simeq 4$. The measured column densities are lower by a factor of 2 to 5 than those measured in TMC-1. Although the measurements in other star forming regions are scarce, the results obtained in L1544 suggest that a complex C-chain chemistry is active in other sources as well, related to the presence of free gaseous carbon. The latter can be abundant either because the core is very young and the conversion of carbon into CO is not yet complete, or because CO is destroyed by UV illumination or cosmic-ray irradiation. We suggest that the column density of heavy cyanopolyynes (larger than HC$_5$N) could be a proxy to discriminate between these two regimes. \begin{acknowledgments} We are grateful to Jos\'e Cernicharo for valuable discussions and suggestions. We thank the anonymous referee for the constructive comments. This project has received funding from: 1) the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program, for the Project “The Dawn of Organic Chemistry” (DOC), grant agreement No 741002; 2) the PRIN-INAF 2016 The Cradle of Life - GENESIS-SKA (General Conditions in Early Planetary Systems for the rise of life with SKA); 3) the European Union’s Horizon 2020 research and innovation programs under projects “Astro-Chemistry Origins” (ACO), Grant No 811312; 4) the German Research Foundation (DFG) as part of the Excellence Strategy of the federal and state governments - EXC 2094 - 390783311. The Green Bank Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.\\ \end{acknowledgments} \vspace{5mm} \facilities{GBT} \software{astropy \citep{Astropy1,Astropy2}, matplotlib \citep{Matplotlib}}
\section{Introduction} For two rooted trees $T$ and $U$ we say that $\phi: V(T) \to V(U)$ is an {\it embedding} of $T$ into $U$ if there is an extension of $\phi$ from a subdivision of $T$ to the smallest subtree $U_T$ of $U$ containing all vertices from $\phi(T)$. Of course, there must be no other vertex from $U_T$ between the root of $U$ and the $\phi$-image of $T$'s root. Equivalently, one can define this embedding by using the {\it tree-order}: for a tree $T$, $a\leq b$ provided that $a$ lies on the path from the root of $T$ to $b$. In light of this, an embedding between two trees is a tree-order preserving function $T \to U$. If an embedding $T \to U$ exists, $T$ is said to be a {\it topological minor} of $U$ and we write $T \preceq U$. It is simple to show that the collection of all locally finite trees with the topological minor relation forms a quasi-ordered set of size $\mathfrak{c}$ (i.e., size continuum). When both $T\preceq U$ and $U\preceq T$ hold, $T$ and $U$ are said to be {\it equivalent}, and the equivalence classes generated by this relation are called {\it topological types}. A natural question to ask is: \begin{question} What is the size of the partially ordered set generated by considering all locally finite trees modulo this topological equivalence? \end{question} This question was originally posed by H. van der Holst and partially answered by Matthiesen in \cite{MR2236511} by non-constructive means. More precisely, let $\lambda$ denote the number of topological types of locally finite trees: clearly, $\omega \leq \lambda \leq \mathfrak{c}$, and Matthiesen refined this to $\omega_1 \leq \lambda \leq \mathfrak{c}$ by clever use of Nash-Williams' Theorem (which states that the infinite rooted trees are better-quasi-ordered under the topological minor relation \cite{MR0175814}, \cite{MR1816801}). A good introduction to the subject can be found in \cite{MR2159259}. Matthiesen leaves as an open problem a constructive proof of this fact.
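For finite rooted trees, the embedding relation just defined can be checked by a simple brute-force recursion. The sketch below is our own illustrative code (trees are encoded as tuples of child subtrees, and all helper names are ours): it tests whether one finite rooted tree is a topological minor of another, requiring the children of a branch vertex to land in distinct child subtrees of its image, so that the connecting paths are internally disjoint.

```python
from itertools import permutations

def embeds_at(t, u):
    """T (rooted at t) embeds with t mapped exactly to u: the children of t
    must go, injectively, into distinct child subtrees of u."""
    for assignment in permutations(u, len(t)):
        if all(embeds(c, w) for c, w in zip(t, assignment)):
            return True
    return len(t) == 0  # a single vertex embeds at any vertex

def embeds(t, u):
    """T embeds somewhere in the subtree rooted at u (topological minor)."""
    return embeds_at(t, u) or any(embeds(t, w) for w in u)

def ray(n):
    """A path with n vertices, rooted at one end (a finite stand-in for T_1)."""
    t = ()
    for _ in range(n - 1):
        t = (t,)
    return t

def comb(k):
    """A finite analogue of T_2: a spine of k vertices, each carrying a
    tooth (a 3-vertex ray) attached by its root."""
    t = ()
    for _ in range(k):
        t = (ray(3), t)
    return t

print(embeds(ray(5), comb(5)))   # True: the ray embeds into the comb
print(embeds(comb(2), ray(30)))  # False: a ray has no branch vertices
```

The two printed answers mirror the first step of the construction below: $T_1 \preceq T_2$ but $T_2 \npreceq T_1$.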
Working solely within ZFC, we address Matthiesen's problem and extend her result by providing a construction of $\omega_1$ many topological types of free locally finite trees. Under the Continuum Hypothesis (CH), this is the largest such construction possible. We address this and other related issues at the end of the paper. \section{The construction} We will inductively define a family ${\mathcal T} = \{T_\alpha \mid \alpha < \omega_1\}$ with the property that for all $\alpha < \beta < \omega_1$, $T_\alpha \preceq T_\beta \npreceq T_\alpha$. This collection then witnesses an uncountable family of topological types. Let us begin by defining $T_1$ to be a ray $R = v_1v_2v_3\ldots$ (denoted simply by $R$ from now on). The tree $T_2$ is constructed by {\it attaching} (by {\it attach} we mean ``to join with an edge'') to each vertex of $R$ a copy of $T_1$ by its root. The resulting tree resembles a comb whose teeth are copies of $T_1$, and we denote this tree by $T_2$. In general: \begin{itemize} \item For $\beta = \alpha +1$: $T_\beta$ is formed from a ray $R$ by attaching to each vertex $v_i$ the root of a distinct copy of $T_\alpha$. \medskip \item Whenever $\beta$ is a limit: choose any strictly monotone and cofinal $\psi: \omega \to \beta$ and, for each $v_i$ in the ray $R$, attach a distinct copy of $T_{\psi(i)}$ by its root. \end{itemize} \noindent For each tree $T_\alpha$ the ray used in its construction will be called its {\it spine}. Next we prove our main result. \begin{theorem}\label{thm:construction} For any $\alpha < \beta < \omega_1$, $T_\alpha \preceq T_\beta \npreceq T_\alpha$. \end{theorem} \begin{proof} By design it should be clear that for any $\alpha < \beta < \omega_1$, $T_\alpha \preceq T_\beta$. We thus focus on the latter assertion and notice that $T_1 \preceq T_2 \npreceq T_1$ starts the induction.
For any $\beta = \alpha +1$, notice that since ${\mathcal T}$ is nested, we need only show that $T_\beta \npreceq T_\alpha$. If $\beta$ is a limit, we still have to show that $T_\beta \npreceq T_\alpha$ for all $\alpha < \beta$. Assume, as inductive hypothesis, that for all $\alpha < \beta < \gamma < \omega_1$ we have $T_\alpha \preceq T_\beta \npreceq T_\alpha$.\\ \noindent \underline{$\gamma$ is a limit ordinal:}\\ Let $\psi: \omega \to \gamma$ be the strictly monotone and cofinal function defining $T_\gamma$. Assume that for some $\alpha < \gamma$ we have $T_\gamma \preceq T_\alpha$, and notice that for some $j\in \omega$ we have $ \alpha < \psi(j)$. Since the j$^\text{th}$ branch of $T_\gamma$ is $T_{\psi(j)}$, this yields $T_{\psi(j)} \preceq T_\alpha$, an impossibility. \\ \noindent \underline{$\gamma = \beta + 1$:}\\ Assume that there is an embedding from $T_\gamma$ into $T_\beta$ and consider any branch $T_\beta^i$ (the i$^{\text{th}}$ copy of $T_\beta$ along $T_\gamma$'s spine). The embedding cannot map $T_\beta^i$ strictly within any branch that stems off of $T_\beta$'s spine (recall that by induction $T_\beta$ cannot be embedded into any such branch). However, if any vertex of $T_\beta^i$ is mapped to a vertex $p$ on the spine of $T_\beta$, then, since the embedding preserves the tree-order (i.e., incomparable vertices are mapped to incomparable vertices), no branch of $T_\gamma$ higher up than $T_\beta^i$ can be mapped anywhere above $p$. This leaves only finitely many branches of $T_\beta$ available for the rest of $T_\gamma$, so at least one branch of $T_\gamma$ must be mapped within a single branch of $T_\beta$, again a contradiction. \end{proof} \begin{corollary} The family ${\mathcal T}$ contains $\omega_1$ many topological types of free locally finite trees. \end{corollary} \begin{proof} It is evident from the proof of Theorem~\ref{thm:construction} that, even considered as free trees, the family ${\mathcal T}$ contains $\omega_1$ many topological types.
\end{proof} \begin{rem} Due to the Nash-Williams theorem, we are forced to construct well-ordered chains when searching for large families of topological types of locally finite trees; the {\it width} of any such family must be finite and thus the bulk of its cardinality must be derived from its height. \end{rem} \section{Conclusions} Working within ZFC, and in light of CH, one cannot construct an example of larger cardinality than the one presented here. It remains an open question whether or not it is a ZFC theorem that any family of topological types of locally finite trees must have cardinality at most $\omega_1$. \begin{question} Is it a theorem of ZFC that there does not exist a family of topological types whose size exceeds $\omega_1$? \end{question} It is simple to show that any tree that contains a copy of all trees in ${\mathcal T}$ must have a copy of the full binary tree. By our closing remark above, this then suggests that the bulk of any potentially large family of topological types (consistent with ZFC) must be developed from well-ordered chains of trees containing the full binary tree. More precisely, we have the following questions. \begin{question} How many topological types of locally finite trees with a finite number of rays are there? \end{question} For any well-ordered set, its {\it order type} is the unique ordinal which is order-equivalent to it. \begin{question} Is it possible to construct, for any $\alpha \in \omega_1$, a family of topological types of locally finite trees of order type $\alpha$? \end{question} We deal with the above questions in \cite{trees}. \bibliographystyle{plain}
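As a purely illustrative aside (not part of the paper), finite truncations of the trees $T_k$, for finite $k$, are easy to generate recursively; the nested-list encoding, the truncation parameter $n$, and the helper names below are our own choices:

```python
def T(k, n):
    """Finite truncation of T_k: every ray is cut to n vertices.

    A rooted tree is encoded as the list of its children's subtrees.
    T(1, n) is a path (a truncated ray); for k >= 2 a truncated spine
    of n vertices carries, at each spine vertex, a copy of T(k-1, n).
    """
    if k == 1:
        node = []                       # tip of the truncated ray
        for _ in range(n - 1):
            node = [node]               # extend the path by one vertex
        return node
    node = []                           # tip of the truncated spine
    for _ in range(n):
        node = [node, T(k - 1, n)]      # next spine vertex + attached copy
    return node

def size(t):
    """Number of vertices of the encoded tree."""
    return 1 + sum(size(c) for c in t)

def max_degree(t, root=True):
    """Maximum vertex degree (children, plus the edge to the parent)."""
    d = len(t) + (0 if root else 1)
    return max([d] + [max_degree(c, root=False) for c in t])
```

For every $k\ge 2$ and $n\ge 2$ the maximum degree of such a truncation is $3$ (spine neighbour, next spine vertex, attached copy), mirroring the local finiteness of the trees $T_\alpha$.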
\section{Introduction} In this paper we assume that \begin{empheq}[box=\mybluebox]{equation*} \label{T:assmp} X \text{~~is a real Hilbert space}, \end{empheq} with inner product $\innp{\cdot,\cdot}$ and induced norm $\norm{\cdot}$. A classical problem in optimization is to find a minimizer of the sum of two proper convex lower semicontinuous functions. This problem can be modelled as \begin{equation} \label{e:sumprob} \text{find $x\in X$ such that $0\in (A+B)x$,} \end{equation} where $A$ and $B$ are maximally monotone operators on $X$, namely the subdifferential operators of the functions under consideration. For detailed discussions on problem \cref{e:sumprob} and the connection to optimization problems we refer the reader to \cite{BC2011}, \cite{Borwein50}, \cite{Brezis}, \cite{BurIus}, \cite{Comb96}, \cite{Simons1}, \cite{Simons2}, \cite{Rock98}, \cite{Zeidler2a}, \cite{Zeidler2b}, and the references therein. Due to its general convergence results, the Douglas--Rachford algorithm has become a very popular splitting technique to solve the sum problem \cref{e:sumprob} provided that the solution set is nonempty. The algorithm was first introduced in \cite{DoRa} to numerically solve certain types of heat equations. Let $x\in X$, let $T=T_{(A,B)}$ be the Douglas--Rachford operator associated with the ordered pair $(A,B)$ (see \cref{def:T}) and let $J_A$ be the resolvent of $A$ (see \cref{f:JA:RA}). In their masterpiece \cite{L-M79}, Lions and Mercier extended the algorithm to be able to find a zero of the sum of two, not necessarily linear and possibly multivalued, maximally monotone operators. They proved that the ``governing sequence" $(T^n x)_\ensuremath{{n\in{\mathbb N}}}$ converges weakly to a fixed point of $T$, and that if $A+B$ is maximally monotone, then the weak cluster points of the ``shadow sequence'' $(J_AT^n x)_\ensuremath{{n\in{\mathbb N}}}$ are solutions of \cref{e:sumprob}. 
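To make the governing and shadow sequences concrete, here is a small numerical sketch (ours, not taken from the papers cited above): we take $A=N_U$ and $B=N_V$ for two lines $U$ and $V$ in $\mathbb{R}^2$, so that the resolvents reduce to the projections $P_U$ and $P_V$; the specific sets, starting point, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative sketch: Douglas-Rachford for A = N_U, B = N_V, where U and V
# are two lines in R^2.  Then J_A = P_U, J_B = P_V, and
# T = Id - J_A + J_B R_A with R_A = 2 J_A - Id.

def P_U(x):
    """Projection onto U = the horizontal axis."""
    return np.array([x[0], 0.0])

def P_V(x):
    """Projection onto V = the diagonal {x1 = x2}."""
    m = 0.5 * (x[0] + x[1])
    return np.array([m, m])

def T(x):
    """Douglas-Rachford operator T = Id - P_U + P_V(2 P_U - Id)."""
    return x - P_U(x) + P_V(2.0 * P_U(x) - x)

x = np.array([3.0, 2.0])        # arbitrary starting point
for _ in range(200):            # governing sequence T^n x
    x = T(x)

shadow = P_U(x)                 # shadow sequence J_A T^n x
# U and V meet only at the origin, so the shadow tends to (0, 0)
```

Here $U\cap V=\{(0,0)\}$, so both sequences converge to the unique solution; the phenomena studied in this paper arise precisely when such a solution fails to exist.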
In \cite{Svaiter}, Svaiter provided a proof of the weak convergence of the shadow sequence, regardless of whether $A+B$ is maximally monotone. Nonetheless, very little is known about the behaviour of the algorithm in the inconsistent setting, i.e., when the set of zeros of the sum is empty. In \cite{BCL04} (see \cref{rem:BCL04}), the authors showed that when $A$ and $B$ are normal cone operators of two nonempty closed convex subsets of $X$, and $P_{\overline{\ensuremath{\operatorname{ran}}}(\ensuremath{\operatorname{Id}} -T)}0\in \ensuremath{\operatorname{ran}} (\ensuremath{\operatorname{Id}}-T)$ (see \cref{F:v:WD}), then the shadow sequence $(J_AT^n x)_\ensuremath{{n\in{\mathbb N}}}$ is bounded and its weak cluster points solve a certain best approximation problem. In this paper we derive some new and useful identities for the Douglas--Rachford operator. The main contribution of the paper is generalizing the results in \cite{BCL04} by proving the full weak convergence of the shadow sequence in the convex feasibility setting (see \cref{thm:nc:shad}). While the general case remains open (see \cref{ex:gen:incon} and \cref{rem:gen:case}), we provide some sufficient conditions for the convergence of the shadow sequence in some special cases (see \cref{thm:abs:gen:shad}). As a by-product of our analysis we present a new proof for the result in \cite{Svaiter} concerning the weak convergence of the shadow sequence (see \cref{thm:simp:pr:d}). Our proof is in the spirit of the techniques used in \cite{L-M79}. The notation used in the paper is standard and follows largely \cite{BC2011}. \section{Useful identities for the Douglas--Rachford operator} We start with two elementary identities which are easily verified directly. \begin{lem} \label{lem:simple:mi} Let $(a,b,z)\in X^3$. Then the following hold: \begin{enumerate} \item \label{lem:simple:mi:i} $\innp{z,z-a+b}=\normsq{z-a+b}+\innp{a,z-a}+\innp{b,2a-z-b}$.
\item \label{lem:simple:mi:ii} $\innp{z,a-b}=\normsq{a-b}+\innp{a,z-a}+\innp{b,2a-z-b}$. \item \label{lem:simple:mi:iii} $\norm{z}^2=\normsq{z-a+b}+\normsq{b-a} +2\innp{a,z-a}+2\innp{b,2a-z-b}.$ \end{enumerate} \end{lem} \begin{lem} \label{lem:abs:8} Let $(a,b,x,y,a^*,b^*,u,v)\in X^8$. Then \begin{align} \innp{(a,b)-(x,y),(a^*,b^*)-(u,v)}&=\innp{a-b,a^*}+\innp{x,u}-\innp{x,a^*}-\innp{a-b,u}\nonumber\\ &\qquad+\innp{b,a^*+b^*}+\innp{y,v}-\innp{y,b^*}-\innp{b,u+v}. \end{align} \end{lem} Unless stated otherwise, we assume from now on that \begin{empheq}[box=\mybluebox]{equation*} A:X\rras X \text{~~and~~} B:X\rras X \text{~~are maximally monotone operators.} \end{empheq} The following result concerning the \emph{resolvent} $J_A:=(\ensuremath{\operatorname{Id}}+A)^{-1}$ and the \emph{reflected resolvent} $R_A:=2J_A-\ensuremath{\operatorname{Id}}$ is well known; see, e.g., \cite[Corollary~23.10(i)\&(ii)]{BC2011}. \begin{fact} \label{f:JA:RA} $J_A:X\to X$ is firmly nonexpansive and $R_A:X\to X$ is nonexpansive. \end{fact} Let us recall the well-known inverse resolvent identity (see \cite[Lemma~12.14]{Rock98}): \begin{equation} \label{e:iri} J_A+J_{A^{-1}}=\ensuremath{\operatorname{Id}}, \end{equation} and the following useful description of the graph of $A$. \begin{fact}[\bf Minty parametrization]{\rm(See \cite{Minty}.)} \label{f:Mintypar} $M\colon X\to \gra A:x\mapsto(J_A x,J_{A^{-1}}x)$ is a continuous bijection, with continuous inverse $M^{-1}\colon \gra A \to X :(x,u)\mapsto x+u$; consequently, \begin{equation}\label{Min:par} \gra A=M(X) = \menge{(J_A x,x-J_A x)}{x\in X}. \end{equation} \end{fact} \begin{definition} The \emph{Douglas--Rachford splitting operator} associated with $(A,B)$ is \begin{empheq}[box=\mybluebox]{equation} \label{def:T} T:=T_{(A,B)}=\tfrac{1}{2}(\ensuremath{\operatorname{Id}}+R_BR_A)=\ensuremath{\operatorname{Id}}-J_A+J_B R_A.
\end{empheq} \end{definition} We will simply use $T$ instead of $T_{(A,B)}$ provided there is no cause for confusion. The following result will be useful. \begin{lem} \label{lem:cluster:fen} Let $x\in X$. Then the following hold: \begin{enumerate} \item \label{lem:cluster:fen:i} $x-Tx=J_A x -J_BR_A x =J_{A^{-1}} x +J_{B^{-1}}R_A x$. \item \label{lem:cluster:fen:ii} $(J_A x,J_BR_Ax,J_{A^{-1}} x, J_{B^{-1}}R_A x)$ lies in $\gra (A\times B)$. \end{enumerate} \end{lem} \begin{proof} \ref{lem:cluster:fen:i}: The first identity is a direct consequence of \cref{def:T}. In view of \cref{e:iri} $J_A x -J_BR_A x -J_{A^{-1}} x -J_{B^{-1}}R_A x= J_A x -(x-J_{A} x)- (J_B+J_{B^{-1}})R_A x =R_A x -R_Ax=0 $, which proves the second identity. \ref{lem:cluster:fen:ii}: Use \cref{e:prod:resolvent} and \cref{f:Mintypar} applied to $A\times B$ at $(x,R_Ax)\in X\times X$. \end{proof} The next theorem is a direct consequence of the key identities presented in \cref{lem:simple:mi}. \begin{thm} \label{thm:simp:pr} Let $x\in X$ and let $y\in X$. Then the following hold: \begin{enumerate} \item $ \begin{aligned}[t] \innp{Tx-Ty,x-y}&=\normsq{Tx-Ty}+\innp{J_Ax-J_Ay,J_{A^{-1}} x-J_{A^{-1}} y}\\ &\qquad +\innp{J_BR_Ax-J_BR_Ay,J_{B^{-1}}R_A x-J_{B^{-1}}R_A y}. \end{aligned} $ \label{thm:simp:pr:-i} \item \label{thm:simp:pr:-ii} $ \begin{aligned}[t] \innp{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y,x-y}&=\normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y} +\innp{J_Ax-J_Ay,J_{A^{-1}} x-J_{A^{-1}} y}\\ &\qquad+\innp{J_BR_Ax-J_BR_Ay,J_{B^{-1}}R_A x-J_{B^{-1}}R_A y}. \end{aligned} $ \item \label{thm:simp:pr:i} $ \begin{aligned}[t] \normsq{x-y}&=\normsq{Tx-Ty}+\normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y} +2\innp{J_Ax-J_Ay,J_{A^{-1}} x-J_{A^{-1}} y}\\ &\qquad+2\innp{J_BR_Ax-J_BR_Ay,J_{B^{-1}}R_A x-J_{B^{-1}}R_A y}. 
\end{aligned} $ \item \label{thm:simp:pr:ii} $ \begin{aligned}[t] &\norm{J_{A} x-J_{A}y}^2+\norm{J_{A^{-1}} x-J_{A^{-1}}y}^2 - \norm{J_{A}T x-J_{A}Ty}^2-\norm{J_{A^{-1}} Tx-J_{A^{-1}} Ty}^2\\ &= \normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y} +2\innp{J_A Tx-J_A Ty,J_{A^{-1}} Tx-J_{A^{-1}} Ty}\\ &\qquad+2\innp{J_BR_Ax-J_BR_Ay,J_{B^{-1}}R_A x-J_{B^{-1}}R_A y}. \end{aligned} $ \item \label{thm:simp:pr:vv} $ \norm{J_{A}T x-J_{A}Ty}^2+\norm{J_{A^{-1}} Tx-J_{A^{-1}} Ty}^2 \le \norm{J_{A} x-J_{A}y}^2+\norm{J_{A^{-1}} x-J_{A^{-1}}y}^2. $ \end{enumerate} \end{thm} \begin{proof} \ref{thm:simp:pr:-i}--\ref{thm:simp:pr:i}: Use \cref{lem:simple:mi}\ref{lem:simple:mi:i}--\ref{lem:simple:mi:iii}, respectively, with $z=x-y$, $a=J_A x-J_Ay$ and $b=J_BR_A x-J_BR_A y$, together with \cref{Min:par} and \cref{def:T}. \ref{thm:simp:pr:ii}: It follows from \cref{e:iri} that \begin{subequations} \label{JA:decom} \begin{align} \norm{x-y}^2&=\norm{J_Ax-J_Ay+J_{A^{-1}}{x}-J_{A^{-1}}{y}}^2\\ &=\norm{J_Ax-J_Ay}^2+\norm{J_{A^{-1}}{x}-J_{A^{-1}}{y}}^2 +2\innp{J_Ax-J_Ay,J_{A^{-1}}{x}-J_{A^{-1}}{y}}. \end{align} \end{subequations} Applying \eqref{JA:decom} to $(Tx,Ty)$ instead of $(x,y)$ yields \begin{align}\label{e:xy:TxTy} \norm{Tx-Ty}^2 &=\norm{J_ATx-J_ATy}^2+\norm{J_{A^{-1}}{Tx}-J_{A^{-1}}{Ty}}^2\notag\\ &\qquad+2\innp{J_ATx-J_ATy,J_{A^{-1}}{Tx}-J_{A^{-1}}{Ty}}. \end{align} Now combine \cref{JA:decom}, \eqref{e:xy:TxTy} and \ref{thm:simp:pr:i} to obtain \ref{thm:simp:pr:ii}. \ref{thm:simp:pr:vv}: In view of \eqref{Min:par}, the monotonicity of $A$ and $B$ implies $\innp{J_ATx-J_ATy,J_{A^{-1}} Tx-J_{A^{-1}} Ty}\ge 0$ and $\innp{J_BR_Ax-J_BR_Ay,J_{B^{-1}}R_A x-J_{B^{-1}}R_A y}\ge 0$. Now use \ref{thm:simp:pr:ii}.
\end{proof} \section{The Douglas--Rachford operator, Attouch--Th\'era duality and solution sets} The \emph{Attouch--Th\'{e}ra dual pair} (see \cite{AT}) of $(A,B)$ is $(A,B)^*:=(A^{-1},B^{-\ovee})$, where \begin{equation} B^{\ovee}:= (-\ensuremath{\operatorname{Id}})\circ B\circ(-\ensuremath{\operatorname{Id}})\quad{\text{and}} \quad B^{-\ovee}:=(B^{-1})^\ovee=(B^\ovee)^{-1}. \end{equation} We use \begin{equation} Z:=Z_{(A,B)}=(A+B)^{-1}(0) \qquad\text{and }\qquad K:=K_{(A,B)}=(A^{-1}+B^{-\ovee})^{-1}(0) \end{equation} to denote the primal and dual solutions, respectively (see, e.g., \cite{JAT2012}). Let us record some useful properties of $T_{(A,B)}$. \begin{fact} \label{f:sd:ZK} The following hold: \begin{enumerate} \item \label{T:fne} {\bf (Lions and Mercier).} $T_{(A,B)}$ is firmly nonexpansive. \item \label{T:selfdual} {\bf (Eckstein).} $T_{(A,B)} = T_{(A^{-1},B^{-\ovee})}. $ \item \label{fix:inc:Z} {\bf(Combettes).} $Z=J_A(\ensuremath{\operatorname{Fix}} T)$. \item \label{fix:inc:K} $K=J_{A^{-1}}(\ensuremath{\operatorname{Fix}} T).$ \end{enumerate} \end{fact} \begin{proof} \ref{T:fne}: See, e.g., \cite[Lemma~1]{L-M79}, \cite[Corollary~4.2.1 on page~139]{EckThesis}, \cite[Corollary~4.1]{EckBer}, or \cref{thm:simp:pr}\ref{thm:simp:pr:-i}. \ref{T:selfdual}: See, e.g., \cite[Lemma~3.6 on page~133]{EckThesis} or \cite[Proposition~2.16]{JAT2012}. \ref{fix:inc:Z}: See \cite[Lemma~2.6(iii)]{Comb04}. \ref{fix:inc:K}: See \cite[Corollary~4.9]{JAT2012}. \end{proof} The following notion, coined by Iusem \cite{Iusem98}, is very useful. We say that $C:X\rras X$ is \emph{paramonotone} if it is monotone and we have the implication \begin{equation} \label{fact:para:i} \left. \begin{array}{c} (x,u)\in \gra C\\ (y,v)\in \gra C\\ \innp{x-y,u-v}=0 \end{array} \right\} \quad\RA\quad \big\{(x,v),(y,u)\big\}\subseteq \gra C. \end{equation} \begin{example} \label{ex:para:goodsub} Let $f:X\to\left]-\infty,+\infty\right]$ be proper, convex and lower semicontinuous.
Then $\pt f$ is paramonotone by \cite[Proposition~2.2]{Iusem98} (or by \cite[Example~22.3(i)]{BC2011}). \end{example} We now recall that the so-called ``extended solution set'' (see \cite[Section~2.1]{EckSvai08} and also \cite[Section~3]{JAT2012}) is defined by \begin{equation} \label{def:ex:sol} \ensuremath{{\mathcal S}} := \ensuremath{{\mathcal S}}_{(A,B)} :=\stb{(z,k)\in X\times X~|~-k\in Bz, k\in Az}\subseteq Z\times K. \end{equation} \begin{fact} \label{fact:para:cc} Recalling \cref{f:Mintypar}, we have the following: \begin{enumerate} \item \label{fact:para:cc:ii} $\ensuremath{{\mathcal S}} = M(\ensuremath{\operatorname{Fix}} T) = \menge{(J_A\times J_{A^{-1}}) (y,y)}{y\in \ensuremath{\operatorname{Fix}} T}$. \item \label{fact:para:cc:iii} $\ensuremath{\operatorname{Fix}} T=M^{-1}(\ensuremath{{\mathcal S}}) = \menge{z+k}{(z,k)\in \ensuremath{{\mathcal S}}}$. \item \label{fact:para:cc:i} {(\bf Eckstein and Svaiter).} $\ensuremath{{\mathcal S}} $ is closed and convex. \end{enumerate} If $A$ and $B$ are paramonotone, then we additionally have: \begin{enumerate} \setcounter{enumi}{3} \item \label{fact:para:ii:b} $\ensuremath{{\mathcal S}}=Z\times K$. \item \label{fact:para:ii:a} $\ensuremath{\operatorname{Fix}} T=Z+K$. \end{enumerate} \end{fact} \begin{proof} \ref{fact:para:cc:ii}\&\ref{fact:para:cc:iii}: This is \cite[Theorem~4.5]{JAT2012}. \ref{fact:para:cc:i}: See \cite[Lemma~2]{EckSvai08}. Alternatively, since $\ensuremath{\operatorname{Fix}} T$ is closed, and $M$ and $M^{-1}$ are continuous, we deduce the closedness from \ref{fact:para:cc:ii}. The convexity was proved in \cite[Corollary~3.7]{JAT2012}. \ref{fact:para:ii:b}~\&~\ref{fact:para:ii:a}: See \cite[Corollary~5.5(ii)\&(iii)]{JAT2012}. \end{proof} Working in $X\times X$, we recall that (see, e.g., \cite[Proposition~23.16]{BC2011}) \begin{equation} \label{e:prod:resolvent} \text{$A\times B$ is maximally monotone \; and }\; J_{A\times B}=J_A\times J_B. 
\end{equation} \begin{cor} \label{thm:simp:pr:d:iii} Let $x\in X$ and let $(z,k)\in \ensuremath{{\mathcal S}}$. Then \begin{subequations} \begin{align} \normsq{(J_{A}T x,J_{A^{-1}} Tx)-(z,k)} &= \norm{J_{A}T x-z}^2+\norm{J_{A^{-1}} Tx-k}^2\\ &\le \norm{J_{A} x-z}^2+\norm{J_{A^{-1}} x-k}^2\\ &=\normsq{(J_{A}x,J_{A^{-1}} x)-(z,k)}. \end{align} \end{subequations} \end{cor} \begin{proof} It follows from \cite[Theorem~4.5]{JAT2012} that $z+k\in \ensuremath{\operatorname{Fix}} T$, $J_A(z+k)=z$ and $J_{A^{-1}}(z+k)=k$. Now apply \cref{thm:simp:pr}\ref{thm:simp:pr:vv} with $y$ replaced by $z+k$. \end{proof} We recall, as a consequence of \cite[Corollary~22.19]{BC2011} and \cref{ex:para:goodsub}, that when $X=\ensuremath{\mathbb R}$, the operators $A$ and $B$ are paramonotone. In view of \cref{fact:para:cc}\ref{fact:para:ii:b}, we then have $\ensuremath{{\mathcal S}}=Z\times K$. \begin{lem} \label{lem:real:RR:static} Suppose that $X=\ensuremath{\mathbb R}$. Let $x\in X$ and let $(z,k)\in Z\times K$. Then the following hold: \begin{enumerate} \item \label{lem:real:1} $\norm{J_{A} Tx-z}^2\le \norm{J_{A}x-z}^2$. \item \label{lem:real:2} $\norm{J_{A^{-1}} Tx-k}^2\le\norm{J_{A^{-1}} x-k}^2$. \end{enumerate} \end{lem} \begin{proof} \ref{lem:real:1}: Set \begin{equation} \label{eq:R:q} q(x,z):=\norm{J_{A} Tx-z}^2- \norm{J_{A}x-z}^2. \end{equation} If $x\in \ensuremath{\operatorname{Fix}} T$ we get $q(x,z)=0$. Suppose that $x\in \ensuremath{\mathbb R}\smallsetminus \ensuremath{\operatorname{Fix}} T$. Since $T$ is firmly nonexpansive we have that $\ensuremath{\operatorname{Id}}-T$ is firmly nonexpansive (see \cite[Proposition~4.2]{BC2011}), hence monotone by \cite[Example~20.27]{BC2011}. Moreover, if $(x-Tx)(x-f)=0$ for some $x\in \ensuremath{\mathbb R}\smallsetminus \ensuremath{\operatorname{Fix}} T$ and $f\in \ensuremath{\operatorname{Fix}} T$, then $x-Tx\neq 0$ forces $x=f\in \ensuremath{\operatorname{Fix}} T$, which is absurd. Therefore $(\forall x\in \ensuremath{\mathbb R}\smallsetminus \ensuremath{\operatorname{Fix}} T)(\forall f\in \ensuremath{\operatorname{Fix}} T)$ we have \begin{equation}\label{ine:fix} {(x-Tx)(x-f)}=\bk{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)f}\bk{x-f}> 0.
\end{equation} Notice that \eqref{eq:R:q} can be rewritten as \begin{equation}\label{eq:R:q:i} q(x,z)=(J_{A}T x-J_{A} x)(\bk{J_{A} Tx-z}+\bk{J_{A}x-z}). \end{equation} We argue by cases. \emph{Case~1:} $x< Tx$.\\ It follows from \eqref{ine:fix} that \begin{equation}\label{e:incr} (\forall f\in \ensuremath{\operatorname{Fix}} T)~x<f. \end{equation} On the one hand, since $J_A$ is firmly nonexpansive, we have that $J_A$ is monotone and therefore $J_A T x-J_A x\ge 0$. On the other hand, it follows from \cref{f:sd:ZK}\ref{fix:inc:Z} that $(\exists f\in \ensuremath{\operatorname{Fix}} T)$ such that $z=J_A f=J_A T f$. Using \eqref{e:incr} and the fact that $J_A$ is monotone we conclude that $J_{A} x-z=J_{A} x-J_{A} f\le 0$. Moreover, since $J_A$ and $T$ are firmly nonexpansive operators on $\ensuremath{\mathbb R}$, we have that $J_A\circ T$ is firmly nonexpansive, hence monotone, and therefore \eqref{e:incr} implies that $J_{A} Tx-z=J_{A} Tx-J_{A} Tf\le 0$. Combining with \eqref{eq:R:q:i} we conclude that \ref{lem:real:1} holds. \emph{Case 2:} $x>Tx$.\\ The proof is similar to that of \emph{Case~1}. \ref{lem:real:2}: Apply \ref{lem:real:1} to $A^{-1}$ and use \cref{e:iri}. \end{proof} In view of \cref{def:ex:sol} one might conjecture that \cref{thm:simp:pr:d:iii} holds when we replace $\ensuremath{{\mathcal S}}$ by $Z\times K$. The following example gives a negative answer to this conjecture. It also illustrates that when $X\neq \ensuremath{\mathbb R}$, the conclusion of \cref{lem:real:RR:static} can fail. \begin{ex}\label{ex:cn:skew} Suppose that $X=\ensuremath{\mathbb R}^2$, that $A$ is the normal cone operator of $\ensuremath{\mathbb R}^2_+$, and that $B:X\to X:(x_1,x_2)\mapsto (-x_2,x_1)$ is the rotator by $\pi/2$. Then $\ensuremath{\operatorname{Fix}} T=\ensuremath{\mathbb R}_+\cdot(1,-1)$, $Z=\ensuremath{\mathbb R}_+\times\stb{0}$ and $K=\stb{0}\times\ensuremath{\mathbb R}_{-}$.
Moreover, $(\exists x\in \ensuremath{\mathbb R}^2)$ $(\exists (z,k)\in Z\times K)$ such that $\normsq{(J_{A}T x,J_{A^{-1}} Tx)-(z,k)} -\normsq{(J_{A} x,J_{A^{-1}} x)-(z,k)}>0$ and $\normsq{J_ATx-z}-\normsq{J_Ax-z}>0$. \end{ex} \begin{proof} Let $(x_1,x_2)\in \ensuremath{\mathbb R}^2$. Using \cite[Proposition~2.10]{Sicon2014} we have \begin{equation} J_B(x_1,x_2)=(\tfrac{1}{2}(x_1+x_2),\tfrac{1}{2}(-x_1+x_2))\;\;\; \text{and}\;\;\;R_B(x_1,x_2)=(x_2,-x_1)=-B(x_1,x_2). \end{equation} Hence, $R_B^{-1}=(-B)^{-1}=B$. Using \eqref{def:T} we conclude that $(x_1,x_2)\in\ensuremath{\operatorname{Fix}} T\siff (x_1,x_2)\in\ensuremath{\operatorname{Fix}} R_BR_A$. Hence \begin{subequations} \begin{align} \ensuremath{\operatorname{Fix}} T&=\menge{(x_1,x_2)\in \ensuremath{\mathbb R}^2}{(x_1,x_2)=R_B R_A(x_1,x_2)}\\ &=\menge{(x_1,x_2)\in \ensuremath{\mathbb R}^2}{R_B^{-1}(x_1,x_2)=2J_A(x_1,x_2)-(x_1,x_2)}\\ &=\menge{(x_1,x_2)\in \ensuremath{\mathbb R}^2}{B(x_1,x_2)+(x_1,x_2)= 2P_{\ensuremath{\mathbb R}^2_+}(x_1,x_2)}\\ &=\menge{(x_1,x_2)\in \ensuremath{\mathbb R}^2}{(x_1-x_2,x_1+x_2)=2P_{\ensuremath{\mathbb R}^2_+}(x_1,x_2)}. \end{align} \end{subequations} We argue by cases. \emph{Case~1:} $x_1\ge 0$ and $x_2\ge 0$. Then $(x_1,x_2)\in \ensuremath{\operatorname{Fix}} T$ $\siff(x_1-x_2,x_1+x_2)=2P_{\ensuremath{\mathbb R}^2_+}(x_1,x_2)=(2x_1,2x_2)$ $\siff x_1=-x_2$ and $x_1=x_2$ $\siff x_1=x_2=0$. \emph{Case~2:} $x_1< 0$ and $x_2< 0$. Then $(x_1,x_2)\in \ensuremath{\operatorname{Fix}} T$ $\siff (x_1-x_2,x_1+x_2)=2P_{\ensuremath{\mathbb R}^2_+}(x_1,x_2)=(0,0)$ $\siff x_1=x_2$ and $x_1=-x_2$ $\siff x_1=x_2=0$, which contradicts that $x_1< 0$ and $x_2< 0$. \emph{Case~3:} $x_1\ge 0$ and $x_2< 0$. Then $(x_1,x_2)\in \ensuremath{\operatorname{Fix}} T$ $\siff (x_1-x_2,x_1+x_2)=2P_{\ensuremath{\mathbb R}^2_+}(x_1,x_2)=(2x_1,0)$ $\siff x_1=-x_2$. \emph{Case~4:} $x_1< 0$ and $x_2\ge0$.
Then $(x_1,x_2)\in \ensuremath{\operatorname{Fix}} T$ $\siff(x_1-x_2,x_1+x_2)=2P_{\ensuremath{\mathbb R}^2_+}(x_1,x_2)=(0,2x_2)$ $\siff x_1=x_2$, which never occurs since $x_1< 0$ and $x_2\ge0$. Altogether we conclude that $\ensuremath{\operatorname{Fix}} T=\ensuremath{\mathbb R}_+\cdot(1,-1)$, as claimed. Using \cref{f:sd:ZK}\ref{fix:inc:Z}\&\ref{fix:inc:K} we have $ Z=J_A(\ensuremath{\operatorname{Fix}} T)=\ensuremath{\mathbb R}_+\times\stb{0}, $ and $ K=J_{A^{-1}}(\ensuremath{\operatorname{Fix}} T)=(\ensuremath{\operatorname{Id}}-J_A)(\ensuremath{\operatorname{Fix}} T)=\stb{0}\times\ensuremath{\mathbb R}_-. $ Now let $a>0$, let $x=(a,0)$, set $z:=(2a,0)\in Z$ and set $k:=(0,-a)\in K$. Notice that, since $R_Ax=x$, we have $Tx=x-P_{\ensuremath{\mathbb R}^2_+}x+\frac{1}{2}(\ensuremath{\operatorname{Id}}-B)x =(a,0)-(a,0)+\frac{1}{2}((a,0)-(0,a))=(\frac{1}{2}a,-\frac{1}{2}a)$. Hence, $J_Ax=P_{\ensuremath{\mathbb R}^2_+}(a,0)=(a,0)$, $J_{A^{-1}} x=P_{\ensuremath{\mathbb R}^2_-}(a,0)=(0,0)$, $J_ATx=P_{\ensuremath{\mathbb R}^2_+}(\frac{1}{2}a,-\frac{1}{2}a)=(\frac{1}{2}a,0)$, and $J_{A^{-1}} Tx=P_{\ensuremath{\mathbb R}^2_-}(\frac{1}{2}a,-\frac{1}{2}a)=(0,-\frac{1}{2}a)$. Therefore \begin{align*} &\quad\normsq{(J_{A}T x,J_{A^{-1}} Tx)-(z,k)} -\normsq{(J_{A} x,J_{A^{-1}} x)-(z,k)}\\ &=\norm{J_{A}T x-z}^2+\norm{J_{A^{-1}} Tx-k}^2 -\norm{J_{A} x-z}^2-\norm{J_{A^{-1}} x-k}^2 \\ &=\normsq{(\tfrac{1}{2}a,0)-(2a,0)}+\normsq{(0,-\tfrac{1}{2}a)-(0,-a)} -\normsq{(a,0)-(2a,0)}-\normsq{(0,0)-(0,-a)}\\ &=\tfrac{9}{4}a^2+\tfrac{1}{4}a^2-a^2-a^2=\tfrac{1}{2}a^2>0. \end{align*} Similarly one can verify that $\normsq{J_ATx-z}-\normsq{J_Ax-z}= \tfrac{5}{4}a^2>0$.
\end{proof} \section{Linear relations} In this section, we assume that\footnote{$A\colon X\rras X$ is a \emph{linear relation} if $\gra A$ is a linear subspace of $X\times X$.} \begin{empheq}[box=\mybluebox]{equation*} A:X\rras X \text{~and~} B:X\rras X\text{~ are maximally monotone linear relations}; \end{empheq} equivalently, by \cite[Theorem~2.1(xviii)]{BMW2}, that \begin{equation} J_A ~\text{and }~J_B ~\text{~are linear operators from $X$ to $X$}. \end{equation} This additional assumption leads to stronger conclusions. \begin{lem} \label{lem:lin:B} $\ensuremath{\operatorname{Id}}-T ={J_A -2J_BJ_A+J_B}$. \end{lem} \begin{proof} Let $x\in X$. Then indeed $x-Tx=J_Ax-J_BR_Ax =J_Ax-J_B(2J_Ax-x) =J_Ax-2J_BJ_Ax+J_Bx$. \end{proof} \begin{lem} Suppose that $U$ is a closed linear subspace of $X$ and that $A=P_U$. Then $A$ is maximally monotone, \begin{equation} \label{eq:JPU} J_A=J_{P_U} =\tfrac{1}{2}(\ensuremath{\operatorname{Id}}+P_{U^\perp}), \quad\text{and}\quad R_A=P_{U^\perp}=\ensuremath{\operatorname{Id}}-A. \end{equation} \end{lem} \begin{proof} Let $(x,y)\in X\times X$. Then \begin{equation} \label{eq:PUX:PUY} y=J_Ax\siff x=y+P_Uy. \end{equation} Now assume $y=J_Ax$. Since $P_U$ is linear, \cref{eq:PUX:PUY} implies that $P_{U^\perp}x=P_{U^\perp}y$. Moreover, $y=x-P_Uy=\tfrac{1}{2}(x+x-2P_Uy) =\tfrac{1}{2}(x+y+P_Uy-2P_Uy)=\tfrac{1}{2}(x+y-P_Uy) =\tfrac{1}{2}(x+P_{U^\perp}y)=\tfrac{1}{2}(x+P_{U^\perp}x)$. Next, $R_A=2J_A-\ensuremath{\operatorname{Id}}=(\ensuremath{\operatorname{Id}}+P_{U^\perp})-\ensuremath{\operatorname{Id}}=P_{U^\perp}$. \end{proof} We say that a linear relation $A$ is \emph{skew} (see, e.g., \cite{BWY2010}) if $(\forall (a,a^*)\in \gra A)$ $\innp{a,a^*}=0$. \begin{lem} Suppose that $A:X\to X$ and $B:X\to X$ are both skew, and $A^2=B^2=-\ensuremath{\operatorname{Id}}$. Then $\ensuremath{\operatorname{Id}} -T=\tfrac{1}{2}(\ensuremath{\operatorname{Id}}-BA)$.
\end{lem} \begin{proof} It follows from \cite[Proposition~2.10]{Sicon2014} that $R_A=A$ and $R_B=B$. Therefore \cref{def:T} implies that $\ensuremath{\operatorname{Id}}-T=\tfrac{1}{2}(\ensuremath{\operatorname{Id}}-R_BR_A)= \tfrac{1}{2}(\ensuremath{\operatorname{Id}}-BA)$. \end{proof} \begin{example} \label{thm:simp:UV} Suppose that $A$ and $B$ are skew. Let $x\in X$ and let $y\in X$. Then the following hold: \begin{enumerate} \item \label{thm:simp:UV:i} $ \innp{Tx-Ty,x-y}=\normsq{Tx-Ty}. $ \item \label{thm:simp:UV:ii} $ \begin{aligned}[t] \innp{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y,x-y}&=\normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}. \end{aligned} $ \item \label{thm:simp:UV:iii} $ \normsq{x-y}=\normsq{Tx-Ty}+\normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}. $ \item \label{thm:simp:UV:iv} $ \begin{aligned}[t] &\norm{J_{A} x-J_{A}y}^2+\norm{J_{A^{-1}} x-J_{A^{-1}}y}^2 - \norm{J_{A}T x-J_{A}Ty}^2-\norm{J_{A^{-1}} Tx-J_{A^{-1}} Ty}^2\\ &= \normsq{(\ensuremath{\operatorname{Id}}-T)x-(\ensuremath{\operatorname{Id}}-T)y}. \end{aligned} $ \item \label{thm:simp:UV:vi} $\normsq{x}=\normsq{Tx}+\normsq{x-Tx}$. \item \label{thm:simp:UV:vii} $\innp{Tx, x-Tx}=0$. \end{enumerate} \end{example} \begin{proof} \ref{thm:simp:UV:i}--\ref{thm:simp:UV:iv}: Apply \cref{thm:simp:pr}, and use \cref{Min:par} as well as the skewness of $A$ and $B$. \ref{thm:simp:UV:vi}: Apply \ref{thm:simp:UV:iii} with $y=0$, and note that $T0=0$ because $T$ is linear here. \ref{thm:simp:UV:vii}: We have $2\innp{Tx, x-Tx}=\normsq{x}-\normsq{Tx}-\normsq{x-Tx}.$ Now apply \ref{thm:simp:UV:vi}. \end{proof} Suppose that $U$ is a closed affine subspace of $X$. One can easily verify that \begin{equation} \label{eq:aff:orthog} (\forall x\in X)(\forall y\in X)\quad \innp{P_U x-P_U y, (\ensuremath{\operatorname{Id}}-P_U)x-(\ensuremath{\operatorname{Id}}-P_U)y}=0.
\end{equation} \begin{example} Suppose that $U$ and $V$ are closed affine subspaces of $X$ such that $U\cap V\neq \fady$, that $A=N_U$, and that $B=N_V$. Let $x\in X$, and let $(z,k)\in Z\times K$. Then \begin{subequations} \begin{align} &\hspace{-2cm}\normsq{(P_U x,(\ensuremath{\operatorname{Id}}-P_{U})x)-(z,k)}- \normsq{(P_U Tx,(\ensuremath{\operatorname{Id}}-P_{U}) Tx)-(z,k)}\\ &= \norm{x-(z+k)}^2 -\norm{Tx-(z+k)}^2 \label{eq:lin:p:i}\\ &= \norm{x-Tx}^2 \label{eq:lin:p:i:i}\\ &=\norm{P_Ux-P_Vx}^2. \label{eq:lin:p:ii} \end{align} \end{subequations} \end{example} \begin{proof} As subdifferential operators, $A$ and $B$ are paramonotone (by \cref{ex:para:goodsub}). It follows from \cref{fact:para:cc}\ref{fact:para:ii:a} and \cite[Theorem~4.5]{JAT2012} that \begin{equation} \label{eq:zk:infix} z+k\in \ensuremath{\operatorname{Fix}} T, \quad P_U(z+k)=z\quad\text{and}\quad (\ensuremath{\operatorname{Id}}-P_{U})(z+k)=k. \end{equation} Hence, in view of \cref{eq:aff:orthog} we have \begin{subequations} \label{sub:e:i} \begin{align} &\hspace{-2 cm}\normsq{(P_U x,(\ensuremath{\operatorname{Id}}-P_{U}) x)-(z,k)}\\ &=\norm{P_Ux-z}^2+\norm{(\ensuremath{\operatorname{Id}}-P_{U})x-k}^2 \\ &=\norm{P_Ux-P_U(z+k)}^2+\norm{(\ensuremath{\operatorname{Id}}-P_{U})x-(\ensuremath{\operatorname{Id}}-P_{U})(z+k)}^2\\ &\quad+2\innp{P_Ux-P_U(z+k),(\ensuremath{\operatorname{Id}}-P_{U}){x}-(\ensuremath{\operatorname{Id}}-P_{U}){(z+k)}}\\ &=\norm{P_Ux-P_U(z+k)+(\ensuremath{\operatorname{Id}}-P_{U}){x} -(\ensuremath{\operatorname{Id}}-P_{U}){(z+k)}}^2\\ &=\norm{x-(z+k)}^2. \end{align} \end{subequations} Applying \eqref{sub:e:i} with $x$ replaced by $Tx$ yields \begin{equation} \label{sub:e:ii} \normsq{(P_U Tx,(\ensuremath{\operatorname{Id}}-P_{U})Tx)-(z,k)}=\norm{Tx-(z+k)}^2. \end{equation} Combining \eqref{sub:e:i} and \eqref{sub:e:ii} yields \cref{eq:lin:p:i}.
It follows from \cref{eq:aff:orthog} and \cref{thm:simp:pr}\ref{thm:simp:pr:i} applied with $(A,B, y)$ replaced by $(N_U,N_V,z+k)$ that $\normsq{x-(z+k)}-\normsq{Tx-T(z+k)}=\normsq{x-Tx-((z+k)-T(z+k))}$, which in view of \cref{eq:zk:infix}, proves \cref{eq:lin:p:i:i}. Now we turn to \cref{eq:lin:p:ii}. Let $w\in U\cap V$. Then $U=w+\parl U$ and $V=w+\parl V$. Suppose momentarily that $w=0$. In this case, $\parl U=U$ and $\parl V=V$. Using \cite[Proposition~3.4(i)]{JAT2014}, we have \begin{equation} \label{T:UV:back} T=T_{(U,V)}=P_VP_U+P_{V^\perp}P_{U^\perp}. \end{equation} Therefore \begin{subequations} \begin{align} x-Tx &=P_Ux+P_{U^\perp}x-P_VP_Ux- P_{V^\perp}P_{U^\perp}x =(\ensuremath{\operatorname{Id}} -P_V)P_Ux+(\ensuremath{\operatorname{Id}}-P_{V^\perp}) P_{U^\perp}x\\ &=P_{V^\perp}P_Ux+P_VP_{U^\perp}x. \label{x:Tx:UV} \end{align} \end{subequations} Using \eqref{x:Tx:UV} we have \begin{subequations} \label{eq:lin:case} \begin{align} \norm{x-Tx}^2&=\norm{P_{V^\perp} P_Ux+P_VP_{U^\perp}x}^2 =\norm{P_Ux-P_{V}P_Ux +P_Vx-P_VP_{U}x}^2\\ &=\norm{P_Ux-2P_{V}P_Ux +P_Vx}^2\\ &=\norm{P_Ux}^2+\norm{P_Vx}^2 +4\norm{P_VP_Ux}^2\\ &\qquad +2\innp{P_Ux,P_Vx} -4\innp{P_Ux,P_VP_Ux}-4\innp{P_Vx,P_VP_Ux}\\ &=\norm{P_Ux}^2+\norm{P_Vx}^2 -2\innp{P_Ux,P_Vx}=\norm{P_Ux-P_Vx}^2. \end{align} \end{subequations} Now, if $w\neq 0$, by \cite[Proposition~5.3]{BLM16} we have $Tx=T_{(\parl U,\parl V)}(x-w)+w$. Therefore, \cref{eq:lin:case} yields $\normsq{x-Tx}=\normsq{(x-w)-T_{(\parl U,\parl V)}(x-w)} =\normsq{P_{\parl U}(x-w)-P_{\parl V}(x-w)} =\normsq{w+P_{\parl U}(x-w)-(w+P_{\parl V}(x-w))}= \norm{P_Ux-P_Vx}^2$, where the last equality follows from \cite[Proposition~3.17]{BC2011}. \end{proof} \section{Main results} In this section we consider the case when the set $Z$ is possibly empty. We recall the following important fact. 
\begin{fact}[\bf{Infimal displacement vector}] \label{F:v:WD} {\rm (See, e.g., \cite{Ba-Br-Reich78}, \cite{Br-Reich77} and \cite{Pazy}.)} Let $T:X\to X$ be nonexpansive. Then $\overline{\ensuremath{\operatorname{ran}}}(\ensuremath{\operatorname{Id}}-T)$ is convex; consequently, the infimal displacement vector \begin{empheq}[box=\mybluebox]{equation} \label{eq:def:v} \gap:=P_{\overline{\ensuremath{\operatorname{ran}}}(\ensuremath{\operatorname{Id}}-T)}0 \end{empheq} is the unique and well-defined element in $\overline{\ensuremath{\operatorname{ran}}}{(\ensuremath{\operatorname{Id}}-T)}$ such that $ \norm{\gap}=\ds\inf_{x\in X}\norm{x-Tx}. $ \end{fact} Following \cite{Sicon2014}, the \emph{normal problem} associated with the ordered pair $(A,B)$ is to\footnote{Let $w\in X$ be fixed. For the operator $A$, the \emph{inner and outer shifts} associated with $A$ are defined by $ \inns[w]{A}\colon X\ensuremath{\rightrightarrows} X \colon x\mapsto A(x-w)$ and $ \outs[w]{A}\colon X\ensuremath{\rightrightarrows} X \colon x\mapsto Ax-w.$ Note that $\inns[w]{A}$ and $\outs[w]{A}$ are maximally monotone. } \begin{equation} \text{find $x\in X$ such that~} 0\in {_{\gap}A} x+B_{\gap}x=Ax-\gap+B(x-\gap). \end{equation} We shall use \begin{equation} Z_\gap:=Z_{({_{\gap}A},B_{\gap})} \qquad\text{and }\qquad K_\gap:=K_{(({_{\gap}A})^{-1} ,(B_{\gap})^{-\ovee})} \end{equation} to denote the \emph{primal normal} and \emph{dual normal solutions}, respectively. It follows from \cite[Proposition~3.3]{Sicon2014} that \begin{equation} Z_\gap\neq \fady\siff \gap\in \ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}-T). \end{equation} \begin{cor} Let $x\in X$ and let $y\in X$.
Then the following hold: \begin{subequations} \label{eqs:series:summ} \begin{align} \sum_{n=0}^\infty\normsq{(\ensuremath{\operatorname{Id}}-T)T^nx-(\ensuremath{\operatorname{Id}}-T)T^ny}<+\infty,\\ \sum_{n=0}^\infty\underbrace{ \innp{J_AT^nx-J_AT^ny,J_{A^{-1}} T^nx-J_{A^{-1}} T^ny}}_{\ge 0}<+\infty,\\ \sum_{n=0}^\infty\underbrace{\innp{J_BR_AT^nx-J_BR_AT^ny, J_{B^{-1}}R_A T^nx-J_{B^{-1}}R_AT^n y}}_{\ge 0}<+\infty. \end{align} \end{subequations} Consequently, \begin{subequations} \label{eqs:limits:zeros} \begin{align} {(\ensuremath{\operatorname{Id}}-T)T^nx-(\ensuremath{\operatorname{Id}}-T)T^ny}\to 0,\\ \innp{J_AT^nx-J_AT^ny,J_{A^{-1}} T^nx-J_{A^{-1}} T^ny}\to 0, \label{e:golden:ineq}\\ \innp{J_BR_AT^nx-J_BR_AT^ny, J_{B^{-1}}R_A T^nx-J_{B^{-1}}R_AT^n y}\to 0. \end{align} \end{subequations} \end{cor} \begin{proof} Let $\ensuremath{{n\in{\mathbb N}}}$. Applying \cref{Min:par}, to the points $T^n x$ and $T^n y$, we learn that $\stb{(J_AT^n x, J_{A^{-1}}T^n x), (J_AT^n y, J_{A^{-1}}T^n y)} \subseteq \gra A$, hence, by monotonicity of $A$ we have $\innp{J_AT^nx-J_AT^ny,J_{A^{-1}} T^nx-J_{A^{-1}} T^ny}\ge 0$. Similarly $\innp{J_BR_AT^nx-J_BR_AT^ny,J_{B^{-1}}R_A T^nx-J_{B^{-1}}R_AT^n y} \ge0$. Now \cref{eqs:series:summ} and \cref{eqs:limits:zeros} follow from \cref{thm:simp:pr}\ref{thm:simp:pr:i} by telescoping. \end{proof} The next result on \fejer\ monotone sequences is of critical importance in our analysis. (When $(u_n)_\ensuremath{{n\in{\mathbb N}}}=(x_n)_\ensuremath{{n\in{\mathbb N}}}$ one obtains a well-known result; see, e.g., \cite[Theorem~5.5]{BC2011}.) 
\begin{lemma}[\bf new \fejer\ monotonicity principle] \label{Lem:sweet:lem} Suppose that $E$ is a nonempty closed convex subset of $X$, that $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ is a sequence in $X$ that is \emph{\fejer monotone with respect to $E$}, i.e., \begin{equation} (\forall e\in E)(\forall\ensuremath{{n\in{\mathbb N}}})\quad\|x_{n+1}-e\|\leq\|x_n-e\|, \end{equation} that $(u_n)_\ensuremath{{n\in{\mathbb N}}}$ is a bounded sequence in $X$ such that its weak cluster points lie in $E$, and that \begin{equation} \label{e:key:property} (\forall e\in E)\;\;\innp{u_n-e,u_n-x_n}\to 0. \end{equation} Then $(u_n)_\ensuremath{{n\in{\mathbb N}}}$ converges weakly to some point in $E$. \end{lemma} \begin{proof} It follows from \cref{e:key:property} that \begin{equation} \label{e:key:zerolim} (\forall (e_1,e_2)\in E\times E)\quad \innp{e_2-e_1,u_n-x_n}=\innp{u_n-e_1,u_n-x_n} -\innp{u_n-e_2,u_n-x_n}\to 0. \end{equation} Now obtain four subsequences $(x_{k_n})_\ensuremath{{n\in{\mathbb N}}}$, $(x_{l_n})_\ensuremath{{n\in{\mathbb N}}}$, $(u_{k_n})_\ensuremath{{n\in{\mathbb N}}}$ and $(u_{l_n})_\ensuremath{{n\in{\mathbb N}}}$ such that $x_{k_n}\weak x_1$, $x_{l_n}\weak x_2$, $u_{k_n}\weak e_1$ and $u_{l_n}\weak e_2$. Taking the limit in \cref{e:key:zerolim} along these subsequences we have $\innp{e_2-e_1,e_1-x_1}=0=\innp{e_2-e_1,e_2-x_2}$, hence \begin{equation} \label{e:final:conc} \normsq{e_2-e_1}=\innp{e_2-e_1,x_2-x_1}. \end{equation} Since $\stb{e_1,e_2}\subseteq E$, we conclude, in view of \cite[Theorem~6.2.2(ii)]{Bau96} or \cite[Lemma~2.2]{BDM15:LNA}, that $\innp{e_2-e_1,x_2-x_1}=0$. By \cref{e:final:conc}, $e_1=e_2$. Hence the bounded sequence $(u_n)_\ensuremath{{n\in{\mathbb N}}}$ has exactly one weak cluster point, and therefore it converges weakly to this point, which lies in $E$. \end{proof} We are now ready for our main result.
\begin{thm}[\bf shadow convergence] \label{thm:abs:gen:shad} Suppose that $ x\in X$, that the sequence $(J_A T^n x)_\ensuremath{{n\in{\mathbb N}}}$ is bounded and its weak cluster points lie in $Z_v$, that $Z_v\subseteq \ensuremath{\operatorname{Fix}}(v+T)$ and that $(\forall n\in \ensuremath{\mathbb N})$ $(\forall y\in \ensuremath{\operatorname{Fix}}(v+T))$ $J_A T^n y=y$. Then the ``shadow'' sequence $(J_A T^n x)_\ensuremath{{n\in{\mathbb N}}}$ converges weakly to some point in $Z_v$. \end{thm} \begin{proof} Let $y\in \ensuremath{\operatorname{Fix}} (v+T)$. Using \cref{e:golden:ineq} and \cite[Proposition~2.4(iv)]{BM15} we have \begin{subequations} \begin{align} \innp{J_AT^n x - y, T^n x+nv-J_AT^n x} &=\innp{J_AT^n x - y, T^n x-J_AT^n x-(y-nv-y)}\\ &=\innp{J_AT^n x -J_A T^n y, (\ensuremath{\operatorname{Id}}-J_A)T^n x-(\ensuremath{\operatorname{Id}}-J_A)T^n y}\\ &\to 0. \end{align} \end{subequations} Note that \cite[Proposition~2.4(vi)]{BM15} implies that $(T^n x +nv)_\ensuremath{{n\in{\mathbb N}}}$ is \fejer monotone with respect to $\ensuremath{\operatorname{Fix}} (v+T)$ and consequently with respect to $Z_v$. Now apply \cref{Lem:sweet:lem} with $E$ replaced by $Z_v$, $(u_n)_\ensuremath{{n\in{\mathbb N}}}$ replaced by $(J_AT^n x)_\ensuremath{{n\in{\mathbb N}}}$, and $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ replaced by $(T^n x +nv)_\ensuremath{{n\in{\mathbb N}}}$. \end{proof} As a powerful application of \cref{thm:abs:gen:shad}, we obtain the following striking strengthening of a previous result on normal cone operators. \begin{thm} \label{thm:nc:shad} Suppose that $U$ and $V$ are nonempty closed convex subsets of $X$, that $A=N_U$, that $B=N_V$, that $\gap=P_{\overline{\ensuremath{\operatorname{ran}}}(\ensuremath{\operatorname{Id}}-T)}$ and that $U\cap(v+V)\neq \fady$. Let $x\in X$. Then $(P_UT^n x)_\ensuremath{{n\in{\mathbb N}}}$ converges weakly to some point in $Z_v=U\cap(v+V)$. 
\end{thm} \begin{proof} It follows from \cite[Theorem~3.13(iii)(b)]{BCL04} that $(P_UT^n x)_\ensuremath{{n\in{\mathbb N}}}$ is bounded and its weak cluster points lie in $U\cap(v+V)$. Moreover, \cite[Theorem~3.5]{BCL04} implies that $Z_v=U\cap(v+V)\subseteq U\cap(v+V) +N_{\overline{U-V}}(v)\subseteq \ensuremath{\operatorname{Fix}} (v+T)$. Finally, \cite[Lemma~3.12~\&~Proposition~2.4(ii)]{BCL04} imply that $(\forall y\in \ensuremath{\operatorname{Fix}} (v+T))(\forall \ensuremath{{n\in{\mathbb N}}})$ $P_UT^n y=P_U(y-nv)=y$, hence all the assumptions of \cref{thm:abs:gen:shad} are satisfied and the result follows. \end{proof} \begin{rem} \label{rem:BCL04} Suppose that $v\in \ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}} -T)$. More than a decade ago, it was shown in \cite{BCL04} that when $A=N_U$ and $B=N_V$, where $U$ and $V$ are nonempty closed convex subsets of $X$, the sequence $(P_UT^n x)_\ensuremath{{n\in{\mathbb N}}}$ is bounded and its weak cluster points lie in $U\cap(v+V)$. \cref{thm:nc:shad} yields the much stronger result that $(P_UT^n x)_\ensuremath{{n\in{\mathbb N}}}$ converges weakly to a point in $U\cap(v+V)$. \end{rem} Here is another instance of \cref{thm:abs:gen:shad}. \begin{ex} \label{ex:gen:incon} Suppose that $U$ is a closed linear subspace of $X$, that $b\in U^\perp\smallsetminus \stb{0}$, that $A=N_U$ and that $B=\ensuremath{\operatorname{Id}}+N_{(-b+U)}$. Then $Z=\fady$, $v=b\in \ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}-T)$, $Z_v=\stb{0}$ and $K_v=U^\perp$. Moreover, $(\forall x\in X)$ $(\forall n\in \ensuremath{\mathbb N})$ $P_UT^n x=\tfrac{1}{2^n}P_Ux\to 0$ and $\norm{P_{U^\perp} T^n x}\to \infty$.
\end{ex} \begin{proof} By the Brezis-Haraux theorem (see \cite[Theorems~3~\&~4]{Br-H} or \cite[Theorem~24.20]{BC2011}) we have $X=\intr X \subseteq \intr \ensuremath{\operatorname{ran}} B = \intr (\ensuremath{\operatorname{ran}} \ensuremath{\operatorname{Id}}+\ensuremath{\operatorname{ran}} N_{(-b+U)}) \subseteq X$, hence $\ensuremath{\operatorname{ran}} B=X$. Using \cite[Corollary~5.3(ii)]{BHM15} we have $\overline{\ensuremath{\operatorname{ran}}}(\ensuremath{\operatorname{Id}}-T) =\overline{(\ensuremath{\operatorname{dom}} A-\ensuremath{\operatorname{dom}} B)}\cap\overline{(\ensuremath{\operatorname{ran}} A+\ensuremath{\operatorname{ran}} B)} =(U+b-U)\cap(U^\perp+X)=b+U$. Consequently, using \cite[Definition~3.6]{Sicon2014} and \cite[Proposition~3.17]{BC2011} we have \begin{equation}\label{ex:e:loc:gap} v=P_{\overline{\ensuremath{\operatorname{ran}}}(\ensuremath{\operatorname{Id}}-T)}0=P_{b+U}0=b+P_{U}(-b)=b\in U^\perp\smallsetminus \stb{0}. \end{equation} Note that $\ensuremath{\operatorname{dom}} \outs[v]{A}=\ensuremath{\operatorname{dom}} A=U$ and $\ensuremath{\operatorname{dom}} \inns[v]{B}=v+\ensuremath{\operatorname{dom}} B=b-b+U=U$, hence $\ensuremath{\operatorname{dom}} ( \outs[v]{A}+\inns[v]{B})=U\cap U=U$. Let $x\in U$. Using \cref{ex:e:loc:gap} we have \begin{subequations} \begin{align} x\in Z_v&\siff 0\in N_{U}x-b+x-b+N_{-b+U}(x-b) = N_{U}x-b+x-b+N_{U}x\\ &\siff 0\in U^{\perp}-b+x-b+U^\perp=x+U^\perp\siff x\in U^\perp, \end{align} \end{subequations} hence $Z_v=\stb{0}$, as claimed. As subdifferentials, both $A$ and $B$ are paramonotone, and so are the translated operators $\outs[v]{A}$ and $\inns[v]{B}$. Since $ Z_v=\stb{0}$, in view of \cite[Remark~5.4]{JAT2012} and \cref{ex:e:loc:gap} we learn that \begin{equation} K_v=(N_U 0-b)\cap(0-b+N_{-b+U}(0-b))=(U^\perp-b)\cap(-b+U^\perp)=U^\perp. \end{equation} Next we claim that \begin{equation} \label{eq:PUTX} (\forall x\in X)\quad P_U Tx=\tfrac{1}{2}P_U x. 
\end{equation} Indeed, note that $J_B=(\ensuremath{\operatorname{Id}}+B)^{-1} =(2\ensuremath{\operatorname{Id}}+N_{-b+U})^{-1} =(2\ensuremath{\operatorname{Id}}+2N_{-b+U})^{-1} =(\ensuremath{\operatorname{Id}}+N_{-b+U})^{-1}\circ(\tfrac{1}{2}\ensuremath{\operatorname{Id}}) =P_{-b+U}\circ(\tfrac{1}{2}\ensuremath{\operatorname{Id}})=-b+\tfrac{1}{2}P_{U}$, where the last identity follows from \cite[Proposition~3.17]{BC2011} and \cref{ex:e:loc:gap}. Now, using that\footnote{It follows from \cite[Corollary~3.20]{BC2011} that $P_U$ is linear, hence, $P_UR_U=P_U(2P_U-\ensuremath{\operatorname{Id}})=2P_U-P_U=P_U$.} $P_UR_U=P_U$ and \cref{ex:e:loc:gap} we have $P_U T =P_U(P_{U^\perp} +J_BR_U) =P_UJ_BR_U =P_U(-b+\tfrac{1}{2}P_{U})R_U =P_U(-b+\tfrac{1}{2}P_{U}) =\tfrac{1}{2}P_U$. To show that $(\forall x\in X)$ $(\forall n\in \ensuremath{\mathbb N})$ $P_UT^n x=\tfrac{1}{2^n}P_Ux$, we use induction. Let $x\in X$. Clearly, when $n=0$, the base case holds. Now suppose that for some $n\in \ensuremath{\mathbb N}$, we have, for every $x\in X$, $P_UT^n x=\tfrac{1}{2^n}P_Ux$. Then, applying the inductive hypothesis with $x$ replaced by $Tx$, and using \cref{eq:PUTX}, we have $P_UT^{n+1} x =P_UT^n (T x) =\tfrac{1}{2^n}P_U Tx =\tfrac{1}{2^n}P_U(\tfrac{1}{2}P_Ux) =\tfrac{1}{2^{n+1}}P_U x$, as claimed. Finally, using \cref{ex:e:loc:gap} and \cite[Corollary~6(a)]{Pazy} we have $\norm{T^n x}\to +\infty$, hence \begin{equation} \normsq{P_{U^\perp}T^n x}=\normsq{T^n x}-\normsq{P_UT^n x} =\normsq{T^n x}-\tfrac{1}{4^n}\|P_U x\|^2\to+\infty. \end{equation} \vspace{-5mm} \end{proof} In fact, as the preceding example shows, the shadow sequence may be unbounded in the general case, even when one of the operators is a normal cone operator. \begin{rem}{\bf{(shadows in the presence of normal solutions)}} \label{rem:gen:case} \begin{enumerate} \item \label{rem:i} \cref{ex:gen:incon} illustrates that even when normal solutions exist, the shadows need not converge.
Indeed, we have $K_v=U^\perp\neq \fady$ but the \emph{dual} shadows satisfy $\norm{P_{U^{\perp}} T^n x}\to +\infty$. \item Suppose that $A$ and $B$ are as defined in \cref{ex:gen:incon}. Set $\widetilde{A}=A^{-1}$, $\widetilde{B}=B^{-\ovee}$ and $\widetilde{Z}= Z_{(\widetilde{A},\widetilde{B})}$. By \cite[Proposition~2.4(v)]{JAT2012} $\widetilde{Z}\neq \fady\siff K_{(\widetilde{A},\widetilde{B})}=Z_{(A,B)}\neq \fady$, hence $\widetilde{Z}= \fady$. Moreover, \cite[Remarks~3.13~\&~3.5]{Sicon2014} imply that $v=b\in \ensuremath{\operatorname{ran}}(\ensuremath{\operatorname{Id}}-T)$ and $\widetilde{Z}_v=U^\perp+b=U^\perp\neq \fady$. However, in the light of \ref{rem:i} the \emph{primal} shadows satisfy $\norm{J_{\widetilde{A}}T^n x}=\norm{J_{A^{-1}}T^n x} =\norm{P_{U^{\perp}} T^n x}\to +\infty$. \item Concerning \cref{thm:abs:gen:shad}, it would be interesting to find other conditions sufficient for weak convergence of the shadow sequence or to even characterize this behaviour. \end{enumerate} \end{rem} \section{A proof of the Lions-Mercier-Svaiter theorem} In this section, we work under the assumptions that \begin{equation} Z\neq \fady \quad\text{and} \quad \ensuremath{\operatorname{Fix}} T\neq \fady. \end{equation} Parts of the following two results are implicit in \cite{Svaiter}; however, our proofs are different. \begin{prop} \label{cor:cluster:fen} Let $x\in X$. Then the following hold: \begin{enumerate} \item \label{cor:cluster:fen:i} $T^n x-T^{n+1} x=J_AT^n x -J_BR_AT^n x =J_{A^{-1}}T^n x +J_{B^{-1}}R_AT^n x \to 0$. \item \label{cor:cluster:fen:ii} The sequence $(J_AT^n x,J_BR_AT^n x,J_{A^{-1}}T^n x, J_{B^{-1}}R_AT^n x)_\ensuremath{{n\in{\mathbb N}}}$ is bounded and lies in $\gra (A\times B)$. \end{enumerate} Suppose that $(a,b,a^*,b^*)$ is a weak cluster point of $(J_AT^n x,J_BR_AT^n x,J_{A^{-1}}T^n x, J_{B^{-1}}R_AT^n x)_\ensuremath{{n\in{\mathbb N}}}$. 
Then: \begin{enumerate} \setcounter{enumi}{2} \item \label{eq:a=b} $ a-b=a^*+b^*=0.$ \item \label{eq:a=-b} $\innp{a,a^*}+\innp{b,b^*}=0.$ \item \label{eq:in:gra} $\bk{a,a^*}\in \gra A$ and $\bk{b,b^*}\in \gra B.$ \item \label{cor:cluster:fen:conc} For every $x\in X$, the sequence $(J_AT^n x,J_{A^{-1}}T^n x)_\ensuremath{{n\in{\mathbb N}}}$ is bounded and its weak cluster points lie in $\ensuremath{{\mathcal S}}$. \end{enumerate} \end{prop} \begin{proof} \ref{cor:cluster:fen:i}: Apply \cref{lem:cluster:fen}\ref{lem:cluster:fen:i} with $x$ replaced by $T^n x$. The claimed strong limit follows from combining \cref{f:sd:ZK}\ref{T:fne} and \cite[Corollary~2.3]{Ba-Br-Reich78} or \cite[Theorem~5.14(ii)]{BC2011}. \ref{cor:cluster:fen:ii}: The boundedness of the sequence follows from the weak convergence of $(T^n x)_\ensuremath{{n\in{\mathbb N}}}$ (see, e.g., \cite[Theorem~5.14(iii)]{BC2011}) and the nonexpansiveness of the resolvents and reflected resolvents of monotone operators (see, e.g., \cite[Corollary~23.10(i) and (ii)]{BC2011}). Now apply \cref{lem:cluster:fen}\ref{lem:cluster:fen:ii} with $x$ replaced by $T^n x$. \ref{eq:a=b}: This follows from taking the weak limit along the subsequences in \ref{cor:cluster:fen:i}. \ref{eq:a=-b}: In view of \ref{eq:a=b} we have $\innp{a,a^*}+\innp{b,b^*}=\innp{a, a^*+b^*}=\innp{a,0}=0$. \ref{eq:in:gra}: Let $((x,y),(u,v))\in \gra (A\times B)$ and set \begin{equation} \label{e:def:seq} a_n:=J_A T^n x, a^*_n:=J_{A^{-1}}T^nx, b_n:=J_BR_A T^n x, b^*_n:=J_{B^{-1}}R_AT^nx. \end{equation} Applying \cref{lem:abs:8} with $(a,b,a^*,b^*)$ replaced by $(a_n,b_n,a^*_n,b^*_n)$ yields \begin{align} \label{eq:apply:lem} \innp{(a_n,b_n)-(x,y),(a^*_n,b^*_n)-(u,v)}& =\innp{\at_n-\bt_n,\ut_n} +\innp{\xt,u} -\innp{\xt,\ut_n} -\innp{\at_n-\bt_n,u}\nonumber\\ &\quad+\innp{\bt_n,\ut_n+\vt_n} +\innp{\yt,v} -\innp{\yt,\vt_n} -\innp{\bt_n,u+v}. \end{align} By \cref{e:prod:resolvent}, $A\times B$ is monotone.
In view of \cref{e:def:seq}, \cref{eq:apply:lem} and \cref{cor:cluster:fen}\ref{cor:cluster:fen:ii}, we deduce that \begin{align} \label{eq:maxm:ve:d} &\innp{\at_n-\bt_n,\ut_n} +\innp{\xt,u} -\innp{\xt,\ut_n} -\innp{\at_n-\bt_n,u}\nonumber\\ &+\innp{\bt_n,\ut_n+\vt_n} +\innp{\yt,v} -\innp{\yt,\vt_n} -\innp{\bt_n,u+v}\ge 0. \end{align} Taking the limit in \cref{eq:maxm:ve:d} along a subsequence and using \cref{e:def:seq}, \cref{cor:cluster:fen}\ref{cor:cluster:fen:i}, \ref{eq:a=b} and \ref{eq:a=-b} yield \begin{align} 0&\le \innp{x,u} -\innp{x,a^*} +\innp{y,v} -\innp{y,b^*}-\innp{b,u+v}\nonumber\\ &=\innp{x,u} -\innp{x,a^*} +\innp{y,v} -\innp{y,b^*}-\innp{a,u}-\innp{b,v} +\innp{a,a^*}+\innp{b,b^*}\nonumber\\ &=\innp{a-x,a^*-u}+\innp{b-y,b^*-v} =\innp{(a,b)-(x,y),(a^*,b^*)-(u,v)}. \end{align} By maximality of $A\times B$ (see \cref{e:prod:resolvent}) we deduce that $((a,b),(a^*,b^*))\in \gra (A\times B)$. Therefore, $(a,a^*)\in \gra A$ and $(b,b^*)\in \gra B$. \ref{cor:cluster:fen:conc}: The boundedness of the sequence follows from \ref{cor:cluster:fen:ii}. Now let $(a,b,a^*,b^*)$ be a weak cluster point of $((J_AT^n x,J_BR_AT^n x,J_{A^{-1}}T^n x, J_{B^{-1}}R_AT^n x))_\ensuremath{{n\in{\mathbb N}}}$. By \ref{eq:in:gra} we know that $(a,a^*)\in \gra A$ and $(b,b^*)=(a,b^*)\in \gra B$, which in view of \ref{eq:a=-b} implies $a^*\in Aa$ and $-a^*=b^*\in Bb=Ba$, hence $(a,a^*)\in \ensuremath{{\mathcal S}}$, as claimed (see \cref{def:ex:sol}). \end{proof} \begin{thm} \label{thm:simp:pr:d} Let $x\in X$ and let $(z,k)\in \ensuremath{{\mathcal S}}$. Then the following hold: \begin{enumerate} \item \label{thm:simp:pr:d:iii-1} For every $\ensuremath{{n\in{\mathbb N}}}$, \begin{subequations} \begin{align} \normsq{(J_{A}T^{n+1} x,J_{A^{-1}} T^{n+1}x)-(z,k)} &= \norm{J_{A}T^{n+1} x-z}^2+\norm{J_{A^{-1}} T^{n+1}x-k}^2\\ &\le \norm{J_{A} T^{n}x-z}^2+\norm{J_{A^{-1}} T^{n}x-k}^2\\ &=\normsq{(J_{A}T^{n}x,J_{A^{-1}} T^{n}x)-(z,k)}.
\end{align} \end{subequations} \item \label{thm:simp:pr:d:iii-2} The sequence $(J_AT^n x, J_{A^{-1}}T^nx)_\ensuremath{{n\in{\mathbb N}}}$ is Fej\'{e}r monotone with respect to $\ensuremath{{\mathcal S}}$. \item\label{thm:simp:pr:iv} The sequence $(J_AT^n x, J_{A^{-1}}T^nx)_\ensuremath{{n\in{\mathbb N}}}$ converges weakly to some point in $\ensuremath{{\mathcal S}}$. \end{enumerate} \end{thm} \begin{proof} \ref{thm:simp:pr:d:iii-1}: Apply \cref{thm:simp:pr:d:iii} with $x$ replaced by $T^n x$. \ref{thm:simp:pr:d:iii-2}: This follows directly from \ref{thm:simp:pr:d:iii-1}. \ref{thm:simp:pr:iv}: Combine \cref{cor:cluster:fen}\ref{cor:cluster:fen:conc}, \ref{thm:simp:pr:d:iii-2}, \cref{fact:para:cc}\ref{fact:para:cc:i} and \cite[Theorem~5.5]{BC2011}. \end{proof} \begin{cor}{\bf(Lions--Mercier--Svaiter).} Let $x\in X$. Then $(J_AT^n x)_\ensuremath{{n\in{\mathbb N}}}$ converges weakly to some point in $Z$. \end{cor} \begin{proof} This follows from \cref{thm:simp:pr:d}\ref{thm:simp:pr:iv}; see also Lions and Mercier's \cite[Theorem~1]{L-M79} and Svaiter's \cite[Theorem~1]{Svaiter}. \end{proof} \begin{rem}[\bf brief history] \label{rem:history} The Douglas--Rachford algorithm has its roots in the 1956 paper \cite{DoRa} as a method for solving a system of linear equations. Lions and Mercier, in their brilliant seminal work \cite{L-M79} from 1979, presented a broad and powerful generalization to its current form. (See \cite{BLM16} and \cite{Comb04} for details on this connection.) They showed that $(T^nx)_\ensuremath{{n\in{\mathbb N}}}$ converges weakly to a point in $\ensuremath{\operatorname{Fix}} T$ and that the bounded shadow sequence $(J_AT^nx)_\ensuremath{{n\in{\mathbb N}}}$ has all its weak cluster points in $Z$ provided that $A+B$ is maximally monotone. (Note that resolvents are \emph{not} weakly continuous in general; see, e.g., \cite{Zar71:1} or \cite[Example~4.12]{BC2011}.)
Building on \cite{Bau09} and \cite{EckSvai08}, Svaiter provided a beautiful complete answer in 2011 (see \cite{Svaiter}), demonstrating that $A+B$ does not have to be maximally monotone and that the shadow sequence $(J_AT^nx)_\ensuremath{{n\in{\mathbb N}}}$ in fact does converge weakly to a point in $Z$. (He used \cref{thm:simp:pr:d}; however, his proof differs from ours, which is more in the style of the original paper by Lions and Mercier \cite{L-M79}.) Nonetheless, when $Z=\fady$, a complete understanding of $(J_AT^nx)_\ensuremath{{n\in{\mathbb N}}}$ remains open --- to the best of our knowledge, \cref{thm:abs:gen:shad} is currently the most powerful result available. \end{rem} In our final result we show that when $X=\ensuremath{\mathbb R}$, the \fejer monotonicity of the sequence $(J_AT^n x, J_{A^{-1}}T^n x)_\ensuremath{{n\in{\mathbb N}}}$ with respect to $\ensuremath{{\mathcal S}}$ can be decoupled to yield \fejer monotonicity of $(J_AT^n x)_\ensuremath{{n\in{\mathbb N}}}$ and $(J_{A^{-1}}T^n x)_\ensuremath{{n\in{\mathbb N}}}$ with respect to $Z$ and $K$, respectively. \begin{lemma} \label{lem:real} Suppose that $X=\ensuremath{\mathbb R}$. Let $x\in X$ and let $(z,k)\in Z\times K$. Then the following hold: \begin{enumerate} \item \label{lem:real:1:dyn} The sequence $(J_A T^n x)_\ensuremath{{n\in{\mathbb N}}}$ is \fejer monotone with respect to $Z$. \item \label{lem:real:2:dyn} The sequence $(J_{A^{-1}} T^n x)_\ensuremath{{n\in{\mathbb N}}}$ is \fejer monotone with respect to $K$. \end{enumerate} \end{lemma} \begin{proof} Apply \cref{lem:real:RR:static} with $x$ replaced by $T^n x$. \end{proof} We point out that the conclusion of \cref{lem:real} does not hold when $\dim X\ge 2$; see \cite[Section~5~\&~Figure~1]{JAT2014}. \section*{Acknowledgments} HHB was partially supported by the Natural Sciences and Engineering Research Council of Canada and by the Canada Research Chair Program. \small
\section{Introduction} Despite its many exceptional abilities, the human visual system is particularly weak at counting objects in an image. In fact, given a visual scene with a collection of objects, one can make a rapid, accurate and confident judgement only if the number of items is below five, an ability termed subitizing~\cite{Kaufman49}. For scenes with an increasing number of objects, the accuracy and confidence of the judgements decrease dramatically, until, at some point, counting can only be accomplished by rough estimation or by enumerating the instances, which incurs either low accuracy or a substantial time cost. In this paper, our goal is to develop a {\bf generalised visual object counting} system that augments humans' ability to recognise the number of objects in a visual scene. Specifically, generalised visual object counting refers to the problem of identifying the number of salient objects of {\em arbitrary} semantic class in an image ({\em i.e.}~open-world visual object counting), with an {\em arbitrary} number of instance ``exemplars'' provided by the end user to indicate the particular objects to be counted, {\em i.e.}~from zero-shot to few-shot object counting. To this end, we propose a novel architecture that transforms the input image (with the few-shot annotations, if any) into a density map; the final count can then be obtained by simply summing over the density map. Specifically, we take inspiration from Lu {\em et al.}~\cite{Lu18} that self-similarity is a strong prior in visual object counting, and introduce a transformer-based architecture where the self-similarity prior can be explicitly captured by the built-in attention mechanisms, both among the input image patches and with the few-shot annotations~(if any).
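To make the attention-based similarity idea concrete, here is a minimal, hypothetical sketch (NumPy only; illustrative, not the actual CounTR implementation): image patch features act as queries and the user-provided exemplar features act as keys/values, so the attention weights directly expose patch--exemplar similarity.

```python
# Minimal, hypothetical sketch of patch-exemplar cross-attention
# (illustrative only -- not the actual CounTR implementation).
import numpy as np

def cross_attention(patches, exemplars):
    """patches: (N, d) patch tokens; exemplars: (K, d) exemplar tokens.

    Returns the attended features (N, d) and the attention map (N, K),
    whose rows are softmax-normalised patch-to-exemplar similarities.
    """
    d = patches.shape[-1]
    scores = patches @ exemplars.T / np.sqrt(d)      # scaled dot-product
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over exemplars
    return weights @ exemplars, weights

rng = np.random.default_rng(0)
patch_tokens = rng.normal(size=(16, 8))    # 16 image patches, feature dim 8
exemplar_tokens = rng.normal(size=(3, 8))  # K = 3 exemplar crops
attended, attn = cross_attention(patch_tokens, exemplar_tokens)
print(attended.shape, attn.shape)          # (16, 8) (16, 3)
```

Each row of `attn` is a distribution over the exemplars, which is the sense in which attention "explicitly captures" the similarity prior.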
We propose a two-stage training scheme, in which the transformer-based image encoder is first pre-trained with self-supervision via masked image modeling~\cite{he2022masked}, and then fine-tuned with supervision for the task at hand. We demonstrate that self-supervised pre-training can effectively learn the visual representation for counting, thus significantly improving the performance. Additionally, to tackle the long-tailed challenge in existing generalised visual object counting datasets, where the majority of images only contain a small number of objects, we propose a simple, yet scalable, pipeline for synthesizing training images with a large number of instances, thereby establishing a reliable source of training data for learning to condition on the user-provided instance exemplars. To summarise, in this paper, we make four contributions: {\em First}, we introduce a transformer-based architecture for generalised visual object counting, termed \textbf{CounTR}~(pronounced ``counter''). It exploits the attention mechanisms to explicitly capture the similarity between image patches, or with the few-shot instance ``exemplars'' provided by the end user; {\em Second}, we adopt a two-stage training regime~(self-supervised pre-training, followed by supervised fine-tuning) and show its effectiveness for the task of visual counting; {\em Third}, we propose a simple yet scalable pipeline for synthesizing training images with a large number of instances, and demonstrate that it can significantly improve the performance on images containing a large number of object instances; {\em Fourth}, we conduct thorough ablation studies on the large-scale counting benchmark, {\em e.g.}~FSC-147~\cite{Ranjan21}, and demonstrate state-of-the-art performance on both zero-shot and few-shot settings, improving the previous best approach by over 18.3\% on the mean absolute error of the test set.
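The mosaic-style synthesis idea can be sketched as follows: tiling several training images and their density maps yields a new image whose ground-truth count is the sum of the parts. This is a hypothetical minimal version; the helper names and shapes are illustrative only, and the actual pipeline involves further augmentation.

```python
# Hedged sketch of mosaic-style synthesis: tile four images (and their
# density maps) into a 2x2 composite; the composite's ground-truth count
# is the sum of the tile counts. Shapes/names are illustrative only.
import numpy as np

def mosaic(images, densities):
    """Compose four HxWxC images (and HxW density maps) into a 2x2 mosaic."""
    top = np.concatenate([images[0], images[1]], axis=1)
    bottom = np.concatenate([images[2], images[3]], axis=1)
    img = np.concatenate([top, bottom], axis=0)
    d_top = np.concatenate([densities[0], densities[1]], axis=1)
    d_bottom = np.concatenate([densities[2], densities[3]], axis=1)
    den = np.concatenate([d_top, d_bottom], axis=0)
    return img, den

imgs = [np.zeros((4, 4, 3)) for _ in range(4)]
dens = [np.zeros((4, 4)) for _ in range(4)]
for i, d in enumerate(dens):
    d[0, 0:i + 1] = 1.0                  # tile i contains i + 1 objects
img, den = mosaic(imgs, dens)
print(img.shape, den.sum())              # (8, 8, 3) 10.0
```

Because counts add under tiling, arbitrarily instance-dense training images can be manufactured from the many low-count images in existing datasets.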
\begin{comment} that a property of images that has been largely ignored explicitly in previous counting approaches – that of image self-similarity. At a simplistic level, an image is deemed self-similar if patches repeat to some approximation – for example if patches can be represented by other patches in the same image. Self-similarity has underpinned applications for many vision tasks, ranging from texture synthesis [11], to image denoising [7], to super-resolution [13]. Any-shot counting can be divided into zero-shot counting and few-shot counting. In zero-shot counting, the categories in training and test are disjoint, and the model can directly count novel categories without fine-tuning. Given an image containing counted objects, the model can directly calculate the salient objects in it, regardless of the categories of objects. Zero-shot counting method has excellent convenience. However, because the meaning of "salient objects" is not clear enough, in most cases, the model would only count the objects with the most repeated occurrences, which may lead to the fact that the actual counted objects may not be what we want. To make up for this deficiency, few-shot counting is proposed. In the few-shot scenario, the model enables to incorporate the input from end-users in the form of instance exemplars and selectively counts the objects based on these exemplars. In this way, we can solve the object counting problem by matching. The matching here is not simply to calculate the squared error pixel by pixel, but to study the texture and morphology feature correlation between exemplars and the query image. After that, we can naturally regress the density map corresponding to the original image according to the refined feature, and then sum it up to get the total number of objects. This approach can also enhance the interpretability of the object counting model to a certain extent. 
We exploit a two-stage training mechanism, namely, self-supervised pre-training, followed by fine-tuning. We found that since images used for object counting generally contain a large number of similar objects, these images tend to have good self-similarity. During self-supervised learning, we exploit this self-similarity prior in counting problem, and pre-train the visual representations through Masked Autoencoders (MAE). MAE uses unmasked patches to reconstruct randomly masked pixels to train the encoder's ability to extract visual representations. This idea is very similar to the prior of image self-similarity, and that's why we choose MAE. During supervised fine-tuning, we exploit an "inversed" transformer architecture to conduct feature correlation. The density map is regressed based on this refined feature. We also use several data augmentation methods, including random cropping, geometric transformation, mosaic, etc. Mosaic is proposed by us to improve the model's ability to identify different object categories and count large numbers of objects. Since the train set and test set categories are disjoint, data augmentation can also help prevent overfitting and improve our model's performance. Besides, we also conducted thorough ablation studies to prove the effectiveness of self-supervised training (e.g. use or not use, masking ratio) and analyze the performance of data augmentation during fine-tuning (e.g. random cropping, geometric transformation, mosaic, etc.) We found that MAE pre-training plays an important role in our object counting model. Of all the data augmentation methods, mosaic makes the greatest contribution to our model's performance. We also analyze the effect of shot number in few-shot counting. While evaluating on FSC147, we demonstrate state-of-the-art performance. Our model achieves 14.76 MAE, 57.36 RMSE on the validation set and 13.62 MAE, 101.68 RMSE on the test set. It outperforms other models especially on MAE. 
\weidi{We need a teaser figure for the introduction, demonstrating the generalised object counting problem.} \end{comment} \section{Related Work} \paragraph{Visual object counting.} In the literature, object counting approaches can generally be cast into two categories: detection-based counting~\cite{barinova2012detection,desai2011discriminative,hsieh2017drone} or regression-based counting~\cite{arteta2014interactive,arteta2016counting,cho1999neural,kong2006viewpoint,lempitsky2010learning,marana1997estimation,xie2018microscopy}. The former relies on a visual object detector that can localize object instances in an image, this method, however, requires training individual detectors for different objects, and the detection problem remains challenging if only a small number of annotations are given. The latter avoids solving the hard detection problem, instead, methods are designed to learn either a mapping from global image features to a scalar (number of objects), or a mapping from dense image features to a density map, achieving better results on counting overlapping instances. However, previous methods from both lines~(detection, regression) have only been able to count objects of one particular class~({\em e.g.}~cars, cells).\\[-0.8cm] \paragraph{Class-agnostic object counting.} Recently, class-agnostic few-shot counting~\cite{Lu18,Ranjan21,you2022iterative} has witnessed a rise in research interest in the community. Unlike the class-specific models that could only count objects of specific classes like cars, cells, or people, class-agnostic counting aims to count the objects in an image based on a few given ``exemplar'' instances, thus is also termed as few-shot counting. Generally speaking, class-agnostic few-shot counting models need to mine the commonalities between the counts of different classes of objects during training. 
In~\cite{Lu18}, the authors propose a generic matching network~(GMN), which regresses the density map by computing the similarity between the CNN features from image and exemplar shots; FamNet~\cite{Ranjan21} utilizes feature correlation for prediction and uses adaptation loss to update the model's parameters at test time; SAFECount~\cite{you2022iterative} uses the support feature to enhance the query feature, making the extracted features more refined and then regresses to obtain density maps; In a very recent work~\cite{Hobley22}, the authors exploit a pre-trained DINO~\cite{Mathilde21} model and a lightweight regression head to count without exemplars. In this paper, we also use transformer-based architecture, however, train it from scratch, and augment it with the ability to count the objects with {\em any shot}. \begin{comment} \weidi{TO CONTINUE ...} \paragraph{Image self-similarity prior.} This particular property of images lays the theoretical foundation for many underlying application fields of computer vision: for example, texture synthesis, which is infinite expansion according to a given small texture template; image denoising, which finds similar patches in an image and averages these patches to remove the noise; super-resolution, which replaces small areas with the help of multi-scale regional similarity in the image. In object counting problem, this similarity is often "approximate". The objects may vary a lot in their morphology, while their textures show similar characteristics. Currently, no work has been found to use the image self-similarity prior and learn visual representations suitable for object counting tasks accordingly.\\[-0.8cm] \end{comment} \begin{comment} \paragraph{Self-supervised learning algorithms.} In recent years, self-supervised learning has significantly progressed in basic computer vision tasks, such as image classification. It not only makes pre-trained model comparable to strongly supervised learning in performance. 
It can also reduce the distribution differences between domains and improve model's robustness. However, no previous work has tried to mine the unique prior knowledge in object counting and design a proxy task for it. Masked autoencoder (MAE) proposed by He et al. is very suitable for pre-training our model. It masks a percentage of random patches in the input image and reconstructs those masked pixels from the remained parts of the image. In this way, the encoder can learn image's visual representation. By using MAE, the pre-training speed of transformer models can triple or more, and models' performance can also be improved a lot. \\[-0.8cm] \end{comment} \section{Methods} In this paper, we consider the challenging problem of generalised visual object counting, where the goal is to count the salient objects of \textbf{arbitrary} semantic class in an image, {\em i.e.}~open-world visual object counting, with an \textbf{arbitrary} number of ``exemplars'' provided by the end user, {\em i.e.}~from zero-shot to few-shot object counting.\\[-0.8cm] \paragraph{Overview.} Given a training set, {\em i.e.}~$\mathcal{D}_{\text{train}} = \{(\mathcal{X}_1, \mathcal{S}_1, {y}_1), \dots, (\mathcal{X}_N, \mathcal{S}_N, {y}_N)\}$, where $\mathcal{X}_i \in \mathbb{R}^{H \times W \times 3}$ denotes the input image, $\mathcal{S}_i = \{b_i^k\}_{k=1}^{K}$ denotes the box coordinates~($b_i^k \in \mathbb{R}^4$) for a total of $K \in \{0,1,2,3...\}$ given ``exemplars'', {\em i.e.}~zero-shot or few-shot counting, and $y_i \in \mathbb{R}^{H \times W \times 1}$ refers to a binary spatial density map, with $1$'s at the objects' center locations, indicating their existence, and $0$'s at locations without objects, the object count can thus be computed by spatially summing over the density map.
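The density-map convention above, as a tiny runnable illustration: the ground truth $y_i$ has a $1$ at each object centre, so spatially summing it recovers the count.

```python
# Tiny illustration of the count-from-density-map convention:
# y has 1's at object centres and 0's elsewhere; the count is its spatial sum.
import numpy as np

H, W = 6, 6
y = np.zeros((H, W))
y[1, 2] = y[3, 4] = y[5, 0] = 1.0   # three annotated object centres
count = float(y.sum())               # spatial sum over the density map
print(count)                         # -> 3.0
```

The same summation applies to the model's predicted (continuous-valued) density map at inference time.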
Our goal here is thus to train a generalised visual object counter that can successfully operate on a test set, given zero or few exemplars, {\em i.e.}~$\mathcal{D}_{\text{test}} = \{(\mathcal{X}_{N+1}, \mathcal{S}_{N+1}), \dots, (\mathcal{X}_{M}, \mathcal{S}_{M})\}$. \textbf{Note that} the semantic categories of objects in the training set~($\mathcal{C}_{\text{train}}$) and testing set~($\mathcal{C}_{\text{test}}$) are disjoint, {\em i.e.}~$\mathcal{C}_{\text{train}} \cap \mathcal{C}_{\text{test}} = \emptyset$. To achieve this goal, we introduce a novel transformer-based architecture, termed Counting TRansformer~(\textbf{CounTR}). Specifically, the attention mechanisms in the transformer enable the model to explicitly compare visual features across spatial locations and against the ``exemplars'', which are provided by the end user in the few-shot scenario. In Section~\ref{sec:twostage}, we further introduce a two-stage training regime, in which the model is first pre-trained with self-supervision via masked image reconstruction~(MAE), then fine-tuned on the downstream counting task. To the best of our knowledge, this is the first work to show the effectiveness of self-supervised pre-training for generalised visual object counting. Additionally, in Section~\ref{sec:mosaic}, we propose a novel and scalable {\em mosaic} pipeline for synthesizing training images, as a way to resolve the long-tailed distribution challenge ({\em i.e.}~images with a large number of instances tend to be less frequent) in existing object counting datasets. In Section~\ref{sec:ttnorm}, we introduce our test-time augmentation methods, namely test-time normalisation and test-time cropping. \subsection{Architecture} \label{sec:arch} Here, we introduce the proposed Counting TRansformer~(\textbf{CounTR}), as shown in Figure~\ref{fig:fewshot}.
Specifically, the input image~($\mathcal{X}_i$) and the user-provided ``exemplars''~($\mathcal{S}_i^k, \forall k \in \{0,1, 2, 3\}$) are fed as input and mapped to a density heatmap, from which the object count can be obtained by simply summing over it: \begin{align} y_i = \Phi_{\textsc{decoder}}( \Phi_{\textsc{FIM}}(\Phi_{\textsc{ViT-Enc}} (\mathcal{X}_i), \Phi_{\textsc{CNN-Enc}}(\mathcal{S}_i^k))), \hspace{3pt} \forall k \in \{0,1,...,K\} \end{align} In the following sections, we detail the three building components, namely, the visual encoder~($\Phi_{\textsc{ViT-Enc}}(\cdot)$ and $\Phi_{\textsc{CNN-Enc}}(\cdot)$), the feature interaction module~({\em i.e.}~FIM, $\Phi_{\textsc{FIM}}(\cdot)$), and the visual decoder~($\Phi_{\textsc{decoder}}(\cdot)$). \begin{figure}[t] \centering \includegraphics[width=0.98\textwidth]{images/arch.jpg} \\ \caption{\textbf{Architecture detail for \textbf{CounTR}}. The query image and exemplars are encoded by separate visual encoders. The image features are then fed into the feature interaction module as query vectors, and the exemplar features are fed as key and value vectors. When no instance exemplar is provided, a learnable [\texttt{SPE}] token is used as key and value instead. The outputs are up-sampled in the decoder, yielding the corresponding density map; the object count can be obtained by summing over the density map. \textbf{Note that}, given diverse exemplars, the model should ideally understand the relevant invariances~(shape, color, scale, texture): for example, if the three given exemplars are all of the same color, the model should only count the objects of that color; otherwise, it should count all instances of the same semantic category.
} \label{fig:fewshot} \vspace{-10pt} \end{figure} \subsubsection{Visual Encoder} The visual encoder is composed of two components, serving two purposes: {\em first}, an encoder based on the Vision Transformer~(ViT)~\cite{Dosovitskiy21} that processes the input image and maps it into a high-dimensional feature map; {\em second}, a lightweight ConvNet-based encoder that computes the visual features for the ``exemplars'', if any are provided. Specifically, as for the ViT, the input image is broken into patches of size $16 \times 16$ pixels and projected to tokens by a shared MLP. To indicate the order of each token in the sequence, positional encoding is added, ending up with $M$ ``tokens''. They are further passed through a series of transformer encoder layers; in our model, $12$ layers are used. We do not include the [CLS] token in the sequence, and the output from the ViT encoder is a sequence of $D$-dim vectors: \begin{align} \mathcal{F}_{\textsc{ViT}} = \Phi_{\textsc{ViT-Enc}}(\mathcal{X}_i) \in \mathbb{R}^{M \times D} \end{align} For more details, we refer the reader to the original ViT paper. For few-shot counting, we use the exemplar encoder to extract the visual representation. It exploits a lightweight ConvNet architecture~(4 convolutional layers, followed by a global average pooling) that maps the given exemplars~(resized to the same resolution) into vectors, \begin{align} \mathcal{F}_{\textsc{CNN}} = \Phi_{\textsc{CNN-Enc}}(\mathcal{S}_i^k) \in \mathbb{R}^{K \times D} \end{align} Note that, under the zero-shot scenario with no exemplar given, we adopt a learnable [\texttt{SPE}] token as a substitute to provide cues for the model. \subsubsection{Feature Interaction Module} Here, we introduce the proposed feature interaction module~(FIM), for fusing information from both encoders.
Specifically, the FIM is constructed with a series of standard transformer decoder layers, where the image features act as the $\texttt{Query}$, and two different linear projections~(by MLPs) of the exemplar features~(or the learnable special token) are treated as the $\texttt{Key}$ and $\texttt{Value}$. With such a design, the output from the FIM retains the same dimensions as the image features~($\mathcal{F}_{\textsc{ViT}}$) throughout the interaction procedure: \begin{align} \mathcal{F}_{\textsc{FIM}} = \Phi_{\textsc{FIM}}(\mathcal{F}_{\textsc{ViT}},\text{ } W^{\texttt{k}} \cdot \mathcal{F}_{\textsc{CNN}}, \text{ } W^{\texttt{v}} \cdot \mathcal{F}_{\textsc{CNN}}) \in \mathbb{R}^{M \times D} \end{align} Conceptually, this transformer architecture naturally reflects the self-similarity prior in the counting problem, as observed by Lu {\em et al.}~\cite{Lu18}. In particular, the self-attention mechanism in the transformer decoder enables measuring the self-similarity between regions of the input image, while the cross-attention between \texttt{Query} and \texttt{Key} allows comparing image regions with the \textbf{arbitrary} given shots, incorporating users' input for a more customised specification of the objects of interest, or simply learning to ignore the ConvNet branch when encountering the learnable [\texttt{SPE}] token. \subsubsection{Decoder} At this stage, the outputs from the feature interaction module are reshaped back to 2D feature maps and restored to the input image's original resolution. We adopt a progressive up-sampling design, where the vector sequence is first reshaped to a dense feature map and then processed by a ConvNet-based decoder. Specifically, we use 4 up-sampling blocks, each of which consists of a convolution layer and a $2\times$ bilinear interpolation.
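A shape-level sketch of this progressive up-sampling, assuming a $384 \times 384$ input split into $16 \times 16$ patches (i.e.~a $24 \times 24$ token map), with a nearest-neighbour resize standing in for each convolution + bilinear block:

```python
import numpy as np

# Sketch only: a 24x24 feature map from the feature interaction module,
# up-sampled by four 2x blocks. We substitute nearest-neighbour resizing
# (np.repeat) for the actual convolution + bilinear interpolation.
feat = np.random.rand(24, 24)
for _ in range(4):  # 4 up-sampling blocks
    feat = np.repeat(np.repeat(feat, 2, axis=0), 2, axis=1)

# Four 2x steps restore the original resolution: 24 * 2**4 = 384.
```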
After the last up-sampling, we adopt a linear layer as the density regressor, which outputs a one-channel density heatmap: \begin{align} y_i = \Phi_{\textsc{decoder}}(\mathcal{F}_{\textsc{FIM}}) \in \mathbb{R}^{H \times W \times 1} \end{align} \subsection{Two-stage Training Scheme} \label{sec:twostage} In images, the visual signals are usually highly redundant, {\em e.g.}~pixels within local regions are spatially coherent. This prior is even more pronounced in the counting problem, as objects often appear multiple times in a similar form. Based on this observation, we consider exploiting self-supervised learning to pre-train the visual encoder~($\Phi_{\textsc{ViT-Enc}}(\cdot)$). Specifically, we adopt the recent idea from Masked Autoencoders~(MAE)~\cite{he2022masked}, training the model by image reconstruction from only partial observations. \paragraph{Self-supervised Pre-training with MAE.} In detail, we first divide the image into regular non-overlapping patches and only sample a subset of the patches (50\% in our case) as input to the ViT encoder. The computed features are further passed through a lightweight decoder, consisting of several transformer decoder layers, where the combination of learnable mask tokens and positional encoding is used as the \texttt{Query} to reconstruct the input image from only the observed patches. The training loss is simply defined as the Mean Squared Error (MSE) between the reconstructed image and the input image in pixel space. \\[-0.6cm] \paragraph{Supervised Fine-tuning.} After the pre-training, we initialise the image encoder with the weights of the pre-trained ViT, and fine-tune our proposed architecture on generalised object counting. In detail, our model takes the original image $\mathcal{X}_i$ and $K$ exemplars $\mathcal{S}_i = \{b_i\}^K$ from $\mathcal{D}_{\text{train}}$ as input and outputs the density map $\hat{y_i} \in \mathbb{R}^{H \times W \times 1}$ corresponding to the original image $\mathcal{X}_i$.
The number of salient objects in the image, $C_i \in \mathbb{R}$, can be obtained by summing the discrete density map $\hat{y_i}$. We use the per-pixel mean square error to measure the difference between the predicted density map $\hat{y_i}$ and the ground truth density map $y_i$. The ground truth density maps are generated from the dot annotations: $\mathcal{L}(\hat{y_i}, y_i) = \frac{1}{HW} \sum || y_i - \hat{y_i}||_{2}^2$. \subsection{Scalable Mosaicing} \label{sec:mosaic} In this section, we introduce a scalable {\em mosaic} pipeline for synthesizing training images, in order to tackle the long-tailed problem ({\em i.e.}~very few images contain a large number of instances) in existing counting datasets. We observe that existing datasets for generalised object counting are highly biased towards a small number of objects. For example, in the FSC-147 dataset, only 6 out of 3659 images in the training set contain more than 1000 objects. This is potentially due to the costly procedure of manual annotation. In the following, we elaborate on the two steps of the proposed mosaic training data generation, namely, collage and blending (as shown in Figure~\ref{fig:mosaic}). Note that we also notice one concurrent work~\cite{Hobley22} that uses a similar idea.\\[-0.6cm] \begin{figure}[t] \centering \subfigure[Type A: using four images.]{ \begin{minipage}[t]{0.49\linewidth} \centering \includegraphics[width=0.95\textwidth]{images/mosaic1.png} \end{minipage}% }% \subfigure[Type B: using one image.]{ \begin{minipage}[t]{0.49\linewidth} \centering \includegraphics[width=0.95\textwidth]{images/mosaic2.png} \end{minipage}% }% \vspace{-0.1cm} \caption{\textbf{The mosaic pipeline for synthesizing training images.} (1) stands for crop and scale, and (2) stands for collage and blending. In the following, we refer to crop, scale, and collage jointly as the collage stage.
Type A uses four different images to improve background diversity, and Type B uses only one image to increase the number of objects contained in an image. White highlights are the dot-annotation density map after Gaussian filtering, for visualization.} \label{fig:mosaic} \vspace{-0.5cm} \end{figure} \paragraph{Collage.} Here, we first crop a random-sized square area from the image and scale it to a uniform size, {\em e.g.}~a quarter of the size of the original image. After repeating the region cropping multiple times, we collage the cropped regions together and update the corresponding density map. The collage comes in two different forms: using only one image, or four different images. If we only use one image, we can increase the number of objects contained in the image, which substantially helps tackle the long-tailed problem. If we use four different images, we can significantly improve the training images' background diversity and enhance the model's ability to distinguish between different classes of objects. To combine these two advantages, we adopt the following scheme: if the number of objects contained in the image exceeds a threshold, we collage from the same image; if not, we use four different images. Note that if four different images are used, the mosaiced image can only be used to train the few-shot setting, as otherwise the model would not know which object to count; if the same image is used, the mosaiced image can be used to train both the few-shot and zero-shot settings. \\[-0.8cm] \paragraph{Blending.} Simply cropping and collaging does not synthesize perfect images, as sharp artifacts remain at the boundaries. To resolve these artifacts, we apply blending at the junction of the images. In practice, we crop the image with a slightly larger size than a quarter of the original image size, such that we can leave some space at the border for $\alpha$-channel blending.
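The border blending can be sketched with a toy example (hypothetical crop size, border width, and pixel values; the actual border width is sampled randomly, as described below):

```python
import numpy as np

# Toy sketch of alpha-channel blending at the seam between two adjacent
# crops. Each crop is cut slightly wider than its quarter, so the two
# crops overlap by `bw` columns, which we blend with a linear alpha ramp.
H, W, bw = 64, 64, 8
left = np.full((H, W), 0.2)    # left crop (grayscale, constant for clarity)
right = np.full((H, W), 0.8)   # right crop

# Linear alpha ramp over the bw overlapping columns: left fades out,
# right fades in.
alpha = np.linspace(1.0, 0.0, bw)
seam = left[:, -bw:] * alpha + right[:, :bw] * (1.0 - alpha)

# Final mosaic row: left body + blended seam + right body.
mosaic = np.concatenate([left[:, :-bw], seam, right[:, bw:]], axis=1)
# Only the image is blended; the dot-annotation density map stays binary.
```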
We use a random $\alpha$-channel border width, which makes the composited image more realistic. \textbf{Note that} we only blend the image, not the density map, so as to maintain the binary form of the dot annotations (only 0s and 1s). Since there are few objects inside the blending border, and the single-image mosaic is only applied to images with a very large number of objects, the error caused by blending is almost negligible. \begin{figure}[t] \centering \subfigure[Test-time Normalisation.]{ \begin{minipage}[t]{0.95\linewidth} \centering \includegraphics[width=0.98\textwidth]{images/ttn.png} \end{minipage}% }% \subfigure[Test-time Cropping.]{ \begin{minipage}[t]{0.95\linewidth} \centering \includegraphics[width=0.95\textwidth]{images/ttc.png} \end{minipage}% }% \caption{\textbf{The test-time normalisation process visualisation.} In test-time normalisation, if the average sum of the exemplar positions in the density map is over $1.8$, the sum of the density map will be divided by this average to become the final prediction. In test-time cropping, if at least one exemplar's side length is smaller than 10 pixels, the image will be cropped into 9 pieces and the model will process these 9 images separately. The final prediction will be the sum of the results of these 9 images. } \label{fig:tt} \vspace{-0.5cm} \end{figure} \subsection{Test-time Normalisation} \label{sec:ttnorm} For few-shot counting, we introduce a test-time normalisation strategy to calibrate the output density map. Specifically, at inference time, we exploit the prior knowledge that the object count at the exemplar position should be exactly $1.0$; any prediction deviation can thus be calibrated by dividing the density map by the current predicted count at the exemplar position.
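This calibration rule can be sketched with hypothetical numbers (an $8\times 8$ density map in which the model counted two sub-parts per object, so each exemplar box sums to $2.0$):

```python
import numpy as np

# Hypothetical sketch of test-time normalisation. Suppose the model counted
# the smallest self-similarity unit rather than the whole object, so each
# exemplar box sums to 2.0 instead of 1.0.
pred = np.zeros((8, 8))
pred[1, 1] = pred[1, 2] = 1.0   # object 1: exemplar box covers columns 1:3
pred[5, 5] = pred[5, 6] = 1.0   # object 2: exemplar box covers columns 5:7

# Average predicted count inside the exemplar boxes.
per_box = [pred[1, 1:3].sum(), pred[5, 5:7].sum()]
avg = float(np.mean(per_box))   # 2.0, above the 1.8 threshold

raw = pred.sum()                               # 4.0 before calibration
final = raw / avg if avg > 1.8 else raw        # calibrated count
```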
We take this approach because, due to the ambiguity of the bounding boxes, the model sometimes chooses the smallest self-similarity unit of an object to count, rather than the entire object, as shown in Figure~\ref{fig:tt}~(a). Therefore, if the average sum of the density map area corresponding to the bounding boxes exceeds a threshold, {\em e.g.}~1.8, we apply this test-time normalisation. Additionally, for images with tiny objects~(at least one exemplar with side length shorter than 10 pixels), we adopt a sliding window prediction, dividing the image equally into nine pieces and scaling them to the original size, to be individually processed by our model. The total number of objects is the sum of the individual count results of the nine images. \begin{comment} For few-shot counting, we also propose a test-time normalisation strategy to calibrate the output density map. Specifically, at inference time, we exploit prior knowledge that the object count at the exemplar position should exactly be $1.0$, any prediction deviation can thus be calibrated by dividing the density map by the current predicted count at the exemplar position. Additionally, for images with tiny objects, {\em i.e.}~the predicted count on exemplar position is far smaller than $1.0$, we divide the image equally into nine pieces, and individually processed by the model. The total number of objects is the sum of the individual count results of the nine images.
We refer the reader to supplementary for more details.\\[-0.8cm] \end{comment} \section{Experiments} Here, we start by briefly introducing the few-shot counting benchmark, the FSC-147 dataset, and the evaluation metrics. In Section~\ref{sec:details}, we describe the implementation details of our model and the design of test-time normalisation; in Section~\ref{sec:sota}, we compare our model's performance with other counting models and demonstrate state-of-the-art performance in both zero-shot and few-shot settings; in Section~\ref{sec:ablation}, we conduct a series of ablation studies to demonstrate the effectiveness of the two-stage training and the image mosaicing; in Section~\ref{sec:ae}, we give additional experimental results on Val-COCO, Test-COCO, and CARPK.\\[-0.5cm] \subsection{Datasets and Metrics} \label{sec:dataset} \paragraph{Datasets.} We experiment on FSC-147~\cite{Ranjan21}, a multi-class few-shot object counting dataset containing 6135 images. The number of counted objects per image varies widely, ranging from 7 to 3731, with an average of 56. The dataset also provides three randomly selected object instances, annotated by bounding boxes, as exemplars in each image. The training set has 89 object categories, while the validation and test sets both have 29 disjoint categories, making FSC-147 an open-set object counting dataset. \\[-0.6cm] \paragraph{Metrics.} We use two standard metrics to measure the performance of our model, namely, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). \begin{equation} MAE = \frac{1}{N_I} \sum^{N_I}_{i=1}|C_i-C^{GT}_i|, \hspace{20pt} RMSE = \sqrt{\frac{1}{N_I} \sum^{N_I}_{i=1}(C_i-C^{GT}_i)^2} \end{equation} Here, $N_I$ is the total number of testing images, and $C_i$ and $C^{GT}_i$ are the predicted and ground-truth counts of the $i^{th}$ image.
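The two metrics above can be written directly as code (hypothetical counts for illustration):

```python
import numpy as np

# Mean Absolute Error and Root Mean Squared Error over per-image counts.
def mae(pred, gt):
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    return float(np.mean(np.abs(pred - gt)))

def rmse(pred, gt):
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    return float(np.sqrt(np.mean((pred - gt) ** 2)))

pred_counts = [10, 22, 35]   # hypothetical predicted counts C_i
gt_counts = [12, 20, 30]     # hypothetical ground-truth counts C_i^GT
```

Note that RMSE penalises large per-image errors more heavily than MAE, which is why the two metrics can rank methods differently on long-tailed test sets.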
\subsection{Implementation} \label{sec:details} \subsubsection{Training Details} In this section, we give the details of our proposed two-stage training procedure: first pre-train the ViT encoder with MAE~\cite{he2022masked}, then fine-tune the whole model on supervised object counting.\\[-0.6cm] \paragraph{MAE Pre-training.} As input, the image is of size $384 \times 384$; it is first split into patches of size $16 \times 16$ and projected into $576$ vectors. Our visual encoder uses 12 transformer encoder blocks with a hidden dimension of 768, and the number of heads in the multi-head self-attention layer is 12. The decoder uses 8 transformer layers with a hidden dimension of 512. As input to the ViT, we randomly drop 50\% of the visual tokens and task the model with reconstructing the masked patches under a pixelwise mean square error. During pre-training, we use a batch size of 16 and train on FSC-147 for 300 epochs with a learning rate of $5 \times 10^{-6}$.\\[-0.6cm] \begin{comment} \begin{figure}[!htb] \centering \includegraphics[width=.8\textwidth]{images/pretrain.png} \vspace{-5pt} \caption{\textbf{The pre-train process visualisation.} The input image is first divided into patches and randomly masked 50\% of the patches. Then the encoder extracts visual features from these visible patches and the decoder uses the patches to reconstruct the input image. } \label{fig:pretrain} \end{figure} \end{comment} \paragraph{Fine-tuning stage.} The feature interaction module uses 2 transformer decoder layers with a hidden dimension of 512. The ConvNet encoder exploits 4 convolutional layers and a global average pooling layer to extract exemplar features with 512 dimensions. The image decoder uses 4 up-sampling layers with a hidden dimension of 256.
For optimisation, we minimise the mean square error between the model's prediction and the ground truth density map, which is generated with Gaussians centered on each object. We scale the loss by a factor of 60 and randomly drop 20\% of non-object pixels, to alleviate the sample imbalance issue. We use AdamW as the optimiser. Our model is trained on the FSC-147 training set with a learning rate of $1 \times 10^{-5}$ and a batch size of 8, on an NVIDIA GeForce RTX 3090. \subsubsection{Inference Details} At inference time, we adopt sliding windows for images of different resolutions: the model processes a portion of an image with a fixed-size square window, as used in training, and gradually moves forward with a stride of 128 pixels. The density map for overlapped regions is computed by averaging the predictions.\\[-0.5cm] \vspace{-5pt} \subsection{Comparison to state-of-the-art} \label{sec:sota} We evaluate the proposed CounTR model on the FSC-147 dataset and compare it against existing approaches. As shown in Table~\ref{tab:result}, CounTR demonstrates a new state of the art in both zero-shot and few-shot counting, outperforming previous methods significantly and almost halving the errors on the validation set.
\begin{table}[!htb] \centering \footnotesize \setlength\tabcolsep{7pt} \begin{tabular}{cccccccc} \toprule \multirow{2}*{Methods} & \multirow{2}*{Year} & \multirow{2}*{Backbone} & \multirow{2}*{\# Shots}& \multicolumn{2}{c}{Val} & \multicolumn{2}{c}{Test} \\ \cmidrule(lr){5-6}\cmidrule(lr){7-8} & & & & MAE & RMSE & MAE & RMSE\\ \midrule RepRPN-C~\cite{ranjan2022exemplar} & Arxiv2022 & ConvNets & 0 & 31.69 & 100.31 & 28.32 & 128.76\\ RCC~\cite{Hobley22} & Arxiv2022 & Pre-trained ViT & 0 & 20.39 & 64.62 & 21.64 & 103.47 \\ \textbf{CounTR~(ours)} & 2022 & ViT & 0 & 17.40 & 70.33 & 14.12 & 108.01 \\ \midrule FR~\cite{kang2019few} & ICCV2019 & ConvNets & 3 & 45.45 & 112.53 & 41.64 & 141.04 \\ FSOD~\cite{fan2020few} & CVPR2020 & ConvNets & 3 & 36.36 & 115.00 & 32.53 & 140.65 \\ P-GMN~\cite{Lu18} & ACCV2018 & ConvNets &3 & 60.56 & 137.78 & 62.69 & 159.67 \\ GMN~\cite{Lu18} & ACCV2018 & ConvNets &3 & 29.66 & 89.81 & 26.52 & 124.57 \\ MAML~\cite{finn2017model} & ICML2017 & ConvNets & 3 & 25.54 & 79.44 & 24.90 & 112.68\\ FamNet~\cite{Ranjan21} & CVPR2021 & ConvNets & 3 & 23.75 & 69.07 & 22.08 & 99.54 \\ BMNet+~\cite{shi2022represent} & CVPR2022 & ConvNets & 3 & 15.74 & 58.53 & 14.62 & 91.83 \\ \textbf{CounTR~(ours)} & 2022 & ViT & 3 & \textbf{13.13} & \textbf{49.83} & \textbf{11.95} & \textbf{91.23}\\ \bottomrule \end{tabular} \captionof{table}{\textbf{Comparison with state-of-the-art on the FSC-147 dataset.} P-GMN stands for Pre-trained GMN. RepRPN-C stands for RepRPN-Counter. RCC stands for reference-less class-agnostic counting with weak supervision. 
} \label{tab:result} \vspace{-5pt} \end{table} \begin{comment} \begin{figure}[!htb] \begin{minipage}[T]{0.55\linewidth} \footnotesize \end{minipage} \hspace{3pt} \begin{minipage}[T]{0.43\linewidth} \includegraphics[width=.98\textwidth]{images/test_stat.png} \caption{Relationship between ground-truth and predicted count is shown here, with a strong linear relation.} \label{fig:stat} \end{minipage} \end{figure} \end{comment} \vspace{-0.3cm} \subsection{Ablation Study} \label{sec:ablation} In this section, we conduct thorough ablation studies to demonstrate the effectiveness of the proposed ideas. As shown in Table~\ref{tab:ablation}, we can make the following observations: (1)~\textbf{Data augmentation:} Compared with Model A, Model B adds image-wise data augmentation during training, including Gaussian noise, Gaussian blur, horizontal flip, color jittering, and geometric transformation. As indicated by the results, Model B slightly outperforms Model A on both the validation and test sets, suggesting that these augmentation methods are indeed useful to the model, though only to a limited extent. (2)~\textbf{Self-supervised pre-training:} In Model C, we introduce self-supervised pre-training for warming up the ViT encoder. Compared with Model B, which directly fine-tunes the ViT encoder~(pre-trained on ImageNet) on the FSC-147 training set, Model C significantly improves all results on both the validation and test sets. (3)~\textbf{Effectiveness of mosaic:} With the help of the mosaic method, Model D shows further performance improvements, demonstrating its effectiveness in resolving the long-tailed distribution challenge, by introducing images with a large number of object instances, as well as object distractors from different semantic categories. (4)~\textbf{Test-time normalisation:} In Model E, we experiment with test-time normalisation for the few-shot counting scenario, where the output prediction is calibrated by the given exemplar shot.
On both the validation and test sets, test-time normalisation demonstrates significant performance boosts. (5)~\textbf{On shot number:} In Model E, as the number of given shots increases, {\em e.g.}~1, 2, or 3, we observe only marginal differences in the final performance, showing the robustness of our proposed CounTR for visual object counting under {\em any} number of shots.\\[-0.4cm] \begin{table}[!htb] \footnotesize \setlength\tabcolsep{3pt} \centering \begin{tabular}{cccccccccc} \toprule \multirow{2}*{Model} & \multirow{2}*{Augmentation} & \multirow{2}*{Selfsup} & \multirow{2}*{Mosaic} & \multirow{2}*{TT-Norm.} & \multirow{2}*{\# Shots} &\multicolumn{2}{c}{Val} & \multicolumn{2}{c}{Test} \\ \cmidrule(lr){7-8}\cmidrule(lr){9-10} & & & & & & MAE & RMSE & MAE & RMSE\\ \midrule A0 & \XSolidBrush & \XSolidBrush & \XSolidBrush & \XSolidBrush& 0 & 24.84 & 86.33 & 21.06 & 130.04 \\ A1 & \XSolidBrush & \XSolidBrush & \XSolidBrush & \XSolidBrush& 3 & 24.68 & 85.89 & 20.98 & 129.58 \\ \midrule B0 & \Checkmark & \XSolidBrush & \XSolidBrush & \XSolidBrush& 0 & 23.80 & 81.53 & 21.14 & 131.27 \\ B1 & \Checkmark & \XSolidBrush & \XSolidBrush & \XSolidBrush& 3 & 23.67 & 81.40 & 20.93 & 130.75 \\ \midrule C0 & \Checkmark & \Checkmark & \XSolidBrush & \XSolidBrush& 0 & 18.30 & 72.21 & 16.20 & 114.30 \\ C1 & \Checkmark & \Checkmark & \XSolidBrush & \XSolidBrush& 3 & 18.19 & 71.47 & 16.05 & 113.11 \\ \midrule D0 & \Checkmark & \Checkmark & \Checkmark & \XSolidBrush & 0 & 18.07 & 71.84 & 14.71 & 106.87 \\ D1 & \Checkmark & \Checkmark & \Checkmark & \XSolidBrush & 3 & 17.40 & 70.33 & 14.12 & 108.01 \\ \midrule E1 & \Checkmark & \Checkmark & \Checkmark & \Checkmark & 1 & 13.15 & 49.72 & 12.06 & 90.01 \\ E2 & \Checkmark & \Checkmark & \Checkmark & \Checkmark & 2 & 13.19 & 49.73 & 12.02 & 90.82 \\ E3 & \Checkmark & \Checkmark & \Checkmark & \Checkmark & 3 & \textbf{13.13} & \textbf{49.83} & \textbf{11.95} & \textbf{91.23} \\ E3~(no 7171.jpg) & \Checkmark & \Checkmark & \Checkmark & \Checkmark & 3 &
13.13 & 49.83 & 11.22 & 87.68 \\ \bottomrule \end{tabular} \caption{\textbf{Ablation study}. We observe that one image in the test set~(image id: 7171) has a significant annotation error~(see supp. material); the result without it is also reported. \textbf{Selfsup}: refers to the proposed two-stage training regime. \textbf{TT-Norm}: denotes test-time normalisation.} \label{tab:ablation} \vspace{-10pt} \end{table} \vspace{-0.5cm} \subsection{Additional Experiments} \label{sec:ae} In this section, we further evaluate the model on several other benchmarks, {\em e.g.}, Val-COCO, Test-COCO, and CARPK. \\[-0.9cm] \paragraph{Val-COCO and Test-COCO.} Val-COCO and Test-COCO are FSC-147 subsets collected from COCO, and they are often used as evaluation benchmarks for detection-based object counting models. Here we compare our CounTR model with several detection-based counting models, including Faster-RCNN~\cite{ren2015faster}, RetinaNet~\cite{lin2017focal}, and Mask-RCNN~\cite{he2017mask}. As shown in Table~\ref{tab:COCOresult}, our model still achieves a substantial improvement even over the best-performing Mask-RCNN~\cite{he2017mask}, halving its error on both Val-COCO and Test-COCO. We also compare our model with the state-of-the-art few-shot counting method FamNet~\cite{Ranjan21}; our model outperforms it by a large margin (by 15.16 MAE and 24.29 RMSE on Val-COCO, and by 11.87 MAE and 14.81 RMSE on Test-COCO), which demonstrates the superiority of our model.
\begin{table}[!htb] \centering \footnotesize \setlength\tabcolsep{7pt} \begin{tabular}{cccccccc} \toprule \multirow{2}*{Methods} & \multirow{2}*{Year} & \multirow{2}*{Method} & \multicolumn{2}{c}{Val-COCO} & \multicolumn{2}{c}{Test-COCO} \\ \cmidrule(lr){4-5}\cmidrule(lr){6-7} & & & MAE & RMSE & MAE & RMSE\\ \midrule Faster-RCNN~\cite{ren2015faster} & NIPS2015 & Detection & 52.79 & 172.46 & 36.20 & 79.59 \\ RetinaNet~\cite{lin2017focal} & ICCV2017 & Detection & 63.57 & 174.36 & 52.67 & 85.86 \\ Mask-RCNN~\cite{he2017mask} & ICCV2017 & Detection & 52.51 & 172.21 & 35.56 & 80.00 \\ FamNet~\cite{Ranjan21} & CVPR2021 & Regression & 39.82 & 108.13 & 22.76 & 45.92 \\ \midrule \textbf{CounTR~(ours)} & 2022 & Regression & \textbf{24.66} & \textbf{83.84} & \textbf{10.89} & \textbf{31.11}\\ \bottomrule \end{tabular} \captionof{table}{\textbf{Comparison with state-of-the-art on the FSC-147 subsets.} } \vspace{-0.3cm} \label{tab:COCOresult} \end{table} \vspace{-0.5cm} \paragraph{CARPK.} CARPK~\cite{hsieh2017drone} is a class-specific car counting benchmark with $1448$ images of parking lots from a bird's-eye view. We also fine-tune our model on the CARPK training set and test on it with Non-Maximum Suppression (NMS). We compare our CounTR model with several detection-based object counting models and regression-based few-shot counting models. As shown in Table~\ref{tab:CARPKresult}, even compared with existing class-specific counting models, {\em i.e.}, models that can only count cars, our CounTR still shows comparable performance.
\begin{table}[!htb] \centering \footnotesize \setlength\tabcolsep{7pt} \begin{tabular}{cccccccc} \toprule \multirow{2}*{Methods} & \multirow{2}*{Year} & \multirow{2}*{Method} & \multirow{2}*{Type} & \multicolumn{2}{c}{CARPK}\\ \cmidrule(lr){5-6} & & & & MAE & RMSE\\ \midrule YOLO~\cite{redmon2016you} & CVPR2016 & Detection & Generic & 48.89 & 57.55 \\ Faster-RCNN~\cite{ren2015faster} & NIPS2015 & Detection & Generic & 47.45 & 57.39 \\ S-RPN~\cite{hsieh2017drone} & ICCV2017 & Detection & Generic & 24.32 & 37.62 \\ RetinaNet~\cite{lin2017focal} & ICCV2017 & Detection & Generic & 16.62 & 22.30 \\ LPN~\cite{hsieh2017drone} & ICCV2017 & Detection & Generic & 23.80 & 36.79 \\ One Look~\cite{mundhenk2016large} & ECCV2016 & Detection & Specific & 59.46 & 66.84 \\ IEP Count~\cite{stahl2018divide} & TIP2018 & Detection & Specific & 51.83 & - \\ PDEM~\cite{goldman2019precise} & CVPR2019 & Detection & Specific & 6.77 & 8.52 \\ \midrule GMN~\cite{Lu18} & ACCV2018 & Regression & Generic & 7.48 & 9.90 \\ FamNet~\cite{Ranjan21} & CVPR2021 & Regression & Generic & 18.19 & 33.66 \\ BMNet+~\cite{shi2022represent} & CVPR2022 & Regression & Generic & 5.76 & 7.83 \\ \midrule CounTR~(ours) & 2022 & Regression & Generic & \textbf{5.75} & \textbf{7.45}\\ \bottomrule \end{tabular} \captionof{table}{\textbf{Comparison with state-of-the-art on the CARPK dataset.} } \label{tab:CARPKresult} \end{table} \vspace{-0.5cm} \subsection{Qualitative Results} \label{sec:qualitative} We show qualitative results under our few-shot counting setting in Figure~\ref{fig:good}. As can be seen from the first five images from FSC-147, our model can easily count the objects and locate their positions. In the last image, the model mistakenly chose the smallest self-similarity unit of the spectacle lenses, rather than the sunglasses, for counting, due to the ambiguity of the bounding boxes; this can be corrected by test-time normalisation. For more qualitative visualisations, we refer the reader to the appendix.
\begin{figure}[!htb] \centering \includegraphics[width=0.98\textwidth]{images/pred.png} \vspace{-5pt} \caption{\textbf{Qualitative results of CounTR on FSC-147.} For visualisation purposes, we have overlaid the predicted density map on the original image. TTN stands for test-time normalisation. } \label{fig:good} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.99\textwidth]{images/goodpred.png} \vspace{-5pt} \caption{\textbf{More qualitative results of CounTR on FSC-147.} } \label{fig:goodmore} \end{figure} \section{Conclusion} In this work, we address the generalised visual object counting problem of counting objects from {\em arbitrary} semantic categories using an {\em arbitrary} number of ``exemplars''. We propose a novel transformer-based architecture for it, termed \textbf{CounTR}. It is first pre-trained with self-supervised learning, then fine-tuned with supervision. We also propose a simple, scalable pipeline for synthesizing training images, which can explicitly force the model to make use of the given ``exemplars''. Our model achieves state-of-the-art performance in both zero-shot and few-shot settings. \vspace{-0.2cm} \paragraph{Acknowledgement: } We thank Xiaoman Zhang and Chaoyi Wu for proof-reading. \section{Appendix} In the supplementary material, we introduce the annotation error of 7171.jpg in the FSC-147 test set, and give qualitative visualisations of CounTR's results on the FSC-147 dataset. \begin{comment} \subsection{Training Details} \label{sec:train} In this section, we aim to give the detail of our proposed two-stage training procedure, that is, first pre-train the ViT encoder with MAE~\cite{he2022masked}, and then fine-tune the whole model on supervised object counting. \paragraph{MAE Pre-training.} As input, the image is of size $384 \times 384$, which is first split into patches of size $16 \times 16$, and projected into $576$ vectors.
Our visual encoder uses 12 transformer encoder blocks with a hidden dimension of 768, and the number of heads in the multi-head self-attention layer is 12. The decoder uses 8 transformer layers with a hidden dimension of 512. As shown in Figure~\ref{fig:pretrain}, as input to the ViT, we randomly drop 50\% of the visual tokens and task the model to reconstruct the masked patches with a pixelwise mean square error. During pre-training, we use a batch size of 16 and train on FSC-147 for 300 epochs with a learning rate of $5 \times 10^{-6}$. \begin{figure}[!htb] \centering \includegraphics[width=.8\textwidth]{images/pretrain.png} \vspace{-5pt} \caption{\textbf{Visualisation of the pre-training process.} The input image is first divided into patches, and 50\% of the patches are randomly masked. The encoder then extracts visual features from the visible patches, and the decoder uses them to reconstruct the input image. } \label{fig:pretrain} \end{figure} \paragraph{Fine-tuning stage.} The feature interaction module uses 2 transformer decoder layers with a hidden dimension of 512. The ConvNet encoder exploits 4 convolutional layers and a global average pooling layer to extract exemplar features with 512 dimensions. The image decoder uses 4 up-sampling layers with a hidden dimension of 256. For optimisation, we minimise the mean square error between the model's prediction and the ground truth density map, which is generated with Gaussians centred on each object. We scale the loss by a factor of 60 and randomly drop 20\% of the non-object pixels to alleviate the sample imbalance issue. We use AdamW as the optimiser. Our model is trained on the FSC-147 training set with a learning rate of $1 \times 10^{-5}$ and a batch size of 8. Our model is trained and tested on an NVIDIA GeForce RTX 3090 GPU. \subsection{Test-time Normalisation} \label{sec:tt} In the main text, we introduced a test-time normalisation strategy for few-shot counting to calibrate the output density map.
Specifically, at inference time, we exploit the prior knowledge that the object count at each exemplar position should be exactly $1.0$; any prediction deviation can thus be calibrated by dividing the density map by the current predicted count at the exemplar position. We take this approach because, due to the ambiguity of the bounding boxes, the model sometimes chooses the smallest self-similar unit of an object to count, rather than the entire object, as shown in Figure~\ref{fig:tt}~(a). Therefore, if the average sum of the density map area corresponding to the bounding boxes exceeds a threshold, such as 1.8, we apply this test-time normalisation. Additionally, for images with tiny objects~(at least one exemplar with a side length shorter than 10 pixels), we adopt a sliding-window prediction: we divide the image equally into nine pieces and scale each piece up to the original input size, to be individually processed by our model. The total number of objects is the sum of the individual count results of the nine images. \begin{figure}[!htb] \centering \subfigure[Test-time Normalisation.]{ \begin{minipage}[t]{0.95\linewidth} \centering \includegraphics[width=0.98\textwidth]{images/ttn.png} \end{minipage}% }% \subfigure[Test-time Cropping.]{ \begin{minipage}[t]{0.95\linewidth} \centering \includegraphics[width=0.95\textwidth]{images/ttc.png} \end{minipage}% }% \caption{\textbf{Visualisation of the test-time strategies.} In test-time normalisation, if the average sum of the exemplar positions in the density map is over $1.8$, the sum of the density map is divided by this average to obtain the final prediction. In test-time cropping, if at least one exemplar's side length is smaller than 10 pixels, the image is cropped into 9 pieces and the model processes these 9 images separately. The final prediction is the sum of the results of these 9 images.
} \label{fig:tt} \end{figure} \subsection{Experiments} \label{sec:exp} In this section, we first introduce the evaluation metrics used, in Section~\ref{sec:metrics}; in Section~\ref{sec:7171}, we describe an important annotation error in the FSC-147 test set; in Section~\ref{sec:ae}, we give additional experimental results on Val-COCO, Test-COCO and CARPK; in Section~\ref{sec:visual}, we show some qualitative visualisation results of our CounTR model on the FSC-147 dataset. \subsubsection{Metrics} \label{sec:metrics} We use two standard metrics to measure the performance of our model, namely, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). \begin{equation} MAE = \frac{1}{N_I} \sum^{N_I}_{i=1}|C_i-C^{GT}_i|, \hspace{20pt} RMSE = \sqrt{\frac{1}{N_I} \sum^{N_I}_{i=1}(C_i-C^{GT}_i)^2} \end{equation} Here, $N_I$ is the total number of test images, and $C_i$ and $C^{GT}_i$ are the predicted count and the ground-truth count of the $i^{th}$ image. \subsubsection{On the annotation error of 7171.jpg} \label{sec:7171} We discovered that image 7171.jpg in the FSC-147 dataset has a significant annotation error. The annotated object count is inconsistent with the given exemplars, which tends to introduce a significant error during evaluation. The ground truth annotation and exemplar annotation are shown in Figure~\ref{fig:279}.
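For reference, the two metrics above are straightforward to compute from the per-image counts (a framework-agnostic illustration of the formulas, with made-up counts):

```python
import math

def mae(pred, gt):
    """Mean Absolute Error over the N_I test images."""
    return sum(abs(c - g) for c, g in zip(pred, gt)) / len(pred)

def rmse(pred, gt):
    """Root Mean Squared Error over the N_I test images."""
    return math.sqrt(sum((c - g) ** 2 for c, g in zip(pred, gt)) / len(pred))

pred = [10, 52, 7]   # predicted counts C_i
gt   = [12, 50, 7]   # ground-truth counts C_i^GT
print(round(mae(pred, gt), 3), round(rmse(pred, gt), 3))  # 1.333 1.633
```

RMSE penalises large per-image errors more heavily than MAE, which is why the two can rank methods differently.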
\begin{figure}[!htb] \centering \subfigure[The exemplar annotation of 7171.jpg.]{ \begin{minipage}[t]{0.4\linewidth} \centering \includegraphics[width=0.9\textwidth]{images/279bbox.png} \end{minipage}}% \subfigure[The ground truth annotation is 14.]{ \begin{minipage}[t]{0.4\linewidth} \centering \includegraphics[width=0.9\textwidth]{images/279gt.png} \end{minipage}% }% \caption{The ground truth annotation and exemplar annotation of 7171.jpg; the inconsistency is easy to see.} \label{fig:279} \end{figure} \subsubsection{Additional Experiments} \label{sec:ae} \paragraph{Val-COCO and Test-COCO.} Val-COCO and Test-COCO are FSC-147 subsets collected from COCO, and they are often used as evaluation benchmarks for detection-based object counting models. Here we compare our CounTR model with several detection-based counting models, including Faster-RCNN~\cite{ren2015faster}, RetinaNet~\cite{lin2017focal}, and Mask-RCNN~\cite{he2017mask}. As shown in Table~\ref{tab:COCOresult}, our model yields a large improvement even over the best-performing Mask-RCNN~\cite{he2017mask}, halving its error on both Val-COCO and Test-COCO. We also compare our model with the state-of-the-art few-shot counting method FamNet~\cite{Ranjan21}, and our model outperforms it by a large margin (improvements of 15.16 MAE and 24.29 RMSE on Val-COCO, and 11.87 MAE and 14.81 RMSE on Test-COCO), which demonstrates the superiority of our model.
\begin{table}[!htb] \centering \footnotesize \setlength\tabcolsep{7pt} \begin{tabular}{cccccccc} \toprule \multirow{2}*{Methods} & \multirow{2}*{Year} & \multirow{2}*{Method} & \multicolumn{2}{c}{Val-COCO} & \multicolumn{2}{c}{Test-COCO} \\ \cmidrule(lr){4-5}\cmidrule(lr){6-7} & & & MAE & RMSE & MAE & RMSE\\ \midrule Faster-RCNN~\cite{ren2015faster} & NIPS2015 & Detection & 52.79 & 172.46 & 36.20 & 79.59 \\ RetinaNet~\cite{lin2017focal} & ICCV2017 & Detection & 63.57 & 174.36 & 52.67 & 85.86 \\ Mask-RCNN~\cite{he2017mask} & ICCV2017 & Detection & 52.51 & 172.21 & 35.56 & 80.00 \\ FamNet~\cite{Ranjan21} & CVPR2021 & Regression & 39.82 & 108.13 & 22.76 & 45.92 \\ \midrule CounTR~(ours) & 2022 & Regression & \textbf{24.66} & \textbf{83.84} & \textbf{10.89} & \textbf{31.11}\\ \bottomrule \end{tabular} \captionof{table}{\textbf{Comparison with state-of-the-art on the FSC-147 subset.} } \label{tab:COCOresult} \end{table} \paragraph{CARPK.} CARPK~\cite{hsieh2017drone} is a class-specific car counting benchmark with $1448$ images of parking lots from a bird's-eye view. We also fine-tune our model on the CARPK training set and test on it. We compare our CounTR model with several detection-based object counting models and regression-based few-shot counting models. As shown in Table~\ref{tab:CARPKresult}, even compared with the existing class-specific counting models, {\em i.e.}, models that can only count cars, our CounTR still shows comparable performance.
\begin{table}[!htb] \centering \footnotesize \setlength\tabcolsep{7pt} \begin{tabular}{cccccccc} \toprule \multirow{2}*{Methods} & \multirow{2}*{Year} & \multirow{2}*{Method} & \multirow{2}*{Type} & \multicolumn{2}{c}{CARPK}\\ \cmidrule(lr){5-6} & & & & MAE & RMSE\\ \midrule YOLO~\cite{redmon2016you} & CVPR2016 & Detection & Generic & 48.89 & 57.55 \\ Faster-RCNN~\cite{ren2015faster} & NIPS2015 & Detection & Generic & 47.45 & 57.39 \\ S-RPN~\cite{hsieh2017drone} & ICCV2017 & Detection & Generic & 24.32 & 37.62 \\ RetinaNet~\cite{lin2017focal} & ICCV2017 & Detection & Generic & 16.62 & 22.30 \\ LPN~\cite{hsieh2017drone} & ICCV2017 & Detection & Generic & 23.80 & 36.79 \\ One Look~\cite{mundhenk2016large} & ECCV2016 & Detection & Specific & 59.46 & 66.84 \\ IEP Count~\cite{stahl2018divide} & TIP2018 & Detection & Specific & 51.83 & - \\ PDEM~\cite{goldman2019precise} & CVPR2019 & Detection & Specific & 6.77 & 8.52 \\ \midrule GMN~\cite{Lu18} & ACCV2018 & Regression & Generic & 7.48 & 9.90 \\ FamNet~\cite{Ranjan21} & CVPR2021 & Regression & Generic & 18.19 & 33.66 \\ BMNet+~\cite{shi2022represent} & CVPR2022 & Regression & Generic & \textbf{5.76} & 7.83 \\ \midrule CounTR~(ours) & 2022 & Regression & Generic & 6.05 & \textbf{7.71}\\ \bottomrule \end{tabular} \captionof{table}{\textbf{Comparison with state-of-the-art on the CARPK dataset.} } \label{tab:CARPKresult} \end{table} \end{comment} \subsection{On the annotation error of 7171.jpg} \label{sec:7171} We discovered that image 7171.jpg in the FSC-147 dataset has a significant annotation error. The annotated object count is inconsistent with the given exemplars, which tends to introduce a significant error during evaluation. The ground truth annotation and exemplar annotation are shown in Figure~\ref{fig:279}.
\begin{figure}[!htb] \centering \subfigure[The exemplar annotation of 7171.jpg.]{ \begin{minipage}[t]{0.4\linewidth} \centering \includegraphics[width=0.9\textwidth]{images/279bbox.png} \end{minipage}}% \subfigure[The ground truth annotation is 14.]{ \begin{minipage}[t]{0.4\linewidth} \centering \includegraphics[width=0.9\textwidth]{images/279gt.png} \end{minipage}% }% \caption{The ground truth annotation and exemplar annotation of 7171.jpg; the inconsistency is easy to see.} \label{fig:279} \end{figure} \end{appendix} \clearpage \begin{comment} \section{Synthetic Dataset} \label{sec:synthetic} Here, we aim to evaluate the model on images with objects of multiple categories, which requires the model to condition on the given few shots. To this end, we synthesise a small dataset based on the FSC-147 validation set, termed \textbf{FSC-syn}thetic, where we simply mosaic multiple images; examples are shown in Figure~\ref{fig:syn}. It contains 500 images, each consisting of four images randomly drawn from the validation set; the images are collaged directly together without blending. The exemplars given in each image correspond to only one category of objects in the image. In order to reduce the impact of intra-class variation, no other objects of the same category as the counting targets appear in the image. \begin{figure}[!htb] \begin{minipage}[b]{0.66\linewidth} \begin{minipage}[b]{0.24\linewidth} \centering \includegraphics[width=0.95\textwidth]{images/synthetic1.png} \end{minipage}% \begin{minipage}[b]{0.24\linewidth} \centering \includegraphics[width=0.95\textwidth]{images/synthetic2.png} \end{minipage}% \begin{minipage}[b]{0.24\linewidth} \centering \includegraphics[width=0.95\textwidth]{images/synthetic3.png} \end{minipage}% \begin{minipage}[b]{0.24\linewidth} \centering \includegraphics[width=0.95\textwidth]{images/synthetic4.png} \end{minipage}% \vspace{2pt} \caption{\small Examples from \textbf{FSC-syn}thetic dataset.
Each image consists of visual objects from different semantic categories.} \label{fig:syn} \end{minipage}% \hspace{3pt} \begin{minipage}[b]{0.32\linewidth} \centering \footnotesize \begin{tabular}{ccccc} \toprule \# Shots& MAE & RMSE \\ \midrule 0 & 23.22 & 59.75 \\ 3 & 22.16 & 60.36 \\ \bottomrule \end{tabular} \vspace{2pt} \captionof{table}{Zero- and few-shot counting on the FSC-syn dataset. \weidi{to be updated} \vspace{1pt}} \label{tab:syn} \end{minipage} \end{figure} The results of our zero-shot and few-shot counting models on the FSC-syn dataset are shown in Table~\ref{tab:syn}. Our few-shot model outperforms the zero-shot model by 1.06 in MAE (22.16 vs.\ 23.22), with a comparable RMSE, which demonstrates the few-shot model's ability to make use of the ``exemplars''. \end{comment}
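The image-synthesis pipeline mentioned in the conclusion, collaging several images into a single mosaic without blending, can be sketched as follows (our own minimal illustration; the actual pipeline, crop sizes, and sampling may differ):

```python
import numpy as np

def mosaic(images):
    """Collage four equally-sized images into a 2x2 grid, no blending."""
    a, b, c, d = images
    top = np.concatenate([a, b], axis=1)      # left | right
    bottom = np.concatenate([c, d], axis=1)
    return np.concatenate([top, bottom], axis=0)

# Four dummy 4x4 RGB tiles with constant pixel values 0..3.
tiles = [np.full((4, 4, 3), i, dtype=np.uint8) for i in range(4)]
out = mosaic(tiles)
print(out.shape)  # (8, 8, 3)
```

Because only one quadrant contains the exemplar category, such collages force the model to actually condition on the exemplars rather than count everything in the image.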
\section*{Introduction} Let $(K,v)$ be a valued field with value group $\Gamma_v=vK$. Let $K^h$ be a henselization of $(K,v)$, determined by the choice of an extension $\bar{v}$ of $v$ to a fixed algebraic closure $\overline{K}$ of $K$. The value group $\Gamma:=\Gamma_{\bar{v}}$ of $\bar{v}$ is the divisible closure of $\Gamma_v$. Let us denote the decomposition group of $\bar{v}$ by $ G=G_{\bar{v}}=\left\{\sigma\in \op{Aut}(\overline{K}/K)\mid \bar{v}\circ\sigma=\bar{v}\right\}=\op{Aut}(\overline{K}/K^h). $ Let $\mathbb V$ be the set of equivalence classes of valuations on $K(x)$ whose restriction to $K$ is equivalent to $v$. In \cite[Section 4]{Rig}, it is shown that $\mathbb V$ has a natural structure of a tree; that is, a partially ordered set whose intervals are totally ordered. Let $\V_\infty\subseteq \mathbb V$ be the subset of classes of {\bf valuation-algebraic} valuations, which can be constructed only after some limit process. This set $\V_\infty$ is included in the set of leaves (maximal nodes) of our tree $\mathbb V$. Let $\V_{\op{fin}}\subseteq{\mathbb V}$ be the subset of classes of {\bf valuation-transcendental} valuations, which can be constructed from suitable monomial valuations by a finite number of augmentations. We have a decomposition \[ \mathbb V=\V_\infty\sqcup \V_{\op{fin}}=\V_\infty\sqcup \V_{\op{rt}}\sqcup\V_{\op{vt}}, \] where $\V_{\op{rt}}$, $\V_{\op{vt}}$ are the subsets of classes of {\bf residue-transcendental} and {\bf value-transcendental} valuations, respectively. In a recent paper \cite{Andrei}, Bengu\c{s}-Lasnier conjectures the existence of a geometric space of diskoids parametrizing the subtree $\V_{\op{rt}}$ of residue-transcendental valuations. This is proven in \cite{Andrei} in the henselian case, and in the rank-one case. The aim of this paper is to generalize and prove this conjecture for arbitrary valuations in $\mathbb V$, with no assumptions on the valued field $(K,v)$.
Following Bengu\c{s}-Lasnier's insight, we construct a space $\mathbb D$ of certain gene\-ra\-lized diskoids, equipped with a natural partial ordering, and define a mapping: \begin{equation}\label{maineq} \mathbb D\,\longrightarrow\,\mathbb V,\qquad D\longmapsto \mu_D, \end{equation} which is an isomorphism of posets. Also, we find concrete descriptions of subspaces $\D_\infty,\, \D_\Gamma,\, \D_\cut\subseteq\mathbb D$ such that the above isomorphism induces isomorphisms \[ \D_\infty\lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\longrightarrow\,\\\mbox{\tiny $\sim\,$}\end{array}$} \V_\infty,\qquad \D_\Gamma\lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\longrightarrow\,\\\mbox{\tiny $\sim\,$}\end{array}$} \V_{\op{rt}},\qquad \D_\cut\lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\longrightarrow\,\\\mbox{\tiny $\sim\,$}\end{array}$} \V_{\op{vt}}. \] The space $\D_\Gamma$ contains all diskoids centred at monic irreducible polynomials in $K^h[x]$ and whose radii belong to $\Gamma$. The space $\D_\cut$ contains more general diskoids admitting cuts in $\Gamma$ as radii. The space $\D_\infty$ is the quotient set of the space of all nests of diskoids in $\D_\Gamma$ having an empty intersection, under the equivalence relation which identifies two nests if they are mutually cofinal. The outline of the paper is as follows. Section \ref{secBackground} reviews some basic facts on commensurable extensions of valuations and valuative trees. In Section \ref{secKb}, we para\-me\-trize $\mathbb V$ when the field $K$ is algebraically closed. As shown by Alexandru-Popescu-Zaharescu \cite{APZ0,APZ}, the set $\V_{\op{rt}}$ is parametrized by ultrametric closed balls in $\overline{K}$ with radii in $\Gamma$. The description of $\V_\infty$ in terms of nests of closed balls with empty intersection can be found in \cite{APZ2,Kuhl,Vaq3}. For the parametrization of $\V_{\op{vt}}$ we use balls in $\overline{K}$ admitting cuts in $\Gamma$ as radii.
In Section \ref{secProof11} we prove our main theorem (Theorem \ref{main}). Clearly, we can parametrize valuations on $K^h(x)$ just by considering the $G$-orbits of the geometric spaces of balls (or nests of balls) parametrizing valuations on $\overline{K}(x)$. Here, $G$ is the decomposition group defined above. By the rigidity theorem \cite[Theorem B]{Rig}, we get a parametrization of valuations on $K(x)$ as well. Finally, it is easy to identify $G$-orbits of balls with diskoids centred at irreducible polynomials in $K^h[x]$. Section \ref{secSupport} is devoted to extending these results to parametrize equivalence classes of valuations on the polynomial ring $K[x]$, including those with nontrivial support. This is achieved by admitting diskoids with an infinite radius. Finally, in Section \ref{secApprT}, we compare our geometric parametrization with that obtained by using Kuhlmann's {\bf approximation types} \cite{KuhlAT}. For an algebraically closed field, Kuhlmann's parametrization is an isomorphism too, and our mapping in (\ref{maineq}) is essentially equal to the inverse of the approximation types mapping. \section{Background on valuative trees}\label{secBackground} \subsection{Commensurable extensions of valuations}\label{subsecComm} Let $\V^{\op{com}}$ be the set of all extensions of $v$ to $K(x)$ taking values in a fixed divisible closure $\Gamma$ of $\Gamma_v$. We consider in $\V^{\op{com}}$ the following partial ordering: \[ \mu\le\nu\ \ \Longleftrightarrow\ \ \mu(f)\le\nu(f)\ \mbox{ for all }f\in K[x]. \] As usual, we write $\mu<\nu$ when $\mu\le\nu$ and $\mu\ne\nu$. With this structure of a poset, $\V^{\op{com}}$ becomes a tree; that is, all intervals in $\V^{\op{com}}$ are totally ordered. A valuation $\mu$ on $K(x)$ extending $v$ is {\bf commensurable} (over $v$) if $\g_\mu/\Gamma_v$ is a torsion group. Clearly, all valuations in $\V^{\op{com}}$ are commensurable.
Conversely, every commensurable extension of $v$ to $K(x)$ is equivalent to a unique valuation in $\V^{\op{com}}$. Indeed, for any given extension $\iota\colon\Gamma_v\hookrightarrow\Lambda$ of ordered abelian groups such that $\Lambda/\Gamma_v$ is a torsion group, there is a unique embedding $\Lambda\hookrightarrow\Gamma$ such that the composition $\Gamma_v\hookrightarrow\Lambda\hookrightarrow\Gamma$ is the canonical embedding of $\Gamma_v$ in $\Gamma$. In particular, two valuations in $\V^{\op{com}}$ are equivalent if and only if they coincide. Since valuation-algebraic and residue-trans\-cen\-dental valuations are commensura\-ble, the sets of equivalence classes of these valuations can be identified with subsets of $\V^{\op{com}}$, and they fill up the whole space: \[ \V^{\op{com}}=\V_\infty\sqcup \V_{\op{rt}}. \] A valuation $\mu\in\V^{\op{com}}$ is residue-transcendental if and only if the residual extension $K(x)\mu/Kv$ is transcendental. This is equivalent to $\op{KP}(\mu)\ne\emptyset$, where $\op{KP}(\mu)$ is the set of all Mac Lane--Vaqui\'e (MLV) key polynomials for $\mu$ \cite[Theorem 4.4]{KP}. Therefore, in order to fulfill our aim, we must parametrize {\bf all} valuations in $\V^{\op{com}}$ and, moreover, parametrize the set $\V_{\op{vt}}$ of {\bf equivalence classes} of value-transcen\-den\-tal valuations. The latter task is much more subtle. To start with, the fact that there is a canonical partial ordering on $\V_{\op{vt}}$ is not obvious and will be discussed in the next section. \subsection{Partial ordering on $\mathbb V$}\label{subsecPosetV} Let $\mu$ be an arbitrary extension of $v$ to $K(x)$. By \cite[Theorem 1.5]{Kuhl}, the extension $\Gamma_v\hookrightarrow\g_\mu$ is {\bf small}. That is, if $\Gamma_v\subseteq\Delta\subseteq\g_\mu$ is the relative divisible closure of $\Gamma_v$ in $\g_\mu$, the quotient $\g_\mu/\Delta$ is a cyclic group.
Two small extensions of $\Gamma_v$ are said to be $\Gamma_v$-equivalent if they are isomorphic as ordered abelian groups, by an isomorphism acting as the identity on $\Gamma_v$. In \cite[Section 4.2]{csme}, a minimal universal ordered abelian group $\Gamma\subseteq \Lambda$ is constructed, containing all small extensions of $\Gamma_v$ up to $\Gamma_v$-equivalence. Hence, we have a natural partial ordering on the set of all $\Lambda$-valued valuations on $K(x)$, which descends to a canonical partial ordering on $\mathbb V$ \cite[Section 4.1]{Rig}. In practice, for any $\mu,\nu\in\mathbb V$, we may still think that \[ \mu\le\nu\ \ \Longleftrightarrow\ \ \mu(f)\le \nu(f)\quad \mbox{for all }f\in K[x], \] where the inequality $\mu(f)\le\nu(f)$ must be properly understood. If $\mu\ne\nu$ (i.e. $\mu$ is not equivalent to $\nu$), then this inequality is equivalent to the existence of $\gamma\in\Gamma$ such that $\mu(f)\le\gamma\le \nu(f)$. \section{Valuations on $\overline{K}(x)$}\label{secKb} As mentioned in the Introduction, our first step will be to parametrize valuations on $\overline{K}(x)$ in terms of ultrametric balls in $\overline{K}$. For any field $K\subseteq L\subseteq \overline{K}$, let us write \[ \mathbb V(L)=\V_\infty(L)\sqcup \V_{\op{fin}}(L)=\V_\infty(L)\sqcup \V_{\op{rt}}(L)\sqcup\V_{\op{vt}}(L), \] for the corresponding spaces of equivalence classes of valuations on $L(x)$. \subsection{Residue-transcendental valuations}\label{subsecRTKb} The parametrization of $\V_{\op{rt}}(\overline{K})$ is well-known \cite{APZ0,APZ,Kuhl,Vaq3}. Let us review the results, omitting all proofs. For each pair $(a,\delta)\in\overline{K}\times\Gamma$, consider the closed ball of center $a$ and radius $\delta$: \[ B(a,\delta)=\left\{c\in\overline{K}\mid \bar{v}(c-a)\ge\delta\right\}. \] The criterion of coincidence of two balls is: \[ B(a,\delta)=B(b,\epsilon)\ \ \Longleftrightarrow\ \ \bar{v}(b-a)\ge\delta=\epsilon.
\] In particular, any $b\in B(a,\delta)$ is a center of the ball: $B(a,\delta)=B(b,\delta)$. Let ${\mathbb{B}_\Gamma}$ be the set of all these ultrametric closed balls. For every $B=B(a,\delta)\in{\mathbb{B}_\Gamma}$, denote by $\omega_B=\omega_{a,\delta}$ the monomial valuation defined as follows in terms of $(x-a)$-expansions: \[ \omega_B\left(\sum\nolimits_{0\le n}a_n(x-a)^n\right) = \min\left\{\bar{v}(a_n)+n\delta\mid 0\le n\right\}. \] These valuations $\omega_B$ admit the following reinterpretation. \begin{proposition}\cite[Lemma 6.2]{Andrei}\label{infRT} For all $B\in{\mathbb{B}_\Gamma}$, we have \[ \omega_B(f)=\min\{\bar{v}(f(c))\mid c\in B\}\quad\mbox{ for all }f\in\overline{K}[x]. \] \end{proposition} These extensions of $\bar{v}$ to $\overline{K}(x)$ are residue-transcendental. Actually, for all $c\in\overline{K}$ we have \cite[Lemma 2.4]{Rig}: \[ \ars{1.2} \begin{array}{l} c\in B\ \ \Longrightarrow\ \ x-c\in\op{KP}(\omega_B),\\ c\not\in B\ \ \Longrightarrow\ \ \op{in}_{\omega_B}(x-c)\ \mbox{ is a unit in }\mathcal{G}_{\omega_B}, \end{array} \] where $\mathcal{G}_{\omega_B}$ is the graded algebra associated to $\omega_B$. On the other hand, all residue-transcendental valuations on $\overline{K}(x)$ arise in this way \cite[Theorem 2.1]{APZ0}. Also, for all $B,C\in{\mathbb{B}_\Gamma}$ we have \[ \omega_B=\omega_C\ \ \Longleftrightarrow\ \ B=C. \] Therefore, we obtain a bijective mapping $ {\mathbb{B}_\Gamma}\,\longrightarrow\,\V_{\op{rt}}(\overline{K}),\qquad B\longmapsto \omega_B. $ Finally, consider the partial ordering in ${\mathbb{B}_\Gamma}$ determined by descending inclusion: \[ B\le C\ \ \Longleftrightarrow\ \ B\supseteq C. \] It follows easily from Proposition \ref{infRT} that the mapping $B\mapsto\omega_B$ strictly preserves the ordering: \[ B<C\ \ \Longrightarrow\ \ \omega_B<\omega_C. \] Therefore, our bijective mapping is an isomorphism of posets.
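As a toy numerical illustration of these definitions (our own code, outside the paper's formalism: we take $\mathbb Q$ with the $2$-adic valuation in the role of $(\overline{K},\bar{v})$ and integer radii, ignoring that the genuine setting requires an algebraic closure and a divisible value group), both the coincidence criterion for balls and the minimum formula of Proposition \ref{infRT} can be checked directly:

```python
import math
from fractions import Fraction

def v2(q):
    """2-adic valuation on the rationals, with v2(0) = +infinity."""
    q = Fraction(q)
    if q == 0:
        return math.inf
    n, d, k = q.numerator, q.denominator, 0
    while n % 2 == 0:
        n //= 2
        k += 1
    while d % 2 == 0:
        d //= 2
        k -= 1
    return k

def same_ball(a, delta, b, eps):
    """Coincidence criterion: B(a,delta) = B(b,eps)  iff  v(b-a) >= delta = eps."""
    return delta == eps and v2(b - a) >= delta

def omega(coeffs, delta):
    """Monomial valuation omega_B of f = sum_n a_n (x-a)^n on B = B(a, delta):
    omega_B(f) = min_n ( v(a_n) + n * delta )."""
    return min(v2(c) + n * delta for n, c in enumerate(coeffs))

# 4 is a center of B(0, 2) since v2(4 - 0) = 2 >= 2, while 2 is not.
print(same_ball(0, 2, 4, 2), same_ball(0, 2, 2, 2))   # True False

# f(x) = 4 + 2x + x^2 expanded around a = 0, on the ball B(0, 1):
w = omega([4, 2, 1], 1)                               # min(2+0, 1+1, 0+2) = 2
# Proposition (infRT): omega_B(f) equals the minimum of v(f(c)) over the ball.
vals = [v2(4 + 2 * c + c * c) for c in range(0, 20, 2)]
print(w, min(vals))                                   # 2 2
```

Of course the sampled minimum only probes finitely many centers of the ball; the proposition asserts the exact equality.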
\begin{theorem}\label{mainRTKb} The mapping $\,{\mathbb{B}_\Gamma}\to\V_{\op{rt}}(\overline{K})$ determined by $B\mapsto \omega_B$ is an isomorphism of posets. \end{theorem} \subsection{Valuation-algebraic valuations}\label{subsecVAKb} Let $\mathcal{N}_\infty$ be the set of all nests of closed balls of ${\mathbb{B}_\Gamma}$, with empty intersection. Thus, an element in $\mathcal{N}_\infty$ is a family ${\mathcal B}=(B_i)_{i\in I}$ of balls $B_i\in{\mathbb{B}_\Gamma}$, parametrized by some totally ordered set $I$ of indices, such that \[ i<j\ \ \Longleftrightarrow\ \ B_i<B_j\ \ \Longleftrightarrow\ \ B_i\supset B_j. \] Moreover, $\bigcap_{i\in I}B_i=\emptyset$. In particular, the set $I$ contains no last element. Denote for simplicity $\omega_i=\omega_{B_i}$ for all $i\in I$. The family $\Omega_{\mathcal B}=(\omega_i)_{i\in I}$ is a {\bf continuous family} of valuations on $\overline{K}(x)$ of degree one (every valuation admits a MLV key polynomial of degree one). For the definition of continuous families and their limits see \cite{Vaq} or \cite[Section 4.1]{VT}. The property $\bigcap_{i\in I}B_i=\emptyset$ is equivalent to the fact that $\Omega_{{\mathcal B}}$ has a stable limit, which we denote as: \[ \omega_{\mathcal B}=\lim\left(\Omega_{\mathcal B}\right). \] This means that all polynomials $f\in\overline{K}[x]$ are $\Omega_{{\mathcal B}}$-stable; that is, there exists an index $i_0\in I$ (depending on $f$) such that \[ \omega_i(f)=\omega_{i_0}(f) \ \mbox{ for all } i\ge i_0. \] Then, $\omega_{\mathcal B}(f)$ is defined to be this stable value. In other words: \[ \omega_{\mathcal B}(f)=\max\{\omega_i(f)\mid i\in I\}. \] These valuations $\omega_{\mathcal B}$ are valuation-algebraic \cite[Proposition 4.1]{VT}. Moreover, every valuation-algebraic valuation on $\overline{K}(x)$ arises in this way \cite[Theorem 5.1]{APZ2}, \cite[Proposition 2.12]{Vaq3}. On the set $\mathcal{N}_\infty$ we consider a natural equivalence relation. 
Two nests of closed balls ${\mathcal B}=(B_i)_{i\in I}$, $\mathcal{C}=(C_j)_{j\in J}$ are said to be equivalent if they are mutually cofinal. That is, for all $i\in I$ there exists $j\in J$ such that $B_i\le C_j$, and vice versa. In this case, we write ${\mathcal B}\sim \mathcal{C}$. Let $\B_\infty=\mathcal{N}_\infty/\!\sim$ \,be the quotient set of $\mathcal{N}_\infty$ under this equivalence relation. Denote by $[{\mathcal B}]\in\B_\infty$ the class of any ${\mathcal B}\in\mathcal{N}_\infty$. By Theorem \ref{mainRTKb}, the continuous families $\Omega_{\mathcal B}$, $\Omega_\mathcal{C}$ are mutually cofinal if and only if ${\mathcal B}\sim\mathcal{C}$. By \cite[Proposition 4.8]{VT}, we have \[ \omega_{\mathcal B}=\omega_\mathcal{C}\ \ \Longleftrightarrow\ \ {\mathcal B}\sim \mathcal{C}. \] The next result follows. \begin{theorem}\label{mainVAKb} The mapping $\B_\infty\to\V_\infty(\overline{K})$, defined by $\,[{\mathcal B}]\mapsto \omega_{\mathcal B}$ is bijective. \end{theorem} These valuations $\omega_{\mathcal B}\in\V_\infty(\overline{K})$ follow the pattern indicated in Proposition \ref{infRT} too. \begin{proposition}\label{infVA} For ${\mathcal B}=(B_i)_{i\in I}\in\mathcal{N}_\infty$, let $B=\bigcup_{i\in I}B_i$. Then, \[ \omega_{\mathcal B}(f)=\min\{\bar{v}(f(c))\mid c\in B\}\quad\mbox{ for all }f\in\overline{K}[x]. \] \end{proposition} This follows easily from Proposition \ref{infRT} and the definition of $\omega_{\mathcal B}$. \subsection{Cuts in $\Gamma$}\label{subsecCuts} The equivalence classes of value-transcen\-den\-tal valuations on $\overline{K}(x)$ can be parametrized by balls admitting cuts in the group $\Gamma$ as radii. Let us recall some basic properties of cuts. For all $\gamma\in \Gamma$, we denote \[ \Gamma_{< \gamma}=\{\alpha\in \Gamma\mid \alpha< \gamma\}\subset \Gamma_{\le \gamma}=\{\alpha\in \Gamma\mid \alpha\le \gamma\}. 
\] For $S,T\subseteq \Gamma$ and $\gamma\in \Gamma$, the expressions $$\gamma<S,\ \quad \gamma\le S,\ \quad S<T,\ \quad S\le T$$ mean that the corresponding inequality holds for all $\alpha\in S$ and all $\beta\in T$. An {\bf initial segment} of $\Gamma$ is a subset $S\subseteq \Gamma$ such that $\Gamma_{\le \gamma}\subseteq S$ for all $\gamma\in S$. On the set $\op{Init}(\g)$ of initial segments of $\Gamma$ we consider the ordering determined by ascending inclusion. We obtain a totally ordered set with a minimal and a maximal element: $\emptyset=\min(\op{Init}(\g))$, $\Gamma=\max(\op{Init}(\g))$. A {\bf cut} in $\Gamma$ is a pair $\delta=(\delta^L,\delta^R)$ of subsets of $\Gamma$ such that $$\delta^L< \delta^R\quad\mbox{ and }\quad \delta^L\cup \delta^R=\Gamma.$$ Clearly, $\delta^L$ is an initial segment of $\Gamma$. Let $\op{Cuts}(\Gamma)$ be the set of all cuts in $\Gamma$. We have an obvious bijection \[ \op{Init}(\g)\,\longrightarrow\,\op{Cuts}(\Gamma),\qquad S \longmapsto (S,\Gamma\setminus S). \] We consider on $\op{Cuts}(\Gamma)$ the total ordering induced by $\op{Init}(\g)$ through this bijection. In particular, $\op{Cuts}(\Gamma)$ admits a minimal and a maximal element \[ -\infty:=(\emptyset,\Gamma)=\min\left(\op{Cuts}(\Gamma)\right), \qquad \infty^-:=(\Gamma,\emptyset)=\max\left(\op{Cuts}(\Gamma)\right), \] which are called {\bf improper cuts}. The notation $\infty^-$ is motivated by the fact that this cut is the immediate predecessor of $\infty$ in the totally ordered set $\op{Cuts}(\Gamma)\cup\{\infty\}$.\medskip \noindent{\bf Definition. } Every $\gamma\in\Gamma$ determines two {\bf principal cuts}: \[ \gamma^-=\left(\Gamma_{<\gamma},\,\Gamma_{\ge\gamma}\right) ,\qquad \gamma^+=\left(\Gamma_{\le\gamma},\,\Gamma_{>\gamma}\right). \] \vskip0.1cm For every cut $\delta=(\delta^L,\delta^R)\in\op{Cuts}(\Gamma)$, consider a formal symbol $x_\delta$ and build up the abelian group $\g(\dta)= x_\delta\mathbb Z\oplus\Gamma$.
There is a unique ordering on $\g(\dta)$ which is compatible with the group structure and satisfies $\delta^L<x_\delta<\delta^R$. Namely, $$ mx_\delta+b\le nx_\delta+a\ \Longleftrightarrow\ (m-n)x_\delta\le a-b\ \Longleftrightarrow\ (m-n)\delta^L\le a-b. $$ The latter inequality means that $(m-n)\gamma\le a-b$, for all $\gamma\in\delta^L$. Clearly, the extension $\Gamma\hookrightarrow\g(\dta)$ is incommensurable. However, it is a small extension of ordered abelian groups because $\g(\dta)/\Gamma$ is a cyclic group. The groups determined by the improper cuts have an especially simple description. Indeed, the following maps are isomorphisms of ordered abelian groups: \begin{equation}\label{lex} \ars{1.4} \begin{array}{l} \Gamma(-\infty)\,\longrightarrow\,\left(\mathbb Z\times \Gamma\right)_{\operatorname{lex}},\qquad mx_{-\infty}+\gamma\ \mapsto\ (-m,\gamma),\\ \Gamma(\infty^-)\,\longrightarrow\,\left(\mathbb Z\times \Gamma\right)_{\operatorname{lex}},\qquad\; mx_{\infty^-}+\gamma\ \mapsto\ (m,\gamma). \end{array} \end{equation} \subsection{Value-transcendental valuations}\label{subsecValueTKb} For every pair $(a,\delta)\in\overline{K}\times\op{Cuts}(\Gamma)$, consider the ball of center $a$ and radius $\delta$, defined as: \[ B(a,\delta)=\left\{c\in\overline{K}\mid \bar{v}(c-a)>\delta^L\right\}\subseteq\overline{K}. \] For instance, the balls having a principal cut as radius are the standard closed and open balls with radius in $\Gamma$. Indeed, for all $\gamma\in\Gamma$ we have: \begin{equation}\label{NotMerge} \ars{1.3} \begin{array}{l} B(a,\gamma^-)=\left\{c\in\overline{K}\mid \bar{v}(c-a)\ge\gamma\right\}=B(a,\gamma),\\ B(a,\gamma^+)=\left\{c\in\overline{K}\mid \bar{v}(c-a)>\gamma\right\}=B^{\operatorname{o}}(a,\gamma). \end{array} \end{equation} Also, the improper cuts determine very special balls. For all $a\in\overline{K}$, we have \begin{equation}\label{ImproperBalls} B(a,-\infty)=\overline{K},\qquad B(a,\infty^-)=\{a\}.
\end{equation} \begin{lemma}\label{coincidence} Let $a,b\in\overline{K}$ and $\delta,\epsilon\in\op{Cuts}(\Gamma)$. Then, \[ B(a,\delta)=B(b,\epsilon)\ \ \Longleftrightarrow\ \ \bar{v}(b-a)>\delta^L\ \mbox{ and }\ \delta=\epsilon. \] \end{lemma} \begin{proof} Suppose $B(a,\delta)=B(b,\epsilon)$. Since $b\in B(a,\delta)$, we have $\bar{v}(b-a)>\delta^L$. If $\delta<\epsilon$, then $\delta^L\subsetneq \epsilon^L$ and there exists $\gamma\in\Gamma$ such that $\delta^L<\gamma\le\epsilon^L$. If $c\in\overline{K}$ satisfies $\bar{v}(c-a)=\gamma$, then we have $c\in B(a,\delta)$ and $c\not\in B(b,\epsilon)$, contradicting our assumption. The inequality $\epsilon<\delta$ leads to a completely analogous contradiction. Hence, $\delta=\epsilon$. The converse implication is obvious. \end{proof}\medskip In particular, any $b\in B(a,\delta)$ is a center of the ball: $B(a,\delta)=B(b,\delta)$. Let $\B_{{\op{cut}}}$ be the set of all these ultrametric balls centred at elements in $\overline{K}$ and having a cut as radius. For every $B=B(a,\delta)\in\B_{{\op{cut}}}$, denote by $\omega_B=\omega_{a,\delta}$ the monomial valuation defined as follows in terms of $(x-a)$-expansions: \begin{equation}\label{defom} \omega_B\left(\sum\nolimits_{0\le n}a_n(x-a)^n\right) = \min\left\{\bar{v}(a_n)+nx_\delta\mid0\le n\right\}. \end{equation} Note that $\omega_B$ is value-transcen\-den\-tal with value group $\Gamma_{\omega_B}=\g(\dta)$. For all $c\in\overline{K}$ we have: \begin{equation}\label{x-c} \ars{1.2} \begin{array}{l} c\in B\ \ \Longrightarrow\ \ \omega_B(x-c)=x_\delta,\\ c\not\in B\ \ \Longrightarrow\ \ \omega_B(x-c)=\bar{v}(c-a)\in\Gamma. \end{array} \end{equation} As in the classical case, these valuations $\omega_B$ are uniquely determined by their defining balls. \begin{lemma}\label{B=om} For all $B,C\in\B_{{\op{cut}}}$ the following conditions are equivalent. \begin{enumerate} \item[(a)] $B=C$. \item[(b)] $\omega_B=\omega_C$. \item[(c)] $\omega_B\sim\omega_C$.
\end{enumerate} \end{lemma} \begin{proof} Obviously, (a)$\Rightarrow$(b)$\Rightarrow$(c). Let us show that (c)$\Rightarrow$(a). Suppose that $B=B(a,\delta)$, $C=B(b,\epsilon)$ and $\omega_B\sim\omega_C$. This means that there is an isomorphism of ordered groups $\iota\colon \Gamma(\epsilon) \lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\to\\\mbox{\tiny $\sim\,$}\end{array}$}\g(\dta)$ fitting into a commutative diagram $ \ars{1.4} \begin{array}{c} \Gamma(\epsilon)\ \stackrel{\iota}\,\longrightarrow\,\ \g(\dta)\\ \quad\ \mbox{\scriptsize$\omega_C$}\nwarrow\quad \nearrow\mbox{\scriptsize$\omega_B$}\quad\\ \overline{K}(x)^* \end{array} $ Since $\omega_B$ and $\omega_C$ are extensions of $\bar{v}$, the isomorphism $\iota$ acts as the identity on $\Gamma$. For all $c\in C$, (\ref{x-c}) shows that $\omega_C(x-c)=x_\epsilon$. Since the diagram commutes, \[ \omega_B(x-c)=\iota\left(\omega_C(x-c)\right)=\iota\left(x_\epsilon\right)\not\in\Gamma. \] By (\ref{x-c}), we deduce that $c\in B$. This shows that $C\subseteq B$, and we deduce that $B=C$ by the symmetry of the argument. \end{proof}\medskip On the other hand, all value-transcendental valuations on $\overline{K}(x)$ arise in this way. \begin{lemma}\label{ontoVT} Every value-transcendental valuation on $\overline{K}(x)$ is equivalent to $\omega_B$ for some $B\in\B_{{\op{cut}}}$. \end{lemma} \begin{proof} Let $\omega$ be a value-transcendental extension of $\bar{v}$ to $\overline{K}(x)$. By \cite[Theorem 1.5]{Kuhl}, the value group $\g_\omega$ of $\omega$ is an extension of $\Gamma$ as an ordered abelian group of the following form: \[ \Gamma\hooklongrightarrow\g_\omega=\alpha\mathbb Z\oplus\Gamma, \qquad \gamma\longmapsto (0,\gamma), \] where $\alpha\in\g_\omega$ has no torsion over $\Gamma$. By \cite[Theorem 3.11]{Kuhl}, there exists $a\in \overline{K}$ such that $\omega$ is the monomial valuation acting as usual on $(x-a)$-expansions: \[ \omega\left(\sum\nolimits_{0\le n}a_n(x-a)^n\right) = \min\left\{\bar{v}(a_n)+n\alpha\mid0\le n\right\}.
\] Let $\delta\in\op{Cuts}(\Gamma)$ be the cut in $\Gamma$ determined by $\alpha$. That is, \[ \delta^L=\left\{\gamma\in\Gamma\mid \gamma<\alpha\right\},\qquad \delta^R=\left\{\gamma\in\Gamma\mid \gamma>\alpha\right\}. \] We may build up an isomorphism of ordered abelian groups $\iota\colon \g(\dta)\to\g_\omega$, acting as the identity on $\Gamma$ and mapping $x_\delta$ to $\alpha$. If $B=B(a,\delta)$, then we clearly have $\omega=\iota\circ\omega_B$. Hence, $\omega\sim\omega_B$. \end{proof}\medskip Therefore, we obtain a bijective mapping $ \B_{{\op{cut}}}\,\longrightarrow\,\V_{\op{vt}}(\overline{K}),\qquad B\longmapsto \omega_B. $ Finally, consider the partial ordering in $\B_{{\op{cut}}}$ determined by descending inclusion: \[ B\le C\ \ \Longleftrightarrow\ \ B\supseteq C. \] It is easy to check that the mapping $B\mapsto\omega_B$ strictly preserves the ordering. Therefore, our bijective mapping is an isomorphism of posets. \begin{theorem}\label{mainVTKb} The mapping $\,\B_{{\op{cut}}}\to\V_{\op{vt}}(\overline{K})$ determined by $B\mapsto \omega_B$ is an isomorphism of posets. \end{theorem} \subsubsection{Minimal and maximal elements in $\V_{\op{vt}}(\overline{K})$}\label{subsubsecMM} The tree $\V_{\op{vt}}(\overline{K})$ has an absolute minimal element, which we denote by $\omega_{-\infty}$. Indeed, by (\ref{ImproperBalls}) and Theorem \ref{mainVTKb}, for all $a,b\in\overline{K}$ we have \begin{equation}\label{omi} \omega_{-\infty}:=\omega_{a,-\infty}=\omega_{b,-\infty}\le \omega\quad\mbox{ for all }\omega\in\V_{\op{vt}}(\overline{K}). \end{equation} We say that $\omega_{-\infty}$ is the {\bf root node} of $\V_{\op{vt}}(\overline{K})$.
By the isomorphism in (\ref{lex}), we may think of $\omega_{-\infty}$ as taking values in the group $\left(\mathbb Z\times\Gamma\right)_{\operatorname{lex}}$ and acting as follows on nonzero polynomials $f\in\overline{K}[x]$: \[ \omega_{-\infty}(f)=\left(-\op{ord}_x(f),\bar{v}(\op{lc}(f))\right), \] where $\op{lc}(f)$ is the leading coefficient of $f$. On the other hand, by (\ref{ImproperBalls}) and Theorem \ref{mainVTKb}, for all $a\in\overline{K}$ the valuation $\omega_{a,\infty^-}$ is maximal in $\V_{\op{vt}}(\overline{K})$. Again, by using the isomorphism in (\ref{lex}), we may think of $\omega_{a,\infty^-}$ as taking values in the group $\left(\mathbb Z\times\Gamma\right)_{\operatorname{lex}}$ and acting as follows on nonzero polynomials $f\in\overline{K}[x]$: \[ \omega_{a,\infty^-}(f)=\left(\op{ord}_{x-a}(f),\bar{v}(\op{in}(f))\right), \] where $\op{in}(f)$ is the first nonzero coefficient in the $(x-a)$-expansion of $f$. \subsection{Valuation-transcendental valuations}\label{subsecQC} The tree $\V_{\op{fin}}=\V_{\op{rt}}\sqcup \V_{\op{vt}}$ whose nodes represent equivalence classes of valuation-transcendental extensions of $v$ to $K(x)$ has been described in \cite[Section 7]{VT}. It is a ``compact'' set, in the sense that any totally ordered subset has an infimum and a supremum. In particular, any two nodes have a greatest common lower node. Also, it has a unique root node and as many maximal nodes as monic irreducible polynomials in $K^h[x]$. After Theorems \ref{mainRTKb} and \ref{mainVTKb}, a geometric space parametrizing $\V_{\op{fin}}(\overline{K})$ should be a kind of merge of the spaces of balls ${\mathbb{B}_\Gamma}$ and $\B_{{\op{cut}}}$. However, this merge cannot be a simple union, because ${\mathbb{B}_\Gamma}\subseteq\B_{{\op{cut}}}$ as subsets of $\mathcal{P}(\overline{K})$, as shown in (\ref{NotMerge}).
In other words, each ball in ${\mathbb{B}_\Gamma}$ determines two different valuations, depending on whether we consider its radius to be some $\gamma\in\Gamma$ or the corresponding cut $\gamma^-\in\op{Cuts}(\Gamma)$. Even if we considered the formal disjoint union ${\mathbb{B}_\Gamma}\sqcup \B_{{\op{cut}}}$ as our space, there would remain the problem of deciding what partial ordering on this formal union reflects the partial ordering on $\V_{\op{fin}}(\overline{K})$. A natural merging of these spaces is obtained by considering balls admitting quasi-cuts in $\Gamma$ as radii. A {\bf quasi-cut} in $\Gamma$ is a pair $\delta=\left(\delta^L,\delta^R\right)$ of subsets of $\Gamma$ such that \[ \delta^L\le \delta^R\quad \mbox{ and }\quad \delta^L\cup \delta^R=\Gamma. \] Thus, $\delta^L$ is an initial segment of $\Gamma$ and $\delta^L\cap \delta^R$ consists of at most one element. The set $\op{Qcuts}(\g)$ of all quasi-cuts in $\Gamma$ admits a total ordering: $$ \delta=\left(\delta^L,\delta^R\right)\le \epsilon=\left(\epsilon^L,\epsilon^R\right) \ \ \Longleftrightarrow\ \ \delta^L\subseteq \epsilon^L\quad\mbox{and}\quad \delta^R\supseteq \epsilon^R. $$ There is an embedding of ordered sets $\Gamma\hookrightarrow\op{Qcuts}(\g)$, which assigns to every $\gamma\in\Gamma$ the {\bf principal quasi-cut} $\left(\Gamma_{\le \gamma},\Gamma_{\ge \gamma}\right)$. We abuse language and still denote by $\gamma\in \op{Qcuts}(\g)$ the principal quasi-cut determined by $\gamma$. Clearly, the set $\op{Cuts}(\Gamma)$ is embedded in $\op{Qcuts}(\g)$ and \[ \op{Qcuts}(\g)=\Gamma\sqcup\op{Cuts}(\Gamma). \] As far as the ordering is concerned, the comparison between the elements of $\Gamma$ and $\op{Cuts}(\Gamma)$ is clarified by the following inequalities: \[ \gamma^-<\gamma<\gamma^+\quad\mbox{ for all }\gamma\in\Gamma. \] Also, $\gamma^-$ is the immediate predecessor of $\gamma$ in $\op{Qcuts}(\g)$, while $\gamma^+$ is the immediate successor of $\gamma$.
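To visualize how the three kinds of quasi-cuts interlace, we continue with the illustrative choice $\Gamma=\mathbb Q$.

```latex
\noindent{\bf Example. }For $\Gamma=\mathbb Q$, the totally ordered set $\op{Qcuts}(\mathbb Q)=\mathbb Q\sqcup\op{Cuts}(\mathbb Q)$ contains, for instance, the chain
\[
-\infty\ <\ 0^-\ <\ 0\ <\ 0^+\ <\ 1^-\ <\ 1\ <\ 1^+\ <\ \delta\ <\ 2^-\ <\ 2\ <\ 2^+\ <\ \infty^-,
\]
where $\delta$ is the cut with $\delta^L=\{\gamma\in\mathbb Q\mid\gamma<\sqrt2\}$. In contrast with the principal quasi-cuts, the cut $\delta$ has no immediate predecessor and no immediate successor in $\op{Qcuts}(\mathbb Q)$, because $\delta^L$ has no maximal element and $\delta^R$ has no minimal element.
```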
For each pair $(a,\delta)\in\overline{K}\times\op{Qcuts}(\g)$, consider the ``pointed'' ball of center $a$ and radius $\delta$, defined as: \[B^{\mbox{\tiny $\bullet$}}(a,\delta)=\left(B(a,\delta),\delta\right), \] where $B(a,\delta)\subseteq \overline{K}$ is the ball defined in Sections \ref{subsecRTKb} (for $\delta\in\Gamma$) and \ref{subsecValueTKb} (for $\delta\in\op{Cuts}(\Gamma)$). We denote by $\B_{\op{qcut}}$ the set of all these pointed closed balls. Thus, a pointed closed ball $B^{\mbox{\tiny $\bullet$}}\in\B_{\op{qcut}}$ has two ingredients: $\bullet$ \ an {\bf underlying ball} $B\subseteq \overline{K}$, denoted $B=\operatorname{under}(B^{\mbox{\tiny $\bullet$}})$. $\bullet$ \ a {\bf radius} in $\op{Qcuts}(\g)$.\medskip \noindent{\bf Definition. }We say that $\,\delta\in\op{Qcuts}(\g)$ \,is an \emph{essential radius} if $\delta^R$ contains no minimal element. Equivalently, \,$\delta\not\in\Gamma$ and $\delta\ne\gamma^-$ \,for all \,$\gamma\in\Gamma$.\medskip A pointed ball with an essential radius is uniquely determined by its underlying ball in $\overline{K}$. However, for all $\gamma\in\Gamma$, the pointed balls $B^{\mbox{\tiny $\bullet$}}(a,\gamma)$ and $B^{\mbox{\tiny $\bullet$}}(a,\gamma^-)$ have the same underlying ball. For every $B^{\mbox{\tiny $\bullet$}}=B^{\mbox{\tiny $\bullet$}}(a,\delta)\in\B_{\op{qcut}}$, we denote $\omega_{B^{\mbox{\tiny $\bullet$}}}=\omega_{a,\delta}$. In the set $\B_{\op{qcut}}$, we consider the following partial ordering: \[ B^{\mbox{\tiny $\bullet$}}(a,\delta)\le B^{\mbox{\tiny $\bullet$}}(b,\epsilon)\ \ \Longleftrightarrow\ B(a,\delta)\supseteq B(b,\epsilon) \ \mbox{ and }\ \delta\le\epsilon. \] Then, it is easy to deduce from Theorems \ref{mainRTKb} and \ref{mainVTKb} the following result. \begin{theorem}\label{mainQC} The mapping $\B_{\op{qcut}}\to\V_{\op{fin}}(\overline{K})$ determined by $B^{\mbox{\tiny $\bullet$}}\mapsto \omega_{B^{\mbox{\tiny $\bullet$}}}$ is an isomorphism of posets.
\end{theorem} The following result generalizes Proposition \ref{infRT} to arbitrary valuation-transcenden\-tal valuations. For a quasi-cut $\delta\in\Gamma\subset\op{Qcuts}(\g)$, let us write $x_\delta:=\delta\in\Gamma$. \begin{proposition}\label{weneed} Let $\omega=\omega_{a,\delta}$ for some $a\in \overline{K}$, $\delta\in\op{Qcuts}(\g)$. Take a non-zero $f=\sum\nolimits_{0\le n}a_n(x-a)^n\in\overline{K}[x]$. Let $S$ be the set of indices $n$ such that $\omega(f)=\bar{v}(a_n)+nx_\delta$. Denote $V=\left\{\bar{v}(f(c))\mid c\in B(a,\delta)\right\}$. Then, the following statements hold: \begin{enumerate} \item[(a)] If $0\in S$ or $\delta\in\Gamma$, then $\ \omega(f)=\min\left(V\right)$. \item[(b)] If $0\not\in S$ and $\delta=\gamma^-$ for some $\gamma\in\Gamma$, then $\ \omega(f)=\min\left(V\right)^-$. \item[(c)] If $0\not\in S$ and $\delta$ is an essential radius, then $\omega(f)=\inf\left(V\right)$. \end{enumerate} \end{proposition} \noindent{\bf Remark. }In (b), an equality of the form $\omega(f)=\alpha^-$, for $\alpha\in\Gamma$, means that the value $\omega(f)\in\g_\omega$ strictly realizes the cut $\alpha^-$; that is, $\Gamma_{<\alpha}<\omega(f)<\Gamma_{\ge\alpha}$. In (c), the equality $\omega(f)=\inf\left(V\right)$ means that $\omega(f)$ is the infimum of $V$ as a subset of $\g_\omega$.\medskip \begin{proof} Let us write $B=B(a,\delta)$. For all $c\in B$ we have \[ \bar{v}(f(c))\ge \min\{\bar{v}(a_n(c-a)^n)\mid 0\le n\}\ge \min\{\bar{v}(a_n)+nx_\delta\mid 0\le n\}=\omega(f). \] We deduce that $\omega(f)\le V$. If $0\in S$, then $\omega(f)=\bar{v}(a_0)=\bar{v}(f(a))$. Since $a\in B$, we have $\omega(f)=\min\left(V\right)$. If $\delta\in\Gamma$, then (a) follows from Proposition \ref{infRT}. This ends the proof of (a). From now on, we assume that $0\not\in S$ and $\delta\not\in\Gamma$.
Then, $S$ is a one-element set, because for all $n\ne m$ we have $$ \bar{v}(a_n)+nx_\delta=\bar{v}(a_m)+mx_\delta\ \ \Longrightarrow\ x_\delta=(\bar{v}(a_n)-\bar{v}(a_m))/(m-n)\in\Gamma, $$ against our assumption. Hence, $S=\{n_0\}$ with $n_0>0$, and \begin{equation}\label{n0} \omega(f)=\bar{v}(a_{n_0})+n_0x_\delta<\bar{v}(a_{n})+nx_\delta,\quad \mbox{for all }n\ne n_0. \end{equation} Since $n_0>0$, we have $\omega(f)\not\in\Gamma$, so that $\omega(f)<V$.\medskip \noindent{\bf Proof of (b). }Suppose $0\not\in S$ and $\delta=\gamma^-$ for some $\gamma\in\Gamma$. By (\ref{NotMerge}), $B=B(a,\gamma)$. By Proposition \ref{infRT}, $\min(V)$ exists and coincides with $\omega_{a,\gamma}(f)$. Let us show that \[ \min(V)=\omega_{a,\gamma}(f)=\bar{v}(a_{n_0})+n_0\gamma. \] Indeed, since $\gamma>x_\delta=x_{\gamma^-}$, we have $n(\gamma-x_\delta)>n_0(\gamma-x_\delta)$ for all $n>n_0$. Hence, we deduce from (\ref{n0}) that $\bar{v}(a_{n_0})+n_0\gamma<\bar{v}(a_n)+n\gamma$ for all $n>n_0$. On the other hand, for $n<n_0$, the inequality (\ref{n0}) implies \[ x_\delta<\dfrac{\bar{v}(a_n)-\bar{v}(a_{n_0})}{n_0-n}. \] Hence, $\gamma\le (\bar{v}(a_n)-\bar{v}(a_{n_0}))/(n_0-n)$, because $\gamma$ is the minimal element in $\delta^R$. This leads to $\bar{v}(a_{n_0})+n_0\gamma\le\bar{v}(a_n)+n\gamma$. Hence, \[ \omega_{a,\gamma}(f)=\min\{\bar{v}(a_n)+n\gamma\}=\bar{v}(a_{n_0})+n_0\gamma. \] In order to show that $\omega(f)$ strictly realizes the cut $\min(V)^-$ in $\g_\omega$, it suffices to show that $\Gamma_{<\min(V)}<\omega(f)$. Consider any $\alpha\in\Gamma$ such that $\omega(f)<\alpha$, and express it as $\alpha=\bar{v}(a_{n_0})+n_0\beta$ for some $\beta\in\Gamma$. We have \[ \bar{v}(a_{n_0})+n_0x_{\gamma^-}=\omega(f)<\alpha=\bar{v}(a_{n_0})+n_0\beta\ \ \Longrightarrow\ \ x_{\gamma^-}<\beta\ \ \Longrightarrow\ \ \gamma\le\beta. \] Hence, $\alpha\ge\bar{v}(a_{n_0})+n_0\gamma=\min(V)$. This ends the proof of (b).\medskip \noindent{\bf Proof of (c). }Suppose $0\not\in S$ and $\delta$ is an essential radius.
Write \[ \alpha:=\min\{\left(\bar{v}(a_n)-\bar{v}(a_{n_0})\right)/(n_0-n)\mid 0\le n< n_0\}\in\Gamma. \] \noindent{\bf Claim. }Suppose that $\beta\in\Gamma$ satisfies $x_\delta<\beta<\alpha$. Then, for any $c\in\overline{K}$ such that $\bar{v}(c-a)=\beta$, we have $c\in B$ and $\omega(f)<\bar{v}(f(c))=\bar{v}(a_{n_0})+n_0\beta$.\medskip Indeed, $c\in B$ because $\bar{v}(c-a)>x_\delta>\delta^L$. Also, for all $n<n_0$, we have \[ \bar{v}(c-a)<\dfrac{\bar{v}(a_n)-\bar{v}(a_{n_0})}{n_0-n}\ \ \Longrightarrow\ \ \bar{v}(a_{n_0}(c-a)^{n_0})<\bar{v}(a_n(c-a)^n). \] Finally, for all $n>n_0$, from (\ref{n0}) and $x_\delta<\beta$, we deduce $ \bar{v}(a_{n_0})+n_0\beta<\bar{v}(a_{n})+n\beta$. Hence, $\bar{v}(a_{n_0}(c-a)^{n_0})<\bar{v}(a_n(c-a)^n)$ for all $n>n_0$. As a consequence, \[ \bar{v}(f(c))=\bar{v}(a_{n_0}(c-a)^{n_0})=\bar{v}(a_{n_0})+n_0\beta>\bar{v}(a_{n_0})+n_0x_\delta=\omega(f). \] This ends the proof of the Claim. In order to show that $\omega(f)=\inf(V)$, we need only check that for every $\xi\in\g_\omega$ such that $\omega(f)<\xi$, there is some $\epsilon\in V$ such that $\omega(f)<\epsilon<\xi$. In other words, there is some $c\in B$ such that $\omega(f)<\bar{v}(f(c))<\xi$. Since $\delta$ is an essential radius and $n_0>0$, the cut of $\Gamma$ determined by $\omega(f)=\bar{v}(a_{n_0})+n_0x_\delta$ is essential too. Hence, we may assume that $\xi\in\Gamma$. In order to facilitate some comparisons, let us write $\xi=\bar{v}(a_{n_0})+n_0\gamma$ for some $\gamma\in\Gamma$. Then, the inequality $\omega(f)<\xi$ is equivalent to $x_\delta<\gamma$. Since $x_\delta<\alpha$ and $\delta^R$ has no minimal element, there exists $\beta\in\Gamma$ such that $x_\delta<\beta<\min\{\gamma,\alpha\}$. By the Claim, there exists $c\in B$ such that \[ \omega(f)<\bar{v}(f(c))=\bar{v}(a_{n_0})+n_0\beta<\bar{v}(a_{n_0})+n_0\gamma=\xi. \] This ends the proof of (c).
\end{proof} \subsection{Global ordering on $\mathbb B$}\label{subsecGlobalOrd} Let $\mathbb B=\B_\infty\sqcup \B_{\op{qcut}}$. Since the intersection of a nest of closed balls in $\B_\infty$ is empty, for any reasonable extension of the partial ordering of $\B_{\op{qcut}}$ to $\mathbb B$, the nests of closed balls in $\B_\infty$ must be maximal elements. Thus, we obtain a partial ordering on $\mathbb B$ just by defining what balls in $\B_{\op{qcut}}$ lie below any given nest of balls. Let ${\mathcal B}=(B_i)_{i\in I}\in\mathcal{N}_\infty$ be a nest of closed balls, and $B^{\mbox{\tiny $\bullet$}}\in\B_{\op{qcut}}$. Then, we define \[ B^{\mbox{\tiny $\bullet$}}\le {\mathcal B}\ \ \Longleftrightarrow\ \ \operatorname{under}(B^{\mbox{\tiny $\bullet$}})\le B_i\ \mbox{ for some }i\in I. \] Clearly, this ordering is compatible with the equivalence $\sim$ of nests of closed balls. Thus, it determines a partial ordering on $\mathbb B$. We deduce our main theorem for algebraically closed fields. \begin{theorem}\label{mainB} The mapping $\,\mathbb B\to\mathbb V(\overline{K})$ determined by $B\mapsto \omega_B$ is an isomorphism of posets. \end{theorem} The node $\omega_{-\infty}\in\V_{\op{vt}}(\overline{K})\subseteq\mathbb V(\overline{K})$ becomes a root node of $\mathbb V(\overline{K})$ too. Also, for all $a\in\overline{K}$, the nodes $\omega_{a,\infty^-}\in\V_{\op{vt}}(\overline{K})\subseteq\mathbb V(\overline{K})$ become maximal nodes of $\mathbb V(\overline{K})$ too. We say that these $\omega_{a,\infty^-}$ are {\bf finite leaves} of $\mathbb V(\overline{K})$. The set of leaves of $\mathbb V(\overline{K})$ is \[ \B_\infty\sqcup\{\omega_{a,\infty^-}\mid a\in\overline{K}\}. \] We say that the leaves in $\B_\infty$ are {\bf infinite leaves} of $\mathbb V(\overline{K})$. Every interval between the root node and a finite leaf is parametrized by $\op{Qcuts}(\g)$: \[ [\omega_{-\infty},\omega_{a,\infty^-}]=\left\{\omega_{a,\delta}\mid \delta\in \op{Qcuts}(\g)\right\}. 
\] \section{Descent of valuations from $\overline{K}(x)$ to $K(x)$}\label{secProof11} The descent of valuations from $\overline{K}(x)$ to $K^h(x)$ is described in full detail in a recent paper by Vaqui\'e \cite[Section 3]{Vaq3}. Since the automorphisms in the decomposition group $G$ leave $\bar{v}$ invariant, they act on ultrametric balls as follows: \[ \sigma\left(B(a,\delta)\right)=B(\sigma(a),\delta),\quad \forall\,\sigma\in G,\ \forall\,\delta\in\op{Qcuts}(\g). \] Since this action preserves the radius $\delta$, it extends in an obvious way to an action on the set $\B_{\op{qcut}}$ of pointed balls. Also, $G$ acts on $\B_\infty$ because its action on nests of closed balls is clearly compatible with the equivalence relation of mutual cofinality. Therefore, $G$ acts in a natural way on $\mathbb B$. Let $\mathbb B/G$ be the set of all $G$-orbits, and let us denote by $[B]_G$ the $G$-orbit of any $B\in\mathbb B$. This $G$-action on $\mathbb B$ is reflected in the following action on $\mathbb V$: \[ \omega_{\sigma(B)}=\omega_B\circ\sigma^{-1}\quad\mbox{ for all }\sigma\in G. \] Hence, for all $B,C\in\mathbb B$ we have \begin{align*} \left(\omega_B\right)_{\mid K^h(x)}=\left(\omega_C\right)_{\mid K^h(x)}&\ \ \Longleftrightarrow\ \ \omega_C=\omega_B\circ\sigma\ \mbox{ for some }\sigma\in G\\&\ \ \Longleftrightarrow\ \ C = \sigma^{-1}(B)\ \mbox{ for some }\sigma\in G\\&\ \ \Longleftrightarrow\ \ [B]_G=[C]_G. \end{align*} Therefore, we deduce from Theorem \ref{mainB} an isomorphism of posets \[ \mathbb B/G\,\longrightarrow\,\mathbb V(K^h),\qquad [B]_G\longmapsto \left(\omega_B\right)_{\mid K^h(x)}. \] On the other hand, \cite[Theorem B]{Rig} shows that the restriction mapping determines an isomorphism of posets: \begin{equation}\label{thmB} \mathbb V(K^h)\,\longrightarrow\, \mathbb V,\qquad \mu\longmapsto \mu_{\mid K(x)}. \end{equation} This ends the proof of our main theorem. 
\begin{theorem}\label{main} The mapping $\,\mathbb B/G\to \mathbb V$ determined by $[B]_G\mapsto \left(\omega_B\right)_{\mid K(x)}$ is an isomorphism of posets. \end{theorem} In order to fit this theorem with Bengu\c{s}-Lasnier's conjecture, we need only identify $\mathbb B/G$ with some space $\mathbb D$ of diskoids. Let $\op{Irr}(K^h)$ be the set of all monic irreducible polynomials in $K^h[x]$.\medskip \noindent{\bf Definition. } Take $f\in\op{Irr}(K^h)$ and $\delta\in\op{Qcuts}(\g)$. The {\bf diskoid} $D(f,\delta)$ centred at $f$ of radius $\delta$ is defined as \[ D(f,\delta)=\{c\in\overline{K}\mid\bar{v}(f(c))\ge\delta^L\}. \] The {\bf pointed diskoid} $D^{\mbox{\tiny $\bullet$}}(f,\delta)$ centred at $f$ of radius $\delta$ is defined as \[ D^{\mbox{\tiny $\bullet$}}(f,\delta)=\left(D(f,\delta),\delta\right). \] We say that $D(f,\delta)$ is the {\bf underlying diskoid} of $D^{\mbox{\tiny $\bullet$}}(f,\delta)$. Let $\D_{\op{qcut}}$ be the set of all pointed diskoids centred at polynomials in $\op{Irr}(K^h)$ and having radii in $\op{Qcuts}(\g)$. Consider the following partial ordering on $\D_{\op{qcut}}$: \[ D^{\mbox{\tiny $\bullet$}}(f,\delta)\le D^{\mbox{\tiny $\bullet$}}(g,\epsilon)\ \ \Longleftrightarrow\ \ D(f,\delta)\supseteq D(g,\epsilon)\ \mbox{ and }\ \delta\le\epsilon. \] Let $\D_\Gamma,\D_\cut\subseteq \D_{\op{qcut}}$ be the subsets of all diskoids having radii in $\Gamma$ and $\op{Cuts}(\Gamma)$, respectively. Let $\D_\infty$ be the quotient set of the set of nests of diskoids in $\D_\Gamma$ having an empty intersection, under the equivalence relation identifying mutually cofinal nests. Let $\mathcal{D}=(D_i)_{i\in I}$ be a nest of diskoids, and $D^{\mbox{\tiny $\bullet$}}\in\D_{\op{qcut}}$.
Then, we define \begin{equation}\label{partialDi} D^{\mbox{\tiny $\bullet$}}\le \mathcal{D}\ \ \Longleftrightarrow\ \ \operatorname{under}(D^{\mbox{\tiny $\bullet$}})\le D_i\ \mbox{ for some }i\in I, \end{equation} where $\operatorname{under}(D^{\mbox{\tiny $\bullet$}})$ is the underlying diskoid of $D^{\mbox{\tiny $\bullet$}}$. Clearly, this ordering is compatible with the equivalence of nests of diskoids. Let $\mathbb D=\D_\infty\sqcup \D_{\op{qcut}}$, equipped with the partial ordering determined by the partial ordering of $\D_{\op{qcut}}$ and the relation (\ref{partialDi}). The following observation follows easily from \cite[Lemma 6.9]{Andrei}. \begin{lemma}\label{B-D} The following mapping is an isomorphism of posets: \[ \mathbb B/G\,\longrightarrow\, \mathbb D,\qquad [B]_G\longmapsto \bigcup_{\sigma\in G}\sigma(B). \] \end{lemma} Moreover, this isomorphism induces isomorphisms: \[ \B_\infty/G\lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\longrightarrow\,\\\mbox{\tiny $\sim\,$}\end{array}$} \D_\infty,\quad {\mathbb{B}_\Gamma}/G\lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\longrightarrow\,\\\mbox{\tiny $\sim\,$}\end{array}$}\D_\Gamma,\quad \B_{{\op{cut}}}/G\lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\longrightarrow\,\\\mbox{\tiny $\sim\,$}\end{array}$}\D_\cut,\quad \B_{\op{qcut}}/G\lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\longrightarrow\,\\\mbox{\tiny $\sim\,$}\end{array}$}\D_{\op{qcut}}. \] \subsection{Root node and leaves of $\mathbb V$}\label{subsecMM} Clearly, the restriction mappings preserve the ordering of valuations. Hence, the restriction of the valuation $\omega_{-\infty}$, defined in (\ref{omi}), is the root node of $\mathbb V$, and the restrictions of the leaves of $\mathbb V(\overline{K})$ are the leaves of $\mathbb V$. The set of leaves of $\mathbb V(\overline{K})$ was described in Section \ref{subsubsecMM}.
Thus, $\mathbb V$ has a set of infinite leaves, parametrized by $\B_\infty/G$, and a set of finite leaves, obtained as the restrictions to $K(x)$ of the valuations $\omega_{a,\infty^-}$ for $a\in\overline{K}$. Since $\omega_{a,\infty^-}$ and $\omega_{b,\infty^-}$ have the same restriction to $K^h(x)$ if and only if $a$ and $b$ are $G$-conjugate, we see that the finite leaves of $\mathbb V(K^h)$ are parametrized by the set $\op{Irr}(K^h)$. By the isomorphism (\ref{thmB}), the set of finite leaves of $\mathbb V$ is in bijection with $\op{Irr}(K^h)$ too. \section{Valuations on $K[x]$ with nontrivial support}\label{secSupport} \subsection{Valuations on $K[x]$}\label{secValsKx} Let $\Lambda$ be an ordered abelian group. A $\Lambda$-valued valuation on the polynomial ring $K[x]$ is a mapping $\mu\colon K[x]\to \Lambda\infty$, satisfying the following conditions:\medskip (0) \ $\mu(1)=0$, \ $\mu(0)=\infty$, (1) \ $\mu(fg)=\mu(f)+\mu(g),\quad\forall\,f,g\in K[x]$, (2) \ $\mu(f+g)\ge\min\{\mu(f),\mu(g)\},\quad\forall\,f,g\in K[x]$.\medskip The {\bf support} of $\mu$ is the prime ideal $\mathfrak{p}=\op{supp}(\mu)=\mu^{-1}(\infty)\in\operatorname{Spec}(K[x])$. The {\bf value group} of $\mu$ is the subgroup $\g_\mu\subseteq \Lambda$ generated by $\mu\left(K[x]\setminus\mathfrak{p}\right)$. The valuation $\mu$ induces in a natural way a valuation $\overline{\mu}$ on the field of fractions $L=\operatorname{Frac}\left(K[x]/\mathfrak{p}\right)$; that is, $L=K(x)$ if $\mathfrak{p}=0$, or $L=K[x]/(f)$ if $\mathfrak{p}=fK[x]$ for some irreducible $f\in K[x]$. The {\bf residue field} of $\mu$ is, by definition, the residue field $L\overline{\mu}$. Thus, the valuations on $K[x]$ with trivial support may be identified with valuations on $K(x)$. The valuations with nontrivial support extending $v$ are commensurable over $v$ and have a finite residual extension $L\overline{\mu}/Kv$.
In particular, these valuations are equivalent to a unique $\Gamma$-valued valuation, by the arguments used in Section \ref{subsecComm}. The valuations with nontrivial support are useful to describe extensions of $v$ to finite field extensions of $K$. More precisely, every valuation $\mu$ with nontrivial support $fK[x]$ can be identified with an extension of $v$ to the field $L=K[x]/(f)$, just by considering the composition \[ \mu\colon K[x]\,\longrightarrow\, L\stackrel{\overline{\mu}}\,\longrightarrow\, \Gamma\infty. \] These extensions of $v$ to a given finite simple extension $L/K$, determined by an irreducible $f\in K[x]$, are in 1-1 correspondence with the irreducible factors of $f$ in $K^h[x]$. For the discrete rank-one case this fact goes back to Hensel. For an arbitrary valued field $(K,v)$ it can be deduced from the techniques of \cite[Section 17]{endler}. A concrete proof may be found in \cite[Section 3]{NN}. For each irreducible factor $F\in\op{Irr}(K^h)$ of $f$, we may consider the following valuation $v_F$ on $K[x]$ with support $fK[x]$: \[ v_F(g)=\bar{v}(g(a))\quad\mbox{ for all }g\in K[x], \] where $a\in\overline{K}$ is any choice of a root of $F$ in $\overline{K}$. By the henselian property, this valuation does not depend on the choice of $a$. As a consequence of the above-mentioned 1-1 correspondence, the mapping $F\mapsto v_F$ determines a bijection between $\op{Irr}(K^h)$ and the set of all $\Gamma$-valued valuations on $K[x]$ with nontrivial support. \subsection{Tree of equivalence classes of valuations on $K[x]$} Let $\widetilde{\mathbb V}$ be the set of equivalence classes of valuations on $K[x]$ whose restriction to $K$ is equivalent to $v$. This set has the structure of a tree too, fully described in \cite[Section 7]{VT}. As a set, we have \[ \widetilde{\mathbb V}=\mathbb V\sqcup\{v_F\mid F\in \op{Irr}(K^h)\}. \] What is the relative position of these added valuations with nontrivial support?
Recall that the finite leaves of $\mathbb V(\overline{K})$ are the valuations $\omega_{a,\infty^-}$, for $a$ running over $\overline{K}$. For every $F\in\op{Irr}(K^h)$, the finite leaves $\omega_{a,\infty^-}$, for $a$ running over the roots of $F$, have the same restriction to $K^h[x]$ (hence to $K[x]$). This restriction satisfies \[ \left(\omega_{a,\infty^-}\right)_{\mid K[x]}<v_F \] and $v_F$ is the immediate successor of $\left(\omega_{a,\infty^-}\right)_{\mid K[x]}$ in $\widetilde{\mathbb V}$. Therefore, the finite leaves in $\mathbb V$ (parametrized by $\op{Irr}(K^h)$) cease to be leaves in $\widetilde{\mathbb V}$, because each one admits an immediate successor in $\widetilde{\mathbb V}$. Also, the valuations with nontrivial support become the set of finite leaves of $\widetilde{\mathbb V}$. Finally, the tree $\widetilde{\mathbb V}$ admits a geometric parametrization completely analogous to that of Theorem \ref{main}. The parametrization of $\widetilde{\mathbb V}$ follows exactly as before, just by considering the $G$-action and applying \cite[Theorem B]{Rig}, which includes valuations with nontrivial support. Let us briefly describe the parametrization of $\widetilde{\mathbb V}(\overline{K})$. The set $\B_\infty$ remains untouched. We enlarge the set $\B_{\op{qcut}}$, just by admitting radii in $\op{Qcuts}(\g)\infty$. We agree that \[ B(a,\infty)=B(a,\infty^-)=\{a\}\quad\mbox{ for all }a\in \overline{K}. \] The pointed balls $B^{\mbox{\tiny $\bullet$}}(a,\infty)=(B(a,\infty),\infty)$ determine valuations $\omega_{a,\infty}$ with nontrivial support $(x-a)\overline{K}[x]$, by using exactly the same formula as in (\ref{defom}), letting the symbol $x_\infty$ be equal to $\infty$. That is, \[ \omega_{a,\infty}\left(\sum\nolimits_{0\le n}a_n(x-a)^n\right) = \bar{v}(a_0). \] The restriction of $\omega_{a,\infty}$ to $K[x]$ is the valuation $v_F$, for $F$ the minimal polynomial of $a$ over $K^h$.
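A small concrete case may help to visualize the valuations $\omega_{a,\infty}$ and $v_F$; the choice $K=\mathbb Q_5$ below is merely illustrative.

```latex
\noindent{\bf Example. }Let $K=\mathbb Q_5$ with the $5$-adic valuation $v$, so that $K^h=K$. Since $2$ is not a square modulo $5$, the polynomial $F=x^2-2$ belongs to $\op{Irr}(K^h)$. Fix a root $a=\sqrt2\in\overline{K}$. Then, $\omega_{a,\infty}(x-a)=\infty$, while
\[
\omega_{a,\infty}(f)=\bar{v}(f(a))\quad\mbox{ for all }f\in\overline{K}[x]\setminus(x-a)\overline{K}[x].
\]
The restriction of $\omega_{a,\infty}$ to $K[x]$ is $v_F$; for instance, $v_F(x^2-2)=\infty$, $\ v_F(x)=\bar{v}(\sqrt2)=0\ $ and $\ v_F(5x)=1$. By the henselian property, replacing $a$ by the other root $-\sqrt2$ of $F$ yields the same restriction.
```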
Denote by $\tilde{\mathbb B}_{\operatorname{qcut}}$ this enlargement of $\B_{\op{qcut}}$. Then, the set \[ \tilde{\mathbb B}:=\B_\infty\sqcup \tilde{\mathbb B}_{\operatorname{qcut}} \] parametrizes $\widetilde{\mathbb V}(\overline{K})$ as indicated in Theorem \ref{mainB}. \section{Comparison with approximation types}\label{secApprT} The concept of approximation type was explored by Kuhlmann in \cite{KuhlAT}. Let $(K,v)$ be a valued field with value group $\Gamma_v=vK$. Let $\mathfrak B=\mathfrak B(K,v)$ be the set of all balls (open or closed) in $K$, with radii in $\Gamma_v$. Let $\mathcal N$ be the set of all nests of balls in $K$. For $\mathcal B=(B_i)_{i\in I}\in\mathcal N$ we define \[ \overline{\mathcal B}:=\{B\in \mathfrak B\mid B_i\subseteq B \ \mbox{ for some }i\in I\}. \] An {\bf approximation type} over $(K,v)$ is either the empty set or a nest of balls $\mathbf{A}$ such that $\mathbf{A}=\overline{\mathbf{A}}$. In other words, a non-empty approximation type is a nest of balls containing every open or closed ball which contains some ball $B\in \mathbf{A}$. \begin{lemma}\label{AB} For every nest of balls $\mathcal B\in\mathcal N$, there exists a uniquely determined approximation type $\mathbf{A}$ such that $\mathbf{A}\sim \mathcal B$. \end{lemma} \begin{proof} Let $\mathcal B$ be a nest of balls and set $\mathbf{A}=\overline{\mathcal B}$. Clearly, $\mathbf{A}=\overline{\mathbf{A}}$, hence it is an approximation type. By definition, $\mathbf{A}$ and $\mathcal B$ are cofinal in each other, so that $\mathbf{A}\sim{\mathcal B}$. Finally, two approximation types in $\mathcal N$ are mutually cofinal if and only if they are equal. Hence, $\textbf A$ is uniquely determined. \end{proof}\medskip The {\bf support} of $\mathbf{A}$ is the following initial segment of $\Gamma_v$: \[ \op{supp}(\mathbf{A}):=\left\{\gamma\in\Gamma_v\mid \mathbf{A}\ \mbox{ contains a closed ball of radius }\gamma\right\}. 
\] For all $\gamma\in\op{supp}(\mathbf{A})$ there is a unique closed ball in $\mathbf{A}$ of radius $\gamma$. We denote this closed ball by $\mathbf{A}_\gamma$, and its corresponding open ball by $\mathbf{A}_\gamma^{\operatorname{o}}$. Let $\mathcal A$ be the set of all approximation types over $(K,v)$. On this set, we consider the partial ordering determined by ascending inclusion as sets of balls: \[ \mathbf{A}\le \mathbf{A}'\ \ \Longleftrightarrow\ \ \mathbf{A}\subseteq \mathbf{A}'. \] \noindent{\bf Definition. } We write $\bigcap\mathbf{A}$ to indicate the intersection of all balls in $\mathbf{A}$. \begin{itemize} \item $\mathbf{A}$ is {\bf immediate}\, if $\ \mathbf{A}\ne\emptyset$\, and $\ \bigcap\mathbf{A}=\emptyset$. \item $\mathbf{A}$ is {\bf value-extending}\, if either $\mathbf{A}=\emptyset$, or \[ \bigcap\mathbf{A}\ne\emptyset\quad \mbox{ and }\quad\mathbf{A}_\gamma^{\operatorname{o}}\in\mathbf{A}\ \mbox{ for all }\gamma\in\op{supp}(\mathbf{A}). \] \item $\mathbf{A}$ is {\bf residue-extending}\, if there exists $\gamma\in\op{supp}(\mathbf{A})$ such that $\mathbf{A}_\gamma^{\operatorname{o}}\not\in\mathbf{A}$. \end{itemize} In the latter case, we have $\gamma=\max(\op{supp}(\mathbf{A}))$ and $\ \mathbf{A}=\overline{\mathbf{A}_\gamma}$.\medskip Let $\mathbb V$ be the set of equivalence classes of valuations on $K(x)$ whose restriction to $K$ is equivalent to $v$. For each valuation $\nu\in \mathbb V$, Kuhlmann defines the approximation type of $x$ over $(K,v)$ as follows: \[ {\rm appr}_\nu(x,K)=\{B\cap K\mid B\in\mathfrak B(K(x),\nu)\ \mbox{ centred on $x$, with radius in }\Gamma_v\}. \] One can show that the above definition is compatible with equivalence of valuations on $K(x)$. The next result follows from \cite[Theorems 1.2, 1.3]{KuhlAT}. \begin{theorem}\label{mainK} The map $\mathbb V\to \mathcal A$ given by $\nu\mapsto {\rm appr}_\nu(x,K)$ is surjective. If $K$ is algebraically closed, then this mapping is bijective. 
\end{theorem} Suppose from now on that $K$ is algebraically closed. Write $\Gamma:=\Gamma_v$, which is a divisible group. In this case, we have two different geometric parametrizations of $\mathbb V$: the mapping $\mathbb B\to\mathbb V$ from Theorem \ref{mainB} and the mapping $\mathbb V\to\mathcal{A}$ from Theorem \ref{mainK}. It is easy to check that the latter map strictly preserves the ordering, so that it is an isomorphism of posets too. Our aim in this section is to explicitly describe the isomorphism \begin{equation}\label{composition} \mathbb B\,\longrightarrow\,\mathbb V\,\longrightarrow\, \mathcal{A} \end{equation} obtained by composition of the two geometric parametrizations. Recall the decomposition $\mathbb B=\B_\infty\sqcup{\mathbb{B}_\Gamma}\sqcup\B_{{\op{cut}}}$ described in Section \ref{secKb}. \begin{theorem}\label{comparison} The isomorphism of (\ref{composition}) restricts to isomorphisms: \[ \B_\infty\lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\longrightarrow\,\\\mbox{\tiny $\sim\,$}\end{array}$}\mathcal{A}_\infty,\qquad {\mathbb{B}_\Gamma}\lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\longrightarrow\,\\\mbox{\tiny $\sim\,$}\end{array}$}\mathcal{A}_{\operatorname{re}},\qquad \B_{{\op{cut}}}\lower.3ex\hbox{\ars{.08}$\begin{array}{c}\,\longrightarrow\,\\\mbox{\tiny $\sim\,$}\end{array}$}\mathcal{A}_{\operatorname{ve}}, \] where $\mathcal{A}_\infty$, $\mathcal{A}_{\operatorname{re}}$ and $\mathcal{A}_{\operatorname{ve}}$ are the subsets of immediate, residue-extending and value-extending approximation types, respectively. 
More explicitly, the isomorphism acts as follows: \[ \ars{1.3} \begin{array}{lll} \B_\infty\,\longrightarrow\, \mathcal{A}_\infty,&\qquad [{\mathcal B}]\mapsto \overline{{\mathcal B}},&\\ {\mathbb{B}_\Gamma}\,\longrightarrow\, \mathcal{A}_{\operatorname{re}},&\qquad B(a,\gamma)\mapsto \overline{B(a,\gamma)},&\ \mbox{ for all }a\in K,\ \gamma\in\Gamma,\\ \B_{{\op{cut}}}\,\longrightarrow\, \mathcal{A}_{\operatorname{ve}},&\qquad B(a,\delta)\mapsto \overline{\left\{B^{\operatorname{o}}(a,\gamma)\mid \gamma\in\delta^L\right\}},&\ \mbox{ for all }a\in K,\ \delta\in\op{Cuts}(\Gamma). \end{array} \] \end{theorem} \begin{proof} Take any $[{\mathcal B}]\in\B_\infty$; that is, the class of a nest of closed balls ${\mathcal B}=(B_i)_{i\in I}$ such that $\bigcap B_i=\emptyset$. Write $B_i=B(a_i,\gamma_i)$ for all $i\in I$, and let $S\subseteq \Gamma$ be the initial segment generated by the subset $\{\gamma_i\mid i\in I\}$. The set $S$ has no last element. Let $\nu=\omega_{\mathcal B}$. By the remarks in Section \ref{subsecVAKb} we deduce that for every $b\in K$, there exists $i\in I$ such that \[ \nu(x-b)= \min\{v(b-a_i),\gamma_i\}\leq \gamma_i. \] Hence, if $\gamma>S$, then $B(x,\gamma)\cap K=\emptyset$. On the other hand, if $\gamma<\gamma_i$ for some $i\in I$, then $B^{\operatorname{o}}(x,\gamma)\cap K\supseteq B_i$. Indeed, \[ b\in B_i \ \Longrightarrow\ v(b-a_i)\ge \gamma_i \ \Longrightarrow\ \nu(x-b)\ge\omega_{a_i,\gamma_i}(x-b)=\min\{v(b-a_i),\gamma_i\}>\gamma. \] Therefore, the image of $[{\mathcal B}]$ in $\mathcal{A}$ is the approximation type \[ \mathbf{A}=\overline{\left\{B^{\operatorname{o}}(x,\gamma)\cap K\mid \gamma\in S\right\}}. \] By Lemma \ref{AB}, in order to check that $\mathbf{A}=\overline{{\mathcal B}}$, we need only to show that $\mathbf{A}\sim{\mathcal B}$. We have already seen that every $B\in\mathbf{A}$ contains some $B_i$. 
Let us show the converse statement; more precisely, let us show that every $B_i$ contains $B^{\operatorname{o}}(x,\gamma)\cap K$ for all $\gamma\in S$ such that $\gamma>\gamma_i$. Indeed, take any $b\in B^{\operatorname{o}}(x,\gamma)\cap K$. Take $i<j\in I$ sufficiently large so that \[ \gamma_i\le\nu(x-a_i)=\min\{v(a_j-a_i),\gamma_j\},\qquad \gamma<\nu(x-b)=\min\{v(a_j-b),\gamma_j\}. \] The condition $v(b-a_i)<\gamma_i$ leads to a contradiction: \[ \gamma_i>v(b-a_i)\ge\min\{v(b-a_j),v(a_j-a_i)\}\ge\min\{\gamma,\gamma_i\}=\gamma_i. \] Hence, $B^{\operatorname{o}}(x,\gamma)\cap K\subseteq B_i$. This proves that $\mathbf{A}\sim{\mathcal B}$. In particular, $\bigcap \mathbf{A}=\bigcap_{i\in I}B_i=\emptyset$, so that $\mathbf{A}$ is an immediate approximation type. Now, take $B(a,\gamma)\in{\mathbb{B}_\Gamma}$ and let $\nu=\omega_{a,\gamma}$. Since $\nu(x-b)\le\gamma$ for all $b\in K$, we have $B(x,\alpha)\cap K=\emptyset$ for all $\alpha>\gamma$. On the other hand, for $\alpha=\gamma$ we clearly have \[ B(x,\gamma)\cap K=B(a,\gamma),\qquad B^{\operatorname{o}}(x,\gamma)\cap K=\emptyset. \] This proves that the image of $B(a,\gamma)$ in $\mathcal{A}$ is the approximation type $\mathbf{A}=\overline{B(a,\gamma)}$, which is obviously residue-extending. Finally, take $B(a,\delta)\in\B_{{\op{cut}}}$ for some $a\in K$ and $\delta\in\op{Cuts}(\Gamma)$. Let $\nu=\omega_{a,\delta}$. As in the previous cases, if $\gamma\in\Gamma$ satisfies $\gamma>\delta^L$, then $B(x,\gamma)\cap K=\emptyset$. On the other hand, if $\gamma\in\delta^L$, then $B^{\operatorname{o}}(x,\gamma)\cap K=B^{\operatorname{o}}(a,\gamma)$. Therefore, the image of $B(a,\delta)$ in $\mathcal{A}$ is the approximation type \[ \overline{\left\{B^{\operatorname{o}}(a,\gamma)\mid \gamma\in\delta^L\right\}}, \] which is obviously value-extending. \end{proof}\medskip Let us illustrate with some examples the fact that the isomorphisms in (\ref{composition}) preserve the ordering. 
The empty approximation type belongs to $\mathcal{A}_{\operatorname{ve}}$. It corresponds to the maximal ball $B(a,-\infty)=K\in\B_{{\op{cut}}}$, which is independent of $a\in K$. They both correspond to the root node $\omega_{-\infty}$ of $\mathbb V$. The leaves of $\mathbb V$ correspond to the minimal balls $B(a,\infty^-)=\{a\}\in\B_{{\op{cut}}}$ for all $a\in K$. The corresponding maximal approximation types in $\mathcal{A}_{\operatorname{ve}}$ are those of the form \[ \overline{\left\{B^{\operatorname{o}}(a,\gamma)\mid \gamma\in\Gamma\right\}}=\left\{B\in\mathfrak{B}\mid a\in B\right\}. \] Finally, take arbitrary $a\in K$, $\gamma\in \Gamma$. Consider the quasi-cuts $\gamma^-<\gamma<\gamma^+$, and the valuations $\omega_{a,\gamma^-}<\omega_{a,\gamma}<\omega_{a,\gamma^+}$. The corresponding approximation types are \[ \overline{\left\{B^{\operatorname{o}}(a,\alpha)\mid \alpha<\gamma\right\}}\subsetneq \overline{B(a,\gamma)}\subsetneq \overline{B^{\operatorname{o}}(a,\gamma)}. \]
\section{Introduction} \label{section:intro} Reed-Muller (RM) codes are linear block codes with a recursive structure introduced in 1954 by Muller~\cite{Muller1954}. Immediately after, Reed~\cite{Reed1954} introduced the first decoder for RM codes based on majority voting, which could correct bit errors in a codeword up to half of the minimum distance of the corresponding code. Since then, several efficient decoding methods for RM codes have been published~\cite{Betextquotesingleery1986,Ashikhmin1996,green1966serial,Sidel1992,Dumer2004,Sakkour2005,Dumer2006a,Dumer2006}. Recently, there has been growing interest in RM codes as they have been shown to be capacity-achieving under maximum-likelihood (ML) decoding for general symmetric channels~\cite{Arikan2009, Hussami2009, Kudekar2017, Abbe2015}. However, since the complexity of ML decoding scales exponentially with the blocklength, ML decoding is not a practically feasible decoding method. Therefore, considerable effort has been put into proposing efficient near-ML decoding algorithms for RM codes with tractable complexity, which is especially attractive for short-blocklength applications~\cite{Tonnellier2021}. A wide variety of the proposed algorithms take advantage of the recursive structure of RM codes to achieve near-ML decoding. For example, the recursive list decoder proposed in \cite{Dumer2006} provides efficient near-ML decoding performance for RM codes with a reasonable list size. Nevertheless, similarly to all list-based decoders, it is challenging to achieve low-latency decoding even with a relatively small list size due to the required path-selection operation, which cannot be readily parallelized. Moreover, due to the similarity of RM codes to polar codes, the successive-cancellation (SC) decoding~\cite{Arikan2009} and SC list (SCL)~\cite{Tal2011} decoding algorithms for polar codes are able to decode RM codes recursively with reasonable complexity. 
Recently, a recursive algorithm called \emph{recursive projection-aggregation} (RPA)~\cite{Ye2020} decoding was proposed for decoding RM codes. RM codes decoded with the RPA algorithm achieve near-ML decoding performance, outperforming polar codes with the same blocklength and rate under SCL decoding even with large list sizes. At a high level, the RPA decoder generates many smaller codewords by combining elements of the received vector and then decodes the generated codewords recursively to aggregate them into a reliable decision for the received vector. The authors of~\cite{Ye2020} mention that the computational complexity of RPA decoding scales like $O(n^r \log n)$, where $n$ is the blocklength and $r$ is the order of the RM code, making RPA decoding impractical for RM codes with large order $r$. Several algorithms have been published to reduce the complexity of the RPA algorithm. For example, simplified RPA~\cite{Ye2020} and collapsed projection-aggregation (CPA)~\cite{Lian2020} reduce the number of recursive levels and project the received vectors directly onto smaller codes, thus reducing the number of projections. However, both simplified RPA and CPA lead to high-complexity projection and aggregation steps. The authors of \cite{Fathollahi2021} introduced a variant of the RPA decoder called sparse RPA (SRPA) that uses a small random subset of the projections, thus reducing the complexity of the RPA algorithm. However, the error-correcting performance of SRPA degrades significantly as the sparsity increases. To address this, the authors also proposed \mbox{k-SRPA}, which employs $k$ sparse decoders to improve the error-correcting performance of a single SRPA decoder. A cyclic redundancy check (CRC) and Reed's algorithm are used to select the most reliable candidate among all candidates decoded from the $k$ individual SRPA decoders. 
Unfortunately, the \mbox{k-SRPA} algorithm causes a reduction in the effective code rate due to the added CRC, as well as hardware overhead due to the required randomization of projections and the multiple control units, memories, and data paths for multiple decoders. The work of \cite{JiaJie2021} proposed syndrome-based early stopping techniques along with a scheduling scheme, referred to as reduced-complexity RPA, to lower the complexity of the RPA algorithm. Reduced-complexity RPA remains challenging due to the computational overhead of the syndrome checks and its variable decoding latency. In this work, we further optimize RPA decoding with the ultimate goal of making it feasible for hardware implementation. More specifically, we propose a multi-factor pruning method inspired by simulation-based observations about the relative importance of projections at different recursion levels. Our method prunes the RPA decoder with a varying level of aggressiveness for different iterations and recursion levels. Our results show a computational complexity reduction of $92\%$ compared to the baseline RPA algorithm~\cite{Ye2020} and $38$\% to $77$\% compared to the works of \cite{JiaJie2021} and \cite{Fathollahi2021}, with no degradation in the error-correcting performance. Moreover, in contrast to the works of \cite{Fathollahi2021} and \cite{JiaJie2021}, our proposed pruning method can translate to a simpler hardware implementation. The remainder of this paper is organized as follows. In Section \ref{sec:background}, we give the preliminaries and background, including a description of RM codes as well as the RPA algorithm, and we explain the issues with previously proposed pruning methods in more detail. In Section~\ref{sec:delta_RPA}, we present our proposed multi-factor pruning method. In Section \ref{sec:results}, we present error-correcting performance simulation results and computational complexity comparisons with existing pruning methods. 
Finally, we conclude the paper in Section~\ref{sec:conclusion}. \section{Preliminaries and Background} \label{sec:background} \subsection{Reed-Muller codes} Reed-Muller codes are linear block codes denoted by $\text{RM}(m,r)$, where $n = 2^m$ is the blocklength, and $r$ is the order of the code. Like other linear block codes, RM codes are defined by a $k \times n$ generator matrix $\mathbf{G}_{(m,r)}$, where $k=\sum _{i=0}^r \binom{m}{i}$ and the rate of the code is $R= \frac{k}{n}$. A binary vector $\mathbf{u}$ with $k$ information bits is encoded into a binary vector $\mathbf{c}$ belonging to the $\text{RM}(m,r)$ code as follows: \begin{equation} \label{eq:code} \mathbf{c}= \mathbf{u}\mathbf{G}_{(m,r)}. \end{equation} The generator matrix $\mathbf{G}_{(m,r)}$ of the $\text{RM}(m,r)$ code is obtained from the $m^\text{th}$ Kronecker power of $\mathbf{F}=\begin{bmatrix} 1 & 1\\ 0 & 1 \end{bmatrix}$ by selecting the rows with a Hamming weight of at least $2^{m-r}$. The resulting generator matrix $\mathbf{G}_{(m,r)}$ has a universal and recursive structure defined as: \begin{equation}\label{eq:GMtx} \mathbf{G}_{(m,r)}= \begin{bmatrix} \mathbf{G}_{(m-1,r)} & \mathbf{G}_{(m-1,r)}\\ \mathbf{0} & \mathbf{G}_{(m-1, r-1)} \end{bmatrix}, \quad \mathbf{G}_{(1, 1)}= \mathbf{F}. \end{equation} In addition, a codeword $\mathbf{c}\in \text{RM}(m,r)$ can be mapped to an $m$-variate polynomial with degree less than or equal to $r$ over the vector space $\mathbb{E}:= \mathbb{F}^{m}_{2}$. Consequently, each coordinate of the codeword $\mathbf{c}$ is indexed by an $m$-bit binary vector $z \in \mathbb{E}$, i.e., $\mathbf{c}=\left(c(z),z\in \mathbb{E} \right)$~\cite{Abbe2020}. \subsection{Coset-based projections} \label{sec:coset} Let $\mathbb{B}$ be an $s$-dimensional subspace of $\mathbb{E}$. 
There are $2^{m-s}$ different cosets $T$ for $\mathbb{B}$ making the quotient space $\mathbb{E}/ \mathbb{B}$: \begin{equation}\label{eq:t} \mathbb{E}/ \mathbb{B} = \lbrace T:= z+\mathbb{B}, z\in \mathbb{E} \rbrace. \end{equation} Moreover, the projection of $\mathbf{c}=\left(c(z),z\in \mathbb{E} \right)$ on the cosets of an $s$-dimensional subspace $\mathbb{B}$ is defined as: \begin{equation} \mathbf{c}_{/ \mathbb{B}}=\operatorname{Proj}(\mathbf{c}, \mathbb{B}):=\left(c_{/ \mathbb{B}}(T), T \in \mathbb{E} / \mathbb{B} \right), \end{equation} where $\mathbf{c}_{/ \mathbb{B}} = \left(c_{/ \mathbb{B}}(T):= {\oplus}_{z\in T}c(z)\right)$ is a binary vector generated by summing up all coordinates of $\mathbf{c}$ indexed by the elements of each coset $T\in \mathbb{E}/ \mathbb{B}$. The work of~\cite{Ye2020} proved that $c_{/ \mathbb{B}}$ is a codeword of $\text{RM}(m-s,r-s)$ if $\mathbf{c}$ is a codeword of $\text{RM}(m,r)$. The RPA decoding algorithm, which is the focus of this paper, exploits this property of RM codes using \mbox{one-dimensional} subspaces, i.e., $s=1$. \subsection{Recursive projection aggregation (RPA) Decoding} \label{sec:RPA} \begin{figure}[t] \centering \centerline{\includegraphics[width=0.525\textwidth]{rpa}} \vspace{-1cm} \caption{\small RPA decoding for $r=3$ RM codes.} \label{fig:RPA} \vspace*{-1em} \end{figure} Let $W:\{0,1\} \rightarrow \mathcal{W}$ denote a binary-input memoryless channel, where $\mathcal{W}$ is the output alphabet. The log-likelihood ratio (LLR) of a channel output $y\in \mathcal{W} $, denoted by $L(y)$, is defined as: \begin{equation}\label{eq:LLR} { L(y)}:=\ln \left(\frac{W(y \mid 0)}{W(y \mid 1)}\right). \end{equation} Soft-decision RPA decoding uses the LLR values of the channel output vector $\mathbf{y}$, denoted by the vector $\mathbf{L}$, as its input. In particular, the RPA decoding algorithm decodes $\mathbf{L}$ in three steps: projection, recursive decoding, and aggregation. 
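Before describing the three steps in detail, the projection property of Section~\ref{sec:coset} can be checked numerically. The following Python sketch (an illustration of ours, not taken from the referenced works) builds $\mathbf{G}_{(m,r)}$ from the Kronecker construction and verifies that XOR-ing the coordinates paired by one particular one-dimensional subspace, namely $z$ and $z+2^{m-1}$, yields a codeword of $\text{RM}(m-1,r-1)$:

```python
import numpy as np

F = np.array([[1, 1], [0, 1]], dtype=np.uint8)

def rm_generator(m, r):
    """G_(m,r): rows of the m-th Kronecker power of F with Hamming weight >= 2^(m-r)."""
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(m):
        G = np.kron(G, F)
    return G[G.sum(axis=1) >= 2 ** (m - r)]

def gf2_rank(M):
    """Rank over GF(2), computed by Gaussian elimination."""
    M = M.copy().astype(np.uint8)
    rank = 0
    for col in range(M.shape[1]):
        piv = next((i for i in range(rank, M.shape[0]) if M[i, col]), None)
        if piv is None:
            continue
        M[[rank, piv]] = M[[piv, rank]]
        for i in range(M.shape[0]):
            if i != rank and M[i, col]:
                M[i] ^= M[rank]
        rank += 1
    return rank

m, r = 4, 2
G = rm_generator(m, r)                       # k = 1 + 4 + 6 = 11 rows, n = 16 columns
u = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0], dtype=np.uint8)
c = (u @ G) % 2                              # a codeword of RM(4, 2)
proj = c[: 2 ** (m - 1)] ^ c[2 ** (m - 1):]  # XOR coordinates z and z + 2^(m-1)
Gs = rm_generator(m - 1, r - 1)              # generator matrix of RM(3, 1)
# proj lies in the row space of G_(3,1): appending it does not increase the GF(2) rank
assert gf2_rank(np.vstack([Gs, proj])) == gf2_rank(Gs)
```

This particular pairing corresponds to the $(\mathbf{u},\mathbf{u}+\mathbf{v})$ block structure of \eqref{eq:GMtx}; the same check passes for any choice of the message vector $\mathbf{u}$.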
As Fig.~\ref{fig:RPA} shows, in the \textit{projection} step the RPA algorithm generates $n-1$ vectors $\mathbf{L}_{/\mathbb{B}_i}$ with length of $2^{m-1}$ by projecting $\mathbf{L}$ onto $n-1$ one-dimensional subspaces of the vector space $\mathbb{E}$. Let $\mathbb{B}_i=[0,i]$ be the $i$-th one-dimensional subspace of $\mathbb{E}$. There are $2^{m-1}$ different cosets $T$ for $\mathbb{B}_i$. Therefore, the quotient space $\mathbb{E}/ \mathbb{B}_i$ is built based on~\eqref{eq:t}. There are $n-1$ one-dimensional subspaces of the binary vector space $\mathbb{E}$ that provide $n-1$ projections of the channel output vector $\mathbf{L}$, given their corresponding quotient spaces, such that: \begin{equation} \mathbf{L}_{/ \mathbb{B}_i}=\operatorname{Proj}(\mathbf{L}, \mathbb{B}_i):=\left({L}_{/ \mathbb{B}_i}(T), T \in \mathbb{E} / \mathbb{B}_i\right), \end{equation} where \begin{equation} {L}_{/ \mathbb{B}_i}(T)=2\tanh^{-1}\left(\prod_{z\in T} \tanh\left(\frac{L(z)}{2}\right) \right). \end{equation} In the \textit{recursive decoding} step, each projected vector $\mathbf{L_{/\mathbb{B}_i}}$ is recursively decoded by RPA for $\text{RM}(m{-}1,r{-}1)$ until first-order RM codes are reached. At this point, the RPA algorithm uses an efficient decoder based on the fast Hadamard transform (FHT) to decode the first-order RM codes~\cite{Betextquotesingleery1986}. In the final \textit{aggregation} step, a per-coordinate average LLR is calculated from all the decoded codewords obtained from the previous step as follows: \begin{equation}\label{eq:agg} {\hat{L}}(z)=\frac{1}{n-1}\sum_{i=1}^{n-1}\left(1-2\hat{c}_{/ \mathbb{B}_i}(z+\mathbb{B}_i)\right)L(z+z_i), \end{equation} where $\hat{c}_{/ \mathbb{B}_i}$ is the decoded binary codeword from recursive decoding, and $\hat{L}(z)$ is the LLR value at the $z^{\text{th}}$ coordinate of the estimated vector $\hat{\mathbf{c}}$ for the transmitted codeword $\mathbf{c}$. 
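In code, the soft projection is simply the tanh ("boxplus") combination of the two LLRs in each coset. A minimal sketch (illustrative; the function name and the toy input are ours, not the paper's):

```python
import numpy as np

def project_llr(L, zi):
    """Project LLRs onto the cosets of the 1-D subspace {0, zi} with the tanh rule."""
    reps = [z for z in range(len(L)) if z < (z ^ zi)]  # one representative per coset
    return np.array([2 * np.arctanh(np.tanh(L[z] / 2) * np.tanh(L[z ^ zi] / 2))
                     for z in reps])

L = np.array([2.0, -1.0, 3.0, 4.0])  # toy channel LLRs for n = 4
Lp = project_llr(L, zi=1)            # cosets {0, 1} and {2, 3}
assert Lp.shape == (2,)
# a combined LLR is never more reliable than the weaker of its two inputs
assert all(abs(Lp[t]) <= min(abs(L[z]), abs(L[z ^ 1])) for t, z in enumerate([0, 2]))
```

The final assertion reflects a general property of the tanh rule that drives the complexity of recursive decoding: every projection weakens the reliabilities, which is why many projections must be aggregated.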
The RPA decoding algorithm repeats the above steps up to $N_{\max}$ times or until the output converges to the input as: \begin{equation}\label{eq:earlystop} |\hat{L}(z)-{L}(z)|<\theta|L(z)|,\forall z\in\mathbb{E}, \end{equation} where $\theta$ is a small constant called the exiting threshold. The total number of FHT-based first-order decodings (FODs), which we use as a proxy for the computational complexity,\footnote{We ignore the complexity of the projection and aggregation steps for simplicity because the overall complexity of RPA decoding is dominated by the FOD operations.} is: \begin{align}\label{eq:rpa_comp} \lambda_{\text{RPA}}=N_{\max}^{r-1}\prod_{i=1}^{r-1}\left(2^{m-i+1}-1\right), \end{align} as mentioned in \cite{JiaJie2021}. \subsection{Existing complexity reduction methods} \label{sec:reduced_comp} \subsubsection{Sparse RPA decoder} The sparse RPA (SRPA) decoder~\cite{Fathollahi2021} reduces the computational complexity of the RPA decoder by introducing a projection pruning factor $0<q < 1$. Specifically, it keeps $q\times (n-1)$ randomly chosen projections instead of all $n-1$ projections at each recursive level. The numerical results of this decoder show that removing half of the projections for decoding an $\text{RM}(m,2)$ code does not degrade the error-correcting performance significantly. Moreover, for more aggressive pruning, \mbox{k-SRPA} was introduced to improve the decoding performance. It runs $k$ SRPA decoders generating $k$ estimated codewords for a single received vector and then uses a CRC check to choose the best candidate. It utilizes Reed's decoder for $k>2$ to guarantee that the output is always a valid codeword. Numerical results reported in \cite{Fathollahi2021} showed that the computational complexity can be reduced by between $50\%$ and $79\%$ compared to RPA. However, having multiple decoders and a Reed decoder for the final decision introduces overhead in terms of hardware implementation. 
In addition, the added CRC bits compromise the effective rate of the RM codes and the randomization of projections introduces additional hardware implementation overhead. \subsubsection{Reduced complexity RPA decoder} The work of \cite{JiaJie2021} proposed scheduled RPA, denoted by RPA\textsubscript{SCH}, which reduces the number of projections in successive iterations. RPA\textsubscript{SCH} introduced a decaying parameter $d>1$ defining the number of projections at each iteration $j$ ($j\in [1,N_{\max}]$) as $\lceil \frac{n-1}{d^{j-1}}\rceil$. It keeps all projections for the first iteration since most of the errors get corrected in the first iteration~\cite{HashemipourNazari2021}. After that, it reduces the number of projections by a factor of $d$ at each iteration level. This scheduling technique lowers the computational complexity compared to the baseline RPA by up to $67\%$. However, there is still a lot of room to further reduce the number of projections, especially for RM codes with $r>2$, as we will show in the sequel. \section{Multi-factor pruned RPA algorithm } \label{sec:delta_RPA} Both SRPA and RPA\textsubscript{SCH} significantly reduce the computational complexity as measured by the number of FODs, especially for second-order RM codes. However, the number of FODs is still high for $\text{RM}(m,r>2)$ codes as both methods prune all the recursive levels uniformly. Moreover, RPA\textsubscript{SCH} does not apply the decaying factor to the recursion levels within the iteration layers. In other words, the decoder called recursively by RPA\textsubscript{SCH} for decoding $\text{RM}(m,r>2)$ codes starts with all projections even after the first iteration, resulting in a very large number of FODs for the recursion levels called in each iteration. In this section, we first describe a general multi-factor projection pruning function and we explain the motivation behind the choice of each factor. 
We then use this function to introduce a multi-factor pruned RPA (MFP-RPA) algorithm where the pruning factor is a function of both the iteration and the recursion level, resulting in significantly more aggressive pruning than existing methods while maintaining the error-correcting performance of the original RPA algorithm. \subsection{Projection pruning function} We define the following pruning function that uses three user-defined factors $\gamma$, $\delta_{\text{itr}}$, and $\delta_{\text{rec}}$, where $0< \gamma, \delta_{\text{itr}}, \delta_{\text{rec}}\leq 1$, and has the iteration number $j$ and the recursion level $l$ as its inputs: \begin{equation}\label{eq:delta_func} \Delta(j,l,\gamma,\delta_{\text{itr}},\delta_{\text{rec}})= \gamma \times (\delta_{\text{itr}})^{j-1}\times(\delta_{\text{rec}})^{l-2}. \end{equation} The output of the pruning function $\Delta(j,l,\gamma,\delta_{\text{itr}},\delta_{\text{rec}})$ is a pruning factor that dictates the fraction of projections that are kept at iteration $j$ for recursion level $l$ compared to RPA decoding. These parameters are set off-line. In the following, we explain the motivation behind each pruning factor $\gamma$, $\delta_{\text{itr}}$, and $\delta_{\text{rec}}$. \input{psudo_delta} \subsubsection{Starting pruning factor $\gamma$} As mentioned in Section~\ref{sec:RPA}, the RPA decoder starts with $n-1$ projections for decoding an $\text{RM}(m,r)$ code. As \cite{Fathollahi2021} stated, this number of projections is typically redundant and results in a very large computational complexity. As a result, inspired by \cite{Fathollahi2021}, we define the starting pruning factor $\gamma$, which reduces the number of projections even for the first iteration, in contrast to \cite{JiaJie2021}. 
Moreover, contrary to the approach taken by \cite{Fathollahi2021}, we allow $\gamma$ to change at each level of recursion as a function of the other pruning factors, as shown in line \ref{line:gamma} of Algorithm~\ref{alg:delta}. \subsubsection{Iteration pruning factor $\delta_{\text{itr}}$} Similar to \cite{JiaJie2021}, from our numerical simulation results of RPA, we also concluded that the first iteration is the most crucial one and that the importance of further iterations quickly decreases. Therefore, the pruning factor decreases exponentially with the iteration count through $\delta_{\text{itr}}$, as shown in~\eqref{eq:delta_func}. \subsubsection{Recursion pruning factor $\delta_{\text{rec}}$} We designed an experiment to explore the impact of each recursive layer on the error-correcting performance of the pruned version of the RPA decoder for various $\text{RM}(m,3)$ codes. We defined two parameters $P_1$ and $P_2$ representing the number of projections at levels $r=3$ and $r=2$ for the $\text{RM}(m,3)$ codes, respectively, as shown in Fig.~\ref{fig:p1p2_bd}. The product $P_1\times P_2$ determines the number of required FODs for the generated first-order RM codes. In this experiment, we set $N_{\max}=1$ to remove the effect of iterations on the performance. As shown in Fig.~\ref{fig:p1p2_fer}, with the same overall number of FODs, aggressive pruning at level $r=2$ leads to worse error-correcting performance than aggressive pruning at level $r=3$. As a result, we can conclude that the effect of each recursion level on the overall error-correcting performance of the pruned version of the RPA decoder is clearly not the same. Motivated by this observation, the recursion pruning factor $\delta_{\text{rec}}$ affects the recursion levels such that the highest number of projections is kept at level $r=2$, meaning that the pruning becomes more aggressive for higher recursion levels, as~\eqref{eq:delta_func} shows. 
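For concreteness, the pruning function of \eqref{eq:delta_func} and the per-iteration projection counts it induces can be sketched in a few lines (illustrative Python of ours; the factor values are the ones used later in Section~\ref{sec:results} for the $\text{RM}(7,2)$ code):

```python
import math

def pruning_factor(j, l, gamma, d_itr, d_rec):
    """Fraction of projections kept at iteration j and recursion level l."""
    return gamma * d_itr ** (j - 1) * d_rec ** (l - 2)

# RM(7,2): a single recursion level l = 2 with n - 1 = 127 projections in total
kept = [math.ceil(pruning_factor(j, 2, 2 / 3, 1 / 4, 1 / 2) * 127) for j in (1, 2, 3)]
assert kept == [85, 22, 6]  # pruning gets more aggressive with every iteration
```

With all three factors set to one, every projection is kept and plain RPA is recovered.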
\begin{figure}[!t] \centering \includegraphics[width=0.27\textwidth]{mf-pruned}% \caption{Pruned version of the RPA decoder for $\text{RM}(m,3)$ codes with $P_1$ and $P_2$ projections at the $r=3$ and $r=2$ recursion levels and $N_{\max}=1$.} \label{fig:p1p2_bd} \vspace*{-1em} \end{figure} \begin{figure}[!t] \centering \scalebox{0.7}{\input{figs/pdf_gen/p1p2.tex}}% \caption{Frame error rate (FER) of different $r=3$ RM codes under the pruned RPA decoder depicted in Fig.~\ref{fig:p1p2_bd}.} \label{fig:p1p2_fer} \vspace*{-1em} \end{figure} \subsection{Multi-factor pruned RPA algorithm} Algorithm \ref{alg:delta} describes the overall proposed multi-factor pruned RPA (MFP-RPA) decoding method for $\text{RM}(m,r)$ codes. Overall, it follows the logic of the baseline RPA decoding algorithm but, as noted in line \ref{line:np}, it selects $np$ projections distributed uniformly among all $n-1$ projections, where $np \ll n-1$. Moreover, SRPA can be obtained using \mbox{MFP-RPA} by setting the tuple $(\gamma,\delta_{\text{itr}},\delta_{\text{rec}})$ to $(q,1,1)$. Similarly, RPA\textsubscript{SCH} can be obtained using \mbox{MFP-RPA} by setting the tuple $(\gamma,\delta_{\text{itr}},\delta_{\text{rec}})$ to $(1,\frac{1}{d},1)$ for all recursion levels. Finally, the baseline RPA algorithm can also be obtained from \mbox{MFP-RPA} with $(\gamma,\delta_{\text{itr}},\delta_{\text{rec}}) = (1,1,1)$. \subsection{Complexity of \mbox{MFP-RPA}} The number of FODs for \mbox{MFP-RPA} decoding of the $\text{RM}(m,r)$ code can be calculated as follows: \begin{equation}\label{eq:num_FOD} \lambda_{\text{{MFP-RPA}}} = \sum_{j=1}^{N_{\max}}\prod_{l=2}^{r} \left\lceil\Delta(j,l,\gamma,\delta_{\text{itr}},\delta_{\text{rec}}) \times \left(\frac{n}{2^{r-l}}-1\right) \right\rceil. \end{equation} Compared to \eqref{eq:rpa_comp}, a good selection of $\gamma$, $\delta_{\text{itr}}$, and $\delta_{\text{rec}}$ can result in a significant reduction of computational complexity. 
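Such complexity counts are easy to tabulate numerically. The sketch below (illustrative code of ours; it assumes the per-level full projection count $n/2^{r-l}-1$, with the baseline product taken over the factors $2^{m-i+1}-1$, i.e., $(n-1)(n/2-1)\cdots$, which matches the totals reported later in Table~\ref{table:fht}) covers plain RPA and, for a second-order code where the count reduces to a single sum, \mbox{MFP-RPA}:

```python
import math

def fods_rpa(m, r, n_max):
    """Worst-case FOD count of plain RPA: N_max^(r-1) * (n-1)(n/2-1)...(n/2^(r-2)-1)."""
    total = n_max ** (r - 1)
    for i in range(1, r):
        total *= 2 ** (m - i + 1) - 1
    return total

def fods_mfp_order2(m, n_max, gamma, d_itr):
    """FOD count of multi-factor pruned RPA for a second-order code RM(m, 2)."""
    n = 2 ** m
    return sum(math.ceil(gamma * d_itr ** (j - 1) * (n - 1)) for j in range(1, n_max + 1))

assert fods_rpa(7, 2, 3) == 381                    # RM(7,2) baseline RPA
assert fods_rpa(8, 3, 3) == 291465                 # RM(8,3) baseline RPA
assert fods_mfp_order2(7, 3, 2 / 3, 1 / 4) == 113  # MFP-RPA on RM(7,2)
```

For higher-order codes the inner recursions contribute their own pruned iterations, so the count is evaluated over the full double loop of \eqref{eq:num_FOD} rather than a single sum.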
This complexity reduction can lead to lower hardware resource requirements, lower latency, or both, depending on the exact hardware architecture that is used (e.g., fully parallel, sequential, or semi-parallel, respectively). \section{Simulation Results} \label{sec:results} \begin{figure}[t!] \centering \input{multi_FER.tex} \caption{FER comparison between the RPA, $2$-SRPA, RPA\textsubscript{SCH}, and \mbox{MFP-RPA} algorithms. The results for $2$-SRPA and RPA\textsubscript{SCH} are taken from \cite{Fathollahi2021} and \cite{JiaJie2021}, respectively.} \label{fig:FER} \end{figure} We simulate the performance of the proposed \mbox{MFP-RPA} algorithm on the $\text{RM}(7,2)$ and $\text{RM}(8,3)$ codes over the additive white Gaussian noise (AWGN) channel. Fig.~\ref{fig:FER} shows the simulation results for the proposed \mbox{MFP-RPA} compared to RPA~\cite{Ye2020}, SRPA~\cite{Fathollahi2021}, and RPA\textsubscript{SCH}~\cite{JiaJie2021}. From our simulations with $N_{\max}=\lceil\frac{m}{2}\rceil$, we observed that the last iteration does not impact the error-correcting performance. Therefore, we set the maximum number of iterations to $N_{\max}=3$ for RPA and \mbox{MFP-RPA}. Moreover, we set the tuple $(\gamma,\delta_{\text{itr}},\delta_{\text{rec}})$ of user-defined pruning factors of \eqref{eq:delta_func} to $\left(\frac{2}{3},\frac{1}{4},\frac{1}{2}\right)$ for the $\text{RM}(7,2)$ code and $\left(\frac{3}{4},\frac{1}{3},\frac{3}{4}\right)$ for the $\text{RM}(8,3)$ code. We note that we selected the pruning factors heuristically so that the performance loss compared to the non-pruned decoder is negligible, and we currently do not have a systematic way of choosing these factors. Furthermore, the code parameters, as well as the required reliability, latency, and available hardware resources in the case of a hardware implementation, should generally be taken into account when selecting the pruning factors. 
We observe that the proposed \mbox{MFP-RPA} algorithm has effectively identical error-correcting performance to the RPA algorithm and its previously proposed pruning methods SRPA and RPA\textsubscript{SCH}. However, as shown in Table~\ref{table:fht}, it reduces the number of FODs significantly. Specifically, Table~\ref{table:fht} illustrates the number of FODs required for the baseline RPA, RPA\textsubscript{SCH}, $2$-SRPA, and our proposed \mbox{MFP-RPA}. We note that the numbers in Table~\ref{table:fht} do not use early stopping controlled by $\theta$ in~\eqref{eq:earlystop}, and we use the worst-case complexity numbers for SRPA and RPA\textsubscript{SCH}, as a hardware implementation generally has to account for the worst case. However, early stopping can still be used in conjunction with \mbox{MFP-RPA} if desired to, e.g., reduce the energy consumption. \begin{table}[t!] \caption{Comparison of the number of FODs required for RPA~\cite{Ye2020}, RPA\textsubscript{SCH}~\cite{JiaJie2021}, 2-SRPA~\cite{Fathollahi2021}, and our proposed \mbox{MFP-RPA}.} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{lccccc} \hline \multirow{3}{*}{\textbf{}} & \multirow{3}{*}{RPA} & \multirow{3}{*}{RPA\textsubscript{SCH}} & \multirow{3}{*}{$2$-SRPA} & \multicolumn{2}{c}{\mbox{MFP-RPA}$(\gamma,\delta_{\text{itr}},\delta_{\text{rec}})$} \\ & & & & \multicolumn{1}{c}{$(2/3,1/4,1/2)$} & \multicolumn{1}{c}{$(3/4,1/3,3/4)$} \\ \hline $\text{RM}(7,2)$ & $381$ & $221$ & $96$ & $113$ & - \\ \hline $\text{RM}(8,3)$ & $291465$ & $98385$ & $36433$ & - & $22544$ \\ \hline \end{tabular} } \label{table:fht} \end{table} We observe that, for the $\text{RM}(7,2)$ code, our proposed MFP-RPA reduces the number of FODs by $70$\% and $49$\% compared to the RPA and RPA\textsubscript{SCH} algorithms. Compared to 2-SRPA, MFP-RPA requires $18$\% more FODs, but it also has better error-correcting performance. 
For the $\text{RM}(8,3)$ code, MFP-RPA reduces the number of FODs by $92$\%, $77$\%, and $38$\% compared to RPA, RPA\textsubscript{SCH}, and 2-SRPA, respectively. We proposed the update rule \eqref{eq:delta_func} based on the intuitions explained in Section~\ref{sec:delta_RPA}. However, other update rules are possible and they may lead to further complexity reduction. This is an interesting open problem. \section{Conclusion} \label{sec:conclusion} In this work, we proposed a multi-factor pruning method for RPA decoding of RM codes that prunes projections as a function of both the iteration number and the recursion level. Our results show that significantly more aggressive projection pruning is possible compared to existing methods without degrading the error-correcting performance. Specifically, our proposed multi-factor pruning method leads to up to $92\%$ and $77\%$ lower computational complexity compared to the baseline RPA decoding algorithm and previously proposed complexity reduction techniques, respectively. \balance \bibliographystyle{ieeetr}
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0048.json.gz" }
\section{Introduction} Since the influential work of Maxey and Riley \cite{maxey1983equation} in deriving the equations of motion of an inertial spherical particle immersed in viscous flow, there have been a multitude of studies exploring the collective behavior of suspensions of particles. In particular, the remarkable phenomenon of preferential concentration of inertial particles in turbulence has attracted the attention of many authors. This phenomenon, sometimes referred to as the ``centrifuge effect'', is also attributed to Maxey \cite{maxey_1987}, who showed that particles disperse in regions where the fluid velocity strain rate is low compared to the vorticity. The theoretical mechanisms for particle clustering have since been further explored by means of Lyapunov exponent analysis \cite{bec2003fractal}, caustics \cite{wilkinson2005caustics} and perturbative methods, to name a few. Sophisticated numerical simulations \cite{squires1991preferential} have also advanced and verified our understanding of this phenomenon for a variety of flows and extended such observations to non-spherical particles \cite{mortensen2008dynamics}. As the need for large-scale simulations increases, the demand for cost-effective numerical methods is growing. However, despite the fact that numerical simulations are so well documented, there have been few studies that explore the extent to which the numerical methods used in simulations accurately reproduce the geometric properties that explain the preferential clustering of particles. In this paper we discuss some features of the equations of motion that influence the preferential concentration of particles and determine to what extent these features can be replicated by well-designed numerical methods. In doing so, we propose an efficient numerical algorithm that is designed to replicate these features.
The method combines matrix-valued radial basis functions for the divergence-free interpolation of the discrete fluid field with a splitting method that is designed specifically for the equations of motion under study. Interpolation methods are necessary for simulating suspensions of particles as the flow field is usually generated by a direct numerical simulation of the Navier-Stokes equations and is therefore only available at discrete points in space, meaning that it must be approximated at the location of the particle. To achieve this in a simple and efficient manner many authors use a variant of a tri-polynomial interpolant, for example \cite{squires1991preferential,Bernard, Portela, rouson2001preferential, challabotla2015orientation, uijttewaal1996particle, van2000dynamics, PanBanerjee, wang1996large, Deardorff}. Previous studies \cite{yeung_pope_1989, BALACHANDA} have explored the extent to which these interpolation methods accurately reproduce statistical properties of the turbulent flow field. However, all the interpolation methods considered in the aforementioned references are based on polynomials that create an approximation to the fluid velocity field that is not divergence-free. One major consequence is that the hydrodynamic Stokes force that determines the particle path lines is instead calculated from a non-conservative fluid velocity field. These non-conservative interpolation methods are still used in practice today despite the fact that the theoretical mechanisms that explain the preferential concentration are derived with the assumption of incompressibility. Furthermore, it is often argued (e.g., \cite{uijttewaal1996particle}) that interpolation errors are ``averaged out'' and it is concluded that one can achieve statistically similar results using a fast low-order interpolation method. This claim is supported by the fact that linear interpolation produces similar statistics to simulations using cubic interpolation \cite{yeung_pope_1989}.
However, neither linear nor cubic interpolation preserves the divergence-free condition of the fluid field, and it is therefore not truly understood whether or not errors in the divergence of the fluid field are averaged out in the same way that standard truncation errors are. The implications of these divergence errors have not been studied in detail; however, there is numerical evidence suggesting that breaking this condition can lead to erroneous clustering in PDF methods, first presented in \cite{meyer2004conservative} and also in \cite{gobert2006lagrangian}. Divergence-free interpolation has been used to good effect in particle-laden flow simulations \cite{Esmaily-Moghadam2016analysis,gobert2010numerical} as well as in other particle simulation problems, such as in geodynamic modelling \cite{wang2015advantages} and magnetospheric physics \cite{mackay2006divergence}, for example. One of the goals of this study is to explain, from a numerical analysis point of view, the consequences of breaking the divergence-free condition in the flow field. We also show the benefit of using divergence-free interpolation in simulations of suspensions of inertial particles, as well as how these errors affect the numerical time integration. In addition, we study some numerical integration methods and how their errors affect the preferential concentration of particles. Two popular classes of methods are explicit Runge-Kutta \cite{Bernard,rouson2001preferential} and Adams-Bashforth methods \cite{Portela, challabotla2015orientation, van2000dynamics, wang1996large, PanBanerjee}. As the accuracy of the time stepping algorithm is limited by the interpolation error, we consider only explicit order one and two methods. Such methods often do a reasonably good job at integrating the ODEs under study as they are efficient and easy to implement.
However, the exact solution to the ODEs that govern the dynamics of particles with Stokes drag possesses a number of physical features that can be exploited to increase the accuracy of the time stepping methods without increasing their order or cost. Such features include constant contractivity of phase space volume, the centrifuge effect, rigid body motion, linear dissipation and, in some cases, perturbative forces. These features can be exploited by a carefully designed splitting method. In this work we propose, as an alternative to Runge-Kutta and Adams-Bashforth methods, a splitting method that is especially designed to reduce the error in the centrifuge effect, which, combined with divergence-free interpolation techniques, allows us to obtain a higher level of accuracy in the distribution of particles in viscous flows. \subsection{Main contributions and summary of paper} We now highlight the main contributions and give a brief outline of the paper. We begin by outlining the equations of motion and the centrifuge effect in section \ref{sec:EoM}. In section \ref{sec:integration} we develop and analyze a contractive splitting method whose flow preserves the sum of the Lyapunov spectrum of the exact solution and show that conventional methods cannot do this. The splitting method is then applied to the equations of motion for spherical particles and the so-called ``centrifuge-preserving'' methods are presented, which are constructed to minimize the error of the centrifuge effect. Section \ref{sec:interp} presents the use and implementation of matrix-valued radial basis function interpolation to construct a divergence-free interpolation of the discrete flow field. We show that a vector field approximated by matrix-valued radial basis functions is compatible with the Stokes equations due to the fact that such an approximation is identical to the method of regularized Stokeslets. This results in a more physically realistic approximation to the underlying Navier-Stokes equations.
In section \ref{sec:spherical} we focus our attention on how the physical volume of the particle phase $\Psi$ evolves over a small time $h$. Upon expanding $\Psi$ in $h$ under the exact solution, we recover the centrifuge effect at $O(h^4)$. When expanding $\Psi$ under the numerical solution, we find that errors in the divergence of the fluid velocity field appear at $O(h^2)$, overshadowing the centrifuge effect. However, when a divergence-free interpolation method is used, all the numerical methods under consideration replicate the \textit{qualitative} behavior of the centrifuge effect. That is, physical volumes of particles will contract in regions where the vorticity is lower than the strain rate and vice versa; however, they do so at a slightly erroneous rate. To account for this error, we show that the centrifuge-preserving methods contract physical volume at the same rate as the exact solution to leading order in $h$, hence also preserving the \textit{quantitative} behavior of the centrifuge effect. Section \ref{sec:numerical tests} is dedicated to simulations of particle suspensions evolving in a discrete cellular flow field where we compare the proposed geometric methods against conventional methods. What we observe is that a computationally inexpensive combination of divergence-free interpolation and centrifuge-preserving splitting methods yields far more accurate spatial distributions of particles compared to standard methods of higher cost. We present many examples where our geometric algorithm produced distributions of particles that are more similar to the ``exact'' distribution despite having a higher error per particle than distributions produced by slow conventional methods. The main conclusion here is that preserving the sum of the Lyapunov spectrum, the contractivity of phase space volume, the divergence-free condition and the centrifuge effect in numerical solutions is of great benefit.
Section \ref{sec:conclusions} is dedicated to conclusions. \section{The equations of motion}\label{sec:EoM} The translational dynamics of a small particle immersed in a viscous fluid is governed by the rigid body equations with a Stokes force term \begin{align} \dot{\mathbf{v}} =& \alpha K (\mathbf{u}(\mathbf{x})-\mathbf{v})\label{sphericalv}\\ \dot{\mathbf{x}} =& \mathbf{v}\label{sphericalx} \end{align} where $\mathbf{u}(\mathbf{x})$ is the fluid velocity at the particle's location $\mathbf{x}$, $\mathbf{v}$ is the particle velocity, $K$ is a positive definite resistance tensor and $\alpha=1/St$ is the inverse particle Stokes number, which is a dimensionless measure of particle inertia. Note that, unless otherwise mentioned, we will assume that $\mathbf{u}(\mathbf{x})$ does not explicitly depend on $t$. Doing so improves the readability and presentation of the paper and does not affect the forthcoming results. For spherical particles, $K = I$ is the identity, the rotational variables are constant and the above ODEs uniquely specify the dynamics of each particle. For non-spherical particles, the resistance tensor is $K = Q^TK_bQ$, where $K_b$ is the diagonal positive definite body frame resistance tensor and $Q\in SO(3)$ is a rotation matrix that transforms a vector in the body frame to one in the inertial frame. The angular velocity $\boldsymbol{\omega}$ evolves via \begin{equation}\label{eq:rotation} J\dot{\boldsymbol{\omega}}=J\boldsymbol{\omega}\times\boldsymbol{\omega}-\mathbf{T}, \end{equation} where $J$ is the diagonal body frame moment of inertia tensor and $\mathbf{T}$ is the hydrodynamic torque.
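For concreteness, the translational equations \eqref{sphericalv} and \eqref{sphericalx} for a spherical particle ($K=I$) can be sketched as a single right-hand-side function; the linear shear flow below is an illustrative stand-in, not a field used in the paper:

```python
import numpy as np

def spherical_rhs(y, u, alpha):
    """RHS of dv/dt = alpha*(u(x) - v), dx/dt = v for a spherical particle (K = I).

    y = (v, x) is the 6-dimensional phase-space state, u a callable flow field,
    and alpha = 1/St the inverse Stokes number."""
    v, x = y[:3], y[3:]
    return np.concatenate([alpha * (u(x) - v), v])

# Illustrative steady flow field (an assumption for this sketch): a simple shear.
u = lambda x: np.array([x[1], 0.0, 0.0])
y0 = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 0.0])  # particle at rest at x = (0, 1, 0)
print(spherical_rhs(y0, u, alpha=2.0))  # -> [2. 0. 0. 0. 0. 0.]
```

A particle at rest feels only the drag acceleration $\alpha\,\mathbf{u}(\mathbf{x})$, as the printed state derivative shows.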
The rotation matrix $Q$ is calculated by solving the matrix ODE \begin{equation}\label{eq:Qdot} \dot{Q} = Q\widehat{\boldsymbol{\omega}}, \end{equation} where $\widehat{\cdot}:\mathbb{R}^3 \rightarrow \mathfrak{so}(3)$ is defined by \begin{equation} \left( \begin{array}{c} \omega_1\\ \omega_2\\ \omega_3\\ \end{array} \right) \mapsto \widehat{\boldsymbol{\omega}} = \left( \begin{array}{ccc} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \\ \end{array} \right), \end{equation} such that $\widehat{\boldsymbol{\omega}}\mathbf{v} = \boldsymbol{\omega}\times\mathbf{v}$. The expressions for $K_b$ and $\mathbf{T}$ for spheroidal particles are given in \ref{model}. \subsection{The centrifuge effect} Here we will outline the centrifuge effect of the particle equations of motion, which is one of the mechanisms for particle clustering that is referred to throughout the paper. In \cite{maxey_1987}, Maxey assumes $\alpha\gg 1$ and expands the spherical particle ODEs \eqref{sphericalv} and \eqref{sphericalx} in powers of $\alpha^{-1}$ to derive a first-order ODE expression for $\mathbf{x}$ \begin{equation}\label{xdot} \dot{\mathbf{x}} = \mathbf{u}(\mathbf{x}) - \alpha^{-1}\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) + O(\alpha^{-2}) \end{equation} where we have ignored the effect of gravity. Taking the divergence gives \begin{equation}\label{divV} \nabla\cdot\mathbf{v} = \frac{\partial u_i}{\partial x_i} - \frac{1}{\alpha}\left(\frac{\partial}{\partial t} \frac{\partial u_i}{\partial x_i} + \frac{\partial u_i}{\partial x_j}\frac{\partial u_j}{\partial x_i} + u_i\frac{\partial }{\partial x_i}\frac{\partial u_j}{\partial x_j}\right)+ O(\alpha^{-2}) \end{equation} where there is an implied summation over repeated indices, which is the convention that is assumed throughout the paper.
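The defining property $\widehat{\boldsymbol{\omega}}\mathbf{v} = \boldsymbol{\omega}\times\mathbf{v}$ pins down the hat-map convention; a quick numerical check (a sketch, not part of the solver) confirms it together with skew-symmetry:

```python
import numpy as np

def hat(w):
    """Hat map R^3 -> so(3), chosen so that hat(w) @ v == np.cross(w, v)."""
    return np.array([[0.0,  -w[2],  w[1]],
                     [w[2],  0.0,  -w[0]],
                     [-w[1], w[0],  0.0]])

w, v = np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.5, 2.0])
print(np.allclose(hat(w) @ v, np.cross(w, v)))  # -> True
print(np.allclose(hat(w), -hat(w).T))           # skew-symmetric -> True
```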
Assuming that the fluid field is divergence-free, we arrive at the familiar relationship between the fluid field rate of strain, rate of rotation and the divergence of the particle velocity field \begin{equation}\label{MaxeyCentrifuge} \nabla\cdot\mathbf{v} = - \frac{1}{\alpha}\frac{\partial u_i}{\partial x_j}\frac{\partial u_j}{\partial x_i} = - \frac{1}{\alpha} \left( \|S\|^2_F - \|\Omega\|^2_F\right)+ O(\alpha^{-2}) \end{equation} where the rate of strain and rotation tensors $S$ and $\Omega$ are given by \begin{equation} S_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)\quad\text{and}\quad \Omega_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} - \frac{\partial u_j}{\partial x_i}\right) \end{equation} and $\|\cdot\|_F$ is the Frobenius matrix norm. In other words, the divergence of the particle velocity field $\nabla\cdot\mathbf{v}$ is positive when the vorticity is large compared to the strain rate tensor, meaning that the particulate phase disperses in these regions. Conversely, particles concentrate in regions where the strain rate is large compared to the vorticity. This phenomenon is the ``centrifuge effect'' of the exact solution to \eqref{sphericalv} and \eqref{sphericalx}. Finally, we remark that while the centrifuge effect was derived for spherical particles, one can make similar observations for non-spherical particles. In this scenario, the resistance tensor can be decomposed into a spherical part and a non-spherical part, e.g., for a spheroidal particle with rotational symmetry (see \ref{model}) we can write $K_b = a\,I + b\,\mathbf{e}_z\mathbf{e}_z^T$, where $\mathbf{e}_z = (0,0,1)^T$ and $b\rightarrow a$ in the spherical limit. In other words, the centrifuge effect still plays a central role in the preferential clustering of non-spherical particles in addition to the non-spherical effects due to the $b\,\mathbf{e}_z\mathbf{e}_z^T$ term in the resistance tensor.
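The sign structure of \eqref{MaxeyCentrifuge} can be illustrated numerically on a simple 2D cellular flow (an illustrative choice for this sketch, not the flow used in the paper's experiments): the leading-order particle-field divergence is negative at strain-dominated points and positive at vorticity-dominated points.

```python
import numpy as np

alpha = 10.0  # inverse Stokes number (illustrative value)

def grad_u(x, y):
    """Velocity gradient of the divergence-free cellular flow
    u = (sin x cos y, -cos x sin y)."""
    return np.array([[np.cos(x) * np.cos(y), -np.sin(x) * np.sin(y)],
                     [np.sin(x) * np.sin(y), -np.cos(x) * np.cos(y)]])

def div_v(x, y):
    """Leading-order divergence of the particle velocity field, eq. (MaxeyCentrifuge):
    -(1/alpha) * (||S||_F^2 - ||Omega||_F^2)."""
    G = grad_u(x, y)
    S, Omega = 0.5 * (G + G.T), 0.5 * (G - G.T)
    return -(np.sum(S**2) - np.sum(Omega**2)) / alpha

print(div_v(0.0, 0.0))            # strain-dominated point: negative (particles cluster)
print(div_v(np.pi/2, np.pi/2))    # vorticity-dominated point: positive (particles disperse)
```

At the origin the flow is purely straining ($\Omega=0$), giving $\nabla\cdot\mathbf{v}<0$, while at the cell center the flow is purely rotational ($S\approx 0$), giving $\nabla\cdot\mathbf{v}>0$, in agreement with the centrifuge effect.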
\section{Numerical integration of dissipative vector fields}\label{sec:integration} The dynamics of small inertial particles (both spherical and non-spherical) can be modeled as the flow of a vector field with linear dissipation. Such vector fields arise due to the fact that for low Reynolds number flow the drag forces are linear in the slip velocity, for example the Stokes drag force for small ellipsoids, spheres or rigid slender particles \cite{andersson2020integral}. We begin this section with a discussion of such linearly dissipative vector fields and their contractive properties of phase space volume. We then discuss the application of some conventional explicit methods for integrating such ODEs. In particular, we show that conventional methods cannot preserve the contractivity of phase space volume. A splitting scheme is then shown to preserve the exact contractivity of phase space volume. The section concludes with the application of the splitting scheme to spherical particles. \subsection{Linearly dissipative vector fields and contractivity of phase space volume} A linearly dissipative vector field in $n$ dimensions is given in general by \begin{equation}\label{dissipativeODE} \dot{\mathbf{y}} = \mathbf{f}(\mathbf{y})-A\mathbf{y} \end{equation} where $A$ is a positive definite matrix and $\mathbf{f}(\mathbf{y})$ is volume preserving, that is, it satisfies $\nabla\cdot\mathbf{f}(\mathbf{y}) = 0$ (e.g., any Hamiltonian vector field). Note that the ODEs of both non-spherical and spherical particles can be cast in this form, where $\mathbf{f}(\mathbf{y})$ represents the free rigid-body vector field plus the conservative part of the Stokes force and $-A\mathbf{y}$ represents the dissipative part of the Stokes force. The quantitative behavior of particle clustering can be explained in part by analyzing the Lyapunov exponents $\lambda_i$ of the ODE, see for example \cite{bec2003fractal}. 
It is therefore desirable that the numerical solution of the ODE reproduces similar Lyapunov exponent characteristics. While there do not currently exist numerical methods that preserve individual Lyapunov exponents, we can, however, construct a numerical method that preserves the sum of the Lyapunov spectrum $\sum_{i=1}^{n}\lambda_i$. From a backward error analysis point of view, a numerical method that preserves the sum of the Lyapunov spectrum is one that is the exact solution to an ODE with the same Lyapunov spectrum sum as the ODE being solved. For the equations of motion for spherical particles (equations \eqref{sphericalv} and \eqref{sphericalx}), the sum of the first three Lyapunov exponents characterizes the divergence of the velocity field and the sum of the spatial Lyapunov exponents characterizes the rate at which particle clouds contract or expand \cite{Esmaily-Moghadam2016analysis}. Generally speaking, the sum of the Lyapunov spectrum describes the rate at which phase space volume exponentially contracts or expands \cite{hoover1988negative}. That is, by letting $\mathbf{y}(t)$ denote the exact solution of \eqref{dissipativeODE} with initial conditions $\mathbf{y}(0) = \mathbf{y}_0$ and \begin{equation}\label{Y1} Y = \det\left( \frac{\partial \mathbf{y}(t)}{\partial \mathbf{y}_0}\right) \end{equation} the $n$ dimensional phase space volume, then \begin{equation}\label{lyapsum} Y = \prod_{i=1}^{n} e^{t\lambda_i}. \end{equation} It is also known that linearly dissipative systems contract phase space volume at a constant rate, as we will now show. Taking the Jacobian of $\mathbf{y}(t)$ with respect to $\mathbf{y}_0$ gives \begin{equation} \frac{\mathrm{d}}{\mathrm{d} t}\frac{\partial \mathbf{y}(t)}{\partial \mathbf{y}_0} = (\mathbf{f}'-A)\frac{\partial \mathbf{y}(t)}{\partial \mathbf{y}_0}.
\end{equation} We now recall Jacobi's formula, which expresses the derivative of the determinant of a square matrix $M(t)$ as follows \begin{equation}\label{JacobiFormula} \frac{\mathrm{d}}{\mathrm{d} t}\det(M(t)) = \det(M(t))\, \mathrm{Tr}\left(M(t)^{-1} \frac{\mathrm{d}}{\mathrm{d} t}M(t)\right). \end{equation} Differentiating \eqref{Y1} with respect to time and applying \eqref{JacobiFormula} gives \begin{equation}\label{detAODE} \frac{\mathrm{d}}{\mathrm{d} t}Y = -Y\, \mathrm{Tr}\left(A\right), \end{equation} as $\mathbf{f}'$ has zero trace. This is solved by \begin{equation}\label{detA} Y=e^{-t\,\mathrm{Tr}(A)}. \end{equation} By equating this with \eqref{lyapsum} we obtain the relation \begin{equation}\label{L-P} \sum_{i=1}^{n}\lambda_i = - \mathrm{Tr}(A). \end{equation} As the trace of $A$ is by definition positive, the phase space volume $Y$ is strictly monotonically contracting in time. Equation \eqref{L-P} implies that a numerical integration method that preserves phase space volume $Y$ also preserves the sum of the Lyapunov exponents of the underlying ODE. It therefore stands to reason that a numerical flow of \eqref{dissipativeODE} that preserves the contractivity of phase space volume will better reproduce the clustering properties of the exact solution than one that does not. The rest of this section is dedicated to analyzing to what extent some common numerical integration methods for particle dynamics can preserve this constant contractivity of phase space volume. \subsection{Preservation of the contractivity of phase space volume by numerical methods} Denote by $\Phi_h$ a numerical method for solving \eqref{dissipativeODE} such that ${\Phi_h(\mathbf{y}_0)\approx \mathbf{y}(h)}$ for time step $h\ll1$. For $\Phi_h$ to be called \textit{contractivity preserving} when applied to \eqref{dissipativeODE}, we require that $\det\left(\frac{\partial\Phi_h(y)}{\partial y}\right) = e^{-h\,\mathrm{Tr}(A)}$ \cite{grimm2008geometric}.
It is known that no standard method (e.g., one with a B-series \cite{GNI}) can preserve phase space volume for divergence-free vector fields \cite{Iserles2007}. The same is also expected when it comes to preserving contractivity of phase space volume for dissipative vector fields \cite{mclachlan2000numerical}. Instead, a weaker requirement is that they contract phase space volume when the exact solution does so, that is, $\det\left(\frac{\partial\Phi_h(y)}{\partial y}\right) < 1$ when applied to \eqref{dissipativeODE}. A numerical method that possesses this property is called \textit{contractive}. We now consider some popular explicit numerical methods for particle dynamics used in the literature, namely, explicit Runge-Kutta methods, Adams-Bashforth methods and a splitting scheme. \subsubsection{Runge-Kutta methods and phase volume contractivity} Take an order-$p$ Runge-Kutta method $\Phi^{[RK]}_h(\mathbf{y}_0)$ with stability function $R(z)$ applied to an ODE of the form \eqref{dissipativeODE} with linear $\mathbf{f}(\mathbf{y})$, say \begin{equation}\label{lindiss} \dot{\mathbf{y}} = B\mathbf{y} - A\mathbf{y}, \end{equation} where $B$ is a square and traceless matrix. Then the numerical solution by the Runge-Kutta method is given by \begin{equation} \Phi^{[RK]}_h(\mathbf{y}_0) = R(h(B-A))\mathbf{y}_0. \end{equation} Recall that $R(z)$ is an order-$p$ Pad{\'e} approximation to the exponential function. We therefore have \begin{equation}\label{rkexp} \Phi^{[RK]}_h(\mathbf{y}_0) = \exp\left(h(B-A)\right)\mathbf{y}_0 + O(h^{p+1}) \end{equation} which means that over one time-step, the phase space volume contracts via \begin{equation}\label{RKphasevol} \det\left(\frac{\partial \Phi^{[RK]}_h(\mathbf{y}_0)}{\partial \mathbf{y}_0}\right) = e^{-h\mathrm{Tr}(A)} + O(h^{p+1}). \end{equation} That is, for such a linear system, a Runge-Kutta method will only preserve the phase volume contractivity up to the order of the method.
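This order-of-the-method behavior is easy to observe numerically for a linear system of the form \eqref{lindiss}. The sketch below (with an arbitrary traceless $B$ and positive-definite $A$, chosen for illustration) uses the forward Euler method, $p=1$, for which the defect in the Jacobian determinant shrinks like $h^{2}$:

```python
import numpy as np

# Linear dissipative test system: traceless B, positive-definite diagonal A.
B = np.array([[0.0, 1.0], [-1.0, 0.0]])  # rotation generator, Tr(B) = 0
A = np.array([[1.0, 0.0], [0.0, 2.0]])   # Tr(A) = 3

errs = []
for h in (0.1, 0.05, 0.025):
    jac = np.eye(2) + h * (B - A)        # forward Euler map: Phi_h(y0) = (I + h(B-A)) y0
    errs.append(abs(np.linalg.det(jac) - np.exp(-h * np.trace(A))))
print(errs)  # halving h cuts the defect by roughly 4, i.e. O(h^{p+1}) with p = 1
```

The determinant of the forward Euler map agrees with the exact contractivity $e^{-h\,\mathrm{Tr}(A)}$ only up to $O(h^{2})$, in line with \eqref{RKphasevol}.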
So for a non-linear dissipative ODE of the form \eqref{dissipativeODE}, one can hardly expect a Runge-Kutta method to preserve phase space volume exactly. In fact, due to this error, explicit Runge-Kutta methods usually have a time-step restriction on $h$ to even be contractive at all \cite{grimm2008geometric}. This is illustrated by the following examples of some low order explicit Runge-Kutta methods applied to equations \eqref{sphericalv} and \eqref{sphericalx}. \begin{example}\label{FEcontraction} We apply the forward Euler method $\Phi^{[F\!E]}_h$ to the ODEs \eqref{sphericalv} and \eqref{sphericalx}. Note that these ODEs are non-linear due to $\mathbf{u}(\mathbf{x})$. Setting $\mathbf{y}:=(\mathbf{v},\mathbf{x})$, we have for the contractivity of phase space volume under the forward Euler method \begin{equation}\label{jdetFE} \det\left(\frac{\partial\Phi^{[F\!E]}_h(\mathbf{y}_0)}{\partial \mathbf{y}_0}\right) = 1-3\alpha h + \alpha\,h^2\,\left(3\alpha-\frac{\partial u_i}{\partial x_i}\right) + O(h^3), \end{equation} which is an order one approximation to the exact contractivity \begin{equation}\label{exactContractivity} \det\left(\frac{\partial \mathbf{y}(h)}{\partial \mathbf{y}_0}\right) = e^{-3\alpha h}. \end{equation} The forward Euler method must therefore satisfy the following time-step restriction for it to be contractive \begin{equation} h\lessapprox\frac{3}{3\alpha-\frac{\partial u_i}{\partial x_i}}. \end{equation} Violating this restriction means that the forward Euler method will expand phase space volume despite the ODE dictating that it is always contracting. Furthermore, it can be seen that large values of $|\frac{\partial u_i}{\partial x_i}|$ will place further restrictions on the size of $h$.
\end{example} \begin{example} Consider the following second order explicit Runge-Kutta method \begin{equation}\label{RK2} \Phi^{[RK]}_h(\mathbf{y}_0) = \mathbf{y}_0 + h\bigl( (1-\tfrac1{2\theta}) \mathbf{f}(\mathbf{y}_0) + \tfrac1{2\theta} \mathbf{f}(\mathbf{y}_0 + \theta h \mathbf{f}(\mathbf{y}_0))\bigr), \end{equation} where $\theta=\frac{1}{2},\frac{2}{3}$ and $1$ correspond to the explicit midpoint method, Ralston's method and Heun's method, respectively. We apply this to the ODEs \eqref{sphericalv} and \eqref{sphericalx}. Setting $\mathbf{y}:=(\mathbf{v},\mathbf{x})$, we have for the contractivity of phase space volume under $\Phi^{[RK]}_h$ \begin{align}\label{jdetRK} \det\left(\frac{\partial\Phi^{[{RK}]}_h(\mathbf{y}_0)}{\partial \mathbf{y}_0}\right) &= 1\,-\,3\alpha h\,+\, 9\alpha^2\frac{h^2}{2!}\\ &\quad \,+\,\left( 3\alpha(\theta - 1)\frac{\partial^2 u_i}{\partial x_i\partial x_j}v_j\,+\,3\alpha^2\frac{\partial u_i}{\partial x_i}\,-\,24\alpha^3\right)\frac{h^3}{3!}\,+\,O(h^4) \end{align} which is an order two approximation to the exact contractivity \eqref{exactContractivity}, which is expected for an order two method. The time-step $h$ must be chosen small enough such that the $O(h^3)$ error term does not violate the contractivity condition. Violating this restriction means that the method will expand phase space volume despite the ODE dictating that it is always contracting. Furthermore, it can be seen that large values of $|\frac{\partial u_i}{\partial x_i}|$ and $v_i$ will place further restrictions on the size of $h$. \end{example} We remark that we can make similar observations for the above methods applied to the ODEs for non-spherical particles. That is, the phase space volume is conserved only to the order of the method. \subsubsection{Multi-step methods and phase volume contractivity}\label{sec:multistep_contractivity} Another popular class of numerical methods used for particle dynamics is multi-step methods.
Consider the explicit $k$-step Adams-Bashforth methods. Such methods are of global order $k$ and can be seen as a map $\Phi_h^{[AB]}:(\mathbf{y}_0,...,\mathbf{y}_{k-1})\rightarrow(\mathbf{y}_1,...,\mathbf{y}_{k})$ such that \begin{align}\label{ABk} \mathbf{y}_k &= \mathbf{y}_{k-1} + h\sum_{i=0}^{k-1}b_i\mathbf{f}(\mathbf{y}_i),\\ \mathbf{y}_i &= \mathbf{y}_{i-1},\quad\text{for}\quad i=1,...,k-1 \end{align} where the coefficients $b_i$ satisfy $\sum_{i=0}^{k-1}b_i=1$. That is, $\Phi_h^{[AB]}$ takes a point in a $kn$ dimensional phase space to another point in the same space. Due to this and the fact that the initial vectors in the domain $\mathbf{y}_i$ for $i=0,...,k-1$ are independent of one another, it is less clear how to define a notion of numerical phase space volume that relates to that of the underlying ODE. However, in practice these initial vectors $\mathbf{y}_i$ for $i=0,...,k-1$ are usually computed by an order-$k$ one step method, for example a Runge-Kutta method. Therefore, the vectors $\mathbf{y}_i$ depend on the vectors $\mathbf{y}_j$ for $j<i$. This implies that each $\mathbf{y}_i$ has the same series expansion as the exact solution up to $O(h^{k})$. While a detailed analysis of multi-step methods and the preservation of phase volume lies outside the scope of the paper, we can illustrate this concept by the following example, which considers the phase volume properties of a second order Adams-Bashforth method using a second order Runge-Kutta method to compute the initial vectors. \begin{example}\label{AB2example} Consider the second-order Adams-Bashforth method $\Phi_h^{[AB]}:(\mathbf{y}_1,\mathbf{y}_0)\rightarrow(\mathbf{y}_2,\mathbf{y}_1)$ where \begin{align} \mathbf{y}_2 =& \mathbf{y}_1 + \frac{h}{2}\left(3\mathbf{f}(\mathbf{y}_1)-\mathbf{f}(\mathbf{y}_0)\right). \end{align} Now define $\mathbf{y}_1=\Phi^{[RK]}_h(\mathbf{y}_0)$ in the domain by the second order Runge-Kutta method \eqref{RK2}.
Applying $\Phi_h^{[AB]}$ to the ODEs \eqref{sphericalv} and \eqref{sphericalx} and taking the Jacobian determinant of $\mathbf{y}_2$ with respect to $\mathbf{y}_0$ gives \begin{align} \det\left(\frac{\partial \mathbf{y}_2}{\partial \mathbf{y}_0}\right) &= 1\,-\,6\alpha h\, +\, 36\alpha^2\frac{h^2}{2!}\\ &\quad \,+\, \left(\alpha\left(3\theta - \frac{21}{2}\right)\frac{\partial^2 u_i}{\partial x_i\partial x_j}v_j\,+\,\frac{21\alpha^2}{2}\frac{\partial u_i}{\partial x_i}\,-\,24\alpha^3\right)\frac{h^3}{3!}\,+\,O(h^4) \end{align} which is an $O(h^2)$ approximation to the exact contractivity \eqref{exactContractivity}. Note that here we have taken the contractivity over two time-steps $2h$. Like the previous example, we observe that the contractivity is affected by non-zero values of $\frac{\partial u_i}{\partial x_i}$ and $v_i$. \end{example} \subsection{A splitting scheme that preserves the contractivity of phase space volume} \label{sec:contractivitysplitting} We now analyze the splitting method based on the following splitting of equation \eqref{dissipativeODE} \begin{equation}\label{splitDissipativeODE} \dot{\mathbf{y}} = \mathbf{f}(\mathbf{y})-\mathbf{b}(\mathbf{y}) \quad\text{and}\quad \dot{\mathbf{y}} = -A\mathbf{y}+\mathbf{b}(\mathbf{y}) \end{equation} where $\mathbf{b}(\mathbf{y})$ is any vector that is constant along the flow of the second vector field. A similar splitting was proposed in \cite{tapley2019novel} for non-spherical particle dynamics. Denote their exact flow operators by $\psi^{[1]}_h$ and $\psi^{[2]}_h$, respectively. In the context of small particles immersed in a viscous fluid, the first vector field represents the free rigid body equations and the second is due to the Stokes viscous drag forces. The free rigid body vector field can be solved exactly.
That is, by a forward Euler step for the spherical case or otherwise using trigonometric or Jacobi elliptic functions, depending on whether or not the body is axially symmetric \cite{celledoni2008exact}. Due to the existence of an exact solution, we immediately have volume preservation \begin{equation} \left| \frac{\partial \psi^{[1]}_h (\mathbf{y}_0)}{\partial \mathbf{y}_0}\right| = 1. \end{equation} The second vector field is solved by the variation of parameters formula \begin{equation} \psi^{[2]}_h (\mathbf{y}_0) = e^{-hA}(\mathbf{y}_0 - A^{-1}\mathbf{b})+A^{-1}\mathbf{b}. \end{equation} Taking the Jacobian determinant gives \begin{equation} \left| \frac{\partial \psi^{[2]}_h (\mathbf{y}_0)}{\partial \mathbf{y}_0}\right| = e^{-h\mathrm{Tr}(A)}, \end{equation} which is consistent with the exact solution \eqref{detA}. As the Jacobian of the composition of two or more maps is the product of the Jacobians of the maps, any splitting method based on the alternating compositions of the flows $\psi^{[1]}_h$ and $\psi^{[2]}_h$ will be contractivity preserving. In forthcoming numerical experiments, we will consider only order one and two methods, including the order one Lie-Trotter method \begin{equation}\label{LieTrotter} \Phi^{[LT]}_h = \psi^{[1]}_h\circ\psi^{[2]}_h, \end{equation} and the order two Strang method \begin{equation}\label{strang} \Phi^{[SS]}_h = \Phi^{[LT]}_{\frac{h}{2}}\circ \Phi^{[LT]*}_{\frac{h}{2}}. \end{equation} Here, we denote by $\Phi^{[LT]*}_{h} = \psi^{[2]}_h\circ\psi^{[1]}_h$ the conjugate of $\Phi^{[LT]}_{h}$.
\subsection{Application to spherical particle dynamics and the centrifuge-preserving methods} To construct a contractivity preserving splitting method for spherical particles, we split the ODEs \eqref{sphericalv} and \eqref{sphericalx} into the following two vector fields \begin{equation}\label{ode11} \left(\begin{array}{c} \dot{\mathbf{v}}\\\dot{\mathbf{x}}\\ \end{array}\right) = \left(\begin{array}{c} 0\\\mathbf{v}\\ \end{array}\right) ,\quad\text{and}\quad \left(\begin{array}{c} \dot{\mathbf{v}}\\\dot{\mathbf{x}}\\ \end{array}\right) = \left(\begin{array}{c} \alpha (\mathbf{u}(\mathbf{x})-\mathbf{v})\\0\\ \end{array}\right). \end{equation} Their exact flow operators are \begin{equation} \psi^{[1]}_h\left(\begin{array}{c} \mathbf{v}\\\mathbf{x}\\ \end{array}\right) = \left(\begin{array}{c} \mathbf{v}\\\mathbf{x} + h\mathbf{v}\\ \end{array}\right) \quad\mathrm{and}\quad \psi^{[2]}_h\left(\begin{array}{c} \mathbf{v}\\\mathbf{x}\\ \end{array}\right) = \left(\begin{array}{c} e^{-\alpha\,h}(\mathbf{v}-\mathbf{u}(\mathbf{x})) + \mathbf{u}(\mathbf{x})\\\mathbf{x}\\ \end{array}\right). \end{equation} Indeed, letting $\mathbf{y}_0 = (\mathbf{v}_0^T,\mathbf{x}_0^T)^T$ we see that \begin{equation} \left| \frac{\partial \psi^{[1]}_h (\mathbf{y}_0)}{\partial \mathbf{y}_0}\right| = 1\quad\mathrm{and}\quad \left| \frac{\partial \psi^{[2]}_h (\mathbf{y}_0)}{\partial \mathbf{y}_0}\right| = e^{-3\alpha h}, \end{equation} hence any composition of the above flows will preserve contractivity. For the construction of the splitting method for non-spherical particle dynamics, we refer the reader to \cite{tapley2019novel}. A drawback of splitting methods is that composing methods of order higher than two requires the use of negative time steps. As the ODEs in question are dissipative, such higher order methods would therefore require strict time step restrictions, which is preferably avoided.
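As a concrete illustration, the following Python sketch composes the two exact flows above into a Strang step and checks, by finite differences, that the Jacobian determinant of the composed one-step map equals $e^{-3\alpha h}$, independently of the fluid field. The field $\mathbf{u}$ below is an arbitrary smooth placeholder (it need not be divergence-free for this property), not data from a real simulation.

```python
import numpy as np

# psi1 advects positions with frozen velocity; psi2 relaxes velocities
# toward u(x) with frozen positions.

alpha, h = 2.0, 0.05

def u(x):
    return np.array([np.sin(x[1]), np.cos(x[2]), x[0] * x[1]])

def psi1(v, x, t):
    return v, x + t * v

def psi2(v, x, t):
    uf = u(x)
    return np.exp(-alpha * t) * (v - uf) + uf, x

def strang(v, x):
    # Phi^{SS}_h = Phi^{LT}_{h/2} o Phi^{LT*}_{h/2}, written out flow by flow
    v, x = psi1(v, x, h / 2)
    v, x = psi2(v, x, h / 2)
    v, x = psi2(v, x, h / 2)
    v, x = psi1(v, x, h / 2)
    return v, x

def flat_map(y):
    v, x = strang(y[:3].copy(), y[3:].copy())
    return np.concatenate([v, x])

# central-difference Jacobian of the one-step map on (v, x) in R^6
y0 = np.array([0.3, -0.1, 0.2, 0.5, 0.7, -0.4])
eps = 1e-6
J = np.zeros((6, 6))
for j in range(6):
    e = np.zeros(6)
    e[j] = eps
    J[:, j] = (flat_map(y0 + e) - flat_map(y0 - e)) / (2 * eps)

det_err = abs(np.linalg.det(J) - np.exp(-3 * alpha * h))
```

The determinant matches $e^{-3\alpha h}$ up to the finite-difference truncation error, as the determinants of the individual flows ($1$ and $e^{-3\alpha h/2}$ each) simply multiply.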
The idea behind geometric numerical integrators is that preserving relevant properties of the exact solution upon discretisation can lead to better qualitative and long-time numerical solutions. With this in mind, instead of improving the accuracy of the method in a conventional sense by increasing the order, we propose as an alternative the following composition methods \begin{align} \Phi^{[C\!P_1]}_h =& \Phi^{[LT]}_{(1-\frac{\sqrt{6}}{6})h}\circ \Phi^{[LT]*}_{\frac{\sqrt{6}}{6}h} \label{CP1} \\ \Phi^{[C\!P_2]}_h =& \Phi^{[LT]}_{\frac{3h}{12}}\circ \Phi^{[LT]*}_{\frac{5h}{12}}\circ \Phi^{[LT]}_{\frac{4h}{12}} \label{CP2} \end{align} which are order one and order two methods, respectively. These splitting methods are particularly well suited to the calculation of particle dynamics, as their numerical solution preserves the centrifuge effect of the exact solution when considering the contraction of physical volume of the particle field. We will therefore refer to the methods \eqref{CP1} and \eqref{CP2} as the ``centrifuge-preserving" methods. This favorable property is discussed in more detail in section \ref{sec:num int sph}. In section \ref{sec:clusters} we show through numerical simulations that integrators possessing this property predict the spatial distribution of particles (both spherical and non-spherical) more accurately than methods without this property. \section{Divergence-free interpolation with matrix-valued radial basis functions}\label{sec:interp} To construct a divergence-free approximation to the discrete fluid field, we propose using matrix-valued radial basis functions (MRBFs). In this section, we will give a brief outline of their use and implementation. We then further motivate their use by showing that the interpolated vector field generated by MRBFs is a solution to the Stokes equation. The interpolation problem is as follows.
Given a set of vector-valued data $\{\mathbf{u}_{i},\mathbf{x}_{i}\}_{i=1}^{n^3}$ generated by an accurate direct numerical simulation of the Navier-Stokes equations, construct a divergence-free vector field that locally interpolates the data. In our context, $\mathbf{u}_{i} = \mathbf{u}(\mathbf{x}_{i})$ is the fluid velocity vector at the grid node located at $\mathbf{x}_{i} = (x_i,y_i,z_i)^{\mathrm{T}}$. When implementing a polynomial interpolation method, one usually chooses the $ n\times n\times n$ cube of data points neighboring the particle, where $n=2,3$ or $4$. This is because polynomial interpolation of degree $n-1$ requires $n$ data points in each dimension to specify a unique interpolating polynomial. MRBFs are not restricted to this particular choice of data points; however, to keep the interpolation methods comparable we will adopt this convention. The MRBF interpolating vector field $\mathbf{s}(\mathbf{x})$ is then constructed by \begin{equation}\label{MRBFsurf} \mathbf{s}(\mathbf{x}) = \sum_{i=1}^{n^3} \Theta_i(\mathbf{x}) \mathbf{c}_i, \end{equation} where \begin{equation} \Theta_i(\mathbf{x}) = (\nabla\nabla^T - \nabla^2 I)\theta(r_i(\mathbf{x}))\in\mathbb{R}^{3 \times 3} \end{equation} is called an MRBF, $\theta(r_i(\mathbf{x}))$ is a (scalar-valued) radial basis function, $r_i(\mathbf{x}) = \|\mathbf{x}_i-\mathbf{x}\|$ is the distance from the point $\mathbf{x}_i$ and $I$ is the identity matrix in three dimensions.
The $n^3$ vector-valued coefficients $\mathbf{c}_i\in\mathbb{R}^{3}$ are chosen such that $\mathbf{s}(\mathbf{x}_i) = \mathbf{u}(\mathbf{x}_i)$, which amounts to solving the following $3 n^3$ dimensional linear system \begin{equation} \left(\begin{array}{ccc} \Theta_1(\mathbf{x}_1)&\cdots&\Theta_{n^3}(\mathbf{x}_1)\\ \vdots &\ddots & \vdots \\ \Theta_1(\mathbf{x}_{n^3})&\cdots&\Theta_{n^3}(\mathbf{x}_{n^3})\\ \end{array}\right) \left(\begin{array}{c} \mathbf{c}_1\\ \vdots \\ \mathbf{c}_{n^3}\\ \end{array}\right) = \left(\begin{array}{c} \mathbf{u}_1\\ \vdots \\ \mathbf{u}_{n^3}\\ \end{array}\right)\in\mathbb{R}^{3n^3}. \end{equation} The particular RBF we use in the forthcoming experiments is the Gaussian $\theta(r) = \exp(-\epsilon^2r^2)$, where $\epsilon$ is a user-defined parameter that controls the flatness of the RBF. In general, one should choose $\epsilon$ as small as possible, as this leads to smaller interpolation errors, although at the cost of more ill-conditioned linear systems. It can be easily seen that $\mathbf{s}(\mathbf{x})$ is divergence-free. Using the double curl identity in $\mathbb{R}^3$ we have \begin{equation} \nabla\cdot\mathbf{s} = \sum_{i=1}^{n^3}\nabla\cdot\left((\nabla\nabla^T - \nabla^2 I)\theta(r_i)\mathbf{c}_i\right)=\sum_{i=1}^{n^3}\nabla\cdot\left(\nabla\times\nabla\times\left(\theta(r_i)\mathbf{c}_i\right)\right)=0. \end{equation} Finally, we list some advantages of MRBF interpolation over standard tri-polynomial interpolation: (1) they work equally well on scattered data points, meaning that they are just as well suited to interpolate data generated by a direct numerical simulation involving complex geometries on unstructured grids; (2) they have faster convergence of their derivatives \cite{buhmann2003radial,wendland2004scattered}, compared to tri-polynomial interpolation \cite{carlson1973error}; and (3), they are compatible with the Stokes equations, meaning that they construct a more physically realistic fluid field for fluid simulations.
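To make the construction concrete, the following Python sketch assembles and solves the MRBF system for the Gaussian RBF on a $2\times2\times2$ block of nodes, using the closed form $(\nabla\nabla^T-\nabla^2 I)\theta = 4\epsilon^2\theta\left[(1-\epsilon^2r^2)I + \epsilon^2\mathbf{d}\mathbf{d}^T\right]$ with $\mathbf{d}=\mathbf{x}-\mathbf{x}_i$, which follows from differentiating $\theta(r)=\exp(-\epsilon^2 r^2)$. The test field and the value of $\epsilon$ are arbitrary choices for illustration, not quantities from the experiments.

```python
import numpy as np

eps = 0.8  # illustrative shape parameter

def Theta(x, xi):
    """Matrix-valued Gaussian RBF (grad grad^T - laplacian I) theta(||x - xi||)."""
    d = x - xi
    r2 = d @ d
    th = np.exp(-eps**2 * r2)
    return 4 * eps**2 * th * ((1 - eps**2 * r2) * np.eye(3)
                              + eps**2 * np.outer(d, d))

def u_true(x):  # an arbitrary divergence-free test field
    return np.array([np.sin(x[2]), np.sin(x[0]), np.sin(x[1])])

# 2x2x2 block of grid nodes (n = 2, so an 8-node, 24-dimensional system)
nodes = np.array([[i, j, k] for i in (0.0, 1.0)
                            for j in (0.0, 1.0)
                            for k in (0.0, 1.0)])
m = len(nodes)

M = np.zeros((3 * m, 3 * m))
rhs = np.zeros(3 * m)
for a in range(m):
    rhs[3*a:3*a+3] = u_true(nodes[a])
    for b in range(m):
        M[3*a:3*a+3, 3*b:3*b+3] = Theta(nodes[a], nodes[b])
c = np.linalg.solve(M, rhs).reshape(m, 3)

def s(x):
    return sum(Theta(x, nodes[i]) @ c[i] for i in range(m))

# s interpolates the data at the nodes ...
node_err = max(np.linalg.norm(s(p) - u_true(p)) for p in nodes)

# ... and is divergence-free everywhere (checked by central differences)
xq, d = np.array([0.37, 0.52, 0.61]), 1e-5
div = sum((s(xq + d*e)[i] - s(xq - d*e)[i]) / (2*d)
          for i, e in enumerate(np.eye(3)))
```

The divergence vanishes identically by construction; the finite-difference check only confirms the implementation of the kernel.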
We will discuss point (3) in the next section. \subsection{MRBFs as regularised Stokeslet solutions to the Stokes equations}\label{sec:interpNS} In addition to the fact that the underlying flow field should be divergence-free, we are given extra knowledge that can be exploited; namely, that the data is a numerical solution to the incompressible Navier-Stokes equations \begin{equation}\label{NSeq} \rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) - \mu\nabla^2 \mathbf{u} ~=~ - \nabla p +\mathbf{F} \quad \mathrm{and}\quad \nabla \cdot \mathbf{u} ~=~0. \end{equation} We are only interpolating in space, and hence approximating steady-state solutions to \eqref{NSeq}. Moreover, as the grid-spacing $\Delta x$ is comparable to the smallest length scales of the flow (e.g., the Kolmogorov scale for turbulent flows), the local Reynolds number is small and the non-linear terms of equations \eqref{NSeq} can be ignored. Under the above assumptions, a good approximation for the local flow in a grid-cell is given by the steady Stokes equations, which read \begin{equation}\label{StokesEq} \mu \nabla^2 \mathbf{u} - \nabla p ~=~ -\mathbf{F} \quad\mathrm{and}\quad \nabla \cdot \mathbf{u} ~=~0, \end{equation} where we have set $\mu=1$. Cortez \cite{cortez2001method} presents what is called the regularised Stokeslet solution to the Stokes equation, which is an approximation to Green's function of the Stokes equation for the body force $\mathbf{F} = \phi_\epsilon(\mathbf{x})\,\mathbf{f}_0$, where $\mathbf{f}_0\in\mathbb{R}^3$ is constant. Here, $\phi_\epsilon(\mathbf{x})$ is the so-called ``blob" function, which is a radially symmetric smooth approximation of the Dirac delta function $\delta(\mathbf{x})$ that decays to zero at infinity whilst satisfying \begin{equation} \int \phi_\epsilon (\mathbf{x})\,\mathrm{d}\mathbf{x} = 1 \quad\mathrm{and}\quad \lim\limits_{\epsilon\rightarrow0}(\phi_{\epsilon}(\mathbf{x}))=\delta(\mathbf{x}).
\end{equation} Now define the functions $G_\epsilon(\mathbf{x})$ and $B_\epsilon(\mathbf{x})$ as the solutions to \begin{equation} \nabla^2 G_\epsilon(\mathbf{x}) = \phi_\epsilon(\mathbf{x}) \quad\mathrm{and}\quad \nabla^2 B_\epsilon(\mathbf{x}) = G_\epsilon(\mathbf{x}), \end{equation} which are smooth approximations to the Green's functions of the Laplace equation and of the biharmonic equation $\nabla^4B(\mathbf{x})=\delta(\mathbf{x})$, respectively. Then Cortez's regularised Stokeslet solution reads \begin{equation}\label{rs} \mathbf{u}_\epsilon(\mathbf{x}) = (\mathbf{f}_0\cdot\nabla)\nabla B_\epsilon(\mathbf{x}) - \mathbf{f}_0 G_\epsilon(\mathbf{x}) \end{equation} with pressure term \begin{equation} p_\epsilon(\mathbf{x}) = \mathbf{f}_0\cdot\nabla G_\epsilon(\mathbf{x}). \end{equation} Using the definition for $G_\epsilon(\mathbf{x})$, we can rewrite the regularised Stokeslet \eqref{rs} as \begin{equation} \mathbf{u}_\epsilon(\mathbf{x}) = \,(\nabla \nabla^T - \nabla^2I)(B_\epsilon(\mathbf{x})\mathbf{f}_0), \end{equation} which is identical to an MRBF element if we identify $B_\epsilon(\mathbf{x})$ with a positive-definite RBF $\theta(\|\mathbf{x}\|)$ (e.g., the Gaussian $\theta(\|\mathbf{x}\|) = \exp\left(-\epsilon^2\|\mathbf{x}\|^2\right)$) and the force vectors with the interpolation coefficient vectors $\mathbf{c}_i$ from equation \eqref{MRBFsurf}. This means that a vector field that is constructed from a linear combination of MRBFs (i.e., equation \eqref{MRBFsurf}) corresponds to a linear combination of regularised Stokeslet solutions, with forces $\mathbf{f}_i=\mu \mathbf{c}_i$. This leads to the following solution to the Stokes equation, now written in terms of MRBFs \begin{equation} \mathbf{s}(\mathbf{x}) = \mathbf{u}_\epsilon(\mathbf{x}) = \sum_{i=1}^{N}\,\Theta_i(\mathbf{x})\mathbf{f}_i, ~~\mathrm{and}~~ p_\epsilon(\mathbf{x}) =\sum_{i=1}^{N} \mathbf{f}_i\cdot\nabla(\nabla^2\theta(r_i)).
\end{equation} One implication of this is that, when MRBF interpolation is used, the interpolated background fluid field is related to the gradient of a scalar pressure field. The benefit of this can be illustrated by inserting equation \eqref{NSeq} into equation \eqref{xdot} to derive an expression for $\nabla\cdot\mathbf{v}(\mathbf{x})$ in terms of the pressure field \cite{elperin1996turbulent} \begin{equation} \nabla\cdot\mathbf{v}(\mathbf{x}) = \nabla\cdot\mathbf{u}(\mathbf{x}) + \alpha^{-1}(\nabla^2 p_\epsilon(\mathbf{x}))+O(\alpha^{-2}). \end{equation} This equation tells us that the pressure field is also related to the preferential concentration of particles. Moreover, it suggests that particles cluster in regions of maximum pressure ($\nabla^2 p_\epsilon(\mathbf{x}) <0$) \cite{elperin1996turbulent}. Indeed, in \cite{luo2007pressure}, numerical evidence is found to support the correlation between the Laplacian of the pressure field $\nabla^2 p_\epsilon(\mathbf{x})$ and the spatial distribution of the particles. If, however, the background fluid field is interpolated by a standard polynomial method, then there is no underlying scalar pressure field, which can erroneously influence the particle path lines. \section{Numerical errors and preferential concentration of spherical particles} \label{sec:spherical} In what follows we will consider how volumes of inertial spherical particles evolve under the flow of the ODEs \eqref{sphericalv} and \eqref{sphericalx}. The goal of this section is to relate the numerical interpolation and integration errors to the clustering mechanisms of the exact solution. This is done by first expanding the exact solution into its elementary differentials. We then discuss the effects of integration errors and interpolation errors on the evolution of volumes of particles.
In what follows, we will assume that $\mathbf{u}(\mathbf{x})$ is an arbitrary vector field, not necessarily divergence-free, until explicitly stated otherwise. \subsection{Expanding the exact solution}\label{sec:expansion} We start this section by defining the notion of volume of the particle suspension. Let $D_t\subset\mathbb{R}^3$ denote an open and bounded set at time $t$; its volume at $t=0$ is given by \begin{equation} \mathrm{vol}(D_0) := \int_{D_0}\!\mathrm{d} \mathbf{x}_0 \end{equation} where $\mathbf{x}(0)=\mathbf{x}_0$. This can be thought of as the volume occupied by a suspension of inertial particles confined to the region $D_0$. The idea is to consider how this volume expands or contracts in time. Consider now the same volume after evolving under the ODEs \eqref{sphericalv} and \eqref{sphericalx} for time $t$ \begin{equation} \mathrm{vol}(D_t) = \int_{D_t}\!\!\mathrm{d}\mathbf{x}(t)= \int_{D_0}\!\!\det \left(\frac{\partial\mathbf{x}(t)}{\partial \mathbf{x}_0}\right) \mathrm{d} \mathbf{x}_0. \end{equation} Hence, the quantity \begin{equation}\label{X} \Psi := \det \left(\frac{\partial \mathbf{x}(t)}{\partial \mathbf{x}_0}\right) \end{equation} determines how volumes of particles contract or expand over time. That is, given a volume of particles, if $\Psi>1$ the volume is expanding, if $\Psi<1$ the volume is contracting and if $\Psi=1$ the volume is preserved. These three cases correspond to the particulate phase dispersing, concentrating or remaining at constant density, respectively. Note that we will refer to $\Psi$ as the \textit{physical} volume, to distinguish it from the phase space volume. To illustrate the connection between $\nabla\cdot\mathbf{v}$ and $\Psi$, we can take the Jacobian of equation \eqref{sphericalx} with respect to $\mathbf{x}_0$ \cite{ijzermans_meneguz_reeks_2010} \begin{equation} \frac{\partial \dot{\mathbf{x}}}{\partial \mathbf{x}_0} = \frac{\partial \mathbf{x}}{\partial \mathbf{x}_0}\frac{\partial \mathbf{v}}{\partial \mathbf{x}}.
\end{equation} Applying Jacobi's formula \eqref{JacobiFormula} yields a differential equation for $\Psi$ \begin{equation}\label{dXdt} \frac{\partial}{\partial t}\Psi = \left(\nabla \cdot \mathbf{v}\right)\Psi. \end{equation} It is clear that if $\nabla \cdot \mathbf{v}<0$, then $\Psi$ is decreasing and if $\nabla \cdot \mathbf{v}>0$ then $\Psi$ is increasing. We will now show this more concretely, by expanding the exact solution into its elementary differentials, which we now recall. Denote by $\mathbf{y}(t)$ the exact solution of an ODE \begin{equation} \dot{y}_i(t) = f_i(\mathbf{y}(t)), \quad\text{for}\quad i=1,...,n. \end{equation} For some small time $0<h\ll1$, $y_i(h)$ has the following elementary differential expansion \cite{GNI} \begin{align} y_i(h) =\,& y_i(0) + h f_i\big|_{t=0} + \frac{h^2}{2}\left(\frac{\partial f_i}{\partial y_j}f_j\right)\bigg|_{t=0} \\\quad & +\frac{h^3}{3!}\left(\frac{\partial^2 f_i}{\partial y_j\partial y_k}f_jf_k+\frac{\partial f_i}{\partial y_j}\frac{\partial f_j}{\partial y_k}f_k\right)\bigg|_{t=0} \!+ \frac{h^4}{4!}\bigg( \frac{\partial^3 f_i}{\partial y_j\partial y_k\partial y_l}f_jf_kf_l \\ & + 3\frac{\partial^2 f_i}{\partial y_j\partial y_k}\frac{\partial f_j}{\partial y_l}f_lf_k + \frac{\partial f_i}{\partial y_j}\frac{\partial^2 f_j}{\partial y_k\partial y_l}f_kf_l + \frac{\partial f_i}{\partial y_j}\frac{\partial f_j}{\partial y_k}\frac{\partial f_k}{\partial y_l}f_l\bigg)\bigg|_{t=0} + ... \end{align} for $i=1,...,n$. We note that the expansion converges for sufficiently small $h$ (relative to the Lipschitz constant of $\mathbf{f}$).
The elementary differentials of the ODEs \eqref{sphericalv} and \eqref{sphericalx} are calculated and the terms up to $O(h^3)$ are presented \begin{align} v_i(h) =& v_i + h\alpha(u_i-v_i) + \frac{h^2}{2} \left(-\alpha^2(u_i-v_i) + \alpha \frac{\partial u_i}{\partial x_j}v_j\right) \\ & + \frac{h^3}{3!}\left(\alpha v_jv_k\frac{\partial^2u_i}{\partial x_j \partial x_k} + \alpha^3 (u_i-v_i) + \alpha^2 \frac{\partial u_i}{\partial x_j} (u_j-2v_j)\right) + ...\label{expansionv} \\ x_i(h) =& x_i + hv_i + \frac{h^2}{2} \left(\alpha(u_i-v_i)\right) + \frac{h^3}{3!}\left(\alpha\frac{\partial u_i}{\partial x_j}v_j - \alpha^2(u_i-v_i)\right) + ... \label{expansionx} \end{align} where the variables appearing on the right-hand side are evaluated at $t=0$. Here we have assumed nothing about the size of $\alpha$, but instead take $\alpha h\ll1$ such that the series converges. Taking the determinant of the Jacobian of $\mathbf{x}(h)$ from equation \eqref{expansionx} with respect to $\mathbf{x}_0$ yields an expansion for $\Psi$ with respect to $h$ \begin{equation}\label{vol expansion} \Psi = 1 + h\Psi_1 + h^2\Psi_2 + h^3\Psi_3 + h^4\Psi_4 + O(h^5) \end{equation} where \begin{gather}\label{Xexpand} \Psi_1 =\, 0,\quad \Psi_2 =\, {\frac {\alpha}{2}}\,\chi_3 ,\quad \Psi_3 =\, {\frac {\alpha}{6}}\left(\chi_5-\alpha\chi_3\right) \\ \Psi_4 =\, \frac{\alpha}{24}\,\left(\chi_1 + \alpha \chi_2 + \alpha^2 \chi_3 + 3\alpha \chi_3^2 -2 \alpha \chi_4 - 2\alpha\chi_5\right). \end{gather} The elementary differentials $\chi_i$ are given by \begin{gather} \chi_1 = v_iv_j\frac{\partial^3u_k}{\partial x_i\partial x_j\partial x_k},\quad \chi_2 = u_i\frac{\partial^2u_j}{\partial x_i\partial x_j},\quad \chi_3 = \frac{\partial u_i}{\partial x_i}\\ \chi_4 = \frac{\partial u_j}{\partial x_i}\frac{\partial u_i}{\partial x_j} = \|S\|^2_F - \|\Omega\|^2_F, \quad \chi_5 = v_j\frac{\partial^2 u_i}{\partial x_i\partial x_j}. \end{gather} This can be verified by expansion of \eqref{dXdt} into its Taylor series.
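The $O(h^4)$ remainder of the truncated expansion \eqref{expansionx} can also be verified numerically. The following Python sketch does so in one spatial dimension (where $\partial u_i/\partial x_j$ reduces to $u'$), with $u(x)=\sin(x)$ and arbitrary parameter values; the reference solution is a finely resolved RK4 integration.

```python
import numpy as np

alpha = 1.3       # arbitrary Stokes number parameter
u  = np.sin       # 1D test field and its derivative
du = np.cos

def series_x(x, v, h):
    """Truncated elementary-differential expansion of x(h), 1D version."""
    return (x + h*v + h**2/2 * alpha*(u(x) - v)
              + h**3/6 * (alpha*du(x)*v - alpha**2*(u(x) - v)))

def reference_x(x, v, h, n=400):
    """High-accuracy RK4 reference solution of vdot = alpha(u - v), xdot = v."""
    def f(y):
        vv, xx = y
        return np.array([alpha*(u(xx) - vv), vv])
    y = np.array([v, x])
    dt = h / n
    for _ in range(n):
        k1 = f(y); k2 = f(y + dt/2*k1); k3 = f(y + dt/2*k2); k4 = f(y + dt*k3)
        y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return y[1]

x0, v0 = 0.7, 0.4
errs = [abs(reference_x(x0, v0, h) - series_x(x0, v0, h)) for h in (0.2, 0.1)]
order = np.log2(errs[0] / errs[1])   # should be close to 4
```

Halving the step size reduces the truncation error by roughly a factor of $2^4$, confirming that the omitted terms are $O(h^4)$.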
If we insist that the fluid field is divergence-free, then the $\Psi_i$ and $\chi_i$ all vanish except for $\Psi_4$ and $\chi_4$. We are then left with \begin{align}\label{vol centrifuge effect} \Psi\big|_{\nabla\cdot\mathbf{u}=0}= 1-\frac{\alpha^2}{12}\,{h}^{4} \left( \|S\|^2_F - \|\Omega\|^2_F\right)+O \left( {h}^{5} \right) \end{align} which relates the fluid rate of strain and rotation with the contractivity of physical volume in the same way as the centrifuge effect \eqref{MaxeyCentrifuge}. That is, if the rate of strain is greater than the rate of vorticity, physical volumes of particles will contract, and vice versa. \subsection{Expanding the numerical solution and numerical errors}\label{sec:num int sph}\label{sec:interp errors} In this section, we perform a similar analysis to that of section \ref{sec:expansion}, but instead of the exact flow of the ODE, we consider how physical volumes of particles evolve under the \textit{numerical} flow. We do so by expanding the numerical methods into their elementary differentials and comparing the expansions with that of the exact solution. We then look at how errors in the divergence of the fluid field affect the evolution of physical volume under the numerical flow. The result is that, if a divergence-free interpolation method is used, the numerical methods preserve the qualitative behavior of the centrifuge effect. Moreover, we show here that the centrifuge-preserving methods replicate the centrifuge effect from equation \eqref{vol centrifuge effect} up to the accuracy of the interpolation method when the fluid field is divergence-free. Consider the map $\Phi^{[n]}_h:(\mathbf{v}_0,\mathbf{x}_0)\rightarrow(\mathbf{v}_1,\mathbf{x}_1)$, where the superscript $[n]$ denotes the numerical method in consideration.
We calculate $\Psi^{[n]} = \det\left(\frac{\partial \mathbf{x}_1}{\partial \mathbf{x}_0}\right)$ and expand the solution in $h$ yielding an expression of the form \begin{equation}\label{num expansion} \Psi^{[n]} = 1 + h\Psi^{[n]}_1+h^2\Psi^{[n]}_2+h^3\Psi^{[n]}_3+h^4\Psi^{[n]}_4+O(h^5) \end{equation} The values of $\Psi^{[n]}_i$ for $i = 2,3,4$ for the Forward Euler (FE1), Lie-Trotter (LT1), order one centrifuge-preserving (CP1), Ralston (RK2), Adams-Bashforth two-step (AB2), order two centrifuge-preserving (CP2) and Strang splitting (SS2) methods are presented in table \ref{table:num expansion}. Note that $\Psi^{[n]}_1=0$ for all the above methods. \renewcommand{\arraystretch}{1.5} \begin{table} \centering \begin{tabular}{c|ccc} Method & $\Psi_2^{[n]}$ & $\Psi_3^{[n]}$ & $\Psi_4^{[n]}$ \\ \hline \begin{tabular}{c} Exact\\ solution\\ \end{tabular} & $\frac{\alpha}{2}\chi_3$&${\frac {\alpha}{6}}\left(\chi_5-\alpha\chi_3\right)$ & \begin{tabular}{c} $ \frac{\alpha}{24}\,\big(\chi_1 + \alpha\chi_2 + \alpha^2 \chi_3+ 3\alpha\chi_3^2 $ \\ $\qquad -2 \alpha\chi_4 - 2\alpha\chi_5\big)$ \\ \end{tabular} \\ \hline FE1 &0 & 0 & 0 \\ \hline LT1 & $\alpha\chi_3$ & $\frac{\alpha^2}{2}\chi_3$ &$-\frac {{\alpha}^{2}}{6} \left(3\,\chi_{{4}}- \alpha\,\chi_{{3}}-3\,{\chi_{{3}}}^{2} \right)$ \\ \hline CP1 & $\frac{\alpha\sqrt{6}}{6}\chi_3$ &${\frac {\alpha\, \left( \sqrt {6}-1 \right) }{6}}\chi_{{5}}-{\frac {{\alpha}^{2}\sqrt {6}}{12}}\chi_{{3}} $&\begin{tabular}{c} $\frac {\alpha}{36} \big( ( {\alpha}^{2}\chi_{{3}}-3\,\alpha\, \chi_{{5}}+{\frac {7}{2}\chi_{{1}}} ) \sqrt {6}$\\ $+ \big( 3\,{ \chi_{{3}}}^{2}-3\,\chi_{{4}}+3\,\chi_{{5}} \big) \alpha-6\,\chi_{{1}} \big) $ \end{tabular}\\ \hline AB2 & $\frac{3\alpha}{16\theta}\chi_3$ & ${\frac {3\alpha}{32}}\left(\chi_5-\alpha\chi_3\right)$ & $\frac{\alpha}{128}\left(3\theta\chi_1+16\alpha\chi_3^2-16\alpha\chi_4\right)$ \\ \hline RK2 & $\frac{\alpha}{2}\chi_3$& 0 & $\frac{\alpha^2}{8}\left(\chi_3^2 - \chi_4\right)$ \\ \hline CP2 & 
$\frac{\alpha}{2}\chi_3$ &$\frac{3\alpha^2}{16}\chi_3+\frac{\alpha}{6}\chi_5$ & \begin{tabular}{c} $\frac{\alpha}{576}\, \big( 33\,{\alpha}^{2}\chi_{{3}}+72\,\alpha\,{\chi_{{3}}}^{2}+24\,\alpha\,\chi_{{2}}$\\ $\quad -48\,\alpha\,\chi_{{4}}-60\,\alpha\,\chi_{{5}}+32\,\chi_{{1}} \big)$\\ \end{tabular} \\ \hline SS2 & $\frac{\alpha}{2}\chi_3$ &${\frac {\alpha}{4}}\left(\chi_5-\alpha\chi_3\right)$ & $\frac{\alpha}{48}\,\left(3\chi_1 + 4\alpha^2 \chi_3 + 6\alpha \chi_3^2 -6 \alpha \chi_4 - 6\alpha\chi_5\right)$ \\ \end{tabular} \caption{The terms in the series expansion \eqref{num expansion} for the physical volume $\Psi^{[n]}$ under various numerical methods. Note that $\Psi^{[n]}_1 = 0$ for all the methods. } \label{table:num expansion} \end{table} \renewcommand{\arraystretch}{1} We make a number of observations from this table. First, the divergence of the fluid field affects $\Psi^{[n]}$ at $O(h^2)$ for each method. The one exception to this is FE1, which satisfies $\Psi^{[F\!E]}=1$ and therefore erroneously preserves physical volume. When the divergence of the fluid field is zero, all the $\chi_i=0$ except $\chi_4$. For example, setting $\nabla\cdot\mathbf{u}=0$ gives for the SS2 method \begin{equation}\label{strangCentrifuge} \Psi^{[SS]}\big|_{\nabla\cdot\mathbf{u}=0} = 1- \frac{\alpha^2}{8}\,{h}^{4} \left( \|S\|^2_F - \|\Omega\|^2_F\right)+O \left( {h}^{5} \right). \end{equation} This means that the numerical solution generated by the Strang splitting method \eqref{strang} reproduces the \textit{qualitative} nature of the centrifuge effect, in the sense that $\Psi^{[SS]}\big|_{\nabla\cdot\mathbf{u}=0}>1$ when $\left.\|\Omega\|_F>\|S\|_F\right.$. This qualitative centrifuge effect is seen by all the methods (other than FE1) by setting $\nabla\cdot\mathbf{u}=0$ in table \ref{table:num expansion}. 
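The rates in table \ref{table:num expansion} can be observed directly on a linear, divergence-free test field $\mathbf{u}(\mathbf{x})=M\mathbf{x}$ with $M$ a rotation generator, for which $\mathrm{tr}(M^2)=\|S\|_F^2-\|\Omega\|_F^2$ and all the maps involved are linear, so $\Psi^{[n]}$ is exactly the determinant of the position block of the one-step matrix. The following Python sketch (parameter values arbitrary) checks that FE1 preserves physical volume exactly, that the volume of the exact solution grows in the vortex, and that the volume error of SS2 relative to the exact flow decays at order four, while that of CP2 decays faster.

```python
import numpy as np

I3 = np.eye(3)
M = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 0.]])  # rotation, tr M = 0
al = 1.0                                                   # alpha (arbitrary)

def expm(A, terms=40):
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def blk(V11, V12, V21, V22):
    return np.block([[V11, V12], [V21, V22]])

def psi1(t):                      # (v, x) -> (v, x + t v)
    return blk(I3, 0 * I3, t * I3, I3)

def psi2(t):                      # (v, x) -> (e^{-al t}(v - Mx) + Mx, x)
    e = np.exp(-al * t)
    return blk(e * I3, (1 - e) * M, 0 * I3, I3)

LT  = lambda t: psi1(t) @ psi2(t)        # Phi^{LT}  = psi1 o psi2
LTs = lambda t: psi2(t) @ psi1(t)        # Phi^{LT*} = psi2 o psi1

def Psi(A):                       # det of the position block d x1 / d x0
    return np.linalg.det(A[3:, 3:])

exact = lambda h: Psi(expm(h * blk(-al * I3, al * M, I3, 0 * I3)))
FE    = lambda h: Psi(blk((1 - al * h) * I3, al * h * M, h * I3, I3))
SS    = lambda h: Psi(LT(h / 2) @ LTs(h / 2))
CP2   = lambda h: Psi(LT(3 * h / 12) @ LTs(5 * h / 12) @ LT(4 * h / 12))

err = lambda f, h: abs(f(h) - exact(h))
order_SS = np.log2(err(SS, 0.1) / err(SS, 0.05))   # should be close to 4
fe_vol_defect = abs(FE(0.1) - 1.0)                 # FE1 preserves volume exactly
```

For this field the centrifuge effect expands particle volumes ($\|\Omega\|_F>\|S\|_F$), which the contractivity-preserving methods reproduce while FE1 erroneously keeps $\Psi^{[F\!E]}=1$.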
However, we note here that the coefficient of the $O(h^4)$ term in equation \eqref{strangCentrifuge} is different to that of the exact solution \eqref{vol centrifuge effect}. This means that, while the method contracts physical volume when the exact solution does, it does so at an erroneous rate. This issue is circumvented by the centrifuge-preserving methods (CP1 and CP2), where we have chosen the time-step coefficients in such a way that they yield the same expansion as \eqref{vol centrifuge effect} up to $O(h^4)$, and hence contract physical volume at the same rate as the exact solution to leading order. We now discuss the effect of interpolation errors in simulations of spherical particles in numerically calculated flows. Say that $\mathbf{u}_e(\mathbf{x})$ is the true solution to the underlying Navier-Stokes equations that satisfies $\nabla\cdot\mathbf{u}_e(\mathbf{x})=0$. As this exact solution is generally not available, we consider the following three cases \begin{enumerate} \item Case (a): the fluid field has interpolation errors $\boldsymbol{\delta}(\mathbf{x})$ that are not divergence-free: $\mathbf{u}(\mathbf{x})= \mathbf{u}_e(\mathbf{x}) + \boldsymbol{\delta}(\mathbf{x})$, where $\nabla\cdot\boldsymbol{\delta}(\mathbf{x}) \ne 0$ (e.g., using standard polynomial interpolation) \item Case (b): the fluid field has interpolation errors $\boldsymbol{\delta}(\mathbf{x})$ and is divergence-free: $\mathbf{u}(\mathbf{x})= \mathbf{u}_e(\mathbf{x}) + \boldsymbol{\delta}(\mathbf{x})$, where $\nabla\cdot\boldsymbol{\delta}(\mathbf{x}) = 0$ (e.g., using MRBF interpolation) \item Case (c): the fluid field is free of errors: $\mathbf{u}(\mathbf{x}) = \mathbf{u}_e(\mathbf{x})$ and $\nabla\cdot\mathbf{u}(\mathbf{x})=0$ (e.g., when the velocity field is available in closed form, also referred to as ``exact" interpolation.)
\end{enumerate} We pay particular attention to how errors resulting in $\nabla\cdot\boldsymbol{\delta}(\mathbf{x})\ne0$ affect how the numerical methods evolve physical volume. To quantify this we define the physical volume error by \begin{equation} \Delta \Psi^{[n]} = \Psi-\Psi^{[n]}. \end{equation} Here, $\Psi$ is used to denote the physical volume over time $h$ of the exact solution with fluid field corresponding to case (c), that is, the physical volume of the true solution in the absence of any errors, whereas $\Psi^{[n]}$ is the physical volume of the numerical solution with fluid field corresponding to one of the three cases given above. The results are presented in table \ref{table:vol errors}. We see here that the physical volume errors in case (a) are $O(h^2)$ and proportional to $\nabla\cdot\boldsymbol{\delta}(\mathbf{x})$ for all the methods. In case (b), the physical volume errors are $O(h^4)$, except for the centrifuge-preserving methods, which have physical volume error of order $O(h^4\delta_4)$, where $\delta_4 = \left| \chi_4(\mathbf{u}) - \chi_4(\mathbf{u}+{\boldsymbol \delta})\right| = O(|\boldsymbol{\delta}|) \ll |\chi_4(\mathbf{u})|$ is the error in $\chi_4$ due to the interpolation method, which we assume is small. In case (c), $\delta_4=0$ and the centrifuge-preserving methods have physical volume error of order $O(h^5)$, while the other methods are $O(h^4)$. Due to this behavior, we expect all the methods to evolve physical volume more accurately when a divergence-free interpolation method such as MRBF interpolation is implemented. In this case, we expect the centrifuge-preserving methods to perform especially well, as suggested by table \ref{table:vol errors}. \renewcommand{\arraystretch}{1.5} \begin{table}[h!]
\centering \begin{tabular}{c|ccc} & \multicolumn{3}{c}{$|\Delta\Psi^{[n]}|$}\\\hline Method & \renewcommand{\arraystretch}{1.1}\begin{tabular}{c} Case (a) \\ ($\nabla\cdot\mathbf{u}(\mathbf{x})\ne0$)\\ \end{tabular} & \renewcommand{\arraystretch}{1.1}\begin{tabular}{c} Case (b)\\ ($\nabla\cdot\mathbf{u}(\mathbf{x})=0$)\\ \end{tabular}&\renewcommand{\arraystretch}{1.1}\begin{tabular}{c} Case (c)\\ ($\mathbf{u}(\mathbf{x})=\mathbf{u}_e(\mathbf{x})$)\\ \end{tabular}\\ \hline FE1 & $\frac{\alpha}{2}h^2|\chi_3|$&$\frac{\alpha^2}{12}\,{h}^{4} |\chi_4|$ &$\frac{\alpha^2}{12}\,{h}^{4} |\chi_4|$ \\ LT1 & $\alpha h^2|\chi_3|$ &$\frac{5\alpha^2}{12}\,{h}^{4} |\chi_4+\frac{\delta_4}{2}|$ &$\frac{5\alpha^2}{12}\,{h}^{4} |\chi_4|$ \\ CP1 & $\frac{\alpha\sqrt{6}}{6}h^2|\chi_3|$ &$\frac{\alpha^2}{12}\,{h}^{4} |\delta_4|$ &$O(h^5)$ \\ AB2 & $\frac{3\alpha }{16\theta}h^2|\chi_3|$&$\frac{\alpha^2}{24}\,{h}^{4}|\chi_4+\frac{\delta_4}{8}|$ &$\frac{\alpha^2}{24}\,{h}^{4} |\chi_4|$ \\ RK2 &$\frac{\alpha}{2}h^2|\chi_3|$ &$\frac{\alpha^2}{24}\,{h}^{4}|\chi_4+\frac{\delta_4}{8}|$ &$\frac{\alpha^2}{24}\,{h}^{4} |\chi_4|$ \\ CP2 & $\frac{\alpha}{2}h^2|\chi_3|$ & $\frac{\alpha^2}{12}\,{h}^{4} |\delta_4|$ &$O(h^5)$ \\ SS2 & $\frac{\alpha}{2}h^2|\chi_3|$& $\frac{\alpha^2}{24}\,{h}^{4}|\chi_4+\frac{\delta_4}{8}|$ & $\frac{\alpha^2}{24}\,{h}^{4} |\chi_4|$\\ \end{tabular} \caption{The errors of the physical volume after one time step for the numerical methods under consideration. } \label{table:vol errors} \end{table} \renewcommand{\arraystretch}{1} In addition to the erroneous contraction of physical volume, we note from examples \ref{FEcontraction}--\ref{AB2example} that large divergence errors impose more stringent restrictions on the time step for the numerical methods to be contractive. \section{Numerical simulations}\label{sec:numerical tests} In this section we test our numerical methods for simulating suspensions of particles in viscous flows.
The section begins by outlining the flow field and summarizing the methods and numerical parameters. We then outline the computational cost and verify the convergence of the methods. Next we simulate suspensions of $10^4$ particles in Taylor-Green vortices. This is the most important part of the section and comprises three experiments. The first compares the integration methods with exact evaluation of the fluid field. The second compares the effect of different interpolation errors using the CP2 integrator. The third and final experiment explores how a combination of the proposed interpolation and integration methods can be used to generate cost-effective, accurate particle distributions compared to conventional methods. \subsection{Preliminaries}\label{preliminaries} Here we will briefly outline the numerical methods that are under consideration in the forthcoming numerical experiments, the fluid field and, finally, the particle models. The integration methods under consideration and their properties are summarized in table \ref{table:integration properties}. \begin{table}[h!] \centering \begin{tabular}{c|ccccccc} & FE1 & LT1 & CP1 & AB2 & RK2 & SS2 & CP2 \\ \hline Order & 1 & 1 & 1 & 2 & 2 & 2 & 2 \\ Contractivity-preserving & No & Yes & Yes & No & No & Yes & Yes \\ Centrifuge-preserving & No & No & Yes & No & No & No & Yes \\ \end{tabular} \caption{Summary of the properties of the integration methods under consideration} \label{table:integration properties} \end{table} We will abbreviate the divergence-free MRBF interpolation with the nearest $(n+1)\times (n+1)\times (n+1)$ data points by MRBF$n$ and the non-divergence-free order $n$ tri-polynomial interpolation by TP$n$. The MRBF shape parameters are set to $\epsilon_1 = 0.31$, $\epsilon_2 = 0.23$ and $\epsilon_3 = 0.16$, corresponding to the MRBF1, MRBF2 and MRBF3 schemes, respectively, and are chosen empirically.
We will compare the methods against a reference solution that uses exact evaluation of the analytic fluid field and the classical fourth order Runge-Kutta method for time integration with a time step that is 10 times smaller than that of the other methods. Note that such a reference solution is only available in the case that the flow field is known in closed form. The discrete fluid field is generated by evaluating a closed form solution to the Navier-Stokes equations on a regularly spaced grid with uniform sampling in each direction $\Delta x = \Delta y = \Delta z=1/10$. We use a stationary Taylor-Green vortex solution that was proposed in \cite{taylor1937mechanism} and has been used by other authors to study the behavior of particles in cellular flow fields \cite{maxey1987motion,ruan2020structural,bergougnoux2014motion,jayaram2020clustering}. The particular Taylor-Green flow field used in the experiments is given by $\mathbf{u}(\mathbf{x}) = (u(\mathbf{x}),v(\mathbf{x}),w(\mathbf{x}))^T$ where \begin{align} u(\mathbf{x}) =& \,2\cos(2\pi x)\sin(2\pi y)\sin(2\pi z),\\ v(\mathbf{x}) =& -\sin(2\pi x)\cos(2\pi y)\sin(2\pi z),\label{TGV}\\ w(\mathbf{x}) =& -\sin(2\pi x)\sin(2\pi y)\cos(2\pi z). \end{align} We will perform experiments on both spherical and non-spherical particles. Denoting by $\lambda$ the aspect ratio of the particle, $\lambda=1$ corresponds to spherical particles, $\lambda>1$ corresponds to a prolate spheroid and $\lambda<1$ corresponds to an oblate spheroid. For $\lambda=1$, the equations of motion are given by equations \eqref{sphericalv} and \eqref{sphericalx}, while for $\lambda\ne1$ the equations of motion are \eqref{sphericalv}, \eqref{sphericalx}, \eqref{eq:rotation} and \eqref{eq:Qdot}. For details about the moment of inertia tensor $J$, the torque term $\mathbf{T}$ and the resistance tensor $K$ for the $\lambda\ne1$ cases, we refer to \ref{model}.
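As a quick sanity check, the Taylor-Green field above is divergence-free; in the snippet below this is confirmed by central differences at a handful of random points.

```python
import numpy as np

def u(p):
    """The stationary Taylor-Green vortex field used in the experiments."""
    x, y, z = p
    return np.array([
         2 * np.cos(2*np.pi*x) * np.sin(2*np.pi*y) * np.sin(2*np.pi*z),
        -np.sin(2*np.pi*x) * np.cos(2*np.pi*y) * np.sin(2*np.pi*z),
        -np.sin(2*np.pi*x) * np.sin(2*np.pi*y) * np.cos(2*np.pi*z)])

rng = np.random.default_rng(1)
d = 1e-6
max_div = 0.0
for p in rng.random((20, 3)):
    div = sum((u(p + d*e)[i] - u(p - d*e)[i]) / (2*d)
              for i, e in enumerate(np.eye(3)))
    max_div = max(max_div, abs(div))
```

Analytically, the three $x$-, $y$- and $z$-derivatives contribute $-4\pi$, $2\pi$ and $2\pi$ times the common factor $\sin(2\pi x)\sin(2\pi y)\sin(2\pi z)$, which cancel exactly.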
Finally, we note that for all of the following experiments, the particles are given a random initial location within a box of width $0.01$ centered at the point $x_0 = (1/3,1/5,1/7)^T$ in the domain and, for non-spherical particles, a random initial orientation. \subsection{Computational cost} Here, we outline the main computational costs associated with the methods. The two main steps in the algorithm are the interpolation step and the time integration step, which we examine separately. The wall clock times $T_w$ for $10^4$ time steps of the considered integration methods using exact evaluation of the fluid field are measured and presented in table \ref{integration times}, and $T_w$ for $10^4$ time steps of the various interpolation methods using the FE1 method are presented in table \ref{interpolation times}. We note that the centrifuge-preserving methods are slightly more costly due to extra evaluations of the $\Phi_{ah}^{[LT]}$ operator. However, one could speed up many of these splitting methods by observing that they are conjugate to a faster method with fewer stages, for example $$\left(\Phi_h^{[SS]}\right)^N = \psi^{[1]}_{\frac{h}{2}}\circ\left(\Phi_h^{[LT]*}\right)^N \circ\psi^{[1]}_{\frac{-h}{2}},$$ hence repeated evaluation of the operator $\Phi_h^{[SS]}$, when implemented in this way, effectively has the same cost as $\Phi_h^{[LT]*}$. Similar observations hold for the centrifuge-preserving methods. For the interpolation step, there are two main calculations that contribute the most to the computational cost. The first is the solution of a linear system of size $3n^3\times3n^3$ to find the interpolation coefficients. Gaussian elimination is used for this purpose due to its simplicity and the fact that the systems are not so large (at most $192\times192$ for the MRBF3 and TP3 methods). However, we note the existence of exact matrix inverses for the coefficient matrices of these linear systems.
These can be found in \cite{lekien2005tricubic} for the TP method and \cite{akaike1973block} for the MRBF method, the latter being due to the fact that the coefficient matrix has a block Toeplitz structure for MRBF interpolation on Cartesian grids. The next most significant cost is the evaluation of the sums of basis functions, that is, the sum in \eqref{MRBFsurf} and a similar equation for the TP methods. MRBF interpolation involves the evaluation of more complex basis functions (i.e., matrix-vector products containing exponentials of polynomials), which is more costly than evaluating sums of monomials for the TP interpolation. This cost is more of a burden for the MRBF2 and MRBF3 methods, as seen in table \ref{interpolation times}. \begin{table}[h] \centering \begin{tabular}{c|ccccccc} & FE1 & LT1 & CP1 & AB2 & RK2 & CP2 & SS2 \\ \hline $T_w\,(s)$ & 1.1817 & 1.3610 & 1.6577 & 2.0524 & 2.0376 & 2.5001 & 1.6455 \\ \end{tabular} \caption{The wall clock times for $10^4$ time steps using different integration methods and exact interpolation.} \label{integration times} \vspace{0.5cm} \begin{tabular}{c|cccccc} & MRBF1 & MRBF2 & MRBF3 & TP1 & TP2 & TP3\\ \hline $T_w\,(s)$ & 3.3796 & 5.0776 & 12.0333 & 3.7263 & 4.2226 & 5.7551 \\ \end{tabular} \caption{The wall clock times for $10^4$ time steps using the FE1 method and different interpolation methods.} \label{interpolation times} \end{table} We see here that the MRBF1 and TP1 methods are roughly equal in cost. The MRBF2 method costs about double that of TP1 and MRBF1 and is more expensive than the TP2 method. The MRBF3 method is double the cost of the TP3 method. We recall that we are not constrained to these three choices of MRBF methods and one is free to use any number of data points to achieve an optimal balance of accuracy and cost. This freedom is due to the fact that MRBF interpolation was designed for interpolation on scattered data points \cite{wendland2004scattered}.
This option is not available for the TP$n$ methods, where $n+1$ grid points in each dimension are required to ensure the existence of a unique degree $n$ interpolating polynomial. \subsection{Convergence} In this section, we verify the convergence of the integration methods, first with exact interpolation and then with various combinations of the interpolation methods, for spherical ($\lambda = 1$) and non-spherical ($\lambda = 10$) particles. In these experiments, we set $St=1$ and compute the particles' dynamics for time $T=1$. The convergence of the error of the integration methods, measured in the 2-norm, is presented in figures \ref{fig:conv sph} and \ref{fig:conv rod}. We observe here that the FE1 and LT1 methods have similar accuracy, as do the RK2 and SS2 methods. It is noted that the benefits of preserving contractivity in the various splitting methods are expected to be seen over longer times. One remarkable observation here is that for this Stokes number the first-order CP1 method is competitive with the second-order RK2 and AB2 methods at large time steps; furthermore, the CP2 method is the most accurate, by a factor of about 5, in both scenarios. Figures \ref{fig:conv2 sph} and \ref{fig:conv2 rod} show the convergence of the CP2 and RK2 methods with different interpolation methods. We see that the methods initially converge at their expected order, but as $h$ goes to zero the integration error becomes overshadowed by the $h$-independent interpolation error. We observe that the MRBF solutions are more accurate than the TP solutions using an equal number of data points. However, the TP3 solution is expected to perform better for longer simulations where particles cross grid cells. This is because the piece-wise fluid field constructed from the TP3 method is globally $C^1(\mathbb{R}^3)$, meaning that the spatial derivatives of the fluid velocity are everywhere continuous. This is not true for the other methods.
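The observed order in such a convergence study is obtained by halving the time step and taking the base-2 logarithm of the error ratio. A minimal self-contained sketch of the procedure (ours, using a scalar model problem rather than the particle equations of motion):

```python
import numpy as np

def integrate(f, y0, h, n_steps, method):
    """Fixed-step integration of y' = f(y) for a scalar model problem."""
    y = y0
    for _ in range(n_steps):
        if method == "FE1":                   # forward Euler, order 1
            y = y + h * f(y)
        elif method == "RK2":                 # explicit midpoint, order 2
            y = y + h * f(y + 0.5 * h * f(y))
    return y

# Model problem y' = -y, y(0) = 1, with exact solution y(T) = exp(-T)
f, T = (lambda y: -y), 1.0
for method, expected_order in (("FE1", 1), ("RK2", 2)):
    errors = []
    for h in (1 / 40, 1 / 80):                # halve the step, compare errors
        y = integrate(f, 1.0, h, round(T / h), method)
        errors.append(abs(y - np.exp(-T)))
    observed = np.log2(errors[0] / errors[1])  # observed order of convergence
    assert abs(observed - expected_order) < 0.1
```

In the interpolated setting the same measurement flattens out once the $h$-independent interpolation error dominates, which is the plateau visible in figures \ref{fig:conv2 sph} and \ref{fig:conv2 rod}.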
Finally, we remark that the centrifuge-preserving methods perform equally well for non-spherical particles. \begin{figure}[!h] \centering \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{convergence_figs_CP1/convergence_spheres_exact} \caption{} \label{fig:conv sph} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{convergence_figs_CP1/convergence_rods_exact} \caption{} \label{fig:conv rod} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{convergence_figs_CP1/convergence_spheres_interp} \caption{} \label{fig:conv2 sph} \end{subfigure} \begin{subfigure}{0.4\textwidth} \includegraphics[width=\linewidth]{convergence_figs_CP1/convergence_rods_interp} \caption{} \label{fig:conv2 rod} \end{subfigure} \caption{Figures (a) and (b) show the convergence of the numerical integration methods using exact interpolation for the $\lambda = 1$ and $\lambda = 10$ equations, respectively. Figures (c) and (d) show the convergence of the CP2 and RK2 methods with the six interpolation methods for the $\lambda = 1$ and $\lambda = 10$ equations, respectively. The dashed lines are $O(h)$ and $O(h^2)$.} \end{figure} \subsection{Simulating suspensions of particles}\label{sec:clusters} Up until now we have mainly focused on the average error in the positions of individual particles. It is well known that standard methods such as polynomial interpolation and Adams-Bashforth integration do a good job at minimizing this truncation error in some norm. However, while it is indeed important that this conventional measure of error is kept at a minimum, in practice one is usually more interested in the properties of distributions of many indistinguishable particles, meaning that the individual error of each particle is less important.
Due to this, it is more desirable that an algorithm accurately reproduces the spatial statistical properties of many particles rather than minimizing the absolute error of each individual particle. With this in mind, the main goal here is to test to what extent the aforementioned errors affect suspensions of particles when viewed as a single discrete probability distribution. We will show that distributions of particles calculated by our proposed geometric methods more closely resemble that of the exact solution, despite sometimes having a higher average error per particle in the conventional sense. In our context, a distribution $P=\{ (\mathbf{x}_i,w_i)\}_{i=1}^{n_c}$ is a set of $n_c$ non-empty, equally sized cells, where $\mathbf{x}_i$ is the location of the cell center and $w_i$ is a weight equal to the number of particles in that cell. We let $P_n$ denote a distribution where the particle locations are calculated by a numerical method, $P_{\mathrm{ref}}$ refers to the distribution obtained by the reference solution and we use $300\le n_c\le400$ depending on the spread of particles. We will determine the accuracy of $P_n$ using three measures, which we now outline. The first is the first Wasserstein distance (also known as the Earth Mover's Distance), a natural metric for comparing two discrete probability distributions of equal size. The first Wasserstein distance between two probability distributions is denoted by $W_1(P_1,P_2)$ and is a measure of the cost of transporting the distribution $P_1$ into $P_2$ in the cheapest way possible. The cost is measured as the distance between cell centers, measured in the 2-norm and weighted by the number of particles being transported. For mathematical details about the first Wasserstein distance, we refer the reader to \cite{rubner2000earth}; the first Wasserstein distances are computed numerically using a publicly available MATLAB code \cite{ulmaz}.
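For small cell counts, the first Wasserstein distance can be computed directly as a linear transport program. The following sketch (our illustration using SciPy, not the cited MATLAB code) solves for the cheapest transport plan between two weighted sets of cell centers:

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein1(x1, w1, x2, w2):
    """First Wasserstein (Earth Mover's) distance between two discrete
    distributions {(x_i, w_i)}, posed as a linear transport problem."""
    w1 = np.asarray(w1, float) / np.sum(w1)   # normalise total mass to one
    w2 = np.asarray(w2, float) / np.sum(w2)
    n, m = len(w1), len(w2)
    # Cost of moving mass between cell centres: the 2-norm distance
    cost = np.linalg.norm(x1[:, None, :] - x2[None, :, :], axis=2)
    # Transport plan gamma >= 0 (flattened row-major) with marginals w1, w2
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0      # sum_j gamma_ij = w1_i
    for j in range(m):
        A_eq[n + j, j::m] = 1.0               # sum_i gamma_ij = w2_j
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=np.concatenate([w1, w2]),
                  bounds=(0, None), method="highs")
    return res.fun

# Transporting a unit mass a distance of 0.5 costs exactly 0.5
x1 = np.array([[0.0, 0.0, 0.0]])
x2 = np.array([[0.5, 0.0, 0.0]])
d = wasserstein1(x1, [1.0], x2, [1.0])
```

The dense linear program scales poorly with the number of cells, which is why specialized Earth Mover's Distance solvers such as the cited MATLAB code are used in practice.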
We denote by $W_1(P_n) = W_1(P_n,P_{\mathrm{ref}})$ the first Wasserstein distance between $P_n$ and $P_{\mathrm{ref}}$. The second is the relative entropy (also known as the Kullback-Leibler divergence) \cite{kullback1951information}, which is a measure of how much information is lost from a reference distribution $P_2$ when an approximate distribution $P_1$ is used. The relative entropy is calculated by \begin{equation} E(P_1,P_2) = \sum_{\mathbf{x}_i\in\Omega_P} P_1(\mathbf{x}_i) \log\left(\frac{P_1(\mathbf{x}_i)}{P_2(\mathbf{x}_i)}\right), \end{equation} where $P(\mathbf{x}_i) = w_i$ is the number of particles in the cell at $\mathbf{x}_i$ and $\Omega_P$ is the support of the two distributions. If there is an empty cell in one distribution and not the other, say at $\mathbf{x}=\mathbf{x}_0$, we use $P(\mathbf{x}_0)=10^{-1}$ to avoid singularities. This modestly penalizes the approximate solution for predicting a non-zero probability of having a particle in a cell that should have zero particles according to the reference distribution. We denote by $E(P_n) = E(P_n,P_{\mathrm{ref}})/n_p$ the relative entropy between $P_n$ and $P_{\mathrm{ref}}$ scaled by the number of particles $n_p = 10^4$. Finally, the third measure of the accuracy of the distribution is the average error of the particle positions $\overline{\Delta \mathbf{x}_n}$. This conventional measure of error is calculated by taking the difference between the final positions of the numerical and reference solutions starting from the same initial conditions and averaging over all $n_p = 10^4$ particles, that is \begin{equation} \overline{\Delta \mathbf{x}_n} = \frac{1}{n_p}\sum_{i=1}^{n_p}\|\mathbf{x}_{n,i}-\mathbf{x}_{\mathrm{ref},i}\|_2, \end{equation} where $\mathbf{x}_{n,i}$ is the $i$th particle calculated by the numerical method and $\mathbf{x}_{\mathrm{ref},i}$ is the $i$th particle under the reference solution.
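The latter two measures can be sketched as follows (our illustrative implementation; the cell-index dictionaries standing in for the distributions are a hypothetical data layout, not the authors'):

```python
import numpy as np

FLOOR = 1e-1  # value substituted for empty cells to avoid singularities

def relative_entropy(P1, P2, n_p):
    """Relative entropy E(P1, P2)/n_p between two distributions stored as
    {cell index: particle count} dictionaries (an illustrative layout)."""
    total = 0.0
    for cell in set(P1) | set(P2):        # support of the two distributions
        p = P1.get(cell, FLOOR)           # empty cells are floored at 1e-1
        q = P2.get(cell, FLOOR)
        total += p * np.log(p / q)
    return total / n_p

def average_error(x_num, x_ref):
    """Average 2-norm position error over all particles (rows of the arrays)."""
    return float(np.mean(np.linalg.norm(x_num - x_ref, axis=1)))

# Two identical distributions carry zero relative entropy
P = {(0, 0, 0): 3, (1, 0, 0): 7}
E_same = relative_entropy(P, P, n_p=10)
```

Note that with the empty-cell floor the quantity is no longer a true divergence between normalized probabilities, but it retains the property of penalizing spurious occupancy, which is the behaviour described above.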
As the rotational variables are strongly coupled with the translational variables, errors in the rotational dynamics will also influence the final positions of the particles; hence, this is a reasonable measure of an algorithm's overall accuracy in computing the dynamics of a single particle. We recall that this error is the one that conventional methods are designed to reduce when referring to the global order of accuracy of a method. In the forthcoming experiments, we will use various combinations of integration and interpolation methods to compute the paths of $10^4$ particles in the discrete Taylor-Green vortices starting with random positions and orientations within a cube of width $1/100$ centered at the point $(1/3,1/5,1/7)^T$. We perform three experiments. The first compares how the various numerical integration methods and their errors affect the spatial distribution of suspensions of particles in the absence of interpolation errors. The second experiment investigates how interpolation errors affect the spatial distribution of particles using the CP2 method. Finally, we look at how a combination of MRBF interpolation and centrifuge-preserving integration can be used to calculate fast and accurate suspensions of particles compared to the conventional AB2+TP$n$ methods, similar to the methods used in \cite{Portela, challabotla2015orientation, van2000dynamics, PanBanerjee, wang1996large}, for example. \subsubsection{Comparison of integration methods}\label{sec:spherical clusters} In this experiment we use the seven integration methods outlined in table \ref{table:integration properties} to simulate a suspension of particles evolving in the Taylor-Green flow with exact interpolation. The methods are each tested in six separate simulations, three with Stokes numbers of $St = 1/5,1,10$ for spherical particles ($\lambda = 1$) and three with the same Stokes numbers for non-spherical particles ($\lambda = 1/10$).
At the end of the simulation the relative entropy $E(P_n)$, first Wasserstein distance $W(P_n)$ and the average spatial error $\overline{\Delta \mathbf{x}_n}$ between the numerical distribution and the reference distribution are calculated and presented in table \ref{table:integration}. The time step $h$ and total simulation time $T$ are also presented in this table. We start by discussing some qualitative features of the final distributions, examples of which are given in figure \ref{integration_spheres}. We then discuss the results of table \ref{table:integration} in detail. Figure \ref{integration_spheres} depicts the final distribution of the particles for the various integration methods. The particle positions are plotted modulo 2 for presentation purposes and represented by black dots, while the reference solution is plotted using green dots. Figures \ref{is1} to \ref{is6} correspond to the $St = 10$, $\lambda = 1$ simulation and are viewed along the $y$ direction. We see here that the CP2 solution is able to predict the correct clustering in all the regions that are predicted by the green reference solution. The LT1, CP1 and SS2 solutions are visually similar to each other; however, they do not correctly predict the clustering of particles in some regions, indicated by regions of green dots that are void of black dots. The RK2 and AB2 solutions do a worse job, as seen, again, by even more regions with a higher concentration of green dots compared to black dots. Similar observations are again seen in figures \ref{is7} to \ref{is12}, which correspond to the $St = 1$, $\lambda = 1/10$ simulation, viewed along the $z$ direction. In this simulation, the particles more closely follow the streamlines of the fluid field and more quickly concentrate in regions of high strain, as seen by the regions of dense green dots.
In these figures, it is even more easily seen that the four contractivity-preserving methods do a good job of correctly clustering particles in regions where the reference solution does, while the FE1 and RK2 solutions exhibit multiple regions with an erroneous concentration of black dots that are void of green dots. The AB2 solution is unstable for these parameters. \begin{figure} \centering \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_integration_2_perturbed_spheres_box/Figs_CP1_2_perturbed_spheres_box_exact1+CP1_nt_21_zx} \caption{}\label{is1} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_integration_2_perturbed_spheres_box/Figs_integration_2_perturbed_spheres_box_exact+RK2_nt_21_zx} \caption{}\label{is2} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_integration_2_perturbed_spheres_box/Figs_integration_2_perturbed_spheres_box_exact+SS2_nt_21_zx} \caption{}\label{is3} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_integration_2_perturbed_spheres_box/Figs_integration_2_perturbed_spheres_box_exact+LT1_nt_21_zx} \caption{}\label{is4} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_integration_2_perturbed_spheres_box/Figs_integration_2_perturbed_spheres_box_exact+AB2_nt_21_zx} \caption{}\label{is5} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_integration_2_perturbed_spheres_box/Figs_integration_2_perturbed_spheres_box_exact+CP2_nt_21_zx} \caption{}\label{is6} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_integration_6_O1_disks_box/Figs_integration_6_O1_disks_box_exact+FE1_nt_21_xy} \caption{}\label{is7} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_integration_6_O1_disks_box/Figs_integration_6_O1_disks_box_exact+RK2_nt_21_xy}
\caption{}\label{is8} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_integration_6_O1_disks_box/Figs_integration_6_O1_disks_box_exact+SS2_nt_21_xy} \caption{}\label{is9} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_integration_6_O1_disks_box/Figs_integration_6_O1_disks_box_exact+LT1_nt_21_xy} \caption{}\label{is10} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_integration_6_O1_disks_box/Figs_CP1_5_O1_disks_box_exact1+CP1_nt_21_xy} \caption{}\label{is11} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_integration_6_O1_disks_box/Figs_integration_6_O1_disks_box_exact+CP2_nt_21_xy} \caption{}\label{is12} \end{subfigure} \caption{Figures (a) through (f) show the spatial distribution of the particles in the $x-z$ plane for the $St = 10$, $h = 1/5$, $T = 20$, $\lambda = 1$ simulation from table \ref{table:integration} (the exact+FE1 solution is not shown). Figures (g) through (l) show the spatial distribution of the particles in the $x-y$ plane for the $St = 1$, $h = 1/20$, $T = 8$, $\lambda = 1/10$ simulation (the exact+AB2 solution is not shown). The reference solution is plotted in green in all figures.} \label{integration_spheres} \end{figure} To quantify the above observations, which have up until now been visual, we turn our attention to table \ref{table:integration}. We start by outlining some general observations that are common to all six simulations. Looking first at the order one methods, we observe that the LT1 and CP1 methods, which are contractivity-preserving, outperform the FE1 method in almost all measures in each simulation, despite the fact that their order of accuracy is the same.
What is striking here is that in most simulations the LT1 and CP1 methods generally have a lower $\overline{\Delta \mathbf{x}_n}$, $W(P_n)$ and $E(P_n)$ compared to the conventional RK2 and AB2 methods despite being of lower order and computational cost. Similar observations are made if we turn our attention towards the order two methods. That is, the SS2 method in most cases has lower $\overline{\Delta \mathbf{x}_n}$, $W(P_n)$ and $E(P_n)$ than the RK2 and AB2 methods, and better still is the CP2 method. The advantage of the CP2 method over the SS2 is more pronounced than the advantage of the CP1 method over the LT1. The CP2 method has the lowest $\overline{\Delta \mathbf{x}_n}$, $W(P_n)$ and $E(P_n)$ in all six simulations and is clearly the best method here in all three metrics. \begin{table}[h] \centering \begin{tabular}{|c|c|ccc|ccc|} \hline & & \multicolumn{3}{c|}{$\lambda = 1$} & \multicolumn{3}{c|}{$\lambda = 1/10$} \\ \hline & $P_n$ & $E(P_n)$ & $W(P_n)$ & $\overline{\Delta \mathbf{x}_n}$ & $E(P_n)$ & $W(P_n)$ & $\overline{\Delta \mathbf{x}_n}$ \\ \hline \multirow{2}{*}{$St = \frac{1}{5}$} & exact+FE1 & 7.0143 & 0.4466 & 0.5534 & -- & -- & -- \\ & exact+LT1 & 0.4596 & 0.0070 & 0.0098 & 2.3929 & 0.1793 & 0.2086 \\ \multirow{2}{*}{$h = \frac{1}{50}$} & exact+CP1 & 0.1212 & 0.0049 & 0.0065 & 2.3100 & 0.1750 & 0.2048 \\ & exact+AB2 & 4.0952 & 0.0697 & 0.0585 & -- & -- & -- \\ \multirow{2}{*}{$T = 4$} & exact+RK2 & 1.6791 & 0.0234 & 0.0215 & -- & -- & -- \\ & exact+CP2 & 0.0581 & 0.0022 & 0.0027 & 0.5339 & 0.0666 & 0.0983 \\ & exact+SS2 & 0.1169 & 0.0048 & 0.0062 & 2.2956 & 0.1739 & 0.2039 \\ \hline \multirow{2}{*}{ $St = 1$ } & exact+FE1 & 6.9916 & 1.6321 & 2.5224 & 0.9086 & 0.6445 & 1.7383 \\ & exact+LT1 & 0.1538 & 0.0929 & 1.0000 & 0.6645 & 0.4788 & 1.2536 \\ \multirow{2}{*}{ $h = \frac{1}{20}$ } & exact+CP1 & 0.1061 & 0.1034 & 1.0084 & 0.5716 & 0.4293 & 1.2554 \\ & exact+AB2 & 0.4833 & 0.2715 & 1.4977 & -- & -- & -- \\ \multirow{2}{*}{ $T = 8$ } & exact+RK2 
& 0.2696 & 0.1566 & 1.3083 & 1.3583 & 0.4631 & 1.4790 \\ & exact+CP2 & 0.0786 & 0.0558 & 0.5512 & 0.2505 & 0.2892 & 1.0857 \\ & exact+SS2 & 0.1435 & 0.0890 & 1.0044 & 0.6374 & 0.4858 & 1.2535 \\ \hline \multirow{2}{*}{ $St = 10$ } & exact+FE1 & 6.8070 & 2.7326 & 2.8528 & 1.3145 & 1.9214 & 4.8392 \\ & exact+LT1 & 0.3531 & 0.1626 & 0.6588 & 0.1002 & 0.5384 & 5.1653 \\ \multirow{2}{*}{ $h = \frac{1}{5}$ }& exact+CP1 & 0.3014 & 0.1619 & 0.6479 & 0.0658 & 0.5721 & 5.1883\\ & exact+AB2 & 0.7651 & 0.3095 & 0.8671 & 0.8826 & 1.9070 & 4.9994 \\ \multirow{2}{*}{ $T = 20$ } & exact+RK2 & 0.5804 & 0.2522 & 0.9074 & 0.1593 & 0.7891 & 5.2612 \\ & exact+CP2 & 0.0733 & 0.0667 & 0.2919 & 0.0711 & 0.2761 & 4.9678 \\ & exact+SS2 & 0.3157 & 0.1567 & 0.6700 & 0.0992 & 0.5433 & 5.2206 \\ \hline \end{tabular} \caption{The relative entropy $E(P_n)$, first Wasserstein distance $W(P_n)$ and $\overline{\Delta \mathbf{x}_n}$ between the numerical distribution $P_n$ and the reference distribution. The numerical distributions are calculated by various integration methods that use exact interpolation, as shown in the second column. The first column contains the Stokes number $St$, time step $h$ and simulation time $T$ used in the six simulations. The first row contains the aspect ratio $\lambda$ of the particle shape. Values with a $-$ mean that the numerical solution is unstable. } \label{table:integration} \end{table} For the $St = 1$, $\lambda = 1/10$ simulation, the CP1 solution has a larger $\overline{\Delta \mathbf{x}_n}$ than the SS2 solution, but a lower $W(P_n)$ and $E(P_n)$, which suggests that for these simulation parameters the centrifuge-preserving property is more advantageous for producing accurate distributions than simply reducing the error of the method in the conventional sense. In this simulation, the CP1 method is the second best in all measures, the best being the CP2 method.
It is also noteworthy that in other simulations the CP1 method has roughly equal, and sometimes lower, $\overline{\Delta \mathbf{x}_n}$, $W(P_n)$ and $E(P_n)$ than the SS2 solution, which further suggests that the centrifuge-preserving property is advantageous. One of the most remarkable observations is made for the $St = 10$, $\lambda = 1/10$ experiment. Here, the values of $\overline{\Delta \mathbf{x}_n}$ are quite severe and roughly the same for all methods, due to the fact that the time step is quite large and the non-spherical ODEs are stiffer. Despite this, the contractivity-preserving methods have a much lower $W(P_n)$ and $E(P_n)$, and the centrifuge-preserving methods are better still. This highlights the fact that preserving the aforementioned physical features in the numerical solution results in spatial distributions that more closely resemble the reference solution, despite having the same $\overline{\Delta \mathbf{x}_n}$. Finally, we mention that all the splitting schemes have better stability properties and are still able to produce accurate clusters of particles for low Stokes numbers and reasonably large time steps, as noted in the $\lambda = 1/10$ simulations for $St = 1/5$ and $St = 1$, where we begin to see some of the conventional methods losing stability. \subsubsection{Comparison of interpolation methods} In this experiment we compare the MRBF and TP interpolation methods in combination with the CP2 method to simulate a suspension of particles evolving in the Taylor-Green flow. Six separate simulations are performed, three with Stokes numbers of $St = 1/10,1,10$ for spherical particles ($\lambda = 1$) and three with the same Stokes numbers for non-spherical particles ($\lambda = 5$).
At the end of each simulation the average spatial error $\overline{\Delta \mathbf{x}_n}$, relative entropy $E(P_n)$ and the first Wasserstein distance $W(P_n)$ between the numerical distribution and the reference distribution are calculated and the results are presented in table \ref{table:interpolation}. The time step $h$ and total simulation time $T$ are also presented in this table. Some spatial distributions produced by the different interpolation methods are presented in figure \ref{interpolation_figs}. \begin{figure} \centering \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_interpolation_7_stiff_rods_box/Figs_interpolation_7_stiff_rods_box_TP1+CP2_nt_21_xy} \caption{}\label{ii1} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_interpolation_7_stiff_rods_box/Figs_interpolation_7_stiff_rods_box_TP2+CP2_nt_21_xy} \caption{}\label{ii2} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_interpolation_7_stiff_rods_box/Figs_interpolation_7_stiff_rods_box_TP3+CP2_nt_21_xy} \caption{}\label{ii3} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_interpolation_7_stiff_rods_box/Figs_interpolation_7_stiff_rods_box_MRBF1+CP2_nt_21_xy} \caption{}\label{ii4} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_interpolation_7_stiff_rods_box/Figs_interpolation_7_stiff_rods_box_MRBF2+CP2_nt_21_xy} \caption{}\label{ii5} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_interpolation_7_stiff_rods_box/Figs_interpolation_7_stiff_rods_box_MRBF3+CP2_nt_21_xy} \caption{}\label{ii6} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_interpolation_5_O1_spheres_box/Figs_interpolation_5_O1_spheres_box_TP1+CP2_nt_21_yz} \caption{}\label{ii7} \end{subfigure} \begin{subfigure}{0.32\textwidth} 
\includegraphics[width=\linewidth]{Figs_interpolation_5_O1_spheres_box/Figs_interpolation_5_O1_spheres_box_TP2+CP2_nt_21_yz} \caption{}\label{ii8} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_interpolation_5_O1_spheres_box/Figs_interpolation_5_O1_spheres_box_TP3+CP2_nt_21_yz} \caption{}\label{ii9} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_interpolation_5_O1_spheres_box/Figs_interpolation_5_O1_spheres_box_MRBF1+CP2_nt_21_yz} \caption{}\label{ii10} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_interpolation_5_O1_spheres_box/Figs_interpolation_5_O1_spheres_box_MRBF2+CP2_nt_21_yz} \caption{}\label{ii11} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_interpolation_5_O1_spheres_box/Figs_interpolation_5_O1_spheres_box_MRBF3+CP2_nt_21_yz} \caption{}\label{ii12} \end{subfigure} \caption{Figures (a) through (f) show the spatial distribution of the particles in the $x-y$ plane for the $St = 1/10$, $h = 1/100$, $T = 4$, $\lambda = 5$ simulation from table \ref{table:interpolation}. Figures (g) through (l) show the spatial distribution of the particles in the $y-z$ plane for the $St = 1$, $h = 1/40$, $T = 8$, $\lambda = 1$ simulation. The reference solution is plotted in green in all figures.} \label{interpolation_figs} \end{figure} We now direct our attention to figures \ref{ii1} to \ref{ii6}, which show the final distribution of the $St = 1/10$, $\lambda = 5$ simulation looking down the $z$-axis. It can be seen here that the three MRBF solutions look visually very similar to the reference solution, as does the TP3 solution. If we look at the corresponding part of table \ref{table:interpolation}, we see that the TP3 solution has a $\overline{\Delta \mathbf{x}_n}$ of 0.3468, which is lower than that of the MRBF1 solution, which has a $\overline{\Delta \mathbf{x}_n}$ of 0.5389.
Despite this, the MRBF1 solution, which we note performs exceptionally well here, has a lower $E(P_n)$ and $W(P_n)$, meaning that the final distribution is more similar to the reference distribution even though the $\overline{\Delta \mathbf{x}_n}$ is greater. Figures \ref{ii7} to \ref{ii12} show the final distribution of the $St = 1$, $\lambda = 1$ simulation looking down the $x$-axis. Here we see that the TP1, TP2 and MRBF1 solutions differ visually from the reference solution, whilst the TP3, MRBF2 and MRBF3 solutions look quite similar. From the corresponding section of table \ref{table:interpolation}, we see that the MRBF2 solution's $\overline{\Delta \mathbf{x}_n}$ is 0.5004 compared to 0.3332 for the TP3 solution, but both have a similar $E(P_n)$ and $W(P_n)$. Furthermore, there are many examples here of the MRBF solutions having higher $\overline{\Delta \mathbf{x}_n}$, but lower $E(P_n)$ and $W(P_n)$. This can be seen in all three $\lambda = 1$ simulations, where the MRBF2 solution has a larger $\overline{\Delta \mathbf{x}_n}$ than the TP3 solution but similar or lower $E(P_n)$ and $W(P_n)$. For the $\lambda = 5$ simulations with $St = 1$ and $St = 10$, the MRBF2 solution outperforms the TP3 solution in all three measures. Such examples indicate that preserving the divergence-free condition is important to achieve accurate spatial distributions. We now make some general observations about table \ref{table:interpolation}. We see that in all but one simulation, the MRBF1 solution outperforms the TP2 solution in all three measures. It is noteworthy that the MRBF1 method is as fast as TP1 interpolation, since both require only eight data points for the interpolation as opposed to 27 data points for the TP2 interpolation, which is a slower method. Additionally, in all six simulations the MRBF3 interpolation method outperforms all the TP solutions in all measures.
To summarize, we have seen many examples of the MRBF solutions producing distributions that are more similar to the reference solution than the TP solutions, despite having worse $\overline{\Delta \mathbf{x}_n}$. These observations are consistent with the fact that the CP2 method, among others, loses accuracy in $\Delta\Psi^{[n]}$ when evolving particles in a non-divergence-free flow field. That is, the physical volume $\Psi^{[n]}$ is more strongly affected when the divergence-free condition is broken, despite the fact that the order of accuracy of the method remains unaffected. \begin{table}[h] \centering \begin{tabular}{|c|c|ccc|ccc|} \hline & & \multicolumn{3}{c|}{$\lambda = 1$} & \multicolumn{3}{c|}{$\lambda = 5$} \\ \hline & $P_n$ & $E(P_n)$ & $W(P_n)$ & $\overline{\Delta \mathbf{x}_n}$ & $E(P_n)$ & $W(P_n)$ & $\overline{\Delta \mathbf{x}_n}$ \\ \hline \multirow{2}{*}{ $St = \frac{1}{10}$ } & MRBF1+CP2 & 0.1285 & 0.0860 & 0.3063 & 0.0514 & 0.0478 & 0.5389 \\ & MRBF2+CP2 & 0.1002 & 0.0388 & 0.1256 & 0.0967 & 0.0657 & 0.3578 \\ \multirow{2}{*}{ $h = \frac{1}{100}$ } & MRBF3+CP2 & 0.0380 & 0.0164 & 0.0614 & 0.0337 & 0.0242 & 0.1732 \\ & TP1+CP2 & 7.9636 & 0.7427 & 0.7957 & 3.1871 & 0.5657 & 1.1261 \\ \multirow{2}{*}{ $T = 4$ } & TP2+CP2 & 3.7166 & 0.2975 & 0.3959 & 1.5779 & 0.3409 & 0.8653 \\ & TP3+CP2 & 0.3349 & 0.0625 & 0.1197 & 0.0778 & 0.0704 & 0.3468 \\ \hline \multirow{2}{*}{ $St = 1$ } & MRBF1+CP2 & 0.9861 & 0.5802 & 1.4746 & 0.0402 & 0.0965 & 1.3596 \\ & MRBF2+CP2 & 0.0444 & 0.0439 & 0.5004 & 0.0351 & 0.0698 & 0.7334 \\ \multirow{2}{*}{ $h = \frac{1}{40}$ } & MRBF3+CP2 & 0.0367 & 0.0353 & 0.2442 & 0.0258 & 0.0564 & 0.3755 \\ & TP1+CP2 & 1.6501 & 0.7003 & 1.5487 & 0.4281 & 0.3222 & 1.8162 \\ \multirow{2}{*}{ $T = 8$ } & TP2+CP2 & 1.5784 & 1.2164 & 1.8284 & 0.0585 & 0.1871 & 1.7269 \\ & TP3+CP2 & 0.0404 & 0.0440 & 0.3332 & 0.0310 & 0.0714 & 0.8375 \\ \hline \multirow{2}{*}{ $St = 10$ } & MRBF1+CP2 & 1.5452 & 0.0979 & 0.1223 & 0.1401 & 0.0726 & 0.1998 \\ & 
MRBF2+CP2 & 0.1071 & 0.0109 & 0.0154 & 0.0178 & 0.0183 & 0.0548 \\ \multirow{2}{*}{ $h = \frac{1}{10}$ } & MRBF3+CP2 & 0.0416 & 0.0041 & 0.0065 & 0.0112 & 0.0084 & 0.0216 \\ & TP1+CP2 & 5.8812 & 0.1545 & 0.1674 & 0.6419 & 0.1957 & 0.5617 \\ \multirow{2}{*}{ $T = 12$ } & TP2+CP2 & 4.0693 & 0.1540 & 0.2427 & 0.0955 & 0.0537 & 0.2222 \\ & TP3+CP2 & 0.7464 & 0.0135 & 0.0125 & 0.0339 & 0.0242 & 0.0946 \\ \hline \end{tabular} \caption{The relative entropy $E(P_n)$, first Wasserstein distance $W(P_n)$ and average error per particle $\overline{\Delta \mathbf{x}_n}$ between the numerical distribution $P_n$ and the reference distribution. The numerical distributions are calculated by various interpolation methods that use CP2 integration as shown in the second column. The first column contains the Stokes number $St$, time step $h$ and simulation time $T$ used in the six simulations. The first row contains the aspect ratio $\lambda$ of the particle shape.} \label{table:interpolation} \end{table} \subsection{Comparison of interpolation and integration methods} Our final experiment explores the benefit gained by combining MRBF interpolation with the centrifuge- and contractivity-preserving methods compared to the standard methods used in the literature. We will compare the methods TP1+FE1, MRBF1+CP1, TP2+AB2, TP3+AB2, MRBF2+CP2 and TP2+CP2. The first two methods are the cheapest and are of roughly equal cost. The methods TP2+AB$n$ are used in, for example, \cite{Portela, challabotla2015orientation, van2000dynamics, PanBanerjee,wang1996large} and subsequent studies. We include TP3+AB2 to test whether increasing the interpolation accuracy is a worthwhile use of computational resources. We also consider the MRBF2+CP2 solution, which is an accurate and economical combination of our proposed geometric methods. Finally, the TP2+CP2 method is considered to emphasize the negative implications of using a non-divergence-free interpolation method with the CP2 method.
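For concreteness, the three measures reported in the tables can be estimated along the following lines. This is our own minimal sketch; the paper's exact estimators, binning and dimensionality are not reproduced, and we reduce to one spatial coordinate for simplicity.

```python
import numpy as np

def relative_entropy(p, q, bins=30):
    """Histogram estimate of E(P) = sum_i P_i log(P_i / Q_i) for two
    particle clouds; the binning and regularising floor are our choices."""
    edges = np.histogram_bin_edges(np.concatenate([p, q]), bins=bins)
    P = np.histogram(p, bins=edges)[0] / len(p) + 1e-12
    Q = np.histogram(q, bins=edges)[0] / len(q) + 1e-12
    return float(np.sum(P * np.log(P / Q)))

def wasserstein1(p, q):
    """Exact first Wasserstein distance between equal-size 1D samples:
    the mean gap between the sorted samples."""
    return float(np.mean(np.abs(np.sort(p) - np.sort(q))))

def avg_particle_error(X, Y):
    """Average 2-norm position error over matched particles (rows)."""
    return float(np.mean(np.linalg.norm(X - Y, axis=-1)))

# stand-in data: a 'reference' cloud and a slightly perturbed 'numerical' one
rng = np.random.default_rng(1)
ref = rng.normal(0.0, 1.0, size=2000)
num = ref + rng.normal(0.0, 0.1, size=2000)
print(relative_entropy(num, ref), wasserstein1(num, ref))
```

Note the distinct roles of the measures: the per-particle error compares matched trajectories, while the entropy and Wasserstein distances compare the clouds as distributions and are insensitive to which particle sits where.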
Six simulations are performed, three with Stokes numbers of $St = 1/10,1,10$ for spherical particles ($\lambda = 1$) and three with the same Stokes numbers for non-spherical particles ($\lambda = 10$). At the end of the simulation, the average spatial error $\overline{\Delta \mathbf{x}_n}$, relative entropy $E(P_n)$ and the first Wasserstein distance $W(P_n)$ between the numerical distribution and the reference distribution are computed and presented in table \ref{table:interpolation} along with the time step and total simulation times used. \begin{figure} \centering \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_comparison_5_stiff_rods_box/Figs_comparison_5_stiff_rods_box_TP1+FE1_nt_21_xy} \caption{}\label{c1} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_comparison_5_stiff_rods_box/Figs_comparison_5_stiff_rods_box_TP2+AB2_nt_21_xy} \caption{} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_comparison_5_stiff_rods_box/Figs_comparison_5_stiff_rods_box_TP3+AB2_nt_21_xy} \caption{} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_comparison_5_stiff_rods_box/Figs_comparison_5_stiff_rods_box_MRBF1+CP1_nt_21_xy} \caption{} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_comparison_5_stiff_rods_box/Figs_comparison_5_stiff_rods_box_TP2+CP2_nt_21_xy} \caption{} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_comparison_5_stiff_rods_box/Figs_comparison_5_stiff_rods_box_MRBF2+CP2_nt_21_xy} \caption{}\label{c6} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_comparison_4_O1_spheres_box/Figs_comparison_4_O1_spheres_box_TP1+FE1_nt_21_yz} \caption{}\label{c7} \end{subfigure} \begin{subfigure}{0.32\textwidth} 
\includegraphics[width=\linewidth]{Figs_comparison_4_O1_spheres_box/Figs_comparison_4_O1_spheres_box_TP2+AB2_nt_21_yz} \caption{} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_comparison_4_O1_spheres_box/Figs_comparison_4_O1_spheres_box_TP3+AB2_nt_21_yz} \caption{} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_comparison_4_O1_spheres_box/Figs_comparison_4_O1_spheres_box_MRBF1+CP1_nt_21_yz} \caption{} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_comparison_4_O1_spheres_box/Figs_comparison_4_O1_spheres_box_TP2+CP2_nt_21_yz} \caption{} \end{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{Figs_comparison_4_O1_spheres_box/Figs_comparison_4_O1_spheres_box_MRBF2+CP2_nt_21_yz} \caption{}\label{c12} \end{subfigure} \caption{ Figures (a) through (f) show the spatial distribution of the particles in the $x-y$ plane for the $St = 1/10$, $h = 1/100$, $T = 6$, $\lambda = 10$ simulation from table \ref{table:comparison}. Figures (g) through (l) show the spatial distribution of the particles in the $z-y$ plane for the $St = 1$, $h = 1/40$, $T = 8$, $\lambda = 1$ simulation. The reference solution is plotted in green in all figures. } \label{comparison_figs} \end{figure} Figures \ref{c1} to \ref{c6} show the final distribution of the $St = 1/10$, $\lambda = 10$ simulation looking down the $z$-axis. We see that the only methods that look similar to the green reference solution are the MRBF1+CP1, MRBF2+CP2 and TP3+AB2 solutions. That is, both of our geometric methods and the most costly conventional method. It is worth noting that the TP2+CP2 and the TP2+AB2 solutions look very similar, despite one being generated by the CP2 method. 
This is likely because the CP2 method loses its centrifuge-preserving properties when the fluid field is not divergence-free, as seen in table \ref{table:vol errors}, and because the interpolation method is the dominant source of error. Turning our attention to the corresponding part of table \ref{table:comparison} (i.e., the top-right), we make a few remarks. The most striking one is that the MRBF1+CP1 solution has a larger $\overline{\Delta \mathbf{x}_n}$ than the TP3+AB2 solution, but its $E(P_n)$ and $W(P_n)$ are both far lower. This is in agreement with the fact that the TP3+AB2 method is better at reducing error in the conventional sense (i.e., the average 2-norm of the particle position errors $\overline{\Delta \mathbf{x}_n}$), but does a poorer job at reproducing the mechanisms that are responsible for the preferential distribution of particles (i.e., the centrifuge effect and the sum of the Lyapunov spectrum). We can make similar remarks for the $St = 1/10$, $\lambda = 1$ experiment, where, despite having a higher $\overline{\Delta \mathbf{x}_n}$, the MRBF1+CP1 method has a lower $E(P_n)$ and $W(P_n)$ than the TP3+AB2 method. These significant observations are prevalent in both $St<1$ experiments, which correspond to a stiffer vector field with a greater relative influence of fluid inertia.
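To illustrate the kind of splitting integration discussed here, the following sketch evolves a spherical particle under Stokes drag, $\dot{\mathbf{x}} = \mathbf{v}$, $\dot{\mathbf{v}} = (\mathbf{u}(\mathbf{x}) - \mathbf{v})/St$, with a simple drag/transport Lie splitting in which the stiff drag flow is solved exactly. This is a generic first-order sketch of ours, not the paper's CP1 or CP2 method, and the Taylor--Green test field and parameter values are arbitrary.

```python
import numpy as np

St = 0.1   # Stokes number (stiff regime: fast drag relaxation)
h = 0.01   # time step

def u(x):
    """Steady 2D Taylor-Green velocity field (a divergence-free stand-in flow)."""
    return np.array([np.sin(x[0]) * np.cos(x[1]),
                     -np.cos(x[0]) * np.sin(x[1])])

def lie_splitting_step(x, v):
    """One step of a drag/transport Lie splitting: the stiff drag flow
    v' = (u(x) - v)/St is solved exactly with x frozen, then the position
    is transported with the updated velocity."""
    ux = u(x)
    v = ux + (v - ux) * np.exp(-h / St)   # exact drag relaxation
    x = x + h * v                          # transport
    return x, v

x, v = np.array([1.0, 1.0]), np.zeros(2)
for _ in range(500):
    x, v = lie_splitting_step(x, v)
# after many drag times the particle velocity relaxes towards the local
# fluid velocity, with an O(St) slip that drives preferential concentration
print(np.linalg.norm(v - u(x)))
```

Because the drag subflow is integrated exactly, the scheme remains stable for $h \gg St$, which is the structural point; a symmetric composition of the two substeps would give a second-order analogue, but the centrifuge-preserving coefficients of the paper's CP methods are not reproduced here.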
\begin{table}[h] \centering \begin{tabular}{|c|c|ccc|ccc|} \hline & & \multicolumn{3}{c|}{$\lambda = 1$} & \multicolumn{3}{c|}{$\lambda = 10$} \\ \hline & $P_n$ & $E(P_n)$ & $W(P_n)$ & $\overline{\Delta \mathbf{x}_n}$ & $E(P_n)$ & $W(P_n)$ & $\overline{\Delta \mathbf{x}_n}$ \\ \hline \multirow{2}{*}{ $St = \frac{1}{10}$ } & MRBF1+CP1 & 0.1305 & 0.1050 & 0.3351 & 0.0423 & 0.0431 & 0.5453 \\ & TP1+FE1 & 7.9359 & 0.6340 & 0.7866 & 3.2075 & 0.7961 & 1.2645 \\ \multirow{2}{*}{ $h = \frac{1}{100}$ } & MRBF2+CP2 & 0.1249 & 0.0414 & 0.1242 & 0.0706 & 0.0456 & 0.3513 \\ & TP2+CP2 & 2.7166 & 0.2998 & 0.3990 & 1.2931 & 0.3131 & 0.8785 \\ \multirow{2}{*}{ $T = 6$ } & TP2+AB2 & 2.6957 & 0.2836 & 0.3805 & 1.2585 & 0.2852 & 0.8833 \\ & TP3+AB2 & 2.9179 & 0.1339 & 0.2512 & 0.1326 & 0.0804 & 0.4506 \\ \hline \multirow{2}{*}{ $St = 1$ } & MRBF1+CP1 & 0.9463 & 0.5888 & 1.4858 & 0.0395 & 0.1000 & 1.3961 \\ & TP1+FE1 & 4.3703 & 1.4721 & 2.1060 & 0.4507 & 0.5446 & 1.9263 \\ \multirow{2}{*}{ $h = \frac{1}{40}$ } & MRBF2+CP2 & 0.0507 & 0.0525 & 0.5055 & 0.0306 & 0.0753 & 0.7845 \\ & TP2+CP2 & 1.6212 & 1.2123 & 1.8251 & 0.0534 & 0.1630 & 1.7856 \\ \multirow{2}{*}{ $T = 8$ } & TP2+AB2 & 1.1982 & 1.0943 & 1.7759 & 0.0532 & 0.1595 & 1.7693 \\ & TP3+AB2 & 0.0589 & 0.0532 & 0.4793 & 0.0376 & 0.0865 & 1.0213 \\ \hline \multirow{2}{*}{ $St = 10$ } & MRBF1+CP1 & 1.2748 & 0.3379 & 0.5122 & 0.0727 & 0.1190 & 0.7322 \\ & TP1+FE1 & 6.1882 & 0.7767 & 0.8750 & 1.9416 & 0.6882 & 1.5169 \\ \multirow{2}{*}{ $h = \frac{1}{10}$ } & MRBF2+CP2 & 0.0979 & 0.0348 & 0.0663 & 0.0262 & 0.0438 & 0.2688 \\ & TP2+CP2 & 3.2931 & 0.4742 & 0.7083 & 0.0754 & 0.1176 & 0.8004 \\ \multirow{2}{*}{ $T = 16$ } & TP2+AB2 & 3.2219 & 0.4063 & 0.6304 & 0.0774 & 0.1194 & 0.7933 \\ & TP3+AB2 & 0.2631 & 0.0575 & 0.0865 & 0.0397 & 0.0594 & 0.4864 \\ \hline \end{tabular} \caption{The relative entropy $E(P_n)$, first Wasserstein distance $W(P_n)$ and average error per particle $\overline{\Delta \mathbf{x}_n}$ between the numerical 
distribution $P_n$ and the reference distribution. The numerical distributions are calculated by various combinations of integration and interpolation methods as shown in the second column. The first column contains the Stokes number $St$, time step $h$ and simulation time $T$ used in the six simulations. The first row contains the aspect ratio $\lambda$ of the particle shape.} \label{table:comparison} \end{table} We finish with some general observations. First, the MRBF1+CP1 solutions outperform the TP1+FE1, TP2+AB2 and TP2+CP2 methods in all three measures. Second, the TP2+AB2 and TP2+CP2 methods perform about the same in each simulation, suggesting that there is little advantage in using the CP2 method in conjunction with TP2 interpolation. Finally, we note that the MRBF2+CP2 method is more accurate than the TP2+AB2 and TP3+AB2 methods in nearly all cases; the only exception is the $St = 1$, $\lambda = 1$ simulation, where the MRBF2+CP2 method has a worse $\overline{\Delta \mathbf{x}_n}$ than the TP3+AB2 solution. \section{Conclusions}\label{sec:conclusions} A novel combination of geometric numerical methods for calculating accurate distributions of inertial particles in viscous flows is proposed. The algorithm consists of MRBFs to construct a divergence-free approximation of the background flow field and a geometric splitting method for the time integration. The splitting method is shown to preserve the sum of the Lyapunov spectrum and hence the contractivity of phase space volume. By expanding the exact solution we derive an expression for how a physical volume of particles changes over a small time step $h$, with which we recover the centrifuge effect at $O(h^4)$. We show that when a divergence-free interpolation method is used, one can implement a so-called centrifuge-preserving splitting method that preserves not only the qualitative but also the quantitative behavior of this centrifuge effect.
Moreover, it is shown that errors in the divergence of the fluid field can overshadow this effect when, for example, a conventional polynomial interpolation method is used. It is shown through numerical experiments that MRBF interpolation yields particle distributions that are more similar to the exact solution than standard TP interpolation. In many examples, this is observed even when the MRBF solution has a higher error per particle. This is, in part, explained by the fact that: (1) MRBF interpolation is divergence-free, meaning that the numerical time integration methods mimic the qualitative centrifuge effect; and (2) MRBF interpolation produces a vector field that solves the Stokes equations, meaning that the background flow field more physically resembles that of the exact solution (e.g., the flow field is related to the gradient of a scalar pressure function). Furthermore, we see that the proposed centrifuge-preserving methods are superior to the standard methods in terms of error per particle and how closely the particle distribution resembles the exact distribution. This is true, remarkably so, even when comparing the order-one CP1 method to the order-two RK2 and AB2 methods. In particular, for experiments with low particle inertia, the MRBF1+CP1 method produces more accurate distributions of particles than the expensive TP3+AB2 solutions, despite having a slightly worse error per particle and being an order-one integration method that uses far fewer data points for the interpolation step. These observations strongly suggest that preserving certain physical features of the ODEs under study is of importance when simulating inertial particles in discrete flow fields. Of particular interest for future studies would be to implement the proposed methods in physically realistic flow fields generated by, for example, a direct numerical simulation of homogeneous isotropic turbulence or turbulent channel flow.
\section{Acknowledgments} This work has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement (No. 691070). B. K. Tapley, E. Celledoni and B. Owren would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme {\it Geometry, compatibility and structure preservation in computational differential equations} (2019), where part of the work on this paper was undertaken.
\section{Introduction} \label{intro} It has been shown that strongly correlated electron systems may exhibit interesting universalities regardless of microscopic details. For example, the resistivity ($\rho$) is linear in temperature ($T$)~\cite{Hartnoll:2016apf} \begin{equation} \rho \sim T \,, \end{equation} in various strange metals such as cuprates, heavy fermions and pnictides, with a remarkable degree of universality.\footnote{As other examples of universal properties, there are the Hall angle \cite{Blake:2014yla,Zhou:2015dha,Kim:2015wba,Chen:2017gsl,Blauvelt:2017koq,Kim:2010zq} at finite magnetic field and Homes's law in superconductors~\cite{Homes:2004wv, Zaanen:2004aa, Erdmenger:2015qqa,Kim:2015dna,Kim:2016hzi,Kim:2016jjk}.} This is a distinctive property compared with the ordinary metal case, in which $\rho \sim T^2$, a different universal property explained by Fermi liquid theory. However, this well-known ``linear-$T$ resistivity'' puzzle has not been completely resolved because of the theoretical difficulty in dealing with strong correlation. As a novel and effective tool to address strong correlation in general, holographic methods, or gauge/gravity duality~\cite{Zaanen:2015oix, Ammon:2015wua,Hartnoll:2016apf}, have been widely used. The basic idea is to understand strongly correlated systems by mapping them to dual classical gravity systems. There has been much research aimed at understanding the linear-$T$ resistivity by holographic methods. Most studies, for example \cite{Charmousis:2010zz, Davison:2013txa, Gouteraux:2014hca, Kim:2015wba, Zhou:2015dha, Ge:2016lyn, Cremonini:2016avj, Chen:2017gsl, Blauvelt:2017koq, Ahn:2017kvc, Salvio:2013jia, Donos:2014cya, Kim:2014bza,Kim:2015sma}, have focused on the relation between the linear-$T$ resistivity and the infrared (IR) geometry, which can be supported by matter fields and couplings.
The IR geometry is characterized by three critical exponents: the dynamical critical exponent ($z$), hyperscaling violating exponent ($\theta$) and charge anomalous parameter ($\zeta$). {In particular, Gout\'{e}raux~\cite{Gouteraux:2014hca}, for the first time, systematically studied these geometries with momentum relaxation and characterized their scalings in terms of $z, \theta$ and $\zeta$, as well as derived the temperature scaling of the resistivity } \begin{equation} \label{rho1} \rho \sim T^{x(z,\theta,\zeta)} \,, \end{equation} in the {\it low} temperature limit. Here, $x$ is some function of the critical exponents. It is an interesting result since it gives an understanding of the linear-$T$ resistivity based on the scaling properties of condensed matter systems, which are nicely geometrized in dual gravity models. However, at the same time, it has an important limitation. The result \eqref{rho1} is valid only in the {\it low} temperature limit, in the sense that $T$ has to be very low compared with any other scales in given models. For example, with a chemical potential ($\mu$) and momentum relaxation (whose strength is denoted by $k$), $T/\mu \ll 1$ and $T/k \ll 1$ etc. Indeed, such a condition enables us to compute the power $x(z,\theta,\zeta)$ analytically. However, noting that the linear-$T$ resistivity has been observed in a `large' range of temperatures, up to room temperature $\sim 300 K$ in experiments, the condition of the `low' temperature limit for \eqref{rho1} will be too restrictive. To deal with this problem, we first have to quantify `low' or `high' temperature compared with `what'. We may choose the chemical potential $\mu$ as our reference, $T/\mu$. However, the relation between `$\mu$' in holography and the chemical potential, \kyr{$\mu$}, in the real world is not clear. \kyr{Most holographic models including ours belong to bottom-up models.
Without a top-down construction with a precise field theory dual, the meaning of the `chemical potential' (`$\mu$') is ambiguous. It is just a `chemical potential' (`$\mu$') for `some' conserved $U(1)$ charge. Furthermore, even in the case that they are exactly the same quantity, it is still possible that there is some difference in numerical values, for example, $\mu \sim 10`\mu$' or $\mu \sim 0.01`\mu$'.} Thus, for a reference scale, it will be better to choose an intrinsic scale in the model. For example, because experimental results show that the resistivity is linear in $T$ up to $T > T_c$ \kyr{from zero $T$}, where $T_c$ is the superconducting transition (critical) temperature, $T_c$ can serve as a reference scale. In other words, `low' or `high' temperature can be defined as temperature below or above $T_c$.\footnote{\kyr{One may think that $T_c$ is of order $\mu$, so there is no essential difference in using $T_c$ rather than $\mu$ as a reference scale. This may be true qualitatively. However, it turns out that the value of the order one number is important. For example, see Fig. 3 of \cite{Jeong:2018tua}, where there are six cases. For all cases $T_c \sim \mu$ up to an order one number. However, only some of them exhibit the linear-$T$ resistivity above $T_c$ from zero $T$. }} \kyr{Of course, it is possible to have a different reference scale other than $T_c$ depending on the context. Our purpose here is to motivate why a simple notion of small temperature based on $T/\mu < 1$ might be misleading in some cases. For example, in the context of superconductors, $T_c$ is a better reference scale than $\mu$ because we have to check whether the linear-$T$ resistivity persists up to high temperature $T>T_c$. We refer to \cite{Jeong:2018tua} for more details. } Ref.~\cite{Jeong:2018tua} addressed this issue for the first time, to our knowledge.
In this work, the Gubser-Rocha model~\cite{Gubser:2009qt} with momentum relaxation~\cite{Davison:2013txa, Zhou:2015qui, Kim:2017dgz} has been considered with a complex scalar field to trigger superconducting instability. It has been shown that these models can exhibit the linear-$T$ resistivity up to `high' temperature, i.e. above $T_c$, only if momentum relaxation is strong. The importance of strong momentum relaxation was first emphasized in \cite{Hartnoll:2014lpa}: it was argued that, if the momentum relaxation, which is extrinsic and non-universal, is strong (quick), transport can be governed by the diffusion of energy and charge, which is intrinsic and universal. Thus, the universality of the linear-$T$ resistivity emerges with strong momentum relaxation\footnote{The linear-$T$ resistivity may appear in the weak momentum relaxation regime in the case of weakly-pinned charge density waves (CDWs), where the resistivity is governed by incoherent, diffusive processes which do not drag momentum and can be evaluated in the clean limit \cite{Delacretaz:2016ivq, Amoretti:2017frz, Amoretti:2017axe}. See also \cite{Davison:2018ofp,Davison:2018nxm}. }. The work in Ref.~\cite{Jeong:2018tua} is important because it studies the linear-$T$ resistivity at `high' temperature, while most holographic works~\cite{Davison:2013txa, Kim:2015wba, Gouteraux:2014hca, Zhou:2015dha, Ge:2016lyn, Charmousis:2010zz, Cremonini:2016avj, Chen:2017gsl, Blauvelt:2017koq, Ahn:2017kvc} have focused on the low temperature limit. However, Ref.~\cite{Jeong:2018tua} deals with only one class of models based on the Gubser-Rocha model, so it is not clear if strong momentum relaxation is really necessary and/or sufficient to have the linear-$T$ resistivity in general. The goal of this paper is to investigate this issue in a more general setup.
We start with the most general scaling geometry studied in~\cite{Davison:2013txa, Gouteraux:2014hca, Zhou:2015dha, Ge:2016lyn, Cremonini:2016avj, Chen:2017gsl, Blauvelt:2017koq, Ahn:2017kvc}, the so-called axion-dilaton theories or the Einstein-Maxwell-Dilaton with Axion model (EMD-Axion model). The axion field is introduced to realize the momentum relaxation effect. The dilaton field is introduced with some potentials and couplings characterized by three parameters $(\alpha, \beta, \gamma)$, in order to support the IR geometry parametrized by the three scaling exponents ($z, \theta, \zeta$) explained above \eqref{rho1},\footnote{These three exponents are related to the three action parameters ($\alpha, \beta, \gamma$).} which is rich enough to explore various possibilities. The IR geometry with an emblackening factor is valid only in the {\it low} temperature limit. For solutions at arbitrary temperature, we need to introduce potentials and couplings supporting an asymptotically ultraviolet (UV) AdS geometry~\cite{Kiritsis:2015oxa, Ling:2016yxy, Bhattacharya:2014dea}. In general, there are many possibilities for the potentials and couplings, and in this paper we consider a minimal (one-parameter) UV completion of the potentials without changing the couplings. This UV completion was introduced in \cite{Ling2017} for the purpose of studying the shear viscosity to entropy ratio and includes the Gubser-Rocha model in \cite{Jeong:2018tua} as a special case. In short, our strategy is: i) start with the various IR geometries giving the linear-$T$ resistivity, such that $x(z,\theta,\zeta)=1$ in \eqref{rho1}, which is valid only in the low $T$ limit; ii) after UV completing the geometry, obtain the resistivity at arbitrary finite $T$; iii) change the momentum relaxation parameter to see how it affects the robustness of the linear-$T$ resistivity at high temperature.
As a result, we have found that, in general, strong momentum relaxation is still necessary to have a robust linear-$T$ resistivity up to higher temperature, but not sufficient: the parameter range for a robust linear-$T$ resistivity is quite limited compared with the wide range allowed in the low temperature limit~\cite{Gouteraux:2014hca, Kim:2015wba, Zhou:2015dha, Charmousis:2010zz, Davison:2013txa, Ge:2016lyn, Cremonini:2016avj}. We have identified this parameter range, which is different from that of the Gubser-Rocha model. In addition, we have also clarified the term which is responsible for the linear-$T$ resistivity in axion-dilaton theories: it is the incoherent term,\footnote{{The first term in \eqref{sigma} can be properly called `incoherent', meaning `no momentum dragging', only for strong momentum relaxation, which is the very regime we are interested in. For weak momentum relaxation, there is an incoherent contribution from the second term too~\cite{Davison:2015bea}. The first term is sometimes called a pair-creation term, which is proper if there is no net charge. The first term also has an interpretation as the conductivity in the absence of heat flows~\cite{Donos:2014cya}.}} i.e., the first term in the conductivity formula \eqref{sigma}, or the coupling between the Maxwell and dilaton fields (for spatial dimension 2). We organize the paper as follows: In section \ref{Set up}, we review axion-dilaton theories and their low $T$ limit properties, focusing on the linear-$T$ resistivity. In section \ref{sec:3}, we classify all possible parameter ranges in the action that yield the linear-$T$ resistivity in the IR and explain our UV completion. In section \ref{sec:4}, we report our results showing a robust linear-$T$ resistivity up to high temperature and demonstrate the importance of strong momentum relaxation. In section \ref{sec:5}, we interpret our results in more detail. We identify the term responsible for the linear-$T$ resistivity.
We also describe the effects of the parameters introduced in axion-dilaton theories on the temperature dependence of the resistivity. In section \ref{sec:conclusion} we conclude. \section{Axion-dilaton theories: a quick review}\label{Set up} \kyr{In this section, we briefly review the axion-dilaton theory investigated in~\cite{Gouteraux:2014hca}.\footnote{See also \cite{Charmousis:2010zz} for the original work without the axion.} The purpose of this section is to set the stage and collect the results that will be useful for our discussion later. We refer to the original work~\cite{Gouteraux:2014hca} or the review in \cite{Ahn:2017kvc} for more detailed explanations and derivations.} \subsection{Action and equations of motion} The action of generic EMD-Axion models (or axion-dilaton theories) can be expressed as follows: \begin{align}\label{action} &S = \int \mathrm{d} t \mathrm{d}^{d}x \mathrm{d} r \sqrt{-g}\left( R + \mathcal{L}_m \right) \nonumber \,,\\[1ex] &\mathcal{L}_m = \displaystyle- \frac{1}{2}(\partial \phi)^2 - \frac{J(\phi)}{2}\sum_{i=1}^{d} (\partial \chi_i)^2 + V(\phi) - \frac{Z(\phi)}{4}F^{2} \,, \end{align} where $\phi$ and $\chi_{i}$ are scalar fields called the dilaton and axion respectively. Here $J$ and $Z$ are coupling functions and $V$ is the potential. The action yields the following Einstein equations: \begin{align} \label{eommaster} \begin{split} R_{\mu\nu} &= T_{\mu\nu} - \frac{1}{d}g_{\mu\nu}T\\ &=\frac{1}{2}\partial_{\mu}\phi\partial_{\nu}\phi +\frac{J(\phi)}{2}\sum_{i=1}^{d}\partial_{\mu}\chi_{i}\partial_{\nu}\chi_{i}+\frac{Z(\phi)}{2}F_{\mu}{^\rho}F_{\nu\rho} -\frac{Z(\phi)F^2}{4d}g_{\mu\nu}-\frac{V(\phi)}{d}g_{\mu\nu} \,, \\ \end{split} \end{align} where $T_{\mu\nu} := -\frac{1}{\sqrt{-g}}\frac{\delta(\sqrt{-g}\mathcal{L}_{m})}{\delta g^{\mu\nu}}$ and $T = g^{\mu\nu}T_{\mu\nu}$ \,.
The Maxwell equation, scalar equation, and axion equation are \begin{align} \label{eommaster2} \begin{split} &\nabla_{\mu}(Z(\phi)F^{\mu\nu}) = 0 \,, \\& \square\phi+V'(\phi)-\frac{1}{4}Z'(\phi)F^2-\frac{1}{2}J'(\phi)\sum_{i=1}^{d}(\partial\chi_{i})^2 =0 \,, \\ &\nabla_{\mu}(J(\phi)\nabla^{\mu}\chi_{i}) =0\,. \end{split} \end{align} By considering the following homogeneous (meaning all functions are only functions of $r$) ansatz \begin{align} \label{backmaster} \begin{split} &\mathrm{d} s^2=-D(r)\mathrm{d} t^2+B(r)\mathrm{d} r^2+C(r)\sum_{i=1}^{d}\mathrm{d} x_{i}^{2}\,,\\ &\phi=\phi(r) \,, \quad A=A_t(r) \mathrm{d} t \,, \quad \chi_{i}=k x_{i}\,, \end{split} \end{align} we obtain the Einstein equations % \begin{align} & \! 0=\frac{Z(d-1)A_{t}'^{2}}{d D}+\frac{2BV}{d}+\frac{B'D'}{2BD} -\frac{d D'C'}{2DC}+\frac{D'^{2}}{2D^{2}}-\frac{D''}{D}\,, \label{eom11} \\ & \! 0 =\phi'^{2}-\frac{d C'^{2}}{2C^{2}}-\frac{d C'D'}{2CD}-\frac{d C' B'}{2CB} + \frac{d C''}{C}\,, \label{eom12} \\ & \! 0=\frac{J k^{2}B}{C}-\frac{2BV}{d}+\frac{ZA_{t}'^{2}}{d D}+\frac{C'}{2C}\left(\frac{D'}{D}-\frac{B'}{B}\right)+\frac{(d-2)C'^{2}}{2C^{2}}+\frac{C''}{C}\,, \label{eom13} \end{align} which come from the equations corresponding to $R_{tt}, R_{rr}$, and $R_{xx}$ in \eqref{eommaster} respectively. The prime $'$ denotes the derivative with respect to $r$. The Maxwell equation and scalar equation are reduced to \begin{align} &0=\left[Z\frac{C^{\frac{d}{2}}}{\sqrt{BD}}A_{t}'\right]'\,, \label{max123} \\ &0=-\frac{d J_{,\phi}k^{2} B}{2C}+\frac{Z_{,\phi}A_{t}'^{2}}{2D}+BV_{,\phi}-\frac{B'\phi'}{2B} +\left(\frac{dC'}{2C}\right)\phi' +\frac{D'\phi'}{2D}+\phi'' \,, \label{axax} \end{align} and the axion equations are satisfied trivially. 
\subsection{IR analysis of the axion-dilaton theories} \label{sec22} Our task is to find the functions $B(r),C(r),D(r),A_t(r)$, and $\phi(r)$ in the ansatz \eqref{backmaster} satisfying the equations of motion \eqref{eom11}-\eqref{axax} for given coupling and potential functions $J$, $Z$ and $V$ in \eqref{action}. In order to have analytic scaling solutions in the IR (infrared: far from the AdS boundary), we impose the following constraints on $J, Z$ and $V$ in the IR: \begin{equation}\label{IRpot} V(\phi) \sim V_0 e^{\alpha \phi}\,, \qquad J(\phi) \sim e^{\beta \phi}\,, \qquad Z(\phi) \sim e^{\gamma \phi}\,, \end{equation} where the constant parameters ($\alpha, \beta, \gamma, V_0$) are introduced. We consider the coefficient $V_0$ only for $V$ without loss of generality, because the overall factors of $J$ and $Z$ can be absorbed into the fields $\chi_i$ and $A_t$. Plugging the ``IR scaling coupling'' \eqref{IRpot} into the equations of motion \eqref{eom11}-\eqref{axax} {with the following scaling solution ansatz,} \begin{equation}\label{scalinggeo} \begin{split} &\mathrm{d} s^{2} = r^{\frac{2\theta}{d}}\left( -\frac{\mathrm{d} t^2}{r^{2z}}+\frac{L^2\mathrm{d} r^2}{r^2}+\frac{\sum_{i=1}^{d} \mathrm{d} x_i^2}{r^2}\right)\,, \\ & A_t = Qr^{\zeta - z}\,, \quad \phi = \kappa \ln{(r)}\,, \quad \chi_i = kx_i\,, \end{split} \end{equation} we obtain the solution ($z$, $\theta$, $\zeta$, $L$, $Q$, $\kappa$) in terms of $(\alpha, \beta, \gamma, V_0, k)$. We will call the exponents ($z$, $\theta$, $\zeta$) and the coefficients ($L, Q ,\kappa$) ``solution parameters'' or ``output parameters''. We will call the parameters $(\alpha, \beta, \gamma, V_0, k)$ ``action parameters'' or ``input parameters''\footnote{In fact, $k$ is not introduced at the ``action'' level, but we will include it in the ``action parameters'' for convenience because it is one of the ``input parameters.''}. Physically, $Q$ is proportional to the charge density and $k$ parametrizes the strength of the momentum relaxation.
Note that $\phi \to \pm \infty$ in the IR\footnote{The IR regime in general can be near $r \to 0$ or $r \to \infty$. Without loss of generality, we may choose $r \to \infty$ as the IR.} as $r \to \infty$, where $J \sim r^{\kappa\beta}, Z \sim r^{\kappa\gamma}$ and $V \sim r^{\kappa\alpha}$. After plugging the scaling solution ansatz \eqref{scalinggeo} into the equations of motion \eqref{eom11}-\eqref{axax}, we find that there are four possibilities for satisfying the equations of motion. We may classify these solutions according to the ``relevance'' of the axion and/or charge, following \cite{Gouteraux:2014hca}. By ``marginally relevant axion'' we mean that the axion parameter $k$ appears explicitly in the leading solutions, and by ``marginally relevant charge'' we mean that $Q$ appears explicitly in the leading solutions. By ``irrelevant axion (charge)'' we mean that $k$ $(Q)$ does not appear explicitly in the leading solutions but can appear in the sub-leading solutions. Therefore, we may consider four classes as follows. \begin{itemize} \item{class I: marginally relevant axion $\&$ charge \qquad \qquad \qquad \ \ \ \ ($k \ne 0, Q \ne 0$) } \item{class II: marginally relevant axion $\&$ irrelevant charge \qquad \, ($k \ne 0, Q = 0$) } \item{class III: irrelevant axion $\&$ marginally relevant charge \qquad \,($k = 0, Q \ne 0$) } \item{class IV: irrelevant axion $\&$ charge \qquad \qquad \qquad \qquad \qquad \ \ \ ($k = 0, Q = 0$) } \end{itemize} Notice that the classification is based on the properties of the leading solutions.
To have a more complete picture, we should also consider the deformation by the sub-leading solutions: \begin{equation} \label{perturb1} \Phi_i \rightarrow \Phi_i+ \epsilon_i r^{\beta_i} + \cdots \,, \end{equation} where $\Phi_i$ denotes every leading order solution collectively, $\epsilon_i$ is a small parameter {and $\beta_i$ denotes the exponent of the sub-leading order, with $\beta_i<0$ $(\beta_i>0)$ when the IR is located at $r\rightarrow\infty$ $(r\rightarrow0)$.} Therefore, $Q =0 $ in the leading solution does not mean zero density, and $k = 0$ in the leading solution does not mean no momentum relaxation, because these parameters can appear in the sub-leading solutions. In particular, if the axion is relevant, we may expect that momentum relaxation affects the IR physics more strongly than in the irrelevant axion cases. We present the explicit solutions for each class in the following. \paragraph{Class I: charge and axions are marginally relevant} The leading order solutions read \begin{equation}\label{classIexp} \begin{split} &z = \frac{-2 + d\alpha^2 - d \beta^2}{d(\alpha-\beta)\beta}\,, \qquad \theta = \frac{d \alpha}{\beta}\,, \qquad \zeta = \frac{\alpha + \gamma}{\beta} \,, \qquad \kappa = -\frac{2}{\beta} \,, \\ &L^2 = -\frac{2(d-\theta+z-1)(d-\theta+z)}{(d-1)k^2 - 2V_0}\,, \qquad Q^2 = \frac{2(d z - \theta)k^2 - 4V_0(z-1)}{((d-1)k^2-2V_0)(d-\theta+z)}\,, \end{split} \end{equation} as well as a constraint among the input parameters: \begin{equation} \label{con000} \gamma = (d-1)\alpha - d\beta \,, \end{equation} or, equivalently, among the output parameters: \begin{equation} d+\zeta - \theta = 0 \,.
\end{equation} \paragraph{Class II: charge is irrelevant; axions are marginally relevant} The leading order solutions read \begin{equation}\label{classIIexp} \begin{split} &z = \frac{-2 + d \alpha^2 - d \beta^2}{d(\alpha - \beta)\beta}\,, \qquad \theta = \frac{d \alpha}{\beta}\,, \qquad \kappa = -\frac{2}{\beta} \,, \\ &L^2 = \frac{(d-\theta+z)(d z - \theta)}{V_0}\,, \end{split} \end{equation} as well as a constraint between input parameters: \begin{equation} \label{con2} k^2 = \frac{2V_0(z-1)}{d z - \theta} \,. \end{equation} Note that $Q$ and $\gamma$ do not appear in the solutions \eqref{classIIexp}, and the constraint \eqref{con2} can be understood as the condition $Q=0$ in \eqref{classIexp}, so $\zeta$ is also undetermined. The values of $Q$ and $\zeta$ and their $\gamma$ dependence will be determined at the subleading order \begin{equation}\label{gaugemode} \zeta = d - \kappa\gamma - \frac{d-2}{d}\theta = d + \frac{2\gamma}{\beta} - \frac{(d-2)\alpha}{\beta} \,. \end{equation} As explained in \eqref{perturb1}, this subleading order solution will back-react on the metric and dilaton field, behaving as \begin{equation}\label{gaugebackreact} \sim r^{d+\zeta-\theta}\,. \end{equation} To have a stable IR geometry, we impose the constraint \begin{equation} \label{con0000} d+\zeta - \theta < 0 \,, \end{equation} near the IR, $r \rightarrow \infty$. In terms of $\alpha, \beta, \gamma$ this constraint means \begin{equation}\label{c2const} 2d - \frac{2(d-1)\alpha}{\beta} + \frac{2\gamma}{\beta}<0\,.
\end{equation} \paragraph{Class III: charge is marginally relevant; axions are irrelevant} The leading order solutions read \begin{equation}\label{classIIIexp} \begin{split} &z = \frac{4d-\kappa^2(d \alpha^2 -2)}{2d(2+\alpha \kappa)}\,, \quad \theta = \frac{d^2 \alpha}{(d-1)\alpha - \gamma}\,,\quad \zeta = -\frac{1}{2}\kappa(\alpha + \gamma) \,, \quad \kappa = -\frac{2d}{(d-1)\alpha - \gamma}\,, \\ &L^2 = \frac{(d-\theta+z-1)(d-\theta+z)}{V_0}\,, \qquad Q^2 = \frac{2(z-1)}{d-\theta+z}\,. \end{split} \end{equation} Note that $k$ and $\beta$ do not appear in the solutions \eqref{classIIIexp}, so, compared with the class I and II solutions, $\kappa \beta = -2$ does not hold anymore. The solutions \eqref{classIIIexp} may be understood from \eqref{classIexp} by setting $k=0$ and replacing $\beta$ by using \eqref{con000}. Similarly to class II, in the subleading order, the axion $(\chi_i = k x_i)$ starts playing a role and its back-reaction on the metric and dilaton field behaves as \begin{equation}\label{axionbackreact} \sim r^{2+\kappa \beta}\,. \end{equation} To have a stable IR, we impose the constraint \begin{equation} \label{con000001} 2+\kappa \beta < 0 \,, \end{equation} in the IR, $r \rightarrow \infty$. In terms of $\alpha, \beta, \gamma$ this constraint means \begin{equation}\label{constc3} 2+\frac{2d \beta}{\gamma - (d-1)\alpha}<0 \,. \end{equation} \paragraph{Class IV: charge and axions are irrelevant} The leading order solutions read \begin{equation}\label{classIVexp} z=1\,, \qquad \theta = \frac{d^2 \alpha^2}{d\alpha^2-2}\,, \qquad \kappa = -\frac{2d \alpha}{d \alpha^2 - 2} \,, \qquad L^2 = \frac{(d-\theta)(d-\theta+1)}{V_0}\,. \end{equation} Note that $Q$, $\gamma$, $k$, and $\beta$ do not appear in the solutions. The solutions \eqref{classIVexp} can be understood from \eqref{classIIIexp} by setting $Q=0$ or $z=1$.
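As a quick cross-check of \eqref{classIexp} and \eqref{con000}, the closed-form exponents can be evaluated numerically. The following sketch (Python; the function name and the sample point are ours, taking $d=2$) also verifies the equivalent output constraint $d+\zeta-\theta=0$:

```python
from math import sqrt, isclose

def class_I_exponents(alpha, beta, gamma, d):
    """Transcription of the class I leading-order exponents (classIexp)."""
    z = (-2 + d*alpha**2 - d*beta**2) / (d*(alpha - beta)*beta)
    theta = d*alpha/beta
    zeta = (alpha + gamma)/beta
    kappa = -2/beta
    return z, theta, zeta, kappa

d = 2
alpha, beta = -1/sqrt(3), -2/sqrt(3)        # an illustrative class I point
gamma = (d - 1)*alpha - d*beta              # input constraint (con000)
z, theta, zeta, kappa = class_I_exponents(alpha, beta, gamma, d)

assert isclose(z, 3) and isclose(theta, 1) and isclose(zeta, -1)
assert isclose(d + zeta - theta, 0, abs_tol=1e-12)   # output constraint
```

The same transcription works for any input with $\beta \neq 0$ and $\alpha \neq \beta$, where the denominators are nonzero.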
Similarly to class II, the subleading gauge field yields \begin{equation} \zeta = d - \kappa \gamma - \frac{d-2}{d}\theta = \frac{2d(\alpha(\alpha+\gamma)-1)}{d \alpha^2 -2} \,. \end{equation} Similarly to classes II and III, we have the following constraints from \eqref{gaugebackreact} and \eqref{axionbackreact}: \begin{equation} \label{con00000} d+\zeta - \theta < 0 \quad \mathrm{and} \quad 2+\kappa \beta < 0\,, \end{equation} or equivalently \begin{equation} \label{con00000yy} \frac{2d(\alpha(\alpha+\gamma)-2)}{d \alpha^2-2}<0 \quad \mathrm{and} \quad 2 - \frac{2d\alpha \beta}{d \alpha^2 - 2} <0\,. \end{equation} \subsection{Resistivity in the low temperature limit} It turns out that the emblackening factor $f(r)$ \begin{equation} \label{emblack1} f(r) = 1- \left(\frac{r}{r_H}\right)^{z+d-\theta} \,, \end{equation} can be turned on ($\mathrm{d} t^2 \rightarrow f \mathrm{d} t^2$ and $\mathrm{d} r^2 \rightarrow \mathrm{d} r^2/f$ in \eqref{scalinggeo}) for all classes. Then, the Hawking temperature from this emblackening factor allows us to study the axion-dilaton theories at low temperature. However, it should be emphasized that the solution with this emblackening factor is valid only at very low temperature compared with any other scale, for example, the chemical potential or the momentum relaxation strength. In this low temperature limit, the Hawking temperature $T$ and charge density $q$ can be expressed in terms of the solution parameters $(z, \theta, \zeta, Q)$ \cite{Gouteraux:2014hca, Ahn:2017kvc}: \begin{equation} T := \frac{1}{4\pi} \left. \frac{|D'|}{\sqrt{DB}}\right|_{r_H} = \frac{|d+z-\theta|}{4\pi}r_H^{-z} \,, \label{HT123} \end{equation} \begin{equation} q := \left. \sqrt{\frac{C^d}{DB}}ZA_t' \right|_{r_H} = Q(\zeta-z)\,, \quad s := \left. 4\pi C^{d/2}\right|_{r_H} = 4\pi r_H^{\theta-d} \,, \label{Hq123} \end{equation} where the subscript H denotes the horizon.
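Eliminating $r_H$ between \eqref{HT123} and \eqref{Hq123} gives the scaling $s \sim T^{(d-\theta)/z}$ used for the specific heat below. A minimal numerical sketch (Python; the sample exponents and horizon radii are illustrative, not tied to a particular solution):

```python
from math import pi, log, isclose

d, z, theta = 2, 3, 1        # sample exponents (illustrative)

def T_of(rH):                # Hawking temperature, eq. (HT123)
    return abs(d + z - theta)/(4*pi) * rH**(-z)

def s_of(rH):                # entropy density, eq. (Hq123)
    return 4*pi * rH**(theta - d)

# s ~ T^{(d-theta)/z}: read off the log-log slope between two horizon radii
slope = log(s_of(2.0)/s_of(1.0)) / log(T_of(2.0)/T_of(1.0))
assert isclose(slope, (d - theta)/z)
```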
The transport coefficients, for instance the electric or thermal conductivity, can be calculated~\cite{Gouteraux:2014hca, Donos:2014cya}. In particular, the electric DC conductivity for the EMD-Axion model is given by \begin{align} \sigma_{DC} &= Z_H C_H^{\frac{d-2}{2}} + \frac{q^2}{k^2 C_H^{d/2} J_H} \label{sigma} \\ &\sim T^{-(2-\zeta)/z} + \frac{q^2}{k^2}T^{-\frac{d-\theta-\kappa \beta}{z}}\,. \end{align} Note that only the behavior of ($Z, J, C$) at the horizon plays a role in determining the power of $T$ via the relation between $r_H$ and $T$ \eqref{HT123}. More variables ($A_t, B, C, D, Z$) enter for the charge density $q$ as shown in \eqref{Hq123}. By using the constraints between $\zeta, \beta, \kappa$, \eqref{con000}, \eqref{con0000}, \eqref{con000001}, \eqref{con00000}, we can find which term in \eqref{sigma} is dominant in the low temperature limit. In class I both terms are of the same order, in class II the first term is dominant, in class III the second term is dominant, and in class IV the dominant term depends on the parameters. That is, in the low temperature limit the resistivity $\rho \sim \sigma_{DC}^{-1}$ behaves as: \begin{equation}\label{eq:res} \begin{split} &\text{Class I \ \ : } \rho \sim T^{(2+d-\theta)/z}\,,\\ &\text{Class II \ : } \rho \sim T^{\frac{2-\zeta}{z}}\,,\\ &\text{Class III : } \rho \sim T^{\frac{d-\theta-\kappa \beta}{z}}\,,\\ &\text{Class IV : } \rho \sim T^{d-\theta-\kappa \beta} \quad \text{or} \, \, \, T^{2-\zeta}\,. \end{split} \end{equation} Note that the power $x$ in $\rho \sim T^{x}$ does not depend on the momentum relaxation $k$. This is because this formula is valid only in the low temperature limit. At finite temperature, this is not the case, as we will show. In fact, in addition to the constraints \eqref{con000}, \eqref{con0000}, \eqref{con000001}, \eqref{con00000}, there are more constraints\footnote{For more details, we refer to \cite{Gouteraux:2014hca,Ahn:2017kvc}.
} for the exponents $z$ and $\theta$, which will be necessary to constrain our physical model later. First, since we are considering the case that the IR is located at $r \rightarrow \infty$,\footnote{In principle, we may consider the case that the IR is located at $r \rightarrow 0$. In this case we will end up with negative $z$ in \eqref{thetad}. We do not consider this case because it is not physical.} the exponent of $r$ of each metric component in \eqref{scalinggeo} should be negative, which implies \begin{equation}\label{IRcon} \theta < d z\,, \quad \theta < d\,. \end{equation} Second, the emblackening factor \eqref{emblack1} satisfies $f(r) \rightarrow 1$ at the UV ($r \to 0$), which implies \begin{equation}\label{blackcon} \theta < z + d\,. \end{equation} Third, the condition that the specific heat is positive implies \begin{equation} \frac{d - \theta}{z} > 0 \,, \end{equation} where we used the relation between the entropy and temperature \eqref{HT123}: $s \sim T^{\frac{(d - \theta)}{z}}$. Fourth, $\kappa, Q,$ and $L$ in \eqref{classIexp}, \eqref{classIIexp} and \eqref{classIIIexp} should be real. The $\kappa$ can be rewritten as \begin{equation}\label{kappa} \kappa^2 = \frac{2(d-\theta)(d(z-1)-\theta)}{d}\,. \end{equation} The conditions \eqref{IRcon} - \eqref{blackcon} and $Q^2 > 0,\, L^2 > 0,\, \kappa^2>0$ imply \begin{equation} \label{thetad} z>1, \quad \theta < d \,. \end{equation} Note that Eqs. \eqref{con00000} and \eqref{thetad} imply that neither $2-\zeta$ nor $d-\theta- \kappa \beta$ can be $1$ in class IV. Thus, the class IV solution does not allow the linear-$T$ resistivity in the low temperature limit. Since we are interested in the linear-$T$ resistivity, we study only the class I, II and III solutions in the following sections. \section{Resistivity up to high temperature} \label{sec:3} From \eqref{eq:res}, we may obtain the condition for the linear-$T$ resistivity.
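The inequalities above can be bundled into a single admissibility check on $(z, \theta)$, as in the following sketch (Python, default $d=2$; the helper name is ours, and the class-dependent conditions $Q^2>0$, $L^2>0$ are not included):

```python
def admissible(z, theta, d=2):
    """IR admissibility: eqs. (IRcon), (blackcon), positive specific
    heat, and a real kappa from eq. (kappa)."""
    kappa2 = 2*(d - theta)*(d*(z - 1) - theta)/d
    return (theta < d*z and theta < d      # metric exponents negative in IR
            and theta < z + d              # f(r) -> 1 at the UV
            and (d - theta)/z > 0          # positive specific heat
            and kappa2 > 0 and z > 1)      # real kappa and eq. (thetad)

assert admissible(3, 1)          # e.g. (z, theta) = (3, 1) passes
assert not admissible(1, 0)      # z = 1 is excluded
```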
However, as we emphasized in the previous section, the temperature dependence of the resistivity \eqref{eq:res} is valid only in the low temperature limit, which means that the temperature is very low compared with any other scale, such as the chemical potential, the momentum relaxation strength, or the superconducting phase transition scale. From experimental results, however, it is important to have a robust linear-$T$ resistivity also at intermediate and high temperature, which we will call {\it finite} temperature to distinguish it from the low temperature limit. \subsection{UV-completion of potentials} At finite temperature, the couplings and potential in \eqref{IRpot}, which are valid only in the IR, have to be UV-completed. In order to have asymptotically AdS space in the UV, we impose the following conditions \kyr{\cite{Charmousis:2010zz}}: \begin{align} &V(\phi) = \frac{(d+1)d}{\ell_{AdS}^2} - \frac{1}{2}m^2 \phi^2 + \cdots \,, \label{cond10} \\ & Z(\phi) = 1 + \cdots \,, \quad J(\phi) = 1 + \cdots \,, \label{cond11} \end{align} near the boundary ($r \sim 0$). In other words, for $V(\phi)$, \begin{equation} \label{cond2} V(0) = -2\Lambda =\frac{(d+1)d}{\ell_{AdS}^2}\,, \qquad V'(0) = 0\,, \qquad V''(0) = -m^2 = \frac{-\Delta(\Delta - d - 1)}{\ell_{AdS}^2} \,, \end{equation} where $\Delta$ is the conformal dimension of the dual operator of $\phi$ and $\phi \rightarrow 0$ in the UV. For simplicity we set $\ell_{AdS} = 1$ from here on. In principle, there are many possibilities to construct $V, Z, J$ satisfying \eqref{IRpot} in the IR and \eqref{cond10} and \eqref{cond11} in the UV. See for example~\cite{Gouteraux:2011ce, Kiritsis:2015oxa, Ling2017, Davison:2018nxm}.
Here, for simplicity, we choose one minimal way studied in \cite{Ling2017}: \begin{equation}\label{ZandJV1} Z(\phi) = e^{\gamma \phi}\,, \qquad J(\phi) = e^{\beta \phi}\,, \end{equation} and three cases for $V(\phi)$ \begin{equation}\label{ZandJV2} \begin{split} &V(\phi) = \left\{ \begin{array}{lll} \frac{2d}{\alpha^2}\sinh^2\left(\frac{\alpha \phi}{2}\right) + (d+1)d \,, \, & \text{for} \, \theta <0 \,,\\[0.5ex] (d+1)d\,, \, & \text{for} \, \theta = 0 \,,\\[0.5ex] d\left(\frac{1}{\alpha^2}+2\left(d+1\right)\right) \sech(\alpha \phi)-d\left(\frac{1}{\alpha^2}+\left(d+1\right)\right)\sech^2\left(\alpha \phi \right), \, &\text{for} \, d>\theta>0 \,. \end{array} \right. \end{split} \end{equation} Let us now explain the rationale for our choice. All $V(\phi)$ in \eqref{ZandJV2} satisfy the expansion \eqref{cond10}, where the dilaton mass $m^2 = -d$ for $\theta \ne 0$ and $m^2 = 0$ for $\theta =0$. The dilaton $\phi$ behaves near the boundary as follows \begin{equation} \label{eq:dilaton} \phi(r) \sim \begin{cases} c + \cdots, \quad \theta = 0\,,\\ r + \cdots, \quad \theta \neq 0\,, \end{cases} \end{equation} where $c$ is a constant and we will set $c=0$. Thus, our choice of the dilaton mass makes \eqref{ZandJV1} satisfy the UV condition \eqref{cond11}. The particular choice of $V_0$ we make is not necessary but is made for simplicity. Thanks to this choice, our $V(\phi)$ becomes a function of only $\alpha$.\footnote{For example, if we do not choose the dilaton mass as $m^2=-d$ our potential will include $m^2$ as a free parameter.} We consider three kinds of $V(\phi)$ depending on the sign of $\theta$ in \eqref{ZandJV2}. To understand the necessity of these three different forms, let us first find the relation between the signs of $\alpha$ and $\theta$. Near the IR $r \rightarrow \infty$, we choose $\phi \rightarrow +\infty$, which means $\kappa > 0$ from \eqref{scalinggeo}.
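The UV expansion \eqref{cond2} with $m^2 = -d$ can be verified symbolically for the first and third potentials in \eqref{ZandJV2} (the second is constant). A sketch with sympy (the symbol names are ours; the check is insensitive to the sign of $\alpha$, since both potentials depend on $\alpha\phi$ only through even functions):

```python
import sympy as sp

phi = sp.symbols('phi', real=True)
a = sp.symbols('alpha', real=True, nonzero=True)
d = 2

V1 = 2*d/a**2 * sp.sinh(a*phi/2)**2 + (d + 1)*d                  # theta < 0
V3 = (d*(1/a**2 + 2*(d + 1))*sp.sech(a*phi)
      - d*(1/a**2 + (d + 1))*sp.sech(a*phi)**2)                  # theta > 0

for V in (V1, V3):
    assert sp.simplify(V.subs(phi, 0) - (d + 1)*d) == 0          # V(0) = d(d+1)
    assert sp.simplify(sp.diff(V, phi).subs(phi, 0)) == 0        # V'(0) = 0
    assert sp.simplify(sp.diff(V, phi, 2).subs(phi, 0) - d) == 0 # V''(0) = -m^2 = d
```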
When $\kappa>0$, the relations between $\alpha$ and $\theta$ in \eqref{classIexp}, \eqref{classIIexp}, \eqref{classIIIexp} imply that the signs of $\theta$ and $\alpha$ are opposite, and if $\theta=0$ then $\alpha=0$. Thus, we find that the asymptotic potentials near the IR read \begin{equation} \label{IRp123} \begin{split} &V(\phi) = \left\{ \begin{array}{lll} \frac{d}{2\alpha^2}\left(e^{\alpha \phi} + e^{-\alpha \phi} - 2\right) + (d+1)d \approx V_0 e^{\alpha \phi}\,, & V_0 = \frac{d}{2\alpha^2}\,, & \text{for} \, \alpha >0\,,\\[0.5ex] (d+1)d\,, & V_0 = (d+1)d\,, & \text{for} \, \alpha = 0\,,\\[0.5ex] \frac{d\left(\frac{1}{\alpha^2}+2(d+1)\right)}{\cosh \left(\alpha \phi\right)} -\frac{d\left(\frac{1}{\alpha^2}+(d+1)\right)}{\cosh^2 \left(\alpha \phi\right)}\approx V_0 e^{\alpha \phi}\,,& V_0 = 2d\left(\frac{1}{\alpha^2}+2d +2\right), &\text{for} \, \alpha<0\,, \end{array} \right. \end{split} \end{equation} which precisely match our IR condition $V \sim V_0 e^{\alpha\phi}$ in \eqref{IRpot}. This is why the three different forms are needed: for example, the first potential would take the form $V_0 e^{-\alpha \phi}$ for $\alpha<0$, which is not consistent with our IR condition $V \sim V_0 e^{\alpha\phi}$ in \eqref{IRpot}. Note that the first potential in \eqref{ZandJV2} includes the Gubser-Rocha model with linear axion. For $(\alpha, \beta, \gamma) = (1/\sqrt{3}, 0, -1/\sqrt{3})$ and $d = 2$, $V(\phi), Z(\phi)$ and $J(\phi)$ in \eqref{ZandJV1} and \eqref{ZandJV2} yield \begin{equation}\label{GRwithA} V(\phi) = 6 \cosh{\frac{\phi}{\sqrt{3}}}\,, \qquad Z(\phi) = e^{-\frac{\phi}{\sqrt{3}}}\,, \qquad J(\phi) = 1 \,, \end{equation} which is nothing but the Gubser-Rocha model with linear axion studied in \cite{Gouteraux:2014hca, Jeong:2018tua}.
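The reduction to \eqref{GRwithA} follows from the identity $\cosh 2x = 1 + 2\sinh^2 x$, which turns $12 \sinh^2(\phi/(2\sqrt{3})) + 6$ into $6\cosh(\phi/\sqrt{3})$. A quick numerical spot-check (Python; the sample points are arbitrary):

```python
from math import sinh, cosh, sqrt

d, alpha = 2, 1/sqrt(3)
for x in (0.0, 0.7, -1.3):
    # first potential in (ZandJV2) at phi = x, for d = 2 and alpha = 1/sqrt(3)
    V1 = 2*d/alpha**2 * sinh(alpha*x/2)**2 + (d + 1)*d
    # ... equals the Gubser-Rocha potential 6 cosh(phi/sqrt(3))
    assert abs(V1 - 6*cosh(x/sqrt(3))) < 1e-12
```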
\subsection{Numerical methods} \label{sec32} Because the potential is changing, the scaling solution \eqref{scalinggeo} is not valid anymore, so we start with the following ansatz: \begin{equation}\label{ansatz} \begin{array}{l} \mathrm{d} s^{2} =\displaystyle \frac{1}{u^2}\left( - (1-u)U(u)e^{-S(u)} \mathrm{d} t^2 + \frac{\mathrm{d} u^2}{(1-u)U(u)} + \sum_{i=1}^{d}\mathrm{d} x_i^2 \right)\,,\\ \phi = \phi(u)\,, \qquad \chi_i = k x_i\,, \qquad A = (1-u)A_{t}(u) \mathrm{d} t\,, \end{array} \end{equation} % where $u := r/r_h$. In this coordinate, the horizon and the boundary are located at $u = 1$ and $u = 0$ respectively. Since we want the geometry to be asymptotically $AdS_{d+2}$ near the boundary, we impose the conditions $U(0) = 1$ and $S(0) = 0$. In our set-up, there are three dimensionful parameters: the chemical potential $\mu = A_t(0)$, the Hawking temperature $T$, and the momentum relaxation strength $k$. For numerical analysis, we will take $d = 2$. We choose the chemical potential as our scale, so the dimensionless parameters are $T/\mu$ and $k/\mu$. Our controlling parameters are $(\alpha, \beta, \gamma)$, which determine the whole action as well as the IR behavior of the model via \eqref{ZandJV1} and \eqref{ZandJV2}. We want to find the parameter range of $(\alpha, \beta, \gamma)$ which yields the linear-$T$ resistivity from low temperature to high temperature. Therefore, we start with the restricted range of $(\alpha, \beta, \gamma)$ which gives the linear-$T$ resistivity in the low temperature regime. Fig.~\ref{fig:wregion} shows such a range in $(\alpha, \beta, \gamma)$ space. It can be understood as follows. % \begin{figure}[] \centering \includegraphics[width=0.4 \linewidth]{nlinregime} \caption{The $(\alpha, \beta, \gamma)$ region which gives the linear-$T$ resistivity in the low temperature limit. The (red/blue) surface corresponds to the (class II/class III) solution.
The black line, which is the intersection of the red and blue surfaces, corresponds to class I. Class IV does not allow the linear-$T$ resistivity.} \label{fig:wregion} \end{figure} % \begin{itemize} \item class I: The linear-$T$ condition \eqref{eq:res} ($(2+d-\theta)/z=1$ in terms of $\alpha, \beta, \gamma$ with \eqref{classIexp}) and \eqref{con000} gives a curve in the three-dimensional $(\alpha, \beta, \gamma)$ space: \begin{equation} \left(\alpha, \alpha \pm \sqrt{\frac{2}{d(d+1)}}, -\alpha \mp \sqrt{\frac{2}{d(d+1)}}d \right) \,, \end{equation} which is the black line in Fig. \ref{fig:wregion}. \item class II: The linear-$T$ condition \eqref{eq:res} ($(2-\zeta)/z = 1$ in terms of $\alpha, \beta, \gamma$ with \eqref{classIIexp}) defines a surface in the three-dimensional $(\alpha, \beta, \gamma)$ space: \begin{equation} \left(\alpha, \frac{d^2\alpha -d(2\alpha+\gamma)\pm\sqrt{d(2+d((\alpha+\gamma)^2-2))}}{d(d-1)}, \gamma \right). \end{equation} However, the inequality \eqref{c2const} restricts the available surface, which is the red surface in Fig. \ref{fig:wregion}. \item class III: The linear-$T$ condition \eqref{eq:res} ($(d-\theta-\kappa \beta)/z=1$ in terms of $\alpha, \beta, \gamma$ with \eqref{classIIIexp}) defines a surface in the three-dimensional $(\alpha, \beta, \gamma)$ space: \begin{equation} \left(\alpha, \frac{3\alpha^2+4\alpha \gamma + \gamma^2 -2 -\frac{(\alpha + \gamma)^2}{d}}{2(\alpha + \gamma)},\gamma \right). \end{equation} However, the inequality \eqref{constc3} restricts the available surface, which is the blue surface in Fig. \ref{fig:wregion}. \item class IV: There is no such range. See the explanation below \eqref{thetad}. \end{itemize} The potential we consider is valid only for $\beta < 0$ because $\beta<0$ corresponds to $\kappa > 0$ due to \eqref{classIexp}, \eqref{classIIexp} and \eqref{classIIIexp}, and thus correctly matches the IR potential \eqref{IRp123}. If we want to consider the case $\beta>0$ we need to reconstruct the potential \eqref{ZandJV2} accordingly.
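As an illustration, the class I curve can be checked directly: along $\beta = \alpha - \sqrt{2/(d(d+1))}$ (the $\beta<0$ branch), the low-temperature power $(2+d-\theta)/z$ computed from \eqref{classIexp} equals one identically. A sketch (Python, $d=2$; the sample values of $\alpha$ are arbitrary):

```python
from math import sqrt, isclose

d = 2
c = sqrt(2/(d*(d + 1)))                  # the shift along the class I line

for alpha in (-0.9, -0.5, -0.1):         # sample points (beta < 0 branch)
    beta = alpha - c
    z = (-2 + d*alpha**2 - d*beta**2)/(d*(alpha - beta)*beta)
    theta = d*alpha/beta
    assert isclose((2 + d - theta)/z, 1) # linear-T power in the low-T limit
```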
Note also that the sign of $\alpha$ is opposite to that of $\theta$, so: i) for the $\alpha > 0$ region, the first potential in \eqref{ZandJV2} should be used; ii) for $\alpha = 0$, the second potential should be used; iii) for the $\alpha < 0$ region, the third potential in \eqref{ZandJV2} should be used. \section{Linear-$T$ resistivity from low temperature to high temperature} \label{sec:4} We have fine-gridded the surface in Fig. \ref{fig:wregion}. By trying out all gridded data sets $(\alpha, \beta, \gamma)$, we have found that only a small range of parameters near the parameter set \begin{equation} (\alpha, \beta, \gamma) = \left(-\frac{1}{\sqrt{3}}, -\frac{2}{\sqrt{3}}, \sqrt{3}\right) \ \ \Leftrightarrow \ \ (z, \theta, \zeta) = (3, 1, -1) \,, \end{equation} yields the linear-$T$ resistivity from low temperature to high temperature when the momentum relaxation is strong. See Fig.~\ref{fig:z3t1lrhoT} for the numerical results for this parameter set with a strong momentum relaxation $k/\mu = 20$. It is a plot of the resistivity ($\rho$) vs temperature $T$, showing $\rho \sim T^x$ with $x\sim 1$. \begin{figure}[] \centering \subfigure[Resistivity ($\rho$) vs Temperature ($T/\mu$)] {\includegraphics[width=6.96cm, height = 4.7cm]{fig31} \label{fig:z3t1lrhoT}}\ \ \ \subfigure[The power $x$ where $\rho \sim T^x$. The red dashed line is a guide to the eye.] {\includegraphics[width=7.4cm, height = 4.7cm]{fig32} \label{fig:z3t1lrhoTp}} \caption{Resistivity vs temperature: the linear-$T$ resistivity. $(\alpha, \beta, \gamma) = \left(-\frac{1}{\sqrt{3}}, -\frac{2}{\sqrt{3}}, \sqrt{3} \right)$ (i.e. $(z, \theta, \zeta) = (3, 1, -1)$) with large momentum relaxation $k/\mu = 20$. }\label{fig:z3t1} \end{figure} \begin{figure}[] \centering \subfigure[$(\alpha,\beta,\gamma) = \left(\frac{1}{3\sqrt{3}}, -\frac{2}{3\sqrt{3}}, \frac{5}{3\sqrt{3}}\right)$ (i.e. $\left(z, \theta, \zeta\right) = (5, -1, -3)$) with $k/\mu=10$. The black dot case in Fig.~\ref{fig:nthregion2}.]
{\includegraphics[width=6.96cm, height = 4.7cm]{figB10} \label{new123}}\ \ \ \ \ \ \subfigure[$(\alpha,\beta,\gamma) = \left(0,-\frac{1}{\sqrt{3}},\frac{2}{\sqrt{3}}\right)$ (i.e. $\left(z, \theta, \zeta\right) = (4, 0, -2)$) with $k/\mu=10$. The black dot case in Fig.~\ref{fig:zthregion}.] {\includegraphics[width=6.96cm, height = 4.7cm]{figB11} \label{new1234}} \caption{Resistivity vs temperature: non-linear-$T$ resistivity}\label{fig:z3t123} \end{figure} To show the value of the exponent $x$ more clearly, we make another plot for $\frac{\partial \ln \rho}{\partial \ln T }$ in Fig.~\ref{fig:z3t1lrhoTp}, where we see $x\sim 1$ for $T/\mu \lesssim 5$. As we explain in section \ref{sec43}, a small neighborhood of the point $(-1/\sqrt{3}, -2/\sqrt{3}, \sqrt{3})$ also exhibits the linear-$T$ resistivity. For the purpose of comparison, we also show typical plots of non-linear-$T$ resistivity in Fig.~\ref{fig:z3t123}. The parameters used in Fig.~\ref{fig:z3t123} were chosen to be the same as those in Fig.~\ref{fig:nthregion2} and Fig.~\ref{fig:zthregion}. Note that Fig.~\ref{fig:z3t1} corresponds to the third potential ($\alpha<0$) in \eqref{ZandJV2} while Fig.~\ref{new123} and Fig.~\ref{new1234} correspond to the first ($\alpha>0$) and second ($\alpha=0$) potential in \eqref{ZandJV2} respectively. It turns out that the third potential ($\alpha<0$) is more favorable for a linear-$T$ resistivity than the others. We will discuss it further in sec.~\ref{sec43}. \subsection{Momentum relaxation effect}\label{sec:z3t1zm1} In this subsection, we will show how the momentum relaxation affects the linear-$T$ resistivity behavior in the finite temperature region. In Fig.~\ref{fig:z3t1vm}, we find that if the momentum relaxation becomes smaller, the temperature range of the linear-$T$ resistivity becomes shorter.
\begin{figure}[] \centering \subfigure[$\rho$ vs $T/\mu$] {\includegraphics[width=6.96cm, height = 4.7cm]{fig21} \label{fig:z3t1lrhoTvm}}\ \ \ \subfigure[$x$ vs $T/\mu$ ($\rho \sim T^x$). The dotted line is for $x=1$.] {\includegraphics[width=7.4cm, height = 4.7cm]{fig22} \label{fig:z3t1lrhoTpvm}} \caption{The temperature dependence of resistivity for $(\alpha, \beta, \gamma) = (-1/\sqrt{3}, -2/\sqrt{3}, \sqrt{3})$ or $(z, \theta, \zeta) = (3, 1, -1)$. The color represents the momentum relaxation strength $k/\mu$: \{color($k/\mu$)\} = \{red(0.1), orange(0.5), yellow(1), green(5), blue(10), purple(20)\}. }\label{fig:z3t1vm} \end{figure} The different colors in the figures represent different momentum relaxation strengths: \{red, orange, yellow, green, blue, purple\} corresponds to $k/\mu = \{1/10, 1/2, 1, 5, 10, 20\}$. From Fig.~\ref{fig:z3t1lrhoTvm}, one may think that, for all $k/\mu$ we considered, the resistivity looks linear in $T$ over a certain range of $T$. In fact, however, it is not. By carefully reading off the exponent $x$ in $\rho \sim T^x$ in Fig.~\ref{fig:z3t1lrhoTpvm}, we find that in most cases $x$ quickly deviates from $1$ as $T$ increases from zero. Nevertheless, we stress that all the curves in Fig.~\ref{fig:z3t1lrhoTpvm} go to $1$ as $T$ goes to zero. This confirms that our numerics are consistent with the analytic formula \eqref{eq:res}. It turns out that strong momentum relaxation is important to have a robust linear-$T$ resistivity up to high temperature. For example, the range of the linear-$T$ resistivity extends up to around $T/\mu \sim 5$ for $k/\mu = 20$ and $T/\mu \sim 2$ for $k/\mu = 10$, as shown in Fig.~\ref{fig:z3t1lrhoTpvm}. \section{Interpretations of numerical results} \label{sec:5} \kyr{ In the previous section, we have shown the importance of large momentum relaxation to have a robust linear-$T$ resistivity at finite temperature.
To gain a better understanding of the mechanism behind this observation, in this section, we want to answer the following two questions. \begin{enumerate} \item (section 5.1) To have a robust linear-$T$ resistivity, which of the two terms in \eqref{sigma} is important? We will show that it is the first term, the so-called pair creation term. \item (section 5.2) What are the effects of $\alpha, \beta,$ and $\gamma$ in \eqref{ZandJV1} and \eqref{ZandJV2} on $x$ in $\rho \sim T^x$? We will show that \begin{itemize} \item{Increasing $\alpha$ or $\gamma$ $\Longrightarrow$ increasing $x$.} \item{Increasing $\beta$ $\Longrightarrow$ decreasing $x$.} \end{itemize} \end{enumerate} } \subsection{First term or second term?} In general, the conductivity \eqref{sigma} consists of two terms. The first term is called the pair creation term ($\sigma_{DC, pc}$) and the second term is called the dissipation term ($\sigma_{DC,diss}$) \cite{Gouteraux:2014hca}. For $d = 2$, \eqref{sigma} reads \begin{equation}\label{d2sigma} \sigma_{DC} = \sigma_{DC, pc} + \sigma_{DC,diss} = Z_H + \frac{q^2}{k^2 C_H J_H} \,. \end{equation} Thus, we may ask which term is responsible for the linear-$T$ resistivity at finite temperature. \begin{figure}[] \centering \subfigure[Class I, Eq.~\eqref{y1}] {\includegraphics[width=4.831cm]{fig41} \label{fig:classIandcon}} \subfigure[Class II, Eq.~\eqref{y2}] {\includegraphics[width=4.831cm]{fig42} \label{fig:classIIandcon}} \subfigure[Class III, Eq.~\eqref{y3}] {\includegraphics[width=4.831cm]{fig43} \label{fig:classIIIandcon}} \caption{The relative contributions of the two terms to the conductivity in \eqref{d2sigma}. {The green and blue curves represent $k/\mu = (5, 10)$ respectively.}}\label{fig:classandcon} \end{figure} To answer this question, we compare $\frac{\sigma_{DC,pc}}{\sigma_{DC}}$ and $\frac{\sigma_{DC,diss}}{\sigma_{DC}}$ in Fig.~\ref{fig:classandcon} for three representative cases\footnote{Here, we changed $z$ and $\zeta$ by one for comparison.
In terms of $\alpha, \beta, \gamma$ our choice looks more complicated.} \begin{align} &\mathrm{Class\ I}: \quad (\alpha, \beta, \gamma) = \left(-\frac{1}{\sqrt{3}}, -\frac{2}{\sqrt{3}}, \sqrt{3}\right)\,,\quad \left(z, \theta, \zeta\right) = (3, 1, -1) \,, \label{y1} \\ &\mathrm{Class\ II}: \quad (\alpha,\beta,\gamma) = \left(-\frac{1}{\sqrt{5}},-\frac{2}{\sqrt{5}},\frac{4}{\sqrt{5}}\right) \,, \quad (z,\theta,\zeta) = (4,1,-2)\,, \label{y2} \\ &\mathrm{Class\ III}: \quad (\alpha,\beta,\gamma) = \left(-\frac{1}{\sqrt{5}},-\frac{3}{\sqrt{5}},\frac{3}{\sqrt{5}}\right) \,, \quad (z,\theta,\zeta) = (4,1,-1)\,, \label{y3} \end{align} where $(z, \theta, \zeta)$ can be calculated by using \eqref{classIexp}, \eqref{classIIexp}, \eqref{classIIIexp}. We can determine which $(\alpha, \beta, \gamma)$ corresponds to which class by checking the equality and inequalities \eqref{con000}, \eqref{c2const}, and \eqref{constc3}. For all cases, the resistivity is linear in $T$ in the low temperature limit, as shown in \eqref{eq:res}. Class I \eqref{y1}, Class II \eqref{y2}, and Class III \eqref{y3} are shown in Fig.~\ref{fig:classIandcon}, Fig.~\ref{fig:classIIandcon} and Fig.~\ref{fig:classIIIandcon} respectively. The green and blue curves represent the momentum relaxations $k/\mu = 5$ and $k/\mu = 10$. We find that, in general, at high temperature, the pair creation term contributes more dominantly. This can be understood as a consequence of more active pair creation at high temperature. At strong momentum relaxation, naively, one may think that the second term is always suppressed because of the $k^2$ factor in the denominator. However, this is not obvious because all the other factors $q, C_H$, and $J_H$ are also implicitly functions of $k$. Indeed, we find that the dissipation term is dominant at low temperature in Class III in Fig.~\ref{fig:classIIIandcon}. In particular, Fig.~\ref{fig:classIandcon} is for Eq.~\eqref{y1}, which exhibits a robust linear-$T$ resistivity.
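These statements can be cross-checked by transcribing \eqref{classIexp}, \eqref{gaugemode}, \eqref{classIIIexp} and the powers \eqref{eq:res} (a Python sketch with $d=2$; the function name is ours, and the class II and III points are taken with $\alpha < 0$, as required by $\theta > 0$):

```python
from math import sqrt, isclose

d = 2

def low_T_power(cls, alpha, beta, gamma):
    """Low-temperature resistivity power x in rho ~ T^x, per class (eq:res)."""
    if cls in ('I', 'II'):
        z = (-2 + d*alpha**2 - d*beta**2)/(d*(alpha - beta)*beta)
        theta = d*alpha/beta
        if cls == 'I':
            return (2 + d - theta)/z
        zeta = d + 2*gamma/beta - (d - 2)*alpha/beta   # subleading mode (gaugemode)
        return (2 - zeta)/z
    kappa = -2*d/((d - 1)*alpha - gamma)               # class III (classIIIexp)
    z = (4*d - kappa**2*(d*alpha**2 - 2))/(2*d*(2 + alpha*kappa))
    theta = d**2*alpha/((d - 1)*alpha - gamma)
    return (d - theta - kappa*beta)/z

assert isclose(low_T_power('I',   -1/sqrt(3), -2/sqrt(3), sqrt(3)),   1)
assert isclose(low_T_power('II',  -1/sqrt(5), -2/sqrt(5), 4/sqrt(5)), 1)
assert isclose(low_T_power('III', -1/sqrt(5), -3/sqrt(5), 3/sqrt(5)), 1)
```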
In this case, the larger the momentum relaxation is, the more dominant the pair creation term becomes. Thus, we find that the pair creation term, the horizon value of $Z$ ($Z_H$), is responsible for the linear-$T$ resistivity at finite temperature. Furthermore, from Fig. \ref{fig:classIIIandcon}, we may understand why it is hard for the class III case to exhibit the robust linear-$T$ resistivity at finite temperature. As temperature increases, the dominant mechanism for the conductivity changes: at low temperature, the dissipation term dominates while at high temperature the pair creation term dominates. It will be more difficult to have universal physics from two different mechanisms.\footnote{If the momentum relaxation is weak, the dissipation term is always dominant so there is no crossing in Fig. \ref{fig:classIIIandcon}. We thank Blaise Gout\'{e}raux for pointing this out.} \subsection{$\alpha, \beta, \gamma$ dependence} \label{sec43} In the previous subsection, we have investigated the effect of the momentum relaxation on the resistivity in the finite temperature regime. In this subsection we want to investigate the effect of $\{ \alpha, \beta, \gamma \}$ or $ \{ z,\theta,\zeta \}$ on the resistivity. Because we found that, in general, large momentum relaxation is important to have a robust linear-$T$ resistivity, here we fix the momentum relaxation to be large, say $k/\mu =10$. For a systematic study we first need to choose a potential in \eqref{ZandJV2}. Thus, we have three cases depending on the sign of $\theta$ \begin{equation} \theta >0 \,, \quad \theta = 0 \,, \quad \theta <0 \,. \end{equation} This classification is equivalent to \begin{equation} \alpha <0 \,, \quad \alpha = 0 \,, \quad \alpha >0 \,, \end{equation} respectively, because $\alpha \kappa = - \frac{2\theta}{d}$ with a positive dilaton $\phi$ ($\kappa >0$). For a given potential we have the three classes I, II, and III explained in sec.~\ref{sec22}. The classes are determined by the parameter range.
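In practice, deciding the class of a given $(\alpha,\beta,\gamma)$ amounts to checking the equality \eqref{con000} and the inequalities \eqref{c2const} and \eqref{constc3} in turn, e.g. (a Python sketch with $d=2$; the tolerance and names are ours):

```python
from math import sqrt

def classify(alpha, beta, gamma, d=2, tol=1e-10):
    """Decide the IR class from (alpha, beta, gamma), transcribing the
    equality (con000) and the inequalities (c2const) and (constc3)."""
    if abs(gamma - ((d - 1)*alpha - d*beta)) < tol:
        return 'I'
    if 2*d - 2*(d - 1)*alpha/beta + 2*gamma/beta < 0:
        return 'II'
    if 2 + 2*d*beta/(gamma - (d - 1)*alpha) < 0:
        return 'III'
    return 'none'

assert classify(-1/sqrt(3), -2/sqrt(3), sqrt(3)) == 'I'
assert classify(-1/sqrt(5), -2/sqrt(5), 4/sqrt(5)) == 'II'
assert classify(-1/sqrt(5), -3/sqrt(5), 3/sqrt(5)) == 'III'
```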
This parameter range and the effect of $\alpha,\beta,\gamma$ on the resistivity are best explained by figures, from Fig.~\ref{fig:pthregion} to Fig.~\ref{bgdepp2}. Because there are common features in one class of figures (Fig.~\ref{fig:pthregion}, Fig.~\ref{fig:nthregion}, Fig.~\ref{fig:zthregion}) and another class of figures (Fig.~\ref{abgdepp3}, Fig.~\ref{abgdepp1}, Fig.~\ref{bgdepp2}), we explain them here for all of them. \begin{enumerate} \item (Fig.~\ref{fig:pthregion}, Fig.~\ref{fig:nthregion}, Fig.~\ref{fig:zthregion}) show the allowed region of $(\beta, \gamma)$ for a given $\alpha$. The red region, blue region, and the black line between them correspond to class II, class III, and class I respectively. \item The dashed lines in blue and red correspond to the parameters giving the linear-$T$ resistivity in the low temperature limit. Let us imagine that we pile up the dashed lines for every $\alpha$. Then, the blue (red) dashed lines form a blue (red) surface in $(\alpha, \beta, \gamma)$ space. This surface is nothing but the surface in Fig.~\ref{fig:wregion}. In other words, the dashed lines in the figures (Fig.~\ref{fig:pthregion}, Fig.~\ref{fig:nthregion}, Fig.~\ref{fig:zthregion}) are cross-sections of Fig.~\ref{fig:wregion} at a given $\alpha$. The region above (below) the dashed line corresponds to $x>1$ ($x<1$), where $x$ is defined in the relation $\rho \sim T^x$ in the low temperature limit. \item Therefore, the black dot in (Fig.~\ref{fig:pthregion}, Fig.~\ref{fig:nthregion}, Fig.~\ref{fig:zthregion}) gives the linear-$T$ resistivity as a class I case. Because all the other colored dots do not lie on the dashed line, they do not give the linear-$T$ resistivity in the low temperature limit. We chose these deviated points to investigate the effects of $\alpha, \beta, \gamma$. \item $\alpha$ increases as the purple dot $\rightarrow$ the black dot $\rightarrow$ the red dot in (Fig.~\ref{fig:pthregion} and Fig.~\ref{fig:nthregion}).
This was shown as an arrow in (Fig.~\ref{abgdepp3a}, Fig.~\ref{abgdepp1a}). \item $\beta$ increases as the blue dot $\rightarrow$ the black dot $\rightarrow$ the orange dot in (Fig.~\ref{fig:pthregion}, Fig.~\ref{fig:nthregion}, Fig.~\ref{fig:zthregion}). This was shown as an arrow in (Fig.~\ref{abgdepp3b}, Fig.~\ref{abgdepp1b}, Fig.~\ref{bgdepp2a}). \item $\gamma$ increases as the dark yellow dot $\rightarrow$ the black dot $\rightarrow$ the green dot in (Fig.~\ref{fig:pthregion}, Fig.~\ref{fig:nthregion}, Fig.~\ref{fig:zthregion}). This was shown as an arrow in (Fig.~\ref{abgdepp3c}, Fig.~\ref{abgdepp1c}, Fig.~\ref{bgdepp2b}). \item (Fig.~\ref{abgdepp3}, Fig.~\ref{abgdepp1}, Fig.~\ref{bgdepp2}) show the exponent $x$ in $\rho \sim T^x$. The colors of the curves are chosen to be the same as the colors of the dots in (Fig.~\ref{fig:pthregion}, Fig.~\ref{fig:nthregion}, Fig.~\ref{fig:zthregion}). Recall that the region above (below) the dashed line corresponds to $x>1$ ($x<1$), where $x$ is defined in the relation $\rho \sim T^x$ in the low temperature limit. Our results show that this tendency remains at finite temperature in general. \item As a consistency check of our numerics, we have confirmed that the numerical values of $x$ in the limit $T/\mu \rightarrow 0 $ for every curve in (Fig.~\ref{abgdepp3}, Fig.~\ref{abgdepp1}, Fig.~\ref{bgdepp2}) agree with the analytic expressions \eqref{eq:res}, i.e. in the low temperature limit $x \rightarrow 1$ for class I, $x \rightarrow (2-\zeta)/z$ for class II and $ x\rightarrow (d-\theta - \kappa \beta)/z$ for class III. For example, in Fig.~\ref{abgdepp3a}, the black, purple, and red curves correspond to class I, II and III respectively.
By using the values $(z, \theta,\zeta) = (\frac{21}{5}, \frac{7}{5}, -1)$ for the purple curve (class II) and $(z,\theta,\zeta, \beta, \kappa) = {(\frac{49}{18}, \frac{2}{3}, -\frac{4}{3}, -\frac{2}{\sqrt{3}} , \frac{10}{3\sqrt{3}})}$ for the red curve (class III), we find $x\sim 0.71$ and $x\sim 1.31$, respectively.\footnote{Here, we used \eqref{classIexp}, \eqref{classIIexp}, \eqref{classIIIexp} to compute $(z,\theta,\zeta, \beta, \kappa)$ from $(\alpha, \beta, \gamma)$.} These values agree with our numerical results. One may think that $x \sim 0.71$ is fine but $x \sim 1.31$ looks quite different from the value in our numerics (the red curve in the limit $T/\mu \rightarrow 0$). However, this is because the red curve belongs to class III. As we showed in Fig.~\ref{fig:classIIIandcon}, in class III the second term of \eqref{d2sigma} is dominant but the first term's contribution is still not negligible at low temperature. This contamination by the first term is reflected in our numerics. If we go to extremely low $T/\mu$, we find good agreement, which we have checked.\footnote{The same argument should work for class I, but we do not see a discrepancy similar to class III. This is because, in class I, the first term and the second term have the same power $x$ in $\rho \sim T^x$.} \end{enumerate} \paragraph{The third potential in \eqref{ZandJV2} ($\alpha<0$ and $\theta > 0$)} The reference point is the black dot in Fig.~\ref{fig:pthregionb}, which is \begin{equation} (\alpha,\beta,\gamma) = (\alpha_3,\beta_3,\gamma_3) := \left(-\frac{1}{\sqrt{3}}, -\frac{2}{\sqrt{3}}, \sqrt{3}\right) \ \ \Leftrightarrow \ \ \left(z, \theta, \zeta\right) = (3, 1, -1) \,. \end{equation} Fig.~\ref{fig:pthregion} shows the allowed region of $(\beta, \gamma)$ for a given $\alpha$: $\alpha = 1.4\alpha_3$ for Fig.~\ref{fig:pthregiona}, $\alpha=\alpha_3$ for Fig.~\ref{fig:pthregionb}, and $\alpha = 0.6\alpha_3$ for Fig.~\ref{fig:pthregionc}.
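Incidentally, the consistency-check values quoted above follow from direct substitution into the class II and class III formulas: with $\kappa\beta = \frac{10}{3\sqrt{3}}\left(-\frac{2}{\sqrt{3}}\right) = -\frac{20}{9}$,
\begin{equation}
x_{\rm II} = \frac{2-\zeta}{z} = \frac{2-(-1)}{21/5} = \frac{5}{7} \simeq 0.714\,, \qquad
x_{\rm III} = \frac{2-\theta-\kappa\beta}{z} = \frac{2-\frac{2}{3}+\frac{20}{9}}{49/18} = \frac{64}{49} \simeq 1.306\,.
\end{equation}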
For fixed $(\beta, \gamma) = (\beta_3, \gamma_3)$, $\alpha$ increases from the purple dot ($\alpha= 1.4\alpha_3$) to the black dot ($\alpha= \alpha_3$) and to the red dot ($\alpha= 0.6\alpha_3$); note that $\alpha_3<0$. For these three points, the change of $x$ in $\rho \sim T^x$ is shown in Fig.~\ref{abgdepp3a}. The value of $x$ increases as $\alpha$ increases at low temperature $T/\mu$, while it does not change at high temperature $T/\mu$. For fixed $(\alpha, \gamma) = (\alpha_3, \gamma_3)$, $\beta$ increases from the blue dot ($\beta= 1.4\beta_3$) to the black dot ($\beta= \beta_3$) and to the orange dot ($\beta= 0.6\beta_3$). For these three points, the change of $x$ in $\rho \sim T^x$ is shown in Fig.~\ref{abgdepp3b}. The value of $x$ decreases as $\beta$ increases in general, but it does not change much in the intermediate temperature range $1 < T/\mu < 3$. For fixed $(\alpha, \beta) = (\alpha_3, \beta_3)$, $\gamma$ increases from the dark yellow dot ($\gamma= 0.6\gamma_3$) to the black dot ($\gamma= \gamma_3$) and to the green dot ($\gamma= 1.4\gamma_3$). For these three points, the change of $x$ in $\rho \sim T^x$ is shown in Fig.~\ref{abgdepp3c}. The value of $x$ increases as $\gamma$ increases. \begin{figure}[] \centering \subfigure[$\alpha = -\frac{14}{10}\frac{1}{\sqrt{3}}$] {\includegraphics[width=4.831cm]{par33} \label{fig:pthregiona}} \subfigure[$\alpha = -\frac{1}{\sqrt{3}}$] {\includegraphics[width=4.831cm]{par32} \label{fig:pthregionb}} \subfigure[$\alpha = -\frac{6}{10}\frac{1}{\sqrt{3}}$] {\includegraphics[width=4.831cm]{par31} \label{fig:pthregionc}} \caption{Allowed region of $(\beta, \gamma)$ for a given $\alpha <0$. In Fig.~\ref{abgdepp3} we show the resistivity for the parameters corresponding to the dots in (a), (b), and (c). See items 1-5 in Sec.~\ref{sec43} for the meanings of colors, lines, and dots. In Fig.~\ref{shift} we show the resistivity for the squares in (a) and (c).
}\label{fig:pthregion} \end{figure} % \begin{figure}[] \centering \subfigure[$\alpha$ dependence] {\includegraphics[width=4.831cm]{figB6} \label{abgdepp3a}} \subfigure[$\beta$ dependence] {\includegraphics[width=4.831cm]{figB7} \label{abgdepp3b}} \subfigure[$\gamma$ dependence] {\includegraphics[width=4.831cm]{figB8} \label{abgdepp3c}} \caption{ The exponent $x$ in $\rho \sim T^x$. The colors of the curves are chosen to be the same as the colors of the dots in Fig.~\ref{fig:pthregion}. }\label{abgdepp3} \end{figure} In brief, these results can be summarized as follows. The region above (below) the dashed line corresponds to $x>1$ ($x<1$) in the relation $\rho \sim T^x$ in the low temperature limit, and our results show that this tendency remains at finite temperature in general. From Fig.~\ref{abgdepp3}, one might wonder whether, if we decrease $\alpha$ and increase $\gamma$, the shift-up and shift-down effects cancel each other, so that the linear-$T$ resistivity is obtained again at a point different from the black dot. Indeed, this is the case, as shown by the purple curve in Fig.~\ref{shift}. In this way, we have found that there is some range of parameters yielding the linear-$T$ resistivity. However, this does not work the other way around, i.e., first increasing $\alpha$ and then decreasing $\gamma$; see the red curve in Fig.~\ref{shift}. The difference between the two procedures is the class at which the end point arrives: the former belongs to class II and the latter to class III. As we showed in Fig.~\ref{fig:classIIIandcon}, the dominant mechanism for the conductivity changes as temperature increases in class III, so it is not easy to keep the linear-$T$ resistivity across this change. \begin{figure}[] \centering {\includegraphics[width=7cm]{figB12} \label{}} \caption{The exponent $x$ in $\rho \sim T^x$. The black curve corresponds to the black dot in Fig.~\ref{abgdepp3a}.
The purple (red) curve corresponds to the square in Fig.~\ref{abgdepp3a} (Fig.~\ref{abgdepp3c}).}\label{shift} \end{figure} Because of the very complicated coupled dynamics at finite temperature, it is not easy to obtain an analytic understanding of this observation. However, we speculate that it may have something to do with the vanishing of the potential, $V \sim e^{\alpha \phi}$, in the IR for $\alpha<0$. For $\alpha \geqslant0$ the potential diverges or remains constant. \paragraph{The first potential in \eqref{ZandJV2} ($\alpha>0$ and $\theta < 0$)} The reference point is the black dot in Fig.~\ref{fig:nthregion2}, which is \begin{equation} (\alpha,\beta,\gamma) = (\alpha_1,\beta_1,\gamma_1) := \left(\frac{1}{3\sqrt{3}}, -\frac{2}{3\sqrt{3}}, \frac{5}{3\sqrt{3}}\right) \ \ \Leftrightarrow \ \ \left(z, \theta, \zeta\right) = (5, -1, -3) \,. \end{equation} Fig.~\ref{fig:nthregion} shows the allowed region of $(\beta, \gamma)$ for a given $\alpha$: $\alpha = 0.6\alpha_1$ for Fig.~\ref{fig:nthregion1}, $\alpha=\alpha_1$ for Fig.~\ref{fig:nthregion2}, and $\alpha = 1.4\alpha_1$ for Fig.~\ref{fig:nthregion3}. Similarly to the previous case $\alpha<0$ ($\theta >0$), we consider six points around the reference black dot. For fixed $(\beta, \gamma) = (\beta_1, \gamma_1)$, $\alpha$ increases from the purple dot ($\alpha= 0.6\alpha_1$) to the black dot ($\alpha= \alpha_1$) and to the red dot ($\alpha= 1.4\alpha_1$). For fixed $(\alpha, \gamma) = (\alpha_1, \gamma_1)$, $\beta$ increases from the blue dot ($\beta= 1.4\beta_1$) to the black dot ($\beta= \beta_1$) and to the orange dot ($\beta= 0.6\beta_1$).
For fixed $(\alpha, \beta) = (\alpha_1, \beta_1)$, $\gamma$ increases from the dark yellow dot ($\gamma= 0.6\gamma_1$) to the black dot ($\gamma= \gamma_1$) and to the green dot ($\gamma= 1.4\gamma_1$). \begin{figure}[] \centering \subfigure[$\alpha = \frac{6}{10}\frac{1}{3\sqrt{3}}$] {\includegraphics[width=4.831cm]{par13} \label{fig:nthregion1}} \subfigure[$\alpha = \frac{1}{3\sqrt{3}}$] {\includegraphics[width=4.831cm]{par12} \label{fig:nthregion2}} \subfigure[$\alpha = \frac{14}{10}\frac{1}{3\sqrt{3}}$] {\includegraphics[width=4.831cm]{par11} \label{fig:nthregion3}} \caption{Allowed $(\beta, \gamma)$ region for a given $\alpha > 0$. In Fig.~\ref{abgdepp1} we show the resistivity for the parameters corresponding to the dots in (a), (b), and (c). See items 1-5 in Sec.~\ref{sec43} for the meanings of colors, lines, and dots. }\label{fig:nthregion} \end{figure} \begin{figure}[] \centering \subfigure[$\alpha$ dependence] {\includegraphics[width=4.831cm]{figB1} \label{abgdepp1a}} \subfigure[$\beta$ dependence] {\includegraphics[width=4.831cm]{figB2} \label{abgdepp1b}} \subfigure[$\gamma$ dependence] {\includegraphics[width=4.831cm]{figB3} \label{abgdepp1c}} \caption{The exponent $x$ in $\rho \sim T^x$. The colors of the curves are chosen to be the same as the colors of the dots in Fig.~\ref{fig:nthregion}. }\label{abgdepp1} \end{figure} As $\alpha$ increases, $\beta$ decreases, or $\gamma$ increases, the curves shift up at finite temperature; see Fig.~\ref{abgdepp1}. This is again consistent with the fact that the region above (below) the dashed line corresponds to $x>1$ ($x<1$), where $x$ is defined by the relation $\rho \sim T^x$ in the low temperature limit.
\paragraph{The second potential in \eqref{ZandJV2} ($\alpha=\theta = 0$)} The reference point is the black dot in Fig.~\ref{fig:zthregion}, which is \begin{equation} (\alpha,\beta,\gamma) = (\alpha_2,\beta_2,\gamma_2) := \left(0,-\frac{1}{\sqrt{3}},\frac{2}{\sqrt{3}}\right) \ \ \Leftrightarrow \ \ \left(z, \theta, \zeta\right) = (4 ,0 ,-2 ) \,. \end{equation} In this case, there is no variation in $\alpha$ since $\alpha=\theta = 0$. For fixed $(\alpha, \gamma) = (0, \gamma_2)$, $\beta$ increases from the blue dot ($\beta= 1.4\beta_2$) to the black dot ($\beta= \beta_2$) and to the orange dot ($\beta= 0.6\beta_2$). For fixed $(\alpha, \beta) = (0, \beta_2)$, $\gamma$ increases from the dark yellow dot ($\gamma= 0.6\gamma_2$) to the black dot ($\gamma= \gamma_2$) and to the green dot ($\gamma= 1.4\gamma_2$). Similarly to the other cases, as $\beta$ decreases or $\gamma$ increases, the curves shift up at finite temperature; see Fig.~\ref{bgdepp2}. \begin{figure}[] \centering {\includegraphics[width=5cm]{par21} \label{}} \caption{Allowed region of $(\beta, \gamma)$ for $\alpha = 0$. In Fig.~\ref{bgdepp2} we show the resistivity for the parameters corresponding to the dots. See items 1-5 in Sec.~\ref{sec43} for the meanings of colors, lines, and dots. }\label{fig:zthregion} \end{figure} \begin{figure}[] \centering \subfigure[$\beta$ dependence] {\includegraphics[width=5cm]{figB4} \label{bgdepp2a}} \subfigure[$\gamma$ dependence] {\includegraphics[width=5cm]{figB5} \label{bgdepp2b}} \caption{The exponent $x$ in $\rho \sim T^x$. The colors of the curves are chosen to be the same as the colors of the dots in Fig.~\ref{fig:zthregion}. }\label{bgdepp2} \end{figure} \section{Conclusion}\label{sec:conclusion} In this paper, we studied \kyr{the linear-$T$ resistivity up to finite temperature in} more general cases: axion-dilaton theories, or the EMD-Axion models. In order to study the resistivity from low to high temperature, we start with the low temperature limit, or IR limit.
In this limit, the resistivity can be analyzed by the scaling geometries supported by the asymptotic potential and couplings in the IR \eqref{IRpot} \begin{equation}\label{IRpotcon} V(\phi) \sim V_0 e^{\alpha \phi}\,, \qquad J(\phi) \sim e^{\beta \phi}\,, \qquad Z(\phi) \sim e^{\gamma \phi}\,. \end{equation} They are characterized by three parameters $\alpha, \beta, \gamma$, in terms of which the necessary conditions for the linear-$T$ resistivity have been well studied in~\cite{Gouteraux:2014hca} and are summarized in \eqref{eq:res}. In addition, there are many constraints on $\alpha, \beta, \gamma$ coming from physical conditions. For instance, the specific heat should be positive and our geometry should be stable under small perturbations \cite{Gouteraux:2014hca, Ahn:2017kvc}. The constraints are expressed in \eqref{con000}, \eqref{c2const}, \eqref{constc3}, \eqref{con00000yy}, and \eqref{thetad}. Taking all of these into account, we have explicitly identified a parameter region which yields the linear-$T$ resistivity in the low temperature limit. This region is displayed as a two-dimensional surface in the three-dimensional $(\alpha, \beta, \gamma)$ space; see Fig.~\ref{fig:wregion}. To study the resistivity at finite temperature, not only in the limit of low temperature, we have UV-completed $V(\phi)$ as \begin{equation}\label{ZandJV2con} \begin{split} &V(\phi) = \left\{ \begin{array}{lll} \frac{2d}{\alpha^2}\sinh^2\left(\frac{\alpha \phi}{2}\right) + (d+1)d\,, \, & \text{for} \, \theta <0\,,\\[0.5ex] (d+1)d\,, \, & \text{for} \, \theta = 0\,,\\[0.5ex] d\left(\frac{1}{\alpha^2}+2\left(d+1\right)\right) \sech(\alpha \phi)-d\left(\frac{1}{\alpha^2}+\left(d+1\right)\right)\sech^2\left(\alpha \phi \right), \, &\text{for} \, d>\theta>0 \,. \end{array} \right. \end{split} \end{equation} with the same $J(\phi)$ and $Z(\phi)$ as in \eqref{IRpotcon}. In contrast to the low temperature limit, no analytic solution is available, so we resort to numerical analysis.
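As a quick check that this UV completion indeed reduces to the IR form \eqref{IRpotcon}, consider the first case ($\theta<0$) and assume the dilaton runs to large $\alpha\phi$ in the IR; then
\begin{equation}
V(\phi) = \frac{d}{2\alpha^2}\left(e^{\alpha\phi} - 2 + e^{-\alpha\phi}\right) + (d+1)d \;\longrightarrow\; \frac{d}{2\alpha^2}\, e^{\alpha\phi}\,,
\end{equation}
so $V_0 = d/(2\alpha^2)$ up to subleading constants, while near the boundary ($\phi \rightarrow 0$) the potential approaches the constant $(d+1)d$, as appropriate for an asymptotically AdS UV.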
By fine-gridding the surface of the linear-$T$ resistivity in the low temperature limit, Fig.~\ref{fig:wregion}, we have systematically searched for the parameters yielding the linear-$T$ resistivity up to high temperature. We found that the point \begin{equation} \label{uytghu} (\alpha, \beta, \gamma) = \left(-\frac{1}{\sqrt{3}}, -\frac{2}{\sqrt{3}}, \sqrt{3}\right) \ \ \Leftrightarrow \ \ (z, \theta, \zeta) = (3, 1, -1) \,, \end{equation} and its small neighborhood give the linear-$T$ resistivity from low temperature to high temperature if the momentum relaxation is strong. \kyr{We found \eqref{uytghu} by a systematic brute-force search. However, unfortunately, we do not yet have a deep understanding of the {\it precise} physical mechanism behind \eqref{uytghu}. The values (and the size of the neighborhood) will change under different UV completions. Thus, our main point is not the values in \eqref{uytghu} but the qualitative discoveries we have obtained, which are summarized as follows. } \begin{enumerate} \item Large momentum relaxation is a necessary condition to obtain robust linear-$T$ resistivity from low temperature up to high temperature. See Fig.~\ref{fig:z3t1vm}. \item Among the two terms in the conductivity formula \eqref{sigma}, the pair-creation term (the first term) is responsible for the linear-$T$ resistivity. In terms of geometry, this is nothing but the horizon value of the coupling $Z(\phi)$ of the Maxwell term for $d=2$, as shown in \eqref{action} and \eqref{d2sigma}, i.e. \begin{equation} e^{\gamma \phi(r_h)} = r_h^{\gamma \kappa}\,. \end{equation} It is a very simple formula. However, it is not easy to build a good intuition for it, because $r_h$ is a function of $(T, \mu, k)$ determined by the complicated coupled dynamics of various fields. Only in the low temperature limit, where $r_h \sim T$, do things simplify.
\item In class III the dominant mechanism for the conductivity is switched from the second (dissipation) term to the first (pair-creation) term in the conductivity formula as temperature increases. See Fig.~\ref{fig:classIIIandcon}. Thus, we may expect that it is not easy to have a universal property, the linear-$T$ resistivity, in class III because of this mechanism change. Indeed, the parameters we found for the linear-$T$ resistivity belong to classes I and II. \item In class I, since both deformations (axion and charge) are marginal, the condition $T \ll$ Max($\mu, k$) is enough for the geometry to be captured by the IR scaling geometry. Thus, by increasing the momentum relaxation $k$, it is easier to obtain a larger range of linear-$T$ resistivity than in the other classes.\footnote{We thank Blaise Gout\'{e}raux for pointing this out.}~\footnote{\kyr{Not all parameters in class I show this behavior. This may be partly understood from the fact that the precise value of $\mu$ (not just its order of magnitude) depends on the whole geometry, and hence on the UV completion. When we say ``high temperature'', we do not mean $T/\mu \gg 1$; we mean, for example, $T/\mu = 4$ or $T/\mu = 6$, so the numerical value of the $O(1)$ number matters. Thus, we cannot simply judge whether $T/\mu > 1$ or $T/\mu < 1$ without a specific UV completion. }} \item We also discussed how the conductivity behavior changes as $\alpha, \beta, \gamma$ change for every potential in section \ref{sec43}. We speculate that the third potential in \eqref{ZandJV2con} may be favorable for the robust $T$ dependence of the resistivity because the potential $V \sim e^{\alpha \phi}$ vanishes in the IR for $\alpha < 0$. For $\alpha \geqslant0$ the potential diverges or remains constant. \end{enumerate} It has been shown in~\cite{Jeong:2018tua} that the Gubser-Rocha model with a linear axion exhibits the linear-$T$ resistivity up to high temperature.
This model is also included in our set-up, as shown in \eqref{GRwithA}. It corresponds to the parameter set $(\alpha, \beta, \gamma) = (1/\sqrt{3}, 0, -1/\sqrt{3})$, which is the green dot in Fig.~\ref{fig:Gub}. It belongs to the first potential in \eqref{ZandJV2con} and to class I.\footnote{It appears to be in class III because it lies in the blue region. However, the boundary between the blue and red regions is excluded in the definitions of classes II and III. The green dot is a very special point, where $z \rightarrow \infty$, $\theta \rightarrow -\infty$ and $\eta := - \theta/z \rightarrow 1$. {The extension of the green dot in the $\alpha$ direction corresponds to the case in section 3.2 of~\cite{Jeong:2018tua}.}} In this case, the DC conductivity ($\sigma_{DC}$) is \begin{figure}[] \centering {\includegraphics[width=7cm]{par1G} \label{adepp3}} \caption{Allowed $(\beta, \gamma)$ region for $\alpha = \frac{1}{\sqrt{3}}$. The black line represents the class I case. The red, blue, and green regions stand for class II, class III, and class IV, respectively. The dashed red (blue) line shows $(\alpha, \beta, \gamma)$ for the linear-$T$ resistivity in the low temperature limit of class II (class III). The green dot is at $(\alpha, \beta, \gamma) = (\frac{1}{\sqrt{3}}, 0, - \frac{1}{\sqrt{3}})$.}\label{fig:Gub} \end{figure} \begin{equation} \label{conduct1} \sigma_{DC} = \sqrt{1+{\tilde{Q}}} + \frac{\sqrt{1+{\tilde{Q}}}}{(k/\mu)^2} \,, \end{equation} where ${\tilde{Q}}$ is a complicated function of $(T,\mu,k)$. Thus, the first (pair-creation) and second (dissipation) terms contribute in the same way through ${\tilde{Q}}$. However, it has been shown that the resistivity is linear in $T$ up to high temperature only for large momentum relaxation, $k/\mu \gg1$, so the first term dominates. This agrees with what we have found in this paper. In order to identify the power $x$ in $\rho \sim T^x$ in a more precise way, we made a plot of $\partial \ln \rho/\partial \ln T$.
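This diagnostic is easy to reproduce numerically. The sketch below uses an assumed toy profile $\rho(T)=\rho_0 + cT$ (the coefficients are illustrative, not fits to our model) and shows how the logarithmic derivative extracts the exponent:

```python
import numpy as np

# toy resistivity with a residual term: rho(T) = rho0 + c*T (assumed values)
rho0, c = 0.2, 1.0
T = np.logspace(-3, 1, 400)
rho = rho0 + c * T

# the diagnostic used in the text: x(T) = d ln(rho) / d ln(T)
x = np.gradient(np.log(rho), np.log(T))

# for a pure power law rho ~ T^x the diagnostic returns x directly;
# with a residual rho0 it gives x = c*T/(rho0 + c*T), which is well
# below 1 at low T and approaches 1 only once T >> rho0/c
print(x[0], x[-1])
```

This also illustrates why a residual resistivity masks a linear-$T$ regime in this measure: the diagnostic reads $x<1$ at low $T$ even though $\rho-\rho_0$ is exactly linear.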
This method is good enough to find linear-$T$ behavior all the way from zero $T$ to high $T$. However, if there is a residual resistivity at zero $T$ (i.e., $\rho = \mathrm{constant} + T^x$), or if the linear-$T$ resistivity sets in only above some temperature $T_1$ (i.e., $\rho = \mathrm{constant} + T^x$ for $T > T_1 > 0$), our measure $\partial \ln \rho/\partial \ln T$ may not be able to capture it. Thus, under such more relaxed conditions, which may be relevant for some phenomenology (for example, \cite{Cooper603}), there may be a larger parameter regime allowing the linear-$T$ resistivity in our model. Investigating the conductivity at high temperature involves the full bulk geometry, so in general it depends on the UV completion of the potential and couplings. In this paper, as a first step, we used a kind of minimal UV completion, in the sense that the potential $V$ depends on only one parameter, $\alpha$. As reviewed in Appendix \ref{appA}, other UV completions are also possible. It would be interesting to find further conditions constraining the UV completion from other phenomenological inputs or from theoretical consistency, such as a top-down approach. Or, from a phenomenological perspective, we may ask what kind of UV completion can allow the linear-$T$ resistivity up to high temperature. \acknowledgments We would like to thank Blaise Gout\'{e}raux, Yi Ling and Zhuo-Yu Xian for valuable discussions and comments. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT $\&$ Future Planning (NRF-2017R1A2B4004810) and by a GIST Research Institute (GRI) grant funded by GIST in 2019 and 2020.
\section{Introduction} In this paper, the tick dynamics of stock prices observed at the ultra-high-frequency level are modeled based on the symmetric marked Hawkes process, and the empirical properties of the price dynamics are examined. The simple self- and mutually excited Hawkes model for the price dynamics with a unit jump size incorporates the stylized facts of ultra-high-frequency financial data, such as market microstructure noise and order clustering. On the other hand, random size jumps, i.e., not the constant jump of the simple Hawkes model, are observed in the tick structure of equity markets, particularly when the ratio between the stock price and the minimum tick size is high. By combining the Hawkes model with a mark structure, which carries additional information for each event, a more realistic model of the tick price dynamics is proposed to deal with the random size jumps. Recent studies on (ultra-)high-frequency data and the market microstructure have developed in several directions. There is a growing literature on the financial theory of the market microstructure and the limit order book \citep{Rosu2009}, and on the role of algorithmic trading at a high-frequency rate \citep{Foucault2012,Chaboud2014,Hoffmann2014}. A number of studies focused on the reduced form or stochastic modeling of the limit order dynamics and order executions; the reader may refer to \cite{Lo2002}, \cite{Cont2010}, \cite{Malo2012}, \cite{Cont2013}, \cite{Abergel2013}. The statistical properties of ultra-high-frequency data are also an important subject because they exhibit characteristics distinct from the macro price dynamics. For example, care should be taken when applying typical statistical methods to ultra-high-frequency data, and when computing the realized volatility \citep{ABDL}, due to microstructure noise, which refers to the mean reverting properties of price processes at the high-frequency level.
Previous studies (\cite{Ait2005}, \cite{Zhang2005}, \cite{Hansen} and \cite{Ait2011}) measured the volatility of the return in the presence of market microstructure noise. \cite{Huth2014} examined the lead/lag relationship between asset prices and showed that there are significant cross correlations in futures/stocks at high frequency, in contrast with the daily data, where cross correlations are negligible. The lead/lag relationships among the international index futures of different countries were also observed by \cite{Alsayed2014}. For a semi-Markov model with price jumps to explain the microstructure noise, consult \cite{Fodra}. The financial asset price time series at the ultra-high-frequency level exhibits several autocorrelations that are not observed on a daily basis. Under the tick structure with a minimum tick size of price variation, the price dynamics is a pure jump process that consists of up jumps and down jumps. First, the frequency of up movements tends to increase with increasing frequency of the past down movements, and vice versa. This causes a mean reverting property in the price dynamics, even though the correlations last for less than a few seconds. Second, there are also autocorrelations between movements of the same direction. This causes volatility clustering that is different from the clustering on the macro level, which is typically modeled by GARCH \citep{Bollerslev1986} or the stochastic volatility model \citep{Heston1993}, because the clustering properties in the tick structure last for only a few seconds. The durations of these autocorrelations are much shorter than those observed on a daily level. These properties are well incorporated into the Hawkes model, which belongs to the class of point processes and was introduced by \cite{Hawkes1971}. Consequently, there has been an increase in related work modeling price dynamics based on the Hawkes process.
The bivariate Hawkes process was introduced to model buy and sell order arrivals, and the impact of the orders on future prices was examined \citep{Hewlett2006}. Generalized Hawkes models were used to study the dependence between the occurrence times of trades and changes to the mid quote, as well as the dependences between trading days \citep{Bowsher2007}. \cite{Large2007} examined market resilience after trades using limit order book data and mutually excited Hawkes models. \cite{Bacry2012} explained the non-parametric estimation method for the symmetric Hawkes process based on high-frequency futures data. Based on the mutually exciting Hawkes process, \cite{Bacry2013} suggested a mathematical framework that incorporates the market microstructure noise and the Epps effect, i.e., the correlation between the returns of two different assets at high sampling frequency. The trade clustering properties of the price dynamics on the micro level were well incorporated by the self-excited Hawkes process \citep{Fonseca2014Self}. The formulas for the moments and correlation function of the self and mutually excited Hawkes process were derived by \cite{Fonseca2014}. A multivariate Hawkes process was introduced for the up and down price movements and buy and sell orders to explain the stylized facts of the market impact and microstructure \citep{Bacry2014}. \cite{Zheng2014} suggested a multivariate constrained Hawkes process to describe the dynamics of the bid and ask prices. \cite{LeeSeo} focused on daily and intraday volatility estimation based on the symmetric Hawkes process and compared the result with the realized volatility. For more about kernel estimation in the Hawkes model, consult \cite{Bacry2016}, and for the correlation and lead-lag relationship in a multi-asset model using the Hawkes process, consult \cite{Fonseca2016}. The previous studies focused mainly on the simple point process model, where the jump size is constant.
In the present study, the existing simple Hawkes model is extended to the marked Hawkes model to handle more realistic price movements in stock markets, where random size jumps (marks) are introduced. A marked point process was introduced based on the ACM-ACD model, where the points are the transaction times and the marks are information on the transactions \citep{Russell2005}. The Hawkes process is adopted here to describe the aftermath effects of the marks, which is more convenient for deriving useful formulas. The future effects of the marks depend on the absolute sizes of the marks, and hence a linear impact function is introduced to deal with the future impact of a mark. Our empirical study shows that the estimates of the slope parameter of the impact function are significantly positive in stock markets. This suggests that larger marks tend to magnify the future intensities more than smaller marks. For the distribution of the mark, a specific distribution is not assumed in this paper; instead, the empirical distribution is used for the estimations and the volatility calculation. Our model is not limited to independent marks, as the empirical studies show an intensity-dependent mark distribution. The remainder of the paper is organized as follows. In Section~\ref{Sect:point}, the marked Hawkes model is proposed to describe the tick price dynamics of equities. Section~\ref{Sect:simul} and Section~\ref{Sect:empirical} present the simulation results and empirical studies, respectively. Section~\ref{Sect:concl} concludes the paper. The mathematical proofs are reported in the Appendix. \section{Symmetric marked Hawkes model}\label{Sect:point} \subsection{Marked point process} This subsection introduces the basic concepts of marked point processes. The mathematical framework is in line with \cite{Daley}.
With a given complete separable metric state space, $\mathcal X$, a point process is a measure that counts the number of random events occurring in an open set belonging to the $\sigma$-field of Borel sets of $\mathcal X$, $\mathcal B_{\mathcal X}$. To deal with random events, a probability space $(\Omega, \mathbb P)$ is introduced. A point process $N(\omega)$ for $\omega \in \Omega$, or simply $N$, is a counting measure on $\mathcal B_{\mathcal X}$, i.e., $N(A)=N(\omega,A)$ is a random non-negative finite integer for any bounded $A \in \mathcal B_{\mathcal X}$. In other words, $N(A)$ is a random variable that counts the number of events in $A$. From now on, the term $\omega$ is omitted for notational simplicity. A marked point process is a more complex model that describes not only the locations of random events but also additional information, called the mark, attached to each event. A marked point process is a point process $\{(t_\ell, k(t_\ell) )\}$ with locations $t_\ell \in \mathbb R$ and marks $k(t_\ell)$ in a mark space $\mathcal K$. The location space is not necessarily $\mathbb R$, but in this paper the space is taken to be the real line to model price movements over time. In addition, the mark space $\mathcal K = \mathbb Z^+$ is the space of price jump sizes. This means that the absolute jump sizes of the price movements are represented by positive integer multiples of a minimum jump size, $\delta$. For price dynamics modeling, there are two marked point processes, $N_1$ and $N_2$, which represent the up and down movements, respectively. This study assumes that the probabilistic properties of $k_1$ and $k_2$, the mark sizes of $N_1$ and $N_2$, respectively, are the same. For each $i=1,2$, the ground process $N_{gi}(\cdot) = N_i (\cdot,\mathbb Z^+)$, which is the marginal process for the locations, is itself a point process. Each ground process $N_{gi}$ describes the arrival times of up or down price movements.
The price process is assumed to be represented by a two dimensional marked Hawkes point process, which belongs to the class of marked point processes. Let $\lambda_{gi}(t)$ be the conditional intensity of $N_{gi}$ upon the filtration $\mathcal F_{t-}$, the minimal $\sigma$-algebra generated by the history of the marked point processes $N_1$ and $N_2$ on $(-\infty, t)$. The conditional intensities are $\mathcal F_t$-adapted stochastic processes which, heuristically speaking, satisfy $$\lambda_{gi}(t)\mathrm{d} t = \mathbb E[N_{gi}(t+\mathrm{d} t) -N_{gi}(t)| \mathcal F_{t-}].$$ Owing to the discontinuity, the intensities may not be unique at the discontinuities; $\lambda_{gi}(t)$ are taken to be the left continuous modifications in general, but if the intensities are used as integrators of stochastic integrals, then the right continuous modifications of $\lambda_{gi}$ are used. The marked Hawkes model for the price dynamics does not need to be symmetric in general. However, for computational ease, it is convenient to assume that the joint distributions of $(k_1, \lambda_{g1})$ and $(k_2, \lambda_{g2})$ are the same; in particular, $\mathbb E[k_1]=\mathbb E[k_2]$. The marked Hawkes process assumption implies that, for each $i=1,2$, the intensities of the ground processes satisfy \begin{align} \lambda_{gi}(t) &= \mu + \sum_{j=1,2} q_{ij} \int_{(-\infty,t)\times\mathbb Z^+} g_{ij}(k_j) \phi_{ij}(t-u) N_j(\mathrm{d} u \times \mathrm{d} k_j)\label{Eq:ground}\\ &= \mu + \sum_{j=1,2} q_{ij} \sum_{-\infty < u_\ell < t} g_{ij}(k_j(u_\ell)) \phi_{ij}(t-u_\ell)\label{Eq:ground2} \end{align} where $u_\ell$ are the event times. The mark size is represented by $k_j$ in the integration form with the counting measure, and by $k_j(u_\ell)$ in the summation form with the associated event time $u_\ell$.
With the counting measure, inside the integration, $k_j$ can be considered a function on $\mathbb R \times \mathbb Z^+$ such that $(u, k_j) \rightarrow k_j$. If the occurrence time of $k_j$ needs to be specified, then the mark is written as $k_j(u)$ as in Eq.~\eqref{Eq:ground2} with some time notation, $u$, indicating that $k_j(u)$ takes place at time $u$. The Hawkes processes generated by the above intensities are defined by the ancestor-offspring argument (as long as the intensities are finite). Immigrant ancestors of type $i$ with marks $k$ arrive at the system from outside according to a Poisson process with rate $\mu$. These ancestors generate offspring, and the generated offspring become new ancestors that generate further offspring. An ancestor of type $j$ born at time $u$ with mark $k$, whether immigrant or not, generates type $i$ offspring according to a Poisson process with rate $q_{ij}g_{ij}(k)\phi_{ij}(t-u)$ at time $t$. The Poisson rate is amplified by $g_{ij}(k)$ at time $u$ and decays through $\phi_{ij}(t-u)$ as $t$ increases. A normalization is used to determine $q_{ij}, g_{ij}(k)$ and $\phi_{ij}(t-u)$, since the decomposition of the product $q_{ij}g_{ij}(k)\phi_{ij}(t-u)$ is not unique. We assume the exponential decay kernel $\phi_{ij}(t-u) = \phi(t-u) = \beta \mathrm e^{-\beta(t-u)}, \beta>0,$ which is normalized such that $$\int_0^\infty \phi(\tau) \mathrm{d} \tau = 1.$$ The impact of the mark, $g_{ij}$, is also normalized in the sense that $\mathbb E [g_{ij}(k)] = 1$. In addition, $q_{ij}>0$ are called the branching coefficients and $\mathbf Q := \{q_{ij}\}_{i,j=1,2}$ is called the branching matrix. With an exponential decay kernel, $(\lambda_{g1}, \lambda_{g2})$ is Markovian, and the calculations in this paper depend largely on this property. This paper considers the case where the distribution of the mark of type $i$ may depend on $\lambda_{gi}$, and hence the conditional distribution of the mark is represented by $f(k_i|\lambda_{gi}(t))$.
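As a concrete illustration of Eq.~\eqref{Eq:ground2} with the exponential kernel, the left limits of the ground intensities can be evaluated directly from an event history. The following Python sketch is our own, not from the paper; the generic branching matrix and impact function are illustrative placeholders.

```python
import math

def ground_intensities(t, events, mu, q, g, beta):
    """Left limits of the ground intensities at time t, following the
    summation form of the intensity with the exponential kernel
    phi(s) = beta * exp(-beta * s).

    events : list of (u, j, k) -- event time u, type j in {0, 1}, mark k,
             sorted by u.
    q      : 2x2 branching matrix {q_ij}.
    g      : impact function g(i, j, k).
    """
    lam = [mu, mu]
    for u, j, k in events:
        if u >= t:                      # left limit: strictly past events only
            break
        decay = beta * math.exp(-beta * (t - u))
        for i in (0, 1):
            lam[i] += q[i][j] * g(i, j, k) * decay
    return lam
```

Because the kernel is exponential, the same quantities can alternatively be updated recursively from one event to the next in $O(1)$ per event, which is the Markov property mentioned above.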
On the other hand, this paper does not assume a specific parametric distribution for the mark size, except when paths are generated in the simulation study. The estimation procedures and volatility analysis are performed without specifying the mark distribution. The counting measure $N_i$ can be interpreted as a stochastic jump process. To apply the stochastic integration theory later, without notational confusion, the associated jump process is defined as $$ N_i(t) = \int_{(0,t]\times\mathbb Z^+} k_i N_i(\mathrm{d} u \times \mathrm{d} k_i) $$ where on the l.h.s., the stochastic process $N_i$ is represented by the sole parameter $t$, and on the r.h.s., the measure $N_i$ is represented by both the location $t$ and the mark size $k$. In the stochastic process representation, $N_i(t)$ counts the number of events over $(0,t]$ with weight $k_i$. The jump processes $N_i(t)$ are considered to be right continuous with left limits. Similarly, the ground processes also have the stochastic process representations: $$ N_{gi}(t) = \int_{(0,t]\times\mathbb Z^+} N_i(\mathrm{d} u \times \mathrm{d} k_i) $$ which counts the number of events over $(0,t]$ without considering the jump size $k_i$. As a jump process, the ground process is also regarded as right continuous with left limits. \subsection{Linear impact function} A linear impact function of the mark is introduced. The impact function $g_{ij}$ and the distribution of the mark should satisfy some additional criteria so that the marked Hawkes process does not blow up.
\begin{assumption}\label{Assumption} (i) The ground intensities $\lambda_{gi}$ are assumed to be stationary.\\ (ii) The impact functions have the same formula for all $i,j = 1,2$ and are linear with a slope parameter $\eta$: $$ g(k) := g_{ii}(k) = g_{ji}(k)= \frac{ 1+(k-1)\eta }{\mathbb E[1+(k-1)\eta]}.$$ (iii) The branching matrix is symmetric with $$q_s:=q_{11}=q_{22}= \frac{\alpha_s}{\beta} \mathbb E[1 + (k-1) \eta], \quad q_c:=q_{12}=q_{21}=\frac{\alpha_c}{\beta} \mathbb E[1 + (k -1) \eta].$$ (iv) For $i=1,2$, we assume \begin{align} \mathbb E[k_i \lambda_{gi}(t)] = K_{i \lambda_{gi}} \mathbb E[\lambda_{gi}(t)] \label{Eq:K} \end{align} for some constant $K_{i \lambda_{gi}}>0$ and \begin{align} \{1 + ( K_{i \lambda_{gi}} -1)\eta\} \left(\frac{\alpha_s}{\beta} + \frac{\alpha_c}{\beta}\right) < 1.\label{Eq:condition} \end{align} In addition, the joint distributions of $(k_1, \lambda_{g1})$ and $(k_2, \lambda_{g2})$ are the same and we have $K_{1 \lambda_{g1}} = K_{2 \lambda_{g2}}$. \end{assumption} The condition of Eq.~\eqref{Eq:condition} is similar to the existence condition of the simple symmetric Hawkes process, except for the additional factor $\{1 + ( K_{i \lambda_{gi}} -1)\eta\}$. Indeed, Assumption~\ref{Assumption} (ii)$\sim$(iv) lead to the weak stationarity of $\lambda_{gi}$.
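As an illustration, the stability condition Eq.~\eqref{Eq:condition} is easy to check numerically for given parameters. The sketch below is our own helper (not from the paper; the values in the usage are illustrative), and it also returns the implied long-run mean intensity derived below in Eq.~\eqref{Eq:Elambdag1}.

```python
def stationary_mean_intensity(mu, alpha_s, alpha_c, beta, K, eta):
    """Check the stability condition and return the long-run mean of the
    ground intensities, mu * beta / (beta - (alpha_s + alpha_c) * factor).
    K plays the role of the constant K_{i lambda_{gi}}."""
    factor = 1.0 + (K - 1.0) * eta                 # mark-impact adjustment
    branching = factor * (alpha_s + alpha_c) / beta
    if branching >= 1.0:
        raise ValueError("stability condition violated: branching >= 1")
    return mu * beta / (beta - (alpha_s + alpha_c) * factor)
```

Note that the returned mean equals $\mu/(1-\text{branching})$, the usual immigrant-rate-over-one-minus-branching-ratio form.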
Under Eq.~\eqref{Eq:ground} with the exponential decay function, we have \begin{align} \lambda_{g1}(t) = \mu &+ q_s \int_{(-\infty,t)\times\mathbb Z^+} g(k_1) \beta \mathrm e^{-\beta(t-u)} N_1(\mathrm{d} u\times \mathrm{d} k_1) \nonumber\\ &+ q_c \int_{(-\infty,t)\times\mathbb Z^+} g(k_2) \beta \mathrm e^{-\beta(t-u)} N_2(\mathrm{d} u\times \mathrm{d} k_2), \label{Eq:lambdag1}\\ \lambda_{g2}(t) = \mu &+ q_c \int_{(-\infty,t)\times\mathbb Z^+} g(k_1) \beta \mathrm e^{-\beta(t-u)} N_1(\mathrm{d} u\times \mathrm{d} k_1) \nonumber\\ &+ q_s \int_{(-\infty,t)\times\mathbb Z^+} g(k_2) \beta \mathrm e^{-\beta(t-u)} N_2(\mathrm{d} u\times \mathrm{d} k_2).\label{Eq:lambdag2} \end{align} The point process $(N_1, N_2)$ defined under the above ground intensities is then a two dimensional marked self and mutually excited Hawkes process with a linear impact function. Assuming that the integrand of the following formula is integrable, the predictable finite variation process $$ \int_\cdot^t \mathbb E[ g(k_i) |\lambda_{gi}(u) ]\lambda_{gi}(u) \beta \mathrm e^{-\beta(t-u)} \mathrm{d} u$$ is a compensator for $$\int_{(\cdot,t) \times \mathbb Z^+} g(k_i) \beta \mathrm e^{-\beta(t-u)} N_i(\mathrm{d} u \times \mathrm{d} k_i),$$ and hence $$ \int_{(\cdot,t) \times \mathbb Z^+} g(k_i) \beta \mathrm e^{-\beta(t-u)} N_i(\mathrm{d} u \times \mathrm{d} k_i) - \int_\cdot^t \mathbb E[ g(k_i) |\lambda_{gi}(u) ]\lambda_{gi}(u) \beta \mathrm e^{-\beta(t-u)} \mathrm{d} u$$ is a martingale.
Therefore, by taking the unconditional expectation for the ground intensity formulas in Eqs.~\eqref{Eq:lambdag1}~and~\eqref{Eq:lambdag2}, we have \begin{align*} \mathbb E[\lambda_{g1}(t)] = \mu &+ q_s \int_{-\infty}^{t} \mathbb E[\mathbb E[ g(k_1) |\lambda_{g1}(u) ]\lambda_{g1}(u)]\beta \mathrm e^{-\beta(t-u)}\mathrm{d} u \\ &+ q_c \int_{-\infty}^{t}\mathbb E[ \mathbb E[g(k_2)| \lambda_{g2}(u) ]\lambda_{g2}(u)]\beta \mathrm e^{-\beta(t-u)}\mathrm{d} u,\\ \mathbb E[\lambda_{g2}(t)] = \mu &+ q_c \int_{-\infty}^{t}\mathbb E[\mathbb E[ g(k_1) |\lambda_{g1}(u) ]\lambda_{g1}(u)]\beta \mathrm e^{-\beta(t-u)}\mathrm{d} u \\ &+ q_s \int_{-\infty}^{t} \mathbb E[ \mathbb E[g(k_2)| \lambda_{g2}(u) ]\lambda_{g2}(u)]\beta \mathrm e^{-\beta(t-u)}\mathrm{d} u \end{align*} and by Eq.~\eqref{Eq:K}, $$ \mathbb E[\mathbb E[ g(k_i) |\lambda_{gi}(u) ]\lambda_{gi}(u)] = \mathbb E[g(k_i)\lambda_{gi}(u) ] = \frac{\{ 1 + (K_{i \lambda_{gi}}-1)\eta \} \mathbb E[\lambda_{gi}(u)]}{ 1+(\mathbb E[k]-1)\eta}$$ where $\mathbb E[k] = \mathbb E[k_i]$ since $k_1$ and $k_2$ have the same distributional property. 
We write \begin{align} \mathbb E[\lambda_{g1}(t)] = \mu &+ \frac{\alpha_s}{\beta} \int_{-\infty}^{t} \{1 + (K_{1 \lambda_{g1}}-1)\eta\}\mathbb E[\lambda_{g1}(u)] \beta \mathrm e^{-\beta(t-u)}\mathrm{d} u \nonumber \\ &+ \frac{\alpha_c}{\beta} \int_{-\infty}^{t} \{1 + ( K_{2 \lambda_{g2}}-1)\eta\}\mathbb E[\lambda_{g2}(u)]\beta \mathrm e^{-\beta(t-u)}\mathrm{d} u,\label{Eq:integ_sys1}\\ \mathbb E[\lambda_{g2}(t)] =\mu &+ \frac{\alpha_c}{\beta} \int_{-\infty}^{t} \{1 + ( K_{1 \lambda_{g1}}-1)\eta\}\mathbb E[\lambda_{g1}(u)]\beta \mathrm e^{-\beta(t-u)}\mathrm{d} u \nonumber \\ &+ \frac{\alpha_s}{\beta} \int_{-\infty}^{t} \{1 + ( K_{2 \lambda_{g2}}-1)\eta\}\mathbb E[\lambda_{g2}(u)]\beta \mathrm e^{-\beta(t-u)}\mathrm{d} u \label{Eq:integ_sys2} \end{align} or, as a system of linear differential equations, \begin{align*} \begin{bmatrix} \dfrac{\mathrm{d} \mathbb E[\lambda_{g1}(t)]}{\mathrm{d} t} \\ \dfrac{\mathrm{d} \mathbb E[\lambda_{g2}(t)]}{\mathrm{d} t} \end{bmatrix} = \begin{bmatrix} \alpha_s \{1 + ( K_{1 \lambda_{g1}}-1)\eta\} - \beta & \alpha_c \{1 + ( K_{1 \lambda_{g1}}-1)\eta\} \\ \alpha_c \{1 + ( K_{1 \lambda_{g1}}-1)\eta\} & \alpha_s \{1 + ( K_{1 \lambda_{g1}}-1)\eta\} - \beta \end{bmatrix} \begin{bmatrix} \mathbb E[\lambda_{g1}(t)] \\ \mathbb E[\lambda_{g2}(t)] \end{bmatrix} + \begin{bmatrix} \beta \mu \\ \beta \mu \\ \end{bmatrix} \end{align*} where $K_{1 \lambda_{g1}}= K_{2 \lambda_{g2}}$ is used.
The eigenvalues of the system are $$ (\xi_1, \xi_2) = \left(-\beta+ (\alpha_s -\alpha_c)\{1 + ( K_{1 \lambda_{g1}}-1)\eta\}, -\beta+ (\alpha_s + \alpha_c)\{1 + ( K_{1 \lambda_{g1}}-1)\eta\}\right)$$ and the solution of the above system is \begin{align*} \begin{bmatrix} \mathbb E[\lambda_{g1}(t)] \\ \mathbb E[\lambda_{g2}(t)] \end{bmatrix} =\frac{-\lambda_{g1}(0)+\lambda_{g2}(0)}{2}\mathrm e^{\xi_1 t} \begin{bmatrix}-1\\1\end{bmatrix} +\frac{\lambda_{g1}(0)+\lambda_{g2}(0)}{2}\mathrm e^{\xi_2 t} \begin{bmatrix}1\\1\end{bmatrix} -\frac{\mu \beta}{\xi_2} \left(1-\mathrm e^{\xi_2 t} \right) \begin{bmatrix}1\\1\end{bmatrix}. \end{align*} If Eq.~\eqref{Eq:condition} holds, then $\xi_1, \xi_2 < 0$ and the solution converges to a constant as $t \rightarrow \infty$. A similar argument is applied to the second moments of $\lambda_{g1}(t)$ and $\lambda_{g2}(t)$ and $\lambda_{gi}(t)$ are weakly stationary at least in the long-run or by assuming that $\lambda_{gi}(0)$ are equal to their long-run expectations. Note that by the symmetry and the stationarity of $\lambda_{gi}$, Eqs~\eqref{Eq:integ_sys1} and \eqref{Eq:integ_sys2} lead directly to \begin{align*} \mathbb E[\lambda_{g1}(t)] = \mu &+ \frac{\alpha_s}{\beta} \{1 + (K_{1 \lambda_{g1}}-1)\eta\}\mathbb E[\lambda_{g1}(t)] + \frac{\alpha_c}{\beta} \{1 + (K_{1 \lambda_{g1}}-1)\eta\}\mathbb E[\lambda_{g2}(t)],\\ \mathbb E[\lambda_{g2}(t)] = \mu &+ \frac{\alpha_c}{\beta} \{1 + (K_{1 \lambda_{g1}}-1)\eta\}\mathbb E[\lambda_{g1}(t)] + \frac{\alpha_s}{\beta} \{1 + (K_{1 \lambda_{g1}}-1)\eta\}\mathbb E[\lambda_{g2}(t)]. \end{align*} and $$ \left( \mathbf{I} - \{1 + ( K_{1 \lambda_{g1}} -1)\eta\} \begin{bmatrix} \dfrac{\alpha_s}{\beta} & \dfrac{\alpha_c}{\beta} \\ \dfrac{\alpha_c}{\beta} & \dfrac{\alpha_s}{\beta} \end{bmatrix} \right) \begin{bmatrix}\mathbb E[\lambda_{g1}(t)] \\ \mathbb E[\lambda_{g2}(t)]\end{bmatrix} = \begin{bmatrix}\mu \\ \mu \end{bmatrix}. 
$$ By the symmetry between $\lambda_{g1}$ and $\lambda_{g2}$, \begin{equation} \mathbb E[\lambda_{g1}(t)] = \mathbb E[\lambda_{g2}(t)] = \frac{\mu \beta}{\beta- (\alpha_s+\alpha_c)\{1 + (K_{1 \lambda_{g1}} -1)\eta\} }.\label{Eq:Elambdag1} \end{equation} Therefore, if condition~\eqref{Eq:condition} is satisfied, then the ground processes are well defined, i.e., the expectations of the ground intensities are positive and finite. \subsection{Second moment property} In this subsection, the volatility formula of the asset return generated by the marked Hawkes processes under the symmetric model is calculated. The symmetry of the model implies that $i$ and $j$ are interchangeable, which simplifies the formulas. In the following, various $K$s and $\alpha$s are defined in a manner similar to Eq.~\eqref{Eq:K} to simplify the notation. \begin{notation}\label{Notation} For a jump process $X$ such as $\lambda_{gi}$ and $N_i$, let \begin{align*} \mathbb E[k_i X(t)] &= K_{iX} \mathbb E[X(t)], \quad \mathbb E[k_i^2 X(t)] = K^{(2)}_{iX} \mathbb E[X(t)]. \end{align*} (Previously, when $X = \lambda_{gi}$, this notation was written as in Eq.~\eqref{Eq:K}, i.e., $K_{iX} = K_{i\lambda_{gi}}$.)
Furthermore, \begin{align*} \bar{\bar K} &= 1+2(K_{1\lambda_{g1}}-1)\eta + (K^{(2)}_{1\lambda_{g1}} -2K_{1\lambda_{g1}} +1 )\eta^2 = 1+2(K_{2\lambda_{g2}}-1)\eta + (K^{(2)}_{2\lambda_{g2}} -2K_{2\lambda_{g2}} +1 )\eta^2,\\ \bar K &= K_{1\lambda_{g1}}+(K_{1\lambda_{g1}}^{(2)}-K_{1\lambda_{g1}})\eta = K_{2\lambda_{g2}}+(K_{2\lambda_{g2}}^{(2)}-K_{2\lambda_{g2}})\eta,\\ \breve \alpha &= \alpha\{1+(K_{1\lambda^2_{g1}}-1)\eta\}= \alpha\{1+(K_{2\lambda^2_{g2}}-1)\eta\},\\ \tilde \alpha &= \alpha\{1+(K_{2\lambda_{g1}\lambda_{g2}}-1)\eta\} =\alpha\{1+(K_{1\lambda_{g2}\lambda_{g1}}-1)\eta\},\\ \acute \alpha &= \alpha \{ 1 + (K_{1\lambda_{g1} N_1} - 1 )\eta \} = \alpha \{ 1 + (K_{2\lambda_{g2} N_2} - 1 )\eta \},\\ \grave \alpha &= \alpha \{ 1 + (K_{2\lambda_{g2} N_1} - 1 )\eta \} = \alpha \{ 1 + (K_{1\lambda_{g1} N_2} - 1 )\eta \}, \end{align*} and \begin{align*} &\mathbf{M} = \begin{bmatrix} \breve \alpha_s - \beta & \tilde \alpha_c \\ \breve \alpha_c & \tilde \alpha_s - \beta \end{bmatrix}, \quad \mathbf{M}_2 = \begin{bmatrix} \acute \alpha_s - \beta & \grave \alpha_c \\ \acute \alpha_c & \grave \alpha_s - \beta \end{bmatrix}, \\ &\mathbf{K} = \begin{bmatrix} K_{1\lambda_{g1}} & 0 \\ 0 & K_{1\lambda_{g1}} \end{bmatrix}, \quad \mathbf{K}_2 = \begin{bmatrix} K_{1\lambda_{g1}^2} & 0 \\ 0 & K_{2\lambda_{g1}\lambda_{g2}} \end{bmatrix}, \mathbf{K}_3 = \begin{bmatrix} K_{1\lambda_{g1} N_1} & 0 \\ 0 & K_{1\lambda_{g1} N_2} \end{bmatrix}. \end{align*} \end{notation} \begin{theorem}\label{Thm:var} Let $(N_1, N_2)$ be a two dimensional marked self and mutually excited Hawkes process with a linear impact function under Assumption~\ref{Assumption} with ground intensities of Eqs.~\eqref{Eq:lambdag1}~and~\eqref{Eq:lambdag2}. 
If the price process $S_t$ follows $$ S_t = S_0 + \delta(N_1(t) - N_2(t))$$ with a minimum jump size $\delta$, then the unconditional variance of the return over $[0,t]$ is $$ \mathrm{Var} \left(\frac{S_t - S_0}{S_0} \right) = \frac{\delta^2}{S_0^2}\mathbb E[(N_1(t) -N_2(t))^2] $$ with \begin{align*} &\begin{bmatrix} \mathbb E[N_1^2(t)] \\ \mathbb E[N_1(t)N_2(t)] \end{bmatrix} = -\mathbb E[\lambda_{g1}(t)]\mathbf{K}_3 \left\{ \beta \mu \mathbf{K} \mathbf{M}_2^{-1} \begin{bmatrix}1\\1 \end{bmatrix}t^2 \right. \\ {}&\left. + \left( 2\mathbf{M}_2^{-1} \begin{bmatrix} \alpha_s \bar K \\ \alpha_c \bar K\end{bmatrix} - \mathbf{M}_2^{-1} \mathbf{K}_2 \mathbf{M}^{-1} \begin{bmatrix} (\alpha_s^2 + \alpha_c^2) \bar{\bar K} \\ 2\alpha_s\alpha_c\bar{\bar K} \end{bmatrix} + 2\mathbf{M}_2^{-1}(\mathbf{K}\mathbf{M}_2^{-1}- \mathbf{K}_2 \mathbf{M}^{-1}) \begin{bmatrix} \beta\mu \\ \beta\mu \end{bmatrix} - \begin{bmatrix}K^{(2)}_{1\lambda_{g1}}/K_{1\lambda_{g1} N_1} \\ 0\end{bmatrix} \right)t \right\}. \end{align*} \end{theorem} \begin{proof} See \ref{Proof:var}. \end{proof} The result is somewhat complicated, but the following remark is useful in practice. \begin{remark}\label{Remark:vol} By assuming $K_{1\lambda_{g1} N_{1}} \approx K_{1\lambda_{g1} N_2}$ and $K_{1\lambda^2_{g1}} \approx K_{1\lambda_{g1}\lambda_{g2}}$, we have \begin{align} \mathbb E[(N_1(t) -N_2(t))^2] &= 2(\mathbb E[N_1^2(t)] - \mathbb E[N_1(t)N_2(t)]) \nonumber\\ &\approx 2 K_{1\lambda_{g1} N_1} \mathbb E[\lambda_{g1}(t)] \left( \frac{K_{1\lambda^2_{g1}}\bar{\bar K}(\alpha_s-\alpha_c)^2}{(\beta - \breve\alpha_s + \breve\alpha_c)(\beta - \acute\alpha_s + \acute\alpha_c)} + \frac{2(\alpha_s-\alpha_c)\bar K}{\beta - \acute\alpha_s + \acute\alpha_c} + \frac{K^{(2)}_{1\lambda_{g1}}}{K_{1\lambda_{g1} N_1}}\right) t. \label{Eq:var} \end{align} Under this assumption, $\mathbf{M}, \mathbf{M}_2$ and $\mathbf{K}_2$ are symmetric and the variance formula becomes simple.
In addition, if all $K$s are equal to 1, then the variance formula is reduced to \begin{align*} \mathbb E[(N_1(t) -N_2(t))^2] = 2 \mathbb E[\lambda_{g1}(t)] \frac{\beta^2}{(\beta-\alpha_s+\alpha_c)^2} t, \end{align*} which is the same as the variance formula in the simple symmetric Hawkes model. \end{remark} \begin{corollary}\label{Cor:iidvol} Under the assumptions of Theorem~\ref{Thm:var}, if the marks $k_i$ are i.i.d., then $K_{iX} = K := \mathbb E[k_i]$ and $K^{(2)}_{iX} = K^{(2)} := \mathbb E[k^2_i]$ for all $X \in \{\lambda_{gi}, \lambda_{gi}\lambda_{gj}, \lambda_{gi}N_j : i,j=1,2\}$ and $\breve \alpha = \tilde \alpha = \acute \alpha = \grave \alpha$. Therefore, $$ \mathbb E[(N_1(t) -N_2(t))^2] =2 K \mathbb E[\lambda_{g1}(t)] \left( \frac{K\bar{\bar K}(\alpha_s-\alpha_c)^2}{(\beta - \breve\alpha_s + \breve\alpha_c)^2} + \frac{2(\alpha_s-\alpha_c)\bar K}{\beta - \breve\alpha_s + \breve\alpha_c} + \frac{K^{(2)}}{K}\right) t$$ where $\bar{\bar K}$ and $\bar K$ are now represented by \begin{align*} \bar{\bar K} = 1+2(K-1)\eta + (K^{(2)} -2K +1 )\eta^2, \qquad \bar K = K+(K^{(2)}-K)\eta . \end{align*} \end{corollary} \subsection{Likelihood function} To estimate the parameters in the intensity processes, such as $\mu, \alpha_s, \alpha_c, \beta$ and $\eta$, the log-likelihood needs to be computed.
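For later use in the simulation and empirical studies, the closed-form variance of Corollary~\ref{Cor:iidvol} can be evaluated numerically. The sketch below is our own illustration (not from the paper), assuming i.i.d. marks with $K = \mathbb E[k]$ and $K^{(2)} = \mathbb E[k^2]$; function names and the values in the usage are assumptions.

```python
import math

def variance_rate_iid(mu, alpha_s, alpha_c, beta, eta, K, K2):
    """E[(N1(t) - N2(t))^2] / t under i.i.d. marks, following the
    corollary: all modified alphas coincide and equal alpha * f."""
    f = 1.0 + (K - 1.0) * eta
    Kbb = 1.0 + 2.0 * (K - 1.0) * eta + (K2 - 2.0 * K + 1.0) * eta ** 2
    Kb = K + (K2 - K) * eta
    mean_lam = mu * beta / (beta - (alpha_s + alpha_c) * f)  # long-run mean
    d = beta - (alpha_s - alpha_c) * f   # beta - breve alpha_s + breve alpha_c
    return 2.0 * K * mean_lam * (K * Kbb * (alpha_s - alpha_c) ** 2 / d ** 2
                                 + 2.0 * (alpha_s - alpha_c) * Kb / d
                                 + K2 / K)

def hawkes_volatility(mu, alpha_s, alpha_c, beta, eta, K, K2, delta, S0, t):
    """Return volatility over [0, t] for S_t = S_0 + delta (N1 - N2)."""
    rate = variance_rate_iid(mu, alpha_s, alpha_c, beta, eta, K, K2)
    return (delta / S0) * math.sqrt(rate * t)
```

With $K = K^{(2)} = 1$ the variance rate collapses to $2\,\mathbb E[\lambda_{g1}]\,\beta^2/(\beta-\alpha_s+\alpha_c)^2$, the simple symmetric Hawkes case.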
The joint log-likelihood function of the realized interarrival times of the jumps and the marks of $(N_1, N_2)$ over the period $[0,T]$ is represented by \begin{align} &\left( \int_{(0, T]} \log \lambda_{g1}(u) N_{g1}(\mathrm{d} u) + \int_{(0, T]} \log \lambda_{g2}(u) N_{g2}(\mathrm{d} u) - \int_0^T (\lambda_{g1}(u) + \lambda_{g2}(u)) \mathrm{d} u \right) \nonumber \\ &+ \left( \int_{(0,T]\times\mathbb Z^+ } \log f (k_1 | \lambda_{g1} (u) ) N_{1} (\mathrm{d} u\times\mathrm{d} k_1) + \int_{(0,T]\times\mathbb Z^+} \log f (k_2 | \lambda_{g2} (u) ) N_2(\mathrm{d} u \times \mathrm{d} k_2) \right) \nonumber\\ &=: \log L_g + \log L_m \label{Eq:likelihood} \end{align} where $f$ denotes the conditional distribution of the mark $k_i$ given $\lambda_{gi}$. In the above formula, the log-likelihood is separated into two parts, $\log L_g$ and $\log L_m$. The first part, $\log L_g$, is the log-likelihood function of the ground intensity processes, or more precisely, the joint log-likelihood function of the jump interarrival times. The second part, $\log L_m$, is the log-likelihood function of the conditional mark distribution. When $\log L_g$ alone is used for maximum likelihood estimation, $\log L_g$ is in fact the conditional log-likelihood function of the jump interarrival times given the realized marks, i.e., the estimation is maximum likelihood estimation based on the interarrival times of jumps with the realized marks given. In the estimation procedure of the simulation and empirical studies later, no specific form of the joint distribution of the ground intensities and mark sizes is assumed; instead, the empirical mark distribution is used, and the maximum likelihood procedure maximizes $\log L_g$, the conditional log-likelihood function of the jump interarrival times. Because the empirical mark distribution is used, the $\log L_m$ part does not affect the estimation of $\boldsymbol\theta=(\mu, \alpha_s, \alpha_c, \beta, \eta)$.
On the other hand, if one assumes a specific parametric model for the mark distribution by specifying a conditional distribution $f (k_i | \lambda_{gi})$ as in Subsection~\ref{Subsect:geo} and also wants to estimate the parameters in $f$, then $\log L_m$ is also affected by $\boldsymbol\theta$, since the inferred $\lambda_{gi}$ are calculated using $\boldsymbol\theta$. In another aspect, an estimation procedure based only on $\log L_g$ is possible because the marks $k_i$ are observable. If the mark sizes were unobservable, it would be inevitable to assume a parametric model for the joint or conditional distribution between $\lambda_{gi}$ and $k_i$ to infer the mark sizes $k_i$. Owing to the parametric assumption on $f (k_i | \lambda_{gi} (u) )$, as mentioned before, the parameters $\boldsymbol\theta$ also appear in the formula of $\log L_m$, and changing the values of $\boldsymbol\theta$ changes not only $\log L_g$ but also $\log L_m$. Therefore, the estimator that maximizes $\log L_g$ may not converge to the estimator that maximizes $(\log L_g + \log L_m)$ as the sample size increases. Fortunately, since all $k_i$ are observable in our empirical study, the observed realized mark sizes are used to compute $\log L_g$, not $k_i$ inferred under some parametric assumption. Hence, the estimator of $\boldsymbol\theta$ that maximizes $\log L_g$ is the maximum likelihood estimator associated with the conditional joint distribution of the interarrival times of jumps with the given marks. Note that $\log L_g$ and $\log L_m$ are separated not because the marks $k_i$ are independent of the ground intensities but because $\log L_g$ can be represented as the log-likelihood function of the conditional joint distribution of the interarrival times with the given marks. In practice, $\log L_g$ is computed as follows.
With the presumed parameter values of $\alpha_s, \alpha_c, \beta$ and $\eta$, we compute the inferred ground intensity processes $\lambda_{gi}$ based on the realized $N_{i}$ of the stock price process and Eqs.~\eqref{Eq:lambdag1} and \eqref{Eq:lambdag2}. With the inferred ground intensity processes, the realized values of the stochastic integration part are calculated by $$\int_{(0,T]} \log \lambda_{gi}(u) N_{gi}(\mathrm{d} u) = \sum_{n=1}^{N_{gi}(T)} \log \lambda_{gi}(u_n) $$ where $u_n$ denotes the realized jump times of $N_{gi}$. In addition, $\int_0^T (\lambda_{g1}(u) + \lambda_{g2}(u)) \mathrm{d} u $ is computed as a Riemann integral, which has a closed form formula under the exponential kernel. By repeating the above procedure with changing presumed parameters, the numerical solver for the optimization tries to find the global maximum of $\log L_g$. Consider the concavity of the log-likelihood function. If the Hessian of the log-likelihood function is negative semi-definite for all parameters with a given realization, then the log-likelihood function is concave. However, because the formula of the Hessian of the marked Hawkes model is complicated, we instead examine the conditional concavity when $\beta$ and $\eta$ are fixed. Note that, with the given realized jump times $t_i$, we have \begin{align*} \log L_{g1}(T) :={}& \int_{(0,T]} \log \lambda_{g1}(u) N_{g1}( \mathrm{d} u) - \int_0^T \lambda_{g1}(u) \mathrm{d} u\\ ={}& \sum_{t_i < T} \left( \log \lambda_{g1}(t_i) - \int_{t_{i-1}}^{t_i} \lambda_{g1}(u) \mathrm{d} u \right) - \int_{t_N}^{T} \lambda_{g1}(u) \mathrm{d} u \\ ={}& \sum_{t_i < T} \left( \log \lambda_{g1}(t_i) - \mu \tau_i - \frac{\mathrm e^{\beta \tau_i}-1}{\beta}\left(\lambda_{g1}(t_i)-\mu\right) \right) - \int_{t_N}^{T} \lambda_{g1}(u) \mathrm{d} u \end{align*} where $\tau_i = t_i - t_{i-1}$ and $t_N$ is the last jump time.
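Before turning to the concavity argument, the computation of $\log L_g$ just described can be sketched in code. The following Python implementation is our own illustration, not from the paper: it assumes $\lambda_{gi}(0)=\mu$, event data given as (time, type, mark) sequences, and uses the exponential-kernel recursion both for the intensities and for the closed-form compensator between events.

```python
import math

def log_lik_ground(times, types, marks, mu, alpha_s, alpha_c, beta, eta, T):
    """Conditional log-likelihood log L_g of the symmetric marked Hawkes
    model over [0, T], given realized marks.

    times : increasing event times; types : 0 (up) or 1 (down);
    marks : positive integer jump sizes.
    """
    lam = [mu, mu]            # right limits of (lam_g1, lam_g2); lam(0) = mu
    prev, ll = 0.0, 0.0
    for t, i, k in zip(times, types, marks):
        dt = t - prev
        e = math.exp(-beta * dt)
        # compensator: integral of lam_g1 + lam_g2 over (prev, t]
        ll -= 2.0 * mu * dt + (lam[0] + lam[1] - 2.0 * mu) * (1.0 - e) / beta
        # decay to the left limits at t, then score the jumping process
        lam = [mu + (x - mu) * e for x in lam]
        ll += math.log(lam[i])
        # excitation jump: q g(k) beta = alpha * (1 + (k - 1) * eta)
        imp = 1.0 + (k - 1.0) * eta
        lam[i] += alpha_s * imp          # self excitation
        lam[1 - i] += alpha_c * imp      # cross excitation
        prev = t
    e = math.exp(-beta * (T - prev))
    ll -= 2.0 * mu * (T - prev) + (lam[0] + lam[1] - 2.0 * mu) * (1.0 - e) / beta
    return ll
```

A numerical optimizer can then maximize this function over $(\mu, \alpha_s, \alpha_c, \beta, \eta)$.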
Using the definition of $\lambda_{g1}$ and the closed form $\int_{t_{i-1}}^{t_i} \lambda_{g1}(u) \mathrm{d} u = \mu \tau_i + (\mathrm e^{\beta \tau_i}-1)(\lambda_{g1}(t_i)-\mu)/\beta$ with $\tau_i = t_i - t_{i-1}$, each term in the summation can be rewritten as follows: \begin{align*} &\log \lambda_{g1}(t_i) - \mu \tau_i - \frac{\mathrm e^{\beta \tau_i}-1}{\beta}\left(\lambda_{g1}(t_i)-\mu\right)\\ ={}& \log \left[ \mu + (\lambda_{g1}(0) - \mu)\mathrm e^{-\beta t_i } + \int_{(0, t_i] \times \mathbb Z_+} \alpha_s \{ 1+(k_1-1)\eta\}\mathrm e^{-\beta (t_i -u)} N_1 (\mathrm{d} u \times \mathrm{d} k_1) \right.\\ &\quad \left. + \int_{(0, t_i]\times \mathbb Z_+} \alpha_c \{ 1+(k_2-1)\eta\}\mathrm e^{-\beta (t_i -u)} N_2 (\mathrm{d} u \times \mathrm{d} k_2) \right]\\ &- \mu \tau_i - \frac{\mathrm e^{\beta \tau_i}-1}{\beta} \left[ (\lambda_{g1}(0) - \mu)\mathrm e^{-\beta t_i } + \int_{(0, t_i] \times \mathbb Z_+} \alpha_s \{ 1+(k_1-1)\eta\}\mathrm e^{-\beta (t_i -u)} N_1 (\mathrm{d} u \times \mathrm{d} k_1) \right.\\ &\quad \left. + \int_{(0, t_i]\times \mathbb Z_+} \alpha_c \{ 1+(k_2-1)\eta\}\mathrm e^{-\beta (t_i -u)} N_2 (\mathrm{d} u \times \mathrm{d} k_2) \right]. \end{align*} With fixed $\beta$ and $\eta$, the term is represented as $$ \log(m\mu + a_s \alpha_s + a_c \alpha_c + C) - \ell(\mu, \alpha_s, \alpha_c)$$ for some constants $m, a_s, a_c,$ and $C$ and an affine function $\ell$ of $(\mu, \alpha_s, \alpha_c)$, whose Hessian vanishes. Therefore, for any fixed $\beta$ and $\eta$, the Hessian matrix of the term with respect to $\mu, \alpha_s$ and $\alpha_c$ is \begin{align*} \frac{1}{\lambda^2_{g1}(t_i)} \begin{bmatrix} - m^2 & -m a_s & -m a_c \\ - m a_s & - a_s^2 & -a_s a_c \\ -m a_c & -a_s a_c & -a_c^2 \end{bmatrix} \end{align*} which is negative semidefinite. The conditional Hessian of the log-likelihood function $\log L_g$ with fixed $\beta$ and $\eta$ is the sum of such negative semidefinite matrices over the events of both ground processes, with the constants $m, a_s, a_c$ and $C$ depending on the event, and is therefore also negative semidefinite.
Therefore, at least when the parameter values of $\beta$ and $\eta$ are fixed, the concavity of the log-likelihood $\log L_g(\mu, \alpha_s, \alpha_c | \beta, \eta)$ as a function of $\mu, \alpha_s, \alpha_c$ is guaranteed, and we can expect a numerical solver for the optimization to find the conditional global maximum. Consider the points of $\beta$ and $\eta$ over a sufficiently large and dense grid. For each $\beta$ and $\eta$, we can find the conditional global maximum of $\log L_g(\mu, \alpha_s, \alpha_c | \beta, \eta)$ over the parameter space of $\mu, \alpha_s, \alpha_c$ owing to the conditional concavity. The remaining interest is the shape of the conditional global maxima over the grid of $\beta$ and $\eta$. If the surface of conditional maxima is also concave, so that the maximum of the conditional maxima can be found, then the numerical optimizer can be checked to determine whether it finds the overall global maximum. In the numerical procedure later, the shape of the conditional log-likelihood function over a grid of $\beta$ and $\eta$ will be examined. \section{Simulation example}\label{Sect:simul} \subsection{Symmetric model}\label{Subsect:geo} In this paper, a specific distribution of the mark is generally not assumed. For the simulation study, however, it is necessary to assume a specific conditional distribution of the mark sizes to generate paths. Suppose that the mark $k_i$ follows the conditional geometric distribution with $$ p(\lambda_{gi}(u)) = \frac{1}{\min(d + c \lambda_{gi}(u), U)}$$ for some constants $c, d$ and $U$, i.e., $$ \mathbb P( k_i = n | \lambda_{gi}(u)) = p(\lambda_{gi}(u))(1-p(\lambda_{gi}(u)))^{n-1}.$$ This implies that the conditional expectation of the mark size $k_i$ with a given ground intensity $\lambda_{gi}$ is $$\mathbb E[k_i |\lambda_{gi}(u)] = \min(d + c \lambda_{gi}(u), U)$$ for some slope $c$, intercept $d$, and upper bound $U$.
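A sample path of this model can be generated by a thinning method. The following sketch is our own illustrative implementation (the function name and defaults are assumptions, and $\lambda_{gi}(0)=\mu$ is assumed): it exploits the fact that the total intensity only decays between events, so its current value is a valid dominating rate, and it draws marks from the conditional geometric distribution above.

```python
import math, random

def simulate_marked_hawkes(mu, alpha_s, alpha_c, beta, eta,
                           c, d, U, T, seed=0):
    """Thinning simulation of the symmetric marked Hawkes model with the
    conditional geometric mark, p = 1 / min(d + c * lam, U).
    Returns a list of (time, type, mark) with type 0 for up moves."""
    rng = random.Random(seed)
    lam = [mu, mu]                     # right limits of the ground intensities
    t, events = 0.0, []
    while True:
        bound = lam[0] + lam[1]        # dominates the intensity until next jump
        w = rng.expovariate(bound)
        if t + w > T:
            return events
        e = math.exp(-beta * w)
        lam = [mu + (x - mu) * e for x in lam]   # decay over the waiting time
        t += w
        tot = lam[0] + lam[1]
        if rng.random() * bound <= tot:          # accept candidate (thinning)
            i = 0 if rng.random() * tot < lam[0] else 1
            p = 1.0 / min(d + c * lam[i], U)     # conditional geometric mark
            k = 1
            while rng.random() > p:              # geometric via Bernoulli trials
                k += 1
            imp = 1.0 + (k - 1.0) * eta
            lam[i] += alpha_s * imp              # self excitation
            lam[1 - i] += alpha_c * imp          # cross excitation
            events.append((t, i, k))
```

The bound on the conditional mean ($U$) keeps the mark sampling loop short, mirroring the non-explosion requirement discussed next.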
The upper bound on the conditional mean of the mark size is needed to prevent the marked Hawkes process from blowing up. With this setting, the conditional expectation of the impact depends on the current intensity: $$ \mathbb E[g(k_i) | \lambda_{gi}(u)] = \frac{ 1+\{ \min(d + c \lambda_{gi}(u), U) - 1\} \eta}{\mathbb E[1+(k_i -1)\eta]}.$$ For each presumed conditional distribution and parameter setting, 500 sample paths of the two dimensional marked Hawkes process and the corresponding ground intensities are generated. The time horizon for each path is set to 5.5 hours, which equals the time horizon used in the empirical studies later. The simulation mechanism is similar to that of simple Hawkes models, but it needs to incorporate the mark sizes and their future impacts. With the realized interarrival times and mark sizes of the generated paths, maximum likelihood estimation is performed by maximizing $\log L_g$ in Eq.~\eqref{Eq:likelihood}, and the results are listed in Table~\ref{Table:simul}. The table consists of four panels with different parameter settings, which are presented in the `True' rows. For the first panel, $c=0.15, d=1.0, U=2.0$; for the second panel, $c = 0.18, d=1.0, U = 2.2$; for the third panel, $c = 0.18, d=1.0, U = 3.5$; and for the fourth panel, $c=0.25, d=1.0, U=9$. Because only the likelihood of the ground processes is maximized, estimates are computed for $\mu, \alpha_s, \alpha_c, \beta$ and $\eta$ but not for $c, d$ and $U$. The sample means of the estimates over the 500 sample paths are reported in the rows `mean'. The rows `std.' present the sample standard deviations of each estimate over the 500 samples. The table shows that the estimates are consistent with the true values. \begin{table} \caption{Simulation study for the marked Hawkes model with 500 sample paths}\label{Table:simul} \centering \begin{tabular}{cccccccc} \hline & $\mu$ & $\alpha_s$ & $\alpha_c$ & $\beta$ & $\eta$ & TSRV & H.Vol.
\\ \hline True & 0.1000 & 0.9500 & 0.8200 & 2.2500 & 0.1900 & \\ mean & 0.0999 & 0.9496 & 0.8199 & 2.2487 & 0.1882 & 0.2807 & 0.2798 \\ std. & 0.0021 & 0.0194 & 0.0190 & 0.0326 & 0.0170 & 0.0333 & 0.0126\\ \hline True & 0.1500 & 0.6200 & 0.5000 & 1.9000 & 0.2200 & \\ mean & 0.1499 & 0.6193 & 0.5008 & 1.8999 & 0.2177 & 0.1317 & 0.1312 \\ std. & 0.0027 & 0.0172 & 0.0149 & 0.0419 & 0.0502 & 0.0101 & 0.0018\\ \hline True & 0.3000 & 1.0500 & 0.9200 & 2.3000 & 0.0100\\ mean & 0.3003 & 1.0508 & 0.9206 & 2.3014 & 0.0094 & 0.6391 & 0.6291\\ std. & 0.0051 & 0.0136 & 0.0135 & 0.0229 & 0.0051 & 0.0562 & 0.0177\\ \hline True & 0.2000 & 1.1000 & 1.2600 & 2.5700 & 0.0100\\ mean & 0.2002 & 1.0997 & 1.2610 & 2.5702 & 0.0099 & 4.4299 & 4.3628\\ std. & 0.0039 & 0.0154 & 0.0164 & 0.0238 & 0.0007 & 1.6973 & 0.6811\\ \hline \end{tabular} \end{table} Figure~\ref{Fig:max} presents the global maxima of the conditional log-likelihood function over various $\beta$ and $\eta$ under the first simulation setting, as explained in the previous section. The numerically computed conditional maximum points show concavity, and the numerical optimizer is expected to find the global maximum in the procedure. \begin{figure} \centering \includegraphics[width = 0.45\textwidth]{global_smallset} \caption{Maximums of conditional log-likelihood function over $\beta$ and $\eta$}\label{Fig:max} \end{figure} To calculate the volatility, we need to compute the $K$s in Notation~\ref{Notation}, which involve several unconditional expectations of the mark sizes, intensities and counting processes.
In the absence of the exact formula of the expectations due to the complicated relationship between the mark and the intensities, the following statistics are used for the expectations instead: \begin{align} \mathbb E[\lambda_{gi}(t)] &\approx \frac{1}{T}N_{gi}(T) \label{Eq:El}\\ \mathbb E[k_i \lambda_{gi}(t)] &\approx \frac{1}{T}N_i(T)\label{Eq:Ekl}\\ \mathbb E[k_i^2 \lambda_{gi}(t)] &\approx \frac{1}{T}\int_{(0,T] \times \mathbb Z^+} k_i^2 N_{i}(\mathrm{d} u \times \mathrm{d} k_i)\label{Eq:Ek2l}\\ \mathbb E[\lambda^2_{gi}(t)] &\approx \frac{1}{T}\int_{(0,T] \times \mathbb Z^+} \lambda_{gi}(u) N_{i}(\mathrm{d} u \times \mathrm{d} k_i)\label{Eq:ElN}\\ \mathbb E[k_i \lambda^2_{gi}(t)] &\approx \frac{1}{T}\int_{(0,T] \times \mathbb Z^+} k_i \lambda_{gi}(u) N_{i}(\mathrm{d} u \times \mathrm{d} k)\label{Eq:EklN}\\ \frac{1}{t}\mathbb E[\lambda_{gi}(t)N_i(t)] &\approx \frac{2}{T^2}\int_{(0,T]\times \mathbb Z^+} N_{i}(u-) N_{i}(\mathrm{d} u \times \mathrm{d} k_i)\label{Eq:ENN}\\ \frac{1}{t}\mathbb E[k_i \lambda_{gi}(t)N_i(t)] &\approx \frac{2}{T^2}\int_{(0,T] \times \mathbb Z^+} k_i N_{i}(u-) N_{i}(\mathrm{d} u \times \mathrm{d} k_i)\label{Eq:kNN} \end{align} where $[0, T]$ is an observation time interval. To calculate the right hand sides, the realized $k_i$, $N_i$ and $N_{gi}$ of the generated paths and the $\lambda_{gi}$ inferred from the estimates of $\mu, \alpha_s, \alpha_c, \beta$ and $\eta$ are used. The inferred intensities $\lambda_{gi}$ are computed using Eqs.~\eqref{Eq:lambdag1}~and~\eqref{Eq:lambdag2}, once $\mu, \alpha_s, \alpha_c, \beta$ and $\eta$ are estimated. The expectations of the ground intensities are approximated by the sample average of the total number of corresponding up or down moves per unit time in Eq.~\eqref{Eq:El}. Similarly, for $\mathbb E[k_i \lambda_{gi}(t)]$, the counting process $N_i$ is used instead to compute the sample average.
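The first three approximations, Eqs.~\eqref{Eq:El}--\eqref{Eq:Ek2l}, reduce to simple sample averages over the observed events and do not require the inferred intensities. A minimal sketch (our own helper; the higher-order quantities of Eqs.~\eqref{Eq:ElN}--\eqref{Eq:kNN} additionally need the inferred $\lambda_{gi}$ and are omitted here):

```python
def k_moment_estimates(events, T):
    """Estimate E[lam_g1], K_{1 lam_g1} and K^{(2)}_{1 lam_g1} from a path.

    events : list of (time, type, mark) with type 0 for up moves.
    Uses N_g1(T)/T for E[lam_g1]; dividing the estimates of
    E[k lam_g1] and E[k^2 lam_g1] by it leaves sample mark moments.
    """
    n = k_sum = k2_sum = 0
    for _, i, k in events:
        if i == 0:
            n += 1
            k_sum += k
            k2_sum += k * k
    if n == 0:
        raise ValueError("no up-move events observed")
    return n / T, k_sum / n, k2_sum / n
```

The symmetric counterparts for the down-move process are obtained by filtering on the other event type.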
The right hand side of Eq.~\eqref{Eq:Ek2l} is the sample average of the total number of jumps per unit time with weight $k_i^2$ for each jump, and this approximates the left hand side. For Eqs.~\eqref{Eq:ElN}~and~\eqref{Eq:EklN}, consider \begin{align*} &\mathbb E\left[ \int_{(0,T] \times \mathbb Z^+} \lambda_{gi}(u) N_{i}(\mathrm{d} u \times \mathrm{d} k) \right] = \int_{0}^T \mathbb E [\lambda^2_{gi}(t) ] \mathrm{d} t = T \mathbb E [ \lambda^2_{gi}(t) ]\\ &\mathbb E\left[ \int_{(0,T] \times \mathbb Z^+} k\lambda_{gi}(u) N_{i}(\mathrm{d} u \times \mathrm{d} k) \right] = \int_{0}^T \mathbb E [k_i \lambda^2_{gi}(t) ] \mathrm{d} t = T \mathbb E [k_i \lambda^2_{gi}(t) ]. \end{align*} Figure~\ref{Fig:convergence} presents the convergence of the computed $K$ and $K^{(2)}$ under the above method as the sample size increases, with the first parameter set in the simulation. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{convergence_K} \includegraphics[width=0.4\textwidth]{convergence_Ksq} \caption{Convergences of $K_{1\lambda_{g1}}$ and $K_{1\lambda_{g1}}^{(2)}$ as sample size increases}\label{Fig:convergence} \end{figure} Furthermore, $\mathbb E[\lambda_{gi}(t)N_i(t)] = c_1 t + c_2 $ for some constants $c_1 $ and $c_2$, according to \ref{Proof:var}, and $\mathbb E[\lambda_{gi}(t)N_i(t)]/t $ converges to $c_1$ as $t$ increases. Note that \begin{align*} \frac{2}{T^2} \mathbb E\left[ \int_{(0,T] \times \mathbb Z^+} N_{i}(u-) N_{i}(\mathrm{d} u \times \mathrm{d} k) \right] &= \frac{2}{T^2}\int_0^T \mathbb E[\lambda_{gi}(t)N_i(t) ] \mathrm{d} t \\ &= c_1 + \frac{2c_2}{T} \approx c_1 \approx \frac{1}{t} \mathbb E[\lambda_{gi}(t)N_i(t)] \end{align*} with approximations for large enough $t$ and $T$. A similar argument applies to Eq.~\eqref{Eq:kNN}. The column `H.Vol.' reports the mean of the volatility estimates computed from the likelihood estimates of $\mu, \alpha_s, \alpha_c, \beta, \eta$ and the $K$s using Remark~\ref{Remark:vol}.
The Hawkes volatility was compared with the two scale realized volatility (TSRV) proposed by \cite{Zhang2005}, reported in the column `TSRV'. For the TSRV computation, the small time scale was set to one second and the large time scale to five minutes. The results show that the Hawkes volatility and the TSRV are similar. The standard deviations of the Hawkes volatility are smaller than those of the TSRV in all simulation cases. \subsection{Other examples}\label{Subsect:full} This subsection examines cases where there is a discrepancy between the Hawkes volatility and the realized volatility. First, the fully characterized Hawkes model is examined, i.e., the coefficients of the branching matrix are represented by $$q_{ij} = \frac{\alpha_{ij}}{\beta_{ij}} \mathbb E[1+(k_j -1)\eta]$$ with the linear impact function of Assumption~\ref{Assumption}~(ii). Under this setting, no symmetry is guaranteed. Recall that in the symmetric model, $\alpha_s = \alpha_{11} = \alpha_{22}$, $\alpha_c = \alpha_{12} = \alpha_{21}$, and $\beta = \beta_{11} = \beta_{12} = \beta_{21} = \beta_{22}$. Table~\ref{Table:full} lists the estimation results of the fully characterized Hawkes model on paths simulated with the presumed parameters. The presumed parameters are presented in the `true' column, and 500 sample paths are generated over a one day time horizon, more precisely, 5.5 hours as in the previous example. The columns `full' report the means and standard deviations of the maximum likelihood estimates under the fully characterized Hawkes model. The likelihood estimations were also performed under the symmetric Hawkes model, even though the paths were generated by the fully characterized Hawkes model. These results are presented in the columns `symmetric', at the centers of the rows of the corresponding parameters. For example, $\mu$ is presented at the center of the two rows of $\mu_1$ and $\mu_2$, $\alpha_s$ at the center of the two rows of $\alpha_{11}$ and $\alpha_{22}$, etc.
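For reference, the TSRV used in these comparisons can be sketched as follows. This is a minimal implementation of the two-scale estimator of \cite{Zhang2005}, assuming a regular fine grid of log-prices; the variable names are illustrative (in the paper the two scales are one second and five minutes):

```python
import numpy as np

def tsrv(logp, K):
    """Two-scale realized variance: the average of the K subsampled
    realized variances on the coarse grid, bias-corrected by the
    full-grid realized variance.  `logp` is the log-price series on
    the fine grid and `K` the coarse scale in fine-grid steps."""
    logp = np.asarray(logp, dtype=float)
    n = len(logp) - 1
    rv_all = np.sum(np.diff(logp) ** 2)                 # fine-scale RV
    rv_sub = np.mean([np.sum(np.diff(logp[j::K]) ** 2)  # K offset subgrids
                      for j in range(K)])
    nbar = (n - K + 1) / K                              # avg. subgrid size
    return rv_sub - (nbar / n) * rv_all
```

The corresponding volatility is the square root of the returned variance, annualized as needed.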
The column `S.Vol.' reports the sample volatility of the return, computed from the sample standard deviation of the closing stock prices of the 500 sample paths. The TSRV and the marked Hawkes volatility, with their standard deviations, are reported in the columns `TSRV' and `H.Vol', respectively. The Hawkes volatility is calculated using the estimates of the symmetric Hawkes model. Both volatilities are biased by around 4\% relative to the sample volatility: the TSRV is larger than the sample volatility and the Hawkes volatility is smaller in these cases. \begin{table} \caption{Fully characterized Hawkes model with 500 sample paths}\label{Table:full} \centering \begin{tabular}{ccccccc|ccccc} \hline & & \multicolumn{2}{c}{full} & \multicolumn{2}{c}{symmetric} & & &\multicolumn{2}{c}{full} & \multicolumn{2}{c}{symmetric} \\ & true & mean & std. & mean & std. & & true & mean & std. & mean & std.\\ \hline $\mu_1$ & 0.1461 & 0.1467 & 0.0038 & \multirow{2}{*}{0.1345} & \multirow{2}{*}{0.0026} & & 0.1130 & 0.1131 & 0.0030 & \multirow{2}{*}{0.1152} & \multirow{2}{*}{0.0024}\\ $\mu_2$ & 0.1155 & 0.1159 & 0.0032 & & & & 0.1149 & 0.1153 & 0.0033 &\\ $\alpha_{11}$ & 0.3185 & 0.3204 & 0.0150 & \multirow{2}{*}{0.4102} & \multirow{2}{*}{0.0148} & & 0.4994 & 0.5031 & 0.0242 & \multirow{2}{*}{0.5252} & \multirow{2}{*}{0.0159}\\ $\alpha_{22}$ & 0.3821 & 0.3865 & 0.0219 & & & & 0.4682 & 0.4799 & 0.0210 &\\ $\alpha_{12}$ & 0.9812 & 0.9848 & 0.0282 & \multirow{2}{*}{1.1512} & \multirow{2}{*}{0.0223} & & 0.5937 & 0.5992 & 0.0232 & \multirow{2}{*}{0.7012} & \multirow{2}{*}{0.0199}\\ $\alpha_{21}$ & 1.4949 & 1.5000 & 0.0334 & & & & 0.9754 & 0.9854 & 0.0368 &\\ $\beta_{11}$ & 1.1799 & 1.1893 & 0.0567 & \multirow{4}{*}{2.0547} & \multirow{4}{*}{0.0315} & & 1.8305 & 1.8512 & 0.0948 & \multirow{4}{*}{1.8744} & \multirow{4}{*}{0.0364} \\ $\beta_{22}$ & 1.9553 & 1.9840 & 0.1195 & & & & 1.4706 & 1.5142 & 0.0666 &\\ $\beta_{12}$ & 2.0952 & 2.1077 & 0.0697 & & & & 1.5963 & 1.6110 & 0.0624 &\\
$\beta_{21}$ & 2.5030 & 2.5132 & 0.0587 & & & & 2.7850 & 2.8064 & 0.1036 &\\ $\eta$ & 0.1488 & 0.1501 & 0.0235 & 0.1424 & 0.0255 & & 0.1761 & 0.1768 & 0.0216 & 0.1756 & 0.0225\\ \hline & \multicolumn{5}{c}{ $c=0.1, d= 1.0, U =2.0$} & & \multicolumn{5}{c}{ $c=0.08, d= 1.5, U = 3.0$} \\ \hline & S.Vol. & TSRV & std. & H.Vol. & std. & & S.Vol. & TSRV & std. & H.Vol. & std. \\ & 0.1405 & 0.1463 & 0.0146 & 0.1346 & 0.0051 & & 0.1853 & 0.1897 & 0.0161 & 0.1795 & 0.0044\\ \hline \end{tabular} \end{table} Second, symmetric marked Hawkes models whose parameters change during the sample period were examined. Table~\ref{Table:timevarying} lists the estimation results with the symmetric Hawkes model over the 5.5 hour time horizon, where the model parameters during the first hour of the period follow the row `True 1' and, for the rest of the period, the row `True 2'. In the first panel, the varying part is the upper bound of the conditional mean of the mark distribution. In other words, during the first part of the sample period, the price process is quite volatile owing to the possibly large mark sizes, and the remaining part is rather stable. This mimics the 2010 Flash Crash; the corresponding empirical analysis is performed later. The results show a discrepancy between the TSRV and the Hawkes volatility: both are less than the sample volatility, and the TSRV is even less than the Hawkes volatility. \begin{table} \caption{Simulation study for the marked Hawkes model with 500 sample paths with time varying parameters}\label{Table:timevarying} \centering \begin{tabular}{cccccccccccc} \hline & $\mu$ & $\alpha_s$ & $\alpha_c$ & $\beta$ & $\eta$ & $c$ & $d$ & $U$ & S.Vol. & TSRV & H.Vol. \\ \hline True 1 & 0.1000 & 1.1000 & 1.2600 & 2.5700 & 0.0100 & 0.2500 & 1 & 7 \\ True 2 & 0.1000 & 1.1000 & 1.2600 & 2.5700 & 0.0100 & 0.2500 & 1 & 1.5 \\ mean & 0.1017 & 1.1017 & 1.2662 & 2.5667 & 0.0085 & & & & 0.6431 & 0.5801 & 0.6288 \\ std.
& 0.0022 & 0.0205 & 0.0265 & 0.0319 & 0.0039 & & & & & 0.1990 & 0.1381\\ \hline True 1 & 0.1000 & 1.1000 & 1.2600 & 2.5700 & 0.1000 & 0.1000 & 1 & 7 \\ True 2 & 0.0500 & 0.5000 & 0.5000 & 2.0000 & 0.1000 & 0.1000 & 1 & 1.5 \\ mean & 0.0375 & 1.0162 & 1.1190 & 2.3567 & 0.0233 & & & & 0.2496 & 0.2204 & 0.2300 \\ std. & 0.0010 & 0.0302 & 0.0347 & 0.0438 & 0.0142 & & & & & 0.0378 & 0.0205 \\ \hline \end{tabular} \end{table} \section{Empirical study}\label{Sect:empirical} \subsection{Data} The empirical studies used ultra high-frequency tick-by-tick data on several major stocks over several years, with the best bid and ask quotes reported on the New York Stock Exchange (NYSE). The time horizon of the sample for each day is set from 10:00 to 15:30. The data from the 30 minutes immediately after the opening and immediately before the closing were not used, to reduce seasonality effects; the price movement patterns near the opening and closing usually differ from those in the rest of the day. The jump sizes of the price movements of equities in the S\&P 500 are not constant over time, particularly when the price of the equity is high and hence the ratio between the price and the minimum tick size on the NYSE, \$0.01, is high. The tick size of the NYSE was reduced from \$1/8 to \$1/16 in 1997 and from \$1/16 to \$0.01 in 2001. In this paper, the mid-price movements are considered for the marked Hawkes modeling, to remove the bid-ask bounce, and hence the minimum jump size is the half tick size, \$0.005. In the original data, the time resolution of the records is one second. If more than one price change is reported with the same one-second timestamp, then the reported events are distributed over that one second interval on an equidistant finer partition.
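The redistribution of events sharing a one-second timestamp can be sketched as follows; the exact placement convention (offsets $0, 1/m, \dots, (m-1)/m$ within the stamped second for $m$ tied events) is an assumption, since the text only states that the events are placed on an equidistant finer partition:

```python
def spread_within_second(timestamps):
    """Place events sharing the same one-second timestamp on an
    equidistant finer partition of that second.  E.g. three events
    stamped at t seconds become t, t + 1/3, t + 2/3.  The input is
    assumed sorted; the offset convention is an assumption."""
    out, i = [], 0
    while i < len(timestamps):
        j = i
        while j < len(timestamps) and timestamps[j] == timestamps[i]:
            j += 1                      # find the run of tied stamps
        m = j - i                       # number of events in this second
        out.extend(timestamps[i] + k / m for k in range(m))
        i = j
    return out
```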
\subsection{Unconditional distribution of mark} Table~\ref{Table:tick} compares the percentages of the mark sizes of IBM, GE and CVX from 2008 to 2011, i.e., the unconditional distributions of the mark sizes are reported in the table. IBM and CVX have a range of mark sizes over the years, but GE's mark size distribution concentrates on the minimum mark size. This is because the prices of IBM and CVX are relatively high (IBM is around \$150 and CVX is around \$100), whereas the price of GE is around \$25. The unconditional distributions of the marks have exponentially decreasing shapes similar to geometric distributions. The empirical distributions of the marks of IBM, 2008-2011, are compared with geometric distributions in Figure~\ref{Fig:mark}. The solid lines are the empirical distributions and the dashed lines are the geometric distributions fitted by matching the first moments of the empirical and geometric distributions. \begin{table} \caption{Mark size distribution (\%) of IBM (left) and GE (center) and CVX (right) from 2008 to 2011}\label{Table:tick} \centering \begin{tabular}{ccccc|cccc|cccc} \hline mark & 2008 & 2009 & 2010 & 2011 & 2008 & 2009 & 2010 & 2011 & 2008 & 2009 & 2010 & 2011\\ \hline 1 & 51.80 & 59.98 & 80.88 & 57.04 & 89.81 & 98.39 & 99.61 & 99.68 & 60.69 & 68.30 & 91.55 & 77.21\\ 2 & 21.53 & 20.57 & 13.96 & 18.41 & 7.61 & 1.51 & 0.34 & 0.30 & 20.02 & 19.75 & 7.32 & 16.20\\ 3 & 11.22 & 10.89 & 3.53 & 9.54 & 1.55 & 0.00 & 0.00 & 0.00 & 9.25 & 8.43 & 0.77 & 4.77\\ 4 & 6.36 & 5.50 & 1.01 & 5.80 & 0.52 & 0.00 & 0.00 & 0.00 & 4.78 & 2.70 & 0.15 & 1.28\\ 5 & 3.61 & 1.98 & 0.35 & 3.68 & 0.19 & 0.00 & 0.00 & 0.00 & 2.49 & 0.62 & 0.06 & 0.32\\ 6 & 2.04 & 0.66 & 0.10 & 2.22 & 0.00 & 0.00 & 0.00 & 0.00 & 1.29 & 0.13 & 0.03 & 0.12\\ 7 & 1.17 & 0.24 & 0.00 & 1.37 & 0.00 & 0.00 & 0.00 & 0.00 & 0.66 & 0.04 & 0.02 & 0.05\\ 8 & 0.71 & 0.01 & 0.00 & 0.81 & 0.00 & 0.00 & 0.00 & 0.00 & 0.35 & 0.01 & 0.01 & 0.02\\ \hline \end{tabular} \end{table}
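The moment-matching fit used for Figure~\ref{Fig:mark} amounts to the following: for a geometric distribution on $\{1, 2, \dots\}$ the mean is $1/p$, so matching the first moment gives $p = 1/\bar{k}$. A minimal sketch with illustrative names:

```python
import numpy as np

def fit_geometric(marks):
    """Fit a geometric distribution on {1, 2, ...} by matching the
    first moment: mean = 1/p, hence p = 1 / (sample mean).  Returns
    the fitted p and the fitted probability mass function."""
    p = 1.0 / np.mean(marks)
    pmf = lambda k: (1.0 - p) ** (k - 1) * p   # P(K = k), k = 1, 2, ...
    return p, pmf
```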
\begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig1a_mark_distribution_ibm2008_gd_compare} \caption{IBM, 2008} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig1b_mark_distribution_ibm2009_gd_compare} \caption{IBM, 2009} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig1c_mark_distribution_ibm2010_gd_compare} \caption{IBM, 2010} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig1d_mark_distribution_ibm2011_gd_compare} \caption{IBM, 2011} \end{subfigure} \caption{Empirical unconditional distribution of mark}\label{Fig:mark} \end{figure} \subsection{Mark size and intensity} This subsection examines the dependence between the mark size and the ground intensity, i.e., the expected number of events over a unit interval. The empirical evidence shows that the mark size and the current ground intensity are significantly related to each other. First, the empirical conditional expectation of the intensities with a given mark size, $\mathbb E[\lambda_{gi}(t) | k_i ]$, was calculated. Proxy intensities are introduced because the ground intensities are unobservable. The proxy intensities for the up, down and total jumps are defined as the numbers of up, down and total jumps, respectively, over a fixed time period, which ends just before the time of the jump, divided by the length of the period. The period for the proxy intensities was chosen as ten seconds. Mathematically, the up proxy intensity for the mark $k_1$, which takes place at time $t$, is represented by $ ( N_{g1}(t-) - N_{g1}(t-\tau) ) / \tau $, where $\tau$ is the length of the period. Table~\ref{Table:mark_size} presents the calculated sample means and standard errors of the proxy intensities for each mark size for IBM, in 2010 and 2011.
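A minimal sketch of this proxy intensity, assuming the counting window $(t-\tau, t)$ ends just before the jump at time $t$ (function and variable names are illustrative):

```python
import numpy as np

def proxy_intensity(event_times, t, tau=10.0):
    """Proxy intensity at a jump time t: the number of events in the
    window (t - tau, t), i.e. ending just before the jump, divided by
    the window length tau (ten seconds in the paper)."""
    event_times = np.asarray(event_times, dtype=float)
    n = np.sum((event_times > t - tau) & (event_times < t))
    return n / tau
```

Applied to the up, down, or pooled event streams, this yields the up, down, and total proxy intensities, respectively.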
For example, for the mark size 6, there are 86,738 up jumps reported, and the sample mean of the up proxy intensity is 3.4784 with a sample standard error of 0.0145. Note that an intensity of 3.4784 implies that the expected number of movements over a unit time, set to one second, is approximately 3.4784. The table shows that the proxy intensities increase with the mark size. The negative integers in the mark size column represent down jumps of the price. The proxy intensities were also calculated for a five second time horizon; the results are similar to those for the 10 second horizon and so are not shown. \begin{table} \caption{Relationship between the mark size and the mean of proxy intensity (10 seconds), IBM}\label{Table:mark_size} \centering \begin{tabular}{c|ccc|ccccccc} \hline & & 2011 & & & 2010 & &\\ mark size & up & down & total & up & down & total\\ \hline 6 & 3.4784 & 3.6400 & 7.1184 & 6.4059 & 6.5618 & 12.9676\\ & (0.0145) & (0.0149) & (0.0292) & (0.2306) & (0.1171) & (0.1150)\\ 5 & 3.1191 & 3.2461 & 6.3652 & 4.3289 & 4.5448 & 8.8738\\ & (0.0105) & (0.0106) & (0.0209) & (0.0890) & (0.0449) & (0.0448)\\ 4 & 2.8206 & 2.9237 & 5.7443 & 3.4963 & 3.6881 & 7.1844\\ & (0.0078) & (0.0078) & (0.0155) & (0.0439) & (0.0221) & (0.0221)\\ 3 & 2.5683 & 2.6551 & 5.2233 & 2.7119 & 2.8717 & 5.5836\\ & (0.0061) & (0.0060) & (0.0120) & (0.0182) & (0.0092) & (0.0092)\\ 2 & 2.3051 & 2.3795 & 4.6846 & 2.1799 & 2.2892 & 4.4691\\ & (0.0042) & (0.0041) & (0.0083) & (0.0071) & (0.0036) & (0.0036)\\ 1 & 2.3131 & 2.3501 & 4.6632 & 1.7682 & 1.8481 & 3.6163\\ & (0.0028) & (0.0028) & (0.0056) & (0.0024) & (0.0012) & (0.0012)\\ $-1$ & 2.2403 & 2.4140 & 4.6543 & 1.7488 & 1.8883 & 3.6371\\ & (0.0028) & (0.0028) & (0.0056) & (0.0024) & (0.0012) & (0.0012)\\ $-2$ & 2.3004 & 2.4376 & 4.7381 & 2.1655 & 2.3022 & 4.4687\\ & (0.0042) & (0.0042) & (0.0084) & (0.0072) & (0.0036) & (0.0036)\\ $-3$ & 2.5924 & 2.7167 & 5.3090 & 2.7264 & 2.8552 &
5.5816\\ & (0.0062) & (0.0062) & (0.0123) & (0.0185) & (0.0094) & (0.0093)\\ $-4$ & 2.8174 & 2.9552 & 5.7726 & 3.4806 & 3.6163 & 7.0969\\ & (0.0078) & (0.0079) & (0.0156) & (0.0432) & (0.0218) & (0.0217)\\ $-5$ & 3.1078 & 3.2399 & 6.3477 & 4.4487 & 4.5968 & 9.0454\\ & (0.0104) & (0.0106) & (0.0209) & (0.0902) & (0.0466) & (0.0469)\\ $-6$ & 3.4606 & 3.5845 & 7.0451 & 6.1019 & 6.2334 & 12.3352\\ & (0.0145) & (0.0149) & (0.0292) & (0.2141) & (0.1083) & (0.1074)\\ \hline \end{tabular} \end{table} Second, Table~\ref{Table:mark_size2} presents the relationship between the mark sizes and the inferred ground intensities with the linear impact function, using the IBM tick data. Prior to calculating the inferred ground intensities, the parameters $\mu, \alpha_s, \alpha_c, \beta,$ and $\eta$ were estimated by maximizing $\log L_g$ defined in Eq.~\eqref{Eq:likelihood}. The estimations were performed on a daily basis and the detailed estimation results are presented later. Subsequently, the inferred ground intensities were computed with the estimates of $\mu, \alpha_s, \alpha_c, \beta$, and $\eta$ using the definition of the ground intensities in Eqs.~\eqref{Eq:lambdag1}~and~\eqref{Eq:lambdag2}. The sample means and sample standard errors of the inferred ground intensities for each mark size are reported. As with the proxy intensities, the inferred ground intensities increase with the mark size. This implies that when a large mark is observed, the ground intensity is probably large.
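Although Eqs.~\eqref{Eq:lambdag1}~and~\eqref{Eq:lambdag2} are not restated here, with exponential kernels of common decay $\beta$ and the linear impact $g(k) = 1 + (k-1)\eta$, the inferred ground intensity just before each event can be computed by the usual Markovian recursion. The following is a sketch under those assumptions, for the up-move intensity $\lambda_{g1}$ only; the names and the event-type encoding are illustrative:

```python
import numpy as np

def inferred_ground_intensity(times, marks, types, mu, a_s, a_c, beta, eta):
    """Sketch of the inferred up-move ground intensity lambda_g1 just
    before each event, assuming exponential kernels with common decay
    beta and the linear impact g(k) = 1 + (k - 1) * eta.  `types` is
    +1 for an up move (self-excitation) and -1 for a down move
    (cross-excitation)."""
    lam = np.empty(len(times))
    s, t_prev = 0.0, 0.0                  # decaying excitation state
    for i, (t, k, d) in enumerate(zip(times, marks, types)):
        s *= np.exp(-beta * (t - t_prev))  # decay since last event
        lam[i] = mu + s                    # intensity just before this event
        alpha = a_s if d > 0 else a_c      # self- vs cross-excitation
        s += alpha * (1.0 + (k - 1.0) * eta)
        t_prev = t
    return lam
```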
\begin{table} \caption{Relationship between the mark size and the mean of inferred ground intensity with the linear impact function, IBM}\label{Table:mark_size2} \centering \begin{tabular}{c|ccc|ccccccc} \hline & & 2011 & & & 2010 & &\\ mark size & $\lambda_{g1}$ & $\lambda_{g2}$ & $\lambda_g$ & $\lambda_{g1}$ & $\lambda_{g2}$ & $\lambda_g$\\ \hline 6 & 5.1666 & 5.1108 & 10.2774 & 8.1033 & 8.0330 & 16.1364\\ & (0.0222) & (0.0221) & (0.0442) & (0.1366) & (0.1370) & (0.2734)\\ 5 & 4.7423 & 4.6834 & 9.4257 & 6.5586 & 6.4855 & 13.0443\\ & (0.0165) & (0.0164) & (0.0328) & (0.0621) & (0.0620) & (0.1240)\\ 4 & 4.4092 & 4.3565 & 8.7656 & 5.4497 & 5.3975 & 10.8472\\ & (0.0126) & (0.0125) & (0.0251) & (0.0338) & (0.0337) & (0.0674)\\ 3 & 4.0264 & 3.9842 & 8.0106 & 4.2182 & 4.1806 & 8.3988\\ & (0.0094) & (0.0094) & (0.0188) & (0.0145) & (0.0145) & (0.0290)\\ 2 & 3.5870 & 3.5505 & 7.1375 & 3.4705 & 3.4476 & 6.9181\\ & (0.0066) & (0.0066) & (0.0131)& (0.0064) & (0.0064) & (0.0128)\\ 1 & 3.5924 & 3.5580 & 7.1504 & 2.5428 & 2.5321 & 5.0749\\ & (0.0057) & (0.0057) & (0.0114) & (0.0021) & (0.0021) & (0.0041)\\ $-1$ & 3.5568 & 3.5942 & 7.1509 & 2.5702 & 2.5836 & 5.1538\\ & (0.0057) & (0.0057) & (0.0113) & (0.0021) & (0.0021) & (0.0041)\\ $-2$ & 3.5858 & 3.6270 & 7.2128 & 3.4730 & 3.4978 & 6.9708\\ & (0.0066) & (0.0066) & (0.0132) & (0.0064) & (0.0064) & (0.0129)\\ $-3$ & 4.0451 & 4.0927 & 8.1378 & 4.2369 & 4.2781 & 8.5150\\ & (0.0096) & (0.0096) & (0.0192) & (0.0148) & (0.0148) & (0.0296)\\ $-4$ & 4.3580 & 4.4177 & 8.7757 & 5.3775 & 5.4380 & 10.8154\\ & (0.0126)& (0.0127) & (0.0253) & (0.0333) & (0.0334) & (0.0667)\\ $-5$ & 4.6872 & 4.7587 & 9.4459 & 6.4356 & 6.5047 & 12.9402\\ & (0.0163) & (0.0164) & (0.0327) & (0.0624) & (0.0626) & (0.1250)\\ $-6$ & 5.0360 & 5.0884 & 10.1244 & 7.9426 & 8.0193 & 15.9620 \\ & (0.0216) & (0.0217) & (0.0433) & (0.1343) & (0.1344) & (0.2685)\\ \hline \end{tabular} \end{table} Third, Figure~\ref{Fig:CEk} illustrates the empirical expectations of the 
mark size conditional on given inferred ground intensities, $\mathbb E[k_i |\lambda_{gi}(t)]$, using the tick data of IBM, 2008-2011. For each year, the empirical conditional expectation given $\lambda_{gi} = n$ for an integer $n$ was computed as the sample mean of the mark sizes whose associated inferred ground intensity falls into $(n-1, n]$. The conditional expectations of the marks were plotted only where the total number of observed marks was larger than 100 for each year, i.e., bins with a small number of observations were dropped. In the figure, the ground intensities vary more widely as the years pass, suggesting that the overall level of activity increases. Most of the intensities were less than 15 in 2008, but the inferred intensities in 2011 were more widely distributed, with a large portion of the observed intensities larger than 15. The shape of the conditional expectation of the mark changes over time. The conditional expectation tends to increase with the intensity in 2008 and 2010, while in 2009 and 2011 the conditional expectations show humped shapes. Figure~\ref{Fig:CEk_M} shows the empirical conditional expectations of the mark given the ground intensity, computed on a monthly basis from January to June 2011 for IBM. In these monthly conditional expectations, irregular patterns were observed over time. This changing shape of the conditional distribution of the marks over time is the reason why a parametric mark distribution is not specified and the mark distribution part is estimated non-parametrically.
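The yearly binning just described can be sketched as follows; the function and variable names are illustrative, and the cutoff of 100 observations per bin follows the text:

```python
import numpy as np

def conditional_mean_mark(lam, marks, min_count=100):
    """Empirical E[k | lambda_g in (n-1, n]]: the sample mean of the
    marks whose inferred ground intensity falls in each unit-width
    bin, keeping only bins with at least `min_count` observations."""
    lam, marks = np.asarray(lam, dtype=float), np.asarray(marks, dtype=float)
    bins = np.ceil(lam).astype(int)        # lambda in (n-1, n]  ->  bin n
    out = {}
    for n in np.unique(bins):
        sel = bins == n
        if sel.sum() >= min_count:         # drop sparsely observed bins
            out[int(n)] = marks[sel].mean()
    return out
```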
\begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig2a_CE_k_IBM2008_line} \caption{IBM, 2008}\label{CE_IBM2008} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig2b_CE_k_IBM2009_line} \caption{IBM, 2009} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig2c_CE_k_IBM2010_line} \caption{IBM, 2010} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig2d_CE_k_IBM2011_line} \caption{IBM, 2011} \end{subfigure} \caption{Conditional expectation of $k_1$ on $\lambda_{g1}$, IBM, 2008-2011}\label{Fig:CEk} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig3a_CE_k_IBM201101} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig3b_CE_k_IBM201102} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig3c_CE_k_IBM201103} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig3d_CE_k_IBM201104} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig3e_CE_k_IBM201105} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig3f_CE_k_IBM201106} \end{subfigure} \caption{Conditional expectation of $k_1$ on $\lambda_{g1}$, IBM, 2011}\label{Fig:CEk_M} \end{figure} \subsection{Estimation result} Table~\ref{Table:estimatesIBM2011} lists the likelihood estimation results of the marked Hawkes model with the tick data of IBM, January 2011, where $\log L_g$ of Eq.~\eqref{Eq:likelihood} is maximized. The numerically computed standard errors are reported in parentheses. The estimations were performed on a daily basis, i.e., the estimates were recalculated every business day.
The behaviors of $\mu, \alpha_s, \alpha_c,$ and $\beta$ in Figure~\ref{Fig:EstimationIBM2011} are similar to those estimated in the simple Hawkes model, see~\cite{LeeSeo}. \begin{table} \caption{Estimates of IBM, 2011 with linear impact function}\label{Table:estimatesIBM2011} \centering \begin{tabular}{ccccccc} \hline date & $\mu$ & $\alpha_s$ & $\alpha_c$ & $\beta$ & $\eta$ & $\log L_g$ \\ \hline 0103 & 0.1080 & 0.6577 & 0.9956 & 2.2921 & 0.1241 & $-17185.5$ \\ & (0.0021) & (0.0175) & (0.0218) & (0.0339) & (0.0187) \\ 0104 & 0.1450 & 0.7033 & 1.0334 & 2.3527 & 0.2266 & $-17371.5$ \\ & (0.0026) & (0.0157) & (0.0188) & (0.0272) & (0.0233) \\ 0105 & 0.1079 & 0.9736 & 0.9414 & 2.550 & 0.1654 & $-13160.3$ \\ & (0.0021) & (0.0207) & (0.0203) & (0.0334) & (0.0180)\\ 0106 & 0.1335 & 0.8198 & 0.9475 & 2.3615 & 0.1357 & $-16603.4$\\ & (0.0025) & (0.0173) & (0.0191) & (0.0296) & (0.0171) \\ 0107 & 0.1588 & 0.8574 & 1.0145 & 2.435 & 0.1560 & $-15956.4$\\ & (0.0029) & (0.0164) & (0.0184) & (0.0284) & (0.0176)\\ 0110 & 0.1338 & 0.7423 & 1.0724 & 2.4388 & 0.1927 & $-16141.3$\\ & (0.0025) & (0.0160) & (0.0191) & (0.0266) & (0.0163)\\ 0111 & 0.1271 & 0.5855 & 1.2108 & 2.4403 & 0.1570 & $-16923.1$\\ & (0.0024) & (0.0159) & (0.0223) & (0.0314) & (0.0155)\\ 0112 & 0.1160 & 0.6517 & 0.8552 & 2.0639 & 0.3561 & $-19492.9$\\ & (0.0023) & (0.0158) & (0.0185) & (0.0307) & (0.0481) \\ 0113 & 0.1042 & 0.7245 & 1.0502 & 2.5284 & 0.2372 & $-15508.4$\\ & (0.0020) & (0.0192) & (0.0240) & (0.0369) & (0.0261) \\ 0114 & 0.1138 & 0.6702 & 0.8798 & 2.3142 & 0.2380 & $-16589.3$ \\ & (0.0022) & (0.0175) & (0.0202) & (0.0341) & (0.0183)\\ 0118 & 0.1330 & 0.5642& 1.1548& 2.5082& 0.1651 & $-16374.7$\\ & (0.0024) & (0.0169) & (0.0239) & (0.0354) & (0.0147) \\ 0119 & 0.2198 & 0.5423& 1.2631 & 2.4964 & 0.1323 & $-11223.0$\\ & (0.0036) & (0.0133) & (0.0207) & (0.0255) & (0.0093) \\ 0120 & 0.1509 & 0.7060 & 1.0017 & 2.3114 & 0.1824 & $-15709.4$\\ & (0.0028) & (0.0158) & (0.0211) & (0.0332) & (0.0154) \\ 0121 & 
0.1447 & 0.4901 & 1.3356 & 2.5806 & 0.1545 & $-16524.6$ \\ & (0.0026) & (0.0152) & (0.0247) & (0.0333) & (0.0132)\\ 0124 & 0.1771 & 0.6095 & 1.2424 & 2.5658 & 0.1649 & $-14711.6$\\ & (0.0030) & (0.0151) & (0.0205) & (0.0290) & (0.0132)\\ 0125 & 0.1669 & 0.8214 & 0.9528 & 2.2982 & 0.1954 & $-13602.1$\\ & (0.0030) & (0.0147) & (0.0167) & (0.0244) & (0.0170) \\ 0126 & 0.1421 & 0.5939 & 1.3809 & 2.6146 & 0.1217 & $-9239.8$\\ & (0.0026) & (0.0154) & (0.0231) & (0.0295) & (0.0100) \\ \hline \end{tabular} \end{table} \begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig4a_mu_IBM2011} \caption{$\mu$} \label{Fig:mu_IBM2011} \end{subfigure} \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig4b_beta_IBM2011} \caption{$\beta$} \label{Fig:beta_IBM2011} \end{subfigure} \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig4c_alphas_IBM2011} \caption{$\alpha_s$} \label{Fig:alphas_IBM2011} \end{subfigure} \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig4d_alphac_IBM2011} \caption{$\alpha_c$} \label{Fig:alphac_IBM2011} \end{subfigure} \caption{Marked Hawkes estimation result, IBM, 2011}\label{Fig:EstimationIBM2011} \end{figure} The dynamics of $\eta$ are illustrated in Figure~\ref{Fig:eta}, where $\eta$ was estimated to be around 0.2 in general from 2008 to 2011. The slope parameter of the impact function, $\eta$, is positive in general, which means that a large mark tends to have a large impact on future intensities. On the other hand, $\eta$ is less than 1, which implies that the impact of a mark of size 2 is generally less than the total impact of two consecutive unit size jumps occurring over a very short time interval. Note that a few negative values of $\eta$ are also observed. Figure~\ref{Fig:etaCVX} shows the dynamics of $\eta$ estimated for CVX.
The overall behaviors of $\eta$ for IBM and CVX are similar, but the $\eta$ of CVX was more volatile. Figure~\ref{Fig:vol} compares the Hawkes volatility computed by Remark~\ref{Remark:vol} with the TSRV of IBM, 2008-2011. The trends of the Hawkes volatility and the TSRV are similar, but the Hawkes volatility is generally larger than the TSRV, especially when the volatility is high. This tendency was also found in the simple symmetric Hawkes model. The discrepancy might be due to the restriction that the parameter settings need to be symmetric and Markovian; on the other hand, the precise reason is unclear at this point. Note that in the previous simulation study, the Hawkes volatility and the TSRV converge. The empirical studies suggest that the parameter restrictions for symmetry are not perfectly consistent with the real data. Figure~\ref{Fig:EstimationIBM2011full} presents the dynamics of all parameters of the fully characterized Hawkes model of Subsection~\ref{Subsect:full} for IBM, 2011. The results show that each parameter pair $(\mu_1, \mu_2)$, $(\alpha_{11}, \alpha_{22})$, $(\alpha_{12}, \alpha_{21})$, $(\beta_{11}, \beta_{22})$ and $(\beta_{12}, \beta_{21})$ exhibits a similar trend over time, but the members of each pair are not exactly equal. The summary statistics in Table~\ref{Table:IBM2011_full} show that each parameter pair has a similar mean, but the mean absolute percentage error (MAPE) also shows a difference between the parameters. The row `MAPE' presents the error between two adjacent parameters. In addition, $\beta_{11}$ and $\beta_{12}$ differ, even in the mean. A similar observation is found in the simple Hawkes approach.
\begin{figure} \centering \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth]{fig5a_mu12_IBM2011} \caption{$\mu_1$ and $\mu_2$} \label{Fig:IBM2011_mu12} \end{subfigure} \centering \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth]{fig5b_alpha1122_IBM2011} \caption{$\alpha_{11}$ and $\alpha_{22}$} \label{Fig:IBM2011_alpha1122} \end{subfigure} \centering \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth]{fig5c_alpha1221_IBM2011} \caption{$\alpha_{12}$ and $\alpha_{21}$} \label{Fig:IBM2011_alpha1221} \end{subfigure} \centering \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth]{fig5d_beta1122_IBM2011} \caption{$\beta_{11}$ and $\beta_{22}$} \label{Fig:IBM2011_beta1122} \end{subfigure} \centering \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth]{fig5e_beta1221_IBM2011} \caption{$\beta_{12}$ and $\beta_{21}$} \label{Fig:IBM2011_beta1221} \end{subfigure} \caption{Estimation result with the fully characterized marked Hawkes, IBM, 2011}\label{Fig:EstimationIBM2011full} \end{figure} \begin{table} \caption{Estimation result of the fully characterized self- and mutually excited Hawkes process, IBM, 2011}\label{Table:IBM2011_full} \centering \begin{tabular}{ccccccccccccc} \hline & $\mu_1$ & $\mu_2$ & $\alpha_{11}$ & $\alpha_{22}$& $\alpha_{12}$& $\alpha_{21}$ & $\beta_{11}$ & $\beta_{22}$ & $\beta_{12}$ & $\beta_{21}$\\ \hline IBM, 2011 \\ mean & 0.1462 & 0.1470 & 0.5404 & 0.5621 & 0.6443 & 0.6611 & 1.5499 & 1.5793 & 1.7538 & 1.8056 \\ std. & 0.0543 & 0.0545 & 0.1962 & 0.2361 & 0.2144 & 0.2717 & 0.3845 & 0.4192 & 0.3373 & 0.4282 \\ MAPE & \multicolumn{2}{c}{0.1436} & \multicolumn{2}{c}{0.3924} & \multicolumn{2}{c}{0.3179} & \multicolumn{2}{c}{0.2881} & \multicolumn{2}{c}{0.2247}\\ \hline \end{tabular} \end{table} Interestingly, when the stock market is in a highly volatile state, the slope of the impact function, $\eta$, is estimated to be relatively close to zero.
For example, on September 29, 2008, at the beginning of the financial crisis, the reported $\eta$ of IBM was around 0.02, much smaller than the annual average of 0.16; the market was very unstable, with a TSRV of 0.8766 and a Hawkes volatility of 1.4530. During the Flash Crash of May 6, 2010, the estimated $\eta$ of IBM was around 0.01 (against an annual average of $\eta = 0.23$), when the TSRV was 0.9656 and the Hawkes volatility was 1.3264. Because of a statement by the Federal Reserve, the stock market was highly volatile on August 9, 2011, and the $\eta$ of IBM was around 0.01, much smaller than the annual average of 0.11. CVX showed a similar pattern: the estimated values of $\eta$ for CVX in the above cases are around 0.04, 0.01, and 0.07, respectively, whereas the annual averages in 2008, 2010 and 2011 were 0.14, 0.41 and 0.24, respectively. In a highly volatile market, many more large marks are observed than in a stable market, and for those large marks the linear relationship between the mark size and the future impact is weak.
\begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig6a_eta_IBM2008} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig6b_eta_IBM2009} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig6c_eta_IBM2010} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig6d_eta_IBM2011} \end{subfigure} \caption{Estimation results of $\eta$, IBM, 2008-2011}\label{Fig:eta} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig7a_eta_CVX2008} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig7b_eta_CVX2009} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig7c_eta_CVX2010} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig7d_eta_CVX2011} \end{subfigure} \caption{Estimation results of $\eta$, CVX, 2008-2011}\label{Fig:etaCVX} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig8a_HVol_IBM2008} \caption{Hawkes volatility, 2008} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig8b_TSRV_IBM2008} \caption{TSRV, 2008} \end{subfigure} \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig8c_HVol_IBM2009} \caption{Hawkes volatility, 2009} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig8d_TSRV_IBM2009} \caption{TSRV, 2009} \end{subfigure} \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig8e_HVol_IBM2010} \caption{Hawkes volatility, 2010} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig8f_TSRV_IBM2010} \caption{TSRV, 2010} \end{subfigure} \centering \begin{subfigure}[b]{0.45\textwidth} 
\includegraphics[width=\textwidth]{fig8g_HVol_IBM2011} \caption{Hawkes volatility, 2011} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth]{fig8h_TSRV_IBM2011} \caption{TSRV, 2011} \end{subfigure} \caption{Estimation results of the volatility, IBM, 2008-2011}\label{Fig:vol} \end{figure} In the estimation of the Hawkes models, the data of all arrival times over the sample period are used. Even within a ten minute interval, usually more than a thousand arrival times are available, which is sufficient, in terms of sample size, to provide reliable estimates; hence adequate intraday analyses are possible. The left panel of Figure~\ref{Fig:intraday} presents the intraday variation of the volatility of IBM on May 6, 2010, the day of the Flash Crash. The estimation was performed on a ten minute basis from 10:00 to 15:30 and the cumulative volatility is plotted in the figure. An abrupt increase in volatility is observed between 14:30 and 15:00, when the stock market crashed, and the volatility stabilized after 15:00. The right panel plots the intraday U-shaped pattern of the volatility of IBM on an ordinary day, August 9, 2011. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{fig9a_realtime_IBM_20100506} \includegraphics[width=0.45\textwidth]{fig9b_realtime_IBM_20110809} \caption{Cumulative intraday volatility estimated under the marked Hawkes model with updates every ten minutes}\label{Fig:intraday} \end{figure} From the volatility estimation perspective, the volatility formula under i.i.d. marks in Corollary~\ref{Cor:iidvol} and the volatility formula without the i.i.d. assumption are similar. The empirical results also showed that the two formulas give similar values over time, as shown in Figure~\ref{Fig:iid}. Therefore, for practical purposes such as volatility computation, it may be sufficient to assume an i.i.d. mark distribution.
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{ibm2010_vol_compare_w_iid} \includegraphics[width=0.45\textwidth]{ibm2011_vol_compare_w_iid} \caption{Comparison between the daily volatility under the i.i.d. assumption and without the i.i.d. assumption}\label{Fig:iid} \end{figure} \section{Concluding remark}\label{Sect:concl} A marked Hawkes model was developed for price tick dynamics in equity markets. A linear impact function was employed to describe the future effects of price jumps. A specific distribution for the jump size was not assumed; instead, the empirical distribution was used for the estimation. The model is not limited to independent marks, since the empirical studies showed that the jump size depends on the ground intensities. The volatility formula was derived using stochastic calculus and statistical methods, and the simulation studies showed that the Hawkes volatility and realized volatility are similar in the symmetric cases, with the Hawkes volatility having a smaller standard error. On the other hand, there are biases when the underlying path is not symmetric or the parameters vary with time. In the empirical studies based on equity prices reported on the NYSE, a significant positive linear impact of approximately 0.2 was observed, together with various shapes, linear or humped, of the conditional mean structure of the mark size. The Hawkes model is useful for estimating the intraday volatility, particularly when the volatility is time varying. The U-shaped intraday seasonality of the volatility was observed, and the example of the Flash Crash was examined. As discussed in the simulation and empirical studies, the discrepancy between the Hawkes volatility and realized volatility, and the biases in the sample volatility, will be important subjects for future study. In the presence of asymmetry in the price dynamics and time-varying parameters, a more robust estimation method is required for a more exact volatility computation.
One of the financial applications of the marked Hawkes model is intraday volatility measurement. There is a discrepancy between the realized volatility and the marked Hawkes volatility, but the trends and overall movements of the two volatilities are consistent. In addition, the marked Hawkes volatility estimation uses all available events reported by the exchange; hence, even with data from a relatively short interval, one can compute the Hawkes volatility and track intraday changes in the volatility. This feature will help traders, portfolio managers, or algorithmic trading systems make decisions by monitoring changes in the intraday volatility trends of the assets of interest. A return model of the mark size will be considered in future work. The distribution of the mark size depends on the current price of the underlying asset, and it is worthwhile to examine the relationship between the mark size measured under the return process and future intensities. In addition, it would be interesting to compare the performance of the marked Hawkes model with the ACD-GARCH model, with the future intensity modified as a function of the marks. \newpage
\section{Introduction and statement of the results}\label{introduction} Consider the problem of computing the expected value \begin{align} \nu^*[f]\equiv \int f(q) d\nu^*(q) \textrm{ with } d \nu^* = Z^{-1} e^{- \beta V(q)} dq \end{align} for some given function $f: \mathcal{Q} \to \mathbb{R}$ with $\mathcal{Q}\subset \mathbb{R}^d$. If the normalization constant $Z$ is unknown or prohibitive to compute, it can be advantageous to construct an ergodic stochastic process $Q_t$ with stationary distribution $\nu^*$ (in this paper only continuous-time processes are considered) and use the fact that, by the strong law of large numbers and for suitable $f$, one has \begin{align} \lim_{T \to \infty} \frac{1}{T} \int_0^T f(Q_t) \, dt = \nu^*[f]\,. \end{align} Such a process $Q_t$ is usually called a Markov chain Monte Carlo (MCMC) sampler, and we can then use the finite-time average $\frac{1}{T} \int_0^T f(Q_t)dt$ as an estimator for $\nu^*[f]$. There are of course multiple choices of stochastic processes with invariant measure $\nu^*$, and in order to decide which process to use, we need to evaluate its performance. While traditional Monte Carlo algorithms are often built to be reversible, in recent years non-reversible algorithms have attracted a lot of attention because of their potential to sample the space in a more efficient manner, in particular in the context of Bayesian statistics and molecular dynamics (see for example \cite{DHN,HwHwSh2005,Bi2016,DLP2016,RBS3} and many more references therein). In this paper we consider a variety of non-reversible MCMC samplers such as the Langevin equation, and various modifications thereof, as well as partially deterministic Markov processes such as the zig-zag sampler (\cite{BFR2019}), the bouncy particle sampler (\cite{PetersdeWith2012}), and the hybrid Hamiltonian Monte Carlo (\cite{Duane1987}).
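As a simple concrete instance of such an estimator (using the reversible overdamped Langevin dynamics $dQ_t=-\nabla V(Q_t)\,dt+\sqrt{2/\beta}\,dW_t$ rather than any of the non-reversible samplers discussed below), the following sketch discretizes the dynamics with the Euler-Maruyama scheme. The step size and run length are illustrative, and the discretization introduces an $O(h)$ bias that a careful implementation would need to control:

```python
import math
import random

def langevin_time_average(grad_V, f, beta=1.0, h=0.01, n_steps=400_000, q0=0.0, seed=1):
    """Euler-Maruyama discretization of dQ = -V'(Q) dt + sqrt(2/beta) dW;
    returns the time average (1/T) * int_0^T f(Q_t) dt as a discrete sum."""
    rng = random.Random(seed)
    q, acc = q0, 0.0
    noise_sd = math.sqrt(2.0 * h / beta)
    for _ in range(n_steps):
        q += -grad_V(q) * h + noise_sd * rng.gauss(0.0, 1.0)
        acc += f(q)
    return acc / n_steps

# For V(q) = q^2/2 and beta = 1, the target nu* is the standard Gaussian,
# so the ergodic average of f(q) = q^2 should be close to nu*[q^2] = 1.
est = langevin_time_average(grad_V=lambda q: q, f=lambda q: q * q)
```

The same time-average estimator applies verbatim to the non-reversible samplers introduced later; only the simulated process changes.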
Each of these samplers is constructed by extending the phase space from $\mathcal{Q}$ to $\mathcal{X} = \mathcal{Q} \times \mathcal{P}$ and then constructing a non-reversible MCMC in the extended phase space with an invariant measure $\mu^*=\nu^* \times \rho^*$. The extra dimension $\mathcal{P}$ can be thought of as momentum space, and the dynamics considered here combine a conservative Hamiltonian-type dynamics with a dissipative sampling mechanism for the measure $\rho^*$ in the momentum variable $p \in \mathcal{P}$. For example $\rho^*$ may be a Gaussian distribution, although other choices are possible; we do assume $\rho^*$ has mean zero. All the algorithms we consider here have been proved to be hypocoercive. The concept of hypocoercivity was formalized by Villani to describe dynamics which do not satisfy a Poincar\'e inequality (otherwise they would be called coercive) but which nevertheless converge exponentially fast to equilibrium in $L^2(\mu^*)$. In a series of works \cite{Desvillettes2001,HerauNier2004,EckmannHairer2003,Herau2006,villani2009hypocoercivity} it was proved that the Langevin equation is hypocoercive (see also \cite{EPR1998,EckmannHairer2000,RBT2002,Mattingly2002} for some earlier and related convergence results). A few years ago \cite{Dolbeault2009,Dolbeault2015} found a new, short, and very elegant proof of hypocoercivity, and their techniques have been used for various modifications of the Langevin equation \cite{Iacobucci2017,StoltzTrs2018,StoltzVdE} (some of them without hypoellipticity) and recently in \cite{ADNR2018} for a class of partially deterministic Markov processes, among them: \begin{enumerate} \item The bouncy particle sampler, which was introduced in \cite{PetersdeWith2012} and whose ergodic properties were studied in \cite{BVC2018} and \cite{WR2017}.
\item The zig-zag sampler, introduced in \cite{BFR2019} and further studied in \cite{BRZ2017}, which generalizes to higher dimensions the so-called telegraph process studied earlier in \cite{FGM2012,FGM2016} and \cite{MonMarch2014}. \item The hybrid Hamiltonian Monte Carlo introduced by \cite{Duane1987} and whose ergodic properties are studied in \cite{BRSS2017}, see also \cite{ELi2008} and \cite{Neal2011}. \end{enumerate} To explain this result, decompose the generator $A$ of the dynamics on the Hilbert space $L^2(\mu^*)$ into symmetric and antisymmetric parts, $A= S+T$ with $S^\dagger=S$, $T^\dagger=-T$, denote by $\Pi$ the projection of $L^2(\mu^*)$ onto $L^2(\nu^*)$ given by $\Pi(f)(q) = \int f(q,p) d \rho^*(p)$, and consider the operator \begin{align} B= (I + (T \Pi)^* (T \Pi))^{-1} (- T \Pi)^* \end{align} and the family of modified scalar products $\langle f,\, g\rangle_\epsilon = \langle f,\,(I + \epsilon G) g\rangle$, where $G=B+B^\dagger$; here, and in the following, $\langle \cdot,\cdot\rangle$ will denote the $L^2(\mu^*)$-scalar product. Under suitable conditions (more details are in Section \ref{sec:hypocoercive}) this scalar product is equivalent to the $L^2(\mu^*)$-scalar product, and it is shown in \cite{Dolbeault2009,Dolbeault2015,ADNR2018} that, for sufficiently small values of $\epsilon>0$, the dynamics satisfy a Poincar\'e inequality for the modified scalar product \begin{equation}\label{eq:hypocoercive-epsilon} \langle - A f , f \rangle_\epsilon \ge \Lambda(\epsilon) \, {\rm Var}_{\mu^*}(f) \,, \end{equation} where $\Lambda(\epsilon)>0$ can be explicitly bounded in terms of the Poincar\'e constant of the measure $\nu^*$, the spectral gap of the sampling dynamics for $\rho^*$, and properties of the potential $V$ (see Section \ref{sec:hypocoercive}).
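As a down-to-earth illustration of one of the samplers just listed, the following sketch runs a one-dimensional zig-zag process for the standard Gaussian target $V(q)=q^2/2$, with velocity $v\in\{-1,1\}$ flipping at rate $(v\,V'(q))^+$ plus a refresh rate. It uses a crude time discretization of the switching mechanism (an exact simulation would draw the switching times by thinning), so it is meant only to convey the structure of the dynamics; all parameter values are illustrative:

```python
import random

def zigzag_time_average(dV, f, lam_ref=1.0, h=0.005, n_steps=600_000, seed=2):
    """Crude Euler-type discretization of the 1-D zig-zag process: the position
    moves with velocity v in {-1, +1}, and v flips at rate (v * V'(q))^+ + lam_ref.
    Returns the ergodic time average of f along the path."""
    rng = random.Random(seed)
    q, v, acc = 0.0, 1.0, 0.0
    for _ in range(n_steps):
        rate = max(v * dV(q), 0.0) + lam_ref
        if rng.random() < rate * h:      # flip probability over one short step
            v = -v
        q += v * h                       # deterministic linear motion between flips
        acc += f(q)
    return acc / n_steps

# Invariant measure of the position is nu* = N(0, 1), so nu*[q^2] = 1.
est = zigzag_time_average(dV=lambda q: q, f=lambda q: q * q)
```

The piecewise-linear trajectories and the two switching mechanisms (gradient-driven flips and refreshment) are exactly the ingredients whose hypocoercivity is quantified below.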
In this paper we leverage this approach to hypocoercivity to prove concentration inequalities (of Bernstein type) for finite-time ergodic averages of a function (observable) $f:\mathcal{X}\to\mathbb{R}$: \begin{align} F_T=\frac{1}{T}\int_0^T f(X_t)dt\,. \end{align} For reversible processes, or more generally for processes whose reversible part satisfies a Poincar\'e inequality, that is for {\em coercive} processes, concentration inequalities were obtained first in \cite{lezaud:hal-00940906} and then both simplified and greatly generalized in \cite{WU2000435,cattiaux_guillin_2008,Guillin2009,doi:10.1137/S0040585X97986667}; our approach relies heavily on the ideas developed in these works. Concentration inequalities are very useful for providing performance guarantees (e.g. confidence intervals) valid for all times $T$ (and which do not rely on the central limit theorem). In practice, algorithm performance is typically evaluated in terms of the asymptotic variance \begin{align} \sigma^2(f) \equiv \lim_{T \to \infty} T \operatorname{Var} \left( \frac{1}{T}\int_0^T f(X_t) \, dt\right); \end{align} we take here the alternative point of view of using concentration inequalities, thereby obtaining {\em non-asymptotic} performance guarantees. In a related manner, it has been advocated in \cite{DupuisSwapping1} that, to evaluate the performance of an MCMC algorithm, one consider the large deviation rate for the empirical measure itself, an approach used in \cite{RBS1} to analyze non-reversible perturbations of the overdamped Langevin equation. We prove the following non-asymptotic performance guarantee in Section \ref{sec:hypocoercive} (see Corollary \ref{cor:confidence-interval}). Here, and in the following, $(X_t,P^\mu)$ will denote an $\mathcal{X}$-valued Markov process with initial distribution $X_0 \sim \mu$ (i.e. $(X_0)_*P^\mu=\mu$), $E^\mu$ will be the expectation with respect to $P^\mu$, and $\|\cdot\|$ will denote the $L^2(\mu^*)$-norm.
\begin{theorem}{\bf(Non-asymptotic confidence intervals)}\label{thm:hypo-conf} Suppose that the Markov process $(X_t,P^\mu)$ satisfies the hypocoercive estimate \eqref{eq:hypocoercive-epsilon}. Then for any bounded observable $f$, any time $T>0$, and any confidence level $1-\delta \in (0,1)$ we have \begin{align} P^\mu\left( \left| \frac{1}{T}\int_0^Tf(X_t)dt-\mu^*[f]\right| \le r \right)\geq 1-\delta\,, \end{align} where \begin{align} r =\sqrt{2 v \frac{1}{T}\log\left(\frac{2N}{\delta}\right)} + b \frac{1}{T}\log\left(\frac{2N}{\delta}\right)\,, \end{align} with \begin{align} \label{eq:vbM} v=\frac{ (1+\epsilon) (1 - \frac{\epsilon^2}{4})}{1-\epsilon} \frac{2 \operatorname{Var}_{\mu^*}[f]}{\Lambda(\epsilon)}\,, \, b = \frac{(1+\epsilon)^2}{1-\epsilon}\frac{\|\widehat{f}\|_\infty}{\Lambda(\epsilon)}\,,\, N = \frac{\left\|\frac{d\mu}{d\mu^*}\right\| }{\sqrt{1-\epsilon}}\,. \end{align} Here $\widehat{f}=f-\mu^*[f]$, $\Lambda(\epsilon)$ is defined in \req{Lambda_m_def}, and $\epsilon$ must satisfy \req{eq:Lambda_pos}. \end{theorem} We also prove a robustness result for the dynamics with respect to model-form uncertainty. For such uncertainty quantification (UQ) bounds, we think of $(X_t,P^\mu)$, called the baseline model, as an imperfect representation of a ``true'' (or at least, more precise) alternative model. This alternative model may not be fully known, or it might be intractable (analytically or numerically), and so one may want to investigate how sensitive the results for the baseline model are to (not necessarily small) model perturbations. The next theorem provides such performance guarantees, generalizing the results in \cite{BRBMarkovUQ}, and is based on the general approach to uncertainty quantification developed in \cite{chowdhary_dupuis_2013,DKPP,KRW,GKRW}.
In this context, the goal is to control the bias \[ \widetilde{E}^{\widetilde{\mu}}\left[ \frac{1}{T}\int_0^Tf(\widetilde{X}_t)\, dt \right] - \mu^*[f]\,, \] where $(\widetilde{X}_t,\widetilde{P}^{\widetilde{\mu}})$ with $\widetilde X_0 \sim \widetilde{\mu}$ is the alternative model and $\widetilde{E}^{\widetilde{\mu}}$ is the expectation with respect to $\widetilde{P}^{\widetilde{\mu}}$. We denote by $P^\mu_T$ and $\widetilde{P}_T^{\widetilde{\mu}}$ the path-space distributions of the baseline and alternative models on the time window $[0,T]$ and prove the following result in Section \ref{sec:concentration} (see Theorem \ref{thm:UQ_mod_poincare}). \begin{theorem}{\bf(Uncertainty quantification bounds)}\label{thm:hypo-uq} Suppose that the baseline Markov process $X_t$ satisfies the hypocoercive estimate \eqref{eq:hypocoercive-epsilon} and $(\widetilde X_t,\widetilde{P}^{\widetilde{\mu}})$ is a stochastic process such that the path-space relative entropy satisfies \begin{align} R \big( \widetilde{P}_T^{\widetilde{\mu}}||P_T^{\mu^*}\big) <\infty\,. \end{align} Then we have \begin{align} \left| \widetilde{E}^{\widetilde\mu} \left[ \frac{1}{T}\int_0^Tf(\widetilde X_t) \, dt \right] - \mu^*[f] \right| \leq & \sqrt{ 2 v \eta_T}+ b\eta_T \,, \end{align} where $v$ and $b$ are given in \eqref{eq:vbM}, and \begin{align} \eta_T =\frac{1}{T} \left(\log((1-\epsilon)^{-1/2})+ R\big(\widetilde P^{\widetilde{\mu}}_T||P^{\mu^*}_T\big) \right)\,.\notag \end{align} \end{theorem} The remainder of this paper is organized as follows. In Section \ref{sec:hypocoercive} we review several examples of hypocoercive systems to which our results apply. There, we also give an overview of the hypocoercivity method of \cite{Dolbeault2009,Dolbeault2015}. This method is a crucial tool in the proofs of our new results, namely the concentration inequalities and UQ bounds outlined above; proofs of these are given in Section \ref{sec:concentration}.
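Once bounds on the constants $v$, $b$, and $N$ are available, the radius $r$ in Theorem \ref{thm:hypo-conf} is a one-line computation. The sketch below evaluates it for illustrative (not derived) values of the constants:

```python
import math

def confidence_radius(T, delta, v, b, N):
    """r = sqrt(2 v log(2N/delta) / T) + b log(2N/delta) / T,
    the radius of the non-asymptotic confidence interval."""
    eta = math.log(2.0 * N / delta) / T
    return math.sqrt(2.0 * v * eta) + b * eta

# illustrative constants; in practice v, b, N are computed from eps and Lambda(eps)
r_short = confidence_radius(T=1.0e3, delta=0.05, v=2.0, b=1.0, N=1.5)
r_long = confidence_radius(T=1.0e5, delta=0.05, v=2.0, b=1.0, N=1.5)
```

The radius shrinks like $T^{-1/2}$ for large $T$, with the Bernstein correction term $b\eta$ decaying like $T^{-1}$.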
\section{Hypocoercive MCMC samplers}\label{sec:hypocoercive} In this section we introduce several examples of popular hypocoercive samplers for which the modified Poincar\'e inequality \eqref{eq:hypocoercive-epsilon} has been proven by following the strategy of \cite{Dolbeault2009,Dolbeault2015}. In particular we consider several examples of partially deterministic MCMC samplers studied in the recent paper \cite{ADNR2018}. We will refer the reader to the original papers for technical details and content ourselves with a brief, and at times somewhat informal, overview: Consider a probability measure $d \nu^*(q)=Z^{-1} e^{-\beta V(q)}dq$ on $\mathbb{R}^d$ to be sampled, and for which a Poincar\'e inequality holds, i.e., there exists a constant $C_{\nu^*}>0$ such that for all $g \in L^2(\nu^*)$ \begin{align} \| \nabla_q g \|^2_{L^2(\nu^*)} \ge C_{\nu^*} \operatorname{Var}_{\nu^*}[g]\,. \end{align} See e.g. \cite{bakry2008} for conditions on $V$ which imply a Poincar\'e inequality. Define the product measure $\mu^*= \nu^*\times\rho^*$ on the extended phase space $\mathbb{R}^d \times E$ and the projection $\Pi f = \int f d\rho^*$. We consider a Markov process $X_t=(Q_t,P_t)$ on $\mathbb{R}^d\times E$ with invariant measure $\mu^*$ and assume standard smoothness and growth conditions on $V$ to ensure that $X_t$ induces a strongly continuous semi-group $\mathcal{P}_t$ on $L^2(\mu^*)$ with generator $A$, and with the time-reversed process having generator given by the adjoint $A^\dagger$ of $A$ on $L^2(\mu^*)$. We decompose $A$ into symmetric and antisymmetric parts: \begin{equation} A = S+T, \textrm{ with } S= \frac{A + A^\dagger}{2} \textrm{ and } T= \frac{A - A^\dagger}{2}\,.
\end{equation} The following four examples fit within this framework and will be used to illustrate the utility of our results; see \cite{ADNR2018} for a proof of hypocoercivity of a more general class of models which covers all examples considered here, as well as \cite{StoltzTrs2018,Iacobucci2017,StoltzVdE} for further examples (some of them being non-equilibrium as well). \begin{enumerate} \item{\bf (Langevin and modified Langevin equations)} The (underdamped) Langevin equation is the system of stochastic differential equations on $\mathbb{R}^{2d}$ given by \begin{align} &dQ_t=\frac{P_t}{m}dt,\,\,\,\,\, dP_t=\left(-\nabla V(Q_t)-\gamma \frac{P_t}{m}\right)dt+\sqrt{\frac{2\gamma}{\beta}} dW_t, \end{align} where $m>0$ is the mass, $\beta>0$ is proportional to the inverse temperature, $\gamma>0$ is the drag coefficient, $W_t$ is a Wiener process, and $V:\mathbb{R}^d\to\mathbb{R}$ is a smooth potential. The appropriate $\rho^*$ is a Gaussian measure with mean $0$ and covariance matrix $(m/\beta) I$. The generator $A$ is an extension of the differential operator \begin{equation} A = \underbrace{ \frac{\gamma}{\beta} \Delta_p - \gamma \left(\frac{p}{m}\right)^T \nabla_p}_{=S} + \underbrace{ \left( \frac{p}{m}\right)^T \nabla_q-\nabla V(q)^T\nabla_p}_{=T}\,. \end{equation} This is the model originally considered in \cite{Dolbeault2009,Dolbeault2015}, and several modifications of this model have also been shown to be hypocoercive. For example \cite{StoltzTrs2016,StoltzTrs2018} consider a Langevin equation with a modified (non-quadratic) kinetic energy, so that $\rho^*$ is not Gaussian and the diffusion need not be hypoelliptic. Further generalizations of the Langevin equation with general $\rho^*$ are also considered in \cite{ADNR2018}.
\item{\bf (Hybrid Hamiltonian Monte Carlo)} In this randomized version of Hamiltonian Monte Carlo introduced by \cite{Duane1987}, the system follows the Hamiltonian equations of motion with Hamiltonian $V(q) + p^2/2m$ for an exponentially distributed amount of time, after which the momentum is resampled from the Gaussian measure $\rho^*$. The generator has the form \begin{align} A &= \underbrace{ \lambda(\Pi - I )}_{=S} + \underbrace{ \left( \frac{p}{m}\right)^T \nabla_q-\nabla V^T\nabla_p}_{=T}\,. \end{align} \item{\bf (Bouncy Particle Sampler)} In this sampler, introduced originally in \cite{PetersdeWith2012}, a particle starting at time $t_0$ in the state $(q_0,p_0)$ moves freely, $p(t)=p(t_0)$ and $q(t) =q(t_0) + (t-t_0) \frac{p(t_0)}{m}$, up to the random time $t_0 + \tau$. The updating time $\tau$ is governed by two mechanisms: either the velocity of the particle is refreshed, i.e., $p$ is sampled from the Gaussian $\rho^*$ (this occurs at rate $\lambda$), or the particle ``bounces'', i.e., it undergoes a Newtonian elastic collision on the hyperplane tangential to the gradient of the energy and the momentum is updated according to the rule \begin{equation} R(q) p = p - 2\frac{ p^T \nabla V(q) }{ \| \nabla V(q)\|^2} \nabla V(q) \,. \end{equation} The time at which this happens is governed by an inhomogeneous Poisson process of intensity $\lambda(q,p) = \left[\left( \frac{p}{m}\right)^T \nabla V(q)\right]^+$. If we set $R f( q,p) = f(q, R(q) p)$ then the generator is \begin{align} A & = \left(\frac{p}{m}\right)^T \nabla_q + \left[\left( \frac{p}{m}\right)^T \nabla V(q)\right]^+ (R - I) + \lambda (\Pi - I) \,, \end{align} and elementary computations show that $\mu^*=\nu^*\times\rho^*$ is invariant and \begin{align} S &= \, \frac{1}{2}\left|\left( \frac{p}{m}\right)^T \nabla V(q)\right| (R - I) + \lambda( \Pi - I ) \,, \\ T&= \left( \frac{p}{m}\right)^T \nabla_q + \frac{1}{2}\left( \frac{p}{m} \right)^T \nabla V(q) (R - I) \,.
\end{align} \item{\bf (Zig-Zag Sampler)} In the zig-zag sampler, contrary to the other examples, the velocity is discrete; for example, $\rho^*$ is the uniform distribution on $\{-1,1\}^d$. As in the bouncy particle sampler, the trajectories are piecewise linear. At updating times, the (randomly chosen) $i$th component of the velocity is reversed; see \cite{BFR2019} for a more detailed discussion. The generator of the Markov process has the form \begin{equation} A = v^T \nabla_q + \sum_{i=1}^d \left[ v_i \partial_{q_i} V(q)\right]^+ (R_i - I) + \lambda (\Pi - I) \,, \end{equation} where $R_i f(q,v) = f(q, v - 2 (e_i^T v) e_i)$ (with $e_i$ the standard basis vector in $\mathbb{R}^d$). A computation similar to the one for the bouncy particle sampler shows that \begin{align} S &= \, \frac{1}{2}\sum_{i=1}^d \left| v_i \partial_{q_i} V (q) \right| (R_i - I) + \lambda( \Pi - I ) \,, \\ T&= v^T \nabla_q + \frac{1}{2}\sum_{i=1}^d v_i \partial_{q_i} V(q) (R_i - I) \,. \end{align} \end{enumerate} Note that for all the examples considered, it is easy to verify that one has the identity \begin{equation}\label{eq:TPi} T \Pi = \left(\frac{p}{m}\right)^T \nabla_q \Pi \end{equation} (with the convention that $p/m=v$ for the zig-zag sampler). This fact is used to establish the following functional analytic estimates (see \cite{Dolbeault2009,Dolbeault2015}) which are the basis for the hypocoercive estimates (for the convenience of the reader the proof is in Appendix \ref{app:B}). \begin{proposition}\label{thm:elem} Define \begin{equation} B= (I + (T \Pi)^* (T \Pi))^{-1} (- T \Pi)^* \,. \end{equation} The operators $S$, $T$, and $B$ have the following properties: \begin{enumerate} \item $B1 = B^\dagger1 = 0$, \item $S = (I -\Pi) S (I- \Pi)$, \item $T \Pi = (I- \Pi) T \Pi$, \item $B = \Pi B = \Pi B (I - \Pi)$ and $B$ and $TB$ are bounded operators with $\|Bf \| \le \frac{1}{2} \| (I - \Pi) f\| $ and $\|T B f \| \le \| (I - \Pi) f\|$.
\end{enumerate} \end{proposition} Next, define the family of modified scalar products on $L^2(\mu^*)$, \begin{align}\label{mod_norm_def} \langle f , g\rangle_{\epsilon}=\langle f, g\rangle + \epsilon\langle f ,(B+B^*) g\rangle\,, \,\,\,\epsilon \in (0,1)\,. \end{align} As $\|B\|\le 1/2$, $\langle \cdot , \cdot \rangle_{\epsilon}$ is an inner product which is equivalent to $\langle \cdot , \cdot \rangle$. As a consequence of Proposition \ref{thm:elem} one obtains for suitable $f$ with $\mu^*[f]=0$: \begin{align}\label{eq:decomp-hypo} \langle A f , f \rangle_\epsilon =& \langle S f , f \rangle + \epsilon \left[ \langle B S f,f \rangle + \langle B T f,f \rangle + \langle S B f , f \rangle - \langle T B f , f \rangle \right] \\ =& \langle (S - \epsilon TB) (I-\Pi)f , (I-\Pi) f \rangle + \epsilon \langle B T \Pi f , \Pi f \rangle \notag \\ & + \epsilon\left[ \langle B S (I-\Pi) f , \Pi f \rangle + \langle B T (I-\Pi) f , \Pi f \rangle \right] \,, \notag \end{align} where we have used that $SB=0$. The various terms in \eqref{eq:decomp-hypo} can be bounded as follows: \begin{enumerate} \item The term $\langle (S-\epsilon TB) (I-\Pi)f , (I-\Pi) f\rangle$ is controlled by the dissipative term in the $p$-variables (since $TB$ is bounded) and it is not difficult to see that in the cases considered here we have a Poincar\'e inequality in the $p$-variables (averaged over $\nu^*$): \begin{align}\label{Poincare-p} \langle f , -S f \rangle \ge \lambda_p \|(I-\Pi) f\|^2 \end{align} for some $\lambda_p>0$. For the Langevin equation $\lambda_p=\frac{\gamma}{\beta}$ is the spectral gap of the Ornstein-Uhlenbeck process, while for the other examples we can take $\lambda_p=\lambda$ from the velocity resampling mechanism.
\item For the term $\langle B T \Pi f , \Pi f \rangle$, note that using \eqref{eq:TPi} together with the Poincar\'e inequality for the measure $\nu^*$, we have \begin{align} \langle f , ((T\Pi)^\dagger (T\Pi)) f \rangle = \Pi\left(\frac{p^2}{m^2}\right) \|\nabla_q \Pi f\|^2 \ge \Pi\left(\frac{p^2}{m^2}\right) C_{\nu^*} \|\Pi f\|^2 \,, \end{align} where $C_{\nu^*}$ is the Poincar\'e constant for the measure $\nu^*$. Then, since $ - BT\Pi = (I+(T\Pi)^\dagger T\Pi)^{-1} (T\Pi)^\dagger T\Pi$, by functional calculus we have \begin{align} \langle -BT\Pi f, \Pi f \rangle \ge \left( 1 - \left(1 + \Pi\left(\frac{p^2}{m^2}\right) C_{\nu^*}\right)^{-1} \right)\|\Pi f \|^2 \equiv \lambda_q \|\Pi f \|^2\,. \end{align} \item For the off-diagonal terms it is enough to show that they are bounded, i.e., \begin{align} \|BT(I - \Pi)f\| + \|BS(I - \Pi)f\| \le R_0\|(I- \Pi)f\| \,. \end{align} The bound on the first term is the technical part of the proof; for the Langevin equation this is proved in \cite{Dolbeault2015}, and it is generalized in \cite{ADNR2018} to the other samplers (see Lemma 29 and Lemma 32 in particular and the bound in Section 3.3, as well as the bound in Lemma 11, which is specific to the zig-zag sampler).
\end{enumerate} Based on these estimates, one has constants $\lambda_q,\lambda_p,R_0\geq 0$ such that for any $f$ with $\mu^*[f]=0$: \begin{align}\label{eq:matrix-hypo} \langle -A f , f \rangle_\epsilon \,&\ge \,\begin{bmatrix} \|\Pi f\|\\ \|(I-\Pi)f\| \end{bmatrix}^T\begin{bmatrix} \epsilon\lambda_q &-\epsilon R_0/2\\ -\epsilon R_0/2 & \lambda_p-\epsilon \end{bmatrix}\begin{bmatrix} \|\Pi f\|\\ \|(I-\Pi)f\| \end{bmatrix}\\ &\ge \Lambda(\epsilon) \operatorname{Var}_{\mu^*}[f] \notag \,, \end{align} where \begin{align}\label{Lambda_m_def} \Lambda(\epsilon)\equiv\frac{(\lambda_q-1)\epsilon+\lambda_p -\sqrt{\left(( \lambda_q+1)\epsilon-\lambda_p \right)^2+\epsilon^2 R_0^2}}{2} \end{align} is the smallest eigenvalue of the matrix in \req{eq:matrix-hypo}; $\Lambda(\epsilon)$ is positive if \begin{align}\label{eq:Lambda_pos} 0<\epsilon < 4 \lambda_q \lambda_p /( 4 \lambda_q + R_0^2). \end{align} In the next section, we show how the Poincar{\'e} inequality (\ref{eq:matrix-hypo}) for the modified inner product (\ref{mod_norm_def}) can be used to derive non-asymptotic confidence intervals and UQ bounds for hypocoercive systems, having in mind the four examples outlined above. \section{Concentration inequalities and performance guarantees via Feynman-Kac semigroups}\label{sec:concentration} In this section, we prove our main new results for hypocoercive systems: \begin{enumerate} \item A concentration inequality and corresponding non-asymptotic confidence intervals in Section \ref{sec:conc_ineq}. \item UQ bounds in Section \ref{sec:UQ}. \end{enumerate} The former are obtained by an adaptation of the technique from \cite{WU2000435} and \cite{doi:10.1137/S0040585X97986667} to hypocoercive systems, which we first summarize. \subsection{Background}\label{sec:Kac_background} As in \cite{WU2000435,doi:10.1137/S0040585X97986667}, we will prove Bernstein-type concentration inequalities. The following related elementary facts will be used repeatedly (see e.g.
the discussion of sub-gamma random variables in Chapter 2 in \cite{Boucheron:2016}): Consider the convex function $\Psi_{v,b}$ given by \begin{align} \Psi_{v,b}(\lambda) & = \frac{\lambda^2 v}{2(1-\lambda b)} \quad \textrm { for } 0 \le \lambda < 1/b \,. \label{eq:psi} \end{align} Its (one-sided) Legendre transform $\Psi_{v,b}^*$ is \begin{align} \Psi_{v,b}^*(r) & =\sup_{0\le \lambda < 1/b}\left\{\lambda r - \Psi_{v,b}(\lambda)\right\} = \frac{2 r^2}{v \left(1 + \sqrt{1 + \frac{2br}{v} }\right)^2} \quad \textrm {for } r \ge 0 \label{eq:psi*} \end{align} and the inverse of the Legendre transform $\Psi_{v,b}^*$ is \begin{align} (\Psi_{v,b}^*)^{-1}(\eta) & = \inf_{\lambda>0} \left\{ \frac{\Psi_{v,b}(\lambda)+ \eta}{\lambda} \right\} \,=\, \sqrt{2 v \eta } + b \eta \quad \textrm {for } \eta \ge 0 \,. \label{eq:psi*-1} \end{align} Now we summarize the method of \cite{WU2000435,doi:10.1137/S0040585X97986667}:\\ Let $\mathcal{X}$ be a Polish space and suppose we have time-homogeneous, $\mathcal{X}$-valued, c\`adl\`ag Markov processes $ (\Omega,\mathcal{F},\mathcal{F}_t,X_t,P^x)$, $x\in\mathcal{X}$, with initial distributions $(X_0)_*P^x=\delta_x$ for all $x$. For an initial measure $\mu$, write $P^\mu = \int P^x d\mu(x)$. We assume that $\mu^*$ is an invariant ergodic measure on $\mathcal{X}$ and consider the real Hilbert space $L^2(\mu^*)$ with scalar product $\langle \cdot, \cdot \rangle$. We consider the strongly continuous Markov semigroup $\mathcal{P}_t:L^2(\mu^*)\to L^2(\mu^*)$ given by \begin{align} \mathcal{P}_t f(x)=E^x[f(X_t)] \end{align} whose generator we denote by $(A,D(A))$. More generally, for a bounded measurable $V:\mathcal{X}\to\mathbb{R}$, define the Feynman-Kac semigroup $\mathcal{P}_t^V:L^2(\mu^*)\to L^2(\mu^*)$ by \begin{align} \mathcal{P}_t^V[f](x)=E^x\left[f(X_t)e^{\int_0^t V(X_s)ds}\right], \end{align} which is a strongly continuous semigroup with generator $(A+V,D(A))$.
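The closed forms in \eqref{eq:psi*} and \eqref{eq:psi*-1} are easy to sanity-check numerically. The sketch below compares the stated expression for $\Psi^*_{v,b}$ with a brute-force maximization over $\lambda$ and verifies that $(\Psi^*_{v,b})^{-1}$ really inverts it; the values of $v$, $b$, and $r$ are arbitrary:

```python
import math

def Psi(lam, v, b):
    # Psi_{v,b}(lambda) = lambda^2 v / (2 (1 - lambda b)),  0 <= lambda < 1/b
    return lam * lam * v / (2.0 * (1.0 - lam * b))

def Psi_star(r, v, b):
    # closed form of the one-sided Legendre transform of Psi_{v,b}
    return 2.0 * r * r / (v * (1.0 + math.sqrt(1.0 + 2.0 * b * r / v)) ** 2)

def Psi_star_inv(eta, v, b):
    # closed form of the inverse: sqrt(2 v eta) + b eta
    return math.sqrt(2.0 * v * eta) + b * eta

v, b, r = 1.3, 0.4, 0.7
# brute-force sup_{0 <= lambda < 1/b} { lambda r - Psi(lambda) } on a fine grid
n = 200_000
brute = max(k / n / b * r - Psi(k / n / b, v, b) for k in range(n))
```

The same three functions reappear below: the concentration bound is stated via $\Psi^*$, and the confidence radius via its inverse.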
If we set \begin{align}\label{kappa_def} \kappa(V)\equiv\sup\left\{\langle (A+V)g,g\rangle:g\in D(A), \|g\| =1\right\} \end{align} then, by definition (and as long as $\kappa(V)<\infty$), for any $g \in D(A)$ we have \begin{align} \langle (A + V - \kappa(V)) g \,,\, g \rangle \le 0 \end{align} and thus by the Lumer-Phillips theorem (see e.g. Chapter IX in \cite{YosidaFA}) the semigroup generated by $A + V - \kappa(V)$ is a contraction semigroup on $L^2(\mu^*)$. This implies that \begin{align} \label{eq:Kac_semigroup_bound} \| \mathcal{P}_t^V \| \le e^{t \kappa(V)}\,,\,\,\,t\geq 0 \end{align} (note that \req{eq:Kac_semigroup_bound} also trivially holds if $\kappa(V)=\infty$). Therefore, by the Chernoff bound, we have \begin{align}\label{L2_concentration_ineq} P^\mu\left(\frac{1}{T}\int_0^T f(X_t)dt-\mu^*[f]>r\right) & \le \inf_{\lambda >0} e^{-\lambda T r} E^\mu\left[e^{\lambda\int_0^T \widehat{ f}(X_t) dt}\right] \notag \\ &\le \inf_{\lambda >0} e^{-\lambda T r} \int \mathcal{P}_T^{\lambda\widehat f}(1) d\mu \notag \\ & \le \inf_{\lambda>0} e^{-\lambda T r} \left\| \frac{d\mu}{d\mu^*}\right\| \left\| \mathcal{P}_T^{\lambda\widehat f}\right\| \notag \\ &\leq \left\|\frac{d\mu}{d\mu^*}\right\| e^{-T\sup_{\lambda>0}\{ \lambda r- \kappa(\lambda\widehat{f})\}} \,. \end{align} This basic insight, first noted in \cite{WU2000435}, can also be extended to unbounded $V$. From here, one can obtain explicit concentration inequalities by further bounding $\kappa(\lambda\widehat{f})$ (which contains the Dirichlet form $\langle A g \,,g\rangle$) using $L^2(\mu^*)$-functional inequalities, such as a Poincar{\'e} inequality (or $\log$-Sobolev inequalities, Lyapunov functions, and so on); see \cite{WU2000435,lezaud:hal-00940906,cattiaux_guillin_2008,Guillin2009,doi:10.1137/S0040585X97986667} for many such examples.
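The contraction bound \eqref{eq:Kac_semigroup_bound} can be checked directly in finite dimensions, where everything reduces to linear algebra. The sketch below does this for an illustrative two-state Markov jump process: it computes $\kappa(V)$ as the largest eigenvalue of the symmetric part of $A+V$ in $L^2(\mu^*)$ and verifies $\|\mathcal{P}_t^V\|\le e^{t\kappa(V)}$ numerically; the rates and potential values are arbitrary:

```python
import math

# Illustrative two-state Markov jump process with generator Q (rows sum to 0)
a, b = 1.0, 2.0
Q = [[-a, a], [b, -b]]
pi = [b / (a + b), a / (a + b)]          # invariant measure mu*
V = [0.5, -0.3]                          # bounded potential, one value per state
AV = [[Q[0][0] + V[0], Q[0][1]], [Q[1][0], Q[1][1] + V[1]]]   # A + V

# similarity transform turning the L^2(mu*) norm into the Euclidean norm
w = [math.sqrt(p) for p in pi]
M = [[AV[i][j] * w[i] / w[j] for j in range(2)] for i in range(2)]

def max_eig_sym_part(X):
    # largest eigenvalue of (X + X^T)/2 for a 2x2 matrix: this is kappa(V)
    s = 0.5 * (X[0][1] + X[1][0])
    tr = X[0][0] + X[1][1]
    det = X[0][0] * X[1][1] - s * s
    return 0.5 * (tr + math.sqrt(tr * tr - 4.0 * det))

kappa = max_eig_sym_part(M)

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def expm(X, t):
    # exp(t X) via scaling and squaring with a short Taylor series
    s = 1024
    A2 = [[X[i][j] * t / s for j in range(2)] for i in range(2)]
    E = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, 15):
        term = [[x / k for x in row] for row in mat_mul(term, A2)]
        E = [[E[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    for _ in range(10):
        E = mat_mul(E, E)
    return E

def spectral_norm(X):
    # largest singular value of a 2x2 matrix
    G = mat_mul([[X[0][0], X[1][0]], [X[0][1], X[1][1]]], X)   # X^T X
    tr = G[0][0] + G[1][1]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    return math.sqrt(0.5 * (tr + math.sqrt(max(tr * tr - 4.0 * det, 0.0))))

# Lumer-Phillips: the Feynman-Kac semigroup norm never exceeds exp(t kappa(V))
ok = all(spectral_norm(expm(M, t)) <= math.exp(t * kappa) + 1e-9 for t in (0.5, 1.0, 2.0))
```

The weighted similarity transform is the finite-dimensional analogue of working in $L^2(\mu^*)$ rather than with the unweighted matrix norm.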
\subsection{Concentration inequalities}\label{sec:conc_ineq} In the hypocoercive examples considered in this paper, the generator is non-reversible and there is no Poincar\'e inequality with respect to the $L^2(\mu^*)$-scalar product but, as discussed in Section \ref{sec:hypocoercive}, there is a Poincar\'e inequality for an equivalent modified scalar product. In the following theorem, we show that one still obtains concentration inequalities in this more general setting. \begin{theorem}\label{thm:conc-star}{\bf (Concentration inequalities).} Let $ (\Omega,\mathcal{F},\mathcal{F}_t,X_t,P^x)$, $x\in\mathcal{X}$, be $\mathcal{X}$-valued c\`adl\`ag Markov processes with invariant ergodic measure $\mu^*$. Let $\langle \cdot,\cdot\rangle_{\#}$ be an inner product on $L^2(\mu^*)$ such that \begin{enumerate} \item The induced norms $\|\cdot\|_\#$ and $\| \cdot\|$ are equivalent: there exist $0<c\leq C<\infty$ such that $c\|\cdot\| \leq \|\cdot\|_\#\leq C\| \cdot \|$. \item For all $g\in L^2(\mu^*)$, we have $\langle g,1\rangle_\#=\langle g,1\rangle$. \item A Poincar\'e inequality holds for $\langle\cdot,\cdot\rangle_\#$, i.e., there exists $\alpha>0$ such that \begin{align} \|g\|_\#^2\leq \alpha\langle -Ag,g\rangle_\# \,\,\, \text{ for all $g\in D(A)$ with $\mu^*[g]=0$.} \end{align} \end{enumerate} For bounded $f$, let $M_{\widehat{f}}$ denote the multiplication operator by $\widehat{f}=f - \mu^*[f]$. We have the following concentration inequalities for $T>0$: \begin{align}\label{concentration_ineq_mod} &P^\mu\left(\pm \left[ \frac{1}{T}\int_0^T f(X_t)dt-\mu^*[f]\right]\geq r\right)\leq c^{-1}\left\|\frac{d\mu}{d\mu^*}\right\| e^{-T \Psi^*_{v_\pm, b_\pm}(r)} \,, \end{align} where $\Psi^*_{v, b}$ is given in \eqref{eq:psi*} and \begin{align}\label{eq:v+b+} v_\pm= 2 \alpha \left\| \frac{1}{2}(M_{\pm \widehat{ f}}+M_{\pm \widehat{f}}^\dagger)1\right\|_\#^2\,,\,\, \quad b_\pm = \alpha \max\left\{0, \sup_{\|g\|_\#=1}\langle M_{\pm \widehat{f}}g,g\rangle_\# \right\}\,.
\end{align} \end{theorem} We emphasize that $M_{\widehat{f}}^\dagger$ is the adjoint with respect to the $\langle\cdot,\cdot\rangle_\#$-inner product. Also $v_\pm$ and $b_\pm$ can be replaced by any upper bounds on these quantities, for example in terms of the $L^2(\mu^*)$-norm (see the calculation for the hypocoercive examples in Section \ref{sec:app} below). \begin{proof} The proof is a modification of the strategy used in \cite{doi:10.1137/S0040585X97986667}. We start as in \req{L2_concentration_ineq} but use the Lumer-Phillips theorem for the $\| \cdot \|_\#$ norm instead since, by equivalence of the norms, $\mathcal{P}^{\lambda \widehat{f}}_t$ is also a strongly continuous semigroup on $(L^2(\mu^*),\|\cdot\|_\#)$ with the same generator. Using the Chernoff bound, the equivalence of the norms and the fact that, by Assumption 2, $\|1\|_\#^2 = \langle 1, 1 \rangle_\# =\langle 1, 1 \rangle =1$, we obtain \begin{align}\label{Markov_ineq} P^\mu\left(\frac{1}{T}\int_0^T f(X_t)dt \geq \mu^*[f]+r\right) & \leq \inf_{\lambda >0} e^{ -\lambda Tr} E^\mu\left[ e^{\lambda \int_0^T \widehat{f}(X_t)dt}\right] \notag \\ & = \inf_{\lambda >0} e^{ -\lambda Tr} \int \mathcal{P}^{\lambda \widehat{f}}_T(1)\frac{d\mu}{d\mu^*}d\mu^*\notag\\ & \leq \inf_{\lambda > 0} e^{ -\lambda Tr} \left\|\frac{d\mu}{d\mu^*}\right\| \left\| \mathcal{P}^{\lambda \widehat{f}}_T(1)\right\| \notag\\ & \leq \inf_{\lambda >0} e^{ -\lambda Tr} \left\|\frac{d\mu}{d\mu^*}\right\| c^{-1} \left\|\mathcal{P}^{\lambda \widehat{f}}_T\right\|_\# \|1\|_\#\notag \\ & \leq c^{-1} \left\|\frac{d\mu}{d\mu^*}\right\| e^{-T \sup_{\lambda> 0} \{ \lambda r - \kappa_{\#}(\lambda \widehat{f})\}}\,, \end{align} where, by the Lumer-Phillips Theorem applied to $L^2(\mu^*)$ with the scalar product $\langle \cdot, \cdot \rangle_\#$, \begin{align}\label{kappa_star_def} &\kappa_{\#}( \lambda \widehat{f} )\equiv\sup\left\{\langle (A+ \lambda \widehat{f})g,g\rangle_{\#} : g\in D(A),\|g\|_\#=1\right\}.
\end{align} Next we use the following lemma proved in \cite{BRBMarkovUQ}, which is a generalization of a result in \cite{doi:10.1137/S0040585X97986667}, which itself was a simplification of the argument originally used in \cite{lezaud:hal-00940906}. For completeness, the proof is given in the appendix. \begin{lemma}\label{perturb_lemma} Let $H$ be a real Hilbert space, $A:D(A)\subset H\to H$ a linear operator, and $M:H\to H$ a bounded linear operator. Assume there exists $\alpha>0$ and $x_0\in H$ with $\|x_0\|=1$ such that \begin{align} \langle Mx_0,x_0\rangle=0\,\,\,\text{ and }\,\,\, \langle Ax,x\rangle \leq - \alpha^{-1} \|P^\perp x\|^2 \end{align} for all $x\in D(A)$, where $P^\perp$ is the orthogonal projector onto $x_0^\perp$. Then \begin{align} \sup_{x\in D(A),\|x\|=1} \langle (A+\lambda M)x,x\rangle \leq\frac{\lambda^2 \alpha V}{1- \lambda \alpha K} = \Psi_{2\alpha V, \alpha K}(\lambda) \end{align} for $0\le \lambda < 1/\alpha K$, where \begin{align} V = \left\| \frac{1}{2}(M+M^\dagger)x_0\right\|^2 \,, \quad K=\max\left\{0, \sup_{\|y\|=1} \langle My,y\rangle\right\}\,. \end{align} \end{lemma} To use this result we take $x_0=1$, $A$ to be the generator on $(L^2(\mu^*),\|\cdot\|_\#)$, and $M=M_{\widehat{f}}$. By Assumption 2, we have $\langle Mx_0\,,\,x_0\rangle_\#=\langle \widehat{f}\,, 1 \rangle_\# =\langle \widehat{f}\,, 1 \rangle =0$. This assumption also implies that the projection onto $1^\perp$ (for both scalar products) is given by $P^\perp f = \widehat{f}$ and \begin{align} \langle Ag,1\rangle_{\#}=\langle Ag,1\rangle=0,\,\,\, g\in D(A). 
\end{align} Combined with Assumption 3 and the fact that $A[1]=0$ we get \begin{align} \langle Ag,g\rangle_\#=&\langle A\widehat{g},\widehat{g}\rangle_\# \le -\alpha^{-1}\|\widehat{g}\|^2_\#=-\alpha^{-1}\|P^\perp g\|^2_\#,\notag \end{align} and thus we can apply Lemma \ref{perturb_lemma} to obtain \begin{align} \label{eq:bernstein1} \kappa_\#(\lambda\widehat{f})=\sup_{g\in D(A),\|g\|_\#=1}\langle (A+\lambda \widehat{f})g,g\rangle_\# \leq \Psi_{v_+,b_+}(\lambda) \end{align} for all $0\leq \lambda< 1/b_+$, where \begin{align} v_+=2 \alpha \left\| \frac{1}{2}(M_{\widehat{f}}+M_{\widehat{f}}^\dagger)1\right\|_\#^2\,,\,\,\,b_+ = \alpha \max\left\{0, \sup_{\|g\|_\#=1}\langle M_{\widehat{f}}g,g\rangle_\# \right\} \end{align} (as was given in \eqref{eq:v+b+}). Therefore \begin{align} P^\mu\left(\frac{1}{T}\int_0^T f(X_t)dt \geq \mu^*[f]+r\right) & \le c^{-1} \left\|\frac{d\mu}{d\mu^*}\right\| e^{ - T \sup_{0\leq\lambda<1/b_+}\{ \lambda r - \Psi_{v_+,b_+}(\lambda)\}} \\ & = c^{-1} \left\|\frac{d\mu}{d\mu^*}\right\| e^{ - T \Psi^*_{v_+,b_+}(r)} \,.\notag \end{align} The lower bound is obtained by replacing $f$ by $-f$ and this concludes the proof. \end{proof} As an immediate corollary we obtain a non-asymptotic confidence interval. \begin{corollary}\label{cor:confidence-interval}{\bf (Confidence intervals).} Under the same assumptions as in Theorem \ref{thm:conc-star}, given a time $T$ and a confidence level $0 < 1-\delta < 1$ we have \begin{align}\label{eq:conf_interval} P^\mu\left(\frac{1}{T}\int_0^Tf(X_t)dt-\mu^*[f]\in(-r_-,r_+)\right)\geq 1-\delta \end{align} where \begin{align}\label{eq:rpm} r_\pm =\sqrt{2 v_\pm \frac{1}{T}\log\left(\frac{2N}{\delta}\right)} + b_\pm \frac{1}{T}\log\left(\frac{2N}{\delta}\right)\,, \end{align} with $N = c^{-1}\left\|\frac{d\mu}{d\mu^*}\right\|$ and $v_\pm$ and $b_\pm$ given in \eqref{eq:v+b+}. 
\end{corollary} \begin{proof} Define $\eta= \frac{1}{T} \log\left( \frac{2N}{\delta}\right)$ (note that $N\geq 1$ follows from Assumptions 1 and 2 of Theorem \ref{thm:conc-star}), so that $r_\pm = (\Psi^*_{v_\pm,b_\pm})^{-1}(\eta)$, with $r_\pm$ as in \eqref{eq:rpm}. Using $r=r_\pm$ in the concentration bound in Theorem \ref{thm:conc-star} we find \begin{align} P^\mu\left(\pm\left[\frac{1}{T}\int_0^Tf(X_t)dt-\mu^*[f]\right] \geq r_\pm\right) \le N e^{- T \eta}=\frac{\delta}{2}. \end{align} The result (\ref{eq:conf_interval}) then follows from a union bound. \end{proof} \subsection{Robustness bounds on steady state bias due to model form uncertainty}\label{sec:UQ} Following the recent results in \cite{GKRW,BRBMarkovUQ}, we also use concentration inequalities to obtain bounds on the bias of the expectation of ergodic averages when the process itself is subject to (model-form) uncertainty. We think of the Markov process $(X_t,P^\mu)$ considered in Section \ref{sec:Kac_background} as the baseline process and consider an alternative stochastic process $(\widetilde X_t,\widetilde{P}^{\widetilde{\mu}})$ with initial distribution $(\widetilde{X}_0)_*\widetilde{P}^{\widetilde{\mu}}=\widetilde{\mu}$ and let $\widetilde E^{\widetilde\mu}$ be the associated expectation. \begin{remark} The requirements on the alternative process are very minimal. In particular, we are {\em not} assuming $(\widetilde{X}_t,\widetilde P^{\widetilde\mu})$ is a Markov process. \end{remark} We will compare the two processes using relative entropy; we assume absolute continuity of the path-space distributions on finite time windows $[0,T]$, i.e., $\widetilde{P}^{\widetilde{\mu}}_T\ll P^\mu_T$, and also assume the relative entropy is finite: \begin{align}\label{eq:rel_ent_T} R\big(\widetilde{P}^{\widetilde{\mu}}_T||P^{\mu}_T\big) < \infty \,.
\end{align} See the supplementary material to \cite{DKPP} for a collection of techniques that can be used to bound the path-space relative entropy (\ref{eq:rel_ent_T}) for various classes of alternative models. Given an observable $f$ we consider the ergodic averages \begin{align}\label{eq:F_T} \widetilde{F}_T= \frac{1}{T}\int_0^T f(\widetilde{X}_t) dt \,,\,\,\,\, F_T= \frac{1}{T}\int_0^T f(X_t) dt\,, \end{align} and are interested in bounding the bias between the baseline and the alternative processes: \begin{align} \textrm{\bf Bias:} \quad \widetilde{E}^{\widetilde{\mu}}[\widetilde{F}_T] - E^{\mu}[F_T] \,. \end{align} \begin{theorem}\label{thm:UQ_mod_poincare}{\bf(Uncertainty Quantification bounds).} Let $(X_t,P^x)$, $x\in\mathcal{X}$, be a family of Markov processes satisfying the assumptions of Theorem \ref{thm:conc-star}, $\mu$ be an initial distribution, and $(\widetilde{X}_t,\widetilde{P}^{\widetilde{\mu}})$ be an alternative process with $R(\widetilde{P}^{\widetilde{\mu}}_T||P^{\mu}_T) < \infty$.
Then for any bounded measurable $f$ we have \begin{align}\label{ModPoincare_UQ_bound} \pm \left( \widetilde E^{\widetilde\mu} [ \widetilde{F}_T] - E^{\mu} [F_T ]\right) \leq & \sqrt{ 2 v_\pm \eta_T}+ b_\pm \eta_T + \frac{C}{c}\frac{1-e^{-T/\alpha}}{T/\alpha} \left\|\frac{d \mu}{d\mu^*}\right\| \sqrt{{\rm Var}_{\mu^*}[f]}\,, \end{align} where $v_\pm$ and $b_\pm$ are given in \eqref{eq:v+b+} and \begin{align} \eta_T =\frac{1}{T} \left(\log(c^{-1})+ \log \left\|\frac{d \mu}{d\mu^*}\right \| + R\big(\widetilde P^{\widetilde{\mu}}_T||P^{\mu}_T\big) \right)\,.\notag \end{align} If, in addition, the process $(\widetilde{X}_t,\widetilde{P}^{\widetilde{\mu}})$ is ergodic with invariant measure $\widetilde{\mu}^*$, the relative entropy rate \begin{align} \eta_\infty = \lim_{T\to \infty} \frac{1}{T} R\big(\widetilde{P}^{\widetilde{\mu}}_T||P^{\mu^*}_T\big) \end{align} exists, and $\|d\mu/d\mu^*\|<\infty$, then we have the steady-state bias bound \begin{align}\label{ModPoincare_UQ_bound_infinity} \pm \left( {\widetilde \mu}^* [f] - \mu^* [f] \right) \leq \sqrt{ 2 v_\pm \eta_\infty}+ b_\pm \eta_\infty\,. \end{align} \end{theorem} \begin{proof} The proof proceeds along the same lines as in \cite{BRBMarkovUQ} to which we refer for more details. The starting point is the Gibbs information inequality \cite{chowdhary_dupuis_2013,DKPP}: for $g$ bounded and measurable and probability measures $Q$ and $\widetilde{Q}$ \begin{align}\label{goal_oriented_bound} \pm\left( E_{\widetilde Q}[g]-E_Q[g]\right) \leq \inf_{\lambda >0} \left\{ \frac{\log E_Q[e^{ \pm \lambda (g - E_Q[g])}] +R(\widetilde{Q} ||Q)}{\lambda} \right\} \,. \end{align} This is a direct consequence of the Gibbs variational principle for the relative entropy, \cite{dupuis2011weak}.
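As a quick numerical sanity check, the information inequality \eqref{goal_oriented_bound} can be verified on a finite state space; in the sketch below the distributions $Q$, $\widetilde{Q}$ and the bounded observable $g$ are randomly generated (an illustration only, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
Q = rng.random(n); Q /= Q.sum()          # baseline distribution
Qt = rng.random(n); Qt /= Qt.sum()       # alternative distribution
g = rng.random(n)                        # bounded observable

kl = np.sum(Qt * np.log(Qt / Q))         # relative entropy R(Qt || Q)
lhs = np.dot(Qt, g) - np.dot(Q, g)       # bias to be bounded

def rhs(sign, lam_grid=np.linspace(1e-3, 50.0, 20_000)):
    # infimum over a lambda grid of (log MGF + relative entropy) / lambda
    g_cent = sign * (g - np.dot(Q, g))
    log_mgf = np.array([np.log(np.dot(Q, np.exp(l * g_cent))) for l in lam_grid])
    return np.min((log_mgf + kl) / lam_grid)

assert lhs <= rhs(+1) + 1e-12 and -lhs <= rhs(-1) + 1e-12
print("Gibbs information inequality holds on this example")
```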
We apply the bound to the measures $P^{\mu}_T$, $\widetilde P^{\widetilde\mu}_T$ (distributions on path-space up to time $T$) and $g(x)=\int_0^T f(x_t) dt$ (a bounded measurable function of paths, $x$, up to time $T$) and then divide both sides by $T$: \begin{align} & \pm \left( {\widetilde E}^{\widetilde\mu} [ \widetilde{F}_T] - E^{\mu} [F_T ] \right) \notag \\ &\,\,\, \leq \inf_{\lambda >0} \left\{ \frac{ \log E^{\mu}[ e^{\pm \lambda T(F_T-E^\mu[F_T])}] +R(\widetilde P^{\widetilde{\mu}}_T||P^{\mu}_T) }{\lambda T} \right\} \notag \\ &\,\,\, \leq \underbrace{\inf_{\lambda >0}\left\{ \frac{ \log E^{\mu}[ e^{\pm \lambda T(F_T-\mu^*[f])}] +R(\widetilde P^{\widetilde{\mu}}_T||P^{\mu}_T) }{\lambda T} \right\} }_{=\textrm{(I)}} \mp \underbrace{ \left(E^\mu[F_T] - \mu^*[f]\right)}_{=\textrm{(II)}}\,. \end{align} The term (II) only involves the baseline process and is easily bounded, for example using the Poincar\'e inequality for the scalar product $\langle\cdot,\cdot\rangle_\#$: \begin{align}\label{eq:UQB-II} |\textrm{(II)}|= \left| E^{\mu}\left[ \frac{1}{T}\int_0^T \widehat{f}(X_t) \, dt\right] \right| & \le \frac{1}{T}\int_0^T \left|\left\langle \frac{d \mu}{d\mu^*},\mathcal{P}_t \widehat{f}\right\rangle \right|dt \\ & \le \frac{1}{T}\int_0^T e^{-t/\alpha} \left\|\frac{d \mu}{d\mu^*}\right\| \frac{C}{c} \|\widehat{f}\|dt \notag \\ & =\frac{C}{c}\frac{1-e^{-T/\alpha}}{T/\alpha} \left\|\frac{d \mu}{d\mu^*}\right\| \sqrt{{\rm Var}_{\mu^*}[f]}\,.
\notag \end{align} To bound the term (I), we use Lemma \ref{perturb_lemma} to bound the moment generating function, similarly to the proof of Theorem \ref{thm:conc-star}: \begin{align}\label{eq:UQB-I} \textrm{(I)} & = \inf_{\lambda >0} \left\{ \frac{\log \int \mathcal{P}^{\pm \lambda \widehat{f}}_T (1)d\mu +R\big(\widetilde P^{\widetilde{\mu}}_T||P^{\mu}_T\big) }{\lambda T} \right\} \\ & \le \inf_{\lambda >0} \left\{ \frac{\log \left( c^{-1}\left\| \frac{d\mu}{d\mu^*}\right\| e^{ T \kappa_\#(\pm \lambda \widehat{f})} \right) +R\big(\widetilde P^{\widetilde{\mu}}_T||P^{\mu}_T\big) }{\lambda T} \right\} \notag \\ & = \inf_{\lambda >0} \left\{ \frac{\kappa_\#(\pm \lambda \widehat{f}) + \eta_T}{\lambda} \right\} \notag \\ & \leq \inf_{\lambda >0} \left\{ \frac{ \Psi_{v_\pm,b_\pm}(\lambda) + \eta_T}{\lambda} \right\} \notag \\ &= (\Psi_{v_\pm,b_\pm}^*)^{-1}(\eta_T) = \sqrt{2 v_\pm \eta_T} + b_\pm \eta_T \,. \notag \end{align} Finally, by taking $T \to \infty$ we obtain the bound in \req{ModPoincare_UQ_bound_infinity}. \end{proof} \subsection{Application to hypocoercive samplers}\label{sec:app} Theorems \ref{thm:hypo-conf} and \ref{thm:hypo-uq} for hypocoercive MCMC samplers follow rather immediately from Corollary \ref{cor:confidence-interval} and from Theorem \ref{thm:UQ_mod_poincare}. We first verify the three assumptions in Theorem \ref{thm:conc-star}. The modified scalar product is $\langle f , g \rangle_\epsilon = \langle f , g \rangle + \epsilon \langle f , G g \rangle$ with $G1=0$ and $\|G\|\le 1$. Therefore we have $c=(1-\epsilon)^{1/2}$, $C=(1+\epsilon)^{1/2}$, and $\langle f \,,\, 1 \rangle_\epsilon = \langle f \,,\, 1 \rangle $, and, for $\epsilon>0$ sufficiently small (see \req{eq:Lambda_pos}), by \req{eq:matrix-hypo} we have $ \alpha=\frac{1+\epsilon}{\Lambda(\epsilon)}$.
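The norm-equivalence constants $c=(1-\epsilon)^{1/2}$ and $C=(1+\epsilon)^{1/2}$ can be checked on a finite-dimensional surrogate; the matrix $G$ below is a hypothetical stand-in for the operator $G$ (symmetric, $G1=0$, $\|G\|\le 1$), not the operator of Section 2:

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps = 8, 0.3
one = np.ones(n)

# build a symmetric G with G @ one = 0 and spectral norm <= 1
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
P = np.eye(n) - np.outer(one, one) / n        # projector onto the orthogonal complement of 1
G = P @ A @ P
G /= np.linalg.norm(G, 2)                     # spectral norm ||G|| = 1

f = rng.standard_normal(n)
norm2 = f @ f                                 # ||f||^2
norm2_eps = f @ f + eps * (f @ (G @ f))       # ||f||_eps^2

assert (1 - eps) * norm2 <= norm2_eps <= (1 + eps) * norm2
assert abs((f @ one + eps * f @ (G @ one)) - f @ one) < 1e-9   # <f,1>_eps = <f,1>
print("norm equivalence with c^2 = 1 - eps and C^2 = 1 + eps verified")
```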
Since $\langle M_{\widehat{f}}g,g\rangle_\epsilon \le \|\widehat{f}\|_\infty \|g\|^2 (1+ \epsilon) \le \frac{1+\epsilon}{1-\epsilon} \|\widehat{f}\|_\infty \|g\|^2_\epsilon $ we have \begin{align} b_\pm = \alpha \max\left\{0, \sup_{\|g\|_\epsilon=1}\langle M_{\pm\widehat{f}}g,g\rangle_\epsilon\right\} \le \frac{(1+\epsilon)^2}{1-\epsilon}\frac{\|\widehat{f}\|_\infty}{\Lambda(\epsilon)}\,. \end{align} Furthermore, using self-adjointness of $G$, we have \begin{align} M_{\widehat{f}}^\dagger=& (I+ \epsilon G)^{-1}M_{\widehat{f}}(I+ \epsilon G) = M_{\widehat{f}} + \epsilon (I+ \epsilon G)^{-1} (M_{\widehat{f}}G -GM_{\widehat{f}}), \end{align} and thus, since $G1=0$, \begin{align} \frac{1}{2}(M_{\widehat{f}}+M_{\widehat{f}}^\dagger)1=\widehat{f}-\frac{\epsilon}{2} (I+\epsilon G )^{-1} G\widehat{f}\,. \notag \end{align} Therefore \begin{align} \left\|\frac{1}{2}(M_{\widehat{f}}+M_{\widehat{f}}^\dagger)1\right\|_\epsilon^2 &= \left\langle (I-\frac{\epsilon}{2} (I+\epsilon G )^{-1} G)\widehat{f} \,,\, (I+ \epsilon G) (I-\frac{\epsilon}{2} (I+\epsilon G )^{-1} G) \widehat{f}\right\rangle \notag \\ &=\left\langle (I-\frac{\epsilon}{2} (I+\epsilon G )^{-1} G)\widehat{f} \,,\, (I+ \frac{\epsilon}{2} G) \widehat{f} \right\rangle \notag \\ &\le \left(1 + \frac{\epsilon}{2}\frac{1}{1 - \epsilon}\right)\left(1 + \frac{\epsilon}{2}\right) \|\widehat{f}\|^2 = \frac{1 - \frac{\epsilon^2}{4}}{1-\epsilon} {\rm Var}_{\mu^*}[f]\,, \end{align} and so \begin{align} v_\pm \le \frac{ (1+\epsilon) (1 - \frac{\epsilon^2}{4})}{1-\epsilon} \frac{2 \operatorname{Var}_{\mu^*}[f]}{\Lambda(\epsilon)}\,. \end{align}
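The elementary identity used in the last estimate, $\left(1+\frac{\epsilon}{2(1-\epsilon)}\right)\left(1+\frac{\epsilon}{2}\right)=\frac{1-\epsilon^2/4}{1-\epsilon}$, can be confirmed numerically on a grid of $\epsilon$ values:

```python
import numpy as np

# grid of epsilon values in (0, 1)
eps = np.linspace(0.01, 0.9, 50)
lhs = (1 + eps / (2 * (1 - eps))) * (1 + eps / 2)
rhs = (1 - eps**2 / 4) / (1 - eps)
assert np.allclose(lhs, rhs)
print("prefactor identity verified on the grid")
```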
\subsection{Proof of Lemma \ref{lemma_1}} \label{appendix_1} \noindent \textbf{Proof of 1):} when $f^k = \phi(\xi^k)$, the constraint on the vertical force Eq. \eqref{eq:pos_vert_forces} is given by: \begin{equation*} f_z = e^{\xi_3} + f_z^{min} > f_z^{min}, \end{equation*} which is satisfied for all $\xi_3$. Now, substitute the parametrization~\eqref{eq:parametrized_wrench} in the remaining stability constraints \eqref{eq:static_friction}--\eqref{eq:torsional_friction}: \begin{IEEEeqnarray}{RCL} \IEEEyesnumber \label{eq:contact_constraints_parametrized} \sqrt[]{{\mu_c}^2\frac{\tanh^2(\xi_1) \,f_z^2}{{1+\tanh^2(\xi_2)}} + {\mu_c}^2\frac{\tanh^2(\xi_2) \,f_z^2}{{1+\tanh^2(\xi_1)}}} &<& {\mu_c} f_z \quad \quad \IEEEyessubnumber \label{eq:static_friction_parametrized},\\ y_c^{min} < \frac{(\delta_y \tanh(\xi_4) + \delta_{y_0})\, f_z}{f_z} < y_c^{max} \IEEEyessubnumber \label{eq:CoP_y_parametrized},\\ -x_c^{max} < \frac{(\delta_x \tanh(\xi_5) + \delta_{x_0})\, f_z}{f_z} < -x_c^{min} \IEEEyessubnumber \label{eq:CoP_x_parametrized},\\ \left|\frac{\mu_z \tanh(\xi_6)\, f_z}{f_z}\right| \hspace{0.5 mm} < \mu_z. \IEEEyessubnumber \label{eq:torsional_friction_parametrized} \end{IEEEeqnarray} % \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{pictures/jointNorm.pdf} \caption{Joint position error norm during highly dynamic movements. The plot shows that the system's zero dynamics does not diverge while achieving the primary task.} \label{fig:joint_err_norm} \vspace{-0.5cm} \end{figure} % \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{pictures/momentumDistNorm.pdf} \caption{Momentum rate of change (top plot) and momentum error norm (bottom plot) during the disturbance rejection experiment.
The controller can retain stability despite the action of external unmodeled forces.} \label{fig:mom_err_norm_disturbances} \end{figure} The vertical force $f_z$ is greater than zero and can be removed from \eqref{eq:contact_constraints_parametrized}, thus leading to the following set of inequalities: \begin{IEEEeqnarray}{LCL} \IEEEyesnumber \label{eq:contact_constraints_parametrized_simplified} \sqrt{\frac{\tanh^2(\xi_1)}{1+\tanh^2(\xi_2)} + \frac{\tanh^2(\xi_2)}{1+\tanh^2(\xi_1)}} < 1 \IEEEyessubnumber \label{eq:static_friction_parametrized_simplified},\\ y_c^{min} < \delta_y \tanh(\xi_4) + \delta_{y_0} < y_c^{max} \IEEEyessubnumber \label{eq:CoP_y_parametrized_simplified},\\ -x_c^{max} < \delta_x \tanh(\xi_5) + \delta_{x_0} < -x_c^{min} \IEEEyessubnumber \label{eq:CoP_x_parametrized_simplified},\\ |\tanh(\xi_6)| \hspace{0.5 mm} < 1, \IEEEyessubnumber \label{eq:torsional_friction_parametrized_simplified} \end{IEEEeqnarray} where also the coefficients $\mu_c$ and $\mu_z$ have been removed from Eq. \eqref{eq:static_friction_parametrized} and \eqref{eq:torsional_friction_parametrized}, respectively. It is straightforward to verify that the constraint \eqref{eq:torsional_friction_parametrized_simplified} holds for all finite $\xi_6$. Also, direct calculations on Eq. \eqref{eq:CoP_y_parametrized_simplified}-\eqref{eq:CoP_x_parametrized_simplified} show that: \begin{IEEEeqnarray}{LCL} \label{eq:cop_constraints_simplified} y_c^{min} < \frac{y_c^{max}(1+ \tanh(\xi_4)) + y_c^{min}(1- \tanh(\xi_4))}{2} < y_c^{max} \nonumber \\ x_c^{min} < \frac{x_c^{max}(1- \tanh(\xi_5)) + x_c^{min}(1+ \tanh(\xi_5))}{2} < x_c^{max}. \nonumber \end{IEEEeqnarray} Using the facts that $y_c^{min} < y_c^{max}$, $x_c^{min} < x_c^{max}$ and $|\tanh(\cdot)| < 1$, one easily shows that the above inequalities are always satisfied for all $\xi_4,\xi_5$. Concerning the remaining constraint Eq.
\eqref{eq:static_friction_parametrized_simplified}, note that the argument of the square root can be rearranged as follows: \begin{IEEEeqnarray}{LCL} \frac{\tanh^2(\xi_1) (1 + \tanh^2(\xi_1)) + \tanh^2(\xi_2) (1 + \tanh^2(\xi_2))}{\tanh^2(\xi_1)+ \tanh^2(\xi_2) + \tanh^2(\xi_1)\tanh^2(\xi_2) +1} \nonumber \\ = \frac{\tanh^2(\xi_1) + \tanh^2(\xi_2) + \tanh^4(\xi_1) + \tanh^4(\xi_2)}{\tanh^2(\xi_1)+ \tanh^2(\xi_2) + \tanh^2(\xi_1)\tanh^2(\xi_2) +1}. \nonumber \end{IEEEeqnarray} So, the constraint \eqref{eq:static_friction_parametrized_simplified} is satisfied if and only if \begin{IEEEeqnarray}{LCL} \label{constraint28a-2} \tanh^2(\xi_1)\tanh^2(\xi_2) - \tanh^4(\xi_1) - \tanh^4(\xi_2) + 1 > 0. \IEEEeqnarraynumspace \end{IEEEeqnarray} Since $|\tanh(\cdot)| < 1$, then $\tanh^4(\cdot) \leq \tanh^2(\cdot)$. As a consequence, one easily finds a minorant of the left hand side of~\eqref{constraint28a-2}, namely \begin{IEEEeqnarray}{LCL} \label{constraint28a-3} \tanh^2(\xi_1)\tanh^2(\xi_2) - \tanh^4(\xi_1) - \tanh^4(\xi_2) + 1 &\geq& \nonumber \\ \tanh^2(\xi_1)\tanh^2(\xi_2) - \tanh^2(\xi_1) - \tanh^2(\xi_2) + 1 &=& \nonumber \\ \left(1-\tanh^2(\xi_1)\right)\left( 1-\tanh^2(\xi_2)\right). \IEEEeqnarraynumspace \end{IEEEeqnarray} Since $|\tanh(\cdot)| < 1$, then~\eqref{constraint28a-3} is (strictly) greater than zero, which renders~\eqref{constraint28a-2} satisfied. \noindent \textbf{Proof of 2):} First, it follows from \eqref{eq:parametrized_wrench} that any value of $\xi$ generates a unique $f^k \in \mathcal{K'}$. Now, assume that $f^k \in \mathcal{K'}$. Then, we have to show that there exists a unique $\xi = \phi^{-1}(f^k)$. 
It is straightforward to compute the \emph{inverse mapping} of the vertical force and moments parametrization: \begin{IEEEeqnarray}{LCL} \xi_3 = \ln{}\bigg(f_z -f_z^{min}\bigg) \nonumber\\ \xi_4 = \text{atanh}\bigg(\frac{M_x - \delta_{y_0}f_z}{\delta_{y}f_z}\bigg) \nonumber \\ \xi_5 = \text{atanh}\bigg(\frac{M_y - \delta_{x_0}f_z}{\delta_{x}f_z}\bigg) \nonumber \\ \xi_6 = \text{atanh}\bigg(\frac{M_z}{\mu_{z}f_z}\bigg). \nonumber \end{IEEEeqnarray} Since $\mathcal{K'} \subset \mathcal{K}$, $f^k$ satisfies \eqref{eq:contact_constraints}. Using this fact and the definitions \eqref{eq:feet_size_constraints} we conclude that the solutions $(\xi_3,\xi_4,\xi_5,\xi_6)$ exist. Furthermore, since the above equations are composed of one-to-one correspondences (hyperbolic tangent, logarithm), these solutions are also unique. % As for the tangential forces $f_x$ and $f_y$, let us recall the expressions of the $f_x$ and $f_y$ parametrizations: \begin{IEEEeqnarray}{LCL} \IEEEyesnumber \label{eq:parametrization_f_xy} f_x = {\mu_c}\frac{\tanh(\xi_1) \,f_z}{\sqrt{{1+\tanh^2(\xi_2)}}} \IEEEyessubnumber\\ f_y = {\mu_c}\frac{\tanh(\xi_2) \,f_z}{\sqrt{{1+\tanh^2(\xi_1)}}}. \IEEEyessubnumber \end{IEEEeqnarray} An easy way to compute the inverse mapping is to square Eq. \eqref{eq:parametrization_f_xy}, which gives: \begin{IEEEeqnarray}{LCL} \IEEEyesnumber \label{eq:parametrization_f_xy_sq} f_x^2 = {\mu_c}^2\frac{\tanh^2(\xi_1) \,f_z^2}{{1+\tanh^2(\xi_2)}} \IEEEyessubnumber \\ f_y^2 = {\mu_c}^2\frac{\tanh^2(\xi_2) \,f_z^2}{{1+\tanh^2(\xi_1)}}. \IEEEyessubnumber \end{IEEEeqnarray} In the resulting equations, the squared hyperbolic tangents $\tanh^2(\xi_1)$, $\tanh^2(\xi_2)$ appear linearly.
Therefore they can be easily computed through matrix inversion: \begin{IEEEeqnarray}{LCL} \IEEEyesnumber \label{eq:inverse_mapping_xy} \begin{bmatrix} \tanh^2(\xi_1) \\ \tanh^2(\xi_2) \end{bmatrix} &=& \begin{bmatrix} {\mu_c}^2f_z^2 & -f_x^2 \\ -f_y^2 & {\mu_c}^2f_z^2 \end{bmatrix}^{-1} \begin{bmatrix} f_x^2 \\ f_y^2 \end{bmatrix}. \end{IEEEeqnarray} Since $f^k \in \mathcal{K'}$, the right hand side of \eqref{eq:inverse_mapping_xy} is necessarily smaller than one and there exists at least one $\xi$ satisfying \eqref{eq:inverse_mapping_xy}. Solving Eq. \eqref{eq:inverse_mapping_xy} for $\xi_1,\xi_2$ gives \emph{two} possible solutions, namely $\xi_{1(2)} = \pm \text{atanh}\left(\sqrt{\tanh^2(\xi_{1(2)})}\right)$. However, only one of the two solutions satisfies the parametrization \eqref{eq:parametrization_f_xy}: in fact, the terms $f_z, \mu_c$, and $\sqrt{{1+\tanh^2(\xi_{1(2)})}}$ on the right-hand side of \eqref{eq:parametrization_f_xy} are always positive. So, the sign of $\xi_1$ ($\xi_2$) must correspond to the sign of $f_x$ ($f_y$), leading to the unique solution: \begin{IEEEeqnarray}{LCL} \xi_{1} = \text{sign}(f_x) \text{atanh}\left(\sqrt{\tanh^2(\xi_{1})}\right) \nonumber \\ \xi_{2} = \text{sign}(f_y) \text{atanh}\left(\sqrt{\tanh^2(\xi_{2})}\right). \nonumber \end{IEEEeqnarray} \textit{Remark:} it is possible to verify that if $f_x$ and $f_y$ belong to $\mathcal{K'}$, then the matrix inversion in Eq. \eqref{eq:inverse_mapping_xy} can always be performed. In fact, singularities arise when \[\det\left(\begin{bmatrix} {\mu_c}^2f_z^2 & -f_x^2 \\ -f_y^2 & {\mu_c}^2f_z^2 \end{bmatrix}\right) = \mu_c^4f_z^4 -f_x^2f_y^2 = 0.\] Substituting Eq. \eqref{eq:parametrization_f_xy_sq} in the expression of the determinant allows one to verify that the condition $\mu_c^4f_z^4 = f_x^2f_y^2$ never occurs for any $\xi_1,\xi_2$. \noindent \textbf{Proof of 3):} let $\Phi_k \in \mathbb{R}^{6 \times 6}$ denote the gradient of $f^k = \phi(\xi)$.
Then, straightforward calculations show that $\Phi_k$ is given by: \begin{equation} \label{gradient_matrix} \Phi_k = \begin{bmatrix} F_{11} & F_{12} & F_{13} & 0 & 0 & 0 \\ F_{21} & F_{22} & F_{23} & 0 & 0 & 0 \\ 0 & 0 & F_{33} & 0 & 0 & 0 \\ 0 & 0 & F_{43} & F_{44} & 0 & 0 \\ 0 & 0 & F_{53} & 0 & F_{55} & 0 \\ 0 & 0 & F_{63} & 0 & 0 & F_{66} \\ \end{bmatrix}. \end{equation} Applying Laplace's formula for the calculation of the determinant of $\Phi_k$ leads to: \begin{equation} \text{det}(\Phi_k) = F_{66}F_{55}F_{44}F_{33} \text{det}\begin{bmatrix} F_{11} & F_{12} \\ F_{21} & F_{22} \end{bmatrix} \nonumber \end{equation} where one has: \begin{align} F_{11} =\,\, & \frac{{\mu_c}(1-\tanh^2(\xi_{1}))(e^{\xi_3} + f_z^{min})}{\sqrt{1 +\tanh^{2}(\xi_{2})}}, \nonumber \\ F_{12} =\,\, & \frac{{\mu_c}\tanh(\xi_{1})(e^{\xi_3} + f_z^{min})}{(1 +\tanh^{2}(\xi_{2}))^{\frac{3}{2}}}(\tanh^3(\xi_{2})-\tanh(\xi_{2})) , \nonumber \\ F_{21} =\,\, & \frac{{\mu_c}\tanh(\xi_{2})(e^{\xi_3} + f_z^{min})}{(1 +\tanh^{2}(\xi_{1}))^{\frac{3}{2}}}(\tanh^3(\xi_{1})-\tanh(\xi_{1})) , \nonumber \\ F_{22} =\,\, & \frac{{\mu_c}(1-\tanh^2(\xi_{2}))(e^{\xi_3} + f_z^{min})}{\sqrt{1 +\tanh^{2}(\xi_{1})}}, \nonumber \\ F_{33} = \,\,& e^{\xi_{3}}, \nonumber \\ F_{44} = \,\,& \delta_y (1 - \tanh^2(\xi_4))(e^{\xi_3} + f_z^{min}), \nonumber \\ F_{55} = \,\,& \delta_x (1 - \tanh^2(\xi_5))(e^{\xi_3} + f_z^{min}), \nonumber \\ F_{66} = \,\,& \mu_{z}(1 - \tanh^2(\xi_6))(e^{\xi_3} + f_z^{min}). \nonumber \end{align} It can be verified that $F_{33}, F_{44}, F_{55}$ and $F_{66}$ are always different from zero for any finite $\xi_3,\xi_4,\xi_5,\xi_6$. We are then left to evaluate the determinant of $\begin{bmatrix} F_{11} & F_{12} \\ F_{21} & F_{22} \end{bmatrix}$, which is $F_{11}F_{22} - F_{12}F_{21}$.
After noting that the product $F_{11}F_{22}$ is contained in the expression of $F_{12}F_{21}$, one is left with: \begin{IEEEeqnarray}{LCL} \text{det} \begin{bmatrix} F_{11} & F_{12} \\ F_{21} & F_{22} \end{bmatrix} \nonumber &=& {\mu_c^2}(e^{\xi_3} + f_z^{min})^2 \cdot \nonumber \\ & & \frac{(1-\tanh^2(\xi_{1}))(1-\tanh^2(\xi_{2}))}{\sqrt{(1+\tanh^2(\xi_{1}))(1+\tanh^2(\xi_{2}))}} \cdot \nonumber \\ & & \frac{(1 + \tanh^2(\xi_{1}) + \tanh^2(\xi_{2}))}{(1+\tanh^2(\xi_{1}))(1+\tanh^2(\xi_{2}))} \nonumber \end{IEEEeqnarray} which is non-zero for any finite $\xi_1, \xi_2, \xi_3$. Hence, matrix $\Phi_k$ is invertible for any finite $\xi$. \subsection{Proof of Lemma \ref{lemma_12}} \label{appendix_12} \noindent \textbf{Proof of 1):} first, observe that Lyapunov stability of the equilibrium point $(\tilde{H},\dot{\tilde{H}}) = (0,0)$ follows from \eqref{eq:closed_loop_fb_linH} with $\ddot{H}^*$ given by \eqref{eq:closed_loop_fb_lin}. By definition, Lyapunov stability implies that there exist initial conditions $(\tilde{H},\dot{\tilde{H}})(0)$ such that the system trajectories $(\tilde{H},\dot{\tilde{H}})(t)$ remain as close as we wish to the point $(0,0)$. We are then left to show that the variable $\xi$ remains bounded while the system trajectories evolve close to the equilibrium point: the boundedness of $\xi$ would imply that the matrix $A\Phi(\xi)$ in \eqref{eq:centroidal_momentum_acc} remains of full rank, thus ensuring that the equality \eqref{eq:closed_loop_fb_linH} can be satisfied $\forall t$. Now, since the desired momentum $H_d$ is a feasible system equilibrium, then there exists a bounded $\xi_e(t)$ such that \begin{IEEEeqnarray}{LCL} \label{equilibrium} \dot{H}_d &=& A\phi(\xi_e) -mge_3. 
\end{IEEEeqnarray} Furthermore, the behaviour of the system \[\dot{\tilde{H}} = A\phi(\xi) -mge_3 -\dot{{H}}_d\] close to the equilibrium point can be obtained via linearisation techniques, which in view of \eqref{equilibrium} yields \begin{IEEEeqnarray}{LCL} \label{linearisation} \dot{\tilde{H}} = A(\phi(\xi_e) + \Phi(\xi_e)\tilde{\xi}) -mge_3 - \dot{{H}}_d = A\Phi(\xi_e)\tilde{\xi}, \IEEEeqnarraynumspace \end{IEEEeqnarray} with $\tilde{\xi} = \xi - \xi_e$. Since it is assumed that the robot makes a single contact with the environment, namely $n_c = 1$, then the matrix $A \in \mathbb{R}^{6\times6}$. From \eqref{matrix_Ak} one easily verifies that the matrix $A$ is always invertible, with $A^{-1}$ a bounded matrix. Then, one has \[|\tilde{\xi}| = |(A\Phi(\xi_e))^{-1}\dot{\tilde{H}}| \leq c_a|\dot{\tilde{H}}|,\] with $c_a > 0$. As a consequence of the Lyapunov stability of $(\tilde{H},\dot{\tilde{H}}) = (0,0)$, there exist initial conditions $(\tilde{H},\dot{\tilde{H}})(0)$ for which the variable $\tilde{\xi}$ stays bounded and as close as we wish to zero, thus rendering the linearisation \eqref{linearisation} consistent $\forall \ t$. Finally, since $\xi_e$ is bounded, then $\xi$ is also bounded, and the equality \eqref{eq:closed_loop_fb_linH} can be satisfied $\forall t$. This latter fact, along with $\ddot{H}^*$ given by \eqref{eq:closed_loop_fb_lin}, then implies local asymptotic stability since \eqref{eq:closed_loop_fb_linH} is satisfied $\forall \ t$. \noindent \textbf{Proof of 2):} since it is assumed that the variable $\xi$ is bounded $\forall t$, then \eqref{eq:closed_loop_fb_linH} can be satisfied $\forall \ t$ because the matrix $A\Phi(\xi)$ in \eqref{eq:centroidal_momentum_acc} remains of full rank. Then, the closed loop dynamics is $\ddot{\tilde{H}} = - K_d\dot{\tilde{H}} - K_p{\tilde{H}} \ \forall t$, which implies global asymptotic stability of the equilibrium point $(\tilde{H},\dot{\tilde{H}}) = (0,0)$.
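The stability of the closed-loop error dynamics $\ddot{\tilde{H}} = -K_d\dot{\tilde{H}} - K_p\tilde{H}$ invoked above can be checked numerically; the sketch below uses arbitrary symmetric positive definite gains (hypothetical values, not taken from any experiment) and verifies that the first-order form of the dynamics is Hurwitz:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6

def spd(n):
    # random symmetric positive definite gain matrix (illustrative only)
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)

Kp, Kd = spd(n), spd(n)
# first-order form of  H_tilde'' = -Kd H_tilde' - Kp H_tilde
M = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Kp,              -Kd      ]])
assert np.max(np.linalg.eigvals(M).real) < 0
print("closed-loop error dynamics is asymptotically stable")
```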
\subsection{Proof of Lemma \ref{lemma_2}} \label{appendix_2} \noindent \textbf{Proof of 1):} define the following Lyapunov function candidate: \begin{equation} \label{lyapunov-integral} V(I, \tilde{H}, \zeta):= \frac{1}{2}I^{\top}K_pI + \frac{1}{2} \tilde{H}^\top\tilde{H} + \frac{1}{2}\zeta^\top K_o \zeta. \end{equation} Note that $V = 0 \iff (I, \tilde{H}, \zeta) = (0,0,0)$. Compute the Lyapunov function derivative $\dot{V}$: \begin{align} \dot{V} & = I^\top K_p \tilde{H} + \tilde{H}^\top \dot{\tilde{H}} + \zeta^\top K_o \dot{\zeta} \nonumber \\ & = I^\top K_p \tilde{H} + \tilde{H}^\top (\zeta -K_p I -K_d\tilde{H}) + \zeta^\top K_o \dot{\zeta} \nonumber \\ & = - \tilde{H}^\top K_d\tilde{H} + \zeta^\top K_o(\dot{\zeta} + K_o^{-1}\tilde{H}). \nonumber \end{align} It is easy to prove that $\dot{V} \leq 0$ when $\dot{\zeta} + K_o^{-1}\tilde{H} = -\zeta$. Then, in view of $\dot{\zeta}$ given by \eqref{eq:dot_zeta} and the definition of $\zeta$ one has: \begin{align} \label{eq:input_momentum_definition} \dot{A}f + A\Phi\dot{\xi} - \ddot{H}_d + K_d \dot{\tilde{H}} + K_p\tilde{H} + K_o^{-1}\tilde{H} \\ = -Af +mge_3 + \dot{H}_d - K_d \tilde{H} - K_p I, \nonumber \end{align} and after a rearrangement, Eq. \eqref{eq:input_momentum_definition} leads to the definition of the control input $\dot{\xi}^*$ as in Eq. \eqref{eq:input_momentum}, which gives \begin{equation} \label{lyapunov-derivative-integral} \dot{V} = - \tilde{H}^\top K_d\tilde{H} - \zeta^\top K_o \zeta \leq 0. \end{equation} This result implies stability of the equilibrium point and the boundedness of the system's trajectories. Furthermore, as long as Eq. \eqref{eq:input_momentum_definition} holds, the closed loop dynamics is given by \[\dot{\zeta} = - \zeta -K_o^{-1}\tilde{H},\] and Eq. \eqref{eq:dot_I}-\eqref{eq:dot_tilde_H}. The system is therefore autonomous, and the convergence of $\tilde{H}, \zeta$ and $\dot{\tilde{H}}, \dot{\zeta}$ to zero can be proved via La Salle's theorem.
Convergence to zero of $I$ can be proven by computing Eq. \eqref{eq:dot_tilde_H} on the invariant manifold. Analogously to the proof of Lemma \ref{lemma_12}, we are left to show that in a neighborhood of the equilibrium point, the variable $\xi$ remains bounded, which guarantees that \eqref{eq:input_momentum_definition} holds. Note that since the desired momentum $H_d$ is a feasible equilibrium, then there exists a bounded $\xi_e(t)$ such that \[\dot{H}_d = A\phi(\xi_e) -mge_3.\] Consider now the right hand side of \eqref{eq:dot_tilde_H}, namely \begin{IEEEeqnarray}{LCL} Af -mge_3 -\dot{H}_d = \zeta - K_d \tilde{H} - K_p I, \label{eq:dot_tilde_H_1} \end{IEEEeqnarray} and linearize $Af -mge_3 -\dot{H}_d$ close to the equilibrium trajectory $\xi_e$, which yields \begin{IEEEeqnarray}{LCL} \label{linearisation2} A\Phi(\xi_e)\tilde{\xi} &=& \zeta - K_d \tilde{H} - K_p I = \Bar{K} \begin{pmatrix} I \\ \tilde{H} \\ \zeta \end{pmatrix} \label{eq:dot_tilde_H_2} \end{IEEEeqnarray} with $\Bar{K} = (-K_p, \ -K_d, \ 1_6)$. Since it is assumed that the robot makes a single contact with the environment, namely $n_c = 1$, then the matrix $A \in \mathbb{R}^{6\times6}$. From \eqref{matrix_Ak} one easily verifies that the matrix $A$ is always invertible, with $A^{-1}$ a bounded matrix. Then, one has \[|\tilde{\xi}| = \left|(A\Phi(\xi_e))^{-1}\Bar{K} \begin{pmatrix} I \\ \tilde{H} \\ \zeta \end{pmatrix}\right| \leq c_b\left| \begin{pmatrix} I \\ \tilde{H} \\ \zeta \end{pmatrix}\right|,\] with $c_b > 0$. As a consequence of the Lyapunov stability of $(I,\tilde{H},\zeta) = (0,0,0)$, there exist initial conditions $(I,\tilde{H},\zeta)(0)$ for which the variable $\tilde{\xi}$ stays bounded and as close as we wish to zero, thus rendering the linearisation \eqref{linearisation2} consistent $\forall \ t$. Finally, since $\xi_e$ is bounded, then $\xi$ is also bounded, and the equality \eqref{eq:input_momentum_definition} can be satisfied $\forall t$. 
\noindent \textbf{Proof of 2):} it is assumed that $\xi$ is bounded $\forall t$; then \eqref{eq:input_momentum_definition} is satisfied $\forall \ t$ because the matrix $A\Phi(\xi)$ in \eqref{eq:input_momentum_definition} remains of full rank. In view of the radial unboundedness of \eqref{lyapunov-integral}, and of the expression of \eqref{lyapunov-derivative-integral} that is valid $\forall t$, global asymptotic stability follows from the same LaSalle arguments above. \subsection{Computation of $\dot{\xi}_0$} \label{computation-xi-0} The calculation of $\dot{\xi}_0$ is carried out by substituting \eqref{eq:input_momentum} in the expression of \eqref{eq:input_torques}. First, rewrite \eqref{eq:input_torques} as: \begin{subequations} \label{input_compact} \begin{align} \tau = \,\, & \Theta f + \theta\\ \Theta:= \,\, & -\Lambda^{\dagger}JM^{-1}J^\top\\ \theta:= \,\, & \Lambda^{\dagger}[JM^{-1}h - \dot{J}\nu] + N_{\Lambda}\tau_0. \end{align} \end{subequations} Then, consider the following Lyapunov function: \begin{IEEEeqnarray}{LCL} V = \frac{1}{2}\tau^{\top}\tau \nonumber \quad \Rightarrow \quad \dot{V} = \tau^{\top}\dot{\tau} = \tau^{\top}(\dot{\Theta}f + \Theta\Phi\dot{\xi} + \dot{\theta}). \nonumber \end{IEEEeqnarray} Substitute now $\dot{\xi}$ given by \eqref{eq:input_momentum} into $\dot{V}$, which leads to: \begin{IEEEeqnarray}{LCL} \label{eq:lyap_xi_0} \dot{V} & = & \tau^{\top}(\dot{\Theta}f + \Theta\Phi\dot{\xi}_1 + \Theta\Phi N_{A\Phi}\dot{\xi}_0 + \dot{\theta}), \end{IEEEeqnarray} with $\dot{\xi}_1 {:=} {(A\Phi)}^{\dagger}\,[\ddot{H}_d {-} (K_d {+} 1_6)\dot{\tilde{H}} {-} (K_d {+} K^{-1}_o {+} K_p)\tilde{H} {-} K_p I {-}\dot{A}f]$. A solution that minimizes the joint torque norm is to impose: \begin{IEEEeqnarray}{LCL} \label{eq:input_xi_0} \dot{\Theta}f + \Theta\Phi\dot{\xi}_1 + \Theta\Phi N_{A\Phi}\dot{\xi}_0 + \dot{\theta} = -K_{\tau}\tau, \end{IEEEeqnarray} with $K_{\tau}$ a symmetric and positive definite matrix.
When the equivalence \eqref{eq:input_xi_0} is satisfied, the Lyapunov derivative Eq. \eqref{eq:lyap_xi_0} becomes $\dot{V} = -\tau^{\top}K_{\tau}\tau \leq 0$ and the input joint torques converge to zero. In general, however, Eq. \eqref{eq:input_xi_0} cannot be satisfied exactly, since the rank of the matrix $(\Theta\Phi N_{A\Phi})$ that multiplies the free variable $\dot{\xi}_0$ is lower than the dimension of the joint torque vector $\tau \in \mathbb{R}^n$. Nevertheless, we compute the closest solution to Eq. \eqref{eq:input_xi_0}, which leads to the following expression of $\dot{\xi}_0$: \begin{IEEEeqnarray}{LCL} \dot{\xi}_0 = -(\Theta\Phi N_{A\Phi})^{\dagger}(\dot{\Theta}f + \Theta\Phi\dot{\xi}_1 + \dot{\theta} +K_{\tau}\tau). \nonumber \end{IEEEeqnarray} \section{Background} \label{sec:background} \subsection{Notation} \begin{itemize} \item $\mathcal{I}$ is an inertial frame, with its $z$ axis pointing against gravity, and $\mathcal{B}$ is a frame attached to the robot's \emph{base link}. \item The constant $m$ represents the total mass of the robot, and $g$ is the norm of the gravitational acceleration. \item Given a matrix $A \in \mathbb{R}^{p \times n}$, we denote with $A^{\dagger} \in \mathbb{R}^{n \times p}$ its Moore-Penrose pseudoinverse. \item $e_i \in \mathbb{R}^6$ is the canonical vector, whose components are all zero except the $i$-th one, which is equal to one. \item Let $S(x) \in \mathbb{R}^{3 \times 3}$ be the skew-symmetric matrix such that $S(x)y {=} x {\times} y$, with $\times$ the cross product operator in $\mathbb{R}^3$. \end{itemize} \subsection{Robot Modeling} The robot is modeled as a multi-body system composed of $n + 1$ rigid bodies, called links, connected by $n$ joints with one degree of freedom each. We also assume that none of the links has an \emph{a priori} constant position-and-orientation with respect to an inertial frame, i.e. the system is \emph{free floating}.
The robot configuration space is the Lie group $\mathbb{Q} = \mathbb{R}^3 \times \mathbb{SO}(3) \times \mathbb{R}^n$ and it is characterized by the \emph{pose} (i.e. position-and-orientation) of a \emph{base frame} attached to the robot's \emph{base link}, and the joint positions. An element $q \in \mathbb{Q}$ can be defined as the following triplet: $q = (^{\mathcal{I}}o_{\mathcal{B}}, {}^{\mathcal{I}}R_{\mathcal{B}}, s)$ where $^{\mathcal{I}}o_{\mathcal{B}} \in \mathbb{R}^3$ denotes the origin of the base frame with respect to the inertial frame, $^{\mathcal{I}}R_{\mathcal{B}}$ is the rotation matrix representing the orientation of the base frame, and $s \in \mathbb{R}^n$ is the joint configuration characterizing the robot posture. The velocity of the multi-body system can be characterized by the \emph{algebra} of $\mathbb{Q}$. We here choose to represent the velocity of the multi-body system by the set $\mathbb{V} = \mathbb{R}^3 \times \mathbb{R}^3 \times \mathbb{R}^n$, where an element of $\mathbb{V}$ is the vector $\nu = (^{\mathcal{I}}\dot{o}_{\mathcal{B}}, \,^{\mathcal{I}}\omega_{\mathcal{B}},\, \dot{s}) =(\mathrm{v}_{\mathcal{B}}, \dot{s})$, and $^{\mathcal{I}}\omega_{\mathcal{B}}$ is the angular velocity of the base frame expressed w.r.t. the inertial frame, i.e. $^{\mathcal{I}}\dot{R}_{\mathcal{B}} = S(^{\mathcal{I}}\omega_{\mathcal{B}})^{\mathcal{I}}R_{\mathcal{B}}$. A more detailed description of the configuration space and its algebra is provided in \cite{traversaro2017}. We assume that the robot interacts with the environment by exchanging $n_c$ distinct wrenches. 
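To make the notation above concrete, the short Python sketch below (all numerical values are arbitrary) implements the operator $S(\cdot)$ and integrates the base-frame orientation kinematics $^{\mathcal{I}}\dot{R}_{\mathcal{B}} = S(^{\mathcal{I}}\omega_{\mathcal{B}})\,^{\mathcal{I}}R_{\mathcal{B}}$ with a first-order update followed by a projection back onto the rotation group:

```python
import numpy as np

def S(x):
    """Skew-symmetric matrix such that S(x) @ y == np.cross(x, y)."""
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

x, y = np.array([1.0, -2.0, 0.5]), np.array([0.3, 0.7, -1.1])
assert np.allclose(S(x) @ y, np.cross(x, y))

# Base-frame orientation kinematics dR/dt = S(omega) R, integrated with a
# first-order update and re-orthonormalisation (projection onto SO(3) via SVD).
R, omega, dt = np.eye(3), np.array([0.0, 0.0, 1.0]), 1e-3
for _ in range(1000):
    R = R + dt * S(omega) @ R
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt

# After 1 s at 1 rad/s about z, R is (to first order) a rotation of 1 rad about z.
assert np.allclose(R.T @ R, np.eye(3), atol=1e-9)
assert np.allclose(R @ np.array([1.0, 0.0, 0.0]),
                   np.array([np.cos(1.0), np.sin(1.0), 0.0]), atol=1e-3)
```

The SVD projection is one of several standard ways to keep the integrated matrix on $SO(3)$; an exponential-map update would serve equally well here.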
The equations of motion of the multi-body system can be described by applying the Euler-Lagrange formalism \cite[Chapter 13.5]{Marsden1999}: \begin{IEEEeqnarray}{LCL} \IEEEyesnumber \label{eq:dynamics} M(q)\dot{\nu} + C(q,\nu)\nu + G(q) = B\tau + \sum_{k=1}^{n_c} J_{\mathcal{C}_k}^\top f^k, \end{IEEEeqnarray} where $M \in \mathbb{R}^{(n+6) \times (n+6)}$ is the mass matrix, $C \in \mathbb{R}^{(n+6) \times (n+6)}$ is the matrix accounting for Coriolis and centrifugal effects, $G \in \mathbb{R}^{n+6}$ is the gravity term, $B = [0_{n\times6},\,\, 1_n]^\top$ is a selector of the actuated DoFs, $\tau \in \mathbb{R}^n$ is a vector representing the internal actuation torques, and $f^k \in \mathbb{R}^6$ represents an external wrench applied by the environment to the link of the $k$-th contact. The Jacobian $J_{\mathcal{C}_k} = J_{\mathcal{C}_k}(q)$ is the map between the robot's velocity $\nu$ and the linear and angular velocity of the $k$-th contact link. Lastly, it is assumed that a set of holonomic constraints of the form $c(q) = 0$ acts on the system \eqref{eq:dynamics}: they may represent, for instance, a frame having a constant position-and-orientation w.r.t. the inertial frame. Hence, we represent the holonomic constraints on links in rigid contact with the environment as $J_{\mathcal{C}_k}(q)\nu = 0$. The holonomic constraints associated with all the rigid contacts can then be compactly represented as: \begin{IEEEeqnarray}{LCL} \IEEEyesnumber \label{eq:holonomic_constraint} J(q) \nu = & \begin{bmatrix} J_{\mathcal{C}_1}(q)\\ ...\\ J_{\mathcal{C}_{n_c}}(q) \end{bmatrix} \nu = 0. \end{IEEEeqnarray} By differentiating the kinematic constraints Eq. \eqref{eq:holonomic_constraint}, one has: \begin{IEEEeqnarray}{LCL} \IEEEyesnumber \label{eq:holonomic_constraint_acc} J\dot{\nu} + \dot{J}\nu = 0.
\end{IEEEeqnarray} By combining the system dynamics \eqref{eq:dynamics} and the constraint equations \eqref{eq:holonomic_constraint_acc}, one obtains the following set of equations: \begin{IEEEeqnarray}{LCL} \IEEEyesnumber \label{eq:constrained_dynamics} M\dot{\nu} + h &=&\, J^\top f + B\tau \IEEEyessubnumber \label{eq:dynamics_constr} \\ J\dot{\nu} + \dot{J}\nu &=& 0, \IEEEyessubnumber \label{eq:constraint_acc} \end{IEEEeqnarray} where $h:= C(q,\nu)\nu + G(q)$, and $f:= [f^1;...; f^{n_c}] \in \mathbb{R}^{6n_c}$ is the vector of contact wrenches ensuring that Eq. \eqref{eq:holonomic_constraint_acc} is satisfied. \subsection{Contact Stability Constraints} \label{sub-section:contact-stability-constraints} \begin{figure}[!t] \includegraphics[height=5cm,width=\columnwidth]{pictures/footSize.png} \centering \caption{Contact surface. The picture highlights the rectangle's dimensions w.r.t. the contact frame $\mathcal{C}$.} \label{fig:foot_size} \end{figure} Often, the holonomic constraints acting on the system represent a flat robot surface in \emph{complete} rigid contact with the environment. To keep these contacts \emph{stable}, conditions on the contact forces and moments must be met. More precisely, let $f^k \in \mathbb{R}^6$ denote the vector of the $k$-th contact wrench associated with the $k$-th \emph{active} rigid contact of a planar robot surface, namely $f^k= [f_x, \hspace{1 mm} f_y, \hspace{1 mm} f_z, \hspace{1 mm} M_x, \hspace{1 mm} M_y, \hspace{1 mm} M_z]^\top$.
Then, the \emph{contact stability} conditions to preserve the planar unilateral contact can be formulated as follows: \begin{IEEEeqnarray}{LCL} \IEEEyesnumber \label{eq:contact_constraints} f_z \hspace{1 mm} > \hspace{1 mm} f_z^{min} \geq 0 \IEEEyessubnumber \label{eq:pos_vert_forces},\\ \sqrt[]{f_x^2 + f_y^2} \hspace{1 mm} < \hspace{1 mm} \mu_c f_z \IEEEyessubnumber \label{eq:static_friction},\\ y_c^{min} < \frac{M_x}{f_z} < y_c^{max} \IEEEyessubnumber \label{eq:CoP_y},\\ x_c^{min} < -\frac{M_y}{f_z} < x_c^{max} \IEEEyessubnumber \label{eq:CoP_x},\\ \left |\frac{M_z}{f_z} \right| \hspace{1 mm} < \hspace{1 mm} \mu_z. \IEEEyessubnumber \label{eq:torsional_friction} \end{IEEEeqnarray} Since the contacts are unilateral, condition \eqref{eq:pos_vert_forces} imposes that the force normal to the contact surface is greater than a threshold $f_z^{min}$, which must be greater than or equal to zero. Eq. \eqref{eq:static_friction} limits the magnitude of the forces parallel to the contact surface so that they do not overcome the static friction, characterised by the coefficient $\mu_c$. Conditions \eqref{eq:CoP_y}-\eqref{eq:CoP_x} constrain the local Center of Pressure -- see, e.g.,~\cite[Appendix B, Eq. (A7)]{Frontiers2015} -- to remain inside the contact surface, which is assumed to be rectangular, with dimensions $x_c^{min},x_c^{max},y_c^{min},y_c^{max}$ calculated w.r.t. the contact reference frame $\mathcal{C}$ and defined as shown in Fig. \ref{fig:foot_size}. Eq.~\eqref{eq:torsional_friction} prevents foot rotation about the axis normal to the contact surface, with $\mu_z$ the torsional friction coefficient. \section{Conclusions and Future Work} \label{sec:conclusion} In this paper, we addressed some common limitations of force and torque controllers for floating base systems based on Quadratic Programming.
More specifically, we removed inequality constraints from the optimization problem by designing an invertible, one-to-one mapping that parametrises the contact wrenches into a new set of unconstrained variables. This parametrization guarantees that the contact wrenches always satisfy the contact stability constraints. Based on this mapping, we designed a jerk control framework for floating base systems. We then analyzed a specific use case of the jerk controller, namely a momentum-based jerk control architecture for balancing a 23-DoF humanoid robot. The controller has been validated both in simulation and on the real iCub, and compared with a classical momentum-based controller. Sensitivity to errors in the estimation of the momentum rate of change is identified as a drawback of the approach, as it may affect stability and convergence of the closed-loop dynamics. A solution for increasing the robustness of the controller w.r.t. biases on the momentum estimation is presented. A limitation is that the proposed jerk control architecture does not take into account joint position and torque limits. Future work may involve the integration of these limits in the control framework by extending the approach presented in \cite{marie2016} to the case of floating base robots. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{pictures/momentumNorm.pdf} \caption{Linear (top) and angular (bottom) momentum error norm during contact switching demo. A peak of momentum error is present during contact switching if momentum-QP control is used. During highly dynamic movements, the tracking performances of the two controllers are comparable.} \label{fig:mom_err_norm} \end{figure} In this paper, the jerk control architecture implements a momentum task and a postural task for balancing. Further developments will extend jerk control to more complex control objectives, such as humanoid robot walking.
\section{Jerk Control} \label{sec:control} This section proposes control laws that exploit the contact wrench parametrisation~\eqref{eq:parametrized_wrench} and attempt to address the limitations -- listed in Section~\ref{sub-section:classical-qp} -- of the classical torque-based controllers framed as stack-of-tasks optimisation problems. \subsection{Jerk control with parametrised contact wrenches} The wrench parametrisation~\eqref{eq:parametrized_wrench} can be used to remove the constraint $f \in \mathcal{K}$ from the optimisation problem~\eqref{eq:sot}. This process would lead to the following formulation: \begin{IEEEeqnarray}{LLL} \IEEEyesnumber \label{eq:sot-noK} \minimize_{y=(\dot{\nu},\xi,\tau)} ~& \norm{Ag(y)-a^*}^2 \\ \text{subject to:}~& \nonumber \\ &\begin{bmatrix} M(q) & -J^\top & -B \\ J & 0_{6n_c} & 0_{6n_c,n} \end{bmatrix} \begin{bmatrix}\dot{\nu} \\ \phi(\xi) \\ \tau \end{bmatrix} {=} \begin{bmatrix} -h(q,\nu) \\ -\dot{J}\nu \end{bmatrix} \nonumber \end{IEEEeqnarray} with $g(y):= [\dot{\nu};\phi(\xi);\tau]$. The main drawbacks of the above approach are: $i)$ the optimisation problem~\eqref{eq:sot-noK} can no longer be cast as a QP, since the parametrisation $\phi(\xi)$ is nonlinear; we would then need nonlinear optimisers -- often slower than QP solvers -- to solve~\eqref{eq:sot-noK}; $ii)$ the limitations 1), 2) listed in Section~\ref{sub-section:classical-qp} are not addressed. To include feedback terms into the control laws, the contact wrenches, or the accelerations, shall become part of the system state. In the language of Automatic Control, we shall then proceed with augmenting the relative degree of the output (or task) that one wants to stabilise~\cite{isidori2013}.
More precisely, assume that: $hp{-}i)$ the control objective is the stabilisation of a desired jerk $\dot{a}^*$; $hp{-}ii)$ the joint torque rate-of-change $\dot{\tau}$ can be considered as a control input; $hp{-}iii)$ both the joint torques $\tau$ and the contact forces $f$ are measurable quantities, so the robot acceleration $\dot{\nu}$ -- if not measurable -- can be obtained from~\eqref{eq:constrained_dynamics}. Now, define \begin{IEEEeqnarray}{RCL} \IEEEyesnumber D&:= &\begin{bmatrix} M(q) & -J^\top & -B \\ J & 0_{6n_c} & 0_{6n_c,n} \end{bmatrix}, \IEEEyessubnumber \\ \beta &:=& \begin{bmatrix} -h(q,\nu) \\ -\dot{J}\nu \end{bmatrix}. \IEEEyessubnumber \end{IEEEeqnarray} As a consequence of $hp{-}iii)$, one has a measurement of the vector $y=(\dot{\nu},f,\tau)$, while the variable $\dot{y}$ can be used as a search variable. Then, control laws for $\dot{\tau}$ that contain feedback information from the FT sensors are obtained as an outcome of the following optimisation problem: \begin{IEEEeqnarray}{LLL} \IEEEyesnumber \label{eq:sotJerk} \minimize_{\dot{y}=(\ddot{\nu},\dot{f},\dot{\tau})} ~& \norm{\dot{A}y + A\dot{y} -\dot{a}^*}^2 \IEEEyessubnumber \\ \text{subject to:}~& \nonumber \\ &\dot{D}y+D\dot{y} = \dot{\beta}\nonumber \\ & f \in \mathcal{K}. \IEEEyessubnumber \label{eq:finKsotJerk} \end{IEEEeqnarray} The solutions to the above problem are continuous in $\tau$ (even if $\dot{\tau}$ is discontinuous) and contain FT measurement feedback from the vector $y$. One of the main difficulties in solving~\eqref{eq:sotJerk} arises from~\eqref{eq:finKsotJerk}. In fact, since the variable $f$ is no longer a search variable, one cannot instantaneously choose values of the contact wrenches such that $f \in \mathcal{K}$. One may attempt to make~\eqref{eq:finKsotJerk} satisfied by appropriately regulating the variable~$\dot{f}$, which influences the wrench $f$ at the next time step.
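Setting the cone constraint \eqref{eq:finKsotJerk} aside for a moment, \eqref{eq:sotJerk} is, for a fixed measured $y$, an equality-constrained least-squares problem in $\dot{y}$, solvable through its KKT system. A self-contained Python sketch follows; all matrices are random placeholders standing in for $A$, $D$ and for the known terms $\dot{a}^* - \dot{A}y$ and $\dot{\beta} - \dot{D}y$ (the dimensions are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n_y, n_c = 10, 4                      # placeholder dimensions of dot{y} and constraints
A = rng.standard_normal((6, n_y))     # stands in for the task map A
b = rng.standard_normal(6)            # stands in for  dot{a}* - dot{A} y
D = rng.standard_normal((n_c, n_y))   # stands in for the constraint map D
c = rng.standard_normal(n_c)          # stands in for  dot{beta} - dot{D} y

# KKT system of  min ||A u - b||^2  s.t.  D u = c
K = np.block([[A.T @ A, D.T],
              [D, np.zeros((n_c, n_c))]])
rhs = np.concatenate([A.T @ b, c])
sol = np.linalg.solve(K, rhs)
u = sol[:n_y]                         # the minimiser (here: dot{y})

assert np.allclose(D @ u, c)          # equality constraint satisfied
# Stationarity: the residual gradient lies in the row space of D.
assert np.allclose(A.T @ (A @ u - b), -D.T @ sol[n_y:])
```

In practice an off-the-shelf QP solver would be used instead of an explicit KKT solve, but the structure of the solution is the same.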
A possibility to make~\eqref{eq:finKsotJerk} satisfied is to use the parametrisation in Lemma~\ref{lemma_1}: the gradient of the parametrisation automatically enforces the fact that $f(t) \in \mathcal{K} \ \forall t$. More precisely, in view of~\eqref{gradient-phi}, one has $\dot{y} = (\ddot{\nu},\Phi(\xi)\dot{\xi},\dot{\tau})$, which leads to the following optimisation problem: \begin{IEEEeqnarray}{LLL} \IEEEyesnumber \label{eq:sotJerkNoK} \minimize_{u=(\ddot{\nu},\dot{\xi},\dot{\tau})} ~& \norm{\dot{A}y + APu -\dot{a}^*}^2 \IEEEyessubnumber \\ \text{subject to:}~& \nonumber \\ &\dot{D}y+DPu = \dot{\beta}, \IEEEyessubnumber \end{IEEEeqnarray} with $P$ defined as: \[ P:= \begin{pmatrix} \mathbf{I}_{n+6} & 0_{n+6,6n_c} & 0_{n+6,n} \\ 0_{6n_c,n+6} & \Phi(\xi) & 0_{6n_c,n} \\ 0_{n,n+6} & 0_{n,6n_c} & \mathbf{I}_n \end{pmatrix}. \] In order to be solved at each time instant, the optimisation problem~\eqref{eq:sotJerkNoK} requires the variable $\xi$. This variable may be retrieved either from time integration of~$\dot{\xi}$, or by inverting the relationship $f = \phi(\xi)$: since the parametrisation is a one-to-one correspondence (see Lemma~\ref{lemma_1}), there exists a unique $\xi$ for any value of the contact wrench $f$, provided that it belongs to $\mathcal{K'} \subset \mathcal{K}$. The latter option allows us to inject further information from the FT sensor measurements into the optimal control laws $u$ solving the optimisation problem~\eqref{eq:sotJerkNoK}. Note also that the matrix $P$ is invertible thanks to property $3)$ of Lemma~\ref{lemma_1}. The invertibility of $P$ clearly plays a pivotal role when solving the optimisation problem~\eqref{eq:sotJerkNoK}. \subsection{On the modeling and control requirements for jerk control} \label{sub-tauDot} The optimal value $\dot{\tau}$ solving~\eqref{eq:sotJerkNoK} may be sent directly to the robot low-level control system if the latter allows setting desired rates of change of the joint torques.
This may be feasible, for instance, when the low-level robot control exploits the model between the joint torques $\tau$ and the motor currents $i$, e.g. $\tau = k_\tau i$. More precisely, the motor currents are usually subject to electrical dynamics of the kind $\tfrac{d}{dt}{i} = k_i v$, where $v$ is often the motor voltage to be applied -- namely, the real control input. Then, it is straightforward to express the optimisation problem~\eqref{eq:sotJerkNoK} so that the search variable $u$ contains~$v$. Let us observe, however, that this control architecture in general requires high-frequency control loops (e.g. $5-20$ kHz) for generating the motor voltages~$v$: these loops have to compute inverse dynamics within a short control cycle. If the control loops are not fast enough, sampling effects may dominate and render the assumption $\tfrac{d}{dt}{i}~=~k_i v$ not representative of the underlying physical dynamics. In this case, the associated control strategy resulting from~\eqref{eq:sotJerkNoK} may prove to be ineffective. Another necessary requirement for achieving jerk control is the calculation of the terms $\dot{A}$, $\dot{D}$ and $\dot{\beta}$ to solve the optimisation problem~\eqref{eq:sotJerkNoK}. These terms in general depend on the robot configuration $q$, velocity $\nu$, and acceleration $\dot{\nu}$, and require the derivatives of the system inverse dynamics. Besides numerical approximations for computing these terms, existing libraries nowadays provide support for automatic differentiation and/or direct computation of the derivatives of inverse dynamics~\cite{carpentier2018analytical,Andersson2018}. If some of the terms in~\eqref{eq:sotJerkNoK} are not available, one may attempt setting them equal to zero and tune the feedback control gains in $\dot{a}^\ast$ so as to achieve robustness against the resulting errors.
However, we present below a jerk control architecture that overcomes the above modeling and control limitations of the mere application of~\eqref{eq:sotJerkNoK}. \section{Momentum-based Jerk control} \label{sec-jerk-momentum-control} This section proposes control laws that can be obtained from the problem~\eqref{eq:sotJerkNoK} when it is explicitly solved and extended to a two-layer stack-of-tasks. These laws can also be shown to possess stability properties. Interestingly, the architecture presented below does not need the feedforward terms, required by~\eqref{eq:sotJerkNoK}, that depend on the inverse dynamics derivatives. This is achieved by losing the continuity property of $\tau$ while retaining the continuity of the contact wrenches $f$. More precisely, we assume that: i) the highest priority task is the stabilisation of a desired robot centroidal momentum~\cite{traversaro2017,orin08}; ii) the lower priority task aims at stabilising the robot \emph{posture} to regulate the system \emph{zero dynamics} \cite{isidori2013}. Let us recall that the momentum rate-of-change equals the sum of all the external wrenches acting on the robot. In a multi-contact scenario, the external wrenches reduce to the contact wrenches plus the gravity force: \begin{IEEEeqnarray}{LCL} \IEEEyesnumber \dot{H} &=& \sum_{k = 1}^{n_c} A_kf^k - mge_3 = Af -mge_3, \IEEEyessubnumber \label{eq:centroidal_momentum} \\ A &:=& \hspace{0.5 mm} [A_1, ...
\hspace{0.5 mm}, A_{n_c}] \in \mathbb{R}^{6 \times 6n_c}, \IEEEyessubnumber \label{matrix_A} \\ A_k &=& \begin{bmatrix} 1_3 & 0_{3} \\ S(\,^{\mathcal{I}}o_{\mathcal{C}_k} - \,^{\mathcal{I}}o_{CoM}) & 1_3 \end{bmatrix}, \IEEEyessubnumber \label{matrix_Ak} \end{IEEEeqnarray} where $H \in \mathbb{R}^6$ is the robot's momentum, $A_k \in \mathbb{R}^{6 \times 6}$ is the matrix mapping the $k$-th contact wrench to the momentum dynamics, $^{\mathcal{I}}o_{\mathcal{C}_k} \in \mathbb{R}^3$ is the origin of the frame associated with the $k$-th contact, and $^{\mathcal{I}}o_{CoM} \in \mathbb{R}^3$ is the CoM position. Recall that the rate-of-change of the robot momentum~\eqref{eq:centroidal_momentum} is related to the system accelerations (e.g. the acceleration of the system center of mass). So, to derive jerk-based control laws, we need to differentiate~\eqref{eq:centroidal_momentum} w.r.t. time, which yields: \begin{IEEEeqnarray}{LCL} \IEEEyesnumber \label{eq:centroidal_momentum_acc} \ddot{H} &=& A\dot{f} + \dot{A}f = A\Phi(\xi)\dot{\xi} + \dot{A}f, \\ \dot{A} &:=& \hspace{0.5 mm} [\dot{A}_1, ... \hspace{0.5 mm}, \dot{A}_{n_c}], \nonumber \\ \dot{A}_k &=& \begin{bmatrix} 0_3 & 0_{3} \\ S(\,^{\mathcal{I}}\dot{o}_{\mathcal{C}_k} - \,^{\mathcal{I}}\dot{o}_{CoM}) & 0_3 \end{bmatrix}, \quad k = 1, ... \hspace{0.5 mm}, n_c. \nonumber \end{IEEEeqnarray} Note that Eq. \eqref{eq:centroidal_momentum_acc} is linear w.r.t. $\dot{\xi}$, and optimisation problems similar to~\eqref{eq:sotJerkNoK} may be laid down.
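As a concrete illustration of \eqref{eq:centroidal_momentum}-\eqref{matrix_Ak}, the Python sketch below (mass, CoM and contact locations are arbitrary placeholders) assembles $A$ for two contacts and verifies that vertical contact forces summing to $mg$, applied symmetrically about the CoM, yield $\dot{H} = 0$:

```python
import numpy as np

def skew(x):
    # S(x) y = x x y
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

def A_k(o_contact, o_com):
    # Eq. (matrix_Ak): maps the k-th contact wrench to the momentum dynamics.
    return np.block([[np.eye(3), np.zeros((3, 3))],
                     [skew(o_contact - o_com), np.eye(3)]])

m, g = 30.0, 9.81                       # placeholder mass [kg], gravity [m/s^2]
e3 = np.zeros(6); e3[2] = 1.0           # canonical vector of R^6
o_com = np.array([0.0, 0.0, 0.5])       # CoM above the contacts
contacts = [np.array([0.0, 0.1, 0.0]),  # two feet, symmetric about the CoM
            np.array([0.0, -0.1, 0.0])]

A = np.hstack([A_k(c, o_com) for c in contacts])   # Eq. (matrix_A)
assert A.shape == (6, 12)

# Pure vertical forces of m*g/2 at each foot balance gravity exactly:
f = np.zeros(12)
f[2] = f[8] = m * g / 2.0
H_dot = A @ f - m * g * e3              # Eq. (eq:centroidal_momentum)
assert np.allclose(H_dot, np.zeros(6))
```

Shifting either contact location breaks the symmetry and produces a nonzero angular component in $\dot{H}$, as expected from the skew-symmetric block in $A_k$.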
In particular, to obtain the robot momentum stability, one may: $i)$~consider $\dot{\xi}$ as the control input -- or search variable -- of the momentum acceleration~\eqref{eq:centroidal_momentum_acc}; $ii)$~apply feedback linearization to~\eqref{eq:centroidal_momentum_acc} in order to impose a momentum acceleration $\ddot{H}^*$ of the form: \begin{IEEEeqnarray}{LCL} \label{eq:closed_loop_fb_lin} \ddot{H}^* = \ddot{H}_d - K_d\dot{\tilde{H}} - K_p{\tilde{H}}, \end{IEEEeqnarray} where $K_d,K_p \in \mathbb{R}^{6 \times 6}$ are symmetric and positive definite matrices, $H_d\in \mathbb{R}^{6}$ is the reference momentum, and $\tilde{H} = H - H_d$ is the momentum error. Observe that it is always possible to find~$\dot{\xi}$ such that \begin{IEEEeqnarray}{LCL} \label{eq:closed_loop_fb_linH} \ddot{H}(\dot{\xi}) = \ddot{H}^* \end{IEEEeqnarray} because of item $3)$ of Lemma~\ref{lemma_1}. More precisely, since the gradient~$\Phi$ is always invertible, the matrix~$A\Phi$ in~\eqref{eq:centroidal_momentum_acc} is full rank $\forall \ \xi$. Consequently, $\dot{\xi}$ has full control authority on the momentum acceleration for any value of $\xi$. Clearly, one can impose~\eqref{eq:closed_loop_fb_linH} as long as $\xi$ remains bounded. The stability properties of the control laws ensuring \eqref{eq:closed_loop_fb_linH} are presented below. \newtheorem{theorem}{Theorem} \begin{lemma} \label{lemma_12} Assume that: \begin{itemize} \item the robot makes at least one rigid contact with the environment, i.e. $n_c \geq 1$; \item the desired momentum $H_d$ is a feasible system equilibrium such that there exists $f_e(t) \in \mathcal{K'} $ satisfying \[\dot{H}_d = Af_e -mge_3;\] \item the variable $\dot{\xi}$ is chosen so that \eqref{eq:closed_loop_fb_linH} is satisfied, with $\ddot{H}$ given by \eqref{eq:centroidal_momentum_acc} and $\ddot{H}^*$ by \eqref{eq:closed_loop_fb_lin}.
\end{itemize} Then: \begin{enumerate} \item the equilibrium point $(\tilde{H},\dot{\tilde{H}}) = (0,0)$ is locally asymptotically stable if the robot makes one rigid contact with the environment, i.e. $n_c = 1$; \item the equilibrium point $(\tilde{H},\dot{\tilde{H}}) = (0,0)$ is globally asymptotically stable if $\xi$ is bounded, namely if there exists a constant $c \in \mathbb{R}^+$ such that $|\xi(t)| < c \ \forall t$. \end{enumerate} \end{lemma} The proof is in the Appendix \ref{appendix_12}. Lemma \ref{lemma_12} shows that as long as we satisfy Eq. \eqref{eq:closed_loop_fb_linH}, the system trajectories converge towards the desired values. The possibility of satisfying Eq. \eqref{eq:closed_loop_fb_linH} is inherently related to the boundedness of $\xi$, which guarantees that the matrix $A\Phi(\xi)$ in \eqref{eq:centroidal_momentum_acc} remains of full rank. More precisely, item $1)$ shows that the boundedness of $\xi$ is achieved locally to the equilibrium point when the number of contacts is equal to one. When the number of contacts is greater than one, there is a redundancy in the $\dot{\xi}$ solving \eqref{eq:closed_loop_fb_linH}, and this redundancy should be exploited so that $\xi$ remains bounded. In this case, in fact, item $2)$ of Lemma \ref{lemma_12} shows that the system trajectories globally converge towards the desired values. We show in the next section a possible choice for the redundancy of $\dot{\xi}$ that proved to work effectively both in simulation and in real experiments. Let us remark, however, that a globally bounded $\xi$ cannot in general be guaranteed, and hence neither can global asymptotic stability. For instance, think of a humanoid robot standing on one foot and starting with a high velocity of its center of mass: beyond a certain velocity the contact breaks, namely, the variable $\xi$ grows indefinitely.
However, we can monitor the adverse conditions where contacts are about to break by looking at the norm of the variable $\xi$. This is, in the authors' opinion, an interesting result of the proposed approach. Often, the control objective of center-of-mass trajectory tracking is framed as momentum control. In this case, one is tempted to use the approach presented above with \begin{IEEEeqnarray}{LCL} \label{eq:closed_loop_fb_lin_int} \ddot{H}^* = \ddot{H}_d - K_d\dot{\tilde{H}} - K_p{\tilde{H}} - K_i\int_0^t{\tilde{H}} dt, \\ K_i = K_i^\top > 0, \nonumber \end{IEEEeqnarray} where the (linear momentum) integral correction terms can be replaced by the position error between the center-of-mass trajectory and its desired value. The resulting third order system~\eqref{eq:closed_loop_fb_linH}-\eqref{eq:closed_loop_fb_lin_int}, however, is in general very sensitive to gain tuning, as not all combinations of the gain matrices guarantee stability of the associated closed-loop system. This limitation affects the controller's performance when applied to the real robot, where phenomena such as modeling errors, measurement noise and external disturbances further limit the choice of the control gains. \subsection{Momentum-based jerk control with integral terms} We propose a control algorithm alternative to \emph{pure} feedback linearization with the goal of facilitating the gain tuning of the closed-loop system dynamics. In particular, consider as control objective the stabilization of $(I,\tilde{H},\zeta)$ towards the reference values $(0,0,0)$, with $I$ the integral of the momentum error, $\tilde{H}$ the momentum error, and $\zeta$ an exogenous state that will be used to prove Lyapunov stability, i.e.
\begin{IEEEeqnarray}{LCL} \IEEEyesnumber \label{eq:output_ext} I &:=& \int_0^t\tilde{H}dt, \IEEEyessubnumber \label{eq:I}\\ \tilde{H}&:=& H - {H}_d, \IEEEyessubnumber \label{eq:tilde_H}\\ \zeta &:=& Af -mge_3 - \dot{H}_d + K_d \tilde{H} + K_p I, \IEEEyessubnumber \label{eq:zeta} \end{IEEEeqnarray} whose dynamics read: \begin{IEEEeqnarray}{LCL} \IEEEyesnumber \label{eq:output_dyn} \dot{I} &:=& \tilde{H}, \IEEEyessubnumber \label{eq:dot_I}\\ \dot{\tilde{H}} &:=& Af -mge_3 -\dot{H}_d = \zeta - K_d \tilde{H} - K_p I, \IEEEyessubnumber \label{eq:dot_tilde_H}\\ \dot{\zeta} &:=& \dot{A}f + A\Phi(\xi)\dot{\xi} - \ddot{H}_d + K_d \dot{\tilde{H}} + K_p\tilde{H}. \IEEEyessubnumber \label{eq:dot_zeta} \end{IEEEeqnarray} The dynamics \eqref{eq:output_dyn} are obtained by differentiating $(I,\tilde{H},\zeta)$ and substituting the corresponding definitions (\ref{eq:I})-(\ref{eq:zeta}). Then, the following result holds. \begin{lemma} \label{lemma_2} Assume that: \begin{itemize} \item the robot makes at least one rigid contact with the environment, i.e. $n_c \geq 1$; \item the desired momentum $H_d$ is a feasible system equilibrium such that there exists $f_e(t) \in \mathcal{K'} $ satisfying \[\dot{H}_d = Af_e -mge_3.\] \end{itemize} Choose: \begin{align} \label{eq:input_momentum} \dot{\xi} = \, & {(A\Phi)}^{\dagger}\,[\ddot{H}_d - (K_d + 1_6)\dot{\tilde{H}} \\ & - (K_d + K^{-1}_o + K_p)\tilde{H} - K_p I - \dot{A}f] \nonumber \\ &+ N_{A\Phi}\dot{\xi}_0 \nonumber, \end{align} with $K_o,K_p,K_d {\in} \mathbb{R}^{6 \times 6}$ symmetric positive definite matrices, \[N_{A\Phi} = (1_{6n_c} - {(A\Phi)}^{\dagger}A\Phi)\] the projector onto the null space of $A\Phi$, and $\dot{\xi}_0$ a free variable of proper dimension. Then: \begin{enumerate} \item the equilibrium point $(I,\tilde{H},\zeta) = (0,0,0)$ is locally asymptotically stable if the robot makes one rigid contact with the environment, i.e.
$n_c = 1$; \item the equilibrium point $(I,\tilde{H},\zeta) = (0,0,0)$ is globally asymptotically stable if $\xi$ is bounded, namely if there exists a constant $c \in \mathbb{R}^+$ such that $|\xi(t)| < c \ \forall t$. \end{enumerate} \end{lemma} The proof is in the Appendix \ref{appendix_2}. Lemma~\ref{lemma_2} shows that the additional integral terms do not break the stability properties achieved in Lemma \ref{lemma_12}, and that one is left with free positive definite gains $K_o, \ K_p, \ K_d$. Let us recall that the constraints~\eqref{eq:contact_constraints} remain satisfied while ensuring the stability properties of the associated closed-loop system, a claim that cannot usually be made for classical stack-of-tasks approaches~\eqref{eq:sot}. The control law~\eqref{eq:input_momentum} contains both feedforward and feedback terms that depend on the measured contact wrenches. It makes use, in fact, of~\eqref{eq:centroidal_momentum} for computing $\dot{H}$, which depends on the measured contact wrenches. In the case of a single contact, the null space of the matrix $A\Phi$ is trivial, i.e. $N_{A\Phi} = 0$, and there exists a unique control input $\dot{\xi}$ given by~\eqref{eq:input_momentum}. In the case of multiple contacts ($n_c > 1$), instead, infinitely many control inputs satisfy~\eqref{eq:input_momentum}. We solve the associated redundancy using the free variable $\dot{\xi}_0$ to minimize the norm of the robot joint torques. The computation of $\dot{\xi}_0$ is detailed in Appendix \ref{computation-xi-0}. Let us remark again the importance of the invertibility of the gradient $\Phi$ -- see Lemma~\ref{lemma_1}. This property guarantees that the matrix $A\Phi$ in~\eqref{eq:input_momentum} is full rank, so $\dot{\xi}$ has full control authority on the momentum acceleration for any value of $\xi$.
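The structure of \eqref{eq:input_momentum} -- a pseudoinverse-based particular solution plus a null-space term -- can be sketched numerically. In the Python snippet below, a random full-row-rank matrix stands in for $A\Phi$ with $n_c = 2$, and a random vector stands in for the bracketed feedback term; any choice of the free variable $\dot{\xi}_0$ leaves the momentum-level objective unaffected:

```python
import numpy as np

rng = np.random.default_rng(2)
n_c = 2                                    # two contacts: redundancy available
APhi = rng.standard_normal((6, 6 * n_c))   # placeholder for A @ Phi(xi), full row rank
b = rng.standard_normal(6)                 # placeholder for the bracketed term in (eq:input_momentum)
xi0_dot = rng.standard_normal(6 * n_c)     # free variable dot{xi}_0

pinv = np.linalg.pinv(APhi)
N = np.eye(6 * n_c) - pinv @ APhi          # null-space projector N_{A Phi}
xi_dot = pinv @ b + N @ xi0_dot            # structure of Eq. (eq:input_momentum)

# A Phi has full row rank, so the particular solution tracks b exactly,
# and the null-space term is invisible at the momentum level:
assert np.allclose(APhi @ xi_dot, b)
assert np.allclose(APhi @ N, np.zeros((6, 6 * n_c)))
```

In the paper's controller the free direction $\dot{\xi}_0$ is not random but chosen to minimize the joint torque norm, as detailed in Appendix \ref{computation-xi-0}.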
\subsection{Computation of $f$, $\dot{H}(f)$ and $\Phi(\xi)$} \label{subsec:get_dyn_params} The control input~\eqref{eq:input_momentum} requires: the contact wrenches~$f$; the momentum derivative $\dot{H} = \dot{H}(f)$; and the associated variable $\xi$ such that $f = \phi(\xi)$. The contact wrenches can be measured/estimated using the measurements from 6-axis FT sensors installed on the robot. Once the wrenches $f$ are retrieved, we can compute the momentum rate of change via~\eqref{eq:centroidal_momentum}. The associated $\xi$ can be computed by applying the parametrisation \emph{inverse} mapping, namely~$\xi = \phi^{-1}(f)$. The inverse mapping exists provided that the measured contact wrenches remain inside the set $\mathcal{K'}$. If the measured wrenches do not belong to $\mathcal{K'}$ (because of, e.g., measurement noise, unmodeled external disturbances, etc.), a saturation shall be applied in the calculation of the inverse mapping so that the control input $\xi$ always remains finite. \subsection{Computation of the joint torques to realize $\dot{\xi}$} \label{sub-xiDotControl} To realize a given $\dot{\xi}$, e.g. the law in~\eqref{eq:input_momentum}, we have to choose the \emph{real} control input of the system properly. We assume in this section that the control input is the joint torque $\tau$, so we cannot impose a desired $\dot{\tau}$ instantaneously. A possibility for computing the joint torques is to find $\dot{\tau}$ realising $\dot{\xi}$, and then perform time-integration of $\dot{\tau}$ to obtain $\tau$. This procedure, however, requires computing some derivatives of the inverse dynamics, which may not be available in practice. For this reason, we follow here another route for finding the joint torques $\tau$ attempting to realise $\dot{\xi}$. First, we find the instantaneous relationship between the joint torques $\tau$ and the contact wrenches $f$. 
This relationship can be found, for instance, by substituting the state accelerations $\dot{\nu}$ from~\eqref{eq:dynamics_constr} into the constraints~\eqref{eq:constraint_acc}, which leads to: \begin{IEEEeqnarray}{LCL} \label{eq:torques_forces} JM^{-1}(J^\top f -h) + \Lambda\tau + \dot{J}\nu = 0, \end{IEEEeqnarray} with $\Lambda = JM^{-1}B$. Then, we proceed as follows: \begin{itemize} \item Integrate the control input $\dot{\xi}$ to obtain $\xi$. The initial conditions for the integrator can be calculated by measuring the initial contact forces $f(0)$ and by applying the parametrization \emph{inverse mapping}, i.e. $\xi(0) = \phi^{-1}(f(0))$; \item Apply the parametrization direct mapping to evaluate the wrenches $f$ from $\xi$, i.e. $f = \phi(\xi)$. By doing so, note that $f$ always satisfies the contact stability constraints; \item Retrieve the input torques $\tau$ from~\eqref{eq:torques_forces}, which are given by \end{itemize} \begin{IEEEeqnarray}{LCL} \label{eq:input_torques} \tau = \Lambda^{\dagger}(JM^{-1}(h - J^\top f) - \dot{J}\nu) + N_{\Lambda}\tau_0, \end{IEEEeqnarray} where $N_{\Lambda} = (1_n - {\Lambda}^{\dagger}\Lambda)$ is the projector onto the null space of $\Lambda$, and $\tau_0$ is a free variable that can be chosen as in \cite{nava2016} to guarantee the stability of the system's zero-dynamics. \section{Introduction} \label{sec:intro} \IEEEPARstart{N}{onlinear} feedback control of fixed-base (e.g. manipulators) and floating-base (e.g. humanoids) robots is not new to the Control community~\cite{deWit1996,yoshikawa2000,mistry2010,sentis2005}. Feedback-linearisation, robust control, and adaptive laws are only a few examples of the large variety of control methods developed for steering these systems towards desired quantities. When fixed and floating-base systems make contact with the environment, the robot controller also has to regulate the contact forces and torques towards values that ensure a desired interaction. 
This paper contributes towards the stabilisation of floating-base systems in contact with the environment by proposing controllers that ensure \emph{contact stability} (see, e.g., \cite{Frontiers2015}) while including force feedback in the control laws. The proposed approach uses the rate-of-change of the joint torques as control input, and for this reason it is here referred to as \emph{jerk} control. Force control strategies for fixed-base systems can be roughly divided into two categories: \emph{direct} and \emph{indirect} force control methods~\cite{deWit1996,Villani2015}. Indirect methods achieve compliance without explicitly closing a feedback loop on the measured contact forces. In particular, \emph{impedance control} is a common objective for indirect techniques, whose goal is often that of achieving a desired dynamic behavior of the robot's end-effector. The control design requires including force and torque measurements as feedforward terms to achieve full feedback linearization of the end-effector dynamics. If no force measurement is available, a simplified \emph{stiffness control} can still be applied \cite{deWit1996}. On the other hand, direct force control methods include explicit feedback from the measured interaction wrenches, usually related to the force error \cite{Villani2015,ortenzi2017}. An example of these techniques is the hybrid position/force control, which is often applied when the environment is rigid and the end-effector has to continuously maintain a contact. The rigid environment assumption enables the decomposition of the end-effector dynamics into a \emph{constrained} and a \emph{free} direction~\cite{raibert1981}. Along the constrained directions, feasible desired forces are exerted. Here, additional feedback terms are added to ensure convergence in the presence of external disturbances and unmodeled dynamics. 
Although the \emph{desired} force satisfies the contact stability constraints, the \emph{commanded} force, which includes the feedback from Force/Torque (FT) sensors, may instantaneously violate the contact constraints. The recent research effort on humanoid robots gave impetus to the force control of floating-base systems~\cite{Featherstone2007,Ott2011,Wensing2013,Hopkins2015a}. These systems are often underactuated, namely, the number of control inputs is smaller than the number of the system's degrees of freedom~\cite{spong1998}. The lack of actuation is usually circumvented by means of the contacts between the robot and the environment, but this requires close attention to the forces the robot exerts at the contact locations. If not regulated appropriately, the contact forces may break a contact, which makes the robot control critical~\cite{Ott2011,Wensing2013}. The contacts between a floating-base system and the environment are often assumed to be rigid and \emph{planar}, although compliant contacts and uneven ground are also considered in the literature \cite{Azad2015,Caron2019,Henze2018,liu2015b}. Furthermore, all contacts are \emph{unilateral}, since the robot is not physically attached to the ground and is in general able to make and break contacts. Contact activation and deactivation occur continuously, e.g. in the case of humanoid robot walking, and can be addressed with the design of a proper \emph{state machine} that plans references for a balancing/walking controller \cite{park2013,liu2015}. From the control design perspective, a common strategy for floating-base systems is based on the so-called stack-of-task approach~\cite{mansard2009}. These strategies usually consider several control objectives organized in a hierarchical or weighted prioritization. 
Often, the high-priority task is the stabilization of the robot momentum~\cite{koolen2015design}: its objective is the control of the robot center-of-mass and angular momentum by means of contact forces while guaranteeing stable zero-dynamics~\cite{nava2016}. Quadratic programming (QP) solvers can be used to constrain the contact forces so as to ensure both robot and contact stability \cite{Lee2012}. Analogously to direct and indirect force controllers, QP based force control of floating base systems usually suffers from the following limitations: $i)$ optimal control inputs (i.e., the joint torques) may be discontinuous in certain cases, e.g. during the switching from two-feet to one-foot balancing of humanoid robots. In this case, the robot control may become critical. Although further (and often numerous) constraints can be added to the QP solver to enforce continuity, the effectiveness of this approach is not always satisfactory in practice; $ii)$ force feedback from FT sensors is missing in the control action. This paper proposes a control approach that attempts to address these two main limitations of QP based controllers for floating base systems. The key ingredients of our approach are: $a)$ an invertible one-to-one mapping between a set of constraint-free variables and an inner approximation of the contact stability manifold; $b)$ controllers that use the wrench mapping and exploit the rate-of-change of the joint torques as control input; $c)$ an extension of the proposed controllers to the case when the joint torques (and not their rate-of-change) are the available control input. The proposed approach exploits a relative degree augmentation of the underlying system, i.e., the system state is composed of the system position, velocity, and acceleration. For this reason, the proposed approach is referred to as \emph{jerk} control, which also incorporates force feedback from FT sensors. 
Furthermore, we present control laws that stabilise a desired robot momentum using joint torques as input and having guaranteed Lyapunov stability properties. The proposed approach is validated with simulations and experiments using the humanoid robot iCub balancing on rigid contacts. The paper is organized as follows. Section \ref{sec:background} recalls the notation, the robot modeling, and the contact stability constraints, and introduces the problem statement. Section~\ref{sec:parametrization} presents a novel contact-stable parametrization of the contact wrenches. Section~\ref{sec:control} introduces the main ideas behind \emph{jerk} control using the contact-stable parametrisation. Section~\ref{sec-jerk-momentum-control} presents Lyapunov stable \emph{jerk} controllers when the control objective is the stabilisation of the robot momentum. Section~\ref{sec:results} presents validations of the approach on the humanoid robot iCub. The paper ends with conclusions and perspectives. \section{A Contact-Stable Wrench Parametrization} \label{sec:parametrization} Parametrisations can be an effective way to transform constrained optimisation problems into unconstrained ones~\cite{marie2016}. Consider, for instance, the following optimisation problem: \begin{IEEEeqnarray}{LLL} \IEEEyesnumber \label{eq:minConstraned} \minimize_{y} ~& \text{cost}(y) \\ \text{subject to}~& \nonumber \\ & y>0 \nonumber, \end{IEEEeqnarray} with $\text{cost}(\cdot):\mathbb{R}{\rightarrow}\mathbb{R}$, and $y\in\mathbb{R}$. If there exists a solution to~\eqref{eq:minConstraned}, seeking this solution is equivalent to solving the following problem: \begin{IEEEeqnarray}{LLL} \IEEEyesnumber \label{eq:minUnConstraned} \minimize_{\xi} ~& \text{cost}(e^\xi), \end{IEEEeqnarray} with $\xi \in \mathbb{R}$. For the specific case \eqref{eq:minConstraned}, it is trivial to find a parametrisation ensuring $y>0$. 
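The reparametrisation above can be sketched numerically as follows. The quadratic cost and the plain gradient descent are illustrative choices, not taken from the paper:

```python
import math

# Hypothetical cost whose constrained minimiser is y* = 2 (subject to y > 0).
cost = lambda y: (y - 2.0) ** 2

# Unconstrained reformulation: optimise over xi, with y = exp(xi) > 0.
xi = 0.0
for _ in range(200):
    y = math.exp(xi)
    grad = 2.0 * (y - 2.0) * y  # chain rule: d cost(exp(xi)) / d xi
    xi -= 0.05 * grad

y_opt = math.exp(xi)  # close to the constrained minimiser, positive for any xi
```

The iterate $y = e^\xi$ is positive by construction, so no feasibility check is ever needed during the descent.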
Note, however, that the mapping $y = e^\xi$ is one-to-one, and its gradient is always invertible, namely $\frac{\partial}{\partial \xi}(e^\xi) = e^\xi \ne 0 \ \forall \xi$. These two additional properties are of particular importance for the numerical stability of solvers addressing the problem~\eqref{eq:minUnConstraned}. The next Lemma proposes a wrench parametrisation that may be used to remove the constraint $f \in \mathcal{K}$ from the problem~\eqref{eq:sot}. \newtheorem{lemma}{Lemma} \begin{lemma} \label{lemma_1} Consider the parametrization $f^k {=} \phi(\xi): \mathbb{R}^6 \rightarrow \mathcal{K'}$ defined by \begin{equation} \label{eq:parametrized_wrench} \phi(\xi) := \begin{bmatrix} \,{\mu_c}\frac{\tanh(\xi_1) \,(e^{\xi_3} + f_z^{min})}{\sqrt[]{1+\tanh^2(\xi_2)}}\\ \,{\mu_c}\frac{\tanh(\xi_2) \,(e^{\xi_3} + f_z^{min})}{\sqrt[]{1+\tanh^2(\xi_1)}}\\ e^{\xi_3} + f_z^{min}\\ (\delta_y \tanh(\xi_4) + \delta_{y_0})\, (e^{\xi_3} + f_z^{min})\\ (\delta_x \tanh(\xi_5) + \delta_{x_0})\, (e^{\xi_3} + f_z^{min})\\ \mu_z \tanh(\xi_6)\, (e^{\xi_3} + f_z^{min}) \end{bmatrix}, \end{equation} with $f_z^{min} \geq 0$ the minimum value of the vertical force $f_z$, \begin{IEEEeqnarray}{LCL} \IEEEyesnumber \label{eq:feet_size_constraints} \delta_x := \frac{x_c^{max}- x_c^{min}}{2}, \quad \delta_{x_0} := - \frac{x_c^{min}+x_c^{max}}{2} \IEEEyessubnumber \\ \delta_y := \frac{y_c^{max} - y_c^{min}}{2}, \quad \delta_{y_0} := \frac{y_c^{max} + y_c^{min}}{2}, \IEEEyessubnumber \end{IEEEeqnarray} and $x_c^{max}, y_c^{max}$ and $x_c^{min}, y_c^{min}$ the contact surface dimensions as described in Fig. \ref{fig:foot_size}. \noindent Then, the following properties hold: \begin{enumerate} \item The contact constraints \eqref{eq:contact_constraints} are always satisfied, namely, $\mathcal{K'} \subset \mathcal{K}$ or, equivalently, $\phi(\xi) \in \mathcal{K} \ \ \forall \xi \in \mathbb{R}^6 $. 
\item The function $\phi(\xi): \mathbb{R}^6 \rightarrow \mathcal{K'}$ is a bijection, namely, a one-to-one correspondence from $\mathbb{R}^6 $ to $ \mathcal{K'}$. \item The gradient of the function $\phi(\xi)$, i.e. \begin{IEEEeqnarray}{LLL} \label{gradient-phi} \Phi(\xi) &:= \left[\frac{\partial\phi}{\partial \xi_1}, ... \hspace{0.5 mm}, \frac{\partial\phi}{\partial \xi_6}\right]\in \mathbb{R}^{6 \times 6}, \end{IEEEeqnarray} is an invertible matrix $\forall \hspace{0.5 mm}\xi \in \mathbb{R}^6$. \end{enumerate} \end{lemma} The proof is given in Appendix \ref{appendix_1}. Lemma~\ref{lemma_1} shows that \eqref{eq:parametrized_wrench} generates wrenches that always belong to the contact stability manifold defined by \eqref{eq:contact_constraints}. Furthermore, it shows that $\phi(\cdot)$ is a one-to-one correspondence between a set of free parameters~$\xi$ and the manifold $\mathcal{K'}$, i.e. the image of \eqref{eq:parametrized_wrench}. Clearly, one may find other functions for which the contact constraints~\eqref{eq:contact_constraints} are always satisfied. The proposed function $\phi (\cdot)$ in~\eqref{eq:parametrized_wrench}, however, has an image that approximates the friction cones \eqref{eq:static_friction} similarly to what one obtains using a set of linear inequalities. \begin{figure}[ht!] \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{pictures/frictionCones.png} \end{subfigure} % \begin{subfigure}[b]{0.18\textwidth} \includegraphics[width=\textwidth]{pictures/frictionConesBorders.png} \end{subfigure} \caption{A comparison between the friction cone (left) and its parametrisation with hyperbolic tangents (right). On the right: top view of the two manifolds.} \label{figure_approximation_params} \end{figure} More precisely, when \eqref{eq:parametrized_wrench} is substituted into \eqref{eq:static_friction}, instead of a cone we obtain a set that closely resembles a flipped octagonal pyramid. 
In fact, it is easy to verify that, when $f^k {=} \phi(\xi)$, the boundary $\sqrt[]{f_x^2 + f_y^2} = \mu_c f_z$ is reached along eight directions, namely when one of $\tanh^2(\xi_1)$, $\tanh^2(\xi_2)$ equals one and the other equals zero, or when both equal one. By computing the ratio between the area of an octagon and that of its circumscribing circle, we conclude that \eqref{eq:parametrized_wrench} covers more than 90 \% of the set given by \eqref{eq:static_friction}. For a visual representation of this fact, see Figure~\ref{figure_approximation_params}, which shows a typical approximation of \eqref{eq:static_friction} given by the function~\eqref{eq:parametrized_wrench}. Let us also observe that the gradient of this function is invertible for any value of the parameter~$\xi$. This property will be of pivotal importance in Sections~\ref{sec:control} and \ref{sec-jerk-momentum-control} when designing stable controllers for system~\eqref{eq:constrained_dynamics}. The parametrization~\eqref{eq:parametrized_wrench} can be easily extended to the case of $n_c$ distinct contact wrenches. In this case, define: \begin{IEEEeqnarray}{LCL} \IEEEyesnumber \label{eq:parametrization_multiple_contacts} f &=& [f^{1}; ... \hspace{0.5 mm}; f^{n_c}] := [\phi({\xi^{1}}); ... \hspace{0.5mm}; \phi({\xi^{n_c}})], \IEEEyessubnumber \\ \dot{f} &=& \Phi(\xi)\dot{\xi}, \IEEEyessubnumber \end{IEEEeqnarray} where $\Phi = \text{blkdiag}(\Phi_1, ... \hspace{0.5 mm}, \Phi_{n_c}) \in \mathbb{R}^{6n_c \times 6n_c}$ and $\xi = [\xi^1; ... \hspace{0.5 mm}; \xi^{n_c}] \in \mathbb{R}^{6n_c}$. It is then straightforward to verify that the properties described in Lemma \ref{lemma_1} are retained even in the case of multiple contact wrenches. 
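A direct numerical check of the parametrisation \eqref{eq:parametrized_wrench} can be sketched as follows; the friction coefficients and contact-surface dimensions are hypothetical placeholders:

```python
import numpy as np

mu_c, mu_z, fz_min = 0.33, 0.01, 1.0  # assumed friction parameters
x_min, x_max = -0.05, 0.10            # assumed contact surface limits [m]
y_min, y_max = -0.025, 0.025

dx, dx0 = (x_max - x_min) / 2, -(x_min + x_max) / 2
dy, dy0 = (y_max - y_min) / 2,  (y_max + y_min) / 2

def phi(xi):
    # Contact-stable parametrisation of a single 6-D contact wrench.
    t = np.tanh(xi)
    fz = np.exp(xi[2]) + fz_min
    return np.array([
        mu_c * t[0] * fz / np.sqrt(1 + t[1] ** 2),  # f_x
        mu_c * t[1] * fz / np.sqrt(1 + t[0] ** 2),  # f_y
        fz,                                         # f_z
        (dy * t[3] + dy0) * fz,                     # moment about x
        (dx * t[4] + dx0) * fz,                     # moment about y
        mu_z * t[5] * fz,                           # moment about z
    ])
```

Sampling random $\xi$ and checking, e.g., $f_z \ge f_z^{min}$, $\sqrt{f_x^2 + f_y^2} \le \mu_c f_z$ and $|\tau_z| \le \mu_z f_z$ confirms numerically that the image of $\phi$ lies inside the constraint set for every $\xi$.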
\section{Simulations and Experimental Results} \label{sec:results} \subsection{Simulation Environment} The modeling and control framework presented in Sec. \ref{sec-jerk-momentum-control} is tested on the 23-DoF iCub humanoid robot \cite{Metta20101125}, both on the real robot and in simulations using Gazebo \cite{Koenig04}. The controller is implemented in Simulink, and runs at a frequency of $100~\text{[Hz]}$. An advantage of using the Simulink-Gazebo simulator consists in the possibility of testing directly on the real robot the same control software used in simulation. Gazebo offers different physics engines to integrate the system's dynamics. We chose the Open Dynamics Engine (ODE), which uses a fixed-step semi-implicit Euler integration scheme, with a maximum simulation time step of $1$ [ms]. On the real iCub, the Simulink controller runs on an external PC and provides reference joint torques to an \emph{inner} joint torque control loop, which runs on board the robot at $1000~\text{[Hz]}$. At the moment, iCub is not endowed with joint torque sensors. The measured joint torques are obtained by combining the FT sensor information, the joint encoders, IMU data and the robot model. The robot is equipped with 6 FT sensors: two located in the robot's upper arms, two in the robot's thighs, and two in the robot's feet~\cite{Metta20101125}. During experiments on the real robot, we verified that the desired controller step time of $0.01 \ \rm [s]$ is respected most of the time. More precisely, statistics show that the desired step time is respected in around $97.9 \%$ of the iterations for the jerk control. 
This is comparable with the performance of the classical momentum-based QP control \cite{nava2016}, which meets the desired step time in around $98.2 \%$ of the iterations. The choice of running the high-level Simulink controller with a frequency of $100~\text{[Hz]}$ is due to a CPU limitation of the current iCub and to the limited bandwidth of the current FT sensors at the robot's feet. However, we are working on robot hardware upgrades that will allow us to run the Simulink controllers with a standard frequency of $1000~\text{[Hz]}$. \subsection{Robustness analysis for jerk control} \label{subsec:regularization} A preliminary analysis of the robot balancing behavior with the control law \eqref{eq:input_momentum}--\eqref{eq:input_torques} indicates that the proposed jerk control strategy may be particularly sensitive to bias errors on the estimated momentum rate of change $\dot{H}$, which is used as feedback in Eq. \eqref{eq:input_momentum}. The momentum rate of change is estimated as detailed in Sec. \ref{subsec:get_dyn_params}. The sources of this bias may be errors in the robot dynamic parameters, or low FT sensor accuracy. To enforce the closed-loop system robustness w.r.t. errors in the estimation of $\dot{H}$, we modified Eq. \eqref{eq:input_momentum} by adding a regularization term as follows: \begin{align} \label{eq:input_momentum_modif} \dot{\xi} = \, & {(A\Phi)}^{\dagger}\,[\ddot{H}_d - (K_d + 1_6)\dot{\tilde{H}} \\ & - (K_d + K^{-1}_o + K_p)\tilde{H} - K_p I - \dot{A}f] \nonumber \\ & + N_{A\Phi}\dot{\xi}_0 -k_e(\xi -\xi_d)\nonumber, \end{align} where $k_e > 0$ is a scalar gain, $\xi$ is the integral of $\dot{\xi}$, and $\xi_d$ is obtained by applying the parametrization inverse mapping to a set of wrenches $f_d$ satisfying the desired momentum rate of change, i.e. $\dot{H}_d = \dot{H}_d(f_d)$. In case of multiple solutions, the one ensuring the minimum norm of $f_d$ is chosen. 
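The effect of the regularization term can be illustrated on a scalar toy model (all numbers hypothetical): with a constant bias $b$ on the nominal input, the leaky integration $\dot{\xi} = b - k_e(\xi - \xi_d)$ settles at the bounded value $\xi_d + b/k_e$ instead of drifting away.

```python
# Forward-Euler integration of xi_dot = b - k_e*(xi - xi_d) (scalar sketch).
k_e, xi_d = 2.0, 0.5   # regularisation gain and reference (assumed values)
b, dt = 1.0, 0.001     # constant input bias and integration step (assumed)
xi = 0.0
for _ in range(20_000):
    xi += (b - k_e * (xi - xi_d)) * dt
# xi is now close to xi_d + b/k_e = 1.0, i.e. bounded despite the bias;
# with k_e = 0 the same bias would make xi grow without bound.
```

This mirrors the behavior observed in the experiments: without the regularization the bias accumulates, while with it the state remains bounded.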
It is important to point out that the regularization term $-k_e(\xi -\xi_d)$ is only necessary in case of errors when estimating the momentum rate-of-change $\dot{H}$. To support this statement, Figure \ref{fig:mom_err_norm_reg_dynamic} shows the linear and angular momentum error norm during balancing simulations when $7.5 \%$ modeling errors are added to the parameters $M,m,C$ and $G$. Note that the robot falls after a few seconds if no regularization is added to Eq.~\eqref{eq:input_momentum} (orange line). When the regularization term is added (red line) or the errors on $\dot{H}$ are removed (blue line), stability is retained. In the latter case, let us remark that only the estimation of $\dot{H}$ is computed correctly, while the model errors remain present in all the other calculations. We also carried out robustness tests on the real iCub during a contact switching scenario \cite{nava2016}. The robot starts balancing on two feet, then it switches to balancing on the left foot via a finite state machine, and performs highly dynamic movements on the left foot. Finally, the robot returns back to two feet balancing. Results are reported in Table \ref{tab:robustness}: the robot succeeded in concluding the demo $60\%$ of the times in case of a parameter overestimation of $7.5 \%$. Dealing with parameter underestimation, instead, seems to be more challenging for the controller~\eqref{eq:input_momentum_modif} despite the presence of the regularization term. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{pictures/regularization_dyn.pdf} \caption{Linear (top) and angular (bottom) momentum error norm during two feet balancing simulations; dynamic model overestimated by $7.5 \%$. No regularization is added when the momentum rate of change is not affected by errors. 
If $\dot{H}$ is biased, the robot falls unless the regularization is used.} \label{fig:mom_err_norm_reg_dynamic} \end{figure} When mounted on the robot, the accuracy of the FT sensors is affected by several phenomena such as temperature, internal stresses and vibrations. More specifically, we observed that even after a fine calibration of the FT sensors, the linear force measurements still have an offset of $\pm 2.5$ N, and this offset biases the estimation of $\dot{H}$. \begin{table}[t!] \centering \caption{Robustness tests on the real iCub while performing highly dynamic movements.} \begin{tabular}{|c|c|} \hline \multicolumn{2}{|c|}{\textbf{Robustness w.r.t. modeling errors on real robot}} \\ \hline Error on dynamic model $[\%]$ & Success rate $[\rm trials]$ \\ \hline Overest. $5\%$ & $5/5$ \\ \hline Overest. $7.5\%$ & $3/5$ \\ \hline Overest. $10\%$ & $1/5$ \\ \hline Underest. $5\%$ & $1/5$ \\ \hline \end{tabular} \label{tab:robustness} \vspace{-0.5cm} \end{table} Figure \ref{fig:mom_err_norm_reg} shows the behavior of the linear and angular momentum error norm during several two feet balancing simulations and experiments. On the real iCub, the robot falls after a few seconds when $k_e = 0$, as pointed out by the \emph{green} line. When adding the regularization term in Eq. \eqref{eq:input_momentum_modif}, stability is retained and the momentum error does not diverge (\emph{purple} line). On the other hand, the \emph{blue} line is obtained in simulation with perfect estimation of the external forces, and with $k_e = 0$. In this case, the momentum error does not diverge, thus showing again that the regularization term is not needed when $\dot{H}$ is properly estimated. In simulation, we injected a constant offset of amplitude $2.5$ N into the ``measured'' $f_x$ component of one of the two contact wrenches. Results correspond to the \emph{orange} line in Figure \ref{fig:mom_err_norm_reg}: stability is no longer retained and the robot falls after a few seconds. 
With the regularization term, the previous balancing performances are restored (\emph{red} line). \subsection{Comparison with a momentum-based QP controller} We compared the performance of the \emph{momentum-based} jerk controller~\eqref{eq:input_momentum_modif} with a classical \emph{momentum-based} QP controller that solves the optimization problem \eqref{eq:sot} on the real iCub during the contact switching scenario introduced in Section \ref{subsec:regularization}. Both controllers have been fine-tuned for the specific demo. The goal is to show that the momentum-based jerk control guarantees performance comparable with that of a controller already available in the literature. Also, the momentum-based jerk control provides smoother references to the torque controller, as the desired contact wrenches $f^*$ are always continuous. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{pictures/regularization.pdf} \caption{Linear (top) and angular (bottom) momentum error norm during two feet balancing. On the real robot the additional regularization term is required, while in simulation it is not. Adding noise to the FT measurements in simulation generates a response similar to that of the real iCub.} \label{fig:mom_err_norm_reg} \end{figure} Figure \ref{fig:des_force_torque_norm} depicts the norm of the left foot (input) contact forces and moments for both the momentum-based jerk control and the momentum-based QP control. Results have been achieved by running 10 experiments for each control strategy. The solid lines represent the average values, while the transparent regions represent the variance over the 10 experiments. The orange background represents the instants at which the robot is balancing on two feet, while the white background is when the robot is balancing on the left foot. The momentum-based jerk controller helps provide smoother references to the torque controller during transitions. 
In Figure \ref{fig:mom_err_norm}, we compare the norms of the linear and angular momentum error. During the transition from two-feet to left-foot balancing, an error peak is present when the momentum-based QP control is used. The peak is caused by the sharp change in the input forces. When jerk control is used (and a smoother force input is commanded), the peak error is reduced by approximately $90\%$. During highly dynamic movements (white background), jerk and momentum-based QP control show similar tracking performances. Both controllers use Eq. \eqref{eq:input_torques} to generate the input torques. Figure \ref{fig:joint_err_norm} verifies the boundedness of the system's \emph{zero dynamics}. In both cases, the zero dynamics does not diverge. Convergence to zero of the joint position error is not necessarily expected, as the controllers implement strict task priorities, and the \emph{postural task} is chosen as the lowest priority task. For further details, a video of the experiment is attached to this paper. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{pictures/wrenchNorm.pdf} \vspace*{-0.27cm} \caption{Norm of the left foot input forces (top) and moments (bottom). The jerk control (red line) helps in providing smoother references to the torque controller during contact switching (orange background).} \label{fig:des_force_torque_norm} \vspace{0.25cm} \end{figure} \subsection{Disturbance rejection} To evaluate the robustness of the momentum-based jerk controller against unmodeled external force perturbations, we performed the following experiment. The robot balances on two feet while a person continuously pushes and pulls the robot's upper body. The applied external force is unmodeled, so it is treated as a disturbance by the momentum-based jerk controller. Figure \ref{fig:mom_err_norm_disturbances} shows the momentum rate of change error norm and the momentum error norm during the interaction. 
Despite the high error peaks when the external force is applied, the controller is still able to retain stability. When the force is removed, in fact, the momentum error and its rate of change converge to a constant value. Exact convergence to zero of the momentum derivative error on the real iCub is difficult to obtain because of the low sensitivity of the FT sensors, so a compromise is achieved by properly tuning the corresponding feedback gains. A video describing the experiment is attached to the paper. \subsection{Problem Statement} \label{sub-section:classical-qp} \vspace{-0.05cm} A common control approach for system~\eqref{eq:constrained_dynamics} considers several control objectives organized in a hierarchical or weighted prioritization \cite{mansard2009,nava2016}. More precisely, let $a^*$ be a desired acceleration that the system should achieve. Then, a single priority\footnote{When several priorities are defined in the optimisation problem, higher priority tasks can be defined as constraints of~\eqref{eq:sot}.} stack-of-task can be represented by the following optimisation problem: \vspace{-0.1cm} \begin{IEEEeqnarray}{LLL} \label{eq:sot} \minimize_{y=(\dot{\nu},f,\tau)} ~& \norm{Ay-a^*}^2 \\ \text{subject to:}~& \nonumber \\ &\begin{bmatrix} M(q) & -J^\top & -B \\ J & 0 & 0 \end{bmatrix} \begin{bmatrix}\dot{\nu} \\ f \\ \tau \end{bmatrix} {=} \begin{bmatrix} -h(q,\nu) \\ -\dot{J}\nu \end{bmatrix} \nonumber \\ &f \in \mathcal{K} \nonumber \end{IEEEeqnarray} with $A$ a proper projection matrix, and $\mathcal{K}$ the manifold given by the constraints~\eqref{eq:contact_constraints}. The above optimisation problem is usually framed as a Quadratic Programming (QP) one, and its solutions may suffer from the following limitations: \begin{enumerate} \item The solution may be discontinuous, e.g. 
at contact switching or after sharp variations of the reference trajectory; \item The closed-loop dynamics does not include any feedback term from the measured contact wrenches $f_{m}$. \end{enumerate} Limitation $1)$ is often addressed by approximating the \emph{continuity property} with a set of inequality constraints to be added to~\eqref{eq:sot}, but the effectiveness of this approach is often unsatisfactory from the experimental standpoint~\cite{dafarra2018}. Limitation $2)$ is the most critical one, since FT sensor information is not used in the optimal control law $\tau$ that solves~\eqref{eq:sot}, thus potentially wasting important feedback information at the control level. Let us observe that Limitation $2)$ may be attenuated when desired force tasks are added to the problem~\eqref{eq:sot} \cite{bouyarmane2018}. For instance, if we aim to achieve a desired force $f_d$, then the force task can be achieved by adding equality constraints of the form $f = f_d$ to the problem~\eqref{eq:sot}. At this point, one may attempt to use the FT measurements by replacing $f_d$ with \vspace{-0.11cm} \[f^{*} = f_d -K_i\int_0^t (f_{m} - f_d)ds, \vspace{-0.08cm}\] where $f_m$ is the measured force, and $f = f^{*}$ becomes the equality constraint. The main limitation of this approach is that this equality constraint may require $f$ to violate the constraint $f \in \mathcal{K}$. Putting the desired force as part of the cost function of~\eqref{eq:sot} may be an option, but this alters the priorities that the force task has over the acceleration one. What follows presents an alternative, theoretically sound approach that aims at addressing the above limitations $1)$ and $2)$ of classical QP-based stack-of-task approaches for the control of floating base systems in contact with the environment.
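The issue with the integral force feedback above can be reproduced with a scalar sketch (one tangential direction with bound $|f| \le \mu f_z$; all numbers hypothetical): even though $f_d$ is strictly inside the bound, a constant measurement offset drives the commanded $f^*$ outside it.

```python
mu, f_z = 0.33, 50.0            # assumed friction coefficient and normal force
f_d, K_i, dt = 10.0, 5.0, 0.01  # desired force (inside the bound), gain, step
f_m = f_d + 8.0                 # measured force with a constant offset (assumed)

integral, violated = 0.0, False
for _ in range(1000):
    integral += (f_m - f_d) * dt      # discrete integral of the force error
    f_star = f_d - K_i * integral     # commanded force f*
    violated = violated or abs(f_star) > mu * f_z
# f_d respects the bound throughout, yet the commanded f* eventually violates it.
```

This is exactly the situation the contact-stable parametrisation avoids, since there the commanded wrench is constrained to the set $\mathcal{K'}$ by construction.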
\section{Acknowledgement} This paper is partially supported by Beijing Municipal Commission of Science and Technology under Grant No. Z181100008918005 as well as the National Natural Science Foundation of China (NSFC Grant Nos.61772039, 61472006 and 91646202). We would like to thank Haoran Shi for collecting the Douban data used in this paper. \section{Conclusions} We propose a model based on graph convolutional networks for session-based social recommendation in online communities. Our model first learns individual user representations by modeling the users' current interests. Each user's representation is then aggregated with her friends' representations using a graph convolutional network with a novel attention mechanism. The combined representation, along with the user's original representation, is then used to form item recommendations. Experimental results on three real-world data sets demonstrate the superiority of our model compared to several state-of-the-art models. Next steps involve exploring user and item features indicative of preferences and further improving the performance of recommender systems for online communities. \section{Problem Definition} Recommender systems suggest relevant items to their users according to their historical behaviors. In classical recommendation models (e.g., matrix factorization \cite{mnih2008probabilistic}), the order in which a user consumes items is ignored. However, in online communities, user preferences change rapidly, and the order of user preference behaviors must be considered so as to model users' dynamic interests. In practice, since users' entire history records can be extremely long (e.g., certain online communities have existed for years) and users' interests switch quickly, a common approach is to segment user preference behaviors into different sessions (e.g., using timestamps and considering each user's behavior within a week as a session) and provide recommendations at the session level~\cite{hidasi2016session}.
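As a minimal sketch of the segmentation strategy just described, the helper below groups a user's timestamped events into week-long sessions; the function name and data layout are illustrative assumptions, not part of any released code.

```python
from datetime import datetime, timedelta

def segment_into_sessions(events, session_days=7):
    """events: list of (item_id, datetime), assumed sorted by time.
    Returns a list of sessions, each a list of item ids, where a new
    session starts once `session_days` have elapsed since the session began."""
    sessions, current, window_start = [], [], None
    for item, ts in events:
        if window_start is None or ts - window_start >= timedelta(days=session_days):
            if current:
                sessions.append(current)
            current, window_start = [], ts
        current.append(item)
    if current:
        sessions.append(current)
    return sessions

events = [("i1", datetime(2016, 1, 1)), ("i2", datetime(2016, 1, 3)),
          ("i3", datetime(2016, 1, 9))]
print(segment_into_sessions(events))  # [['i1', 'i2'], ['i3']]
```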
We define this problem as follows: \textup{DEFINITION 1.} \textbf{(Session-based Recommendation)} Let $U$ denote the set of users and $I$ be the set of items. Each user $u$ is associated with a set of sessions up to time step $T$, $I^u_T=\{ \vec{S}_1^u, \vec{S}_2^u,\ldots, \vec{S}_T^u\}$, where $\vec{S}_t^u$ is the $t_{th}$ session of user $u$. Within each session, $\vec{S}_t^u$ consists of a sequence of user behaviors $\{i_{t,1}^{u}, i_{t,2}^{u},\ldots, i_{t,N_{u,t}}^{u}\}$, where $i_{t,p}^{u}$ is the $p_{th}$ item consumed by user $u$ in the $t_{th}$ session, and $N_{u,t}$ is the number of items in the session. For each user $u$, given a new session $\vec{S}_{T+1}^u=\{i_{T+1,1}^{u},\ldots, i_{T+1,n}^{u}\}$, the goal of \emph{session-based recommendation} is to recommend a set of items from $I$ that the user is likely to be interested in during the next step $n+1$, i.e., $i_{T+1,n+1}^{u}$. In online communities, users' interests are not only correlated with their historical behaviors, but also commonly influenced by their friends. For example, if a friend watches a movie, a user may also be interested in watching it. This is known as social influence~\cite{tang2009social}. Moreover, the influences from friends are \emph{context-dependent}. In other words, the influences from friends vary from one situation to another. For example, if a user wants to buy a laptop, she is more likely to refer to friends who are keen on high-tech devices, while she may be influenced by photographer friends when shopping for a camera. As illustrated in Figure 1, a user can be influenced by both her friends' short- and long-term preferences. To provide effective recommendations to users in online communities, we propose to model both users' dynamic interests and context-dependent social influences.
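The session structure of Definition 1, and the next-item prediction targets it induces, can be illustrated with a toy sketch (all identifiers are hypothetical):

```python
# Each user maps to a time-ordered list of sessions; each session is a
# list of item ids, mirroring S_t^u in Definition 1.
sessions = {
    "alice": [["i1", "i2", "i3"],   # S_1: one week-long session
              ["i2", "i4"]],        # S_2
}

def next_item_examples(user_sessions):
    """Yield (prefix, target) pairs: given the items consumed so far in a
    session, the model must predict the next item."""
    for s in user_sessions:
        for p in range(1, len(s)):
            yield s[:p], s[p]

pairs = list(next_item_examples(sessions["alice"]))
print(pairs[0])  # (['i1'], 'i2')
```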
We define the resulting problem as follows: \textup{DEFINITION 2.} \textbf{(Session-based Social Recommendation)} Let $U$ denote the set of users, $I$ be the set of items, and $G=(U, E)$ be the social network, where $E$ is the set of social links between users. Given a new session $\vec{S}_{T+1}^u=\{i_{T+1,1}^{u},\ldots, i_{T+1,n}^{u}\}$ of user $u$, the goal of \emph{session-based social recommendation} is to recommend a set of items from $I$ that $u$ is likely to be interested in during the next time step $n+1$ by utilizing information from both her dynamic interests (i.e., information from $\cup_{t=1}^{T+1}\vec{S}_t^u$) and the social influences (i.e., information from $\cup_{k=1}^{N(u)}\cup_{t=1}^{T}\vec{S}_t^k$, where $N(u)$ is the set of $u$'s friends). \section{Experiments} We study the effectiveness of \gls{DGRec} using real-world data sets and highlight the following results: \begin{itemize} \item \gls{DGRec} significantly outperforms all seven methods that it is compared to under all experimental settings. \item Ablation studies demonstrate the usefulness of the different components of \gls{DGRec}. \item Exploring the fitted models shows that attention contextually weighs the influences of friends. \end{itemize} \subsection{Experimental Setup} \subsubsection{Data Sets} We study all models using data collected from three well-known online communities. Descriptive statistics for all data sets are in Table~\ref{tab::stat}. \textit{Douban.}\footnote{http://www.douban.com} A popular site on which users can review movies, music, and books they consume. We crawled the data using the identities of the users in the movie community, obtaining every movie they reviewed along with associated timestamps. We also crawled the users' social networks. We construct our data set by using each review as evidence that a user consumed an item. Users tend to be highly active on Douban, so we segment users' behaviors (movie consumption) into week-long sessions.
\textit{Delicious.}\footnote{Data set available from {\url{https://grouplens.org/datasets/hetrec-2011/}}} An online bookmarking system where users can store, share, and discover web bookmarks and assign them a variety of semantic tags. The task we consider is personalized tag recommendations for bookmarks. Each session is a sequence of tags a user has assigned to a bookmark (tagging actions are timestamped). This differs from the ordinary definition of sessions as a sequence of consumptions over a short horizon. \textit{Yelp.}\footnote{Data set available from {\url{https://www.yelp.com/dataset}}} An online review system where users review local businesses (e.g., restaurants and shops). As with Douban, we treat each review as an observation. Based on the empirical frequency of the reviews, we segment the data into month-long sessions. We also tried different segmentation strategies. Preliminary results showed that our method consistently outperformed RNN-Session and NARM for other session lengths. We leave a systematic study of session segmentation for future work. \subsubsection{Train/valid/test splits} We reserve the sessions of the last $d$ days for testing and filter out items that did not appear in the training set. Due to the different sparsity of the three data sets, we choose $d = 180, 50$ and $25$ for the \textit{Douban}, \textit{Yelp} and \textit{Delicious} data sets, respectively. We randomly and equally split the held-out sessions into validation and test sets. \begin{table} \centering \begin{tabular}{lccc} \toprule & Douban & Delicious & Yelp \\ \midrule \#\ Users & 32,314 & 1,650 & 141,804 \\ \#\ Items & 14,109 & 4,282 & 17,625 \\ \#\ Events & 3,493,821 & 296,705 & 1,200,503 \\ \#\ Social links & 331,315 & 15,328 & 6,818,026 \\ Start Date & 01/12/2008 & 08/12/2009 & 01/01/2009\\ End Date & 07/22/2016 & 07/01/2016 & 10/15/2010 \\ \midrule Avg. friends/user & 10.25 & 9.00 & 48.08 \\ Avg. events/user & 108.12 & 179.82 & 8.47 \\ Avg.
session length & 4.38 & 4.30 & 3.63 \\ \bottomrule \end{tabular} \caption{Descriptive statistics of our three data sets.} \vspace{-15pt} \label{tab::stat} \end{table} \begin{table*} \centering \begin{tabular}{llcccccc} \toprule \multirow{2}{*}{Model Class} & \multirow{2}{*}{Model} & \multicolumn{2}{c}{Douban} & \multicolumn{2}{c}{Delicious} & \multicolumn{2}{c}{Yelp} \\ & & Recall@20 & NDCG & Recall@20 & NDCG & Recall@20 & NDCG \\ \midrule \multirow{2}{*}{Classical} & ItemKNN~\cite{linden2003amazon} & 0.1431 & 0.1635 & 0.2729 & 0.2241 & 0.0441 & 0.0989 \\ & BPR-MF~\cite{rendle2009bpr} & 0.0163 & 0.1110 & 0.2775 & 0.2293 & 0.0365 & 0.1190 \\ \midrule \multirow{3}{*}{Social} & SoReg~\cite{ma2011recommender} & 0.0177 & 0.1113 &0.2703 & 0.2271 & 0.0398 & 0.1218 \\ & SBPR~\cite{zhao2014leveraging} & 0.0171 & 0.1059 & 0.2948 & 0.2391 & 0.0417 & 0.1207\\ & TranSIV~\cite{xiao2017learning} & 0.0173 & 0.1102 & 0.2588 & 0.2158 & 0.0420 & 0.1187 \\ \midrule \multirow{2}{*}{Temporal} & RNN-Session~\cite{hidasi2016session} & 0.1643 & 0.1854 & 0.3445 & 0.2581 & 0.0756 & 0.1378 \\ & NARM~\cite{li2017neural} &0.1755 &0.1872 &0.3776 &0.2768 &0.0765 & 0.1380 \\ \midrule Social + Temporal (Ours) & DGRec & \textbf{0.1861} & \textbf{0.1950} & \textbf{0.4066} & \textbf{0.2944} & \textbf{0.0842} & \textbf{0.1427} \\ \bottomrule \end{tabular} \caption{Quantitative Results of Different Algorithms. We highlight that \gls{DGRec} outperforms all other baselines across all three data sets and both metrics. Further analysis is provided in \S \ref{sec:quantitative_results}.} \vspace{-10pt} \label{table:results} \end{table*} \subsubsection{Competing Models}\label{sec:models} We compare \gls{DGRec} to three classes of recommenders: (A) classical methods that utilize neither social nor temporal factors; (B) social recommenders, which take context-independent social influences into consideration; and (C) session-based recommendation methods, which model user interests in sessions. 
(Below, we indicate a model's class next to its name.) \begin{itemize}[leftmargin=0.2in] \item ItemKNN~\cite{linden2003amazon} (A): inspired by the classic KNN model, it looks for items that are similar to items liked by a user in the past. \item BPR-MF~\cite{rendle2009bpr} (A): matrix factorization (MF) technique trained using a ranking objective as opposed to a regression objective. \item SoReg~\cite{ma2011recommender} (B): uses the social network to regularize the latent user factors of matrix factorization. \item SBPR~\cite{zhao2014leveraging} (B): an approach for social recommendations based on BPR-MF. The social network is used to provide additional training samples for matrix factorization. \item TranSIV~\cite{xiao2017learning} (B): uses shared latent factors to transfer the learned information from the social domain to the recommendation domain. \item RNN-Session~\cite{hidasi2016session} (C): recent state-of-the-art approach that uses recurrent neural networks for session-based recommendations. \item NARM~\cite{li2017neural} (C): a hybrid model of both session-level preferences and the user's ``main purpose'', where the main purpose is obtained by attending to previous behaviors within the session. \end{itemize} \subsubsection{Evaluation Metrics} We evaluate all models with two widely used ranking-based metrics: Recall@K and Normalized Discounted Cumulative Gain (NDCG). \textit{Recall@K} measures the proportion of test cases in which the ground-truth item appears among the top-$K$ recommended items. We use $K=20$. \textit{NDCG} is a standard ranking metric. In the context of session-based recommendation, it is formulated as: $\text{NDCG}=\frac{1}{\log_{2}(1+\text{rank}_{pos})}$, where $\text{rank}_{pos}$ denotes the rank of a positive item. We report the average value of NDCG over all the testing examples. \subsubsection{Hyper-parameter Settings} For RNN-Session, NARM and our models, we use a batch size of 200.
We use Adam~\cite{kingma2014adam} for optimization due to its effectiveness, with $\beta_1=0.9$, $\beta_2=0.999$ and $\epsilon=10^{-8}$ as suggested in TensorFlow~\cite{tensorflow2015-whitepaper}. The initial learning rate is empirically set to 0.002 and decayed at the rate of 0.98 every 400 steps. For all models, the dimensions of the user (when needed) and item representations are fixed to 100 following~\citet{hidasi2016session}. We cross-validated the number of hidden units of the LSTMs and the performance plateaued around 100 hidden units. The neighborhood sample sizes are empirically set to 10 and 15 in the first and second convolutional layers, respectively. We tried using more friends in each layer but observed no significant improvement. In our models, dropout~\cite{srivastava2014dropout} with rate $0.2$ is used to avoid overfitting. \subsubsection{Implementation Details} We implement our model using TensorFlow~\cite{tensorflow2015-whitepaper}. Training graph attention networks on our data with mini-batch gradient descent is not trivial since node degrees have a large range. We found the neighbor sampling technique proposed in \cite{hamilton2017inductive} quite effective. Further, to reasonably reduce the computational cost of training \gls{DGRec}, we represent friends' short-term interests using only their most recent sessions. \subsection{Quantitative Results}\label{sec:quantitative_results} The performance of different algorithms is summarized in Table~\ref{table:results}. ItemKNN and BPR-MF perform very similarly, except on \textit{Douban}. A particularity of Douban is that users typically only consume each item once (unlike on \textit{Delicious} and \textit{Yelp}). MF-based methods tend to recommend previously consumed items, which explains BPR-MF's poor performance. By modeling social influence, the performance of social recommenders improves compared to BPR-MF in most cases.
However, the improvement is marginal because these three algorithms (B) only model context-independent social influence. By modeling dynamic user interests, RNN-Session significantly outperforms ItemKNN and BPR-MF, which is consistent with the results in \citet{hidasi2016session}. Further, NARM extends RNN-Session by explicitly modeling the user's main purpose and becomes the strongest baseline. Our proposed model \gls{DGRec} achieves the best performance among all the algorithms by modeling both users' dynamic interests and context-dependent social influences. Moreover, the improvement over RNN-Session and NARM is more significant than that of SoReg over BPR-MF, which shows the necessity of modeling context-dependent social influences. \subsection{Variations of \gls{DGRec}} To justify and gain further insights into the specifics of \gls{DGRec}'s architecture, we now study and compare variations of our model. \subsubsection{Self vs. Social} \gls{DGRec} obtains users' final preferences as a combination of the user's consumed items in the current session and context-dependent social influences (see Eq.\ \ref{eqt:attnlayer}). To tease apart the contribution of both sources of information, we compare \gls{DGRec} against two submodels: a) (\gls{DGRec}$_\text{self}$) a model of the user's current session only (Eq.\ \ref{eqt:attnlayer} without social influence features $h_{u}^{(L)}$) and; b) (\gls{DGRec}$_\text{social}$) a model using context-dependent social influence features only (Eq.\ \ref{eqt:attnlayer} without individual features $h_n$). Note that when using individual features only, \gls{DGRec}$_\text{self}$ is identical to RNN-Session (hence the results are reproduced from Table~\ref{table:results}). Table~\ref{table:feature} reports the performance of all three models on our data sets.
\gls{DGRec}$_\text{self}$ consistently outperforms \gls{DGRec}$_\text{social}$ across all three data sets, which means that overall users' individual interests have a higher impact on recommendation quality. Compared to the full model \gls{DGRec}, the performance of both \gls{DGRec}$_\text{self}$ and \gls{DGRec}$_\text{social}$ significantly decreases. To achieve good recommendation performance in online communities, it is, therefore, crucial to model both a user's current interests as well as her (dynamic) social influences. \begin{table} \centering \begin{tabularx}{1.0\linewidth}{l>{\centering\arraybackslash}l>{\centering\arraybackslash}X>{\centering\arraybackslash}X} \toprule Data Sets & Models & Recall@20 & NDCG \\ \midrule \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Douban \end{tabular}} & \gls{DGRec}$_\text{self}$ & 0.1643 & 0.1854 \\ & \gls{DGRec}$_\text{social}$ & 0.1185 & 0.1591 \\ & \gls{DGRec} & 0.1861 & 0.1950 \\ \midrule \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Delicious \end{tabular}} & \gls{DGRec}$_\text{self}$ & 0.3445 & 0.2581 \\ & \gls{DGRec}$_\text{social}$ & 0.3306 & 0.2516 \\ & \gls{DGRec} & 0.4066 & 0.2944 \\ \midrule \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Yelp \end{tabular}} & \gls{DGRec}$_\text{self}$ & 0.0756 & 0.1378 \\%0.0637 & 0.1301 \\ & \gls{DGRec}$_\text{social}$ & 0.0690 & 0.1356 \\ & \gls{DGRec} & 0.0842 & 0.1427 \\ \bottomrule \end{tabularx} \caption{Ablation study comparing the performance of the complete model (\gls{DGRec}) with two variations. } \label{table:feature} \end{table} \begin{figure} \centering \vspace{-5pt} \begin{subfigure}[t]{0.48\linewidth} \includegraphics[width=\linewidth]{pdfs/douban_local_global_v2.pdf} \caption{Douban} \label{fig:locala} \end{subfigure} \hspace*{\fill} \begin{subfigure}[t]{0.48\linewidth} \includegraphics[width=\linewidth]{pdfs/delicious_local_global_v2.pdf} \caption{Delicious} \label{fig:localc} \end{subfigure} \hspace*{\fill} \vspace{-10pt} \caption{Performance w.r.t. 
friends' short-term and long-term preferences on different data sets. The result on the \textit{Yelp} data set is similar to that on \textit{Douban} and hence omitted. } \vspace{-10pt} \label{fig:local_global} \end{figure} \begin{table} \centering \begin{tabularx}{1.0\linewidth}{l>{\centering\arraybackslash}X>{\centering\arraybackslash}X>{\centering\arraybackslash}X} \toprule Data Sets & Conv. Layers & Recall@20 & NDCG \\ \midrule \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Douban \end{tabular}} & 1 & 0.1726 & 0.1886 \\ & 2 & 0.1861 & 0.1950 \\ & 3 & 0.1793 & 0.1894 \\ \midrule \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Delicious \end{tabular}} & 1 & 0.4017 & 0.2883 \\ & 2 & 0.4066 & 0.2944 \\ & 3 & 0.4037 & 0.2932 \\ \midrule \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Yelp \end{tabular}} & 1 & 0.0760 & 0.1387 \\ & 2 & 0.0842 & 0.1427 \\ & 3 & 0.0846 & 0.1423 \\ \bottomrule \end{tabularx} \caption{Performance of our model w.r.t. different numbers of convolution layers. } \vspace{-20pt} \label{table:conv_layer} \end{table} \subsubsection{Short-term vs. Long-term} \gls{DGRec} provides a mechanism for encoding friends' short- as well as long-term interests (see \S~\ref{sec:rep_of_friends}). We study the impact of each on the model's performance. Similar to above, we compare using either short- or long-term interests to the results of using both. Figure~\ref{fig:local_global} reports that for \textit{Douban}, the predictive capability of friends' short-term interests drastically outperforms that of friends' long-term interests, and shows performance comparable to the full model. This is reasonable, considering that the interests of users in online communities (e.g., Douban) change frequently, so exploiting users' short-term interests should predict user behaviors more accurately. Interestingly, on the \textit{Delicious} data set, different results are observed. Using long-term interests yields more accurate predictions than using short-term interests.
This is not surprising since, on the \textit{Delicious} website, users tend to have static interests. \subsubsection{Number of Convolutional Layers} \gls{DGRec} aggregates friends' interests using a multi-layer graph convolutional network. More convolutional layers yield influences from higher-order friends. In our study so far, we have used two-layer graph convolutional networks. To validate this choice, we compare the performance with one- and three-layer networks, keeping the number of sampled friends at 10 and 5 in the first and third layers, respectively. Table \ref{table:conv_layer} shows a significant decline in performance when using a single layer. This implies that the interests of friends' friends (obtained by 2 layers) are important for recommendations. Next, we test our model using three convolutional layers to explore the influences of even higher-order friends. The influence of the third layer on the performance is small. There is a small improvement for \textit{Yelp} but a slightly larger drop in performance for both \textit{Douban} and \textit{Delicious}, which may be attributed to model overfitting or noise introduced by higher-order friends. This confirms that two convolutional layers are enough for our data sets. \begin{figure} \centering \vspace{-8pt} \includegraphics[width=0.8\linewidth]{pdfs/inter_attention.pdf} \vspace{-8pt} \caption{The heat map of the attention weights across different sessions (left) and within a session (right). For both plots, the y-axis represents friends of the target user. The x-axis represents (1) eight sessions of the target user on the left and (2) the item sequence within session \#7 on the right.} \label{fig:attention} \end{figure} \begin{figure} \centering \vspace{-8pt} \includegraphics[width=0.8\linewidth]{pdfs/var_scatter_plot_dist_v3.pdf} \vspace{-8pt} \caption{Attention variance distribution of \gls{DGRec} for both inter-session and intra-session.
Variance values are discretized into 20 intervals.} \vspace{-10pt} \label{fig:attention_var} \end{figure} \subsection{Exploring Attention} \gls{DGRec} uses an attention mechanism to weigh the contribution of different friends based on a user's current session. We hypothesized that while friends have varying interests, a user's session typically explores only a subset of these interests. As a consequence, for a target user, different subsets of her friends should be relied upon in different situations. We now explore the results of the attention learned by our model. First, we randomly select a \textit{Douban} user from those who have at least 5 test sessions as well as 5 friends and plot her attention weights (Eq.\ \ref{eqt:alignment}) within and across session(s) in Figure~\ref{fig:attention}. For the inter-session-level plot (left), we plot the average attention weight of a friend within a session. For the intra-session-level plot (right), the user's attention weights within one session (i.e., SessionId=7) are presented. We make the following observations. First, the user allocates her attention to different friends across different sessions. This indicates that social influence is indeed conditioned on context (i.e., the target user's current interests). Further, friend \#8 obtains little attention in all sessions, which means that social links do not necessarily lead to observed shared interests. Second, the distribution of attention is relatively stable within a single session. This confirms that the user's behaviors are coherent over a short period and are suitable for session-level processing. As a second exploration of the behavior of the attention mechanism, we take a macro approach and analyze the attention across all users (as opposed to a single user across friends). We use the attention levels inferred on the \textit{Douban} test set.
Figure~\ref{fig:attention_var} reports the empirical distributions of the inter-session (brown) and intra-session (blue) attention variance (i.e., how much the attention weights vary in each case). The intra-session variance is lower on average. This agrees with our assumption that users' interests tend to be focused within a short time, so that the same set of friends is attended to for the duration of a session. On the contrary, a user is more likely to trust different friends in different sessions, which further validates modeling context-dependent social influences via attention-based graph convolutional networks. \section{Introduction} \label{sec::intro} \noindent Online social communities are an essential part of today's online experience. Platforms such as Facebook, Twitter, and Douban enable users to create and share information as well as consume the information created by others. Recommender systems for these platforms are therefore critical to surface information of interest to users and to improve long-term user engagement. However, online communities come with extra challenges for recommender systems. \begin{figure*} [!ptb] \centering \includegraphics[width=0.95\linewidth]{pdfs/concept_v11.pdf} \caption{An illustration of Alice's social influences in two sessions. Alice's interests might change across different sessions, while she may be influenced by her friends, by either their short-term or long-term preferences at different times. } \label{fig::motivation} \end{figure*} First, user interests are dynamic by nature. A user may be interested in sports items for a period of time and then search for new music groups. Second, since online communities often promote sharing information among friends, users are also likely to be influenced by their friends. For instance, a user looking for a movie may be influenced by what her friends have liked. Further, the set of influencers can be dynamic since it can be context-dependent.
For instance, a user will trust a set of friends who like comedies when searching for funny films, while she could be influenced by another set of friends when searching for action movies. \noindent \textbf{Motivating Example.} Figure~\ref{fig::motivation} presents the behavior of Alice and her friends in an online community. Behaviors are described by a sequence of actions (e.g., item clicks). To capture users' dynamic interests, their actions are segmented into sub-sequences denoted as \emph{sessions}. We are therefore interested in \emph{session-based recommendations}~\cite{schafer1999recommender}: within each session, we recommend the next item Alice should consume based on the items in the current session she has consumed so far. Figure~\ref{fig::motivation} presents two sessions: (a) and (b). In addition, the items consumed by Alice's friends are also available. We would like to utilize them for better recommendations. We are thus in a \emph{session-based social recommendation} setting. In session (a), Alice browses sports items. Two of her friends, Bob and Eva, are avid sports fans (long-term interests), and they have been browsing sports items recently (short-term interests). Considering both facts, Alice may be influenced by the two and, e.g., decide to learn more about Ping Pong next. In session (b), Alice is interested in ``literature \& art'' items. The situation differs from session (a) since none of her friends have consumed such items recently. But David is generally interested in this topic (long-term interests). In this case, it would make sense for Alice to be influenced by David and, say, be recommended a book that David enjoyed. These examples show how a user's current interests, combined with the (short- and long-term) interests of different friends, inform session-based social recommendations. In this paper, we present a recommendation model based on both.
The current recommendation literature has modeled either users' dynamic interests or their social influences, but, as far as we know, has never combined both (as in the example above). A recent study~\cite{hidasi2016session} models session-level user behaviors using recurrent neural networks, ignoring social influences. Others have studied only social influences~\cite{ma2011recommender,zhao2014leveraging,chaney2015probabilistic}. For example, \citet{ma2011recommender} explores the social influence of friends' long-term preferences on recommendations. However, the modeled influences from different users are static and do not reflect users' current interests. We propose an approach that models both users' session-based interests and dynamic social influences, i.e., which subset of a user's friends (the influencers) influence her according to her current session. Our recommendation model is based on dynamic-graph-attention networks. Our approach first models user behaviors within a session using a recurrent neural network (RNN)~\cite{elman1990finding}. According to the user's current interests---captured by the hidden representation of the RNN---we model the influences of friends using a graph-attention network~\cite{Velickovic2018graph}. To provide session-level recommendations, we distinguish the model of friends' short-term preferences from that of the long-term ones. The influence of each friend, given the user's current interests, is then determined automatically using an attention mechanism~\cite{bahdanau2015neural,xu2015show}. We conduct extensive experiments on data sets collected from several online communities (Douban, Delicious, and Yelp). Our proposed approach outperforms well-known competitive baselines by modeling both users' dynamic behaviors and dynamic social influences.
To summarize, we make the following contributions: \begin{itemize} \item We propose to study both dynamic user interests and context-dependent social influences for recommendation in online communities. \item We propose a novel recommendation approach based on dynamic-graph-attention networks for modeling both dynamic user interests and context-dependent social influences. The approach can effectively scale to large data sets. \item We conduct extensive experiments on real-world data sets. Experimental results demonstrate the effectiveness of our model over strong and state-of-the-art baselines. \end{itemize} \noindent\textbf{Organization.} \S 2 discusses related work. In \S 3 we give a formal definition of the session-based social recommendation problem. Our session-based social recommendation approach is described in \S 4. \S 5 presents the experimental results, followed by concluding remarks in \S 6. \section{Dynamic Social Recommender Systems} As discussed previously, users are not only guided by their current preferences but also by their friends' preferences. We propose a novel dynamic graph attention model, \gls{DGRec}, which models both types of preferences. \gls{DGRec} is composed of four modules (Figure~\ref{fig:overview_model}). First (\S \ref{sec:dynamic_individual_interests}), a recurrent neural network (RNN)~\cite{elman1990finding} models the sequence of items consumed in the (target) user's current session. Her friends' interests are modeled using a combination of their short- and long-term preferences (\S \ref{sec:rep_of_friends}). The short-term preferences, or items in their most recent session, are also encoded using an RNN. Friends' long-term preferences are encoded with a learned individual embedding. The model then combines the representation of the current user with the representations of her friends using a graph-attention network (\S \ref{sec:context_dependent_social_influences}).
This is a key part of our model and contribution: our proposed mechanism learns to weigh the influence of each friend based on the user's current interests. At the final step (\S \ref{sec:recommendation}), the model produces recommendations by combining a user's current preferences with her (context-dependent) social influences. \subsection{Dynamic Individual Interests} \label{sec:dynamic_individual_interests} To capture a user's rapidly-changing interests, we use an RNN to model the actions (e.g., clicks) of the (target) user in the current session. RNNs are standard for modeling sequences and have recently been used for modeling user (sequential) preference data~\cite{hidasi2016session}. The RNN infers the representation of a user's session $\vec{S}_{T+1}^u=\{i_{T+1,1}^{u},\ldots, i_{T+1,n}^{u}\}$ token by token, recursively combining the representation of all previous tokens with the latest token, i.e., \begin{equation}\label{eqt:rnn} h_n = f(i_{T+1,n}^{u}, h_{n-1}), \end{equation} where $h_n$ represents a user's interests and $f(\cdot,\cdot)$ is a non-linear function combining both sources of information. In practice, the long short-term memory (LSTM)~\cite{hochreiter1997long} unit is often used as the combination function $f(\cdot,\cdot)$: \begin{equation} \label{eqt:lstm} \begin{aligned} x_n &= \sigma(\textbf{W}_x[h_{n-1}, i_{T+1,n}^{u}] + b_x)\\ f_n &= \sigma(\textbf{W}_f[h_{n-1}, i_{T+1,n}^{u}] + b_f)\\ o_n &= \sigma(\textbf{W}_o[h_{n-1}, i_{T+1,n}^{u}] + b_o)\\ \tilde{c}_n &= \tanh(\textbf{W}_c[h_{n-1}, i_{T+1,n}^{u}] + b_c)\\ c_n &= f_n \odot c_{n-1} + x_n \odot \tilde{c}_n\\ h_n &= o_n \odot \tanh(c_n),\\ \end{aligned} \end{equation} where $\sigma$ is the sigmoid function: $\sigma(x) =(1+\exp(-x))^{-1}$.
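Eq.~\eqref{eqt:lstm} can be sketched numerically as follows. Weight shapes and the toy dimensions are illustrative assumptions; the gate denoted $x_n$ in the equations is named `i` (input gate) in the code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(h_prev, c_prev, x, W, b):
    """One LSTM step following Eq. (eqt:lstm): W and b hold the four
    weight matrices and biases (input, forget, output, candidate),
    each acting on the concatenation [h_prev; x]."""
    z = np.concatenate([h_prev, x])
    Wx, Wf, Wo, Wc = W
    bx, bf, bo, bc = b
    i = sigmoid(Wx @ z + bx)        # input gate  (x_n in the paper)
    f = sigmoid(Wf @ z + bf)        # forget gate (f_n)
    o = sigmoid(Wo @ z + bo)        # output gate (o_n)
    c_tilde = np.tanh(Wc @ z + bc)  # candidate cell state
    c = f * c_prev + i * c_tilde
    h = o * np.tanh(c)
    return h, c

# Toy dimensions: hidden size 3, item-embedding size 2, random weights.
rng = np.random.default_rng(1)
H, D = 3, 2
W = [rng.standard_normal((H, H + D)) for _ in range(4)]
b = [np.zeros(H) for _ in range(4)]
h, c = lstm_step(np.zeros(H), np.zeros(H), rng.standard_normal(D), W, b)
print(h.shape)  # (3,)
```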
Long-term interests represent a friend's average interest and are modeled using an individual embedding. \textbf{Short-term preference}: For a target user's current session $\vec{S}_{T+1}^u$, her friends' short-term interests are represented using their sessions right before session $T+1$ (our model generalizes beyond a single session, but this choice is effective empirically). Each friend $k$'s actions $\vec{S}_{T}^k=\{i_{T,1}^{k}, i_{T,2}^{k},\ldots, i_{T,N_{k,T}}^{k}\}$ are modeled using an RNN. In fact, here we reuse the RNN for modeling the target user's session (\S~\ref{sec:dynamic_individual_interests}). In other words, both RNNs share the same weights. We represent friend $k$'s short-term preference $s_k^s$ by the final output of the RNN: \begin{equation} \begin{aligned} s_k^s &= r_{N_{k,T}} = f(i_{T, N_{k,T}}^{k}, r_{N_{k,T}-1}). \end{aligned} \end{equation} \textbf{Long-term preference}: Friends' long-term preferences reflect their average interests. Since long-term preferences are not time-sensitive, we use a single vector to represent them. Formally, \begin{equation} s_k^l = \mathbf{W}_u[k,:], \end{equation} where friend $k$'s long-term preference $s_k^l$ is the $k$-th row of the user embedding matrix $\textbf{W}_u$. Finally, we combine friend $k$'s short- and long-term preferences by concatenating them and applying a non-linear transformation: \begin{equation}\label{eqa:long_short} s_k = ReLU(\mathbf{W}_1[s_k^{s};s_k^{l}]), \end{equation} where $ReLU(x)=max(0,x)$ is a non-linear activation function and $\textbf{W}_1$ is the transformation matrix. \subsection{Context-dependent Social Influences}\label{sec:context_dependent_social_influences} We have described how we obtain representations of the target user (\S~\ref{sec:dynamic_individual_interests}) and of her friends (\S~\ref{sec:rep_of_friends}). We now combine both into a single representation that we then use downstream (\S \ref{sec:recommendation}). The combined representation is a mixture of the target user's interests and her friends' interests.
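Assembling a friend's representation $s_k$ as above reduces to an embedding lookup, a concatenation, and a ReLU; the following is a minimal sketch under assumed (hypothetical) sizes and random parameters.

```python
import numpy as np

def friend_representation(k, s_short, W_u, W_1):
    """s_k = ReLU(W_1 [s_k^s ; s_k^l]), with the long-term preference
    s_k^l taken as the k-th row of the user embedding matrix W_u."""
    s_long = W_u[k, :]
    combined = np.concatenate([s_short, s_long])
    return np.maximum(0.0, W_1 @ combined)  # ReLU

# Illustrative sizes and random parameters (assumptions, not the paper's).
rng = np.random.default_rng(1)
n_users, d = 100, 8
W_u = rng.normal(scale=0.1, size=(n_users, d))  # user embedding matrix
W_1 = rng.normal(scale=0.1, size=(d, 2 * d))    # transformation matrix
s_short = rng.normal(size=d)                    # friend k's RNN output s_k^s
s_k = friend_representation(3, s_short, W_u, W_1)
```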
We obtain this combined representation using a novel graph-attention network. First, we encode the friendship network in a graph where nodes correspond to users (i.e., target users and their friends) and edges denote friendship. In addition, each node uses its corresponding user's representation (\S \ref{sec:dynamic_individual_interests} \& \S\ref{sec:rep_of_friends}) as (dynamic) features. Second, these features are propagated along the edges using a message-passing algorithm~\cite{gilmer2017neural}. The main novelty of our approach lies in using an attention mechanism to weigh the features traveling along each edge. A weight corresponds to the level of a friend's influence. After a fixed number of iterations of message passing, the resulting features at the target user's node are the combined representation. Below we detail how we design the node features as well as the accompanying graph-attention mechanism. \subsubsection{Dynamic feature graph} For each user, we build a graph where nodes correspond to that user and her friends. For target user $u$ with $|N(u)|$ friends, the graph has $|N(u)|+1$ nodes. User $u$'s initial representation $h_n$ is used as node $u$'s features $h_u^{(0)}$ (the features are updated whenever $u$ consumes a new item in $\vec{S}_{T+1}^u$). For a friend $k$, the corresponding node feature is set to $s_k$ and remains unchanged for the duration of time step $T+1$. Formally, the node features are $h_u^{(0)}=h_n$ and \{$h_{k}^{(0)} = s_k, k\in{N(u)}$\}. \subsubsection{Graph-Attention Network} With the node features defined as above, we then pass messages (features) to combine friends' and the target user's interests. This procedure is formalized as inference in a graph convolutional network~\cite{kipf2016semi}. \citet{kipf2016semi} introduce graph convolutional networks for semi-supervised node representation learning. In these networks, the convolutional layers ``pass'' the information between nodes. 
The number of layers $L$ of the networks corresponds to the number of iterations of message passing.\footnote{We propagate information on a graph that also contains higher-order relationships (e.g., friends of friends of friends) in practice. In the $l^{th}$ layer of the network, the target user then receives information from users that are $l$ degrees away.} However, all neighbors are treated equally. Instead, we propose a novel dynamic graph attention network to model context-dependent social influences. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{pdfs/GCN_model_v3.pdf} \caption{The graphical model of a single convolutional layer using the attention mechanism, where the output, conditioned on the current interest, is interpreted as context-dependent social influence.} \label{fig:attention_module} \end{figure} The fixed symmetric normalized Laplacian is widely used as a propagation strategy in existing graph convolutional networks~\cite{defferrard2016convolutional,kipf2016semi}. In order to distinguish the influence of each friend, we must first break away from this static propagation scheme. We propose to use an attention mechanism to guide the influence propagation. The process is illustrated in Figure~\ref{fig:attention_module}. We first calculate the similarity between the target user's node representation $h_u^{(l)}$ and all of its neighbors' representations $h_k^{(l)}$: \begin{equation}\label{eqt:alignment} \alpha_{uk}^{(l)} = \frac{exp(f(h_u^{(l)},h_k^{(l)}))}{\sum_{j\in{N(u)\cup\{u\}}}exp(f(h_u^{(l)},h_j^{(l)}))}, \end{equation} where $h_u^{(l)}$ is the representation of node/user $u$ at layer $l$, and $f(h_u^{(l)},h_k^{(l)}) = {h_u^{(l)}}^\top h_k^{(l)}$ is the similarity function between two elements. Intuitively, $\alpha_{uk}^{(l)}$ is the \emph{level of influence} or weight of friend $k$ on user $u$ (conditioned on the current context $h_u^{(l)}$). Note that we also include a self-connection edge to preserve a user's revealed interests.
The weights $\alpha_{u:}^{(l)}$ are then used to combine the features: \begin{equation} \tilde{h}_u^{(l)} = \sum_{k\in{N(u)\cup\{u\}}}\alpha_{uk}^{(l)}h_k^{(l)}, \end{equation} where $\tilde{h}_u^{(l)}$ is a mixture of user $u$'s and her friends' interests at layer $l$, followed by a non-linear transformation: $h_u^{(l+1)} = ReLU(\textbf{W}^{(l)}\tilde{h}_u^{(l)}).$ $\textbf{W}^{(l)}$ is the shared and learnable weight matrix at layer $l$. We obtain the final representation of each node by stacking this attention layer $L$ times.\footnote{We also tested our model with two popular context-independent propagation strategies that do not use an attention mechanism: a) averaging friends' interests; and b) element-wise max-pooling over their interests---similar to techniques for aggregating word-level embeddings \cite{weston2014tagspace}. Mean aggregation outperforms the latter, but both are inferior to our proposed attention model.} The combined (social-influenced) representation is denoted by $h_u^{(L)}$. \subsection{Recommendation}\label{sec:recommendation} Since a user's interest depends on both her recent behaviors and social influences, her final representation is obtained by combining them using a fully-connected layer: \begin{equation}\label{eqt:attnlayer} \hat{h}_n = \textbf{W}_2[h_n; h_u^{(L)}], \end{equation} where $\textbf{W}_2$ is a linear transformation matrix, and $\hat{h}_n$ is the final representation of user $u$'s current interest. We then obtain the probability that the next item will be $y$ using a softmax function: \begin{equation}\label{eqt:softmax2} p(y | i_{T+1,1}^{u},\ldots, i_{T+1,n}^{u}; \{\vec{S}_{T}^k, k\in{N(u)}\}) = \frac{\exp({\hat{h}_n}^\top z_y)}{\sum_{j=1}^{|I|}\exp({\hat{h}_n}^\top z_j)}, \end{equation} where $N(u)$ is user $u$'s set of friends according to the social network $G$, $z_y$ is the embedding of item $y$, and $|I|$ is the total number of items.
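One attention layer and the final scoring step can be sketched in NumPy as follows. This is a simplified illustration with assumed sizes and random parameters: for brevity, only the target node's features are propagated across layers, whereas the full model updates every node's features at each layer.

```python
import numpy as np

def attention_layer(h_u, neighbor_feats, W_l):
    """One dynamic-graph-attention layer: a softmax over dot-product
    similarities (self-connection included) weighs the feature mixture,
    followed by a ReLU transformation."""
    H = np.vstack([h_u] + list(neighbor_feats))  # rows: u and N(u)
    scores = H @ h_u                             # f(h_u, h_k) = h_u^T h_k
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                         # attention weights alpha_uk
    h_tilde = alpha @ H                          # weighted mixture
    return np.maximum(0.0, W_l @ h_tilde)        # ReLU(W^{(l)} h_tilde)

def score_items(h_n, h_u_L, W_2, Z):
    """Final state hat{h}_n = W_2 [h_n ; h_u^{(L)}]; softmax over items."""
    h_hat = W_2 @ np.concatenate([h_n, h_u_L])
    logits = Z @ h_hat                           # z_y^T hat{h}_n
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Illustrative sizes and random parameters (assumptions, not the paper's).
rng = np.random.default_rng(2)
d, n_items, n_friends, L = 8, 50, 4, 2
h_u = rng.normal(size=d)                         # target node features
friends = [rng.normal(size=d) for _ in range(n_friends)]  # friend features s_k
for l in range(L):                               # stack L attention layers
    W_l = rng.normal(scale=0.1, size=(d, d))
    h_u = attention_layer(h_u, friends, W_l)
h_n = rng.normal(size=d)                         # current session encoding
W_2 = rng.normal(scale=0.1, size=(d, 2 * d))
Z = rng.normal(scale=0.1, size=(n_items, d))     # item embedding matrix
p = score_items(h_n, h_u, W_2, Z)                # next-item distribution
```

Subtracting the maximum before exponentiating is a standard numerical-stability trick and leaves the softmax unchanged.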
\subsection{Training} We train the model by maximizing the log-likelihood of the observed items in all user sessions: \begin{equation} \label{eqn::obj} \sum_{u\in U} \sum_{t=2}^{T} \sum_{n=1}^{N_{u, t}-1} \log p(i_{t,n+1}^{u}|i_{t,1}^{u},\ldots, i_{t,n}^{u}; \{\vec{S}_{t-1}^k, k\in{N(u)}\}). \end{equation} This function is optimized using gradient descent. \section{Related Work} We discuss three lines of research that are relevant to our work: 1) recommender systems that model dynamic user behaviors, 2) social recommender systems that take social influence into consideration, and 3) recent progress on convolutional networks developed for graph-structured data. \subsection{Dynamic Recommendation} Modeling user interests that change over time has already received some attention~\cite{xiong2010temporal,koren2010collaborative,charlin2015dynamic}. Most of these models are based on (Gaussian) matrix factorization \cite{mnih2008probabilistic}. For example, \citet{xiong2010temporal} learned temporal representations by factorizing the (user, item, time) tensor. \citet{koren2010collaborative} developed a similar model named timeSVD++. \citet{charlin2015dynamic} proposed a similar model based on Poisson factorization~\cite{gopalan2015scalable}. However, these approaches assume that the interest of users changes slowly and smoothly over long-term horizons, typically on the order of months or years. To effectively capture users' short-term interests, recent works introduce RNNs to model their recent (ordered) behaviors. For example, \citet{hidasi2016session} first proposed Session-RNN to model a user's interest within a session. \citet{li2017neural} further extended Session-RNN with an attention mechanism to capture both a user's local and global interests. \citet{wu2017recurrent} used two separate RNNs to update the representations of both users and items based on new observations. \citet{beutel2018latent} built an RNN-based recommender that can incorporate auxiliary context information.
These models assume that items exhibit coherence within a period of time, and we use a similar approach to model session-based user interests. \subsection{Social Recommendation} Modeling the influence of friends on user interests has also received attention~\cite{massa2007trust,ma2008sorec,ma2011recommender,jamali2010matrix,jiang2012social}. Most proposed models are (also) based on Gaussian or Poisson matrix factorization. For example, \citet{ma2011recommender} studied social recommendations by regularizing latent user factors such that the factors of connected users are close by. \citet{chaney2015probabilistic} weighted the contribution of friends to a user's recommendation using a learned ``trust factor''. \citet{zhao2014leveraging} proposed an approach to leverage social networks for active learning. \citet{xiao2017learning} framed the problem as transfer learning between the social domain and the recommendation domain. These approaches model social influence under the assumption that it is uniform across friends and independent of the user's preferences. \citet{tang2012etrust} and \citet{tang2012mtrust} proposed multi-facet trust relations, which rely on additional side information (e.g., item category) to define facets. \citet{wang2016social} and \citet{wang2017learning} distinguished strong and weak ties among users for recommendation in social networks. However, these works ignore the user's short-term behaviors and integrate context-independent social influences. Our proposed approach instead models both dynamic user interests and context-dependent social influences. \subsection{Graph Convolutional Networks} Graph convolutional networks (GCNs) build on convolutional neural networks (CNNs). CNNs have achieved great success in computer vision and several other applications. CNNs are mainly developed for data with 2-D grid structures such as images~\cite{krizhevsky2012imagenet}.
Recent works focus on modeling more general graph-structured data using CNNs~\cite{bruna2013spectral,henaff2015deep,defferrard2016convolutional,kipf2016semi}. Specifically, \citet{kipf2016semi} proposed graph-convolutional networks (GCNs) for semi-supervised graph classification. The model learns node representations by leveraging both the node attributes and the graph structure. It is composed of multiple \emph{graph-convolutional layers}, each of which updates node representations using a combination of the current node's representation and that of its neighbors. Through this process, the dependency between nodes is captured. However, in the original formulation, all neighbors are given a static ``weight'' when updating the node representations. \citet{Velickovic2018graph} addressed this problem by proposing \emph{graph-attention networks}. They weighed the contribution of neighbors differently using an attention mechanism~\cite{bahdanau2015neural,xu2015show}. We propose a dynamic-graph-attention network. Compared to previous work, we focus on a different application (modeling context-dependent social influences for recommendations). Moreover, we model a dynamic graph, where the features of nodes evolve over time and the attention between nodes also changes along with the current context.
\chapter*{Introduction} \addcontentsline{toc}{chapter}{Introduction} \markboth{}{} The purpose of this work is to demonstrate that it is possible to formulate Einstein's equations as an initial value problem, that is, a Cauchy problem. The idea of viewing the field equations of general relativity as a system of evolution equations, see Ringström \cite{ringstrom2015origins}, goes back to Einstein himself, in an argument justifying that gravitational waves propagate at the speed of light. In his papers \cite{einstein2005naherungsweise, einstein1918gravitationswellen}, Einstein considers a situation in which the metric is close to that of Minkowski space; in practice, he studied the linearized problem. Using a special choice of coordinates, he derived a wave equation for the perturbation, a result he used to justify the statement that gravitational waves propagate at the speed of light. Einstein's arguments give an indication that the field equations of general relativity form a system of wave equations, and thus that the natural problem to pose is an initial value problem. Nevertheless, the role of the choice of coordinates was not entirely clear at the time. In fact, in his criticism Eddington \cite{Eddington1930-EDDTMT} pointed out that if the coordinates are chosen so as to satisfy a certain condition which has no very geometrical importance, the speed is that of light, but any other choice of coordinates would give a different speed. Thus, the result stands or falls by the choice of coordinates. Furthermore, the choice of the type of coordinates to use was made in order to obtain the simplification which results from representing the propagation as occurring with the speed of light. One way to approach the objections of Eddington is to argue that gravitational waves propagate at the speed of light without appealing to a specific choice of coordinates.
In a paper \cite{vessiot1918propagation}, Vessiot argued that the desired statement follows from the observation that discontinuities in the derivatives of the metric of order strictly higher than one are only allowed along null hypersurfaces. On the other hand, the work of Darmois \cite{darmois1927equations} stressed the fact that characteristic hypersurfaces play a special role in the process of solving the field equations. One particular consequence of Darmois' analysis is that, given a metric and its first normal derivatives on a spacelike hypersurface, all the derivatives of the metric are determined on the hypersurface. This yields a local uniqueness result in the real analytic setting. Moreover, there is a linear homogeneous system of equations for the components of the Ricci tensor corresponding to the constraints. Thus, it is not only necessary, but also sufficient, that the constraints be satisfied for the existence of a real analytic solution to the field equations. Furthermore, Darmois, making use of the coordinate choice made by de Donder, known as $\textit{isothermal}$ coordinates, proved that Einstein's argument successfully demonstrates that gravitational fields propagate at the speed of light. In addition, Darmois noted that these coordinates are natural because they satisfy the scalar wave equation. Despite this, a fundamental question remains: given a solution to the field equations, there are two notions of $\textit{causality}$. There is the causality associated with the metric, and there is the notion of domain of dependence associated with solving Einstein's equations considered as a partial differential equation. It is then of interest to know whether these two notions coincide. This cannot be addressed in the real analytic setting, since real analytic functions have the unique continuation property.
This question was addressed by Stellmacher \cite{stellmacher1938anfangswertproblem}, whose argument was based on the use of isothermal coordinates. In fact, given two solutions of the Einstein-Maxwell equations, Stellmacher constructs isothermal coordinates such that PDE techniques can be applied. The conclusion is then that the two solutions coincide up to a coordinate transformation. Moreover, his work constitutes a justification of the statement that the gravitational field propagates at a speed bounded by that of light. The argument is such that Eddington's objections do not apply. Acknowledging the results of Stellmacher, Lichnerowicz \cite{lichnerowicz1939problemes} stated the initial value problem as that of finding the solution to Einstein's equations on the basis of the metric and its first derivatives on a hypersurface. He solves the problem in the real analytic setting for spacelike hypersurfaces and notes the importance of the constraints. Furthermore, he points out the importance of generalizing the existence result to the non-real-analytic setting. The work of Yvonne Choquet-Bruhat \cite{foures1952theoreme} provides this generalization by showing that not only does local uniqueness hold in the class of $C^k$-functions for $k$ large enough but, given initial data, there is a unique local solution. Thus, the Cauchy problem in general relativity stands on a solid basis in the $C^k$-setting. It is natural to ask why this specific regularity class is of importance and why it is not sufficient to consider the class of real analytic functions. A large part of the difficulty in obtaining the desired result lies in proving the local existence of solutions to Einstein's equations with the prescribed regularity. Moreover, it is necessary to use coordinates with respect to which the equations become hyperbolic. Finally, it is necessary to connect the problem of solving the reduced equations with the constraint equations and the problem of solving Einstein's equations.
Hence, following Yvonne Choquet-Bruhat \cite{foures1952theoreme}, we will show how to construct solutions to Einstein's vacuum equations, given initial data. In Chapter 1, by considering a system of partial differential equations, we will give the definition of a characteristic manifold, introduce the concept of wavelike propagation, and prove the existence of the Riemann kernel. In Chapter 2, we will stress the relation between the Riemann kernel and the fundamental solution; moreover, we will introduce the concept of the characteristic conoid and the geodesic equations. In Chapter 3, we will show how to build the fundamental solution with some examples; in particular, we will study the scalar wave equation with smooth initial conditions. In Chapter 4, by considering linear systems of normal hyperbolic form, we will see under which assumptions they can be solved and we will find the solution. In Chapter 5, we will see under which assumptions a non-linear hyperbolic system can be turned into a linear system, so that a solution can be found by making use of the results obtained for linear systems. In Chapter 6, by making use of isothermal coordinates, we will see how the previous method applies to Einstein's vacuum equations to find their solutions, and we will discuss the causal structure of space-time. Eventually, in Chapter 7, we will give a useful application by studying the Green functions in gravitational radiation theory; more precisely, we will use the Riemann kernel to find a solution to the problem of black hole collisions at the speed of light.
\chapter{Hyperbolic Equations} \label{Chap:1} \epigraph{In nature's infinite book of secrecy a little I can read.}{William Shakespeare, Antony and Cleopatra } \section{Systems of Partial Differential Equations} To begin with, following Esposito \cite{esposito2017ordinary}, let us consider a system of $m$ partial differential equations in the unknown functions $ \varphi_1, \varphi_2,..., \varphi_m $ of the $ n+1$ independent variables $ x^0, x^1,..., x^n $ that reads as \cite{civita1931caratteristiche} \begin{equation} \label{eq:1} E_{\mu} = 0, \hspace{3cm} \mu =1,2,...,m, \end{equation} \\ the $ E_{\mu} $ being functions that depend on the $x$, on the $ \varphi $ and on the partial derivatives of the $\varphi $ with respect to the $x$. Such a system is said to be $ \textit{normal} $ with respect to the variable $ x^0 $ if it can be reduced to the form: \begin{equation} \label{eq:2} \frac {\partial^{r_{\nu}} \varphi_{\nu}}{\partial (x^0)^{r_{\nu}}} = \Phi_{\nu} (x|\varphi|\psi|\chi), \hspace{1cm} \nu=1,2,...,m, \end{equation} \\ where the $\psi$ occurring on the right-hand side are partial derivatives of each $\varphi_{\nu}$ with respect to $x^0$ only of order less than $r_{\nu}$, and the $\chi$ are partial derivatives of the $\varphi$ with respect to the $x$ of arbitrary finite order, provided that, with respect to $x^0$, they are of order less than $r_{\nu}$ for the corresponding $\varphi_{\nu}$.\\ The functions $\Phi_{\nu}$ are taken to be real-analytic in the neighbourhood of a set of values of the Cauchy data. Before stating the associated Cauchy-Kowalevsky theorem, it is appropriate to recall the existence theorem for integrals of a system of ordinary differential equations. Hence we consider the differential system (having set $x^0 = t $) \begin{equation} \label{eq:3} \frac{d^{r_{\nu}} \varphi_{\nu}}{ dt^{r_{\nu}}} = \Phi_{\nu} (t| \varphi|\psi), \hspace{2cm} \nu = 1,2,...,m.
\end{equation} \\ This system can be re-expressed in canonical form, involving only first-order equations, by defining \begin{equation} \label{eq:4} \frac{d}{dt} \varphi_{\nu} \equiv \varphi'_{\nu}, \hspace{5mm} \frac{d}{dt} \varphi'_{\nu} \equiv \varphi''_{\nu}, \hspace{5mm} ... \hspace{5mm} \frac{d}{dt} {\varphi_{\nu}}^{(r_{\nu} - 2)} \equiv {\varphi_{\nu}}^{(r_{\nu} - 1)}, \end{equation} \\ with which replacements we obtain \\ \begin{equation} \label{eq:5} \frac{d}{dt} {\varphi_{\nu}}^{(r_{\nu} -1)} = \Phi_{\nu}(t|\varphi|\psi), \hspace{2cm} \nu=1,2,...,m. \end{equation} \\ One can also denote by $y_\rho $ the generic element of a table displaying $\varphi_1$ and its derivatives up to the order $(r_1 - 1)$ on the first column, $\varphi_2$ and its derivatives up to the order $(r_2 - 1)$ on the second column, ..., $\varphi_m$ and its derivatives up to the order $(r_m - 1)$ on the last column. With such a notation, the canonical system ($\ref{eq:4}$)-($\ref{eq:5}$) is further re-expressed as \begin{equation} \label{eq:6} \frac{d}{dt} y_\rho = Y_\rho (t|y), \hspace{3 cm} \rho=1,2,...,r; \hspace{5mm} r\equiv \sum\limits_{k=1}^m r_k. \end{equation} \\ \\ If each $Y_\rho$ is real-analytic in the neighbourhood of $t=t_0$, $y_\rho = b_\rho $, there exists a unique set of functions $y_\rho$, analytic in the $ t$ variable, which take the values $b_\rho$ at $t=t_0$. In order to prove such a theorem, one begins by remarking that the differential equations make it possible, by means of subsequent differentiations, to evaluate the derivatives of any order of an unknown function $y_\rho$ at the point $ t=t_0$ and hence to write, for each $y_\rho$, the Taylor expansion pertaining to such a point. The essential point of the proof consists in showing that such series converge in a suitable neighbourhood of $t=t_0$.
For this purpose one considers some appropriate majorizing functions $Y_\rho$; the corresponding differential system (\ref{eq:5}), which can be integrated by elementary methods, defines some real-analytic functions in the neighbourhood of $t=t_0$, whose Taylor expansions majorize the Taylor expansions of the $y_\rho$ functions. The Cauchy theorem for the differential systems (\ref{eq:5}) holds also when the right-hand side depends on a certain number of parameters that can be denoted by $x_1$,$x_2$,..., $x_n$ provided that they vary in such a way that the functions $Y_\rho$ are real-analytic. One can then state the following: \begin{thm} Given the differential system \begin{equation} \label{eq:7} \frac{d^{r_\nu} \varphi_{\nu}}{d{t}^{r_\nu}} = \Phi_\nu (t|x|\varphi|\psi), \hspace{1cm} \nu=1,2,..., m, \end{equation} \\ by assigning at will, at $t= t_0$, the values of each $\varphi_\nu$ and of the subsequent derivatives up to the order $ (r_\nu - 1)$ as functions of the parameters $x_1$,$x_2$,..., $x_n$, there exists a unique set of functions $\varphi$, real-analytic, of the variable t and of the parameters, satisfying Eq.(\ref{eq:7}) and equal to the assigned values at $t=t_0$. \end{thm} This theorem admits an extension to systems of partial differential equations (\ref{eq:2}) in normal form, the new feature being that, on the right-hand side of Eq. (\ref{eq:7}), there occur also derivatives of the unknown functions with respect to the parameters, so that one deals with partial differential equations. The Cauchy problem consists in finding the functions $\varphi$ satisfying the system (\ref{eq:2}) in normal form, and the initial conditions given by the values of the unknown functions and their partial derivatives with respect to the variable $x^0$, of order less than the maximal order. Let $S$ be the space of the variables, from now on denoted by $x^0$,$x^1$,...,$x^n$. 
In order to fix the ideas, one can assume that $S$ is endowed with a Euclidean metric, and interpret the $x$ as Cartesian coordinates. Let $\omega$ be the hyperplane with equation \\ \begin{equation} \label{eq:8} x^0 = a^0. \end{equation} \\ The Cauchy existence theorem states that, in the neighbourhood of the hyperplane $\omega$, which is said to be the $\textit{carrier hyperplane}$, one can find the values taken by the $\varphi$ functions, once the initial values of the $\varphi$ and $\psi$ functions are freely specified at each point of $\omega$. An easy generalization of the theorem is obtained by replacing the hyperplane $\omega$ with a hypersurface $\sigma$ of $S$. For example, if \begin{equation} \label{eq:9} z(x^0,x^1,...,x^n)=z^0 = \mathrm{constant} \end{equation} \\ is the equation of the hypersurface $\sigma$, it is enough to replace the $x$ with $n + 1$ independent combinations of the $x$, here denoted by $z,z^1,..., z^n$, in such a way that one of them, i.e. $z$, is precisely the left-hand side of Eq. (\ref{eq:9}) written for $\sigma$. \section{Characteristic Manifolds} Let us consider differential systems for which the maximal order of derivation is $s=1$ or $s=2$. Such systems can be made explicit by writing them in the form: \begin{equation} \label{eq:10} E_\mu = \sum\limits_{\nu =1}^m \sum\limits_{i=0}^n E^i_{\mu \nu} \frac{\partial \varphi_\nu}{\partial x^i} + \Phi_\mu (x|\varphi) =0, \hspace{2cm} \mu=1,2,..., m, \end{equation} \\ and \begin{equation} \label{eq:11} E_\mu = \sum\limits_{\nu =1}^m \sum\limits_{i,k=0}^n E^{ik}_{\mu \nu} \frac{\partial^2 \varphi_\nu}{\partial x^i \partial x^k} + \Phi_\mu (x|\varphi| \chi) =0, \hspace{1cm} \mu=1,2,..., m, \end{equation} \\ respectively. In Eq. ($\ref{eq:10}$) the $E^i_{\mu \nu}$ and $ \Phi_\mu $ depend on the $x$ and $\varphi$, whereas in Eq.
($ \ref{eq:11}$) the $E^{ik}_{\mu \nu} $ and $\Phi_\mu$ depend on the $x$, $\varphi$ and on the first-order partial derivatives of the $\varphi$ with respect to the $x$. Since the $\varphi_\nu$ are taken to fulfill the conditions under which one can exchange the order of derivatives, one can always assume that $E^{ik}_{\mu \nu}$ is symmetric in the lower case Latin indices. In the particular case of a single unknown function $\varphi$, Eqs. ($\ref{eq:11}$) reduce to the single equation: \begin{equation} \label{eq:12} E = \sum\limits_{i,k=0}^n E^{ik} \frac{\partial^2 \varphi}{\partial x^i \partial x^k} + \Phi(x|\varphi| \chi) =0, \end{equation} \\ where $\chi$ is a concise notation for the first-order partial derivatives of $\varphi$ with respect to $x^0$, $x^1$, ..., $x^n$. A remarkable equation of the type ($\ref{eq:12}$) is the scalar wave equation (having set $x^0=t$ in $c=1$ units): \begin{equation} \label{eq:13} \Box \varphi = \bigg{(} \frac{1}{V^2}\frac{\partial^2}{\partial t^2} - \Delta \bigg{)} \varphi = 0, \end{equation} where $V$ is a constant and $\Delta = \sum \limits_{i=1}^3 \frac{\partial^2}{\partial (x^i)^2}$ is the standard notation for the Laplacian in Euclidean three-dimensional space $\mathbb{R}^3$. The $\Box$ operator in Eq. ($\ref{eq:13}$) is the familiar D'Alembert operator for the wave equation in Minkowski space-time. The systems ($\ref{eq:10}$) and ($\ref{eq:11}$) are not yet written in normal form, and we now aim at finding the conditions under which such systems are normal with respect to the variable $x^0$. For this purpose, we begin with the system ($\ref{eq:10}$) and point out that, since we are only interested in first-order partial derivatives with respect to $x^0$, we can re-express such equations in the form \begin{equation} \label{eq:14} \sum\limits_{\nu =1}^m E^0_{\mu \nu} \frac{\partial \varphi_\nu}{\partial x^0} + ... =0, \hspace{3cm} \mu= 1,2,...,m.
\end{equation} \\ This system can be solved with respect to the derivatives $\frac{\partial \varphi_\nu}{\partial x^0}$ if the determinant of the matrix $E^0_{\mu \nu}$ does not vanish, i.e. \begin{equation} \label{eq:15} \Omega = det E^0_{\mu \nu} \neq 0, \hspace{2cm} \mu,\nu=1,2,...,m. \end{equation} Such a determinant involves the independent variables $x^0$, $x^1$, ..., $x^n$ and also, in general, the unknown functions $\varphi_1$, $\varphi_2$, ..., $\varphi_m$. \\ Let us now consider Eqs. ($\ref{eq:11}$) of the second system, which are written more conveniently in the form \begin{equation} \label{eq:16} \sum\limits_{\nu=1}^m E^{00}_{\mu \nu} \frac{ \partial^2 \varphi_\nu}{\partial (x^0)^2} + ... = 0, \hspace{2cm} \mu=1,2,...,m, \end{equation} and are hence solvable with respect to $\frac{\partial^2 \varphi_\nu}{\partial (x^0)^2}$ if the determinant of the matrix $E^{00}_{\mu \nu}$ does not vanish, i.e. \begin{equation} \label{eq:17} \Omega = det E^{00}_{\mu \nu} \neq 0, \hspace{2cm} \mu,\nu=1,2,...,m. \end{equation} Furthermore, the single equation ($\ref{eq:12}$) can be put in normal form provided that \begin{equation} \label{eq:18} E^{00} \neq 0. \end{equation} If the normality conditions ($\ref{eq:15}$), ($\ref{eq:17}$) and ($\ref{eq:18}$) are satisfied, for a given carrier hyperplane having equation $x^0 = a^0$, one can apply the Cauchy theorem, and the functions $\varphi_\nu$, or the single function $\varphi$ of Eq.
($\ref{eq:12}$), are uniquely determined in the neighbourhood of such hyperplane.\\ It is now necessary to investigate under which conditions the normal character is preserved, if the independent variables $x^0$, $x^1$, ..., $x^n$ are mapped into new variables $z$, $z^1$, ..., $z^n$, so that the hyperplane of equation $x^0=a^0$ is turned into a hypersurface $\sigma$ of the space $S$ having equation \begin{equation} \label{eq:19} z(x^0,x^1,...,x^n)=z^0, \end{equation} \\ starting from which one can determine (at least in a neighbourhood) the $\varphi$ functions. \\ For this purpose, one defines \begin{equation} \label{eq:20} p_i \equiv \frac{\partial z}{\partial x^i}, \hspace{2cm} i=0,1,...,n, \end{equation} from which one obtains \begin{equation} \label{eq:21} \frac{\partial \varphi_\nu}{\partial x^i} = \frac{\partial \varphi_\nu}{\partial z} p_i + \sum\limits_{j=1}^n \frac{\partial \varphi_\nu}{\partial z^j} \frac{\partial z^j}{\partial x^i}, \hspace{2cm} \nu=1,2,...,m, \end{equation} where we need, on the right-hand side, only the first term, so that we write: \begin{equation} \label{eq:22} \frac{\partial \varphi_\nu}{\partial x^i} = \frac{\partial \varphi_\nu}{\partial z}p_i + ..., \hspace{2cm} \nu=1,2,...,m. \end{equation} The insertion of ($\ref{eq:22}$) into the system ($\ref{eq:10}$) yields \begin{equation}\label{eq:23} \sum\limits_{\nu=1}^m \frac{\partial \varphi_\nu}{\partial z} \sum\limits_{i=0}^n E^i_{\mu \nu} p_i + ... = 0, \hspace{2cm} \mu=1,2,...,m. \end{equation} If now one sets \begin{equation} \label{eq:24} \omega^{(1)}_{\mu \nu} \equiv \sum\limits_{i=0}^n E^i_{\mu \nu}p_i, \end{equation} the transformed system turns out to be normal provided that \begin{equation} \label{eq:25} \Omega^{(1)} \equiv det \omega^{(1)}_{\mu \nu} \neq 0, \hspace{2cm} \mu,\nu=1,2,...,m.
\end{equation} As far as the system ($\ref{eq:11}$) is concerned, one finds in an analogous way \begin{equation}\label{eq:26} \frac{\partial^2 \varphi_\nu}{\partial x^i \partial x^k} = \frac{\partial^2 \varphi_\nu}{\partial z^2} p_i p_k + ..., \end{equation} and Eqs. $(\ref{eq:11})$ are turned into \begin{equation*} \sum_{\nu =1}^m \frac{\partial^2 \varphi_\nu}{\partial z^2} \sum_{i, k=0}^n E^{ik}_{\mu \nu} p_i p_k + ...=0, \hspace{2cm} \mu=1, 2, ..., m. \end{equation*} If one defines the matrix \begin{equation} \label{eq:27} \omega^{(2)}_{\mu \nu} \equiv \sum\limits_{i,k=0}^n E^{ik}_{\mu \nu} p_i p_k, \end{equation} the condition of normality of the system is expressed by the non-singularity of this matrix, i.e. \begin{equation} \label{eq:28} \Omega^{(2)} \equiv \det \omega^{(2)}_{\mu \nu} \neq 0, \hspace{2cm} \mu, \nu=1,2,...,m. \end{equation} Note that, in Eq. ($\ref{eq:25}$), the $\omega^{(1)}_{\mu \nu}$ are linear forms of the variables $p_0$, $p_1$, ..., $p_n$, and hence $\Omega^{(1)}$ is a form of degree $m$ in such arguments, while in Eq. ($\ref{eq:28}$) the $\omega^{(2)}_{\mu \nu}$ are quadratic forms of the $p$, and hence $\Omega^{(2)}$ is a form of degree $2m$ of the arguments $p_0$, $p_1$, ..., $p_n$. \\ In the case of the single Eq. ($\ref{eq:12}$), the determinant reduces to the only element \begin{equation} \label{eq:29} \Omega = \sum\limits_{i,k=0}^n E^{ik}p_i p_k. \end{equation} To sum up, to every function $z(x^0, x^1, ..., x^n)$ for which $\Omega$ does not vanish identically, there corresponds a family of hypersurfaces $z=z^0$, starting from each of which it is still possible to solve the Cauchy problem. This consists in determining the unknown functions when the initial data are assigned on the hypersurface itself. This holds by virtue of the normal character of the transformed system with respect to $z$.
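As a concrete numerical illustration of the condition $\Omega \neq 0$ (a sketch, not part of the original treatment; the speed $V$ and the sample hypersurfaces are arbitrary choices), consider the scalar wave equation, for which $E^{00}=1/V^2$, $E^{ii}=-1$ for $i=1,2,3$ and the mixed coefficients vanish:

```python
# Characteristic form of the scalar wave equation:
#   Omega(p) = p_0^2 / V^2 - p_1^2 - p_2^2 - p_3^2,
# where p_i = dz/dx^i is the gradient of a candidate carrier hypersurface z = const.
# V and the example hypersurfaces are illustrative choices, not from the text.

V = 340.0  # propagation speed, e.g. sound in air (m/s)

def Omega(p0, p1, p2=0.0, p3=0.0, V=V):
    return p0**2 / V**2 - p1**2 - p2**2 - p3**2

# z = t (a constant-time hyperplane): gradient (1, 0, 0, 0) -> Omega != 0, normal carrier
assert abs(Omega(1.0, 0.0) - 1.0 / V**2) < 1e-15

# z = x - V t (a plane travelling at speed V): gradient (-V, 1, 0, 0) -> Omega = 0
assert abs(Omega(-V, 1.0)) < 1e-12
```

The constant-time hyperplane is thus a legitimate carrier, while the plane travelling at speed $V$ is characteristic.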
\\ When the function $z(x^0, x^1, ..., x^n)$ satisfies the equation \begin{equation} \label{eq:30} \Omega=0, \end{equation} it is no longer possible to apply (regardless of the value taken by the constant $z^0$) the Cauchy theorem starting from the carrier hypersurfaces $z=z^0$. In such a case, the carrier hypersurfaces are said to be $\textit{characteristic manifolds}$ \cite{Friedlander:2010eqa, civita1931caratteristiche}. \\ Equation ($\ref{eq:30}$) warns us that the system expressed in the variables $z, z^1, ..., z^n$ is not normal with respect to $z$, and makes it possible to single out the manifolds on which one cannot state that the unknown functions are determined, once the values of the unknown functions and of their derivatives of order less than the maximum have been assigned on the manifold. \\ For the case of the single Eq. ($\ref{eq:12}$), the characteristic manifold is the one satisfying the equation \begin{equation} \label{eq:31} \sum\limits_{i,k=0}^n E^{ik}p_i p_k =0. \end{equation} \\ Such a manifold is necessarily complex if the quadratic form on the left-hand side of ($\ref{eq:31}$) is positive-definite. Otherwise the manifold is real, provided that the initial data, called Cauchy data, are real. Equation ($\ref{eq:31}$) can then be viewed as expressing the vanishing of the square of the pseudo-norm of the normal vector, so that the characteristic manifold is a null hypersurface. In other words, with the nomenclature of relativity and pseudo-Riemannian geometry, $\textit{characteristic manifolds}$ are null hypersurfaces. \section{The Concept of Wavelike Propagation} The scalar wave equation ($\ref{eq:13}$) can be applied, in particular, to the acoustic vibrations of the air, or to the vibrations of other gaseous masses, since one can neglect in a first approximation any dissipative effect and hence consider the motion as if it were irrotational, without heat exchange among particles (this behaviour is called $\textit{adiabatic}$). If the velocity potential $\varphi$ in Eq.
( $\ref{eq:13}$) describes sound vibrations in the air, the three spatial partial derivatives represent the velocity components of the air molecule located in $(x^1, x^2, x^3)$ at time $t$. More precisely, what is vibrating at a generic time $t$ is a certain stratum of air, placed in between the two surfaces \begin{equation} \label{eq:32} z(t|x)=c_1, \hspace{1cm} z(t|x) = c_2 . \end{equation} Outside this stratum there is rest; i.e. the solution of Eq. ($\ref{eq:13}$) vanishes, whereas within the stratum the acoustic phenomenon is characterized by a non-vanishing solution $\varphi(t|x)$. \\ From now on, without insisting on the acoustic interpretation of the solutions of Eq. ($\ref{eq:13}$), we assume that $\varphi(t|x)$ and $\varphi^*(t|x)$ are solutions of this equation within and outside of the stratum determined by the surfaces in Eq. ($\ref{eq:32}$), respectively.\\ The phenomenon described by Eq. ($\ref{eq:13}$) is characterized by two distinct functions, depending on whether it is studied inside or outside the stratum. Across the surfaces of Eq. ($\ref{eq:32}$) the derivatives of various orders of $\varphi$ will undergo, in general, sudden variations and, for this reason, one says we are dealing with $\textit{discontinuity surfaces}$. Now it may happen that such discontinuities vary with time, in which case the discontinuity that undergoes propagation is said to be a $\textit{wave}$.\\ Thus, if Eq. ($\ref{eq:13}$) is interpreted as characterizing a possible wavelike propagation, the discontinuity surface (or $\textit{wave surface}$) bounds a stratum that undergoes displacement and, possibly, deformation with time. We shall assume that, during the motion, no interpenetration or molecular cavitation occurs, so that, on passing through a wave surface, the normal component of the velocity of a generic particle does not undergo any discontinuity.
We also rule out the possible occurrence of sliding phenomena of molecules on such surfaces, which would lead to tangential discontinuities of the velocity of particles. Moreover, in light of the postulates on the pressure that the mechanics of continua relies upon, the pressure cannot, under ordinary conditions, undergo sudden jumps, even if the state of motion were to change abruptly. The density $\mu$ is related to the pressure $p$ by the equation of state $f(\mu,p)=0$, which is the same on both sides of the discontinuity surface. The continuity of $p$ implies therefore that also $\mu$ is continuous. On the other hand, we have \begin{equation}\label{eq:33} \frac{\partial \varphi}{\partial t} + V^2 \sigma =0, \end{equation} so that the derivatives of $\varphi$ and $\varphi^*$ with respect to $t$ represent a density up to a constant factor; hence such derivatives must also be continuous across the wave surface.\\ By virtue of all previous considerations one can say that, for the Eq. ($\ref{eq:13}$) to describe a wavelike propagation, one has to assume the existence of two different solutions, say $\varphi$ and $\varphi^*$, taken to characterize the physical phenomenon inside and outside of a stratum, that match each other, i.e. have equal first-order derivatives in time and space, across the wave surface which bounds the stratum at every instant of time. The second derivatives undergo instead sudden variations.\\ Let us now consider one of the wave surfaces $\sigma$ which, at time $t$, bound the stratum where the perturbation is taking place, and let $n$ be the outward-pointing normal to such a stratum at a generic point $P$. The surface undergoes motion and, at time $t + dt$, intersects the normal $n$ at a point $Q$. The measure of the $PQ$ segment, counted positively towards the exterior, can be denoted $d\textit{n}$.
The ratio \begin{equation} \label{eq:34} a \equiv \frac{d \textit{n}}{dt} \end{equation} \\ is said to be the $\textit{progression velocity}$ of the wave surface at the point $P$ at the instant of time under consideration. Under ordinary circumstances, at all points of one of the two limiting surfaces of the stratum, $\textit{a}$ is positive, while at all points of the other limiting surface $\textit{a}$ is negative. The former surface is said to be a $\textit{wave front}$ or a $\textit{bow}$, while the latter is said to be a $\textit{poop}$. The difference \begin{equation} \label{eq:35} v(P) \equiv a - \frac{d \varphi}{d \textit{n}} \end{equation} \\ between the progression velocity and the component orthogonal to $\sigma$ of the velocity of the fluid particle placed at the point $P$ at the instant $t$ is said to be the $\textit{normal propagation velocity}$ of the surface $\sigma$ at the point $P$. This velocity measures the rate at which the surface is moving with respect to the medium (and not with respect to the fixed axes!). If outside the stratum there is a rest condition, the solution $\varphi^*$ vanishes and therefore, by virtue of the matching conditions at $\sigma$, one can write that \begin{equation} \label{eq:36} \frac{d \varphi}{ d \textit{n}} =0 \Rightarrow v(P)= a. \end{equation} \\ In this particular case the propagation velocity coincides with the progression velocity.\\ Note now that the surface $\sigma$ is a characteristic manifold of Eq. ($\ref{eq:13}$), i.e. an integral of the equation \begin{equation} \label{eq:37} \frac{1}{V^2}(p_0)^2 - \sum\limits_{i=1}^3(p_i)^2=0. \end{equation} \\ Indeed, if this were not true, a unique solution of Eq. ($\ref{eq:13}$) would be determined in the neighbourhood of $\sigma$ by the mere knowledge of the values taken upon $\sigma$ by $\varphi$ and $\frac{\partial \varphi}{\partial t}$, in light of Cauchy's theorem. 
The wavelike propagation is therefore possible because the wave surfaces are characteristic manifolds. In order to further appreciate how essential the consideration of characteristic manifolds is, let us study the following example \cite{civita1931caratteristiche}. Let us assume for simplicity that we study the wave equation ($\ref{eq:13}$) in two-dimensional Minkowski space-time, with $x^1$ denoted by $x$. Hence we write it in the form \begin{equation} \label{eq:38} \bigg{(} \frac{1}{V^2} \frac{\partial^2}{\partial t^2} - \frac{\partial^2}{\partial x^2} \bigg{)} \varphi = \bigg{(} \frac{1}{V} \frac{\partial}{\partial t} + \frac{\partial}{\partial x} \bigg{)} \bigg{(} \frac{1}{V} \frac{\partial}{\partial t} - \frac{\partial}{\partial x} \bigg{)} \varphi =0. \end{equation} \\ The form of Eq. ($\ref{eq:38}$) suggests defining the new variables \begin{equation} \label{eq:39} z \equiv x - Vt, \hspace{1cm} z_1 \equiv x + Vt, \end{equation} from which the original variables are re-expressed as \begin{equation}\label{eq:40} x= \frac{1}{2} (z+z_1), \hspace{1cm} t= \frac{1}{2} \frac{(z_1-z)}{V}. \end{equation} Moreover, the standard rules for differentiation of composite functions lead now to \begin{equation}\label{eq:41} \frac{\partial}{\partial z}= \frac{1}{2} \bigg{(} \frac{\partial}{\partial x}- \frac{1}{V} \frac{\partial}{\partial t} \bigg{)}, \hspace{1cm} \frac{\partial}{\partial z_1} = \frac{1}{2} \bigg{(} \frac{\partial}{\partial x} + \frac{1}{V} \frac{\partial}{\partial t} \bigg{)}, \end{equation} \\ and hence Eq. ($\ref{eq:38}$) reads as \begin{equation} \label{eq:42} \frac{\partial^2 \varphi}{\partial z \partial z_1} = 0, \end{equation} which is solved by a sum of arbitrary smooth functions \begin{equation} \label{eq:43} \varphi(z,z_1)= \alpha(z) + \beta (z_1) \end{equation} depending only on $z$ and on $z_1$, respectively.
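The general solution ($\ref{eq:43}$) is easily checked numerically; the sketch below (the functions $\alpha$, $\beta$ and the sample point are arbitrary illustrative choices, not taken from the text) verifies by centred finite differences that $\varphi = \alpha(x - Vt) + \beta(x + Vt)$ satisfies Eq. ($\ref{eq:38}$):

```python
import math

# Verify that phi(t, x) = alpha(x - V t) + beta(x + V t) satisfies
# (1/V^2) phi_tt - phi_xx = 0, using centred second differences.
# alpha, beta, V and the sample point are illustrative choices.

V = 2.0
alpha = lambda z: math.sin(z)          # arbitrary smooth function of z = x - V t
beta  = lambda z1: math.exp(-z1**2)    # arbitrary smooth function of z1 = x + V t
phi = lambda t, x: alpha(x - V * t) + beta(x + V * t)

t0, x0, h = 0.3, 0.7, 1e-4
phi_tt = (phi(t0 + h, x0) - 2 * phi(t0, x0) + phi(t0 - h, x0)) / h**2
phi_xx = (phi(t0, x0 + h) - 2 * phi(t0, x0) + phi(t0, x0 - h)) / h**2

residual = phi_tt / V**2 - phi_xx
assert abs(residual) < 1e-5  # zero up to discretization error
```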
Thus, it is not possible in general to solve the Cauchy problem for a carrier line $z=c$, but it is necessary that the data satisfy a compatibility condition. In our case, from the solution ($\ref{eq:43}$) one finds \begin{equation} \label{eq:44} \varphi(c,z_1)= \alpha(c) + \beta(z_1), \hspace{1cm} \bigg{(} \frac{\partial \varphi}{\partial z}\bigg{)}_{z=c} = \alpha'(c). \end{equation} \\ The functions $\varphi_0=\varphi(z=c)$ and $\varphi_1= \big{(}\frac{\partial \varphi}{\partial z} \big{)}_{z=c}$ of the variable $z_1$ cannot therefore be chosen at will: the function $\varphi_1(z_1)$ must be a constant, in which case there exist infinitely many solutions of the Cauchy problem for the scalar wave equation. \section{The Concept of Hyperbolic Equation} The scalar wave equation ($\ref{eq:13}$) is a good example of a hyperbolic equation, but before we go on it is appropriate to define what an equation of hyperbolic type is. Following Leray \cite{leray1955hyperbolic}, we first define this concept on a vector space and then on a manifold.\\ We consider an $l$-dimensional vector space $X$ over the field of real numbers, whose dual vector space is denoted by $\Xi$. A point of $X$ is written $x=(x^1, ..., x^l)$, while $p=\big{(} \frac{\partial}{\partial x^1},..., \frac{\partial}{\partial x^l}\big{)}$ denotes the vector of partial-derivative operators, formally associated with a point $\xi \in \Xi$. A differential equation of order $m$ can therefore be written in the form \begin{equation} \label{eq:45} a(x,p)u(x)=v(x), \end{equation} \\ where $a(x,\xi)$ is a given real polynomial in $\xi$ of degree $m$ whose coefficients are functions defined on $X$, $u(x)$ is the unknown function and $v(x)$ a given function. Let $h(x,\xi)$ be the sum of the homogeneous terms of $a(x,\xi)$ of degree $m$ (also called the $\textit{leading symbol}$ of the differential operator $a(x,p)$), and let $V_x(h)$ be the cone defined in $\Xi$ by the equation \begin{equation} \label{eq:46} h(x,\xi) =0.
\end{equation} \\ The differential operator $a(x,p)$ is said to be $\textit{hyperbolic at the point x}$ if $\Xi$ contains points $\xi$ such that any real line through $\xi$ cuts the cone $V_x(h)$ at $m$ real and distinct points. These points $\xi$ constitute the interior of two opposite convex and closed half-cones $\Gamma_x(a)$ and $- \Gamma_x(a)$, whose boundaries belong to $V_x(h)$.\\ Suppose that the following conditions hold: \begin{description} \item[(i)] The operator $a(x,p)$ is hyperbolic at each point $x$ of the vector space $X$. \item[(ii)] The set \begin{equation} \label{eq:47} \Gamma_X \equiv \cap_{x \in X} \Gamma_x \end{equation} has a non-empty interior. \item[(iii)] No limit of $h(x,\xi)$ as the norm of $x$ approaches 0 is vanishing. \item[(iv)] No limit of the cones $V_x(h)$ as the norm of $x$ approaches infinity has a singular generator. \end{description} Under such circumstances, the operator $a(x,p)$ is said to be $\textit{regularly}$ $\textit{hyperbolic}$ on $X$. When $X$ is instead an $l$-dimensional $(m+ M)$-smooth manifold, not necessarily complete, the operator $a(x,p)$ is said to be $\textit{hyperbolic on}$ $X$ when the following conditions hold: \begin{description} \item[(1)] $a(x,p)$ is hyperbolic at any point $x$ of $X$, in the sense specified above. \item[(2)] The set of timelike paths (i.e. with timelike tangent vector) from $y$ to $z$ is compact or empty for any $y$ and $z$ $\in$ $X$. \item[(3)] Either the coefficients of $a(x,p)$ have $\textit{locally bounded}$ (which means boundedness on any compact subset of $X$) derivatives of order $M$ such that $1 \leq M \leq l$, or they have locally bounded derivatives of order $\leq l'$ and locally square integrable derivatives of order $>l'$ and $\leq M$, $l'$ being the smallest integer $> \frac{l}{2}$. This technical condition will become clear in one of the following chapters. \item[(4)] The total curvature of the interior of $\Gamma_x$ is positive.
If $M=1$, then the first derivatives of the coefficients of $h(x,\xi)$ are continuous. \end{description} \section{Riemann Kernel} The modern theory of hyperbolic equations was initiated by Riemann's representation of the solution of the initial-value problem for an equation of second order. Riemann was motivated by a very concrete problem in acoustics, but here we focus on the mathematical ingredients of his conceptual construction. \\ Given a differential expression $\varphi(x, y, y^1, ..., y^n)$ of the variable $x$, a function $y$ and its derivatives up to the $n^{\rm th}$ order, the equation \begin{equation} \label{eq:48} \frac{\partial \varphi}{\partial y} - \frac{d}{dx}\bigg{(} \frac{\partial \varphi}{\partial y^1}\bigg{)} + \frac{d^2}{dx^2}\bigg{(} \frac{\partial \varphi}{\partial y^2}\bigg{)} - ... =0 \end{equation} expresses the necessary and sufficient condition such that the $\varphi$ function is the total derivative with respect to $x$ of a function $\psi$ which contains, at the same time, the independent variable $x$, the function $y$ and its $(n-1)$ first derivatives.
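The necessity direction of this statement can be checked symbolically. The sketch below (using sympy, with an arbitrary symbolic $\psi$; the case $n=1$ only) verifies that if $\varphi$ is the total $x$-derivative of some $\psi(x,y)$, the left-hand side of Eq. ($\ref{eq:48}$) vanishes identically:

```python
import sympy as sp

# If phi is the total x-derivative of psi(x, y), then the Euler expression
# dphi/dy - d/dx (dphi/dy1) of Eq. (48) vanishes.  The symbols y, y1, y2
# stand for y, y', y''; psi is an arbitrary symbolic function.
x, y, y1, y2 = sp.symbols('x y y1 y2')
psi = sp.Function('psi')(x, y)

# phi = d(psi)/dx along solutions: psi_x + psi_y * y'
phi = sp.diff(psi, x) + sp.diff(psi, y) * y1

def Dx(F):
    """Total derivative d/dx, treating y and y1 as functions of x."""
    return sp.diff(F, x) + sp.diff(F, y) * y1 + sp.diff(F, y1) * y2

euler = sp.diff(phi, y) - Dx(sp.diff(phi, y1))
assert sp.simplify(euler) == 0
```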
In the same way, if we consider an expression $\varphi \big{(}x, y, z, \frac{\partial z}{\partial x}, \frac{\partial z}{\partial y}, \frac{\partial^2 z}{\partial x^2}, \frac{\partial^2 z}{\partial x \partial y}, \frac{\partial^2 z}{\partial y^2}, \dots \big{)}$ which contains two independent variables, a function $z$ of them and its partial derivatives up to any $n^{\rm th}$ order, the equation \begin{equation} \label{eq:49} \begin{split} &\frac{\partial \varphi}{\partial z} - \frac{\partial}{\partial x} \bigg{(} \frac{\partial \varphi}{\partial ( \frac{\partial z}{\partial x}) }\bigg{)} - \frac{\partial}{\partial y} \bigg{(} \frac{\partial \varphi}{\partial ( \frac{\partial z}{\partial y}) }\bigg{)} + \frac{\partial^2}{\partial x^2} \bigg{(} \frac{\partial \varphi}{\partial ( \frac{\partial^2 z}{\partial x^2}) }\bigg{)}\\ &+ \frac{\partial^2}{\partial y^2} \bigg{(} \frac{\partial \varphi}{\partial ( \frac{\partial^2 z}{\partial y^2}) }\bigg{)} + \frac{\partial^2}{\partial x \partial y} \bigg{(} \frac{\partial \varphi}{\partial ( \frac{\partial^2 z}{\partial x \partial y}) }\bigg{)}- ... =0 \end{split}\end{equation} expresses the necessary and sufficient condition such that $\varphi$ can be read as $\frac{\partial M}{\partial x} + \frac{\partial N}{\partial y}$, where $M$ and $N$ are functions of $x$, $y$, $z$ and of their partial derivatives up to an order that can be reduced to $n$ or to $(n-1)$. Now, we consider a linear hyperbolic equation of order $n$ \begin{equation} \label{eq:50} L[z] = \sum \limits_{i,k=0}^n A_{ik} \frac{\partial^{i+k} z}{\partial x^i \partial y^k} = 0. \end{equation} \\ If we multiply the left-hand side by an unknown function $u$ and require that Eq. ($\ref{eq:49}$) be verified, we obtain the linear equation \begin{equation} \label{eq:51} L^*[u] = \sum\limits_{i,k=0}^n (-1)^{i+k} \frac{\partial^{i+k}}{\partial x^i \partial y^k} (A_{ik} u) =0, \end{equation} \\ which defines $u$. This equation is the adjoint of the proposed equation.
For any $z$ and $u$, a series of integrations by parts leads us to the identity \begin{equation} \label{eq:52} u L[z] - z L^*[u] = \frac{\partial M}{\partial x} + \frac{\partial N}{\partial y}, \end{equation} \\ where $M$ and $N$ have the following values: \begin{equation} \label{eq:53} \left\{\begin{array} {l} M= A_{10} zu + A_{20} u \frac{\partial z}{\partial x} - z \frac{\partial (A_{20} u)}{ \partial x} + \frac{1}{2} A_{11}u \frac{\partial z}{\partial y} - \frac{1}{2} z \frac{\partial (A_{11} u)}{\partial y} + ... \\ N= A_{01}zu + A_{02} u \frac{\partial z}{\partial y} - z \frac{\partial (A_{02} u)}{ \partial y} + \frac{1}{2} A_{11}u \frac{\partial z}{\partial x} - \frac{1}{2} z \frac{\partial (A_{11} u)}{\partial x} + ... \end{array}\right. \end{equation} and depend on $z$, $u$ and their partial derivatives up to the $(n-1)^{\rm th}$ order.\\ It is important to remark that the expressions of $M$ and $N$ are not completely specified. The right-hand side of the identity ($\ref{eq:52}$) has the same expression if we replace $M$ and $N$ with $M + \frac{\partial \theta}{\partial y} $ and $ N - \frac{\partial \theta}{\partial x}$, and we can take as $\theta$ a linear function of $z$, $u$ and their partial derivatives up to the $(n-2)^{\rm th}$ order, without changing the general form of the values of $M$ and $N$. We can deduce from the previous identity that the relation between $L[z]$ and $L^*[u]$ is mutual, meaning that each equation is the adjoint of the other. \\ To establish the identity $ u L[z] - z L^*[u] = \frac{ \partial M}{\partial x} + \frac{\partial N}{\partial y}$ in all its generality, we can make the following calculation.\\ One can verify that the expression $u \frac{\partial^i v}{\partial x^i} - (-1)^i v \frac{\partial^i u}{\partial x^i} $ is the exact derivative with respect to $x$ of a function of $u$ and $v$ and of their derivatives up to the $(i-1)^{\rm th}$ order.
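For $i=2$, for instance, the claim reads $u v'' - v u'' = \frac{d}{dx}(u v' - v u')$; a symbolic check of this special case (a sketch with arbitrary functions $u$, $v$, not part of the original derivation):

```python
import sympy as sp

# For i = 2: u v'' - (-1)^2 v u'' = u v'' - v u'' should be the exact
# x-derivative of a function of u, v and their first derivatives,
# namely u v' - v u'.  u and v are arbitrary symbolic functions of x.
x = sp.Symbol('x')
u = sp.Function('u')(x)
v = sp.Function('v')(x)

lhs = u * sp.diff(v, x, 2) - v * sp.diff(u, x, 2)
rhs = sp.diff(u * sp.diff(v, x) - v * sp.diff(u, x), x)
assert sp.simplify(lhs - rhs) == 0
```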
If we replace $v$ with $\frac{\partial^k v}{\partial y^k}$, we have \begin{equation} \label{eq:54} u \frac{\partial^{i+k} v}{\partial x^i \partial y^k} - (-1)^i \frac{\partial^k v}{\partial y^k} \frac{\partial^i u}{\partial x^i} = \frac{\partial P}{\partial x}, \end{equation} \\ where $P$ contains the derivatives of $u$ and $v$ up to the order $(i+k-1)$.\\ If we replace, in the previous equation, $u$ with $v$, $x$ with $y$ and $i$ with $k$, we have \begin{equation} \label{eq:55} v \frac{\partial^{i+k}u}{\partial x^i \partial y^k} - (-1)^k \frac{\partial^i u}{\partial x^i}\frac{\partial^k v}{\partial y^k} = \frac{\partial Q}{\partial y}. \end{equation} \\ The combination of equations $(\ref{eq:54})$ and $(\ref{eq:55})$ gives us the most general identity \begin{equation} \label{eq:56} u \frac{\partial^{i+k} v}{\partial x^i \partial y^k} - (-1)^{i-k}v \frac{\partial^{i+k} u}{\partial x^i \partial y^k} = \frac{\partial P}{\partial x} - (-1)^{i - k} \frac{\partial Q}{\partial y}, \end{equation} \\ where $P$ and $Q$ contain the $u$ and $v$ partial derivatives up to the $(i+k-1)^{\rm th}$ order. We have \begin{equation} \label{eq:57} u L[z] - z L^*[u] = \sum\limits_{i,k=0}^n \bigg{(} u A_{ik} \frac{\partial^{i+k} z}{\partial x^i \partial y^k} - (-1)^{i-k}z \frac{\partial^{i+k}(A_{ik}u)}{\partial x^i \partial y^k}\bigg{)} \end{equation} and it is possible to use the identity ($\ref{eq:56}$), by replacing $u$ with $A_{ik}u$ and $v$ with $z$, to recognize that the right-hand side of the previous equation can be written as $\frac{\partial M}{\partial x} + \frac{\partial N}{\partial y}$, where $M$ and $N$ contain the derivatives up to the order $(n-1)$.
Let us now consider the double integral \begin{equation} \label{eq:58} \int \int dx dy (u L[z] - z L^*[u]) = \int \int dx dy \bigg{(} \frac{\partial M}{\partial x} + \frac{\partial N}{\partial y}\bigg{)} \end{equation} extended to a plane region $A$, which we suppose to be simply connected and bounded by the curve $S$; this double integral has the same value as the simple integral $\int ( M dy - N dx)$ extended to the boundary $S$ traversed in the positive sense. Thus, Eq. ($\ref{eq:58}$) may be written in the form \begin{equation} \label{eq:59} \int \int dx dy (u L[z] - z L^*[u]) = \int_S (M dy - N dx), \end{equation} which is equivalent to the identity ($\ref{eq:52}$). It is possible to recognize that the indeterminacy stated above for the values of $M$ and $N$ does not affect the previous equation. Indeed, if we replace in Eq. ($\ref{eq:59}$) $M$ and $N$ with their more general values $M + \frac{\partial \theta}{\partial y}$ and $ N - \frac{\partial \theta}{\partial x}$, the right-hand side of the previous equation increases by the integral $\int_S d \theta$, which clearly vanishes whenever $\theta$ is a finite and single-valued function inside the area $A$.\\ To discuss Riemann's method, let us consider a second-order linear hyperbolic equation in two variables that can be written in two equivalent forms \begin{equation}\label{eq:60} L[z] \equiv \bigg{(} \frac{\partial^2}{\partial x \partial y} + a(x,y) \frac{\partial}{\partial x} + b(x,y) \frac{\partial}{\partial y} + c(x,y) \bigg{)} z = f(x,y), \end{equation} \begin{equation} \label{eq:61} L[z] \equiv \bigg{(} \frac{\partial^2}{\partial y^2} - \frac{\partial^2}{\partial x^2} + d(x,y) \frac{\partial}{\partial x} + h(x,y) \frac{\partial}{\partial y} + e(x,y) \bigg{)} z = f(x,y), \end{equation} where $a$, $b$, $c$, $d$, $h$, $e$ and $f$ are of a suitable differentiability class. The initial curve $C$ is taken to be nowhere tangent to a characteristic direction; the characteristics pertaining to Eq.
($\ref{eq:60}$) are straight lines parallel to the coordinate axes; the characteristics in Eq. ($\ref{eq:61}$) are the lines $x + y = const.$ and $x - y = const$. The aim is to represent a solution $z$ at a point $P$ in terms of $f$ and of the initial data, i.e. the values taken by $z$ and one derivative of $z$ on $C$. If the initial curve degenerates into a right angle formed by the characteristics $x = \gamma$ and $y= \delta$, it is no longer possible to prescribe two conditions on the initial curve $C$, but it is necessary to consider the $\textit{characteristic initial-value problem}$, in which only the values of $z$ on $x = \gamma$ and $y=\delta$ are prescribed. Now we choose to consider Eq. ($\ref{eq:60}$) and to use Riemann's method, which consists in multiplying the hyperbolic equation by a function $u$, integrating over a region $A$, transforming the integral by Green's formula such that $z$ appears as a factor of the integrand, and then trying to determine $u$ in such a way that the required representation is obtained. This procedure is implemented by introducing the adjoint operator $L^*$, defined to give, as we have seen before, $u L[z] - zL^*[u]$, which is a divergence.
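This divergence property can be verified symbolically; the sketch below (using sympy, with arbitrary coefficient functions; a verification, not part of the original derivation) checks it for the operator of Eq. ($\ref{eq:60}$), with the adjoint and the functions $M$, $N$ that integration by parts produces:

```python
import sympy as sp

# Symbolic check that u L[z] - z L*[u] = dM/dx + dN/dy for
# L[z] = z_xy + a z_x + b z_y + c z, with
# L*[u] = u_xy - a u_x - b u_y + (c - a_x - b_y) u,
# M = a u z + (u z_y - z u_y)/2,  N = b u z + (u z_x - z u_x)/2.
x, y = sp.symbols('x y')
a, b, c, z, u = [sp.Function(n)(x, y) for n in 'abczu']

L  = sp.diff(z, x, y) + a * sp.diff(z, x) + b * sp.diff(z, y) + c * z
Ls = (sp.diff(u, x, y) - a * sp.diff(u, x) - b * sp.diff(u, y)
      + (c - sp.diff(a, x) - sp.diff(b, y)) * u)

M = a * u * z + (u * sp.diff(z, y) - z * sp.diff(u, y)) / 2
N = b * u * z + (u * sp.diff(z, x) - z * sp.diff(u, x)) / 2

identity = u * L - z * Ls - sp.diff(M, x) - sp.diff(N, y)
assert sp.simplify(identity) == 0
```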
For the hyperbolic equation in the form ($\ref{eq:60}$), the adjoint operator $L^*$ turns out to be \begin{equation} \label{eq:62} L^*[u] = \bigg{(} \frac{\partial^2}{\partial x \partial y} - \frac{\partial a}{\partial x} - a(x,y) \frac{\partial}{\partial x} - \frac{\partial b}{\partial y} - b(x,y) \frac{\partial}{\partial y} + c(x,y) \bigg{)} u, \end{equation} and hence we have \begin{equation} \label{eq:63} \left\{\begin{array} {l} L[z] = \frac{\partial^2 z}{\partial x \partial y} + a \frac{\partial z}{\partial x} + b \frac{\partial z}{\partial y} + cz, \\ L^*[u] = \frac{\partial^2 u}{\partial x \partial y} - a \frac{\partial u}{\partial x} - b \frac{\partial u}{\partial y} + \bigg{(} c - \frac{\partial a}{\partial x} - \frac{\partial b}{\partial y} \bigg{)} u, \\ M = auz + \frac{1}{2} \bigg{(} u \frac{\partial z}{\partial y} - z \frac{\partial u}{\partial y} \bigg{)}, \\ N= buz + \frac{1}{2} \bigg{(} u \frac{\partial z}{\partial x} - z \frac{\partial u}{\partial x} \bigg{)}. \end{array}\right. \end{equation} More precisely, the identity $ u L[z] - z L^*[u] = \frac{\partial M}{\partial x} + \frac{\partial N}{\partial y}$ reads as \begin{equation*} \begin{split} & u L[z] - zL^*[u] = u\frac{\partial^2 z}{\partial x \partial y} + au \frac{\partial z}{\partial x} + bu\frac{\partial z}{\partial y} + cu z - z \frac{\partial^2 u }{\partial x \partial y} + az \frac{\partial u}{\partial x} \\ &+ bz \frac{\partial u}{\partial y} - z \bigg{(} c - \frac{\partial a}{\partial x} - \frac{\partial b}{\partial y} \bigg{)} u = \frac{\partial}{\partial x}(azu) + \frac{\partial}{\partial y}(bzu) + u \frac{\partial^2 z}{\partial x \partial y} - z \frac{\partial^2 u}{\partial x \partial y}\\ & = \frac{\partial}{\partial x} \bigg{[} azu + \frac{1}{2} \bigg{(} u \frac{\partial z}{\partial y} - z \frac{\partial u}{\partial y} \bigg{)}\bigg{]} + \frac{\partial}{\partial y} \bigg{[} bzu + \frac{1}{2} \bigg{(} u \frac{\partial z}{\partial x} - z \frac{\partial u}{\partial x} \bigg{)} \bigg{]}.
\end{split}\end{equation*} This equation can be re-expressed in the form \begin{equation}\label{eq:64}u L [z] - z L^*[u] = \frac{\partial}{\partial y} \bigg{(} u \frac{\partial z}{\partial x} + b zu \bigg{)} - \frac{\partial}{\partial x} \bigg{(} z \frac{\partial u}{\partial y} - a z u \bigg{)}. \end{equation} Let us take as $z$ and $u$ any two integrals of the proposed equation and of its adjoint equation, respectively. The integration over a two-dimensional domain $A$ with boundary $S$ and Gauss' formula lead to \begin{equation} \label{eq:65} - \int\int_A dx dy ( u L[z] - z L^*[u] ) = \int_S \bigg{[} \bigg{(} u \frac{\partial z}{\partial x} + bzu\bigg{)} dx + \bigg{(} z \frac{\partial u}{\partial y} - azu \bigg{)} dy \bigg{]}. \end{equation} Now, if $L[z]=0$ and $L^*[u]=0$, the left-hand side of Eq. ($\ref{eq:65}$) vanishes, and the equation reads as \begin{equation} \label{eq:66} \int_S \bigg{[}\bigg{(} u \frac{\partial z}{\partial x} + bzu \bigg{)} dx + \bigg{(} z \frac{\partial u}{\partial y} - azu \bigg{)} dy \bigg{]}=0, \end{equation} \\ or similarly \begin{equation*} \int_S \big{(} M dy - N dx \big{)} =0. \end{equation*} \\ Let $A$ be a point of the plane and $B'C'$ an arbitrary curve in this plane. We draw from $A$ two straight lines $AB$ and $AC$ parallel to the axes, which intersect the curve at $B$ and $C$, and suppose that the integrals $z$, $u$, as well as the coefficients of the proposed equation and their derivatives, are finite and continuous inside the area $ABC$.
By integrating the previous equation along the path $ACBA$ of Fig. $\ref{fig:1}$, we have \begin{equation} \label{eq:67} \int\limits_A^C M dy + \int\limits_C^B (M dy - N dx) - \int\limits_B^A N dx = 0, \end{equation} where \begin{equation} \label{eq:68} \int\limits_A^C M dy = \int\limits_A^C \bigg{[} \frac{1}{2} \frac{\partial (uz)}{\partial y} dy - z \bigg{(} \frac{\partial u}{\partial y} - au \bigg{)}dy \bigg{]}, \end{equation} \begin{equation} \label{eq:69} \int\limits_A^B N dx = \int\limits_A^B \bigg{[} \frac{1}{2} \frac{\partial (uz)}{\partial x} dx - z \bigg{(} \frac{\partial u}{\partial x} - bu \bigg{)}dx \bigg{]}. \end{equation} \begin{figure} \centering \includegraphics{fig1mod.png} \caption{}\label{fig:1} \end{figure} Then, if we denote by $\varphi_P$ the value of a function $\varphi$ at $P$, the previous equations read as \begin{equation} \label{eq:70} \int\limits_A^C M dy = \frac{ (uz)_C - (uz)_A}{2} - \int\limits_A^C z \bigg{(} \frac{\partial u}{\partial y} - au \bigg{)} dy, \end{equation} \begin{equation} \label{eq:71} \int\limits_A^B N dx = \frac{ (uz)_B - (uz)_A}{2} - \int\limits_A^B z \bigg{(} \frac{\partial u}{\partial x} - bu \bigg{)} dx. \end{equation} If we insert Eqs. ($\ref{eq:70}$) and ($\ref{eq:71}$) into Eq. ($\ref{eq:67}$), we have \begin{equation} \label{eq:72} \begin{split} & (uz)_A = \frac{(uz)_B + (uz)_C}{2} - \int\limits_B^C (M dy - N dx)\\ & - \int\limits_A^B z \bigg{(} \frac{\partial u}{\partial x} - bu \bigg{)} dx - \int\limits_A^C z \bigg{(} \frac{\partial u}{\partial y} - au \bigg{)} dy. \end{split} \end{equation} Let us study first the right-hand side of the previous equation. Our aim is to determine, using Riemann's method, the solution $z$ of the partial differential equation proposed, which assumes given values, as well as one of its two derivatives, along all points of the $B'C'$ curve.
The equation $ dz= \frac{\partial z}{\partial x} dx + \frac{\partial z}{\partial y}dy $, applied to a path along this curve, clearly determines the two first derivatives which are not given a priori; then we can consider the two derivatives of $z$ as known at each point of the $B'C'$ curve. It follows that, once we choose the solution $u$ of the adjoint equation, the three terms $ (uz)_B$, $(uz)_C$ and $\int\limits_B^C (M dy - N dx) $ are perfectly known and depend only on the boundary conditions imposed upon $z$. Therefore, we try to evaluate the two integrals on the right-hand side of Eq. ($\ref{eq:72}$) with $z_A$ as unknown. These two integrals depend, in general, on the unknown values of $z$ along the straight lines $AB$ and $AC$. To avoid these values, the solution $u$ has to be chosen in such a way that we have \begin{equation} \label{eq:73} \left\{\begin{array} {l} \frac{\partial u}{\partial x} - bu =0 \; \; {\rm along} \; {\rm every} \; {\rm point} \; {\rm of} \; {\rm AB} \\ \frac{\partial u}{\partial y} - au=0 \; \; {\rm along} \; {\rm every} \; {\rm point} \; {\rm of} \; {\rm AC} \end{array}\right. \end{equation} If these conditions hold, the fundamental equation reads as \begin{equation} \label{eq:74} (uz)_A = \frac{(uz)_B + (uz)_C}{2} - \int\limits_C^B (N dx - M dy) \end{equation} and it will give us the value of $z$ at any point $A$ of the plane region considered, depending only on the boundary conditions. Thus, we have to determine the solution $u$ of the adjoint equation in order to satisfy the previously stated conditions. To represent $z_A = z(A)= z(\xi,\eta)$, we choose as $u$ a two-point function or $\textit{kernel}$ $R(x,y;\xi, \eta)$, where $x$ and $y$ are the coordinates of any point while $\xi$ and $\eta$ are the coordinates of $A$, subject to the following conditions: \begin{description} \item[(i)] As a function of $x$ and $y$, $R$ satisfies the homogeneous equation \begin{equation} \label{eq:75} L^*_{(x,y)}[R]=0.
\end{equation} \item[(ii)] $\frac{\partial R}{\partial x} = bR$ on the segment $AB$ parallel to the $x$-axis, and $\frac{\partial R}{\partial y} = aR$ on the segment $AC$ parallel to the $y$-axis. More precisely, one has to write \begin{equation} \label{eq:76} \frac{\partial}{\partial x} R(x,y; \xi,\eta) - b(x,\eta)R(x,y;\xi,\eta) = 0 \hspace{1mm} \; {\rm on} \; {\rm y = \eta } \end{equation} \begin{equation} \label{eq:77} \frac{\partial}{\partial y} R(x,y; \xi,\eta) - a(\xi,y)R(x,y;\xi,\eta) = 0 \hspace{1mm} \; {\rm on} \; {\rm x = \xi} \end{equation} \item[(iii)] The kernel $R$ equals 1 at coinciding points, i.e. \begin{equation} \label{eq:78} R(\xi,\eta;\xi,\eta)=1. \end{equation} \end{description} Note that Eqs. ($\ref{eq:76}$) and ($\ref{eq:77}$) reduce to ordinary differential equations for the kernel $R$ along the characteristics. The integration of Eq. ($\ref{eq:76}$) gives us \begin{equation} \label{eq:79} R(x,\eta;\xi,\eta)= u_M = u_A \exp \bigg{(}\int\limits_A^M b(\lambda,\eta) d\lambda \bigg{)} \end{equation} \\ for every point $M$ along $AB$. In the same manner, we can integrate Eq. ($\ref{eq:77}$) and obtain \begin{equation} \label{eq:80} R(\xi,y;\xi,\eta)= u_M = u_A \exp \bigg{(}\int\limits_A^M a(\xi, \lambda) d\lambda \bigg{)} \end{equation} \\ for every point $M$ along $AC$. To fix the constant of integration $u_A$ to 1, we exploit Eq. ($\ref{eq:78}$). Therefore, we have \begin{equation} \label{eq:81} R(x,\eta;\xi,\eta)= u_M = \exp \bigg{(}\int\limits_\xi^x b(\lambda, \eta) d \lambda \bigg{)}, \end{equation} \begin{equation} \label{eq:82} R(\xi,y;\xi,\eta) = u_M = \exp \bigg{(}\int\limits_\eta^y a(\xi, \lambda) d \lambda \bigg{)}. \end{equation} These formulae provide the value of $R$ along the characteristics passing through the point $A(\xi,\eta)$. The problem of finding a solution $R$ of Eq. ($\ref{eq:75}$) with data ($\ref{eq:81}$) and ($\ref{eq:82}$) is said to be a $\textit{characteristic initial-value problem}$.
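Eqs. ($\ref{eq:81}$) and ($\ref{eq:82}$) reduce the determination of $R$ on the characteristics to an ordinary quadrature, which is easy to evaluate numerically. The following Python sketch does this for a sample coefficient $b(x,y)=xy$, chosen purely for illustration; for this integrand, linear in $\lambda$, the trapezoidal rule is exact, so the numerical kernel matches the closed form $\exp\left(\eta(x^2-\xi^2)/2\right)$.

```python
import math

def R_on_AB(x, xi, eta, b, n=2000):
    """Riemann kernel along the characteristic y = eta (Eq. 81):
    R(x, eta; xi, eta) = exp( integral from xi to x of b(lam, eta) dlam ),
    the integral being evaluated by the trapezoidal rule."""
    if x == xi:
        return 1.0   # coinciding points, Eq. (78)
    h = (x - xi) / n
    s = 0.5 * (b(xi, eta) + b(x, eta))
    for k in range(1, n):
        s += b(xi + k * h, eta)
    return math.exp(s * h)

# hypothetical coefficient b(x, y) = x*y, for illustration only
b = lambda x, y: x * y
xi, eta, x = 0.5, 1.0, 1.2
print(R_on_AB(xi, xi, eta, b))   # kernel equals 1 at coinciding points
print(R_on_AB(x, xi, eta, b), math.exp(eta * (x**2 - xi**2) / 2))
```

The same quadrature, applied along $AC$ with the coefficient $a(\xi,\lambda)$, gives the value prescribed by Eq. ($\ref{eq:82}$).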
Riemann did not actually prove the existence of such a solution, but brought this novel perspective into mathematics, i.e. solving hyperbolic equations by finding kernel functions that obey characteristic initial-value problems. In the case under examination, the desired Riemann representation formula can be written as \begin{equation} \begin{split} \label{eq:83} z_A = &\frac{z_B R(B;\xi,\eta) + z_C R(C; \xi, \eta)}{2} + \int\limits_{B}^C \bigg{(}\bigg{[} bRz + \frac{1}{2} \bigg{(} R \frac{\partial z}{\partial x} - \frac{\partial R}{\partial x} z \bigg{)}\bigg{]}dx \\ & - \bigg{[}aRz + \frac{1}{2} \bigg{(} R \frac{\partial z}{\partial y} - \frac{\partial R}{\partial y} z \bigg{)} \bigg{]}dy \bigg{)} = \frac{z_B R(B;\xi,\eta) + z_C R(C; \xi, \eta)}{2} \\ &+ \int\limits_{B}^C (N dx - M dy). \end{split} \end{equation} This is the fundamental result established by Riemann. He found the function $u$, which is the solution of an equation that is in fact $E(\beta,\beta')$ \cite{Friedlander:2010eqa}. Now, we want to make an observation about the previous results.\\ Let us suppose that the curve $BC$ reduces to two straight lines $B'D$ and $DC'$ parallel to the axes, as in fig. ($\ref{fig:2}$), and let $x_1$ and $y_1$ be the coordinates of the point $D$. We have \begin{equation} \label{eq:84} \int\limits_C^B (Ndx - M dy) = \int\limits_C^D N dx - \int\limits_D^B M dy.
\end{equation} \begin{figure} \centering \includegraphics{fig2mod.png} \caption{}\label{fig:2} \end{figure} The first term on the right-hand side of Eq. ($\ref{eq:84}$) can be rewritten as \begin{equation} \label{eq:85} \int\limits_C^D N dx = \int\limits_C^D \bigg{[}\frac{1}{2} \bigg{(} u \frac{\partial z}{\partial x} - z \frac{\partial u}{\partial x} \bigg{)} + buz \bigg{]} dx = \int\limits_C^D \bigg{[}- \frac{1}{2} \frac{\partial (uz)}{\partial x} + u \bigg{(} \frac{\partial z}{\partial x} + bz \bigg{)} \bigg{]} dx \end{equation} and then \begin{equation} \label{eq:86} \int\limits_C^D N dx = \frac{(uz)_C - (uz)_D}{2} + \int\limits_C^D u \bigg{(} \frac{\partial z}{\partial x} + bz\bigg{)}dx. \end{equation} In the same manner we have \begin{equation} \label{eq:87} - \int\limits_D^B M dy = \frac{(uz)_B - (uz)_D}{2} + \int\limits_B^D u \bigg{(} \frac{\partial z}{\partial y} + az\bigg{)}dy. \end{equation} On inserting Eqs. ($\ref{eq:86}$) and ($\ref{eq:87}$) into Eqs. ($\ref{eq:74}$) and ($\ref{eq:84}$), we then have \begin{equation} \label{eq:88} z_A = (uz)_D - \int\limits_C^D u \bigg{(} \frac{\partial z}{\partial x} + bz \bigg{)} dx - \int\limits_B^D u \bigg{(} \frac{\partial z}{\partial y} + az \bigg{)} dy. \end{equation} This formula holds for every solution $z$ of the proposed equation. It is the complete analogue of Eq. ($\ref{eq:74}$), but it differs from it by an essential property: it is no longer necessary to specify one of the derivatives of $z$ on the path $C'DB'$. In order to evaluate the two integrals in Eq. ($\ref{eq:88}$), it is sufficient to know the values of the solution on the straight lines $C'D$ and $DB'$.
We have to find the underlying reason for this result, in this case where the new contour consists of the characteristics of the proposed linear equation.\\ Let us now take as $z$ a particular solution $z(x,y;x_1,y_1)$ of the proposed equation, determined by the same conditions as those imposed on $u(x,y;x_0,y_0)$ regarded as the solution of the adjoint equation. When we pass from the equation to its adjoint, there is a sign change for the coefficients $a$ and $b$, and the solution becomes \begin{equation} \label{eq:89} z = \exp\bigg{(}-\int\limits_{x_1}^x b d \lambda\bigg{)} \; {\rm for} \hspace{0.5cm} y=y_1; \hspace{1cm} z = \exp \bigg{(}-\int\limits_{y_1}^y a d \lambda \bigg{)} \; {\rm for} \hspace{0.5cm} x=x_1 \end{equation} and consequently $z=1$ when $x=x_1$ and $y=y_1$. Hence we will have \begin{equation} \label{eq:90} \left\{\begin{array} {l} \frac{\partial z}{\partial x} + bz =0 \; \; {\rm along} \; {\rm every} \; {\rm point} \; {\rm of} \; {\rm CD}, \\ \frac{\partial z}{\partial y} + az=0 \; \; {\rm along} \; {\rm every} \; {\rm point} \; {\rm of} \; {\rm BD},\\ z=1 \; \; {\rm for} \; {\rm the} \; {\rm point} \; {\rm D}. \end{array}\right. \end{equation} Then the equation \begin{equation} \label{eq:91} z_A = (uz)_D - \int\limits_C^D u \bigg{(} \frac{\partial z}{\partial x} + bz \bigg{)} dx - \int\limits_B^D u\bigg{(} \frac{\partial z}{\partial y} + az \bigg{)} dy \end{equation} reduces to $z_A=u_D$, i.e. $z(x_0,y_0; x_1,y_1)=u(x_1,y_1;x_0,y_0)$. The solution $u(x,y;x_0,y_0)$ of the adjoint equation can thus also be considered as a function of the parameters $x_0$, $y_0$; regarded in this way, it is a solution of the original equation in which $x$, $y$ have been replaced by $x_0$, $y_0$, and, with respect to this equation and the variables $x_0$, $y_0$, it enjoys the same property by which it was defined, as a function of $x$, $y$, as a solution of the adjoint equation.
In other words, the definition of $u$ is still the same if we replace the linear equation with its adjoint, provided we replace the variables $x$, $y$, with $x_0$, $y_0$. It follows that the integration of two linear equations, the proposed equation and its adjoint, is reduced to the determination of the function $u(x,y;x_0,y_0)$, i.e. of $R$. This function can be defined, both as a solution of the proposed equation and as a solution of the adjoint equation, by the boundary conditions to which it is subjected. \\ Let us apply this general proposition to the equation \begin{equation} \label{eq:92} E(\beta,\beta')= \frac{\partial^2 z}{\partial x \partial y} - \frac{\beta'}{(x-y)} \frac{\partial z}{\partial x} + \frac{\beta}{(x-y)}\frac{\partial z}{\partial y}=0 \end{equation} \\ and let us try to define the function $u(x,y;x_0,y_0)$ associated with this equation, considered as a solution of the adjoint equation subject to the previously stated conditions. The adjoint equation to $E(\beta,\beta')$ reads as \begin{equation} \label{eq:93} \frac{\partial^2 u}{\partial x \partial y} + \frac{\beta'}{(x-y)} \frac{\partial u}{\partial x} - \frac{\beta}{(x-y)} \frac{\partial u}{\partial y} - \frac{\beta + \beta'} {(x-y)^2} u =0. \end{equation} If we define \begin{equation} \label{eq:94} u= (x-y)^{\beta + \beta'}\nu, \end{equation} the equation ($\ref{eq:93}$) becomes \begin{equation} \label{eq:95} \frac{\partial^2 \nu}{\partial x \partial y} - \frac{\beta}{(x-y)} \frac{\partial \nu}{\partial x} + \frac{\beta'}{(x-y)} \frac{\partial \nu}{\partial y}=0. \end{equation} A solution to this equation can be represented as $Z(\beta',\beta)$. We then have \begin{equation}\label{eq:96} u \equiv (x-y)^{\beta + \beta'} Z(\beta',\beta). \end{equation} Among the particular solutions $Z$ of this equation, we now look for the one that satisfies the conditions stated above.
We have that \begin{equation} \label{eq:97} x^\lambda F\bigg{(}-\lambda,\beta; 1- \beta - \lambda, \frac{y}{x}\bigg{)} \end{equation} is a solution of the equation $E(\beta,\beta')$. If we interchange $\beta$ and $\beta'$, the expression $\nu= x^\lambda F(-\lambda, \beta; 1-\beta'-\lambda,\frac{y}{x})$ will be a particular solution of ($\ref{eq:95})$ which contains only one constant, $\lambda$; but we can introduce two new ones. We can make on the variables $x$ and $y$ the linear substitution \begin{equation} \label{eq:98} x \rightarrow \frac{x- y_0}{x - x_0}; \hspace{2cm} y \rightarrow \frac{y- y_0}{y - x_0} \end{equation} provided we multiply by a factor $(x- x_0)^{-\beta'}(y-y_0)^{-\beta}$. Hence, we obtain the more general formula \begin{equation} \label{eq:99} \nu = (y_0 - x)^\lambda (x-x_0)^{-\beta' - \lambda} (y-x_0)^{-\beta} F(-\lambda, \beta; 1-\beta'-\lambda,\sigma),\end{equation} where $\sigma = \frac{(x-x_0) (y-y_0)}{(x-y_0)(y-x_0)}$. It is enough to multiply by $(y-x)^{\beta+\beta'}$ to find $u$ as \begin{equation}\label{eq:100} u =(y_0 - x)^\lambda (x_0 - x)^{-\beta' - \lambda} (y-x)^{\beta+\beta'}(y-x_0)^{-\beta} F(-\lambda,\beta;1-\beta'-\lambda,\sigma), \end{equation} which leads to the expected result. If we set $ x=x_0$ in the previous result, $\sigma$ vanishes and the $F$ series reduces to unity, while $(x_0-x)^{-\lambda - \beta'}$ makes $u$ equal to zero or makes it infinite, unless $\lambda= - \beta'$. If $\lambda$ takes this value, $u$ reads as \begin{equation}\label{eq:101} u= (y_0 - x)^{-\beta'} (y-x)^{\beta+ \beta'} (y-x_0)^{-\beta}F(\beta,\beta';1,\sigma); \end{equation} if $x=x_0$, we have $u=\bigg{(} \frac{y-x_0}{y_0-x_0 } \bigg{)}^{\beta'}$; whereas if $y=y_0$, we obtain $\sigma=0$ and $u= \bigg{(} \frac{y_0-x}{y_0-x_0 } \bigg{)}^{\beta}$. Thus, Eq. ($\ref{eq:101}$) is the expected solution. In this manner, Riemann's method applied to $E(\beta,\beta')$ makes it possible to determine its integrals with the most general boundary conditions.
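The two boundary values just obtained give a quick numerical plausibility check of Eq. ($\ref{eq:101}$). The Python sketch below (with illustrative values of $\beta$, $\beta'$, $x_0$, $y_0$, and a plain Gauss series for $F$) verifies that the kernel reduces to the stated powers on the two characteristics $x=x_0$ and $y=y_0$.

```python
def hyp2f1(a, b, c, z, terms=200):
    """Gauss series for the hypergeometric function F(a, b; c; z), |z| < 1."""
    s, t = 1.0, 1.0
    for n in range(terms):
        t *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        s += t
    return s

def riemann_u(x, y, x0, y0, beta, betap):
    """Riemann kernel of E(beta, beta') as given by Eq. (101)."""
    sigma = (x - x0) * (y - y0) / ((x - y0) * (y - x0))
    return ((y0 - x) ** (-betap) * (y - x) ** (beta + betap)
            * (y - x0) ** (-beta) * hyp2f1(beta, betap, 1.0, sigma))

beta, betap = 0.3, 0.45          # illustrative values
x0, y0 = 0.1, 0.9
y, x = 1.4, 0.05
# on x = x0:  u = ((y - x0)/(y0 - x0))**beta'
print(riemann_u(x0, y, x0, y0, beta, betap), ((y - x0) / (y0 - x0)) ** betap)
# on y = y0:  u = ((y0 - x)/(y0 - x0))**beta
print(riemann_u(x, y0, x0, y0, beta, betap), ((y0 - x) / (y0 - x0)) ** beta)
```

On both characteristics $\sigma=0$, so the series collapses to unity and the check is exact up to floating-point roundoff.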
It will be enough to insert the value of $u$ into Eq. ($\ref{eq:74}$) or Eq. ($\ref{eq:83}$) to find the general integral of the equation. For example, if we replace $u$ in Eq. ($\ref{eq:83}$), $z$ reads as \begin{equation} \label{eq:102} z_{x_0,y_0} = (uz)_{x_1,y_1} + \int\limits_{x_0}^{x_1} u_{x,y_1} f(x) dx + \int\limits_{y_0}^{y_1} u_{x_1,y} \varphi(y) dy, \end{equation} where $f$ and $\varphi$ are two arbitrary functions which depend on the boundary values of $z$, and $\Phi_{\alpha,\beta}$ represents the result of replacing $x$ and $y$ with $\alpha$ and $\beta$ inside $\Phi(x,y)$. \section{Proof of the existence of Riemann's Kernel} It only remains to demonstrate that the formula of Poisson and Appell \cite{darboux1894leccons} \begin{equation}\begin{split} \label{eq:103} Z(\beta, \beta')=& \int_x^y \varphi(u) (u - x)^{- \beta} (y - u)^{- \beta'} du \\ &+ (y-x)^{1- \beta - \beta'} \int_x^y \psi(u) (u-x)^{\beta' - 1}(y-u)^{\beta -1} du \end{split}\end{equation} effectively provides all the integrals of the proposed equation. Following the work of Darboux \cite{darboux1894leccons}, we first make the following observation on this integral. The equation ($\ref{eq:92}$) has its coefficients finite and continuous as long as $x$ is different from $y$. If we draw the bisector of the angle $yOx$, as shown in fig. ($\ref{fig:3}$), we can say that this line is a line of discontinuity for the previous equation, and hence that the coefficients of the equation remain always finite and continuous as long as we remain on the same side of this line. \begin{figure} \centering \includegraphics{fig3mod.png} \caption{}\label{fig:3} \end{figure} Let us see what happens to the Poisson integral when $y$ approaches $x$.
If we perform the substitution $u = x + t(y-x)$ in Eq. ($\ref{eq:103}$), the Poisson formula takes the form \begin{equation} \label{eq:104} Z(\beta,\beta') = (y-x)^{1-\beta-\beta'} \int\limits_0^1 \varphi(x+t(y-x)) t^{-\beta}(1-t)^{-\beta'} dt + \int\limits_0^1 \psi(x+t(y-x)) t^{\beta'-1}(1-t)^{\beta-1} dt. \end{equation} As $y$ approaches $x$, the first term on the right-hand side of ($\ref{eq:104}$) has principal part \begin{equation} \label{eq:105} (y-x)^{1- \beta-\beta'} \varphi(x) \int\limits_0^1 t^{-\beta}(1-t)^{-\beta'} dt = \frac{ \Gamma(1-\beta) \Gamma(1-\beta')}{\Gamma(2- \beta -\beta')} \varphi(x) (y-x)^{1-\beta-\beta'}. \end{equation} This approximate value can be seen as the first term of the expansion in powers of $(y-x)$; the unwritten terms are of higher degree. Likewise, the approximate expression of the second integral on the right-hand side of Eq. ($\ref{eq:104}$) will read as \begin{equation} \label{eq:106} \psi(x) \int\limits_0^1 t^{\beta'-1} (1-t)^{\beta-1} dt = \frac{ \Gamma(\beta)\Gamma(\beta') }{\Gamma(\beta+\beta')} \psi(x). \end{equation} It follows that, for any solution provided by the Poisson formula, the expansion in powers of $(y-x)$ consists of two series of terms, one of integer degree and the other of degree $(1- \beta - \beta')$ increased by an integer. Limiting the expansion to the first term of each of the two series, the integral has the approximate expression \begin{equation} \label{eq:107} z = \frac{ \Gamma(1- \beta)\Gamma(1-\beta')}{\Gamma(2-\beta- \beta')} \varphi(x) (y-x)^{1-\beta-\beta'} + \frac{\Gamma(\beta)\Gamma(\beta')}{\Gamma(\beta+\beta')} \psi(x). \end{equation} This formula will tell us the path that must be followed in order to verify that all solutions of the equation are given by the Poisson formula.
We will first try to establish that, in the neighborhood of the discontinuity line, the solution sought is of the form \begin{equation} \label{eq:108} \varphi_1(x) (y-x)^{1-\beta-\beta'} + \psi_1(x) \end{equation} \\ The comparison of this form with the previous one will allow us to determine the functions $\varphi$ and $\psi$ that must appear in the Poisson formula; all that will then remain is to verify an identity.\\ Let us apply this method to the general integral as given by the formula \begin{equation} \label{eq:109} z_{x_0,y_0} = (uz)_{x_1,y_1} + \int\limits_{x_0}^{x_1}u_{x,y_1} f(x) dx + \int\limits_{y_0}^{y_1} u_{x_1,y} \varphi(y) dy. \end{equation} The third term on the right-hand side of Eq. ($\ref{eq:109}$) is deduced from the second upon interchanging $x$ with $y$. We can verify that the first two terms on the right-hand side are given by the Poisson formula. To be clearer, let us refer to figure ($\ref{fig:3}$): $x_1$ and $y_1$ are the coordinates of $D$, while $x_0$, $y_0$ are those of $A$, and we have $x_1 <x_0<y_0$, where $x_0$ and $y_0$ are the independent variables. The first term of the previous integral is $u(x_1,y_1;x_0,y_0)$ multiplied by the constant $z_{x_1,y_1}$. We must, therefore, first check that the expression $u(x_1,y_1;x_0,y_0)$, considered as a function of the variables $x_0$, $y_0$, satisfies the proposed equation and that it is given by the Poisson formula. In accordance with the general method just outlined, it is therefore necessary to first obtain its approximation when $(y_0-x_0)$ becomes infinitely small.
If we refer to the expression of $u$ \begin{equation}\label{eq:110} u =(y_0-x)^{-\beta'} (y-x)^{\beta+\beta'}(y-x_0)^{-\beta}F(\beta,\beta';1,\sigma) \end{equation} and to the definition of $\sigma$, we see that we will have \begin{equation}\label{eq:111} 1 - \sigma = \frac{(y-x)(y_0-x_0)}{(y-x_0)(y_0-x)}, \end{equation} so that $(1-\sigma)$ is of the same order as $(y_0-x_0)$, and we are led to expand $F(\beta,\beta';1,\sigma)$ according to the powers of $(1-\sigma)$. For this we borrow from the theory of the hypergeometric series the identity \begin{equation} \begin{split} \label{eq:112} &F(\beta,\beta',1,\sigma)= \frac{\Gamma(1-\beta-\beta')}{\Gamma(1-\beta)\Gamma(1-\beta')}F(\beta,\beta',\beta+\beta',1- \sigma) \\ &+ \frac{\Gamma(\beta+\beta'-1)}{\Gamma(\beta)\Gamma(\beta')} F(1-\beta,1-\beta',2-\beta-\beta',1-\sigma)(1-\sigma)^{1-\beta-\beta'}. \end{split} \end{equation} If we bring this value of $F(\beta,\beta',1,\sigma)$ into the formula for $u$, we immediately deduce the approximate expression of $u$ when $y_0$ approaches $x_0$. It is sufficient to replace each $F$ series by unity, and we therefore find for the first two terms of $u$ \begin{equation} \begin{split} \label{eq:113} u =& \frac{\Gamma(1-\beta-\beta')}{\Gamma(1-\beta)\Gamma(1-\beta')} (x_0-x)^{-\beta'} (y-x)^{\beta+\beta'}(y-x_0)^{-\beta} \\ &+\frac{\Gamma(\beta+\beta'-1)}{\Gamma(\beta)\Gamma(\beta')}(x_0-x)^{\beta-1} (y-x)(y-x_0)^{\beta'-1}(y_0-x_0)^{1-\beta-\beta'}. \end{split}\end{equation} From the comparison of this formula with the equation for $z$, where we have replaced $x$ and $y$ with $x_0$ and $y_0$, we immediately obtain the two functions that must occur in the Poisson formula. We then find \begin{equation} \label{eq:114} \left\{\begin{array} {l} \varphi(\alpha)=-A(\alpha - x)^{\beta-1}(y-x)(y-\alpha)^{\beta'-1}, \\ \psi(\alpha)=A(\alpha-x)^{-\beta'}(y-x)^{\beta+\beta'}(y-\alpha)^{-\beta}, \end{array}\right.
\end{equation} \\ where $A$ denotes the constant \begin{equation} \label{eq:115} A = \frac{\Gamma(1-\beta-\beta')\Gamma(\beta+\beta')}{\Gamma(\beta)\Gamma(1-\beta)\Gamma(\beta')\Gamma(1-\beta')} = \frac{\sin(\beta\pi)\sin(\beta'\pi)}{\pi \sin[(\beta+\beta')\pi]}. \end{equation} If we replace these values in the Poisson formula we find the following result: \begin{equation}\label{eq:116} \begin{split} u =& A(y-x)^{\beta+\beta'}(y_0-x_0)^{1-\beta-\beta'} \times \\ &\int\limits_{x_0}^{y_0} (\alpha-x)^{-\beta'}(y-\alpha)^{-\beta}(y_0-\alpha)^{\beta-1}( \alpha -x_0)^{\beta'-1}d \alpha \\ &- A(y-x) \int\limits_{x_0}^{y_0}(\alpha - x)^{\beta-1}(y-\alpha)^{\beta'-1}(y_0-\alpha)^{-\beta'}(\alpha-x_0)^{-\beta}d\alpha; \end{split}\end{equation} it only remains to verify the agreement of this expression with the one previously obtained for $u$. We proceed as follows: equating the two expressions of $u$, the equation to be verified takes the form \begin{equation}\label{eq:117} \begin{split} F(\beta,\beta',1,\sigma)=& A(y_0-x_0)^{1-\beta-\beta'}(y_0-x)^{\beta'}(y-x_0)^{\beta} \\ & \times \int\limits_{x_0}^{y_0}(\alpha-x)^{-\beta'}(y-\alpha)^{-\beta}(y_0-\alpha)^{\beta-1} (\alpha - x_0)^{\beta'-1}d\alpha \\ & -A(y-x)^{1-\beta-\beta'}(y_0 - x)^{\beta'}(y-x_0)^{\beta} \\ & \times \int\limits_{x_0}^{y_0} (\alpha - x)^{\beta -1} (y-\alpha)^{\beta'-1}(y_0 - \alpha)^{-\beta'}(\alpha- x_0)^{-\beta}d\alpha; \end{split}\end{equation} and we note that the two terms on the right-hand side remain formally unaffected if we make the same linear substitution on $\alpha$, $x$, $ y$, $x_0$, and $y_0$. We choose the coefficients of this substitution in such a way that $x$, $ x_0$, $y_0$ reduce to infinity, 0 and 1, respectively. Then $y$ reduces to $\frac{1}{1-\sigma}$, where $\sigma$ is the anharmonic ratio defined previously.
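Both the hypergeometric connection formula ($\ref{eq:112}$) and the $\Gamma$-function identity for the constant $A$ in Eq. ($\ref{eq:115}$) can be checked numerically; the Python sketch below does so with a plain Gauss series for $F$ and illustrative values of $\beta$, $\beta'$ and $\sigma$.

```python
import math

def hyp2f1(a, b, c, z, terms=400):
    """Gauss series for F(a, b; c; z), valid for |z| < 1."""
    s, t = 1.0, 1.0
    for n in range(terms):
        t *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        s += t
    return s

G, pi = math.gamma, math.pi
b, bp = 0.3, 0.45                  # illustrative beta, beta'
sigma = 0.6                        # both series below converge here

# connection formula, Eq. (112)
lhs = hyp2f1(b, bp, 1.0, sigma)
rhs = (G(1 - b - bp) / (G(1 - b) * G(1 - bp)) * hyp2f1(b, bp, b + bp, 1 - sigma)
       + G(b + bp - 1) / (G(b) * G(bp))
       * hyp2f1(1 - b, 1 - bp, 2 - b - bp, 1 - sigma) * (1 - sigma) ** (1 - b - bp))

# the constant A of Eq. (115), written in both forms
A_gamma = G(1 - b - bp) * G(b + bp) / (G(b) * G(1 - b) * G(bp) * G(1 - bp))
A_sin = math.sin(b * pi) * math.sin(bp * pi) / (pi * math.sin((b + bp) * pi))
print(lhs, rhs)
print(A_gamma, A_sin)
```

The second check is an immediate consequence of Euler's reflection formula $\Gamma(s)\Gamma(1-s)=\pi/\sin(\pi s)$, so the two values of $A$ agree to machine precision.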
The right-hand side of the equation to verify becomes \begin{equation} \label{eq:118} \begin{split} & A\int\limits_0^1 [1-\alpha(1-\sigma)]^{-\beta}(1-\alpha)^{\beta-1}\alpha^{\beta'-1}d\alpha \\ & - A(1-\sigma)^{1-\beta-\beta'}\int\limits_0^1[1-\alpha(1-\sigma)]^{\beta'-1}(1-\alpha)^{-\beta'}\alpha^{-\beta}d\alpha. \end{split}\end{equation} From a well-known formula of Euler, the two previous integrals are expressed through the hypergeometric series, and we find the two terms on the right-hand side of the identity $(\ref{eq:112})$ for $F(\beta,\beta',1,\sigma)$. Hence, the equality is verified.\\ Let us now consider the term \begin{equation}\label{eq:119} \int\limits_{x_0}^{x_1} u_{x,y_1}f(x)dx \end{equation} of the Riemann integral. We wrote the approximate expression of $u_{x,y_1}$ when $(y_0-x_0)$ approaches zero. If we replace it in the previous integral, we obtain the approximate expression of the integral \begin{equation}\label{eq:120} \begin{split}& \frac{\Gamma(1-\beta-\beta')}{\Gamma(1-\beta)\Gamma(1-\beta')}\int\limits_{x_0}^{x_1} (x_0-x)^{-\beta'}(y_1-x)^{\beta+\beta'}(y_1-x_0)^{-\beta}f(x)dx \\ &+ \frac{\Gamma(\beta+\beta'-1)}{\Gamma(\beta)\Gamma(\beta')}(y_0-x_0)^{1-\beta-\beta'}\int\limits_{x_0}^{x_1}(x_0-x)^{\beta-1}(y_1-x)(y_1-x_0)^{\beta'-1}f(x)dx. \end{split}\end{equation} The comparison of $(\ref{eq:120})$ with the expression of $z$ gives the two functions that must occur in the Poisson formula. We find therefore \begin{equation} \label{eq:121} \left\{\begin{array} {l} \varphi(\alpha)=-A\int\limits_\alpha^{x_1}(\alpha - x)^{\beta-1}(y_1-x)(y_1-\alpha)^{\beta'-1}f(x)dx, \\ \psi(\alpha)=A\int\limits_\alpha^{x_1}(\alpha-x)^{-\beta'}(y_1-x)^{\beta+\beta'}(y_1-\alpha)^{-\beta}f(x)dx, \end{array}\right. \end{equation} where $A$ is the constant that we have previously defined. It is sufficient to verify that, by introducing these values into the Poisson integral, we find the term ($\ref{eq:119}$).
The substitution of the values $(\ref{eq:121})$ gives two terms that are both of the form \begin{equation}\label{eq:122} \int\limits_{x_0}^{y_0} d\alpha\int\limits_\alpha^{x_1}P dx, \end{equation} where $x_1<x_0<\alpha<y_0<y_1$. The integration variable $x$, which lies between $\alpha$ and $x_1$, can be either smaller or bigger than $x_0$. We can, therefore, decompose the previous integral as \begin{equation} \label{eq:123} \int\limits_{x_0}^{y_0}d\alpha \int\limits_{\alpha}^{x_0} P dx + \int\limits_{x_0}^{y_0}d\alpha\int\limits_{x_0}^{x_1}P dx. \end{equation} For the first term, the order of magnitude of the variables is defined by the inequalities $x_1<x_0<x<\alpha<y_0<y_1$. We can therefore invert the order of integration, which gives us $ - \int\limits_{x_0}^{y_0}dx\int\limits_{x}^{y_0}Pd\alpha$. For the second term, we have $x_1<x<x_0<\alpha<y_0<y_1$ and we can then write $\int\limits_{x_0}^{x_1}dx \int\limits_{x_0}^{y_0}P d\alpha$. If we apply these transformations to the two terms that make up the Poisson integral, we have the following result: \begin{equation}\label{eq:124}\begin{split}& - A \int\limits_{x_0}^{x_1}f(x)(y_1-x)dx\int\limits_{x_0}^{y_0}(y_0-\alpha)^{-\beta'}(\alpha-x_0)^{-\beta}(\alpha-x)^{\beta-1}(y_1-\alpha)^{\beta'-1}d\alpha \\ &+A \int\limits_{x_0}^{y_0}f(x)(y_1-x)dx\int\limits_{x}^{y_0}(y_0-\alpha)^{-\beta'}(\alpha-x_0)^{-\beta}(\alpha-x)^{\beta-1}(y_1-\alpha)^{\beta'-1}d\alpha \\ &+A \int\limits_{x_0}^{x_1}f(x)(y_0-x_0)^{1-\beta-\beta'}(y_1-x)^{\beta+\beta'}dx \times \\ & \times \int\limits_{x_0}^{y_0}(y_0-\alpha)^{\beta-1}(\alpha-x_0)^{\beta'-1}(\alpha-x)^{-\beta'} (y_1-\alpha)^{-\beta}d\alpha \\ &-A \int\limits_{x_0}^{y_0}f(x)(y_0-x_0)^{1-\beta-\beta'}(y_1-x)^{\beta+\beta'}dx\int\limits_{x}^{y_0}(y_0-\alpha)^{\beta-1} \times\\ & \times (\alpha-x_0)^{\beta'-1}(\alpha-x)^{-\beta'}(y_1-\alpha)^{-\beta}d\alpha. \end{split}\end{equation} The first and third terms in ($\ref{eq:124}$) represent the expression ($\ref{eq:119}$).
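The interchange of the order of integration used above, i.e. the step turning $\int_{x_0}^{y_0}d\alpha\int_\alpha^{x_0}P\,dx$ into $-\int_{x_0}^{y_0}dx\int_x^{y_0}P\,d\alpha$, can be illustrated numerically; the Python sketch below checks it with a smooth sample integrand $P$, chosen only for illustration.

```python
def midpoint(f, a, b, n=600):
    """Midpoint rule; yields the signed integral when a > b."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# hypothetical smooth integrand, for illustration only
P = lambda x, alpha: (x + 2.0) ** 0.5 * (alpha + 1.0) ** 1.5
x0, y0 = 0.2, 0.9

# int_{x0}^{y0} d alpha  int_alpha^{x0} P dx   (inner limits run backwards) ...
lhs = midpoint(lambda a: midpoint(lambda x: P(x, a), a, x0), x0, y0)
# ... equals  - int_{x0}^{y0} dx  int_x^{y0} P d alpha
rhs = -midpoint(lambda x: midpoint(lambda a: P(x, a), x, y0), x0, y0)
print(lhs, rhs)
```

Both repeated integrals cover the triangle $x_0<x<\alpha<y_0$ with the same (negative) orientation, which is why the minus sign appears on the swapped form.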
In order to recognize them, it is enough to refer to the expression of $u$; as for the second and fourth terms, their sum vanishes by virtue of the equation \begin{equation}\label{eq:125} \begin{split} &(y_1-x)^{1-\beta-\beta'}\int\limits_{x}^{y_0} (y_0-\alpha)^{-\beta'}(\alpha-x_0)^{-\beta}(\alpha - x)^{\beta-1}(y_1-\alpha)^{\beta'-1} d\alpha \\ &= (y_0-x_0)^{1-\beta-\beta'}\int\limits_{x}^{y_0}(y_0-\alpha)^{\beta -1}(\alpha -x_0)^{\beta' -1}(\alpha -x)^{-\beta'}(y_1-\alpha)^{-\beta}d\alpha, \end{split}\end{equation} which we verify as follows: performing on the integration variable of the first integral the linear substitution that turns $y_0$, $x_0$, $y_1$ and $x$ into $x$, $y_1$, $x_0$ and $y_0$, we find the second integral. \\ The Poisson formula contains two arbitrary functions $\varphi(\alpha)$ and $\psi(\alpha)$. Suppose that we know these functions only for $\alpha_0 < \alpha <\alpha_1$; the general integral can then be determined only for the values of $x$ and $y$ lying between these values of $\alpha$. Let us assume, to fix the ideas, that $y$ is greater than $x$. If we draw the bisector $OC'B'$ of the angle formed by the axes, and on it the points $C'$, $B'$ with abscissae $\alpha_0$ and $\alpha_1$, the value of the integral will be known at all the points of the plane lying within the triangle $DB'C'$ and on its sides $DB'$, $DC'$; but it will be impossible to determine the solution outside that triangle. We have assumed that the functions $\varphi$ and $\psi$ are determined only for $\alpha_0<\alpha<\alpha_1$. We can extend them beyond this interval in an infinite number of ways, preserving also the continuity of the derivatives up to any order, both for $\alpha=\alpha_0$ and for $\alpha=\alpha_1$.
By adopting different extensions, we will have different integrals of the proposed equation that will have the same values within the $DB'C'$ triangle, and whose derivatives will be the same up to any order at the points of each of the segments $DB'$, $DC'$; but that will be different outside the triangle. Thus, an integral of the proposed equation that assumes given values on the segments $DB'$ and $DC'$ of the $DB'C'$ triangle is well determined for the points located within the triangle. This is evident from the expression \begin{equation} \label{eq:126} z_{x_0,y_0} =(uz)_{x_1,y_1} + \int\limits_{x_0}^{x_1}u_{x,y_1}f(x)dx + \int\limits_{y_0}^{y_1}u_{x_1,y} \varphi(y)dy. \end{equation} Conversely, it is not determined outside the triangle: there it can take an infinity of values, which we can define in a very general way, making sure to respect the continuity of $z$ and of its derivatives up to any order at all the points of $DB'$ and $DC'$. It is worth noting that the lines $DB'$ and $DC'$ are $\textit{characteristics}$. The general formula of Riemann shows us, in fact, that if the function and its first derivatives are given on any other curve, one which is nowhere a line parallel to the axes, then the function is determined on both sides of the curve. \\ The study we have made of the Riemann method, in the particular case of the equation $E(\beta,\beta')$, allows us to return to the general theory and to eliminate an objection that can be made to this theory. The value of $z$ is given by \begin{equation} \label{eq:127} (uz)_A= \frac{(uz)_B +(uz)_C}{2} -\int\limits_{C}^{B} (Ndx-Mdy) \end{equation} and satisfies the partial differential equation \begin{equation*} \frac{\partial^2 z}{\partial x \partial y} + a \frac{\partial z}{\partial x} + b \frac{\partial z}{\partial y} + cz =0.
\end{equation*} It also satisfies the boundary conditions that have been set a priori; but it can be objected that the existence of the function $u$, on which all our reasoning is based and which we have determined in the particular case of $E(\beta,\beta')$, is not established for more general equations. It is possible to remove this objection, at least in the case in which the coefficients $a$, $b$ and $c$ of the linear equation are finite and continuous functions admitting the series expansions considered below. The function $u$, considered as a solution of the adjoint equation, must reduce, for $y=y_0$, to a given function of $x$, namely $\exp \bigg{(}\int\limits_{x_0}^{x}b dx\bigg{)}$, and, for $x=x_0$, to a given function of $y$, namely $\exp \bigg{(}\int\limits_{y_0}^{y}a dy\bigg{)}$. The functions $a$ and $b$ can be expanded in series according to the powers of $(x-x_0)$ and $(y-y_0)$. Thus, to establish the existence of the function $u$, it will be enough to prove the following general proposition: \begin{prop} Given the linear equation \begin{equation} \label{eq:128} \frac{\partial^2 z}{\partial x \partial y} + a \frac{\partial z}{\partial x} + b \frac{\partial z}{\partial y} + cz=0,\end{equation} where the coefficients $a$, $b$, $c$ can be expanded in series ordered according to the integer and positive powers of $(x-x_0)$ and $(y-y_0)$, there exists a solution of the partial differential equation which reduces, for $y=y_0$, to a given function $\varphi(x)$ of $x$, expandable in a series according to the powers of $(x-x_0)$ and, for $x=x_0$, to a given function $\psi(y)$ of $y$, expandable in a series according to the powers of $(y-y_0)$.
\end{prop} To prove this proposition, we perform the substitution $ x \rightarrow x_0 + \frac{x}{\rho}$, $y \rightarrow y_0 + \frac{y}{\sigma}$, where $\rho$ and $\sigma$ are two constants chosen in such a way that the expansions of the functions $a$, $b$, $c$, $\varphi(x)$ and $\psi(y)$, which after the substitution are ordered according to the powers of $x$ and $y$, converge for all the values of those variables whose modulus is less than or equal to one. We then determine all the derivatives of the function $z$ for $x=y=0$. Since $z$ must reduce to $\varphi(x)$ for $y = 0$, this condition determines all derivatives of $z$ with respect to the single variable $x$; in the same way, since $z$ must reduce to $\psi(y)$ for $x = 0$, we know all derivatives with respect to the single variable $y$; finally, the partial differential equation allows us to determine all the mixed derivatives with respect to $x$ and $y$ in terms of the previous ones. One can, with all these derivatives, form the series expansion of the solution sought according to the powers of $x$ and $y$, and everything reduces to determining whether this expansion is convergent; because, in the affirmative case, it will satisfy both the boundary conditions and the proposed equation. Now, if the series for the functions $a$, $b$, $c$, $\varphi$ and $\psi$ converge in a circle of radius 1, it is always possible to find positive constants $M$, $N$, $P$ and $H$ such that the derivatives of any order of $a$, $b$, $c$, $\varphi$ and $\psi$ have moduli smaller than the derivatives of the corresponding functions \begin{equation} \label{eq:129} \frac{M}{(1-x-y)}, \hspace{0.5cm} \frac{N}{(1-x-y)}, \hspace{0.5cm} \frac{P}{(1-x-y)^2}, \hspace{0.5cm} \frac{H}{(1-x)}, \hspace{0.5cm} \frac{H}{(1-y)}.
\end{equation} In fact, let us determine the function that satisfies the equation \begin{equation}\label{eq:130} \frac{\partial^2 z}{\partial x \partial y} = \frac{M}{(1-x-y)}\frac{\partial z}{\partial x} + \frac{N}{(1-x-y)}\frac{\partial z}{\partial y} + \frac{P}{(1-x-y)^2}z \end{equation} \\ and reduces to $\frac{H}{(1-x)}$, for $y=0$, and to $\frac{H}{(1-y)}$, for $x=0$. We will obtain for this function a series whose coefficients are bigger than those related to the given equation; it is therefore sufficient to show that this new series is convergent for the values of $x$ and $y$ sufficiently close to zero. The new problem to which we have arrived is already solved: in fact, if we replace in the equation $x$ with $(1-x)$, it assumes the same form as the equations previously considered and can therefore be reduced to $E(\beta,\beta')$, for which the problem is solved. The result leads to a function that is indeed expandable in series. It is therefore possible to determine a solution of the proposed partial differential equation with the boundary conditions that we have indicated, and so to establish the general theorem upon which the existence of the function $u$ relies.
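The derivative-by-derivative construction described in the proof can be carried out explicitly when $a$, $b$, $c$ are constants: writing $z=\sum_{m,n} z_{mn}x^m y^n$, the equation $z_{xy}+az_x+bz_y+cz=0$ yields the recursion $(m+1)(n+1)z_{m+1,n+1} = -\left(a(m+1)z_{m+1,n}+b(n+1)z_{m,n+1}+c\,z_{mn}\right)$. The Python sketch below implements this recursion (constant coefficients only, an illustrative special case of the proposition) and checks it against the exact solution $z=e^{x+y}$ of $z_{xy}=z$.

```python
import math

def series_solution(a, b, c, phi_coef, psi_coef, N):
    """Taylor coefficients z[m][n] of the solution of
        z_xy + a z_x + b z_y + c z = 0   (a, b, c constants)
    with boundary data z(x, 0) = sum phi_coef(m) x^m and
    z(0, y) = sum psi_coef(n) y^n (the two must agree at the origin).
    Matching the coefficient of x^m y^n in the equation gives the recursion."""
    z = [[0.0] * (N + 1) for _ in range(N + 1)]
    for m in range(N + 1):
        z[m][0] = phi_coef(m)
    for n in range(N + 1):
        z[0][n] = psi_coef(n)
    for m in range(N):
        for n in range(N):
            z[m + 1][n + 1] = -(a * (m + 1) * z[m + 1][n]
                                + b * (n + 1) * z[m][n + 1]
                                + c * z[m][n]) / ((m + 1) * (n + 1))
    return z

# check against the exact solution z = exp(x + y) of z_xy - z = 0,
# whose boundary data are phi(x) = e^x, psi(y) = e^y
z = series_solution(0.0, 0.0, -1.0,
                    lambda m: 1.0 / math.factorial(m),
                    lambda n: 1.0 / math.factorial(n), 8)
print(z[3][4], 1.0 / (math.factorial(3) * math.factorial(4)))
```

For variable coefficients the same scheme applies with Cauchy products in place of the scalar multiplications, and the majorant functions of Eq. ($\ref{eq:129}$) are what guarantees that the resulting double series converges.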
\chapter{Fundamental Solutions} \epigraph{Natural science is the attempt to comprehend nature by precise concepts.}{Bernhard Riemann} \section{Wavelike Propagation for a Generic Normal System} Let us consider the two systems \begin{equation} \label{eq:2.1} E_{\mu} = \sum\limits_{\nu =1}^{m} \sum\limits_{i=0}^{n} E^{i}_{\mu \nu} \frac{\partial \varphi_{\nu}}{\partial x^i} + \Phi_{\mu}(x|\varphi) =0, \hspace{2cm} \mu=1,2,...,m, \end{equation} \begin{equation} \label{eq:2.2} E_{\mu}= \sum\limits_{\nu=1}^{m}\sum\limits_{i,k=0}^{n} E^{ik}_{\mu \nu} \frac{\partial^2 \varphi_\nu}{\partial x^i \partial x^k} + \Phi_{\mu}(x|\varphi|\chi) =0, \hspace{1cm} \mu=1,2,...,m. \end{equation} Following Levi-Civita \cite{levi1988caratteristiche}, we assume that, inside and outside the stratum determined by the two hypersurfaces of equations \begin{equation} \label{eq:2.3} z=c_1, \hspace{1cm} z=c_2, \end{equation} they are satisfied by the $m$ functions $\varphi_1$, $\varphi_2$, ..., $\varphi_m$ and $\varphi^*_1$, $\varphi^*_2$, ..., $\varphi^*_m$, respectively. We assume that the stratum determined by Eqs. ($\ref{eq:2.3}$) undergoes motion, and possibly also bendings, and that through the hypersurfaces ($\ref{eq:2.3}$) the partial derivatives of first order for ($\ref{eq:2.1}$), and of second order for ($\ref{eq:2.2}$), undergo sudden variations (or jumps) and are therefore discontinuous therein.\\ The solutions $\varphi$ of Eq. ($\ref{eq:2.1}$) are taken to be continuous through the hypersurfaces ($\ref{eq:2.3}$), while the solutions $\varphi^*$ of Eq. ($\ref{eq:2.2}$) are taken to be continuous together with their first derivatives through the confining hypersurfaces. This describes a wavelike phenomenon, where the wave surfaces are those bounding the stratum. \\ For a system of maximal order $s$, the functions $\varphi$ and $\varphi^*$ should obey matching conditions through the wave surfaces of order less than $s$, whereas some discontinuities occur for the derivatives of order $s$.
The wave surfaces turn out to be $\textit{characteristic manifolds}$, because on them it is not possible to apply the theorem that guarantees uniqueness of the integrals.\\ Hereafter we merely assume the existence of the functions $\varphi$ and $\varphi^*$ with the associated wavelike propagation, and we describe some of their properties. If $z=c$ is a wave surface $\sigma$, the function $z$ must satisfy the equation \begin{equation} \label{eq:2.4} \Omega(x|p)=0, \end{equation} where the $p$ variables are given by \begin{equation} \label{eq:2.5} p_i = \frac{\partial z}{\partial x^i}, \hspace{3cm} i=0,1,...,n. \end{equation} The validity of Eq. ($\ref{eq:2.4}$) is indeed established only on $\sigma$, i.e. for $z=c$. However, the limitation $z=c$ is inessential, because $\Omega$ certainly vanishes whenever the $p_i$ are set equal to the derivatives of the function $z$. One therefore deals, with respect to $z$, with a partial differential equation. Such an equation can characterize $z$ by itself provided that the $E_\mu$ functions occurring in such systems depend only on the $x$ variables.\\ Now we aim at studying the velocity of progression of the wave surface $\sigma$ at a point $P$, by assuming that the space of variables $x^1$, $x^2$, ..., $x^n$ is endowed with a Euclidean metric, and that such variables are Cartesian coordinates. We suppose that \begin{equation} \label{eq:2.6} z(t|x)=c, \hspace{1cm} z(t+dt|x)=c \end{equation} are the equations of $\sigma$ at the instants of time $t$ and $t+dt$, respectively. The normal $N$ to $\sigma$ at $P$ intersects the second of Eqs. ($\ref{eq:2.6}$) at a point $Q$.
If $dN$ is the measure, with sign, of the segment $PQ$, counted positively towards the exterior of the stratum determined by $\sigma$ and by the other wave surface pertaining to the instant $t+dt$, the ratio \begin{equation} \label{eq:2.7} a \equiv \frac{dN}{dt} \end{equation} is said to be the $\textit{progression velocity}$ of the wave surface at the point $P$ at the instant of time under consideration. \\ The directional cosines of the normal $N$ to $\sigma$ at $P$ are given by \begin{equation} \label{eq:2.8} \alpha_i = \frac{p_i}{|\rho|}, \hspace{1cm} i=1,2,...,n, \end{equation} where \begin{equation} \label{eq:2.9} \rho^2\equiv \sum\limits_{i,j=1}^{n} \delta^{ij}p_ip_j=\sum\limits_{i=1}^n p_ip^i. \end{equation} If the points $P$ and $Q$ have coordinates $x^i$ and $x^i+dx^i$, respectively, one has from ($\ref{eq:2.6}$) \begin{equation} \label{eq:2.10} z(t|x)=c, \hspace{1cm} z(t+dt|x+dx)=c, \end{equation} \\ and hence, by taking the difference, \begin{equation} \label{eq:2.11} dz=p_0dt + \sum\limits_{i=1}^n p_i dx^i=0. \end{equation} Since the $dx^i$ are the components of the vector $Q-P$, one has also \begin{equation} \label{eq:2.12} dx^i= \pm \sum\limits_{j=1}^n \delta^{ij} \alpha_j dN = \pm \alpha^i dN, \hspace{1cm} i=1,2,...,n. \end{equation} The sign is $\pm$ depending on whether $z$ is positive or negative outside of the stratum. We do not need to fix it. By virtue of ($\ref{eq:2.8}$) and ($\ref{eq:2.12}$), Eq. ($\ref{eq:2.11}$) leads to (we set $\epsilon \equiv \pm 1$) \begin{equation} \label{eq:2.13} \begin{split}& p_0dt + \sum\limits_{i=1}^n p_i \epsilon \alpha^i dN = p_0dt + \epsilon \sum\limits_{i=1}^{n} \frac{p_ip^i}{|\rho|}dN \\ &= p_0dt + \epsilon |\rho|dN=0, \end{split} \end{equation} \\ from which \begin{equation} \label{eq:2.14} |a|= \bigg{|} \frac{dN}{dt} \bigg{|} = \bigg{|} \frac{p_0}{\rho}\bigg{|}. \end{equation} \\ This is the desired formula for the modulus of the velocity of progression. 
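As an elementary check of formula ($\ref{eq:2.14}$), consider the hypothetical special case of a plane wave surface translating rigidly along the $x^1$-axis, \begin{equation*} z(t|x)=x^1-vt, \qquad p_0=-v, \quad p_1=1, \quad p_2=...=p_n=0, \quad |\rho|=1. \end{equation*} Formula ($\ref{eq:2.14}$) then gives $|a|=|p_0/\rho|=|v|$, as expected for a front advancing with speed $|v|$ along its normal.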
The progression velocity varies, in general, as the point $P$, the time parameter $t$ and the wave surface vary. \section{Cauchy's Method for Integrating a First-Order Equation} We have seen that the characteristic manifolds \begin{equation} \label{eq:2.15} z(x^0,x^1,...,x^n)= {\rm const}. \end{equation} of a normal system of equations in the $n+1$ independent variables $x^0, x^1$, ..., $x^n$ ensure the vanishing of a certain determinant \begin{equation} \label{eq:2.16} \Omega(x|p)=0, \end{equation} where the $p_i$ are obtained as \begin{equation} \label{eq:2.17} p_i \equiv \frac{\partial z}{\partial x^i}, \hspace{2cm} i=0,1,2,...,n. \end{equation} In the most general case, $\Omega$ depends not only on the $x$ and $p$, but also on the unknown functions $\varphi$ of the normal system under consideration. There exists however a particular set of normal systems, of order $s=1$ and $s=2$, where $\Omega$ depends only on the $x$ and $p$ variables, provided that the coefficients $E^i_{\mu \nu}$ in Eq. ($\ref{eq:2.1}$) and $E^{ik}_{\mu \nu}$ in Eq. ($\ref{eq:2.2}$) depend only on the $x$ variables. We are going to describe the Cauchy method for integrating a first-order partial differential equation, considering, in particular, Eq. ($\ref{eq:2.16}$), where the unknown function $z$ does not occur explicitly. We are therefore guaranteed that $\Omega$ contains at least one of the $p$ variables, e.g. $p_0$. If Eq. ($\ref{eq:2.16}$) can be solved with respect to $p_0$, one can write \begin{equation} \label{eq:2.18} p_0 + H(t,x^1,...,x^n|p_1,...,p_n)=0. \end{equation} Let us study first the linear case, i.e. when $H$ is a linear function of the $p$ variables. We are going to show that the task of integrating Eq. ($\ref{eq:2.18}$) is turned into the integration of a system of ordinary differential equations. Indeed, Eq. ($\ref{eq:2.18}$) is then of the type \begin{equation} \label{eq:2.19} p_0+A_0+ \sum\limits_{i=1}^n A^ip_i=0, \end{equation} where the $A$'s depend only on the variables $t$, $x^1$, ..., $x^n$.
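A minimal instance of Eq. ($\ref{eq:2.19}$), with $n=1$, $A_0=0$ and constant $A^1=a$, is the transport equation \begin{equation*} p_0+a\,p_1=\frac{\partial z}{\partial t}+a\,\frac{\partial z}{\partial x^1}=0. \end{equation*} Along the straight lines $x^1={\rm const}+at$ one has $\frac{dz}{dt}=p_0+a\,p_1=0$, so that $z$ is constant on each such line, and the solution taking the value $\varphi_0(x^1)$ at $t=0$ is $z=\varphi_0(x^1-at)$, a simple anticipation of the general method described below.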
Let us consider the space $S_{n+2}$ of the $(n+2)$ variables $(t,x^1,...,x^n,z)$ and an integral hypersurface \begin{equation} \label{eq:2.20} z= \varphi(t|x) \end{equation} \\ of Eq. ($\ref{eq:2.19}$), that we shall denote by $\sigma$. Let $\Gamma$ be the section of $\sigma$ with the hyperplane $t=0$, i.e. the locus of points defined by the equation \begin{equation} \label{eq:2.21} \Gamma: \hspace{1cm} z= \varphi(0|x)= \varphi_0(x). \end{equation} The fundamental guiding principle adopted at this stage consists in viewing $\sigma$ as the locus of $\infty^n$ curves obtainable by integration of a suitable system of ordinary differential equations of the kind \begin{equation} \label{eq:2.22} \frac{d}{dt} x^i = X^i(t|x), \hspace{2cm} i=1,2,...,n, \end{equation} \begin{equation} \label{eq:2.23} \frac{d}{dt} z=Z(t|x), \end{equation} \\ of rank $(n+1)$ in the unknown functions $x^1$, ..., $x^n$, $z$ of the variable $t$. Such a system involves $(n+1)$ arbitrary constants, but their number is reduced by 1 if one requires compatibility of the system with Eq. ($\ref{eq:2.20}$) for the integral surface $\sigma$. The basic assumption, which justifies the interest in the system ($\ref{eq:2.22}$) and ($\ref{eq:2.23}$), is that it is independent of the preliminary integration of Eq. ($\ref{eq:2.19}$). Once we have made this statement, we must express the condition that any integral curve of Eqs. ($\ref{eq:2.22}$) and ($\ref{eq:2.23}$) belongs to $\sigma$. Upon viewing $z$ as a function of $t$ and $x$, Eqs. ($\ref{eq:2.22}$) and ($\ref{eq:2.23}$) lead to \begin{equation} \label{eq:2.24} \frac{dz}{dt}= Z = p_0 + \sum\limits_{i=1}^n p_i \frac{dx^i}{dt} = p_0 + \sum\limits_{i=1}^n p_i X^i, \end{equation} and, bearing in mind Eq. ($\ref{eq:2.19}$) to re-express $p_0$, one obtains \begin{equation} \label{eq:2.25} Z= -A_0 + \sum\limits_{i=1}^n p_i (X^i-A^i). 
\end{equation} Since we want to make sure that the differential system ($\ref{eq:2.22}$) and ($\ref{eq:2.23}$) is independent of the integration of Eq. ($\ref{eq:2.19}$), the coefficients of the $p_i$ must vanish, and hence \begin{equation}\label{eq:2.26} X^i=A^i, \end{equation} from which it follows that \begin{equation}\label{eq:2.27} Z=-A_0. \end{equation} The desired differential system reads therefore \begin{equation} \label{eq:2.28} \frac{dx^i}{dt} = A^i, \hspace{1cm} i = 1,2,...,n, \end{equation} \begin{equation} \label{eq:2.29} \frac{dz}{dt}=-A_0, \end{equation} or also, with the notation used until the end of the nineteenth century, \begin{equation}\label{eq:2.30} \frac{dx^1}{A^1} = \frac{dx^2}{A^2}=...=\frac{dx^n}{A^n} = - \frac{dz}{A_0}=dt, \end{equation} which is capable of determining the integral hypersurfaces $\sigma$ of Eq. ($\ref{eq:2.19}$). Indeed, in order to solve the Cauchy problem relative to a pre-assigned $\Gamma$ of the hyperplane $t=0$, it is enough to consider, in the first place, the whole set of integral curves, which are $\infty^n$, of the system of Eq. ($\ref{eq:2.28}$), in which the function $z$ does not occur. The integration of the residual differential equation ($\ref{eq:2.29}$), which is performed by a simple quadrature once the system ($\ref{eq:2.28}$) has been integrated, completes the determination of the curves of the space $S_{n+2}$ (of the $t$, $x$ and $z$ variables) which are integral curves of the system ($\ref{eq:2.28}$) and ($\ref{eq:2.29}$). If one wants these curves to emanate from the points of $\Gamma$, it is necessary and sufficient that $z$ takes the value $\varphi_0(x)$ at $t=0$, the $x$ referring to the same zero value of $t$ and being therefore identifiable with the $n$ arbitrary constants introduced by the integration of the system ($\ref{eq:2.28}$). Thus, the total number of arbitrary constants is $n$, and every integral hypersurface $\sigma$ of Eq.
($\ref{eq:2.19}$) occurs as the locus of $\infty^n$ integral curves of Eqs. ($\ref{eq:2.28}$) and ($\ref{eq:2.29}$), emanating from the points of $\Gamma$. The concept of transforming the problem of the integration of a linear partial differential equation of first order into the problem of integrating a system of ordinary differential equations, originally developed by Lagrange, was generalized by Lagrange himself, Charpit, Cauchy and Jacobi to non-linear equations. Hereafter, following Levi-Civita \cite{levi1988caratteristiche}, we describe the Cauchy method. For this purpose, let us revert to the general equation \begin{equation} \label{eq:2.31} p_0+ H(t,x^1,...,x^n|p_1,...,p_n)=0, \end{equation} and let us try to understand whether it is possible to determine a generic integral hypersurface (the one whose existence is guaranteed by virtue of the Cauchy theorem for given initial data) as a locus of integral curves of a suitable differential system. One can easily recognize that it is no longer possible, in general, to associate with Eq. ($\ref{eq:2.31}$) a congruence of curves of the space $S_{n+2}$ that holds for every integral hypersurface, but it is necessary to pass to an auxiliary higher-dimensional space. It will be useful to regard as arguments, besides the $x$ coordinates of a generic point $P$ of the integral hypersurface $\sigma$, also the $p_0$, $p_1$, ..., $p_n$ which, geometrically, define a facet for $P$. In order to give a concrete metrical meaning to such $p$ variables, we may regard $t$, the $x$ and $z$ as Cartesian coordinates of the space $S_{n+2}$. The variables $p_0$, $p_1$, ..., $p_n$, $-1$ are then proportional to the directional cosines of the normal to $\sigma$, with reference to the axes $t$, $x^1$, ..., $x^n$, $z$, respectively. Having made this choice, let us try to associate with Eq.
($\ref{eq:2.31}$) a differential system of the kind \begin{equation} \label{eq:2.32} \frac{d}{dt}x^i = X^i (t,x|p), \hspace{0.5cm} \frac{d}{dt}p_i = P_i(t,x|p), \hspace{0.5cm} i=1,2,...,n, \end{equation} \begin{equation} \label{eq:2.33} \frac{d}{dt} z= Z(t,x|p). \end{equation} Once the expressions of the $X^i$ have been determined in terms of the $t$, $x$, $p$ variables, one finds also the form of $Z$. Indeed, since $z$ is a function of $t$ by means of $x^0=t$ and of $x^1$, ..., $x^n$, one has \begin{equation} \label{eq:2.34} \frac{dz}{dt} = p_0 + \sum\limits_{i=1}^n p_i \frac{dx^i}{dt}, \end{equation} and, by virtue of the first of Eq. ($\ref{eq:2.32}$), \begin{equation} \label{eq:2.35} \frac{dz}{dt} = Z(t,x|p)=p_0 + \sum\limits_{i=1}^n p_i X^i. \end{equation} Note that Eq. ($\ref{eq:2.33}$), with $Z$ given by Eq. ($\ref{eq:2.35}$), should be adjoined only after having integrated the system ($\ref{eq:2.32}$), because then $z$ can be expressed in terms of $t$ by means of a quadrature. Hereafter we denote by $\Gamma$ a hypersurface in the hyperplane $t=0$, by $M_0$ a point of $\Gamma$, and by $\bar \omega_0$ the hyperplane tangent at $M_0$ to the integral hypersurface $\sigma$ of Eq. ($\ref{eq:2.31}$) passing through $\Gamma$. We aim at expressing the condition for the integral curve $C_0$ of the system ($\ref{eq:2.32}$) and ($\ref{eq:2.33}$), that emanates from $M_0$ and is tangent to $\bar \omega_0$, to belong to the integral hypersurface $\sigma$, while still fulfilling the equations \begin{equation} \label{eq:2.36} p_i=\frac{\partial z}{\partial x^i}, \hspace{1cm} i=0,1, ..., n; \hspace{1cm} x^0=t, \end{equation} and this for whatever hypersurface $\Gamma$ passing through the point $M_0$. On passing from $t$ to $t+dt$, $p_i$ undergoes an infinitesimal change \begin{equation}\label{eq:2.37} dp_i=P_i dt, \end{equation} and on the other hand, for the Eq.
($\ref{eq:2.36}$) to remain valid, one requires that \begin{equation} \label{eq:2.38} dp_i = \sum\limits_{j=0}^n p_{ij}dx^j, \; \; \; \; i=0, 1, ..., n, \end{equation} having defined \begin{equation} \label{eq:2.39} p_{ij} \equiv \frac{\partial^2z}{\partial x^i \partial x^j}=p_{ji}, \; \; i,j=0,1,...,n. \end{equation} \\ The formulae ($\ref{eq:2.37}$) and ($\ref{eq:2.38}$) for $dp_i$ should agree. Note that the quantities $p_{ij}$ when both indices are positive are arbitrary, because of the choice, arbitrary by hypothesis, of the hypersurface $\Gamma$. The components $p_{0i}$ satisfy instead relations that can be obtained by differentiation of Eq. ($\ref{eq:2.31}$). In other words, one has the $(n+1)$ equations \begin{equation} \label{eq:2.40} p_{0i} + \sum\limits_{j=1}^n \frac{\partial H}{\partial p_j} p_{ji} + \frac{\partial H}{\partial x^i} =0, \; \; i=0,1,...,n. \end{equation} Since the full number of $p_{ij}$ components is $\frac{1}{2}(n+1)(n+2)$, we are left with \begin{equation*} \frac{1}{2} (n+1)(n+2) - (n+1) =\frac{1}{2} n(n+1) \end{equation*} free components, while we have at our disposal the $2n$ quantities $x^1$, $x^2$, ..., $x^n$; $p_1$, $p_2$, ..., $p_n$. It would therefore seem impossible to determine $P_i$ in such a way that \begin{equation} \label{eq:2.41} P_i dt= \sum\limits_{j=0}^n p_{ij}dx^j, \end{equation} independently of the $p_{ij}$. However, Cauchy's idea works because, by virtue of $p_i= \frac{\partial z}{\partial x^i}$, one finds, by differentiation with respect to $t$, \begin{equation} \label{eq:2.42} \frac{dp_i}{dt} = P_i = p_{i0} + \sum\limits_{j=1}^n p_{ij} \frac{dx^j}{dt} = p_{i0} + \sum\limits_{j=1}^n p_{ij}X^j. \end{equation} Now we eliminate the $p_{i0}=p_{0i}$ by means of Eq. ($\ref{eq:2.40}$) and we exploit the symmetry of the $p_{ij}$. Hence we find that Eq. 
($\ref{eq:2.42}$) is equivalent to \begin{equation} \label{eq:2.43} P_i= - \frac{\partial H}{\partial x^i} + \sum\limits_{j=1}^n \bigg{(} X^j - \frac{\partial H}{\partial p_j} \bigg{)}p_{ij}, \hspace{1cm} i=1, 2, ..., n.\end{equation} Such equations are satisfied independently of the $p_{ij}$ values provided that, for all $i$ ranging from 1 through $n$, the following equations hold: \begin{equation} \label{eq:2.44} X^i= \frac{\partial H}{\partial p_i}, \end{equation} \begin{equation} \label{eq:2.45} P_i=-\frac{\partial H}{\partial x^i}. \end{equation} \section{The Bicharacteristics} We have just shown that if, starting from a generic point $M_0$ of the integral hypersurface $\sigma$, one assigns to the $t$, $x$, $p$, $z$ variables some increments which obey the differential system ($\ref{eq:2.32}$) and ($\ref{eq:2.33}$), which is uniquely characterized in the form \begin{equation} \label{eq:2.46} \frac{d}{dt} x^i = \frac{\partial H}{\partial p_i}, \hspace{1cm} \frac{d}{dt}p_i=- \frac{\partial H}{\partial x^i}, \hspace{1cm} i=1, 2, ..., n, \end{equation} \begin{equation} \label{eq:2.47} \frac{d}{dt}z= \sum\limits_{i=1}^n p_i \frac{\partial H}{\partial p_i} - H, \end{equation} one reaches an infinitely close point $M_1$ which belongs again to $\sigma$ and for which the $p_i + dp_i$ define the direction of the normal to the hypersurface $\sigma$ itself. The same considerations may be certainly repeated starting from the point $M_1$, and this for the essential reason that the system ($\ref{eq:2.32}$), ($\ref{eq:2.33}$) and hence ($\ref{eq:2.46}$), ($\ref{eq:2.47}$) has been built in such a way that it holds for all integral hypersurfaces $\sigma$ passing through $M_0$ with given orientation of the normal, i.e. with given values of $p$. As far as the integral hypersurface $\sigma$ is concerned, we are therefore at $M_1$ in the same conditions in which we found ourselves at $M_0$. Hence the whole curve $C$, defined uniquely from Eqs. 
($\ref{eq:2.46}$) and ($\ref{eq:2.47}$) under the condition that the $x$, $p$, $z$ take at $t=0$ the values corresponding to $M_0$, belongs to the integral hypersurface under consideration, which is an integral hypersurface whatsoever among the many passing through $M_0$ and having therein the $\bar \omega_0$ as tangent hyperplane. Thus, we discover the geometric corollary according to which, if two integral manifolds meet each other at a point, they meet each other along the whole curve $C$ passing through that point. The curves $C$ are called $\textit{bicharacteristics}$ by Hadamard, whereas we call $\textit{characteristics}$ (of the space $S$ of the $t$, $x$ variables) the hypersurfaces having exceptional behaviour with respect to the Cauchy problem. \section{Fundamental Solution and its relation to Riemann's Kernel} Following the work of Hadamard, who in Chap. 3 of his famous book \cite{hadamard1952lectures} studies the fundamental solutions of partial differential operators, we start from the familiar form of the fundamental solution \begin{equation} \label{eq:2.48} \mathcal{U}\; \log \frac{1}{r} + \omega, \; r \equiv \sqrt{(x-x_0)^2 + (y-y_0)^2} \end{equation} for the equation \begin{equation} \label{eq:2.49} \bigg{(} \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + C(x,y) \bigg{)} u=0. \end{equation} In the formula ($\ref{eq:2.48}$), $\mathcal{U}$ and $\omega$ are properly chosen functions of $(x, y; x_0, y_0)$ which are regular in the neighbourhood of $x=x_0$, $y=y_0$. The function $\omega$ remains arbitrary to some extent, because any regular solution of Eq. ($\ref{eq:2.49}$) might be added to it. As a next step, Hadamard went on to consider the more general equation \begin{equation} \label{eq:2.50} \bigg{(} \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + D(x,y)\frac{\partial}{\partial x} + H(x,y)\frac{\partial}{\partial y} + E(x,y) \bigg{)} u=0, \end{equation} where $D$, $H$, $E$ are taken to be analytic functions.
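In the simplest case $C=0$, Eq. ($\ref{eq:2.49}$) is the Laplace equation, and one may take $\mathcal{U}=\frac{1}{2\pi}$ and $\omega=0$, recovering the classical logarithmic potential: \begin{equation*} u=\frac{1}{2\pi}\,\log \frac{1}{r}, \qquad \bigg{(} \frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}\bigg{)} u = -\delta(x-x_0)\,\delta(y-y_0), \end{equation*} with the normalization convention that the source at the pole has unit strength.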
In this analytic case, there is no essential distinction between Eq. ($\ref{eq:2.50}$) and the equation \begin{equation} \label{eq:2.51} \mathcal{F}(u) = \bigg{(}\frac{\partial^2}{\partial x \partial y} + A(x,y) \frac{\partial}{\partial x} + B(x,y) \frac{\partial}{\partial y} + C(x,y) \bigg{)} u =0, \end{equation} which can be obtained from Eq. ($\ref{eq:2.50}$) by changing $(x+iy) \rightarrow x$, $(x-iy) \rightarrow y$. This map has the effect of changing $r^2$ in Eq. ($\ref{eq:2.48}$) into $(x-x_0)(y-y_0)$. Thus, Hadamard wrote the fundamental solution of Eq. ($\ref{eq:2.51}$) in the form \begin{equation} \label{eq:2.52} u= \mathcal{U} \; \log \bigg{[} (x-x_0)(y-y_0) \bigg{]} + \omega. \end{equation} For this to be a solution of Eq. ($\ref{eq:2.51}$) at all $x \neq x_0$, $y \neq y_0$, we have to require that \begin{equation} \label{eq:2.53} \mathcal{F} \bigg{[} \mathcal{U} \; \log(x-x_0)(y-y_0) \bigg{]} = \mathcal{M}, \end{equation} where $\mathcal{M}$ is a regular function, while for $\omega$ we can take any regular solution of the equation \begin{equation} \label{eq:2.54} \mathcal{F}(u) = - \mathcal{M}. \end{equation} Indeed, by virtue of the definition ($\ref{eq:2.51}$) of the operator $\mathcal{F}$, one finds \begin{equation} \label{eq:2.55} \begin{split} &\mathcal{F} \Biggl\{ \mathcal{U}\; \log[(x-x_0)(y-y_0)] \Biggr\} = \mathcal{F}(\mathcal{U}) \log[(x-x_0)(y-y_0)] \\ &+ \frac{1}{(x-x_0)} \bigg{(} \frac{\partial }{\partial y} + A(x,y) \bigg{)} \mathcal{U} + \frac{1}{(y-y_0)} \bigg{(} \frac{\partial}{\partial x} + B(x,y) \bigg{)} \mathcal{U}. \end{split} \end{equation} This is found to be a regular function of $x$, $y$ near each of the lines $x=x_0$, $y=y_0$ if and only if the following conditions hold: \begin{description} \item[(i)]The logarithmic term vanishes, so that $\mathcal{U}$ itself is a solution of Eq. ($\ref{eq:2.51}$). \item[(ii)]The numerators of the two fractions on the second line of ($\ref{eq:2.55}$) vanish at the same time as the denominators, i.e.
\begin{equation} \label{eq:2.56} \bigg{(} \frac{\partial}{\partial y} + A \bigg{)} \mathcal{U} =0 \; \; {\rm for}\; x=x_0, \end{equation} \begin{equation} \label{eq:2.57} \bigg{(} \frac{\partial}{\partial x} + B \bigg{)} \mathcal{U} =0 \; \; {\rm for}\; y=y_0.\end{equation} \end{description} Note now that these conditions, together with \begin{equation} \label{eq:2.58} \mathcal{U}=1 \; \; {\rm at} \; x=x_0, y=y_0, \end{equation} are precisely the conditions for the Riemann kernel. Thus, we have just proved that Riemann's kernel coincides with the coefficient of the logarithmic term in the fundamental solution of Eq. ($\ref{eq:2.51}$). \section{The concept of Characteristic Conoid} In general, the fundamental solution is singular not only at a point, e.g. the pole, but along a certain surface. What this surface must be was the content of an important theorem of Le Roux \cite{le1895integrales} and Delassus \cite{delassus1895equations, delassus1896equations}, who proved that $\textit{any singular surface of a solution of a linear differential equation must be characteristic}$. Such a singular surface must therefore satisfy the first-order differential equation \begin{equation} \label{eq:2.59} \Omega \bigg{(} \frac{\partial z}{\partial x^1}, \frac{\partial z}{\partial x^2}, ..., \frac{\partial z}{\partial x^m}; x^1, x^2, ..., x^m \bigg{)} = 0. \end{equation} Among the solutions of Eq. ($\ref{eq:2.59}$), one which was especially considered by Darboux \cite{darboux1883memoire} is the one which has a given point $a(a^1, a^2, ..., a^m)$ as a conic point, which is called, since Hadamard, the $\textit{characteristic conoid}$. It coincides with the characteristic cone itself when the coefficients of the equation, or at least the coefficients of the terms of second order, are constants. In general, however, it is a kind of cone with curved generatrices.
A more precise definition of the $\textit{characteristic conoid}$ can be given if we introduce some basic concepts of pseudo-Riemannian geometry. A space-time $(M,g)$ is the following collection of mathematical entities: \begin{description} \item[(i)] A connected, four-dimensional, Hausdorff (distinct points always belong to disjoint open neighbourhoods) $C^{\infty}$ manifold $M$; \item[(ii)] A Lorentz metric $g$ on $M$, i.e. the assignment of a non-degenerate bilinear form $g|_p : T_pM \times T_pM \rightarrow \mathbb{R}$ with diagonal form $(-,+,+,+)$ to each tangent space. Thus, $g$ has signature +2 and is not positive-definite; \item[(iii)] A time orientation, given by a globally defined timelike vector field $X : M \rightarrow TM$. A timelike or null tangent vector $v \in T_pM$ is said to be future-directed if $g(X(p),v) < 0$, or past-directed if $g(X(p),v)>0$. \end{description} Some important remarks are now in order: \begin{description} \item[(a)] The condition (i) can be formulated for any number of space-time dimensions $\geq 2$; \item[(b)] Also the convention $(+,-,-,-)$ for the diagonal form of the metric can be chosen. The definitions of timelike and spacelike then become opposite to our definitions: $X$ is timelike if $g(X(p),X(p))>0$ for $p \in M$, and $X$ is spacelike if $g(X(p),X(p))<0$ for $p \in M$; \item[(c)] The pair $(M,g)$ is only defined up to equivalence. Two pairs $(M,g)$ and $(M',g')$ are said to be equivalent if there exists a diffeomorphism $\alpha: M \rightarrow M'$ such that $\alpha_* g = g'$. Thus, we are really dealing with $\textit{an equivalence class of pairs}$ $(M,g)$. \end{description} Now, if $M$ is a connected, four-dimensional, Hausdorff manifold of class $C^{\infty}$, a linear partial differential operator is a linear map \begin{equation} \label{eq:2.60} L : u \in C^{\infty}(M) \rightarrow (Lu) \in C^k(M), \end{equation} with coefficients $a^{i_1...i_m}$ given by functions of class $C^k$.
The $\textit{characteristic polynomial}$ of the operator $L$ at a point $x \in M$ is \begin{equation} \label{eq:2.61} H(x, \xi) = \sum\limits_{i_1,...,i_m}a^{i_1...i_m} (x)\xi_{i_1}...\xi_{i_m}, \end{equation} where $\xi_i$ is a cotangent vector at $x$. The cone in the cotangent space $T^*_x$ at $x$ defined by \begin{equation} \label{eq:2.62} H(x,\xi)=0 \end{equation} is called the $\textit{characteristic conoid}$. By construction, such a cone is independent of the choice of coordinates, because the terms of maximal order (also called leading or principal symbol) of $L$ transform into terms of the same order under a change of coordinates. The concept of $\textit{hyperbolicity}$ at $x$ of the operator $L$ requires the existence of a domain $\Gamma_x$, a convex open cone in $T^*_x$, such that every line through $\lambda \in \Gamma_x$ cuts the characteristic conoid in $m$ real distinct points. \\ In particular, second-order differential operators with higher-order terms \begin{equation*} (g^{-1})^{\alpha \beta} (x) \frac{\partial}{\partial x^{\alpha}}\frac{\partial}{\partial x^{\beta}} \end{equation*} are hyperbolic at $x$ if and only if the cone defined by \begin{equation} \label{eq:2.63} H_2(x,\xi) \equiv \sum\limits_{\alpha, \beta=1}^{n} (g^{-1})^{\alpha \beta}(x) \xi_\alpha \xi_\beta =0 \end{equation} is convex, i.e. if the quadratic form $H_2(x,\xi)$ has signature $(1, n-1)$. \section{Fundamental Solutions with an Algebraic Singularity} Following \cite{hadamard1952lectures}, we study in the first place the case of a surface without a singular point. We look for fundamental solutions of equations like ($\ref{eq:2.50}$), but in $m$ variables, having the form \begin{equation} \label{eq:2.64} u = UG^p + \omega, \end{equation} where $G=0$ is the equation of the desired regular surface, $p$ is a given constant, and $U$ and $\omega$ are regular functions.
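For orientation, we recall (a standard fact from Hadamard's theory, stated here without proof) that for a second-order hyperbolic operator the role of $G$ is played by the square of the geodesic distance between the point $x$ and the pole, denoted by $\Gamma$ in \cite{hadamard1952lectures} (not to be confused with the hypersurface $\Gamma$ of the previous sections), and that for an odd number $m$ of variables the exponent is $p=-\frac{(m-2)}{2}$, so that the fundamental solution behaves like \begin{equation*} u=\frac{U}{\Gamma^{(m-2)/2}}+\omega, \end{equation*} while for even $m$ logarithmic terms appear, as in Eq. ($\ref{eq:2.52}$).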
Since we assume for $u$ the homogeneous equation \begin{equation} \label{eq:2.65} \mathcal{F}(u)= \bigg{(} \sum\limits_{i,k=1}^m A^{ik} \frac{\partial^2}{\partial x^i \partial x^k} + \sum\limits_{i=1}^m B^i \frac{\partial}{\partial x^i} + C \bigg{)} u =0, \end{equation} the insertion of the factorized ansatz $u= UF(G)$ into Eq. ($\ref{eq:2.65}$) yields, upon defining $p_i \equiv \frac{\partial G}{\partial x^i}$, terms involving the first derivatives \begin{equation} \label{eq:2.66} \frac{\partial u}{\partial x^i} = U p_i F'(G) + \frac{\partial U}{\partial x^i} F(G), \end{equation} and terms involving the second derivatives \begin{equation} \label{eq:2.67} \begin{split} \frac{\partial^2 u}{\partial x^i \partial x^k}= &U p_i p_k F''(G) + \bigg{(} p_i \frac{\partial U}{\partial x^k} + p_k \frac{\partial U}{\partial x^i} + U \frac{\partial^2 G}{\partial x^i \partial x^k} \bigg{)} F'(G) \\ & + \frac{\partial^2 U}{\partial x^i \partial x^k}F(G). \end{split}\end{equation} Now we have to multiply Eq. ($\ref{eq:2.66}$) for every $i$ by $B^i$, and Eq. ($\ref{eq:2.67}$) for every $i$, $k$ by $A^{ik}$, and add to $Cu=CUF$. In this combination one finds that \cite{hadamard1952lectures}: \begin{description} \item[(i)]The coefficient of $F''(G)$ is $A(p_1, ..., p_m)$; \item[(ii)]In the coefficient of $F'(G)$, the terms in $\frac{\partial U}{\partial x^i}$ are \begin{equation*} \frac{\partial U}{\partial x^i} \sum\limits_{k=1}^m 2A^{ik}p_k = \frac{\partial U}{\partial x^i}\frac{\partial A}{\partial p_i}. \end{equation*} \end{description} Thus, Eq. ($\ref{eq:2.65}$) becomes \begin{equation} \label{eq:2.68} U F''(G)A(p_1, ..., p_m) + F'(G) \bigg{(} \sum\limits_{i=1}^m \frac{\partial U}{\partial x^i} \frac{\partial A}{\partial p_i} + MU \bigg{)} + F(G) \mathcal{F}(U) =0, \end{equation} where $M$ denotes \\ \begin{equation} \label{eq:2.69} M \equiv \mathcal{F}(G) - CG. \end{equation} \\ In particular, if $F(G)$ reduces to the $p$-th power of $G$, i.e. $F(G) =G^p$, one gets from Eq. 
($\ref{eq:2.68}$) the equation \begin{equation} \label{eq:2.70} p(p-1)G^{p-2}UA(p_1, ..., p_m) + pG^{p-1} \bigg{(} \sum\limits_{i=1}^m \frac{\partial U}{\partial x^i} \frac{\partial A}{\partial p_i} + MU \bigg{)} + G^p \mathcal{F}(U) =0. \end{equation} If the cases $p=0, 1$ are ruled out, the left-hand side of Eq. ($\ref{eq:2.70}$) cannot vanish identically, or even be a regular function, if the coefficient $A(p_1, ..., p_m)$ does not vanish, in other words if $G=0$ is not a characteristic. The equation \begin{equation*} A \bigg{(} \frac{\partial G}{\partial x^1}, ..., \frac{\partial G}{\partial x^m } \bigg{)} = 0 \end{equation*} must therefore be either an identity or a consequence of $G=0$, hence there exists a function $A_1$, regular also for $G=0$, such that \begin{equation} \label{eq:2.71} A \bigg{(} \frac{\partial G}{\partial x^1}, ..., \frac{\partial G}{\partial x^m } \bigg{)} = A_1 G. \end{equation} The Delassus theorem is therefore proved. Hereafter we assume that Eq. ($\ref{eq:2.71}$) is satisfied, so that the term involving $G^{p-2}$ disappears from Eq. ($\ref{eq:2.70}$). More precisely, one finds, by virtue of Eq. ($\ref{eq:2.71}$), that Eq. ($\ref{eq:2.70}$) reads as \begin{equation} \label{eq:2.72} p G^{p-1} \bigg{[} (p-1) A_1 U + MU + \sum\limits_{i=1}^m \frac{\partial U}{\partial x^i} \frac{ \partial A}{\partial p_i} \bigg{]} + G^{p} \mathcal{F}(U) =0. \end{equation} At this stage, multiplication by $G^{1-p}$ and subsequent restriction to the surface $G=0$ imply that Eq. ($\ref{eq:2.72}$) leads to \begin{equation} \label{eq:2.73} \sum\limits_{i=1}^m \frac{\partial U}{\partial x^i} \frac{\partial A}{\partial p_i} + \big{[} M + (p-1) A_1 \big{]} U=0. \end{equation} This is a linear partial differential equation of first order in $U$, whose integration would lead to the introduction of the lines defined by the ordinary differential equations \begin{equation} \label{eq:2.74} \frac{dx^1}{\frac{1}{2} \frac{\partial A}{\partial p_1}} = ...
= \frac{dx^m}{\frac{1}{2} \frac{\partial A}{\partial p_m}} = ds. \end{equation} In the denominators it is possible to recognize the direction cosines of the transversal to $G=0$; this is, in the case considered, tangent to that surface (since the latter is a characteristic; the transversal is the direction of the generatrix of contact between the plane $(p_1, ..., p_m)$ and the characteristic cone). Thus, a line satisfying Eq. ($\ref{eq:2.74}$) and issuing from a point of $G=0$ lies entirely on that surface. These lines are in fact the bicharacteristics of Eq. ($\ref{eq:2.59}$), with $\Omega=A$ and $z=G$. If the function $A_1$ in Eq. ($\ref{eq:2.71}$) vanishes, so that the function $G$ satisfies identically the equation $A=0$, the theory of partial differential equations of first order shows that, besides Eq. ($\ref{eq:2.74}$), the bicharacteristics satisfy also the equations \begin{equation} \label{eq:2.75} \frac{dp_1}{-\frac{1}{2} \frac{\partial A}{\partial x^1}} = ... = \frac{dp_m}{-\frac{1}{2} \frac{\partial A}{\partial x^m}} = ds, \end{equation} and hence they can be determined, without knowing the equation $G=0$, by integrating the system of ordinary differential equations ($\ref{eq:2.74}$) and ($\ref{eq:2.75}$). \section{Geodesic Equations With and Without Reparametrization Invariance} The characteristic conoid with any point $a(a^1, ..., a^m)$ as its vertex has that point as a singular point, and to study this new case one has first to form the equation of the characteristic conoid, that is, $\textit{the locus of all bicharacteristics issuing from}$ $a$.
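In the constant-coefficient case the construction can be carried out explicitly. For the wave operator, with \begin{equation*} A(p|x)=p_1^2-\sum\limits_{i=2}^m p_i^2, \end{equation*} the equations ($\ref{eq:2.74}$) and ($\ref{eq:2.75}$) give $\frac{dp_i}{ds}=0$ and hence straight lines $x^1-a^1=p_{01}s$, $x^i-a^i=-p_{0i}s$, so that the locus of the bicharacteristics issuing from $a$ is the ordinary cone \begin{equation*} (x^1-a^1)^2-\sum\limits_{i=2}^m (x^i-a^i)^2=0, \end{equation*} in agreement with the remark that the characteristic conoid reduces to the characteristic cone when the coefficients of the second-order terms are constants.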
One has to take any set of quantities $p_1$, ..., $p_m$ fulfilling the equation \begin{equation} \label{eq:2.76} A(p_1, ..., p_m; x^1, ..., x^m)=0 \end{equation} and, with the initial conditions \begin{equation} \label{eq:2.77} x^i(s=0)=a^i, \hspace{1cm} p_i(s=0)=p_{0i}, \end{equation} integrate Eqs. ($\ref{eq:2.74}$) and ($\ref{eq:2.75}$), here written concisely in Hamilton form \begin{equation} \label{eq:2.78} \frac{dx^i}{ds} = \frac{1}{2} \frac{\partial A}{\partial p_i}, \hspace{1cm} \frac{dp_i}{ds} =- \frac{1}{2} \frac{\partial A}{\partial x^i}. \end{equation} Since the ratios of the quantities $p_{01}$, ..., $p_{0m}$ satisfying Eq. ($\ref{eq:2.76}$) depend on $(m-2)$ parameters, the locus of the lines generated in such a way is a surface. Our task is to obtain a precise form for the equation of this surface. For this purpose, we construct every line issuing from the point $(a^1, ..., a^m)$ and satisfying the differential system ($\ref{eq:2.78}$), whether or not the initial values $p_{01}$, ..., $p_{0m}$ of the variables $p_i$ satisfy Eq. ($\ref{eq:2.76}$). Such lines are indeed the $\textit{geodesics}$ of a suitably chosen line element. Within this framework we recall the definition of a $\textit{geodesic}$. If $T$ is a tensor field defined along a curve $\lambda$ of class $C^r$, and if $\bar T$ is an arbitrary tensor field of class $C^r$ which extends $T$ in an open neighbourhood of $\lambda$, the covariant derivative of $T$ along $\lambda(t)$ can be denoted by $\frac{DT}{\partial t}$ and is equal to \begin{equation} \label{eq:2.79} \frac{DT}{\partial t} \equiv \nabla_{\frac{\partial}{\partial t}} \bar T, \end{equation} where $\nabla$ is the connection of the Riemannian or pseudo-Riemannian manifold we are considering. The formula ($\ref{eq:2.79}$) describes a tensor field of class $C^{r-1}$, defined along the curve $\lambda$, and independent of the extension $\bar T$ \cite{hawking1973large}.
In particular, if $\lambda$ has local coordinates $x^a(t)$, and $X^{a} = \frac{dx^a}{dt} $ are the components of its tangent vector, the expression in local coordinates of the covariant derivative of a vector $Y$ along a curve is \begin{equation} \label{eq:2.80} \frac{DY^a}{\partial t} = \frac{\partial Y^a}{\partial t} + \sum\limits_{b,c=1}^n \Gamma\{a, [b,c] \} \frac{dx^b}{dt}Y^c, \end{equation} where the $\Gamma'$s are the Christoffel symbols of the second kind. The tensor field $T$ (and also, in particular, the vector field corresponding to $Y$) is said to undergo $\textit{parallel transport along}$ $\lambda$ if \begin{equation} \label{eq:2.81} \frac{DT}{\partial t} =0. \end{equation} In particular, one may consider the covariant derivative of the tangent vector itself along $\lambda$. The curve $\lambda$ is said to be a $\textit{geodesic}$ if \begin{equation*} \nabla_{X}X= \frac{D}{\partial t} \bigg{(}\frac{\partial}{\partial t} \bigg{)}_\lambda \end{equation*} is parallel to the tangent vector $\big{(}\frac{\partial}{\partial t} \big{)}_\lambda$. This implies that there exists a smooth function $f$ on the manifold $M$ for which (the semicolon being used to denote the covariant derivative $\nabla_b$) \begin{equation} \label{eq:2.82} \sum\limits_{b=1}^n X^a_{;b}X^b = fX^a. \end{equation} The parameter $v(t)$ along the curve $\lambda$ such that \begin{equation} \label{eq:2.83} \frac{D}{\partial v} \bigg{(} \frac{\partial}{\partial v} \bigg{)}_\lambda =0 \end{equation} is said to be an $\textit{affine parameter}$. The corresponding tangent vector $U \equiv \big{(} \frac{\partial}{\partial v} \big{)}_\lambda$ obeys the equation \begin{equation} \label{eq:2.84} \sum\limits_{b=1}^n U^a_{;b}U^b=0, \end{equation} i.e., by virtue of ($\ref{eq:2.80}$), \begin{equation} \label{eq:2.85} \frac{d^2 x^a}{d v^2} + \sum\limits_{b,c=1}^n \Gamma \{a, [b,c]\} \frac{dx^b}{dv} \frac{dx^c}{d v}=0.
\end{equation} The affine parameter is determined up to a linear transformation \begin{equation} \label{eq:2.86} v' = av+b. \end{equation} We stress that our geodesics are $\textit{auto-parallel}$ curves \cite{hawking1973large}. We prefer auto-parallel curves because they involve the full connection. Given this definition of a geodesic, we have, in the case under consideration, its alternative definition as extremal curve for the Lorentzian arc-length. In fact, if \begin{equation} \label{eq:2.87} \mathbf{H} (dx^1, ..., dx^m; x^1, ..., x^m) = \sum\limits_{i,k=1}^m H_{ik}dx^i \otimes dx^k \end{equation} is any non-singular quadratic form, the coefficients $H_{ik}$ being given functions of $x^1$, ..., $x^m$, then, if the differentials $dx^l$ are viewed as differentials of the corresponding $x^l$, $\mathbf{H}$ can be taken as the squared line element in an $m$-dimensional manifold. The integral \begin{equation} \label{eq:2.88} \int \sqrt{ \mathbf{H}(dx^1, ..., dx^m)} = \int \sqrt{\mathbf{H}(x'^1, ..., x'^m)}dt, \end{equation} where $x'^i \equiv \frac{d x^i}{dt}$, is therefore the arc-length of a smooth curve. The corresponding geodesics are the lines which make the variation of this functional vanish. Their differential equations are \begin{equation} \label{eq:2.89} \frac{d}{dt} \bigg{(} \frac{\partial}{\partial x'^i} \sqrt{\mathbf{H}} \bigg{)} - \frac{\partial}{\partial x^i} \sqrt{\mathbf{H}}=0, \hspace{1cm} i=1, 2, ..., m. \end{equation} On the other hand, Lagrangian dynamics leads to writing these differential equations in a different form, i.e. \begin{equation} \label{eq:2.90} \frac{d}{ds} \bigg{(} \frac{\partial}{\partial x'^i} \mathbf{H} \bigg{)} - \frac{\partial}{\partial x^i} \mathbf{H} =0, \hspace{1cm} i=1, ..., m; \end{equation} this being the law governing the motion of a system whose $\textit{vis viva}$ is $\mathbf{H}(x',x)$, and on which no forces act.
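A simple worked example, chosen by us for illustration and not drawn from the argument above, is the Euclidean plane with polar squared line element $\mathbf{H}= dr \otimes dr + r^2 d\theta \otimes d\theta$. With vis viva $\mathbf{H}(x',x) = r'^2 + r^2 \theta'^2$, Eqs. ($\ref{eq:2.90}$) become \begin{equation*} \frac{d}{ds} \bigg{(} \frac{\partial \mathbf{H}}{\partial r'} \bigg{)} - \frac{\partial \mathbf{H}}{\partial r} = 2 r'' - 2 r \theta'^2 = 0, \hspace{1cm} \frac{d}{ds} \bigg{(} \frac{\partial \mathbf{H}}{\partial \theta'} \bigg{)} - \frac{\partial \mathbf{H}}{\partial \theta} = \frac{d}{ds} \big{(} 2 r^2 \theta' \big{)} = 0, \end{equation*} so that $r^2 \theta'$ and $\mathbf{H}$ itself are first integrals; the solutions are the straight lines of the plane traversed with constant vis viva, while the form ($\ref{eq:2.89}$) yields the same lines with an arbitrary time law.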
The equations ($\ref{eq:2.89}$) and ($\ref{eq:2.90}$) are not exactly equivalent, but are $\textit{conditionally equivalent}$ \cite{hadamard1952lectures}. The former determines the required lines but not $t$, the time remaining an arbitrary parameter whose choice is immaterial. In other words, Eqs. ($\ref{eq:2.89}$) are reparametrization-invariant, because they remain unchanged if $t$ gets replaced by any smooth function $\phi(t)$. However, the latter equations, i.e. ($\ref{eq:2.90}$), define not only a line, but a motion on that line, and this motion is no longer arbitrary in time. It must satisfy the vis viva integral \begin{equation} \label{eq:2.91} \mathbf{H}= {\rm constant}, \end{equation} hence the representative point $(x^1, ..., x^m)$ must move on the curve with constant kinetic energy. But on taking into account Eq. ($\ref{eq:2.91}$), the systems ($\ref{eq:2.89}$) and ($\ref{eq:2.90}$) become in general equivalent. A simple way to see this is to point out that, if we choose $t$ in Eq. ($\ref{eq:2.89}$) in such a way that $\mathbf{H}$ is constant in time, then the denominator $2 \sqrt{\mathbf{H}}$ in the identity \begin{equation} \label{eq:2.92} \frac{\partial}{\partial x'^i} \sqrt{\mathbf{H}}= \frac{1}{2 \sqrt{\mathbf{H}}}\frac{\partial}{\partial x'^i} \mathbf{H} \end{equation} is not affected by the time derivative, and we obtain eventually Eq. ($\ref{eq:2.90}$). Conversely, if one wants to write Eq. ($\ref{eq:2.90}$) in such a way that the independent variable $t$ may become arbitrary, one has to note that, as a function of $t$, the variable $s$ can be easily evaluated from the vis viva integral ($\ref{eq:2.91}$) according to \begin{equation} \label{eq:2.93} ds= \sqrt{\mathbf{H}} dt. \end{equation} On replacing $ds$ by this value, and accordingly $x'^i$ by $\frac{x'^i}{\sqrt{\mathbf{H}}}$, one recovers ($\ref{eq:2.89}$) \cite{darboux1896leccons, hadamard1952lectures}. All these recipes no longer hold for bicharacteristics, for which $A=\mathbf{H} =0$.
For them the system ($\ref{eq:2.89}$) becomes meaningless, whereas Eqs. ($\ref{eq:2.90}$) remain valid. \chapter{How to Build the Fundamental Solution} \epigraph{The game's afoot.}{William Shakespeare, King Henry V} \section{Hamiltonian Form of Geodesic Equations} Let us now try to see how it is possible to build the fundamental solution. For this purpose, we here consider the fundamental form of $n$-dimensional Euclidean space \cite{garabedian1998partial} \begin{equation} \label{eq:3.1} g_{E} = \sum \limits_{i,j=1}^n A_{ij}dx^i \otimes dx^j, \end{equation} where $A_{ij}=A_{ji}=A_{ij}(x^1, ..., x^n)$. Let $\Omega_{pq}$ be the set of piecewise smooth curves in the manifold $M$ from $p$ to $q$. Given a curve $c: [0,1] \rightarrow M$ belonging to $\Omega_{pq}$, there is a finite partition of $[0,1]$ such that $c$ restricted to the sub-interval $[t_i,t_{i+1}]$ is smooth $\forall i$. If we consider the interval $[t_0,t_1]$, the arc-length of $c$ with respect to $g_E$ is defined by \begin{equation} \label{eq:3.2} I \equiv \int_{t_0}^{t_1} \sqrt{\sum\limits_{i,j=1}^n A_{ij} \frac{dx^i}{dt} \frac{dx^j}{dt}} dt, \end{equation} and at the ends of the integration interval we define \begin{equation} \label{eq:3.3} x^i(t_0)\equiv y^i, \; x^i(t_1)\equiv z^i, \hspace{1cm} i=1, 2, ..., n. \end{equation} Let $a^{ij}$ be the contravariant components of the inverse metric, for which \begin{equation} \label{eq:3.4} \sum\limits_{k=1}^n a^{ik}A_{kj} = \delta^i_j. \end{equation} To begin with the variational problem, let us define \begin{equation} \label{eq:3.5} Q \equiv \sum\limits_{i,j=1}^n A_{ij} \frac{dx^i}{dt} \frac{dx^j}{dt}, \end{equation} thus the Lagrangian related to this problem, defined as \begin{equation} \label{eq:3.6} L = \sqrt{Q}, \end{equation} is a function homogeneous of degree 1 in the $\dot{x}^j = \frac{dx^j}{dt}$. Since the associated Hessian matrix is singular, i.e.
\begin{equation} \label{eq:3.7} \det \bigg{(} \frac{\partial^2 L}{\partial \dot{x}^i \partial \dot{x}^j} \bigg{)} =0, \end{equation} it is not possible to define the Legendre transform. However, it is possible to overcome this difficulty by writing the Euler-Lagrange equations, which in terms of $Q$ are \begin{equation} \label{eq:3.8} \frac{d}{dt} \frac{1}{\sqrt{Q}} \frac{\partial Q}{\partial \dot{x}^i} - \frac{1}{\sqrt{Q}}\frac{\partial Q}{\partial x^i} =0, \hspace{1cm} i=1, 2, ..., n. \end{equation} This suggests taking $t$, the parameter along the geodesics, as the arc-length measured from the point $y^1, ..., y^n$. The integral \begin{equation} \label{eq:3.9} J \equiv \int_0^s Q dt \end{equation} is stationary. The terminal values $y^1, ..., y^n$ and $z^1, ..., z^n$ of $x^1, ..., x^n$ are fixed, but the upper limit of integration $s$ is allowed to vary. Hence, the extremals of $J$ come to depend on $s$ and on the variable of integration $t$ according to \cite{garabedian1998partial} \begin{equation} \label{eq:3.10} x^i=x^i \bigg{(} \frac{I}{s} \; t \bigg{)} \end{equation} for a change of scale, since $Q$ is a function homogeneous of degree 2 in the velocity variables $\dot{x}^1, ..., \dot{x}^n$. Thus, the constant value of $Q$ becomes $\frac{I^2}{s^2}$ along each such extremal curve, and \begin{equation} \label{eq:3.11} J = \frac{I^2 (z^1, ..., z^n)}{s^2}(s - 0) = \frac{I^2(z^1, ..., z^n)}{s}. \end{equation} Now we can apply the Hamilton-Jacobi theory to the equations of motion that we are studying. Since the corresponding momenta, conjugate to $Q$ rather than to the degenerate Lagrangian $L$, are \begin{equation} \label{eq:3.12} p_i = \frac{\partial Q}{\partial \dot{x}^i} = 2 \sum\limits_{j=1}^n A_{ij} \frac{dx^j}{dt}, \end{equation} we can re-express the velocity variables in the form $ \frac{dx^j}{dt} = \frac{1}{2} \sum\limits_{i=1}^n a^{ji}p_i$.
Thus, it is possible to write \begin{equation} \label{eq:3.13} \sum\limits_{i=1}^n p_i \frac{dx^i}{dt} = 2 \sum\limits_{i,j=1}^n A_{ij} \frac{dx^i}{dt } \frac{dx^j}{dt} = \frac{1}{2} \sum\limits_{i,j=1}^n a^{ij}p_i p_j. \end{equation} The Hamiltonian reads as \begin{equation} \label{eq:3.14} H=Q= \frac{1}{4} \sum\limits_{i,j=1}^n a^{ij}p_i p_j. \end{equation} The functional $J$, previously defined, satisfies the Hamilton-Jacobi equation \begin{equation} \label{eq:3.15} \frac{\partial J}{\partial s} + \frac{1}{4} \sum\limits_{i,j=1}^n a^{ij} (z^1, ..., z^n) \frac{\partial J}{\partial z^i}\frac{\partial J}{\partial z^j} =0. \end{equation} In this equation we can insert the form $(\ref{eq:3.11})$ of $J$ and set eventually $s=1$. The non-vanishing factor $I^2$, common to both terms, drops therefore out, and the equation $(\ref{eq:3.15})$ reduces to \begin{equation} \label{eq:3.16} \sum\limits_{i,j=1}^n a^{ij} \frac{\partial I}{\partial z^i}\frac{\partial I}{\partial z^j} =1. \end{equation} If we now define \begin{equation} \label{eq:3.17} \Gamma \equiv I^2, \end{equation} we obtain $I= \sqrt{\Gamma}$, and Eq. ($\ref{eq:3.16}$) takes the remarkable form \begin{equation} \label{eq:3.18} \sum\limits_{i,j=1}^n a^{ij} \frac{\partial \Gamma}{\partial z^i}\frac{\partial \Gamma}{\partial z^j} = 4 \Gamma. \end{equation} This equation coincides with Eq. ($\ref{eq:2.71}$) upon setting therein \begin{equation} \label{eq:3.19} G= \Gamma, \; A_1=4, \; A=a({\rm grad} \Gamma,{\rm grad}\Gamma). \end{equation} The function $\Gamma$ is a conoidal solution of Eq. ($\ref{eq:3.18}$), generated by all bicharacteristics of this equation passing through $(y^1, ..., y^n)$ which are geodesics of the metric $g_E$.
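In the simplest case $a^{ij}=\delta^{ij}$, the geodesics are straight lines and the world function is $\Gamma(z;y)=\sum_i (z^i-y^i)^2$, so Eq. ($\ref{eq:3.18}$) can be checked directly. The following sketch (sample points and grid step are our own choices) verifies it by central finite differences:

```python
# Flat-space check of Eq. (3.18): with a^ij = delta^ij and
# Gamma(z; y) = sum_i (z_i - y_i)^2, one must have |grad Gamma|^2 = 4 Gamma.
def gamma(z, y):
    return sum((zi - yi) ** 2 for zi, yi in zip(z, y))

def grad_gamma(z, y, h=1e-6):
    # Central finite differences in each coordinate z^i.
    g = []
    for i in range(len(z)):
        zp = list(z); zm = list(z)
        zp[i] += h
        zm[i] -= h
        g.append((gamma(zp, y) - gamma(zm, y)) / (2 * h))
    return g

y = (0.0, 0.0, 0.0)
z = (1.0, -2.0, 0.5)
lhs = sum(gi ** 2 for gi in grad_gamma(z, y))   # sum_i (dGamma/dz^i)^2
rhs = 4.0 * gamma(z, y)
residual = abs(lhs - rhs)
```

Dividing the same identity by $4\Gamma$ reproduces the eikonal-type equation ($\ref{eq:3.16}$) for $I=\sqrt{\Gamma}$.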
The geodesics satisfy the equations of motion in Hamiltonian form \begin{equation} \label{eq:3.20} \frac{dx^i}{ds} = \frac{1}{2} \sum\limits_{j=1}^n a^{ij}p_j; \hspace{1cm} \frac{dp_i}{ds} = - \frac{1}{4} \sum\limits_{j,k=1}^n \frac{\partial a^{jk}}{\partial x^i} p_j p_k,\end{equation} together with the initial conditions \begin{equation} \label{eq:3.21} x^i(0)=y^i; \hspace{1cm} p_i(0) = \gamma_i. \end{equation} In a generic space-time manifold, the $a^{ij}$ of Eq. ($\ref{eq:3.18}$) will denote the contravariant components $(g^{-1})^{ij}$ of \begin{equation} \label{eq:3.22} g^{-1}=\sum\limits_{i,j=1}^n a^{ij} \frac{\partial}{\partial x^i} \otimes \frac{\partial}{\partial x^j}, \end{equation} the signature of $g$ being $(n-2)$. Equation ($\ref{eq:3.17}$) will then be interpreted by stating that $\Gamma$ is a two-point function, called the $\textit{world function}$ and equal to the square of the geodesic distance between the space-time points $x=(x^1, ..., x^n)$ and $y=(y^1, ..., y^n)$. This means that such a formalism can only be used $\textit{locally}$, when there exists a unique geodesic from $x$ to $y$. Such a space-time is said to be $\textit{geodesically convex}$. \section{The Unique Real-Analytic World Function} We aim now to demonstrate, following Hadamard \cite{hadamard1952lectures}, that Eq. ($\ref{eq:3.18}$), or ($\ref{eq:2.71}$), is the fundamental equation in the theory of the characteristic conoid, in that any function real-analytic in the neighbourhood of the desired vertex $a$, vanishing on the conoid and satisfying Eq. ($\ref{eq:3.18}$), can only be the world function $\Gamma$ itself (besides this, there exist infinitely many non-analytic solutions of Eq. ($\ref{eq:3.18}$)). \proof The desired function should be of the form $\Gamma \Pi$, where $\Pi$ is a real-analytic function. By insertion into Eq.
($\ref{eq:3.18}$), this yields \begin{equation} \begin{split} \label{eq:3.23} 4\Gamma \Pi &= \sum\limits_{i,j=1}^n a^{ij} \bigg{(} \Pi \frac{\partial \Gamma}{\partial x^i} + \Gamma \frac{\partial \Pi}{\partial x^i} \bigg{)} \bigg{(} \Pi \frac{\partial \Gamma}{\partial x^j} + \Gamma \frac{\partial \Pi}{\partial x^j} \bigg{)} \\ &= \Pi^2 \nabla_1 \Gamma + 2 \Pi \Gamma \nabla_1 (\Pi, \Gamma) + \Gamma^2 \nabla_1 \Pi. \end{split} \end{equation} On the right-hand side of Eq. ($\ref{eq:3.23}$), the term involving the mixed differential parameter can be expressed, making use of the derivative of $\Pi$ along a geodesic and the symmetry of $a^{ij}$, as \begin{equation} \begin{split} \label{eq:3.24}& \nabla_1(\Gamma,\Pi) = \sum\limits_{i,j=1}^n a^{ij} \frac{\partial \Gamma}{\partial x^i} \frac{\partial \Pi}{\partial x^j} = 2s \sum\limits_{i,j=1}^n a^{ij}p_i \frac{\partial \Pi}{\partial x^j} \\ &= s \sum\limits_{j=1}^n \frac{\partial \Pi}{\partial x^j} \frac{\partial A}{\partial p_j} = 2s \sum\limits_{j=1}^n \frac{\partial \Pi}{\partial x^j} \frac{dx^j}{ds} = 2s \frac{d\Pi}{ds}. \end{split} \end{equation} Thus, Eq. ($\ref{eq:3.23}$) becomes \begin{equation} \label{eq:3.25} \Pi^2 \nabla_1 \Gamma + 4s \Gamma \Pi \frac{d \Pi}{ds} + \Gamma^2 \nabla_1 \Pi = 4 \Gamma \Pi. \end{equation} In this equation, we can divide both sides by $4 \Gamma \Pi$, finding therefore \begin{equation} \begin{split} \label{eq:3.26} &\Pi \frac{\nabla_1 \Gamma}{4 \Gamma} + s \frac{d \Pi}{ds} + \frac{\Gamma}{4 \Pi} \nabla_1 \Pi - 1 = \bigg{(} \Pi + s \frac{d \Pi}{ds} - 1 \bigg{)} + \frac{ \Gamma}{4 \Pi} \nabla_1 \Pi \\ &= \frac{d}{ds} [s (\Pi - 1)] + \frac{\Gamma}{4 \Pi} \nabla_1 \Pi = 0.
\end{split}\end{equation} This equation shows that the function $\Pi$ equals 1 over the whole conoid: indeed, on $\Gamma=0$ it reduces to $\frac{d}{ds}[s(\Pi - 1)]=0$, and $s(\Pi - 1)$ vanishes at $s=0$. Hence we can write the general formula \begin{equation} \label{eq:3.27} \Pi = 1 + \Gamma^l E, \end{equation} where $l$ is a positive exponent, and $E$ is yet another real-analytic function, not vanishing over the whole surface of the conoid. But this leads to a contradiction, because the insertion of Eq. ($\ref{eq:3.27}$) for $\Pi$ into Eq. ($\ref{eq:3.26}$) yields \begin{equation} \begin{split} \label{eq:3.28} &\frac{d}{ds} \big{(} s \Gamma^l E \big{)} + \frac{\Gamma}{4 \Pi} \nabla_1 (\Gamma^l E) = \Gamma^l E + s \frac{d \Gamma^l}{ds} E + s \Gamma^l \frac{dE}{ds} \\ & + \frac{\Gamma}{4 \Pi} \sum\limits_{i,j=1}^n a^{ij} \bigg{(} l \Gamma^{l-1} \frac{\partial \Gamma}{\partial x^i} E + \Gamma^l \frac{ \partial E}{\partial x^i} \bigg{)} \bigg{(} l \Gamma^{l - 1} \frac{\partial \Gamma}{\partial x^j} E + \Gamma^l \frac{\partial E}{\partial x^j} \bigg{)} \\ & = \Gamma^l \bigg{[} s \frac{d E}{ds} + (2l + 1) E \bigg{]} + \frac{\Gamma}{4 \Pi} \bigg{[} l^2 \Gamma^{2l - 2} E^2(\nabla_1 \Gamma ) \\ & + 2l \Gamma^{2l-1}E \nabla_1 (\Gamma, E) + \Gamma^{2l} \nabla_1 E \bigg{]}. \end{split} \end{equation} This equation, when restricted to the characteristic conoid, reduces to \begin{equation} \label{eq:3.29} s \frac{dE}{ds} + (2l + 1)E=0, \end{equation} which is solved by \begin{equation} \label{eq:3.30} E = E_0 \bigg{(} \frac{s}{s_0} \bigg{)}^{- ( 2l+1)}, \end{equation} which can only be regular if $E_0=0$, which implies $E=0$. \endproof \section{Examples of Fundamental Solutions} Now we aim to study the linear partial differential equation \begin{equation} \label{eq:3.31} \mathcal{F}(u)= \bigg{(} \sum\limits_{i,k=1}^m A^{ik} \frac{\partial^2}{\partial x^i \partial x^k} + \sum\limits_{i=1}^m B^i \frac{\partial }{\partial x^i} + C \bigg{)} u =0, \end{equation} with associated world function $\Gamma$, the square of the geodesic distance between two points, obeying Eq.
($\ref{eq:3.18}$), with coefficients $a^{ij}$ equal to the contravariant components $A^{ij}$ of the inverse metric. A fundamental solution of $ \mathcal{F}(u)=0$ is a two-point function $R(x, \xi)$, with $x=(x^1, ..., x^n)$ and $\xi=(\xi^1, ..., \xi^n)$, which solves Eq. ($\ref{eq:3.31}$) in its dependence on $x$ and possesses, at the parameter point $\xi$, a singularity characterized by the split reading as \cite{garabedian1998partial} \begin{equation} \label{eq:3.32} R = \frac{U}{\Gamma^m} + V \log (\Gamma) + W, \end{equation} where $U$, $V$ and $W$ are taken to be smooth functions of $x$ in a neighbourhood of $\xi$, with $U \neq 0$ at $\xi$, and where the exponent $m$ is given by \begin{equation} \label{eq:3.33} m = \frac{n}{2} - 1. \end{equation} We are going to show that, when $n$ is odd, the coefficient $V$ of the logarithm vanishes, whereas the term $W$ is redundant for $n$ even. Thus, the dimension of Euclidean space affects in a non-trivial way the conceivable form of the fundamental solution. \subsection{Odd Number of Variables} Following Garabedian \cite{garabedian1998partial}, we consider first the odd values of $n$. We then put $V=W=0$ in Eq. ($\ref{eq:3.32}$), and look for a convergent series expressing the unknown function $U$, in the form \begin{equation} \label{eq:3.34} U = \sum\limits_{l=0}^\infty U_{l} \Gamma^{l} = U_0 + U_1\Gamma + U_2 \Gamma^2 + O(\Gamma^3), \end{equation} with regular coefficients $U_l$.
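Before solving for the coefficients, it may help to record the simplest instance, a consistency check of ours rather than part of the general argument: for the flat Laplace operator $\mathcal{F}=\Delta$ in $n$-dimensional Euclidean space with $n$ odd, one has $a^{ij}=\delta^{ij}$, $\Gamma = r^2$, and a constant $U=U_0$ already closes the ansatz, since \begin{equation*} R = \frac{U_0}{\Gamma^{m}} = U_0 \, r^{2-n}, \hspace{1cm} \Delta \, r^{2-n} = \frac{1}{r^{n-1}} \frac{d}{dr} \bigg{(} r^{n-1} \frac{d}{dr} \, r^{2-n} \bigg{)} = 0 \quad (r \neq 0), \end{equation*} i.e. the classical fundamental solution of the Laplace equation, with all higher coefficients $U_l$, $l \geq 1$, vanishing.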
By inserting the generic term $U_{l} \Gamma^{l-m}$ of $R$ into ($\ref{eq:3.31}$), where we recall that $u=R$, and exploiting the symmetry of the inverse metric $a^{ij}$, we have \begin{equation} \begin{split} \label{eq:3.35} &\mathcal{F}[U_l\Gamma^{l-m}] =(l-m) (l-m-1)U_l \Gamma^{l-m-2} \sum\limits_{i,j=1}^n a^{ij} \frac{\partial \Gamma}{\partial x^i} \frac{\partial \Gamma}{\partial x^j} + \\ & + (l-m) \bigg{[} 2 \sum\limits_{i,j=1}^n a^{ij} \frac{\partial U_l}{\partial x^i} \frac{\partial \Gamma}{\partial x^j} + 4 D U_l \bigg{]} \Gamma^{l-m-1} + \mathcal{F}[U_l]\Gamma^{l-m}, \end{split} \end{equation} where $D$ is the term \begin{equation} \label{eq:3.36} D \equiv \frac{1}{4} \sum\limits_{i,j=1}^n a^{ij} \frac{\partial^2 \Gamma}{\partial x^i \partial x^j} + \frac{1}{4} \sum\limits_{i=1}^n B^i \frac{\partial \Gamma}{\partial x^i}. \end{equation} One should stress that the possibility of eliminating the lowest power of $\Gamma$ on the right-hand side of Eq. ($\ref{eq:3.35}$) by means of the first order partial differential equation ($\ref{eq:3.18}$) now shows why the fundamental solution $R$ should be expanded in terms of this particular function, i.e. the world function $\Gamma$. It is now convenient to introduce again a parameter $s$ which is measured along the geodesics that generate $\Gamma$. We can then write \begin{equation} \label{eq:3.37} \sum\limits_{i,j=1}^n a^{ij} \frac{\partial U_l}{\partial x^i} \frac{\partial \Gamma}{\partial x^j} = 2s \frac{d U_l}{ds}. \end{equation} Hence we arrive at a simplified form of Eq. ($\ref{eq:3.35}$), i.e. \cite{garabedian1998partial} \begin{equation} \label{eq:3.38} \mathcal{F}[U_l\Gamma^{l-m}] = 4 (l-m) \bigg\{ s \frac{dU_l}{ds} + (D+l-m-1)U_l \bigg\} \Gamma^{l-m-1} + \mathcal{F}[U_l] \Gamma^{l-m}.
\end{equation} At this stage, in order to solve, $\forall x \neq \xi$, the equation \begin{equation} \label{eq:3.39} \mathcal{F}[R] = \mathcal{F} \bigg{[} \sum\limits_{l=0}^\infty U_l \Gamma^{l-m} \bigg{]} =0, \end{equation} we set to zero all coefficients of the various powers of $\Gamma$. This leads to the fundamental recursion formulae \begin{equation} \label{eq:3.40} \bigg{[} s \frac{d}{ds} + (D- m-1) \bigg{]} U_0 =0, \end{equation} \begin{equation} \label{eq:3.41} \bigg{[} s \frac{d}{ds} + (D+ l-m-1) \bigg{]} U_l = - \frac{1}{4(l-m)} \mathcal{F} [U_{l-1}], \hspace{1cm} l \geq 1, \end{equation} for the evaluation of $U_0$, $U_1$, $U_2$, .... For odd values of $n$, the division by $(l-m)$ on the right-hand side of ($\ref{eq:3.41}$) is always legitimate by virtue of the expression of $m$, because $(l-m)$ never vanishes. Note that, when Eq. ($\ref{eq:3.31}$) is hyperbolic, the fundamental solution $R$ becomes infinite along a two-sheeted conoid $\Gamma=0$ separating $n$-dimensional space into three parts. This conoid is indeed a characteristic surface for the second-order equation ($\ref{eq:3.31}$), since ($\ref{eq:3.18}$) reduces on the level surface $\Gamma=0$ to the first-order partial differential equation \begin{equation} \label{eq:3.42} \sum\limits_{i,j=1}^n a^{ij} \frac{\partial \Gamma}{\partial x^i} \frac{\partial \Gamma}{\partial x^j} = 0 \end{equation} for such a characteristic. The basic property involved is that any locus of singularities of a solution of a linear hyperbolic equation can be expected to form a characteristic surface \cite{garabedian1998partial}. The geodesics that lie on the conoid $\Gamma=0$ are the bicharacteristics of the original equation ($\ref{eq:3.31}$). We have found that, along the characteristic conoid $\Gamma=0$, the ordinary differential operators occurring on the left in the transport equations ($\ref{eq:3.40}$) and ($\ref{eq:3.41}$) apply in the directions of the bicharacteristics. 
This happens because, within any of its characteristic surfaces, Eq. ($\ref{eq:3.31}$) reduces to an ordinary differential equation imposed on the Cauchy data along each bicharacteristic \cite{garabedian1998partial}. To evaluate the functions $U_0$, $U_1$, ... it is convenient to work in a new space with coordinates $\theta_1$, ..., $\theta_n$ defined by \cite{ hadamard1952lectures, garabedian1998partial} \begin{equation} \label{eq:3.43} \theta_i=s p_i(0). \end{equation} It is possible to do so in a sufficiently small neighbourhood of the parameter point $\xi=(\xi^1, ..., \xi^n)$ because the relevant Jacobian does not vanish. In this new space the geodesics become rays emanating from the origin, while the parameter $s$ can be chosen to coincide with the distance from the origin along each such ray. Each coefficient $U_l$ in the expansion $U= \sum\limits_{l=0}^\infty U_l \Gamma^l$ can be written in the form of a series \begin{equation} \label{eq:3.44} U_l = \sum\limits_{j=0}^\infty P_{lj} \end{equation} of polynomials $P_{lj}$ homogeneous in the coordinates $ \theta_1, ..., \theta_n$ of degree equal to the index $j$. Note that the differential operator $s \frac{d}{ds}$ in Eqs. $(\ref{eq:3.40})$ and $(\ref{eq:3.41})$ does not alter the degree of any of the polynomials $P_{lj}$, with the exception that it reduces a polynomial of degree zero, i.e. a constant, to zero. Thus, unless the coefficient $(D-m-1)$ vanishes for $\theta_1 = ...= \theta_n = 0$, there does not exist a solution $U_0$ of Eq. $(\ref{eq:3.40})$ satisfying the requirement $P_{00} \neq 0$. However, we have chosen the exponent as in $(\ref{eq:3.33})$ precisely so that this will be the case, because our $D= \frac{n}{2}$ at the parameter point $x= \xi$. Thus, we can integrate Eq. $(\ref{eq:3.40})$ to find \begin{equation} \label{eq:3.45} U_0 = P_{00} e^{- \int\limits_0^s (D-m-1) \frac{ d \tau}{\tau}}, \end{equation} where $P_{00}$ is a constant as a function of $x$ that might vary with $\xi$. Similarly, Eq.
$(\ref{eq:3.41})$ may be solved by the recursion formula \begin{equation} \label{eq:3.46} U_l = - \frac{U_0}{4(l-m)s^l} \int\limits_{0}^s \frac{\mathcal{F}[U_{l-1}]\tau^{l-1}}{U_0} d\tau, \hspace{1cm} l \geq 1. \end{equation} The linear operator on the right turns any convergent series $U_{l-1}$ of the type $(\ref{eq:3.44})$ into another series of the same kind for $U_l$. At this stage, one has still to prove uniform convergence of the expansion of $U$ in powers of $\Gamma$, for sufficiently small values of $s$. This can be obtained by using the method of majorants. For the purpose of proving convergence, it is sufficient to treat only the particular case $U_0= {\rm constant}$, because the substitution $u_1\equiv \frac{u}{U_0},$ with $U_0$ given by $(\ref{eq:3.45})$, reduces $(\ref{eq:3.31})$ to a new partial differential equation reading as \begin{equation} \label{eq:3.47} \mathcal{F}_1[u_1] = \mathcal{F}[U_0u_1] =0, \end{equation} for which such an assumption is verified. Let $K$ and $\epsilon$ be positive numbers such that the geometric series \begin{equation} \label{eq:3.48} \sum\limits_{j=0}^\infty \frac{K}{\epsilon^j} \bigg{(} |\theta_1| + ... + |\theta_n| \bigg{)}^j = \frac{ K \epsilon}{ \epsilon - |\theta_1 | - ... - | \theta_n |} \end{equation} is a majorant for the Taylor expansions in powers of $\theta_1$, ..., $\theta_n$ of all the coefficients of $\mathcal{F}$, which is now a differential operator expressed in these new coordinates. Hence one finds that, if \begin{equation} \label{eq:3.49} M \{U_l \} = \frac{ M_l}{ \bigg{(} 1 - \frac{|\theta_1| + ... + |\theta_n|}{\epsilon} \bigg{)}^{2l}} \end{equation} denotes a majorant for $U_l$, with $M_l$ taken as a suitably large constant, then \begin{equation} \label{eq:3.50} M \{ \mathcal{F} [U_l] \} = \frac{ 2l(2l +1) \bigg{[} 1 + \frac{n}{\epsilon} + \frac{n^2}{\epsilon^2} \bigg{]} K M_l}{ \bigg{(} 1- \frac{ |\theta_1| + ... 
+ |\theta_n |}{\epsilon} \bigg{)}^{2l + 3}} \end{equation} is a majorant for $\mathcal{F}[U_l]$. We now apply the recursion formula $(\ref{eq:3.46})$ to $(\ref{eq:3.50})$ in order to establish that when $l$ is replaced by $(l+1)$, and with \begin{equation} \label{eq:3.51} M_{l+1} = \frac{ l(2l+1)}{2(l+1)(l-m+1)} \bigg{[} 1 + \frac{n}{\epsilon} + \frac{n^2}{\epsilon^2} \bigg{]} KM_l \end{equation} the rule $(\ref{eq:3.49})$ also defines a majorant for $U_{l+1}$. Since we have recognized that it is enough to consider the case $U_0= {\rm constant}$, the proof that we are interested in reduces to a verification that \begin{equation} \label{eq:3.52} M \Biggl\{ s^{-l-1} \int\limits_{0}^s \frac{ \tau^l}{(1- \gamma \tau )^{2l+3} } d \tau \Biggr\} = \frac{1}{(l+1)} (1- \gamma s)^{-2l-2} \end{equation} is a majorant for the integral inside curly brackets on the left. This can be proved with the help of the convenient choice \begin{equation} \label{eq:3.53} M \Biggl\{ \frac{s^l}{(1- \gamma s)^{2l+3}} \Biggr\} = [ 1+ \gamma s ] \frac{ s^l}{(1- \gamma s)^{2l + 3}} = \frac{1}{(l+1)} \frac{d}{ds} \frac{ s^{l+1}}{(1- \gamma s)^{2l+2}} \end{equation} of a majorant for the integrand. With this notation, see Garabedian \cite{garabedian1998partial}, $\gamma$ is specified by \begin{equation} \label{eq:3.54} |\theta_1| + ... + |\theta_n| = \epsilon \gamma s. \end{equation} By induction, we conclude that the majorants $(\ref{eq:3.49})$ are valid for all $l \geq 1$, provided that $M_1$ is sufficiently large and that $M_2$, $M_3$, ... are given by $(\ref{eq:3.51})$. Thus, the series for $U$ in powers of the world function $\Gamma$ converges in a neighbourhood specified by the upper bound \begin{equation} \label{eq:3.55} | \Gamma | < \frac{ \bigg{(} 1 - \frac{ | \theta_1| + ... + |\theta_n| }{\epsilon} \bigg{)}^2 }{\bigg{[} 1 + \frac{n}{\epsilon} + \frac{ n^2}{\epsilon^2} \bigg{]} K } \end{equation} of the parameter point $\xi = (\xi^1, ..., \xi^n)$. 
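The closed form of the geometric series ($\ref{eq:3.48}$) is elementary but worth verifying, since all the majorant estimates above rest on it; the sample values of $K$, $\epsilon$ and of $x = |\theta_1| + ... + |\theta_n|$ in the sketch below are our own choices, with $x < \epsilon$:

```python
# Check of the majorant series (3.48): for x = |theta_1| + ... + |theta_n| < epsilon,
#   sum_j (K / epsilon^j) x^j = K * epsilon / (epsilon - x).
K, eps, x = 3.0, 2.0, 0.5   # assumed sample values, x < eps

partial = sum(K * (x / eps) ** j for j in range(200))  # truncated series
closed = K * eps / (eps - x)                           # closed form in (3.48)
gap = abs(partial - closed)
```

With ratio $x/\epsilon < 1$ the truncation error decays geometrically, which is what makes such series usable as majorants of the Taylor coefficients of $\mathcal{F}$.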
To sum up, we obtain $\textit{locally}$ a fundamental solution $R$ of Eq. $(\ref{eq:3.31})$ having the special form \begin{equation} \label{eq:3.56} R= \frac{U}{\Gamma^m} = \sum\limits_{l=0}^{\infty} U_l \Gamma^{l-m}, \end{equation} when the number $n$ of independent variables is odd. The addition of a regular term $W$ to the right-hand side of $(\ref{eq:3.56})$ is not mandatory. \subsection{Even Number of Variables and Logarithmic Term} When the number $n \geq 4$ of independent variables is even, the exponent $m$ defined in Eq. $(\ref{eq:3.33})$ is a positive integer and the previous construction of $U$ no longer holds, because the whole algorithm involves division by $(l-m)$, which vanishes when $l=m$. Only the functions $U_0$, $U_1$, ..., $U_{m-1}$ can then be obtained as previously seen. This is why a logarithmic term is needed in the formula $(\ref{eq:3.32})$ for the fundamental solution $R$ in a space with an even number of dimensions. Hence we look for $R$ in the form \begin{equation} \label{eq:3.57} R = \sum\limits_{l=0}^{m-1} U_{l}\Gamma^{l-m} + V \log (\Gamma) + W. \end{equation} If the formula $(\ref{eq:3.57})$ is inserted into the homogeneous equation $(\ref{eq:3.31})$, one finds \begin{equation} \begin{split} \label{eq:3.58} &\sum\limits_{l=0}^{m-1} \mathcal{F} [U_l\Gamma^{l-m}] + \mathcal{F}[V \log(\Gamma)] + \mathcal{F}[W] = \mathcal{F} [U_{m-1}] \frac{1}{\Gamma} - \frac{V}{\Gamma^2} \sum\limits_{i,j=1}^n a^{ij} \frac{ \partial \Gamma}{\partial x^i} \frac{\partial \Gamma}{\partial x^j} \\ & + \bigg{[} 2 \sum\limits_{i,j=1}^n a^{ij} \frac{ \partial V}{\partial x^i}\frac{\partial \Gamma}{\partial x^j} + 4DV \bigg{]} \frac{1}{\Gamma} + \mathcal{F} [V]\log(\Gamma) + \mathcal{F} [W] =0, \end{split}\end{equation} by virtue of equation $(\ref{eq:3.38})$ and of the transport equations $(\ref{eq:3.40})$ and $(\ref{eq:3.41})$. In Eq. $(\ref{eq:3.58})$ the term which is non-linear in the derivatives of $\Gamma$ is re-expressed from Eq.
$(\ref{eq:3.18})$ (with $z^i$ therein written as $x^i$), and we arrive at \begin{equation} \label{eq:3.59} \Biggl\{ \mathcal{F}[U_{m-1}] + 4 \bigg{[} s \frac{dV}{ds} + (D-1) V \bigg{]} \Biggr\} \frac{1}{\Gamma} + \mathcal{F}[V] \log(\Gamma) + \mathcal{F}[W] =0. \end{equation} We are now going to prove that this equation determines $V$ uniquely, whereas $W$ can be selected in a number of ways, in order to satisfy the requirements imposed on it. We note that the $\log(\Gamma)$ term is not balanced by other terms in Eq. $(\ref{eq:3.59})$, hence the function $V$ must solve the homogeneous equation \begin{equation} \label{eq:3.60} \mathcal{F}[V] =0. \end{equation} Moreover, the coefficient of $\frac{1}{\Gamma}$ in Eq. $(\ref{eq:3.59})$ must vanish along the whole characteristic conoid $\Gamma=0$, since the remaining regular term $\mathcal{F}[W]$ cannot balance the effect of $\frac{1}{\Gamma}$. Thus, the function $V$ has to solve also the ordinary differential equation \begin{equation} \label{eq:3.61} \bigg{[} s \frac{d}{ds} + (D-1) \bigg{]} V = - \frac{1}{4} \mathcal{F} [U_{m-1}] \end{equation} on each bicharacteristic that generates the conoid. From our study of the transport equations $(\ref{eq:3.40})$ and $(\ref{eq:3.41})$ we know that Eq. $(\ref{eq:3.61})$ determines the function $V$ uniquely on the characteristic surface $\Gamma=0$, and that $V$ must indeed coincide there with the function $V_0$ defined in a neighbourhood of the parameter point $\xi$ by the integral \begin{equation} \label{eq:3.62} V_0 = - \frac{U_0}{4s^m} \int\limits_0^s \frac{ \mathcal{F}[U_{m-1}]\tau^{m-1}}{U_0} d\tau. \end{equation} We have therefore formulated a characteristic initial-value problem for the partial differential equation $(\ref{eq:3.60})$, in which the unknown function $V$ is prescribed on the conoid $\Gamma=0$. This result agrees with our previous findings, according to which the coefficient of the logarithm is a Riemann kernel satisfying a characteristic initial-value problem.
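Transport equations of the type $(\ref{eq:3.40})$ and $(\ref{eq:3.61})$ are ordinary differential equations along each generator of the conoid, and the division by $s$ is harmless precisely because the coefficient multiplying the unknown vanishes at the vertex. A toy sketch of ours (homogeneous right-hand side, and the profile of the coefficient taken as $s$ by assumption) compares direct numerical integration with the closed form supplied by the integrating factor, as in $(\ref{eq:3.45})$:

```python
import math

# Toy transport equation along a generator (cf. (3.40), with the assumed
# coefficient profile D - m - 1 = s):
#   s dU/ds + s U = 0,   i.e.  dU/ds = -U,
# whose integrating-factor solution (cf. (3.45)) is
#   U(s) = P00 * exp(- int_0^s tau dtau/tau) = P00 * exp(-s).
P00 = 2.0

def midpoint_step(u, h):
    # Explicit midpoint rule for dU/ds = -U.
    k1 = -u
    k2 = -(u + 0.5 * h * k1)
    return u + h * k2

u, h = P00, 1e-3
for _ in range(1000):   # integrate from s = 0 to s = 1
    u = midpoint_step(u, h)
err = abs(u - P00 * math.exp(-1.0))   # compare with the closed form at s = 1
```

Note that the coefficient vanishing at $s=0$ is what makes $U$ regular there; a coefficient with a nonzero limit of the wrong sign would force the singular behaviour seen in $(\ref{eq:3.30})$.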
From another point of view \cite{garabedian1998partial}, one can think of Eq. $(\ref{eq:3.62})$ as a substitute for the recursion formula $(\ref{eq:3.46})$ in the case $l=m$. This property suggests trying to find $V$ as a convergent power series \begin{equation} \label{eq:3.63} V= \sum\limits_{l=0}^\infty V_l\Gamma^l. \end{equation} Insertion of Eq. $(\ref{eq:3.63})$ into Eq. $(\ref{eq:3.60})$ leads to infinitely many powers of $\Gamma$ whose coefficients should all be set to zero. We do so, and integrate the resulting ordinary differential equations, finding therefore \begin{equation} \label{eq:3.64} V_l =- \frac{U_0}{4ls^{l+m}} \int\limits_0^s \frac{ \mathcal{F}[V_{l-1}] \tau^{l+m-1}}{U_0} d\tau, \hspace{1cm} \forall l \geq 1. \end{equation} The first term $V_0$ is instead obtained from Eq. $(\ref{eq:3.62})$. The method of majorants can be used to deduce estimates like $(\ref{eq:3.49})$ for the functions $V_l$ provided by $(\ref{eq:3.64})$. Thus, the series $(\ref{eq:3.63})$ converges uniformly in a region like $(\ref{eq:3.54})$ surrounding the parameter point $\xi$. Another consequence of this method is that $(\ref{eq:3.59})$ becomes a partial differential equation for $W$ with an inhomogeneous term that is regular in the neighbourhood of $\xi$. This determines $W$ only up to the addition of an arbitrary solution of the homogeneous equation $(\ref{eq:3.31})$. A particular choice for $W$ that agrees with the method used so far demands that \begin{equation} \label{eq:3.65} W = \sum\limits_{l=1}^\infty W_l \Gamma^l, \end{equation} where the coefficient functions $W_1$, $W_2$, ... will be found from a recursive scheme like $(\ref{eq:3.64})$. By requiring that the series for $W$ should not include the term $W_0$ corresponding to the value $l=0$, one obtains a unique determination of the fundamental solution $(\ref{eq:3.57})$ such that the functions $U_0$, ..., $U_{m-1}$, $V$ and $W$ are all regular as functions of the parameter point $\xi$.
The limitation of the Hadamard approach described so far is that it yields the fundamental solution only locally, i.e. in a sufficiently small neighbourhood of $\xi$. Furthermore, when the inverse metric components $a^{ij}$ are varying, also the world function $\Gamma$ is defined only in the small. \subsection{Example of Fundamental Solution: Scalar Wave Equation with Smooth Initial Conditions} Following Sobolev \cite{sobolev1963applications}, we study the wave operator \begin{equation} \label{eq:3.66} \Box u = \bigg{(} \Delta - \frac{\partial^2 }{\partial t^2} \bigg{)} u \end{equation} on the domain $\Omega$ of the $(n+1)$-dimensional space of coordinates $x_1$, $x_2$, ..., $x_n$, $t$ bounded by a smooth surface $S$. Let $u(x_1, x_2, ..., x_n, t)$ and $v(x_1, x_2, ..., x_n, t)$ be twice differentiable in $\Omega$ with all their first derivatives continuous on the surface $S$. Thus, we consider \begin{equation} \label{eq:3.67} \Box u = f; \hspace{3cm} \Box v = \varphi \end{equation} and the integral \begin{equation}\begin{split} \label{eq:3.68} J = & \underbrace{ \int \dots \int }_S \Biggl\{ - \sum\limits_{i=1}^n \bigg{(} \frac{\partial u}{\partial x_i}\frac{\partial v}{\partial t} + \frac{\partial v}{\partial x_i}\frac{\partial u}{\partial t} \bigg{)} \cos( \vec{n} x_i )\\ & + \bigg{(} \frac{\partial u}{\partial t}\frac{\partial v}{\partial t} + \sum\limits_{i=1}^n \frac{\partial u}{\partial x_i}\frac{\partial v}{\partial x_i}\bigg{)} \cos (\vec{n}t)\Bigg\} dS, \end{split} \end{equation} where $\vec{n}$ is the inward-pointing normal to $S$.
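The surface integral $J$ will next be converted into a volume integral; the conversion rests on a pointwise divergence identity, which can be checked symbolically. The sketch below (an illustration in two space dimensions, with arbitrary smooth $u$ and $v$) confirms it with sympy:

```python
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2')
u = sp.Function('u')(x1, x2, t)
v = sp.Function('v')(x1, x2, t)
xs = (x1, x2)

# divergence form appearing under the volume integral for J
div_form = sp.diff(u.diff(t)*v.diff(t) + sum(u.diff(x)*v.diff(x) for x in xs), t) \
    - sum(sp.diff(u.diff(x)*v.diff(t) + v.diff(x)*u.diff(t), x) for x in xs)

# it equals u_t (v_tt - Δv) + v_t (u_tt - Δu), i.e. -(u_t □v + v_t □u)
target = u.diff(t)*(v.diff(t, 2) - sum(v.diff(x, 2) for x in xs)) \
    + v.diff(t)*(u.diff(t, 2) - sum(u.diff(x, 2) for x in xs))

assert sp.expand(div_form - target) == 0
```

All mixed second derivatives cancel pairwise, which is exactly why only the d'Alembertians of $u$ and $v$ survive in the volume integral.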
A simple transformation leads to \begin{equation} \begin{split} \label{eq:3.69}& J= - \underbrace{ \int \dots \int }_\Omega \Biggl\{ \frac{\partial}{\partial t} \bigg{(} \frac{\partial u}{\partial t} \frac{\partial v}{\partial t} + \sum\limits_{i=1}^n \frac{\partial u}{\partial x_i} \frac{\partial v}{\partial x_i} \bigg{)} - \sum\limits_{i=1}^n \frac{\partial}{\partial x_i} \bigg{(} \frac{\partial u}{\partial x_i} \frac{\partial v}{\partial t} + \frac{\partial v}{\partial x_i}\frac{\partial u}{\partial t} \bigg{)} \Biggr\} d \Omega \\ & = - \underbrace{ \int \dots \int }_\Omega \Biggl\{ \frac{\partial u}{\partial t} \bigg{[} \frac{\partial^2 v}{\partial t^2} - \sum\limits_{i=1}^n \frac{\partial^2 v}{\partial {x_i}^2} \bigg{]} + \frac{\partial v}{\partial t} \bigg{[} \frac{\partial^2 u}{\partial t^2} - \sum\limits_{i=1}^n \frac{\partial^2 u}{\partial {x_i}^2} \bigg{]} \Biggr\} d \Omega \\ &= \underbrace{ \int \dots \int }_\Omega \Biggl\{ \frac{\partial u}{\partial t}\Box v + \frac{\partial v}{\partial t} \Box u \Biggr\} d\Omega. \end{split}\end{equation} Replacing $\Box u$ and $\Box v$ with their values, we have \begin{equation} \label{eq:3.70} J = \underbrace{ \int \dots \int }_\Omega \bigg{(} \frac{\partial u}{\partial t} \varphi + \frac{\partial v}{\partial t} f \bigg{)} d \Omega.
\end{equation} Let us now consider the expression \begin{equation} \begin{split} \label{eq:3.71} & \sum\limits_{i=1}^n \bigg{(} \frac{\partial u}{\partial x_i} \cos( \vec{n} t) - \frac{\partial u}{\partial t} \cos(\vec{n} x_i) \bigg{)} \bigg{(} \frac{\partial v}{\partial x_i} \cos( \vec{n}t) - \frac{\partial v}{\partial t} \cos( \vec{n}x_i) \bigg{)}\\ & = \frac{\partial u}{\partial t} \frac{\partial v}{\partial t} \sum\limits_{i=1}^n (\cos(\vec{n} x_i))^2 + (\cos(\vec{n} t))^2\sum\limits_{i=1}^n \frac{\partial u}{\partial x_i}\frac{\partial v}{\partial x_i} \\ & - \sum\limits_{i=1}^n \bigg{(} \frac{\partial u}{\partial t} \frac{\partial v}{\partial x_i} + \frac{\partial v}{\partial t} \frac{\partial u}{\partial x_i} \bigg{)} \cos(\vec{n} x_i) \cos(\vec{n}t) = \cos(\vec{n}t) \bigg{[} \frac{\partial u}{\partial t}\frac{\partial v}{\partial t} \cos(\vec{n}t) \\ & + \sum\limits_{i=1}^n \frac{\partial u}{\partial x_i}\frac{\partial v}{\partial x_i} \cos(\vec{n} t) - \sum\limits_{i=1}^n \bigg{(} \frac{\partial u}{\partial t}\frac{\partial v}{\partial x_i} + \frac{\partial v}{\partial t} \frac{\partial u}{\partial x_i} \bigg{)} \cos(\vec{n} x_i) \bigg{]} \\ & + \frac{\partial u}{\partial t} \frac{\partial v}{\partial t} \Biggl\{ \sum\limits_{i=1}^n (\cos(\vec{n}x_i))^2 - (\cos(\vec{n}t))^2 \Biggr\}. \end{split}\end{equation} Everywhere, except at the points of the surface $S$ where $\cos(\vec{n}t)=0$, we have \begin{equation} \begin{split} \label{eq:3.72} & J= \underbrace{ \int \dots \int }_S \bigg{[} \frac{1}{\cos(\vec{n} t)} \sum\limits_{i=1}^n \bigg{(} \frac{\partial u}{\partial x_i} \cos( \vec{n} t) - \frac{\partial u}{\partial t} \cos(\vec{n} x_i) \bigg{)} \\ & \times \bigg{(} \frac{\partial v}{\partial x_i} \cos(\vec{n}t) - \frac{\partial v}{\partial t} \cos(\vec{n}x_i) \bigg{)} - \frac{\partial u}{\partial t} \frac{\partial v}{\partial t} \frac{1 - 2(\cos(\vec{n}t))^2 }{\cos(\vec{n}t)} \bigg{]} dS \\ &= \underbrace{ \int \dots \int }_S \Phi dS, \end{split}\end{equation} where $\Phi$ is
the integrand of $J$. If $u=v$, then at all the points of $S$ where $|\cos(\vec{n}t) | \geq \frac{1}{\sqrt{2}}$, the above expression has the same sign as $\cos(\vec{n}t)$: \begin{equation} \label{eq:3.73} {\rm sign}\, \Phi = {\rm sign} (\cos(\vec{n}t)). \end{equation} If instead $\cos(\vec{n}t) =0$, $\Phi$ becomes \begin{equation} \label{eq:3.74} \Phi = - \bigg{(} \frac{\partial u}{\partial t} \frac{\partial v}{\partial \vec{n}} + \frac{\partial v}{\partial t}\frac{\partial u}{\partial \vec{n}} \bigg{)}. \end{equation} Now, let us suppose that $u$ is a solution of the wave equation \begin{equation} \label{eq:3.75} \Box u =0 \end{equation} in an infinite homogeneous medium. We take $u=v$ and make use of Eq. ($\ref{eq:3.69}$), where $\Omega$ is the truncated cone whose generators form an angle of $\frac{\pi}{4}$ with the axis $0t$ (as shown in fig. ($\ref{fig:4}$)). Thus \begin{equation} \label{eq:3.76} \cos(\vec{n} t) = \left\{\begin{array} {l} - \frac{1}{\sqrt{2}} \; \; {\rm on} \; S_1 \\ - 1 \; \; \; {\rm on} \; S_2 \\ + 1 \; \; \; {\rm on} \; S_3 \end{array}\right. \end{equation} \begin{figure} \centering \includegraphics{fig4mod.png} \caption{The truncated cone, with lateral surface $S_1$, upper base $S_2$ and lower base $S_3$.}\label{fig:4} \end{figure} where $S_3$ is the lower base, $S_2$ is the upper base and $S_1$ is the lateral surface of the truncated cone. Let us suppose that $t$ on $S_2$ is equal to $t_0$. Making use of Eq.
($\ref{eq:3.69}$) and $\Box u =0$, we obtain \begin{equation} \begin{split} \label{eq:3.77} J(u,u) =& \underbrace{ \int \dots \int }_{S_1} \Phi dS + \underbrace{ \int \dots \int }_{S_2} \Phi dS + \underbrace{ \int \dots \int }_{S_3} \Biggl\{ - 2 \sum\limits_{i=1}^n \frac{\partial u}{\partial x_i} \frac{\partial u}{\partial t} \cos(\vec{n} x_i) \\ & + \bigg{[} \bigg{(} \frac{\partial u}{\partial t} \bigg{)}^2 + \sum\limits_{i=1}^n \bigg{(} \frac{\partial u}{\partial x_i} \bigg{)}^2 \bigg{]} \cos( \vec{n} t) \Biggr\} dS =0. \end{split} \end{equation} Recalling ($\ref{eq:3.73}$), we have \begin{equation} \label{eq:3.78} \underbrace{ \int \dots \int }_{S_1} \Phi dS < 0. \end{equation} Therefore, Eq. ($\ref{eq:3.77}$) yields \begin{equation} \label{eq:3.79} - \underbrace{ \int \dots \int }_{S_2} \Phi dS = \underbrace{ \int \dots \int }_{S_3} \Phi dS + \underbrace{ \int \dots \int }_{S_1} \Phi dS < \underbrace{ \int \dots \int }_{S_3} \Phi dS. \end{equation} Since on $S_2$ and $S_3$ we have $\cos(\vec{n} x_i)=0$, this inequality reduces to \begin{equation} \label{eq:3.80} \underbrace{ \int \dots \int }_{S_2} \bigg{[} \sum\limits_{i=1}^n \bigg{(} \frac{\partial u}{\partial x_i} \bigg{)}^2 + \bigg{(} \frac{\partial u}{\partial t} \bigg{)}^2 \bigg{]} dS \leq \underbrace{ \int \dots \int }_{S_3} \bigg{[} \sum\limits_{i=1}^n \bigg{(} \frac{\partial u}{\partial x_i} \bigg{)}^2 + \bigg{(} \frac{\partial u}{\partial t} \bigg{)}^2 \bigg{]} dS. \end{equation} We shall also need to estimate \begin{equation} \label{eq:3.81} \underbrace{ \int \dots \int }_{S_2} u^2 dS. \end{equation} From Eq. ($\ref{eq:3.80}$) it follows that $\underbrace{ \int \dots \int }_{S_2} \big{(} \frac{\partial u}{\partial t} \big{)}^2 dS$ is bounded.
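For intuition, the energy inequality ($\ref{eq:3.80}$) can be checked numerically in one space dimension with a hypothetical example: the travelling wave $u = \sin(x-t)$ solves $u_{tt} = u_{xx}$, and its energy over the shrinking section $\Sigma_t = [a+t,\, b-t]$ of the cone never exceeds the energy over the base $[a,\,b]$ at $t=0$:

```python
import numpy as np

u_x = lambda x, t: np.cos(x - t)    # ∂u/∂x for u = sin(x - t), a solution of u_tt = u_xx
u_t = lambda x, t: -np.cos(x - t)   # ∂u/∂t

def energy(t, a=0.0, b=3.0, n=200000):
    # midpoint-rule integral of u_x^2 + u_t^2 over Σ_t = [a + t, b - t]
    h = ((b - t) - (a + t)) / n
    x = a + t + h * (np.arange(n) + 0.5)
    return np.sum((u_x(x, t)**2 + u_t(x, t)**2) * h)

E0 = energy(0.0)
for t in (0.3, 0.6, 0.9, 1.2):
    assert energy(t) <= E0 + 1e-9   # energy on Σ_t bounded by energy on the base S_3
```

The inequality is strict here because energy flows out through the lateral surface $S_1$ as the section shrinks.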
If we denote by $y(t)$ the quantity \begin{equation} \label{eq:3.82} y(t) = \underbrace{ \int \dots \int }_{\Sigma_t} u^2 dS, \end{equation} where $\Sigma_t$ is the surface on which the coordinates $x_1$, $x_2$, ..., $x_n$ assume the same values as on $S_2$, whereas $t$ goes from 0 to $t_0$, we have \begin{equation} \label{eq:3.83} y'(t) = 2 \underbrace{ \int \dots \int }_{\Sigma_t} u(t) \frac{\partial u}{\partial t} dS. \end{equation} Making use of the Cauchy--Bunyakovsky inequality, it follows that \begin{equation} \label{eq:3.84} | y'(t)| \leq 2 \bigg{[} \underbrace{ \int \dots \int }_{\Sigma_t} \bigg{(} \frac{\partial u}{\partial t} \bigg{)}^2 dS \bigg{]}^{\frac{1}{2}} \bigg{[} \underbrace{ \int \dots \int }_{\Sigma_t} u^2(t) dS \bigg{]}^{\frac{1}{2}}, \end{equation} and by virtue of the inequality ($\ref{eq:3.80}$), we have \begin{equation} \label{eq:3.85} |y'(t)| \leq 2 A (y(t))^{\frac{1}{2}}, \end{equation} where $A$ is given by \begin{equation} \label{eq:3.86} A = \bigg{[} \underbrace{ \int \dots \int }_{S_3} \Biggl\{ \sum\limits_{i=1}^n \bigg{(} \frac{\partial u}{\partial x_i} \bigg{)}^2 + \bigg{(} \frac{\partial u}{\partial t} \bigg{)}^2 \Biggr\} dS \bigg{]}^{\frac{1}{2}}. \end{equation} The inequality ($\ref{eq:3.85}$) implies that \begin{equation} \label{eq:3.87} \frac{1}{2} \frac{dy}{\sqrt{y}} \leq A dt \; \rightarrow \; \frac{d}{dt} \sqrt{y} \leq A. \end{equation} Similarly we obtain $\sqrt{y(t)} \leq \sqrt{y_0} + At$, from which eventually we have \begin{equation} \label{eq:3.88} y \leq y_0 + 2 A \sqrt{y_0} t + A^2 t^2; \end{equation} if we set $ \underbrace{ \int \dots \int }_{S_3} u^2 dS = B^2 $ it follows that $y_0 \leq B^2$ and then \begin{equation} \label{eq:3.89} y \leq (B + At)^2.
\end{equation} Now we intersect our truncated cone with the plane $t= {\rm const.}$, denote by $\Sigma_t$ the $n$-dimensional space domain resulting from that intersection, and apply the same procedure as before, obtaining \begin{equation}\begin{split} \label{eq:3.90} & \overbrace{\underbrace{ \int \dots \int }_{\Sigma_t}}^n \Biggl\{ \sum\limits_{i=1}^n \bigg{(} \frac{\partial u}{\partial x_i} \bigg{)}^2 + \bigg{(} \frac{\partial u}{\partial t} \bigg{)}^2 \Biggr\} dS \\ & \leq \underbrace{ \int \dots \int }_{S_3} \Biggl\{ \sum\limits_{i=1}^n \bigg{(} \frac{\partial u}{\partial x_i} \bigg{)}^2 + \bigg{(} \frac{\partial u}{\partial t} \bigg{)}^2 \Biggr\}dS. \end{split} \end{equation} If we integrate Eq. ($\ref{eq:3.90}$) over $t$ from 0 to $t_0$, it reads as \begin{equation} \begin{split} \label{eq:3.91} & \overbrace{\underbrace{ \int \dots \int }_{V}}^{n+1} \Biggl\{ \sum\limits_{i=1}^n \bigg{(} \frac{\partial u}{\partial x_i} \bigg{)}^2 + \bigg{(} \frac{\partial u}{\partial t} \bigg{)}^2 \Biggr\}dV \leq t_0 \underbrace{ \int \dots \int }_{S_3} \Biggl\{ \sum\limits_{i=1}^n \bigg{(} \frac{\partial u}{\partial x_i} \bigg{)}^2 \\ &+ \bigg{(} \frac{\partial u}{\partial t} \bigg{)}^2 \Biggr\}dS \leq A^2 t_0, \end{split} \end{equation} where $V$ is the truncated cone. In the same manner, making use of \begin{equation} \label{eq:3.92} y = \underbrace{ \int \dots \int }_{\Sigma_t}u^2 dS \leq (B+ A t)^2 \end{equation} and by integration over $t$ from 0 to $t_0$, we have \begin{equation} \label{eq:3.93} \overbrace{ \underbrace{ \int \dots \int }_{V}}^{n+1} u^2 dV \leq \frac{1}{3A} [ (B+ At_0)^3 - B^3 ] = B^2 t_0 + AB {t_0}^2+ \frac{1}{3} A^2 {t_0}^3. \end{equation} The inequality ($\ref{eq:3.92}$) has two important corollaries. \begin{cor}$\\$ Suppose that the initial values of $u$ and $\frac{\partial u}{\partial t}$ vanish on $S_3$. This implies that $A=B=0$ and, as a consequence of ($\ref{eq:3.92}$), we have $y=0$, i.e. $u=0$ in $V$.
Hence, if on the base of the truncated cone $u=0$ and $\frac{\partial u}{\partial t}=0$, then $u=0$ inside this cone. \end{cor} \begin{cor} $\\$ The value of the function $u$, a solution of the given equation, at a point ${x_1}^0$, ..., ${x_n}^0$, $t^0$, is determined by the initial values of $u$ and $\frac{\partial u}{\partial t}$ on the ball $\eta = \bigg{(} \sum\limits_{i=1}^n (x_i - {x_i}^0)^2 \bigg{)}^{\frac{1}{2}} \leq t^0$, that is, the intersection with the plane $t=0$ of the characteristic cone with vertex at that point. \end{cor} In fact, if the initial data of $u$ and $\frac{\partial u}{\partial t}$ for two solutions of the wave equation coincide on this domain, the data of their difference vanish there, and by the first corollary the difference vanishes at the vertex of the cone. Thus, at the vertex, the two solutions coincide. \begin{thm}$\\$ \label{thm:2} Let $u$ be a solution of the homogeneous wave equation. If the initial values $u|_{t=0}$ and ${\frac{\partial u}{\partial t}}|_{t=0}$ are infinitely differentiable on the whole space of the $x_1$, .., $x_n$, then the function $u$ itself has continuous derivatives of every order. \end{thm} To begin with, we will establish this theorem in a more general situation. Hence we will prove the following lemma. \begin{lem} $\\$ Let $u$ be a function that satisfies the equation \begin{equation*} \Delta u - \frac{\partial^2 u}{\partial t^2} = 0 \end{equation*} on the domain $-a \leq x_i \leq a$, $i=1, 2, ..., n$, where $a$ is a constant, and let us suppose that \begin{equation} \label{eq:3.94} u|_{x_i = \pm a} =0, \end{equation} i.e.
$u$ vanishes on the boundary of this domain, and that, at $t=0$, we have \begin{equation} \label{eq:3.95} u|_{t=0} = \varphi_0 \hspace{0.5cm} {\rm and} \hspace{0.5cm} {\frac{\partial u}{\partial t}}\bigg{|}_{t=0} = \varphi_1, \end{equation} where the functions $\varphi_0$ and $\varphi_1$ have continuous derivatives up to every order and, together with their derivatives, vanish on the boundary of this domain. Then $u$ has continuous derivatives up to every order. \end{lem} \proof In this case, the solution can be written in explicit form by making use of Fourier series. Thus, we expand $\varphi_0$ and $\varphi_1$ in Fourier series: \begin{equation} \label{eq:3.96} \varphi_0 = \sum\limits_{j_k =1}^\infty b_{j_1, j_2, ..., j_n} \sin \bigg{(} j_1 \frac{(x_1 + a) \pi}{2a} \bigg{)} \dots \; \sin \bigg{(} j_n \frac{ (x_n + a) \pi}{2a} \bigg{)}, \end{equation} \begin{equation} \label{eq:3.97} \varphi_1 = \sum\limits_{j_k =1}^\infty g_{j_1, j_2, ..., j_n} \sin \bigg{(} j_1 \frac{(x_1 + a) \pi}{2a} \bigg{)} \dots \; \sin \bigg{(} j_n \frac{ (x_n + a) \pi}{2a} \bigg{)}. \end{equation} The functions $\varphi_0$ and $\varphi_1$ are continuous together with their derivatives, and they can be extended periodically to the whole space preserving the continuity of all their derivatives. It follows that the Fourier series of all these functions converge uniformly with all their derivatives of arbitrary order. Consider now the partial sums of these series, which read \begin{equation} \label{eq:3.98} { \varphi_0}^{(N)} = \sum\limits_{j_k =1}^N b_{j_1, j_2, ..., j_n} \sin \bigg{(} j_1 \frac{(x_1 + a) \pi}{2a} \bigg{)} \dots \; \sin \bigg{(} j_n \frac{ (x_n + a) \pi}{2a} \bigg{)}, \end{equation} \begin{equation} \label{eq:3.99} { \varphi_1}^{(N)} = \sum\limits_{j_k =1}^N g_{j_1, j_2, ..., j_n} \sin \bigg{(} j_1 \frac{(x_1 + a) \pi}{2a} \bigg{)} \dots \; \sin \bigg{(} j_n \frac{ (x_n + a) \pi}{2a} \bigg{)}.
\end{equation} If we replace in the initial data $\varphi_0$ and $\varphi_1$ with ${\varphi_0}^{(N)}$ and ${\varphi_1}^{(N)}$, we obtain as a solution of the wave equation with these initial data the function \begin{equation} \label{eq:3.100} \begin{split} &u^{(N)} = \sum\limits_{j_k =1}^N \Biggl\{ b_{j_1, j_2, ..., j_n} \cos \big{(} \omega_{j_1 \dots j_n} t \big{)} + \frac{g_{j_1, j_2, ..., j_n}}{\omega_{j_1 \dots j_n}} \sin \big{(} \omega_{j_1 \dots j_n} t \big{)} \Biggr\} \\ & \times \sin \bigg{(} j_1 \frac{(x_1 + a) \pi}{2a} \bigg{)} \dots \; \sin \bigg{(} j_n \frac{ (x_n + a) \pi}{2a} \bigg{)}, \hspace{0.5cm} \omega_{j_1 \dots j_n} \equiv \frac{\pi}{2a} \sqrt{ {j_1}^2 + {j_2}^2 + ... + {j_n}^2 }. \end{split} \end{equation} This solution is infinitely differentiable. We have to show that, as $N$ increases, $u^{(N)}$ converges, in every Sobolev space ${W_2}^{(l)}$ with $l$ arbitrary, to some function $u$ (see Appendix A). It follows from this that the limit function $u$ is a solution of the wave equation that satisfies the initial conditions $u|_{t=0} = \varphi_0$ and ${\frac{\partial u}{\partial t}}|_{t=0} =\varphi_1 $ and is infinitely differentiable. Since this solution is unique, the lemma is proved. It remains to prove the convergence of $u^{(N)}$. Let us apply ($\ref{eq:3.85}$) to the parallelepiped of fig. ($\ref{fig:5}$), whose base $S_2$ is the domain $\Omega: |x_i| \leq a$ on the plane $t=0$, and whose upper base $S_3$ lies on the plane $t=t_0$.
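Each term of the partial sum $u^{(N)}$ separates variables. As a sanity check (a sketch for one space dimension, with $\omega = j\pi/(2a)$ the frequency of the $j$-th mode), sympy verifies that such a mode solves the wave equation, matches the initial data, and vanishes at $x = \pm a$:

```python
import sympy as sp

x, t, a = sp.symbols('x t a', positive=True)
b, g = sp.symbols('b g')        # Fourier coefficients of phi_0 and phi_1
j = sp.Integer(3)               # sample mode number
omega = j * sp.pi / (2 * a)     # frequency of the j-th mode

mode = sp.sin(j * (x + a) * sp.pi / (2 * a))
u = (b * sp.cos(omega * t) + (g / omega) * sp.sin(omega * t)) * mode

assert sp.simplify(sp.diff(u, x, 2) - sp.diff(u, t, 2)) == 0   # wave equation
assert sp.simplify(u.subs(t, 0) - b * mode) == 0               # u|_{t=0} = phi_0 mode
assert sp.simplify(sp.diff(u, t).subs(t, 0) - g * mode) == 0   # u_t|_{t=0} = phi_1 mode
assert u.subs(x, a) == 0 and u.subs(x, -a) == 0                # boundary condition
```

Summing finitely many such modes reproduces $u^{(N)}$ and inherits all four properties by linearity.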
Since on the lateral surface $S_1$ of this domain $(|x_i|=a)$ we have $ u^{(N)}= \frac{\partial {u^{(N)}}}{\partial t} =0 $, then \begin{figure} \centering \includegraphics{fig5mod.png} \caption{The parallelepiped, with base $S_2$ on the plane $t=0$ and upper base $S_3$ on the plane $t=t_0$.}\label{fig:5} \end{figure} \begin{equation} \label{eq:301} \underbrace{\int \dots \int}_{S_1} \frac{\partial u^{(N)}}{\partial t}\frac{\partial u^{(N)}}{\partial n} dS_1=0, \end{equation} and hence \begin{equation} \begin{split} & \label{eq:3.102} \underbrace{\int \dots \int}_{S_2} \bigg{[} \sum\limits_{i=1}^n \bigg{(} \frac{\partial u^{(N)}}{\partial x_i} \bigg{)}^2 + \bigg{(}\frac{\partial u^{(N)}}{\partial t}\bigg{)}^2 \bigg{]} dS_2 = \underbrace{\int \dots \int}_{S_3} \bigg{[} \sum\limits_{i=1}^n \bigg{(} \frac{\partial u^{(N)}}{\partial x_i} \bigg{)}^2 \\ & + \bigg{(}\frac{\partial u^{(N)}}{\partial t}\bigg{)}^2 \bigg{]} dS_3. \end{split} \end{equation} At this stage, we consider the functions \begin{equation} \label{eq:3.103} v^{(N)}_{\alpha_1 ... \alpha_n} = \frac{\partial^{\alpha} u^{(N)}}{\partial {x_1}^{\alpha_1}... \partial {x_n}^{\alpha_n}}, \end{equation} which are solutions of the wave equation. We also note that on the boundary of the parallelepiped, $|x_i|=a$, the functions $ v^{(N)}_{\alpha_1 ... \alpha_n}$ satisfy the condition $v^{(N)}_{\alpha_1 ... \alpha_n} =0$, if $\alpha_j$ is even, or $\frac{\partial v^{(N)}_{\alpha_1 ... \alpha_n}}{\partial n} =0$, if $\alpha_j$ is odd. If we apply ($\ref{eq:3.85}$) to these functions, we have \begin{equation} \begin{split} \label{eq:3.104} & \underbrace{\int \dots \int}_{S_2} \bigg{[} \sum\limits_{i=1}^n \bigg{(} \frac{\partial v^{(N)}_{\alpha_1 ... \alpha_n} }{\partial x_i} \bigg{)}^2 + \bigg{(} \frac{ \partial v^{(N)}_{\alpha_1 ... \alpha_n}}{\partial t} \bigg{)}^2 \bigg{]} dS_2 = \underbrace{\int \dots \int}_{S_3} \bigg{[} \sum\limits_{i=1}^n \bigg{(} \frac{\partial v^{(N)}_{\alpha_1 ... \alpha_n} }{\partial x_i} \bigg{)}^2 \\ &+ \bigg{(} \frac{ \partial v^{(N)}_{\alpha_1 ... \alpha_n}}{\partial t} \bigg{)}^2 \bigg{]} dS_3. \end{split} \end{equation} On the initial plane $S_2$, all these integrals, for given $\alpha_1$, $\alpha_2$, ..., $\alpha_n$, are bounded by numbers that do not depend on $N$. We also consider the functions \begin{equation} \label{eq:3.105} {\omega^{(k,r)}}_{\alpha_1 ... \alpha_n} ={ v^{(k)}}_{\alpha_1 ... \alpha_n} - { v^{(r)}}_{\alpha_1 ... \alpha_n}. \end{equation} For these functions we obtain, as before, \begin{equation} \begin{split} \label{eq:3.106} & \underbrace{\int \dots \int}_{S_3} \bigg{[} \sum\limits_{i=1}^n \bigg{(} \frac{\partial \omega^{(k,r)}_{\alpha_1 ... \alpha_n} }{\partial x_i} \bigg{)}^2 + \bigg{(} \frac{ \partial \omega^{(k,r)}_{\alpha_1 ... \alpha_n}}{\partial t} \bigg{)}^2 \bigg{]} dS_3 = \underbrace{\int \dots \int}_{S_2} \bigg{[} \sum\limits_{i=1}^n \bigg{(} \frac{\partial \omega^{(k,r)}_{\alpha_1 ... \alpha_n} }{\partial x_i} \bigg{)}^2 \\ &+ \bigg{(} \frac{ \partial \omega^{(k,r)}_{\alpha_1 ... \alpha_n}}{\partial t} \bigg{)}^2 \bigg{]} dS_2. \end{split} \end{equation} For $k$ and $r$ sufficiently large, the integral on the right-hand side becomes arbitrarily small; this follows immediately from the convergence, with all derivatives, of the Fourier series for $\varphi_0$ and $\varphi_1$. This implies that the quantity on the left-hand side is arbitrarily small as well: \begin{equation} \label{eq:3.107} \underbrace{\int \dots \int}_{S_3} \bigg{[} \sum\limits_{i=1}^n \bigg{(} \frac{\partial \omega^{(k,r)}_{\alpha_1 ... \alpha_n} }{\partial x_i} \bigg{)}^2 + \bigg{(} \frac{ \partial \omega^{(k,r)}_{\alpha_1 ... \alpha_n}}{\partial t} \bigg{)}^2 \bigg{]} dS_3 < \epsilon. \end{equation} By integration of this inequality with respect to the variable $t$ from 0 to $T$, it follows that \begin{equation} \label{eq:3.108} \underbrace{\int \dots \int}_{\Omega} \bigg{[} \sum\limits_{i=1}^n \bigg{(} \frac{\partial \omega^{(k,r)}_{\alpha_1 ... \alpha_n} }{\partial x_i} \bigg{)}^2 + \bigg{(} \frac{ \partial \omega^{(k,r)}_{\alpha_1 ... \alpha_n}}{\partial t} \bigg{)}^2 \bigg{]} d\Omega < T\epsilon, \end{equation} where $\Omega$ is the domain $0 \leq t \leq T$; $|x_i| \leq a$. Now, with a procedure analogous to that used to obtain ($\ref{eq:3.93}$), it is possible to prove that \begin{equation} \label{eq:3.109} \underbrace{\int \dots \int}_{\Omega} \bigg{(} \omega^{(k,r)}_{\alpha_1 ... \alpha_n} \bigg{)}^2 d\Omega \leq (\epsilon_0 + \epsilon_1 T )^2 \leq \epsilon. \end{equation} By virtue of the completeness of the space ${W_2}^l$, it is possible to conclude that the $v^{(N)}_{\alpha_1 ... \alpha_n}$, which satisfy the Cauchy convergence criterion, converge in this space. The convergence of all derivatives of $u^{(N)}$ in ${W_2}^l$ implies the uniform convergence of all the derivatives of these functions. \endproof Now, by making use of this lemma, we can prove the theorem. \proof The values of the solution within the pyramid \begin{equation*} 0 \leq |x_i| + t \leq \frac{a}{2} \end{equation*} depend only on the values of $\varphi_0$ and $\varphi_1$ inside the domain $|x_i| \leq \frac{a}{2}$, at $t=0$. Thus, we build the functions \begin{equation} \label{eq:3.110} {\varphi_0}^{(a)} = \varphi_0 \prod\limits_{i=1}^n \psi \bigg{(} \bigg{|} \frac{x_i}{a} \bigg{|} \bigg{)}; \end{equation} \begin{equation} \label{eq:3.111} {\varphi_1}^{(a)} = \varphi_1 \prod\limits_{i=1}^n \psi \bigg{(} \bigg{|} \frac{x_i}{a} \bigg{|} \bigg{)}; \end{equation} where $\psi(\xi)$ is an infinitely differentiable function equal to 1 when $\xi < \frac{1}{2}$ and to 0 when $\xi >1$. Our aim is to find a solution $u^a$ of the wave equation that satisfies \begin{equation} \label{eq:3.112} {u^a}|_{t=0} = {\varphi_0}^{(a)}, \hspace{1cm} {\frac{\partial u^a}{\partial t}}\bigg{|}_{t=0} = {\varphi_1}^{(a)}.
\end{equation} We note that the previously demonstrated lemma shows that $u^a$ is infinitely differentiable; but, as we have seen before, on the pyramid $0 \leq |x_i| + t \leq \frac{a}{2}$ this solution coincides with $u$. Hence $u$ is, in this case, infinitely differentiable. \endproof At this stage, we aim to find a solution of the wave equation $\Box u = \Delta u - \frac{\partial^2 u}{\partial t^2} =0$ on the whole space that satisfies the initial conditions \begin{equation} \label{eq:3.113} {u}|_{t=0} = {u_0}, \hspace{1cm} {\frac{\partial u}{\partial t}}\bigg{|}_{t=0}= {u_1}. \end{equation} It has been shown that if $u_0$ and $u_1$ have derivatives of every order, the solution of the problem exists and is infinitely differentiable. However, infinite differentiability of the data should not really be needed, since the equation involves only second-order derivatives. To solve our problem we first determine which conditions imposed on $u_0$ and $u_1$ ensure the existence of twice differentiable solutions. Thus, let $u(t, x_1, ..., x_n)$ be a summable function on a domain $\Omega$ of the $(n+1)$-dimensional space. If there exists a summable function $f(x_1, ..., x_n, t)$ such that \begin{equation} \label{eq:3.114} \overbrace{\underbrace{\int \dots \int}_{\Omega}}^{n+1} u \; \Box \psi dV = \underbrace{\int \dots \int}_{\Omega} \psi f dV \end{equation} for each twice differentiable function $\psi(t, x_1, ..., x_n)$ that vanishes outside some closed subset of $\Omega$, then $f$ is called the \textit{generalized wave operator} of $u$ and we write $\Box u = f$. A function whose generalized wave operator vanishes is called a \textit{generalized solution} of the wave equation.
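The defining identity ($\ref{eq:3.114}$) amounts to a double integration by parts, and can be checked numerically in one space dimension. The sketch below uses hypothetical choices (not taken from the text): $u = e^{x}\cos 2t$, which is smooth but not a solution, and the cutoff $\psi = (1-x^2-t^2)^4$, twice differentiable once extended by zero outside the unit disk:

```python
import numpy as np
import sympy as sp

x, t = sp.symbols('x t')
u = sp.exp(x) * sp.cos(2 * t)      # smooth test function; □u = 5 e^x cos(2t) ≠ 0
psi = (1 - x**2 - t**2)**4         # C^3 bump once extended by 0 outside r < 1

box = lambda w: sp.diff(w, x, 2) - sp.diff(w, t, 2)   # □ = Δ - ∂²/∂t², one space dim

f1 = sp.lambdify((x, t), u * box(psi), 'numpy')
f2 = sp.lambdify((x, t), psi * box(u), 'numpy')

h = 2.0 / 1200
s = -1 + h * (np.arange(1200) + 0.5)                  # midpoint grid on [-1, 1]
X, T = np.meshgrid(s, s)
mask = X**2 + T**2 < 1.0            # psi and its first derivatives vanish outside

lhs = np.sum(np.where(mask, f1(X, T), 0.0)) * h * h   # ∫∫ u □ψ dV
rhs = np.sum(np.where(mask, f2(X, T), 0.0)) * h * h   # ∫∫ ψ □u dV
assert abs(lhs - rhs) < 5e-3
```

Because $\psi$ vanishes to fourth order at the boundary of its support, no boundary terms survive the two integrations by parts, which is exactly what makes the generalized wave operator well defined.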
\begin{thm} $\\$ If $u_0$ has generalized derivatives up to order $ \big{(} \frac{n}{2} + 3 \big{)} $, square integrable on each bounded domain, and $u_1$ has similar generalized derivatives up to order $\big{(} \frac{n}{2}+ 2 \big{)}$, then the equation $\Delta u - \frac{\partial^2 u}{\partial t^2}=0$ has a twice differentiable solution satisfying the conditions \begin{equation*} {u}|_{t=0} = {u_0}; \hspace{1cm} {\frac{\partial u}{\partial t}}\bigg{|}_{t=0}= {u_1}. \end{equation*} \end{thm} \proof We build the sequences of average functions $\{ u_{0h} \}$ and $\{u_{1h} \}$. By the theorem on the solution of the wave equation with smooth initial conditions, there exist solutions $u_h$ of the equation $\Box u=0$ which satisfy the initial conditions \begin{equation} \label{eq:3.115} {u_h}|_{t=0} = {u_{0h}}; \hspace{1cm} {\frac{\partial u_h}{\partial t}}\bigg{|}_{t=0}= {u_{1h}} \end{equation} and have derivatives of arbitrary order. Let us consider the function $v_{p,q} = u_{h_p} - u_{h_q}$; $v_{p,q}$ is a solution of $\Box u=0$ which satisfies the conditions \begin{equation*} {v_{p,q}}|_{t=0} = {u_{0h_p}} - u_{0h_q}; \; \; \; {\frac{\partial v_{p,q}}{\partial t}}\bigg{|}_{t=0}= {u_{1h_p} - u_{1h_q}}. \end{equation*} From inequality ($\ref{eq:3.80}$), which refers to fig.
($\ref{fig:4}$), we have \begin{equation} \begin{split} \label{eq:3.116}& \underbrace{\int \dots \int}_{S_2} \bigg{[} \sum\limits_{i=1}^n \bigg{(} \frac{\partial v_{p,q}}{\partial x_i} \bigg{)}^2 + \bigg{(} \frac{\partial v_{p,q}}{\partial t} \bigg{)}^2 \bigg{]} dS \leq \underbrace{\int \dots \int}_{S_3} \bigg{[} \sum\limits_{i=1}^n \bigg{(} \frac{\partial v_{p,q}}{\partial x_i} \bigg{)}^2 \\ & + \bigg{(} \frac{\partial v_{p,q}}{\partial t} \bigg{)}^2 \bigg{]} dS, \end{split} \end{equation} and similarly for each derivative of $v_{p,q}$ we have \begin{equation} \begin{split} \label{eq:3.117}& \underbrace{\int \dots \int}_{S_2} \bigg{[} \sum\limits_{i=1}^n \bigg{(} \frac{\partial}{\partial x_i}\frac{\partial^\alpha v_{p,q}}{\partial {x_1}^{\alpha_1} \dots \partial {x_n}^{\alpha_n}} \bigg{)}^2 + \bigg{(} \frac{\partial}{\partial t}\frac{\partial^\alpha v_{p,q}}{\partial {x_1}^{\alpha_1}\dots \partial {x_n}^{\alpha_n}} \bigg{)}^2 \bigg{]} dS \leq \underbrace{\int \dots \int}_{S_3} \\ &\bigg{[} \sum\limits_{i=1}^n \bigg{(} \frac{\partial}{\partial x_i}\frac{\partial^\alpha v_{p,q}}{\partial {x_1}^{\alpha_1} \dots \partial {x_n}^{\alpha_n}} \bigg{)}^2 + \bigg{(} \frac{\partial}{\partial t}\frac{\partial^\alpha v_{p,q}}{\partial {x_1}^{\alpha_1} \dots \partial {x_n}^{\alpha_n}}\bigg{)}^2 \bigg{]} dS; \end{split} \end{equation} whereas from ($\ref{eq:3.92}$) it follows that \begin{equation} \begin{split} \label{eq:3.118} & \underbrace{\int \dots \int}_{S_2} (v_{p,q})^2 dS \leq \Biggl\{ \bigg{[} \underbrace{\int \dots \int}_{S_3} (v_{p,q})^2 dS \bigg{]}^{\frac{1}{2}} + \bigg{[} \underbrace{\int \dots \int}_{S_3} \sum\limits_{i=1}^n \bigg{(}\frac{\partial v_{p,q}}{\partial x_i} \bigg{)}^2 dS \\ &+ \underbrace{\int \dots \int}_{S_3} \bigg{(} \frac{\partial v_{p,q}}{\partial t} \bigg{)}^2 dS \bigg{]}^{\frac{1}{2}}t \Biggr\}^2. \end{split} \end{equation} From one of the properties of the average functions it follows that $u_{0h} \rightarrow u_0$ in ${W_2}^{\frac{n}{2} + 3}$ and
$u_{1h} \rightarrow u_{1}$ in ${W_2}^{\frac{n}{2} + 2}$, and consequently the right-hand sides of the previous inequalities can be made arbitrarily small for $h_p$ and $h_q$ sufficiently small and $\alpha \leq \frac{n}{2} + 3$; the left-hand sides then behave in the same way, so that on an arbitrary domain of the plane $t= {\rm const.}$ the sequence $\{u_h \}$ converges strongly in the sense of ${W_2}^{\frac{n}{2} +3}$. The convergence of the functions $u_h$ in $C^2$ then follows from the embedding theorem. With a similar estimate, we show that $\frac{\partial u}{\partial t} \in C^1$ and $\frac{\partial^2 u}{\partial t^2} \in C^0$, i.e. $u$ is twice continuously differentiable in the $(n+1)$-dimensional space and is a solution of the wave equation. \endproof \section{Parametrix of Scalar Wave Equation in Curved Space-Time} Let us recall that the solution of the wave equation \begin{equation} \label{eq:3.119} \Box u =0 \end{equation} in Minkowski space-time involves amplitude and phase functions, which characterize the integral representation \begin{equation} \label{eq:3.120} u (t, x_1, x_2, x_3) = \int_{-\infty}^{\infty} d \xi_1 \int_{-\infty}^{\infty} d \xi_2 \int_{-\infty}^{\infty} d \xi_3 A(\xi_1, \xi_2, \xi_3, t) e^{i(\xi_1 x_1 + \xi_2 x_2 + \xi_3 x_3 )}. \end{equation} This is completely specified once suitable Cauchy data \begin{equation} \label{eq:3.121} u(t, x)|_{t=0} \equiv u_{0}(x), \; \; \; {\frac{\partial u}{\partial t}}\bigg{|}_{t=0} \equiv u_1 (x), \end{equation} are assigned. However, when the wave operator refers to a curved space-time, Eq. ($\ref{eq:3.120}$) has to be generalized.
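In flat space the building blocks of the representation ($\ref{eq:3.120}$) are monochromatic plane waves $e^{i(\xi\cdot x - |\xi| t)}$. A quick symbolic check (a sketch) that each of them is annihilated by the d'Alembert operator:

```python
import sympy as sp

t = sp.symbols('t', real=True)
x = sp.symbols('x1:4', real=True)            # x1, x2, x3
xi = sp.symbols('xi1:4', real=True, positive=True)

# phase xi·x - |xi| t, so that the dispersion relation omega = |xi| holds
phase = sum(k * y for k, y in zip(xi, x)) - sp.sqrt(sum(k**2 for k in xi)) * t
u = sp.exp(sp.I * phase)                     # monochromatic plane wave

box_u = sum(sp.diff(u, y, 2) for y in x) - sp.diff(u, t, 2)
assert sp.simplify(box_u) == 0
```

Superposing such waves with an amplitude $A(\xi, t)$, as in ($\ref{eq:3.120}$), is precisely what the curved-space construction below generalizes.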
This is possible, since we have seen that a theorem guarantees that the solution of the Cauchy problem for the system under examination can be expressed in the form \cite{treves1980introduction} \begin{equation} \label{eq:3.122} u(x, t) = \sum\limits_{i=0}^1 E_{i}(t) u_{i}(x), \end{equation} where, on denoting by $\hat{u_i}$ the Fourier transform of the Cauchy data, the operators $E_{i}(t)$ act according to \begin{equation} \label{eq:3.123} E_{i}(t) u_i (x) = \sum\limits_{k=1}^2 (2\pi)^{-3} \int e^{i \varphi_k(x, t, \xi)}\alpha_{ik}(x, t, \xi) \hat{u_i}(\xi)d^3 \xi + R_i(t) u_i(x), \end{equation} where the $\varphi_k$ are real-valued phase functions which satisfy the initial condition \begin{equation} \label{eq:3.124} \varphi_k(t, x, \xi) |_{t=0} = x \cdot \xi = \sum\limits_{s=1}^3 x^s \xi_s, \end{equation} and $R_i(t)$ is a regularizing operator which smooths out the singularities of whatever it acts upon. In other words, the Cauchy problem is here solved by a pair of Fourier-Maslov integral operators \cite{treves1980introduction} of the form ($\ref{eq:3.123}$), and such a construction generalizes the monochromatic plane waves for the d'Alembert operator from Minkowski space-time to curved space-time. Strictly speaking, we are dealing with the \textit{parametrix} for the wave equation. In our case, since we know that ($\ref{eq:3.122}$) and ($\ref{eq:3.123}$) yield an exact solution of the Cauchy problem, we can insert them into Eq. ($\ref{eq:3.119}$) with $P= \Box$, finding that, for all $i=0, 1,$ \begin{equation} \label{eq:3.125} P[E_i(t)u_i(x)] \sim \sum\limits_{k=1}^2 (2\pi)^{-3} \int P [e^{i \varphi_k} \alpha_{ik} ] \hat{u_i}(\xi)d^3\xi, \end{equation} where $PR_i(t)u_i(x)$ can be neglected with respect to the integral on the right-hand side of Eq. ($\ref{eq:3.125}$), because $R_i(t)$ is a regularizing operator. Next, we find from Eq.
($\ref{eq:3.119}$) that \begin{equation} \label{eq:3.126} P[e^{i \varphi_k}\alpha_{ik}] = e^{i \varphi_k}(iA_{ik} + B_{ik}), \end{equation} where, on considering the form of $P$ in Kasner space-time (see Appendix B), i.e. \begin{equation} \label{eq:3.127} P = \frac{ \partial^2}{\partial t^2} + \frac{1}{t} \frac{\partial}{\partial t} - \sum \limits_{l=1}^3 t^{-2p_l} \frac{\partial^2}{\partial {x_l}^2}, \; \; \sum\limits_{k=1}^3 p_k = \sum\limits_{k=1}^3 (p_k)^2 =1, \end{equation} one finds \begin{equation} \label{eq:3.128} A_{ik} \equiv \frac{\partial^2 \varphi_k}{\partial t^2} \alpha_{ik} + 2 \frac{\partial \varphi_k}{\partial t} \frac{\partial \alpha_{ik}}{\partial t} + \frac{1}{t} \frac{\partial \varphi_k}{\partial t} \alpha_{ik} - \sum\limits_{l=1}^3 t^{-2p_l} \bigg{(} \frac{\partial^2 \varphi_k}{\partial {x_l}^2} \alpha_{ik} + 2 \frac{\partial \varphi_k}{\partial x_l} \frac{\partial \alpha_{ik}}{\partial x_l} \bigg{)}, \end{equation} \begin{equation} \label{eq:3.129}B_{ik} \equiv \frac{\partial^2 \alpha_{ik}}{\partial t^2} - \bigg{(} \frac{\partial \varphi_k}{\partial t} \bigg{)}^2 \alpha_{ik} + \frac{1}{t} \frac{\partial \alpha_{ik}}{\partial t} - \sum\limits_{l=1}^3t^{-2p_l} \bigg{(} \frac{\partial^2 \alpha_{ik}}{\partial {x_l}^2} - \bigg{(}\frac{\partial \varphi_k}{\partial x_l}\bigg{)}^2 \alpha_{ik}\bigg{)}. \end{equation} Then, if the phase functions $\varphi_k$ are real-valued, since the exponentials $e^{i \varphi_k}$ can be taken to be linearly independent, we can fulfill Eq. ($\ref{eq:3.119}$), up to the negligible contributions resulting from $PR_{i}(t)u_i(x)$, by setting to zero in the integrand ($\ref{eq:3.125}$) both $A_{ik}$ and $B_{ik}$. This leads to a coupled system of partial differential equations. We want to remark that the choice of Kasner space-time is merely a useful example and is not necessary for the validity of our argument.
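The Kasner constraints $\sum_k p_k = \sum_k (p_k)^2 = 1$ appearing in ($\ref{eq:3.127}$) admit the standard one-parameter family of solutions (often written in terms of the Lifshitz-Khalatnikov parameter $u$); a quick numerical check of this family, written by us for illustration:

```python
def kasner_exponents(u):
    """Standard one-parameter family solving sum(p) = sum(p**2) = 1,
    parametrized by the Lifshitz-Khalatnikov parameter u."""
    d = 1.0 + u + u * u
    return (-u / d, (1.0 + u) / d, u * (1.0 + u) / d)

for u in (0.5, 1.0, 2.0, 7.3):
    p = kasner_exponents(u)
    assert abs(sum(p) - 1.0) < 1e-12                  # sum p_k = 1
    assert abs(sum(q * q for q in p) - 1.0) < 1e-12   # sum p_k^2 = 1
# the first constraint is what gives sqrt(-g) = t**(p1+p2+p3) = t,
# i.e. the 1/t first-derivative term in the Kasner wave operator
```

The $1/t$ term in ($\ref{eq:3.127}$) is precisely the trace of this anisotropic expansion.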
Our Cauchy problem is therefore equivalent to solving the equations \begin{equation} \label{eq:3.130} A_{ik}=0, \hspace{1cm} B_{ik}=0. \end{equation} These equations constitute the $\textit{dispersion relation}$ for the scalar wave equation in Kasner space-time. Such a dispersion relation takes a neater geometric form upon bearing in mind the form ($\ref{eq:3.127}$) of the wave operator $P=\Box$ in Kasner coordinates, i.e. \begin{equation} \label{eq:3.131} A_{ik}=0 \rightarrow \bigg{[} - \alpha_{ik} (\Box \varphi_k ) - 2 \sum\limits_{\beta, \gamma =1}^4 (g^{-1})^{\beta \gamma}(\varphi_k)_{, \beta}(\alpha_{ik})_{,\gamma} \bigg{]} =0, \end{equation} \begin{equation} \label{eq:3.132} B_{ik}=0 \rightarrow \bigg{[} - \Box + \sum\limits_{\beta, \gamma =1}^4 (g^{-1})^{\beta \gamma}(\varphi_k)_{, \beta}(\varphi_{k})_{,\gamma} \bigg{]} \alpha_{ik} =0. \end{equation} Let us bear in mind that the indices $i$ and $k$ count the number of functions contributing to the Fourier-Maslov integral operator. We can therefore exploit the four-dimensional concept of gradient of a function, viewed as the four-dimensional covariant vector defined by the differential of the function, i.e. \begin{equation} \label{eq:3.133} df = \sum\limits_{\alpha =1}^4 \frac{\partial f}{\partial x^\alpha} d x^{\alpha} = \sum\limits_{\alpha =1}^4 (\nabla_\alpha f) dx^{\alpha} = \sum\limits_{\alpha=1}^4 ({\rm grad}\, f)_\alpha dx^\alpha,\end{equation} where $\nabla$ is the Levi-Civita connection on four-dimensional space-time, and we exploit the identity $f_{,\alpha} = \nabla_\alpha f$, $\forall f \in C^{\infty}(M)$. The consideration of $\nabla_\alpha f$ is not mandatory at this stage, but it will be helpful in a moment, when we write in tensor language the equations expressing the dispersion relation. We arrive therefore, upon multiplying Eq. ($\ref{eq:3.131}$) by $\alpha_{ik}$, while dividing Eq.
($\ref{eq:3.132}$) by $\alpha_{ik}$, at the following geometric form of the dispersion relation in Kasner space-time \begin{equation} \label{eq:3.134} \sum\limits_{\beta,\gamma=1}^4 (g^{-1})^{\beta \gamma} \nabla_\beta \bigg{[} (\alpha_{ik})^2 \nabla_\gamma \varphi_k \bigg{]} = {\rm div} \big{[}(\alpha_{ik})^2 \, {\rm grad}\, \varphi_k \big{]} =0, \end{equation} \begin{equation} \label{eq:3.135} \sum\limits_{\beta,\gamma=1}^4 (g^{-1})^{\beta \gamma} (\nabla_\beta \varphi_{k})( \nabla_\gamma \varphi_k) = \langle {\rm grad}\, \varphi_k, {\rm grad}\, \varphi_k \rangle = \frac{(\Box \alpha_{ik})}{\alpha_{ik}}, \end{equation} where the four-dimensional divergence operator acts according to \begin{equation} \label{eq:3.136}{\rm div}F=\sum\limits_{\beta=1}^4 \nabla^\beta F_\beta = \sum\limits_{\alpha, \beta=1}^4 (g^{-1})^{\alpha \beta} \nabla_\beta F_\alpha. \end{equation} \section{Tensor Generalization of the Ermakov-Pinney Equation} Note that, if the ratio $\frac{(\Box \alpha_{ik})}{\alpha_{ik}}$ is much smaller than a suitable parameter having dimension $\rm{length}^{-2}$, Eq. ($\ref{eq:3.135}$) reduces to the eikonal equation, and hence the phase functions reduce to the Hadamard-Ruse-Synge world function that we have defined in the course of studying the characteristic conoid. It is possible to solve Eqs. ($\ref{eq:3.134}$) and ($\ref{eq:3.135}$) exactly. For this purpose we remark that, upon defining the covariant vector \begin{equation} \label{eq:3.137} \psi_\gamma \equiv (\alpha_{ik})^2 \nabla_{\gamma} \varphi_k, \end{equation} Eq. ($\ref{eq:3.134}$) is equivalent to solving the first-order partial differential equation expressing the vanishing divergence condition for $\psi_\gamma$, i.e. \begin{equation} \label{eq:3.138} \sum\limits_{\gamma=1}^4 \nabla^\gamma \psi_\gamma = {\rm div}\, \psi =0. \end{equation} This equation is not enough to determine the four components of $\psi_\gamma$, but there are cases where further progress can be made.
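For completeness, the step leading from ($\ref{eq:3.131}$) to the conservation form ($\ref{eq:3.134}$) is the Leibniz rule for the covariant divergence applied to the vector field $(\alpha_{ik})^2\,{\rm grad}\,\varphi_k$:

```latex
\sum_{\beta,\gamma=1}^{4}(g^{-1})^{\beta\gamma}\nabla_\beta
  \bigl[(\alpha_{ik})^{2}\nabla_\gamma\varphi_k\bigr]
  =(\alpha_{ik})^{2}\,\Box\varphi_k
  +2\,\alpha_{ik}\sum_{\beta,\gamma=1}^{4}(g^{-1})^{\beta\gamma}
    (\nabla_\beta\alpha_{ik})(\nabla_\gamma\varphi_k),
```

whose right-hand side is $-\alpha_{ik}$ times the bracket in ($\ref{eq:3.131}$); thus $A_{ik}=0$ amounts precisely to requiring that $(\alpha_{ik})^2\,{\rm grad}\,\varphi_k$ be divergence-free.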
After doing that, we can express the covariant derivative of the phase function from the definition ($\ref{eq:3.137}$), i.e. \begin{equation} \label{eq:3.139} \nabla_\gamma \varphi_k = \partial_\gamma \varphi_k = (\alpha_{ik})^{-2} \psi_{\gamma}, \end{equation} and the insertion of Eq. ($\ref{eq:3.139}$) into Eq. ($\ref{eq:3.135}$) yields \begin{equation} \label{eq:3.140} (\alpha_{ik})^3 \Box \alpha_{ik} = g(\psi,\psi) =\sum\limits_{\beta, \gamma=1}^4 (g^{-1})^{\beta \gamma} \psi_\beta \psi_\gamma = \sum\limits_{\gamma=1}^4 \psi_\gamma \psi^{\gamma}. \end{equation} This is a tensorial generalization of a famous non-linear ordinary differential equation, i.e. the $\textit{Ermakov-Pinney equation}$ \cite{pinney1950nonlinear} \begin{equation} \label{eq:3.141} y'' + p y=qy^{-3}.\end{equation} If $y''$ is replaced by $\Box y$, $p$ is set to zero and $q$ is promoted to a function of space-time location, Eq. ($\ref{eq:3.141}$) is mapped into Eq. ($\ref{eq:3.140}$). After solving this nonlinear equation for $\alpha_{ik}= \alpha_{ik}[g(\psi, \psi)]$, we have to find the phase function $\varphi_k$ by writing and solving the four components of Eq. ($\ref{eq:3.139}$). To sum up, we have proved the following result: \begin{thm} $\\$ For any Lorentzian space-time manifold $(M,g)$, the amplitude functions $\alpha_{ik} \in C^2(T^*M)$ and phase functions $\varphi_k \in C^1(T^*M)$ in the parametrix ($\ref{eq:3.123}$) for the scalar wave equation can be obtained by solving, first, the linear condition ($\ref{eq:3.138}$) of vanishing divergence for a covariant vector $\psi_\gamma$. All non-linearities of the coupled system are then mapped into solving the non-linear equation ($\ref{eq:3.140}$) for the amplitude function $\alpha_{ik}$. Finally, the phase function $\varphi_k$ is found by solving the first-order linear equation ($\ref{eq:3.139}$).
\end{thm} \chapter{Linear Systems of Normal Hyperbolic Form} \epigraph{The most incomprehensible thing about the world is that it is comprehensible.}{Albert Einstein } In the next chapters, our aim is to demonstrate, following the work by Fourès-Bruhat \cite{foures1952theoreme}, that it is possible to solve the Cauchy problem for the Einstein field equations in vacuum. The great achievement of this work was a rigorous and constructive proof that the Cauchy problem for Einstein's theory is well posed and admits a unique solution even for non-analytic Cauchy data. Hence, in this Chapter, we first consider a system of $n$ second-order partial differential equations $[E]$, with $n$ unknown functions $u_s$ and four variables $x^\alpha$, hyperbolic and linear, of the following type: \begin{equation*} E_r = \sum\limits_{\lambda,\mu=1}^4 A^{\lambda \mu} \frac{\partial^2 u_r}{\partial x^\lambda \partial x^\mu} + \sum\limits_{s=1}^n \sum\limits_{\mu=1}^4 {{B^s}_r}^\mu \frac{\partial u_s}{\partial x^\mu} + f_r =0, \hspace{3cm} [E] \end{equation*} where the coefficients $A^{\lambda \mu}$, ${{B^s}_r}^\mu$ and $f_r$ are given functions of the four variables $x^\alpha$, taken to satisfy some suitable assumptions. We will consider linear combinations of these equations, whose coefficients are auxiliary functions which possess at $M$ the parametrix properties; by integrating these combinations over the characteristic conoid $\Sigma$ with vertex $M$ we will then obtain a system of integral equations of the type of the Kirchhoff formulae. By adjoining to these Kirchhoff formulae the equations determining the characteristic conoid and the auxiliary functions, we will find a system of integral equations equivalent to the system $[E]$.
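Before stating the assumptions, it may help to keep in mind the simplest concrete instance of $[E]$ (our illustration, not part of the general treatment): take $n=1$ and the constant coefficients of the flat-space wave operator,

```latex
% n = 1, with A^{44} = 1, A^{i4} = 0, A^{ij} = -\delta^{ij},
% B = 0, f = 0: the system [E] reduces to the flat-space wave equation
E_1 = \frac{\partial^2 u_1}{\partial (x^4)^2}
    - \sum_{i=1}^{3} \frac{\partial^2 u_1}{\partial (x^i)^2} = 0,
```

whose quadratic form $x_4^2 - \sum_i x_i^2$ has one positive and three negative squares, as the normal hyperbolicity assumption below requires.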
\section{Assumptions on the Coefficients and The Characteristic Conoid} Following the Fourès-Bruhat work \cite{foures1952theoreme}, let us first consider a system of $n$ second-order partial differential equations $[E]$, with $n$ unknown functions $u_s$ and four variables $x^\alpha$, hyperbolic and linear, of the following type: \begin{equation} \label{eq:4.1} E_r = \sum\limits_{\lambda,\mu=1}^4 A^{\lambda \mu} \frac{\partial^2 u_r}{\partial x^\lambda \partial x^\mu} + \sum\limits_{s=1}^n \sum\limits_{\mu=1}^4 {{B^s}_r}^\mu \frac{\partial u_s}{\partial x^\mu} + f_r =0, \; \; r=1, 2, ..., n. \end{equation} The coefficients $A^{\lambda \mu}$, which are the same for all the $n$ equations, ${{B^s}_r}^\mu$ and $f_r$ are given functions of the four variables $x^\alpha$. These are taken to satisfy the following assumptions: Within a domain defined by \begin{equation} \label{eq:4.2} |x^i - \bar{x}^i | \leq d, \; \; |x^4| \leq \epsilon \hspace{1cm} (i=1, 2, 3), \end{equation} where $\bar{x}^i$, $d$ and $\epsilon$ are some given numbers, it holds that \begin{description} \item[(1)] The coefficients $A^{\lambda \mu}$ and ${{B^s}_r}^\mu$ possess continuous and bounded derivatives up to the orders four and two, respectively. The coefficients $f_r$ are continuous and bounded. \item[(2)] The quadratic form $\sum\limits_{\lambda, \mu =1}^4A^{\lambda \mu}x_\lambda x_\mu$ is of the normal hyperbolic type, i.e. it has one positive square and three negative squares. We will assume in addition that the variable $x^4$ is a temporal variable, the three variables $x^i$ being spatial, i.e. \begin{equation} \label{eq:4.3} A^{44}>0 \; \; {\rm and} \; \; \sum\limits_{i,j=1}^3 A^{ij}x_i x_j < 0. \end{equation} \item[(3)] The partial derivatives of the $A^{\lambda \mu}$ and ${{B^s}_r}^\mu$ of order four and two, respectively, satisfy Lipschitz conditions with respect to all their arguments.
\end{description} The characteristic surfaces of ($\ref{eq:4.1}$) are three-dimensional manifolds of the space of the four variables $x^\alpha$, solutions of the differential system \begin{equation} \label{eq:4.4} F = \sum\limits_{\lambda, \mu =1}^4 A^{\lambda \mu} y_\lambda y_\mu =0 \; \; {\rm with} \; \; \sum\limits_{\lambda=1}^4 y_\lambda dx^\lambda =0. \end{equation} The four quantities $y_\lambda$ denote directional parameters of the normal to the contact element with support $x^\alpha$. Since this system of parameters is defined only up to a proportionality factor, we may take $y_4=1$; on setting $y_i=p_i$, the desired surfaces are solutions of \begin{equation} \label{eq:4.5} F=A^{44} + 2 \sum\limits_{i=1}^3 A^{i4}p_i + \sum\limits_{i,j=1}^3 A^{ij}p_i p_j =0 \; \; {\rm with} \; \; dx^4 + \sum\limits_{i=1}^3 p_i dx^i = 0.\end{equation} The characteristics of this differential system, which are the bicharacteristics of Eq. ($\ref{eq:4.1}$), satisfy the following differential equations: \begin{equation} \label{eq:4.6} \frac{ dx^i}{A^{i4} + \sum\limits_{j=1}^3A^{ij}p_j}= \frac{ dx^4}{A^{44} + \sum\limits_{i=1}^3A^{i4}p_i}= \frac{- dp_i}{\frac{1}{2} \bigg{(} \frac{\partial F}{\partial x^i} -p_i \frac{\partial F}{\partial x^4}\bigg{)}}= d \lambda_1, \end{equation} where $\lambda_1$ is an auxiliary parameter. The characteristic conoid $\Sigma_0$ with vertex $M_0(x^\alpha_0)$ is the characteristic surface generated by the bicharacteristics passing through $M_0$.
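If the flat constant coefficients $A^{44}=1$, $A^{i4}=0$, $A^{ij}=-\delta^{ij}$ hold everywhere (not only at the vertex), $F$ does not depend on the $x^\alpha$, so $dp_i/d\lambda_1=0$ and the bicharacteristic system integrates trivially: the bicharacteristics through $M_0$ sweep out the Minkowski light cone. A short pure-Python Euler integration of this special case (illustrative only; the general variable-coefficient problem requires the integral equations below):

```python
import math

def bicharacteristic(p0, x0=(0.0, 0.0, 0.0, 0.0), lam=1.0, steps=1000):
    """Integrate dx^i/dl = A^{i4} + sum_j A^{ij} p_j, dx^4/dl = A^{44} + sum_i A^{i4} p_i,
    dp_i/dl = -(1/2)(dF/dx^i - p_i dF/dx^4), with constant flat coefficients."""
    x = list(x0)
    p = list(p0)
    h = lam / steps
    for _ in range(steps):
        for i in range(3):
            x[i] += h * (-p[i])   # A^{ij} p_j = -p_i, A^{i4} = 0
        x[3] += h * 1.0           # A^{44} = 1, A^{i4} = 0
        # dF/dx^mu = 0 for constant coefficients, so p stays fixed
    return x, p

# initial direction on the unit sphere, as required by sum (p_i^0)^2 = 1
l2, l3 = 1.1, 2.4
p0 = (math.sin(l2) * math.cos(l3), math.sin(l2) * math.sin(l3), math.cos(l2))
x, p = bicharacteristic(p0)
r = math.sqrt(x[0]**2 + x[1]**2 + x[2]**2)
assert abs(r - abs(x[3])) < 1e-9   # the endpoint lies on the light cone
```

The angles $(l2, l3)$ play exactly the role of the parameters $\lambda_2$, $\lambda_3$ introduced just below.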
Any such bicharacteristic satisfies the system of integral equations \begin{equation} \label{eq:4.7} x^i = x^i_0 + \int\limits_{0}^{\lambda_1} \bigg{[} A^{i4} + \sum\limits_{j=1}^3 A^{ij}p_j \bigg{]} d \lambda= x^i_0 + \int\limits_{0}^{\lambda_1} T^i d \lambda, \end{equation} \begin{equation} \label{eq:4.8} x^4 = x^4_0 + \int\limits_{0}^{\lambda_1} \bigg{[} A^{44} + \sum\limits_{i=1}^3 A^{i4}p_i \bigg{]} d \lambda= x^4_0 + \int\limits_{0}^{\lambda_1} T^4 d \lambda, \end{equation} \begin{equation} \label{eq:4.9} p_i = p_i^0 - \int\limits_{0}^{\lambda_1} \frac{1}{2} \bigg{(} \frac{\partial F}{\partial x^i} -p_i \frac{\partial F}{\partial x^4}\bigg{)}d \lambda= p_i^0 + \int\limits_{0}^{\lambda_1} R_i d \lambda, \end{equation} where the $p_i^0$ verify the relation \begin{equation} \label{eq:4.10} A_{0}^{44} + 2 \sum\limits_{i=1}^3 A^{i4}_0 p_i^0 + \sum\limits_{i,j=1}^3 A^{ij}_0 p_i^0p_j^0 =0, \end{equation} where $A^{\lambda \mu}_0$ denotes the value of the coefficient $A^{\lambda \mu}$ at the vertex $M_0$ of the conoid $\Sigma_0$. We will assume that at the point $M_0$ the coefficients $A^{\lambda \mu}$ take the following values: \begin{equation} \label{eq:4.11} A^{44}_0=1,\; \; A^{i4}_0=0,\; \; A^{ij}_0 = - \delta^{ij}. \end{equation} Thus, Eq. ($\ref{eq:4.10}$) reads as \begin{equation} \label{eq:4.12} \sum\limits_{i=1}^3 (p_i^0)^2=1. \end{equation} To define the points of the surface $\Sigma_0$ we will introduce, besides the parameter $\lambda_1$, which fixes the position of a point on a given bicharacteristic, two new parameters $\lambda_2$ and $\lambda_3$, which vary with the bicharacteristic under consideration, by setting \begin{equation} \label{eq:4.13} p_1^0 = \sin \lambda_2 \cos \lambda_3, \hspace{0.5cm} p_2^0=\sin \lambda_2 \sin \lambda_3, \hspace{0.5cm} p_3^0=\cos \lambda_2.
\end{equation} The assumptions ($\ref{eq:4.11}$) make it possible to prove that there exists a number $\epsilon_1$ defining a variation domain $\Lambda$ of the parameters $\lambda_i$ by means of \begin{equation} \label{eq:4.14} |\lambda_1| \leq \epsilon_1, \; \; 0 \leq \lambda_2 \leq \pi, \; \; 0 \leq \lambda_3 \leq 2 \pi, \end{equation} such that the integral equations ($\ref{eq:4.7}$), ($\ref{eq:4.8}$) and ($\ref{eq:4.9}$) possess, within the domain ($\ref{eq:4.14}$), a unique solution, continuous and bounded, \begin{equation} \label{eq:4.15} x^\alpha = x^{\alpha}(x^\alpha_0, \lambda_1, \lambda_2, \lambda_3), \hspace{1cm} p_i=p_i (x^\alpha_0, \lambda_1, \lambda_2, \lambda_3), \end{equation} satisfying the inequalities \begin{equation*} |x^i - \bar{x}^i | \leq d, \hspace{1cm} |x^4| \leq \epsilon \end{equation*} and possessing partial derivatives, continuous and bounded, of the first three orders with respect to the redundant variables $\lambda_1$, $p_i^0$ (hence with respect to the three variables $\lambda_i$). The first four equations ($\ref{eq:4.15}$) define, as a function of the three parameters $\lambda_i$ varying within the domain $\Lambda$, a point of a domain $V$ of the characteristic conoid $\Sigma_0$. We shall be led, in the following part of this work, to consider other parametric representations of the domain $V$: \begin{description} \item[(1)] We shall take as independent parameters the three quantities $x^4$, $\lambda_2$, $\lambda_3$. The function $x^4(\lambda_1, \lambda_2, \lambda_3)$ satisfies the equation \begin{equation} \label{eq:4.16} x^4= \int\limits_{0}^{\lambda_1} T^4 d \lambda + x^4_0.
\end{equation} Now, it follows from ($\ref{eq:4.10}$) that, on $\Sigma_0$, one has \begin{equation} \label{eq:4.17} 2 \sum\limits_{i=1}^3 A^{i4}p_i = - \sum\limits_{i,j=1}^3 A^{ij}p_ip_j - A^{44} \geq - A^{44}, \end{equation} from which $T^4 \geq \frac{A^{44}}{2} >0$; $x^4$ is thus a monotonically increasing function of $\lambda_1$, and the correspondence between $(x^4, \lambda_2, \lambda_3)$ and $(\lambda_1, \lambda_2, \lambda_3)$ is bijective. \item[(2)] We shall take as representative parameters of a point of $\Sigma_0$ its three spatial coordinates $x^i$. The elimination of $\lambda_1$, $\lambda_2$, $\lambda_3$ among the four equations ($\ref{eq:4.15}$) yields $x^4$ as a function of the $x^i$. From the relation \begin{equation*} dx^4 + \sum\limits_{i=1}^3 p_i dx^i =0, \end{equation*} identically satisfied by the solutions of equations ($\ref{eq:4.7}$), ($\ref{eq:4.8}$) and ($\ref{eq:4.9}$) on the characteristic surface $\Sigma_0$, one infers that the partial derivatives of this function $x^4$ with respect to the $x^i$ verify the relation \begin{equation*} \frac{\partial x^4}{\partial x^i} = - p_i. \end{equation*} If we denote by $[\varphi]$ the value of a function $\varphi$ of the four coordinates $x^\alpha$ on the surface of the characteristic conoid $\Sigma_0$, it can be expressed as a function of the three variables of a parametric representation of $\Sigma_0$, in particular of the three coordinates $x^i$. The partial derivatives of this function with respect to the $x^i$ therefore fulfill: \begin{equation} \label{eq:4.18}\frac{\partial [\varphi]}{\partial x^i} = \bigg{[} \frac{\partial \varphi}{\partial x^i} \bigg{]} - \bigg{[} \frac{\partial \varphi}{\partial x^4} \bigg{]}p_i.
\end{equation} In the same manner it is possible to evaluate the derivatives $\big{[} \frac{\partial^2 \varphi}{\partial x^i \partial x^j} \big{]}$ and $\big{[} \frac{\partial^2 \varphi}{\partial x^i \partial x^4} \big{]}$, which are \begin{equation*} \bigg{[} \frac{\partial^2 \varphi}{\partial x^i \partial x^4} \bigg{]} = \frac{\partial}{\partial x^i} \bigg{[} \frac{\partial \varphi}{\partial x^4} \bigg{]} + \bigg{[} \frac{\partial^2 \varphi}{\partial (x^4)^2} \bigg{]}p_i, \end{equation*} \begin{equation*} \begin{split} \bigg{[} \frac{\partial^2 \varphi}{\partial x^i \partial x^j} \bigg{]} = & \frac{\partial^2 [\varphi]}{\partial x^i \partial x^j} + \frac{\partial}{\partial x^i} \bigg{[} \frac{\partial \varphi}{\partial x^4} \bigg{]}p_j + \frac{\partial}{\partial x^j} \bigg{[} \frac{\partial \varphi}{\partial x^4} \bigg{]}p_i + \bigg{[} \frac{\partial \varphi}{\partial x^4} \bigg{]}\frac{\partial p_i}{\partial x^j} \\ & + \bigg{[} \frac{\partial^2 \varphi}{\partial (x^4)^2} \bigg{]}p_i p_j. \end{split} \end{equation*} These identities make it possible to write the following relations satisfied by the unknown functions $u_s$ on the characteristic conoid: \begin{equation}\begin{split} \label{eq:4.19}[E_r] = &\sum\limits_{i,j=1}^3 [A^{ij}] \frac{\partial^2 [u_r]}{\partial x^i \partial x^j} + \Biggl\{ \sum\limits_{i,j=1}^3 [A^{ij}]p_ip_j + 2 \sum\limits_{i=1}^3 [A^{i4}]p_i + [A^{44}] \Biggr\} \bigg{[} \frac{\partial^2 u_r}{\partial (x^4)^2} \bigg{]} \\ & + 2 \sum\limits_{i=1}^3 \Biggl\{ \sum\limits_{j=1}^3 [A^{ij}]p_j + [A^{i4}] \Biggr\} \frac{\partial}{\partial x^i} \bigg{[}\frac{\partial u_r}{\partial x^4} \bigg{]} + \bigg{[}\frac{\partial u_r}{\partial x^4} \bigg{]}\sum\limits_{i,j=1}^3 [A^{ij}]\frac{\partial p_i}{\partial x^j} \\ & + \sum\limits_{s=1}^n \sum\limits_{\mu=1}^4 [{{B^s}_r}^\mu] \bigg{[}\frac{\partial u_s}{\partial x^\mu} \bigg{]} + [f_r] =0.
\end{split}\end{equation} The coefficient of the term $\big{[} \frac{\partial^2 u_r}{\partial (x^4)^2} \big{]}$ is the value on the characteristic conoid of the left-hand side of Eq. ($\ref{eq:4.5}$); it therefore vanishes. We might on the other hand have expected the equations $[E_r]=0$ to contain no second derivatives of the functions $u_r$ other than those obtained by differentiation on the surface $\Sigma_0$, since the assignment on a characteristic surface of the unknown functions $[u_r]$ and of their first derivatives $\big{[} \frac{\partial u_r}{\partial x^\alpha} \big{]}$ is unable to determine the full set of second derivatives. \end{description} \section{Integral Equations for Derivatives of $x^i$ and $p_i$} Let us now differentiate Eqs. ($\ref{eq:4.7}$), ($\ref{eq:4.8}$) and ($\ref{eq:4.9}$) under the integral sign with respect to the $p_j^0$; with the abbreviations $y^h_j$, $z^h_j$, etc. defined at the end of this section, they read as \begin{equation} \label{eq:4.20} \frac{\partial x^i}{\partial p_j^0} = \int\limits_{0}^{\lambda_1} \frac{\partial T^i}{\partial p_j^0} d \lambda = \int \limits_{0}^{\lambda_1} \sum\limits_{h=1}^3 \Biggl\{ \frac{\partial}{\partial x^h} \bigg{(} \sum\limits_{k=1}^3 [A^{ik}]p_k + [A^{i4}] \bigg{)} y_j^h + [A^{ih}]z_j^h \Biggr\} d\lambda, \end{equation} \begin{equation} \label{eq:4.21} \frac{\partial p_i}{\partial p_j^0} = \int\limits_{0}^{\lambda_1} \frac{\partial R_i}{\partial p_j^0} d \lambda = \int \limits_{0}^{\lambda_1} \sum\limits_{k=1}^3 \bigg{(} \frac{\partial R_i}{\partial x^k} \frac{\partial x^k}{\partial p_j^0} + \frac{\partial R_i}{\partial p_k} \frac{\partial p_k}{\partial p_j^0} \bigg{)}d\lambda, \end{equation} \begin{equation} \label{eq:4.22} \frac{\partial^2 x^i}{\partial p_j^0 \partial p_k^0} = \int\limits_{0}^{\lambda_1} \frac{\partial^2 T^i}{\partial p_j^0 \partial p_k^0} d \lambda = \int \limits_{0}^{\lambda_1} \Biggl\{ \sum\limits_{h=1}^3 \bigg{(} \frac{\partial T^i}{\partial x^h} \frac{\partial^2 x^h}{\partial p_j^0 \partial p_k^0} + \frac{\partial T^i}{\partial p_h}
\frac{\partial^2 p_h}{\partial p_j^0 \partial p_k^0} \bigg{)} + \phi^i_{jk} \Biggr\} d\lambda, \end{equation} \begin{equation} \label{eq:4.23} \frac{\partial^2 p_i}{\partial p_j^0 \partial p_k^0} = \int\limits_{0}^{\lambda_1} \frac{\partial^2 R_i}{\partial p_j^0 \partial p_k^0} d \lambda = \int \limits_{0}^{\lambda_1} \Biggl\{ \sum\limits_{h=1}^3 \bigg{(} \frac{\partial R_i}{\partial x^h} \frac{\partial^2 x^h}{\partial p_j^0 \partial p_k^0} + \frac{\partial R_i}{\partial p_h} \frac{\partial^2 p_h}{\partial p_j^0 \partial p_k^0} \bigg{)} + \psi^i_{jk} \Biggr\} d\lambda, \end{equation} \begin{equation}\begin{split} \label{eq:4.24} \frac{\partial^3 x^i}{\partial p_j^0 \partial p_h^0 \partial p_k^0} = &\int\limits_{0}^{\lambda_1} \frac{\partial^3 T^i}{\partial p_j^0 \partial p_h^0 \partial p_k^0} d \lambda = \int \limits_{0}^{\lambda_1} \Biggl\{ \sum\limits_{l=1}^3 \bigg{(} \frac{\partial T^i}{\partial x^l} \frac{\partial^3 x^l}{\partial p_j^0 \partial p_h^0 \partial p_k^0} \\ &+ \frac{\partial T^i}{\partial p_l} \frac{\partial^3 p_l}{\partial p_j^0 \partial p_h^0 \partial p_k^0} \bigg{)} + \phi^i_{jhk} \Biggr\} d\lambda, \end{split}\end{equation} \begin{equation} \begin{split} \label{eq:4.25} \frac{\partial^3 p_i}{\partial p_j^0 \partial p_h^0 \partial p_k^0} = &\int\limits_{0}^{\lambda_1} \frac{\partial^3 R_i}{\partial p_j^0 \partial p_h^0 \partial p_k^0} d \lambda = \int \limits_{0}^{\lambda_1} \Biggl\{ \sum\limits_{l=1}^3 \bigg{(} \frac{\partial R_i}{\partial x^l} \frac{\partial^3 x^l}{\partial p_j^0 \partial p_h^0 \partial p_k^0} \\ &+ \frac{\partial R_i}{\partial p_l} \frac{\partial^3 p_l}{\partial p_j^0 \partial p_h^0\partial p_k^0} \bigg{)} + \psi^i_{jhk} \Biggr\} d\lambda, \end{split}\end{equation} where $\phi^i_{jk}$ and $\psi^i_{jk}$ are polynomials in the functions $p_i(\lambda)$, $\frac{\partial x^i}{\partial p_j^0}(\lambda)$, $\frac{\partial p_i}{\partial p_j^0}(\lambda)$, in the coefficients $A^{\lambda \mu}(x^\alpha)$ and in their partial derivatives
with respect to the $x^\alpha$ up to the third order included. The $\phi^i_{jhk}$ and $\psi^i_{jhk}$ are instead polynomials in the functions $p_i(\lambda)$, $\frac{\partial x^i}{\partial p_j^0}(\lambda)$, $\frac{\partial p_i}{\partial p_j^0}(\lambda)$, $\frac{\partial^2 x^i}{\partial p_j^0 \partial p_h^0}(\lambda)$, $\frac{\partial^2 p_i}{\partial p_j^0 \partial p_h^0}(\lambda)$, as well as in the coefficients $A^{\lambda \mu}$ and their partial derivatives up to the fourth order included. In these functions the $x^\alpha$ are replaced by the $x^\alpha(\lambda)$ given by ($\ref{eq:4.15}$). If we set \begin{equation*} \frac{\partial x^i}{\partial p_j^0} \equiv y^i_j, \hspace{0.5cm} \frac{\partial^2 x^i}{\partial p_j^0 \partial p_h^0} \equiv y^i_{jh}, \hspace{0.5cm} \frac{\partial^3 x^i}{\partial p_j^0 \partial p_h^0 \partial p_k^0} \equiv y^i_{jhk}, \end{equation*} \begin{equation*} \frac{\partial p_i}{\partial p_j^0} \equiv z^i_j, \hspace{0.5cm} \frac{\partial^2 p_i}{\partial p_j^0 \partial p_h^0} \equiv z^i_{jh}, \hspace{0.5cm} \frac{\partial^3 p_i}{\partial p_j^0 \partial p_h^0 \partial p_k^0} \equiv z^i_{jhk},\end{equation*} \begin{equation*} T^i_j \equiv \frac{\partial T^i}{\partial p_j^0}, \hspace{0.5cm} T^i_{jk} \equiv \frac{\partial^2 T^i}{\partial p_j^0 \partial p_k^0}, \hspace{0.5cm} T^i_{jhk} \equiv \frac{\partial^3 T^i}{\partial p_j^0 \partial p_h^0 \partial p_k^0}, \end{equation*} \begin{equation*} R^i_j \equiv\frac{\partial R_i}{\partial p_j^0}, \hspace{0.5cm} R^i_{jk} \equiv \frac{\partial^2 R_i}{\partial p_j^0 \partial p_k^0}, \hspace{0.5cm} R^i_{jhk} \equiv \frac{\partial^3 R_i}{\partial p_j^0 \partial p_h^0 \partial p_k^0};\end{equation*} Eqs.
($\ref{eq:4.20}$), ($\ref{eq:4.21}$), ($\ref{eq:4.22}$), ($\ref{eq:4.23}$), ($\ref{eq:4.24}$) and ($\ref{eq:4.25}$) read as \begin{equation} \begin{split} \label{eq:4.26} & y^i_j = \int\limits_{0}^{\lambda_1} T^i_j d\lambda, \hspace{0.5cm} z^i_j = \int\limits_{0}^{\lambda_1} R^i_j d\lambda, \hspace{0.5cm} y^i_{jk} = \int\limits_{0}^{\lambda_1} T^i_{jk} d\lambda, \\ & z^i_{jk} = \int\limits_{0}^{\lambda_1} R^i_{jk} d\lambda, \hspace{0.5cm} y^i_{jhk} = \int\limits_{0}^{\lambda_1} T^i_{jhk} d\lambda, \hspace{0.5cm} z^i_{jhk} = \int\limits_{0}^{\lambda_1} R^i_{jhk} d\lambda. \end{split} \end{equation} \section{The Auxiliary Functions $\sigma^r_s$} Let us now form $n^2$ linear combinations $\sum\limits_{r=1}^n \sigma^r_s [E_r]$ of the Eqs. ($\ref{eq:4.19}$) satisfied by the unknown functions within the domain $V$ of $\Sigma_0$, the $\sigma^r_s$ denoting $n^2$ auxiliary functions which possess at $M_0$ a singularity. If we set \begin{equation*} M (\varphi)= \sum\limits_{i,j=1}^3 [A^{ij}] \frac{\partial^2 \varphi}{\partial x^i \partial x^j}, \end{equation*} $\varphi$ denoting an arbitrary function of the three variables $x^i$, it is possible to perform the stated $n^2$ linear combinations as \begin{equation} \begin{split} \label{eq:4.27} \sum\limits_{r=1}^n \sigma^r_s [E_r] & = \sum\limits_{r=1}^n \Biggl\{ M ([u_r]) + 2 \sum\limits_{i=1}^3 \bigg{(} \sum\limits_{j=1}^3 [A^{ij}]p_j + [A^{i4}] \bigg{)} \frac{\partial}{\partial x^i} \bigg{[} \frac{\partial u_r}{\partial x^4} \bigg{]} \\ & + \bigg{[} \frac{\partial u_r}{\partial x^4} \bigg{]} \sum\limits_{i,j=1}^3 [A^{ij}] \frac{\partial p_i}{\partial x^j} + \sum\limits_{t=1}^n \sum\limits_{\mu=1}^4 [{{B^{t}}_r}^\mu ] \bigg{[} \frac{\partial u_t}{\partial x^\mu} \bigg{]} + [f_r] \Biggr\} \sigma^r_s=0.
\end{split} \end{equation} We will transform these equations in such a way that a divergence occurs therein, whose volume integral will get transformed into a surface integral, while the remaining terms will contain only $[u_r]$ and $\big{[} \frac{\partial u_r}{\partial x^4} \big{]}$. We will use for that purpose the following identity, satisfied by any two functions $\varphi$ and $\psi$ of the three variables $x^i$: \begin{equation*} \psi M(\varphi) = \sum\limits_{i,j=1}^3 \frac{\partial}{\partial x^i} \bigg{(} [A^{ij}]\psi \frac{\partial \varphi}{\partial x^j} \bigg{)} -\sum\limits_{i,j=1}^3 \frac{\partial \varphi}{\partial x^j} \frac{\partial}{\partial x^i} ([A^{ij}]\psi ) \end{equation*} or \begin{equation*} \psi M(\varphi) =\sum\limits_{i,j=1}^3 \frac{\partial}{\partial x^i} \bigg{(} [A^{ij}]\psi \frac{\partial \varphi}{\partial x^j} - \varphi \frac{\partial}{\partial x^j} ([A^{ij}] \psi) \bigg{)} + \varphi \bar{M}(\psi), \end{equation*} where $\bar{M}$ is the adjoint operator of $M$, i.e. \begin{equation*} \bar{M}(\psi)= \sum\limits_{i,j=1}^3 \frac{\partial^2 ([A^{ij}]\psi)}{\partial x^i \partial x^j}, \end{equation*} and the identity ($\ref{eq:4.18}$) yields here \begin{equation*} \bigg{[} \frac{\partial u_r}{\partial x^i} \bigg{]} = \frac{\partial [u_r]}{\partial x^i} + p_i \bigg{[} \frac{\partial u_r}{\partial x^4} \bigg{]}.
\end{equation*} Thus, $\sum\limits_{r=1}^n \sigma^r_s[E_r]$ takes the form \begin{equation*} \sum\limits_{r=1}^n \sigma^r_s[E_r] = \sum\limits_{i=1}^3 \frac{\partial}{\partial x^i}E_s^i + \sum\limits_{r=1}^n [u_r]L_s^r +\sum\limits_{r=1}^n \sigma^r_s [f_r] - \sum\limits_{r=1}^n \bigg{[} \frac{\partial u_r}{\partial x^4}\bigg{]} D_s^r, \end{equation*} where we have defined \begin{equation} \begin{split} \label{eq:4.28} E_s^i =& \sum\limits_{j=1}^3 \sum\limits_{r=1}^n \bigg{(} [A^{ij}]\sigma_s^r \frac{\partial [u_r]}{\partial x^j} - [u_r]\frac{\partial}{\partial x^j}([A^{ij}]\sigma_s^r) \bigg{)} + 2 \sum\limits_{r=1}^n \sigma_s^r \Biggl\{ \sum\limits_{j=1}^3[A^{ij}]p_j \\ &+ [A^{i4}] \Biggr\} \bigg{[}\frac{\partial u_r}{\partial x^4} \bigg{]} + \sum\limits_{r,t=1}^n[B_t^{ri}][u_t]\sigma_s^r, \end{split} \end{equation} \begin{equation} \label{eq:4.29} L_s^r = \bar{M}(\sigma_s^r) - \sum\limits_{i=1}^3\sum\limits_{t=1}^n \frac{\partial}{\partial x^i} ([B_t^{ri}] \sigma_s^t ), \end{equation} \begin{equation} \begin{split} \label{eq:4.30} D_s^r =& \sigma_s^r \Biggl\{ 2 \sum\limits_{i=1}^3 \frac{\partial}{\partial x^i} \bigg{(}\sum\limits_{j=1}^3 [A^{ij}]p_j + [A^{i4}] \bigg{)} - \sum\limits_{i,j=1}^3[A^{ij}] \frac{\partial p_j}{\partial x^i} + 2 \sum\limits_{i=1}^3 \bigg{(} \sum\limits_{j=1}^3 [A^{ij}]p_j \\ & + [A^{i4}] \bigg{)} \frac{\partial \sigma_s^r}{\partial x^i} - \sum\limits_{t=1}^n ([B_t^{r4}] + \sum\limits_{i=1}^3 [B_t^{ri}]p_i) \sigma_s^t. \end{split} \end{equation} We will choose the auxiliary functions $\sigma_s^r$ in such a way that, in every equation, the coefficient of $\big{[} \frac{\partial u_r}{\partial x^4} \big{]}$ vanishes. These functions will therefore fulfill $n^2$ first-order partial differential equations \begin{equation} \label{eq:4.31} D_s^r=0. \end{equation} If we look for a solution of the form $\sigma_s^r = \sigma \omega_s^r$, where $\sigma$ is infinite at the point $M_0$ and the $\omega_s^r$ are bounded, Eq.
($\ref{eq:4.31}$) reads as \begin{equation} \begin{split} \label{eq:4.32} & \sigma_s^r \Biggl\{ \sum\limits_{i=1}^3 \frac{\partial}{\partial x^i} \bigg{(} \sum\limits_{j=1}^3[A^{ij}]p_j + [A^{i4}] \bigg{)} + \sum\limits_{i,j=1}^3 p_j \frac{\partial}{\partial x^i}[A^{ij}] + \sum\limits_{i=1}^3 \frac{\partial }{\partial x^i}[A^{i4}] \Biggr\} \\ & - \sum\limits_{t=1}^n \bigg{(}[B_t^{r4}] + \sum\limits_{i=1}^3[B_t^{ri}]p_i \bigg{)}\sigma_s^t + 2 \sum\limits_{i=1}^3 \bigg{(}\sum\limits_{j=1}^3[A^{ij}]p_j + [A^{i4}]\bigg{)} \frac{\partial \sigma_s^r}{\partial x^i} = 0. \end{split} \end{equation} The coefficients $A^{\lambda \mu}$, $B_s^{t \lambda}$, the first derivatives of the $A^{\lambda \mu}$ and the functions $p_i$ are bounded within the domain $V$; the coefficients of these linear first-order partial differential equations are therefore a sum of bounded terms, with the possible exception of the term \begin{equation*} \sum\limits_{i=1}^3 \frac{\partial }{\partial x^i} \Biggl\{ \sum\limits_{j=1}^3 [A^{ij}]p_j + [A^{i4}] \Biggr\}. \end{equation*} We will therefore choose the $\omega_s^r$, which we want to be bounded, as satisfying the equation \begin{equation} \begin{split} \label{eq:4.33} & \omega_s^r \bigg{(} \sum\limits_{i,j=1}^3p_j \frac{\partial}{\partial x^i}[A^{ij}] + \sum\limits_{i=1}^3 \frac{\partial}{\partial x^i} [A^{i4}]\bigg{)} - \sum\limits_{t=1}^n \omega_s^t \Biggl\{[B_t^{r4}] +\sum\limits_{i=1}^3[B_t^{ri}]p_i \Biggr\} \\ & + 2\sum\limits_{i=1}^3 \Biggl\{\sum\limits_{j=1}^3 [A^{ij}]p_j + [A^{i4}] \Biggr\} \frac{\partial \omega_s^r}{\partial x^i}=0, \end{split}\end{equation} $\sigma$ fulfilling in turn \begin{equation}\label{eq:4.34} \sigma \sum\limits_{i=1}^3 \frac{\partial}{\partial x^i} \bigg{(}\sum\limits_{j=1}^3 [A^{ij}]p_j + [A^{i4}] \bigg{)} + 2\sum\limits_{i=1}^3 \bigg{(}\sum\limits_{j=1}^3 [A^{ij}]p_j + [A^{i4}] \bigg{)} \frac{\partial \sigma}{\partial x^i} =0. \end{equation} Our task is to evaluate the $\omega_s^r$ and then the $\sigma_s^r$.
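Along each bicharacteristic the equations just written are linear first-order ODEs in $\lambda_1$, and they will be handled as Volterra-type integral equations solved by successive approximations. A toy scalar version of that scheme (constant coefficient $Q$ and pure-Python trapezoidal quadrature, both our simplifying assumptions), showing the iterates converge to the exact transport solution $e^{Q\lambda_1}$:

```python
import math

def picard(Q, lam, n_iter=30, steps=200):
    """Iterate w_{m+1}(l) = 1 + int_0^l Q * w_m dl' on a grid (trapezoid rule),
    starting from w_0 = 1, i.e. the identity datum at lambda_1 = 0."""
    h = lam / steps
    grid = [i * h for i in range(steps + 1)]
    w = [1.0] * (steps + 1)
    for _ in range(n_iter):
        new = [1.0]
        acc = 0.0
        for i in range(steps):
            acc += 0.5 * h * Q * (w[i] + w[i + 1])   # trapezoidal increment
            new.append(1.0 + acc)
        w = new
    return grid, w

grid, w = picard(Q=0.8, lam=1.0)
# the fixed point approximates the exact solution exp(Q * lambda_1)
assert max(abs(wi - math.exp(0.8 * li)) for li, wi in zip(grid, w)) < 1e-4
```

The contraction estimate behind this convergence is the same mechanism that, in the next section, guarantees a unique bounded solution of the integral equations for the $\omega_s^r$ for a suitable choice of $\epsilon_1$.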
\section{Evaluation of the $\omega_s^r$ and $\sigma$} Let us consider equation ($\ref{eq:4.33}$); it can be written in the form of integral equations analogous to Eqs. ($\ref{eq:4.7}$), ($\ref{eq:4.8}$) and ($\ref{eq:4.9}$), obtained in the search for the conoid $\Sigma_0$. We have indeed, on $\Sigma_0$: \begin{equation*} \sum\limits_{j=1}^3[A^{ij}]p_j + [A^{i4}] = T^i = \frac{\partial x^i}{\partial \lambda_1}, \end{equation*} from which, for an arbitrary function $\varphi$ defined on $\Sigma_0$, \begin{equation*}\sum\limits_{i=1}^3T^i \frac{\partial \varphi}{\partial x^i}= \frac{\partial \varphi}{\partial \lambda_1}. \end{equation*} Let us impose upon the $\omega_s^r$ the limiting conditions $\omega_s^r = \delta_s^r$ for $\lambda_1=0$. These quantities therefore satisfy the integral equations \begin{equation} \label{eq:4.35} \omega_s^r = \int\limits_{0}^{\lambda_1} \bigg{(}\sum\limits_{t=1}^n \mathcal{Q}_t^r \omega_s^t + \mathcal{Q}\omega_s^r \bigg{)} d \lambda + \delta_s^r, \end{equation} where we have defined \begin{equation*} \mathcal{Q}_t^r = \frac{1}{2} ([B_t^{r4}] + \sum\limits_{i=1}^3[B_t^{ri}]p_i) \; \rm{and} \; \mathcal{Q}= - \frac{1}{2} \bigg{(}\sum\limits_{i,j=1}^3 p_j \frac{\partial}{\partial x^i} [A^{ij}] + \sum\limits_{i=1}^3 \frac{\partial}{\partial x^i}[A^{i4}] \bigg{)}. \end{equation*} The assumptions made upon the coefficients $A^{\lambda \mu}$ and $B_s^{r \lambda}$, together with the results obtained on the functions $x^i$, $p_i$, make it moreover possible to prove that, for a convenient choice of $\epsilon_1$, these equations have a unique, continuous, bounded solution, which has continuous and bounded partial derivatives of the first two orders with respect to the $p_i^0$ within the domain $\Lambda$. We will denote these derivatives by $\omega^r_{si}$ and $\omega^r_{sij}$. Once we have found the $\omega_s^r$, let us consider the Eq. ($\ref{eq:4.34}$) verified by $\sigma$.
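The existence and uniqueness claim for Eq. ($\ref{eq:4.35}$) rests on the usual Picard (successive-approximation) argument for Volterra-type integral equations. A minimal numerical sketch of that iteration, for a single scalar unknown with an assumed constant kernel $q$ (a hypothetical stand-in for the bounded kernels $\mathcal{Q}^r_t$, $\mathcal{Q}$), is:

```python
import math

def picard_solve(q, lam_max, n_steps=200, n_iter=25):
    """Solve omega(l) = 1 + integral_0^l q(s) * omega(s) ds by Picard iteration."""
    h = lam_max / n_steps
    grid = [i * h for i in range(n_steps + 1)]
    omega = [1.0] * (n_steps + 1)  # start from the limiting value at lambda_1 = 0
    for _ in range(n_iter):
        new = [1.0]
        acc = 0.0
        for i in range(1, n_steps + 1):
            # trapezoidal rule for the integral up to grid[i], using the previous iterate
            acc += 0.5 * h * (q(grid[i - 1]) * omega[i - 1] + q(grid[i]) * omega[i])
            new.append(1.0 + acc)
        omega = new
    return omega[-1]
```

For a constant kernel the exact solution is $\omega(\lambda) = e^{q\lambda}$, and on a sufficiently small interval the integral operator is a contraction, so the iterates converge geometrically, exactly as in the argument used for Eq. ($\ref{eq:4.35}$).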
We know that \begin{equation*} \sum\limits_{i=1}^3 \bigg{(} \sum\limits_{j=1}^3 [A^{ij}]p_j + [A^{i4}] \bigg{)} \frac{\partial \sigma}{\partial x^i} = \frac{\partial \sigma}{\partial \lambda_1}, \end{equation*} and we are going to evaluate the coefficient of $\sigma$, \begin{equation*} \sum\limits_{i=1}^3 \frac{\partial}{\partial x^i} \bigg{(} \sum\limits_{j=1}^3 [A^{ij}]p_j + [A^{i4}]\bigg{)}, \end{equation*} by relating it very simply to the determinant \begin{equation*} \frac{D(x^1, x^2, x^3)}{D(\lambda_1, \lambda_2, \lambda_3)} \equiv J_{x \lambda}. \end{equation*} This Jacobian $J_{x \lambda}$ of the change of variables $x^i = x^i (\lambda_j)$ on the conoid $\Sigma_0$ has as elements \begin{equation} \label{eq:4.36} \frac{\partial x^i}{\partial \lambda_1}=T^i, \hspace{0.5cm} \frac{\partial x^i}{\partial \lambda_2}= \sum\limits_{j=1}^3 {y^i}_j \frac{\partial p_j^0}{\partial \lambda_2}, \hspace{0.5cm} \frac{\partial x^i}{\partial \lambda_3}= \sum\limits_{j=1}^3 {y^i}_j \frac{\partial p_j^0}{\partial \lambda_3}. \end{equation} Let us denote by $J^i_j$ the minor relative to the element $\frac{\partial x^i}{\partial \lambda_j}$ of the determinant $J_{x \lambda}$. An arbitrary function $\varphi$, defined on $\Sigma_0$, verifies the identity \begin{equation*} \frac{\partial \varphi}{\partial x^i} = \sum\limits_{j=1}^3 \frac{J^j_i}{J_{x \lambda}} \frac{\partial \varphi}{\partial \lambda_j}.
\end{equation*} Let us apply this formula to the function $\frac{\partial x^i}{\partial \lambda_1} = T^i$: \begin{equation*} \sum\limits_{i=1}^3 \frac{\partial}{\partial x^i}T^i = \sum\limits_{i,j=1}^3 \frac{J^i_j}{J_{x \lambda}} \frac{\partial}{\partial \lambda_j} T^i = \sum\limits_{i,j=1}^3 \frac{J^i_j}{J_{x \lambda}} \frac{\partial}{\partial \lambda_1} \bigg{(} \frac{ \partial x^i}{\partial \lambda_j} \bigg{)}, \end{equation*} $J^i_j$ being the minor relative to the element $\frac{\partial x^i}{\partial \lambda_j}$ of the determinant $J_{x \lambda}$, we have \begin{equation*} \sum\limits_{i=1}^3 \frac{\partial}{\partial x^i}T^i = \frac{1}{J_{x \lambda}} \frac{ \partial J_{x \lambda}}{\partial \lambda_1}. \end{equation*} Thus, the function $\sigma$ verifies the relation \begin{equation*} \sigma \frac{\partial J_{x \lambda}}{ \partial \lambda_1} + 2 J_{x \lambda} \frac{\partial \sigma}{\partial \lambda_1} = 0, \end{equation*} whose general solution is \begin{equation*} \sigma = \frac{ f (\lambda_2, \lambda_3)}{ {|J_{x \lambda}|}^{\frac{1}{2}}}, \end{equation*} where $f$ denotes an arbitrary function. For $\lambda_1=0$ the determinant $J_{x \lambda}$ vanishes, because the $y^i_j$ vanish; the function $\sigma$ is therefore infinite there. The coefficients $A^{\lambda \mu}$ and their first and second partial derivatives with respect to the $x^\alpha$ being continuous and bounded within the domain $V$ of $\Sigma_0$, as well as the functions $x^i$, $y^i_j$, $z^i_j$, we have \begin{equation} \label{eq:4.37} \lim_{\lambda_1 \to 0} \frac{y^i_j}{\lambda_1} = [A^{ij}]_{\lambda_1 =0} = - \delta^i_j.
\end{equation} By dividing the second and third rows of $J_{x \lambda}$ by $\lambda_1$ we obtain a determinant equal to $\frac{J_{x \lambda}}{(\lambda_1)^2}$; we deduce from the formulas ($\ref{eq:4.36}$) and ($\ref{eq:4.37}$) \begin{equation*} \begin{split} \lim_{\lambda_1 \to 0} \frac{J_{x \lambda}}{(\lambda_1)^2}& = {\rm det} \left ( {\begin{array}{ccc} - \sin (\lambda_2)\cos(\lambda_3) & - \sin (\lambda_2)\sin(\lambda_3) & - \cos(\lambda_2) \\ - \cos (\lambda_2)\cos(\lambda_3)& - \cos (\lambda_2)\sin(\lambda_3) & \sin (\lambda_2) \\ + \sin (\lambda_2)\sin(\lambda_3) & - \sin (\lambda_2)\cos(\lambda_3) & 0 \\ \end{array} } \right ) \\ & = - \sin(\lambda_2). \end{split} \end{equation*} As a matter of fact: \begin{equation*} \lim_{\lambda_1 \to 0} T^i = - \sum\limits_{j=1}^3 \delta^j_ip_j^0 = - p_i^0, \end{equation*} \begin{equation*} \lim_{\lambda_1 \to 0} \frac{1}{\lambda_1} \frac{\partial x^i}{\partial \lambda_u} = \lim_{\lambda_1 \to 0} \sum\limits_{j=1}^3 \frac{y^i_j}{\lambda_1} \frac{\partial p_j^0}{\partial \lambda_u} = - \sum\limits_{j=1}^3 \delta^i_j \frac{\partial p_j^0}{\partial \lambda_u } = - \frac{\partial p_i^0}{\partial \lambda_u}. \end{equation*} We will take for auxiliary function $\sigma$ the function \begin{equation*} \sigma= {\bigg{|} \frac{ \sin(\lambda_2)}{J_{x \lambda}} \bigg{|}}^\frac{1}{2}. \end{equation*} We will then have $\lim_{\lambda_1 \to 0} \sigma \lambda_1=1$. \section{Derivatives of the Functions $\sigma^r_s$} Let us now consider \begin{equation*} D^r_s=0. \end{equation*} These equations possess a solution having at $M_0$ the desired singularity. If the auxiliary functions $\sigma^r_s$ verify these $n^2$ relations, the equations, verified by the unknown functions $u_r$ on the characteristic conoid $\Sigma_0$, take the simple form \begin{equation} \label{eq:4.38} \sum\limits_{r=1}^n ([u_r]L^r_s + \sigma^r_s [f_r] ) + \sum\limits_{i=1}^3 \frac{\partial}{\partial x^i}E^i_s=0.
\end{equation} We will integrate these equations with respect to the three variables $x^i$ on a portion $V_\eta$ of hypersurface of the characteristic conoid $\Sigma_0$, limited by the hypersurfaces $x^4=0$ and $x^4 = x^4_0 - \eta$. This domain $V_\eta$ is simply connected and internal to the domain $V$ if the coordinate $x^4_0$ is sufficiently small. As a matter of fact, \begin{equation*} |x^4_0| < \epsilon \; \rm{implies} \; \rm{within} \; V_\eta \; |x^4-x^4_0| < \epsilon_0. \end{equation*} The formula ($\ref{eq:4.16}$) shows in such a case that, for a suitable choice of $\epsilon_0$, we will have $\lambda_1 \leq \epsilon_1$. Since the boundary of $V_\eta$ consists of the two-dimensional domains $S_0$ and $S_\eta$ cut over $\Sigma_0$ from the hypersurfaces $x^4=0$, $x^4=x^4_0 - \eta$, we will have, upon integrating Eq. ($\ref{eq:4.38}$) within $V_\eta$, the following fundamental relations: \begin{equation} \begin{split}\label{eq:4.39} & \underbrace{ \int \int \int}_{V_\eta} \sum\limits_{r=1}^n \{ [u_r]L^r_s + \sigma^r_s[f_r] \} dV + \underbrace{\int \int}_{S_\eta} \sum\limits_{i=1}^3 E^i_s \cos(n,x^i)dS \\ & - \underbrace{ \int \int}_{S_0} \sum\limits_{i=1}^3 E^i_s \cos(n,x^i)dS=0, \end{split}\end{equation} where $dV$, $dS$ and $\cos(n,x^i)$ denote, in the space of the three variables $x^i$, the volume element, the area element of a surface $x^4={\rm const.}$ and the direction cosines of the outward-pointing normal to one of such surfaces, respectively. Equation ($\ref{eq:4.38}$) contains, on the one hand, the values on $\Sigma_0$ of the unknown functions $u_r$ and of their partial derivatives, as well as the functions $p_i$, $y$ and $z$; on the other hand, the functions $\sigma^r_s$ and their first and second partial derivatives. Let us study therefore the partial derivatives of the first two orders of the functions $\sigma$ and $\omega^r_s$.
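The limit value $-\sin(\lambda_2)$ of $\frac{J_{x \lambda}}{(\lambda_1)^2}$ obtained above can be checked numerically. The sketch below builds the limiting matrix from the spherical parametrization $p^0 = (\sin\lambda_2\cos\lambda_3, \sin\lambda_2\sin\lambda_3, \cos\lambda_2)$ (assumed here, as in the computation of the limits of $T^i$ and $\lambda_1^{-1}\,\partial x^i/\partial\lambda_u$):

```python
import math

def det3(m):
    # cofactor expansion of a 3x3 determinant along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def limit_jacobian(l2, l3):
    # rows: the limits of T^i and of (1/l1) dx^i/dl_u as l1 -> 0, i.e.
    # -p^0, minus the l2-derivative of p^0, minus the l3-derivative of p^0
    s2, c2 = math.sin(l2), math.cos(l2)
    s3, c3 = math.sin(l3), math.cos(l3)
    return [[-s2 * c3, -s2 * s3, -c2],
            [-c2 * c3, -c2 * s3, s2],
            [s2 * s3, -s2 * c3, 0.0]]
```

For every $(\lambda_2, \lambda_3)$ the determinant of this matrix equals $-\sin\lambda_2$, in agreement with the limit computed in the text.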
Since we have seen that $\sigma= {\big{|} \frac{\sin(\lambda_2)}{J_{x \lambda}} \big{|}}^\frac{1}{2}$, it is a function of the trigonometric functions of the $\lambda_u$ (with $u=2, 3$), of the functions $x^\alpha$ (through the intermediate effect of the $A^{\lambda \mu}$) and of the functions $p_i$, $y^i_j$. The first and second partial derivatives of $\sigma$ with respect to the $x^i$ will therefore be expressed with the help of the functions listed and of their first and second partial derivatives. \textbf{First derivatives of $\sigma$:} We have seen that the partial derivatives with respect to the $x^i$ of an arbitrary function $\varphi$, defined on $\Sigma_0$, satisfy the identity \begin{equation} \label{eq:4.40} \frac{\partial \varphi}{\partial x^i} = \sum\limits_{j=1}^3 \frac{ J^j_i}{J_{x \lambda}} \frac{\partial \varphi}{\partial \lambda_j}, \end{equation} where $\frac{ J^j_i}{J_{x \lambda}}$ is a given function of $\cos(\lambda_u)$, $\sin(\lambda_u)$, $x^\alpha$, $p_i$, $y^j_i$. The partial derivatives with respect to $\lambda_1$ of the functions $x^i$, $p_i$, $y^j_i$ are the quantities $T^i$, $R_i$, $T^i_j$, which are expressed through these functions themselves and through $z^j_i$; the partial derivatives with respect to $\lambda_u$ of these functions $x^i$, $p_i$, $y^j_i$ are expressible by means of their derivatives with respect to the overabundant parameters $p_h^0$, denoted here by $y^i_h$, $z^i_h$, $y^i_{jh}$, and by means of $\cos(\lambda_u)$, $\sin(\lambda_u)$. The function $\sigma$ admits therefore within $V$, under the assumptions made, first partial derivatives with respect to the $x^i$ which are expressible by means of the functions $x^\alpha$ (with the intermediate help of the $[A^{\lambda \mu}]$ and of the $\big{[} \frac{\partial A^{\lambda \mu}}{\partial x^\alpha} \big{]}$), of the functions $p_i$, $y^i_j$, $z^i_j$, $y^i_{jh}$ and of $\cos(\lambda_u)$, $\sin(\lambda_u)$.
\textbf{Second derivatives of $\sigma$:} Another application of the formula ($\ref{eq:4.40}$) shows, in analogous fashion, that $\sigma$ admits within $V$ second partial derivatives, which are expressible by means of the functions $x^\alpha$ (with the intermediate action of the $A^{\lambda \mu}$ and their first and second partial derivatives) and of the functions $p_i$, $y^i_j$, $z^i_j$, $y^i_{jh}$, $z^i_{jh}$, $y^i_{jhk}$ and of $\cos(\lambda_u)$, $\sin(\lambda_u)$. \textbf{Derivatives of the $\omega^r_s$:} The identity ($\ref{eq:4.40}$) makes it possible moreover to state that the functions $\omega^r_s$ admit within $V$ first and second partial derivatives with respect to the variables $x^i$ if these functions admit, within $V$, first and second partial derivatives with respect to the variables $\lambda_u$; it suffices for that purpose that they admit first and second partial derivatives with respect to the overabundant variables $p_i^0$. We shall set \begin{equation*} \frac{\partial \omega^r_s}{\partial p_i^0} = \omega^r_{si}, \hspace{1cm} \frac{\partial^2 \omega^r_s}{\partial p_i^0 \partial p_j^0} = \omega^r_{sij}. \end{equation*} If these functions are continuous and bounded within $V$ they satisfy, under the assumptions made, the integral equations obtained by differentiating Eq. ($\ref{eq:4.35}$) under the integral sign with respect to the $p_i^0$.
We thus obtain \begin{equation} \label{eq:4.41} \omega^r_{si} = \int\limits_{0}^{\lambda_1} \bigg{(} \sum\limits_{t=1}^n \mathcal{Q}^r_t \omega^t_{si} + \mathcal{Q} \omega^r_{si} + \Omega^r_{si} \bigg{)} d \lambda, \end{equation} where \begin{equation*} \Omega^r_{si}= \sum\limits_{t=1}^n \frac{\partial \mathcal{Q}^r_t}{\partial p_i^0} \omega^t_s + \frac{\partial \mathcal{Q}}{\partial p_i^0} \omega^r_s \end{equation*} is a polynomial of the functions $\omega^r_s$, $p_i$, $y^i_j$, $z^i_j$ as well as of the values on $\Sigma_0$ of the coefficients $A^{\lambda \mu}$, $B^{r \lambda}_s$ of the equations ($\ref{eq:4.1}$) and their partial derivatives with respect to the $x^\alpha$ up to the orders two and one, respectively (quantities that are themselves functions of the functions $x^\alpha (\lambda_j)$), and similarly \begin{equation} \label{eq:4.42} \omega^r_{sij} = \int\limits_{0}^{\lambda_1} \bigg{(} \sum\limits_{t=1}^n \mathcal{Q}^r_t \omega^t_{sij} + \mathcal{Q} \omega^r_{sij} + \Omega^r_{sij} \bigg{)} d \lambda, \end{equation} where \begin{equation*} \Omega^r_{sij} = \sum\limits_{t=1}^n \frac{\partial \mathcal{Q}^r_t}{\partial p_j^0} \omega^t_{si} + \frac{ \partial \mathcal{Q}}{\partial p_j^0} \omega^r_{si} + \frac{\partial \Omega^r_{si}}{\partial p_j^0} \end{equation*} is a polynomial of the functions $\omega^r_s$, $\omega^r_{si}$, $p_i$, $y^i_j$, $z^i_j$, $y^i_{jh}$, $z^i_{jh}$ as well as of the values on $\Sigma_0$ of the coefficients $A^{\lambda \mu}$, $B^{r \lambda}_s$ and of their partial derivatives with respect to the $x^\alpha$ up to the orders three and two, respectively. The first and second partial derivatives of the $\omega^r_s$ with respect to the variables $x^i$ are expressed by means of the functions $x^\alpha$ (with the help of the coefficients $A^{\lambda \mu}$ and of their first partial derivatives), $p_i$, $y^i_j$, $z^i_j$, $y^i_{jh}$, $z^i_{jh}$, $\omega^r_s$, $\omega^r_{si}$ and $\omega^r_{sij}$.
Then, the functions $\omega^r_s$ and their first and second derivatives with respect to the $x^i$ are expressed only through some functions $X$ and $\Omega$, $X$ denoting any one of the functions $x^\alpha$, $p_i$, $y^i_j$, $z^i_j$, $y^i_{jh}$, $z^i_{jh}$, $y^i_{jhk}$, $z^i_{jhk}$ and $\Omega$ any one of the functions $\omega^r_s$, $\omega^r_{si}$, $\omega^r_{sij}$. The functions $X$ and $\Omega$ satisfy integral equations of the form \begin{equation*} X = \int\limits_{0}^{\lambda_1} E(X) d\lambda + X_0, \end{equation*} \begin{equation*} \Omega = \int\limits_{0}^{\lambda_1} F(X, \Omega) d\lambda + \Omega_0, \end{equation*} where $X_0$ and $\Omega_0$ denote the given values of the functions $X$ and $\Omega$ for $\lambda_1=0$. $E(X)$ is a polynomial of the functions $X$ and of the values on $\Sigma_0$ of the coefficients $A^{\lambda \mu}$ and of their partial derivatives up to the fourth order (functions of the functions $x^\alpha$). $F(X, \Omega)$ is a polynomial of the functions $X$ and $\Omega$, and of the values on $\Sigma_0$ of the coefficients $A^{\lambda \mu}$, $B^{r \lambda}_s$ and of their partial derivatives up to the orders three and two, respectively. \section{Behaviour in the Neighbourhood of the Vertex} We are going to study the quantities occurring in the integrals of the fundamental relations ($\ref{eq:4.39}$), and for this purpose we will determine more precisely the expression of the partial derivatives of the functions $\sigma$ and $\omega^r_s$ with respect to the variables $x^i$ by means of the functions $X$ and $\Omega$. The behaviour of these functions in the neighbourhood of $\lambda_1=0$, that is, of the vertex of the characteristic conoid $\Sigma_0$, will make it possible for us to look for the limit of Eq.
($\ref{eq:4.39}$) for $\eta=0$: the function $x^4(\lambda_1, \lambda_2, \lambda_3)$ being, within the domain $\Lambda$, a continuous function of the three variables $\lambda_i$, $\eta= x^4 - x^4_0$ actually approaches zero with $\lambda_1$. First of all, we have already seen that the quantity $\frac{J_{x \lambda}}{(\lambda_1)^2}$ is a polynomial of the functions $X$, that is $p_i$ in this case, $\tilde{X}$, that is $\frac{ y^i_j}{\lambda_1}$, of the coefficients $A^{\lambda \mu}$ and of $\sin(\lambda_u)$, $\cos(\lambda_u)$. It is therefore a continuous bounded function of $\lambda_1$, $\lambda_2$ and $\lambda_3$ within $V$. We have seen that the value of this function for $\lambda_1=0$ is \begin{equation*} \lim_{\lambda_1 \to 0} \frac{J_{x \lambda}}{(\lambda_1)^2} = - \sin (\lambda_2). \end{equation*} In the neighbourhood of $\lambda_1=0$ the function $\frac{J_{x \lambda}}{(\lambda_1)^2}$ therefore does not vanish, except for $\lambda_2=0$ or $\lambda_2= \pi$. To remove this difficulty we will show that the polynomial $J_{x \lambda}$ is divisible by $\sin(\lambda_2)$, and we will make sure that only the function $D= \frac{J_{x \lambda}}{(\lambda_1)^2\sin(\lambda_2)}$ appears in the denominators we consider. Let us therefore consider on the conoid $\Sigma_0$ the following change of variables: $\mu_i \equiv \lambda_1 p_i^0$. We set \begin{equation*} d \equiv \frac{D(\mu_1, \mu_2, \mu_3)}{D(\lambda_1, \lambda_2, \lambda_3)}= {\rm det} \left ( {\begin{array}{ccc} p_1^0 & p_2^0 & p_3^0 \\ \lambda_1 \frac{\partial p_1^0}{\partial \lambda_2} & \lambda_1 \frac{\partial p_2^0}{\partial \lambda_2}& \lambda_1 \frac{\partial p_3^0}{\partial \lambda_2} \\ \lambda_1 \frac{\partial p_1^0}{\partial \lambda_3} & \lambda_1 \frac{\partial p_2^0}{\partial \lambda_3} & \lambda_1 \frac{\partial p_3^0}{\partial \lambda_3} \\ \end{array} } \right ) =(\lambda_1)^2 \sin(\lambda_2), \end{equation*} and \begin{equation*} J_{x \lambda} \equiv \frac{D(x^1, x^2, x^3)}{D(\lambda_1, \lambda_2, \lambda_3)}.
\end{equation*} Since \begin{equation*}\frac{D(x^1, x^2, x^3)}{D(\lambda_1, \lambda_2, \lambda_3)} = \frac{D(x^1, x^2, x^3)}{D(\mu_1, \mu_2, \mu_3)} \frac{D(\mu_1, \mu_2, \mu_3)}{D(\lambda_1, \lambda_2, \lambda_3)}, \end{equation*} we have \begin{equation} \label{eq:4.43} J_{x \lambda} = D(\lambda_1)^2 \sin(\lambda_2), \end{equation} where the determinant $D$ has elements \begin{equation*} \frac{\partial x^i}{\partial \mu_j} = \frac{\partial x^i}{\partial \lambda_1} \frac{\partial \lambda_1 }{\partial \mu_j} + \sum\limits_{h=1}^3 \sum\limits_{u=2}^3 \frac{\partial x^i}{\partial p_h^0}\frac{\partial p_h^0}{\partial \lambda_u}\frac{\partial \lambda_u}{\partial \mu_j}. \end{equation*} It results directly from $\mu_i =\lambda_1 p_i^0$ and from the identity \begin{equation*} \sum\limits_{i=1}^3 (\mu_i)^2 = (\lambda_1)^2 \end{equation*} that \begin{equation*} \frac{\partial \lambda_1}{\partial \mu_j} = p_j^0 \hspace{0.5cm} \rm{and} \hspace{0.5cm} \frac{\partial p_h^0}{\partial \lambda_u} = \frac{1}{\lambda_1}\frac{\partial \mu_h}{\partial \lambda_u}. \end{equation*} On the other hand, we have \begin{equation*} \frac{\partial \lambda_1}{\partial \mu_j} \frac{\partial \mu_h}{\partial \lambda_1} + \sum\limits_{u=2}^3 \frac{\partial \lambda_u}{\partial \mu_j} \frac{\partial \mu_h}{\partial \lambda_u} = \delta^h_j. \end{equation*} The elements of $D$ are therefore \begin{equation*} \frac{\partial x^i}{\partial \mu_j} = T^i p_j^0 + \sum\limits_{h=1}^3 \frac{y^i_h}{\lambda_1} (\delta^h_j - p_j^0 p_h^0). \end{equation*} The polynomial $\frac{J_{x \lambda}}{(\lambda_1)^2}$ is therefore divisible by $\sin(\lambda_2)$, the quotient $D$ being a polynomial of the same functions $X$, $\tilde{X}$, and of $\sin(\lambda_u)$, $\cos(\lambda_u)$, as $\frac{J_{x \lambda}}{(\lambda_1)^2}$. $D$ is a continuous bounded function of $\lambda_1$, $\lambda_2$, $\lambda_3$ within $V$ whose value for $\lambda_1=0$ is $\lim_{\lambda_1 \to 0} D = -1$.
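With the same spherical parametrization of $p^0$ as before (assumed for illustration), the value $d = (\lambda_1)^2 \sin(\lambda_2)$ of the Jacobian of the substitution $\mu_i = \lambda_1 p_i^0$ can be verified directly:

```python
import math

def det3(m):
    # cofactor expansion of a 3x3 determinant along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def d_determinant(l1, l2, l3):
    # rows: p^0, l1 * d(p^0)/dl2, l1 * d(p^0)/dl3 for
    # p^0 = (sin l2 cos l3, sin l2 sin l3, cos l2)
    s2, c2 = math.sin(l2), math.cos(l2)
    s3, c3 = math.sin(l3), math.cos(l3)
    return det3([[s2 * c3, s2 * s3, c2],
                 [l1 * c2 * c3, l1 * c2 * s3, -l1 * s2],
                 [-l1 * s2 * s3, l1 * s2 * c3, 0.0]])
```

At every sample point the determinant agrees with $(\lambda_1)^2 \sin(\lambda_2)$, as stated in the text.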
As a matter of fact: \begin{equation*} \lim_{\lambda_1 \to 0} \frac{\partial x^i}{\partial \mu_j} = - p_i^0 p_j^0 - \delta^i_j + p_i^0p_j^0 = - \delta^i_j. \end{equation*} $\frac{J_{x \lambda}}{(\lambda_1)^2}$ being a homogeneous polynomial of second degree of the functions $\frac{y^j_i}{\lambda_1}$, the same is true of the polynomial $D$, and the quantity $(\lambda_1)^2D$ is a polynomial of the functions $X$, of the coefficients $A^{\lambda \mu}$ and of the three $p_i^0$, homogeneous of the second degree with respect to the $y^i_j$. $D$ is actually a continuous and bounded function of $\lambda_1$ in the domain $\Lambda$ (where $\lambda_2$ and $\lambda_3$ vary over a compact set) and takes the value $-1$ for $\lambda_1=0$. There exists therefore a number $\epsilon_2$ such that, in the domain $\Lambda_2$, a neighbourhood of $\lambda_1=0$ in the domain $\Lambda$, defined by \begin{equation*} |\lambda_1| \leq \epsilon_2, \; 0 \leq \lambda_2 \leq \pi, \; 0 \leq \lambda_3 \leq 2 \pi, \end{equation*} one has for example \begin{equation*} |D + 1| \leq \frac{1}{2} \hspace{0.5cm} \rm{therefore} \hspace{0.5cm} |D| \geq \frac{1}{2}. \end{equation*} We will denote by $W$ the domain of $\Sigma_0$ corresponding to the domain $\Lambda_2$. Hereafter, the behaviour of the minors of $J_{x \lambda}$ is studied. \textbf{Minors relative to elements of the first row of $J_{x \lambda}$:} $J_i^1$ is, as $J_{x \lambda}$ itself, a homogeneous polynomial of second degree with respect to the functions $y^j_i$, and $\frac{J_i^1}{(\lambda_1)^2}$ is a polynomial of the functions $X(p_i)$, $\tilde{X}\big{(}\frac{y^j_i}{\lambda_1}\big{)}$, of the coefficients $[A^{\lambda \mu}]$ and of $\sin(\lambda_u)$, $\cos(\lambda_u)$; it is therefore a continuous and bounded function of $\lambda_1$, $\lambda_2$, $\lambda_3$ in $V$.
In order to study the quantity $\frac{J_i^1}{J_{x \lambda}}= \frac{\partial \lambda_1}{\partial x^i}$ we shall put it in the form of a rational fraction with denominator $D$, which differs from zero in $W$. We have \begin{equation} \label{eq:4.44} \frac{J_i^1}{J_{x \lambda}}= \frac{\partial \lambda_1}{\partial x^i} = \sum\limits_{j=1}^3 \frac{\partial \lambda_1}{\partial \mu_j}\frac{\partial \mu_j}{\partial x^i} = \sum\limits_{j=1}^3 p_j^0 \frac{D^j_i}{D}, \end{equation} where $D^j_i$ is the minor relative to the element $\frac{\partial x^i}{\partial \mu_j}$ of the determinant $D$. The quantity $\frac{J_i^1}{J_{x \lambda}}$ is therefore a continuous and bounded function of the three variables $\lambda_1$, $\lambda_2$, $\lambda_3$ in $W$. When we compute the value of this function for $\lambda_1=0$, we find \begin{equation*} \lim_{\lambda_1 \to 0}\frac{J_i^1}{J_{x \lambda} }= - p_i^0. \end{equation*} Indeed: \begin{equation*} \lim_{\lambda_1 \to 0} \frac{\partial \lambda_1}{\partial x^i} = \lim_{\lambda_1 \to 0} \frac{\partial x^4}{\partial x^i}, \end{equation*} since one has identically, on $\Sigma_0$, $\frac{\partial x^4}{\partial x^i}=-p_i$. One deduces from the formulas ($\ref{eq:4.43}$) and ($\ref{eq:4.44}$) that \begin{equation*} J_i^1 = \sum\limits_{j=1}^3 (\lambda_1)^2 \sin(\lambda_2) p_j^0 D^j_i. \end{equation*} One then sees that the quantity $(\lambda_1)^2 \sum\limits_{j=1}^3 p_j^0D^j_i$ is a polynomial of the functions $p_i$, $y^j_i$, of the coefficients $[A^{\lambda \mu}]$ and of the three $p_h^0$, homogeneous of second degree with respect to the functions $y^j_i$. \textbf{Minors relative to the second and third rows of $J_{x \lambda}$:} $J_i^u$ is a polynomial of the functions $X(p_i, y^j_i)$, $[A^{\lambda \mu}]$ and of $\sin(\lambda_u)$, $\cos(\lambda_u)$, homogeneous of first degree with respect to the functions $y^j_i$. $\frac{J_i^u}{\lambda_1}$ is a continuous and bounded function of $\lambda_1$, $\lambda_2$, $\lambda_3$ in $V$.
Let us study the quantity $\sum\limits_{u=2}^3 \frac{\partial p_h^0}{\partial \lambda_u} \frac{J_i^u}{J_{x \lambda}}$. One has \begin{equation*} \sum\limits_{u=2}^3 \frac{\partial p_h^0}{\partial \lambda_u} \frac{J_i^u}{J_{x \lambda}}= \sum\limits_{u=2}^3 \frac{\partial p_h^0}{\partial \lambda_u}\frac{\partial \lambda_u}{\partial x^i} = \sum\limits_{u=2}^3 \sum\limits_{j=1}^3 \frac{1}{\lambda_1}\frac{\partial \mu_h}{\partial \lambda_u} \frac{\partial \lambda_u}{\partial \mu_j} \frac{\partial \mu_j}{\partial x^i} = \sum\limits_{j=1}^3 \frac{1}{\lambda_1} (\delta^h_j - p_j^0 p_h^0 ) \frac{D_i^j}{D}. \end{equation*} The quantity $\lambda_1 \sum\limits_{u=2}^3 \frac{\partial p_h^0}{\partial \lambda_u}\frac{J_i^u}{J_{x \lambda}}$ is a rational fraction, with denominator non-vanishing in the domain $W$, of the functions $X(p_i)$, $\tilde{X}\big{(} \frac{y^j_i}{\lambda_1} \big{)}$, $[A^{\lambda \mu}]$ and of the three $p_i^0$. It is therefore a continuous and bounded function of $\lambda_1$, $\lambda_2$, $\lambda_3$ in the domain $W$; the value of this function for $\lambda_1=0$ is computed as follows. One has on one hand \begin{equation*} \frac{\partial x^h}{\partial \lambda_u} = \sum\limits_{j=1}^3 \frac{\partial x^h}{\partial p_j^0}\frac{\partial p_j^0}{\partial \lambda_u} = \sum\limits_{j=1}^3 y^h_j \frac{\partial p_j^0}{\partial \lambda_u}, \end{equation*} from which \begin{equation*} \lim_{\lambda_1 \to 0} \frac{1}{\lambda_1}\frac{\partial x^h}{\partial \lambda_u} = - \sum\limits_{j=1}^3 \delta^h_j \frac{\partial p_j^0}{\partial \lambda_u} = - \frac{\partial p_h^0}{\partial \lambda_u}.
\end{equation*} One knows on the other hand that \begin{equation*} \frac{J_i^u}{J_{x \lambda}}= \frac{\partial \lambda_u}{\partial x^i}, \end{equation*} from which \begin{equation*}\lim_{\lambda_1 \to 0} \lambda_1 \sum\limits_{u=2}^3 \frac{\partial p_h^0}{\partial \lambda_u} \frac{J^u_i}{J_{x \lambda}} = - \lim_{\lambda_1 \to 0} \sum\limits_{u=2}^3 \frac{\partial x^h}{\partial \lambda_u} \frac{\partial \lambda_u}{\partial x^i} = - \delta^h_i + \lim_{\lambda_1 \to 0} \frac{\partial x^h}{\partial \lambda_1} \frac{\partial \lambda_1}{\partial x^i}, \end{equation*} from which eventually \begin{equation*} \lim_{\lambda_1 \to 0} \lambda_1 \sum\limits_{u=2}^3 \frac{\partial p_h^0}{\partial \lambda_u} \frac{J^u_i}{J_{x \lambda}} = - \delta^h_i + p_i^0 p_h^0. \end{equation*} By a reasoning analogous to that of the previous remarks, one sees that the quantity $\lambda_1 \sum\limits_{j=1}^3 (\delta^h_j - p_j^0 p_h^0)D_i^j$ is a polynomial, homogeneous of first degree with respect to the $y^j_i$, of the functions $X(p_i, y^j_i)$, $[A^{\lambda \mu}]$, $p_i^0$. \section{The First Derivatives} The first partial derivatives of an arbitrary function $\varphi$ satisfy, in light of the identity ($\ref{eq:4.40}$) and of the previous results, the relation \begin{equation*} \frac{\partial \varphi}{\partial x^i} = \frac{\partial \varphi}{\partial \lambda_1} \sum\limits_{j=1}^3 \frac{p_j^0 D_i^j}{D} + \frac{1}{\lambda_1} \sum\limits_{h,j=1}^3 \frac{\partial \varphi}{\partial p_h^0} (\delta_j^h - p_j^0 p_h^0) \frac{D_i^j}{D}.
\end{equation*} Let us apply this formula to the functions $p_h^0$ and $X$: \begin{equation} \label{eq:4.45} \left \{ \begin{array} {l} \frac{\partial p_h^0}{\partial x^i} = \frac{1}{\lambda_1} \sum\limits_{j=1}^3 (\delta_j^h - p_j^0 p_h^0 ) \frac{D_i^j}{D},\\ \frac{\partial p_h}{\partial x^i} = R_h \sum\limits_{j=1}^3 \frac{p_j^0 D_i^j}{D} + \frac{1}{\lambda_1} \sum\limits_{k,j=1}^3 z^h_k (\delta^k_j - p_j^0 p_k^0) \frac{D_i^j}{D}, \\ \frac{\partial y^k_h}{\partial x^i} = \sum\limits_{j=1}^3 T^k_h \frac{p_j^0 D_i^j}{D} + \frac{1}{\lambda_1} \sum\limits_{j,l=1}^3 y^k_{hl} (\delta^l_j - p_j^0 p_l^0) \frac{D_i^j}{D}, \\ \frac{\partial z^k_h}{\partial x^i} = \sum\limits_{j=1}^3 R^k_h \frac{p_j^0 D_i^j}{D} + \frac{1}{\lambda_1} \sum\limits_{j,l=1}^3 z^k_{hl} (\delta^l_j - p_j^0 p_l^0) \frac{D_i^j}{D}. \end{array}\right. \end{equation} These equations and the analogous equations verified by $\frac{\partial y^k_{hl}}{\partial x^i}$, $\frac{\partial z^k_{hl}}{\partial x^i}$, $\frac{\partial \omega^r_s}{\partial x^i}$, $\frac{\partial \omega^r_{si}}{\partial x^i}$ show that the quantities $\lambda_1 \frac{\partial p_h^0}{\partial x^i}$, $\lambda_1 \frac{\partial p_h}{\partial x^i}$, $\lambda_1 \frac{\partial z^k_h}{\partial x^i}$, $\lambda_1 \frac{\partial z^k_{hl}}{\partial x^i}$, $\frac{\partial y^k_h}{\partial x^i}$, $\frac{\partial y^k_{hl}}{\partial x^i}$, $\frac{\partial \omega^r_s}{\partial x^i}$, $\frac{\partial \omega^r_{si}}{\partial x^i}$ are rational fractions with denominator $D$ of the functions $X$, $\tilde{X}$, $\Omega$, $\tilde{\Omega}$, $[A^{\lambda \mu}]$, $\big{[} \frac{\partial A^{\lambda \mu}}{\partial x^\alpha} \big{]}$, $\big{[} \frac{\partial^2 A^{\lambda \mu}}{\partial x^\alpha \partial x^\beta} \big{]}$, $p_i^0$. These are bounded and continuous functions, within $W$, of the three variables $\lambda_1$, $\lambda_2$, $\lambda_3$.
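The rational fractions with denominator $D$ in these formulas all come from the cofactor expression of the inverse Jacobian, $\frac{\partial \mu_j}{\partial x^i} = \frac{D^j_i}{D}$. A quick numerical sanity check of that expression, on an arbitrary invertible $3\times 3$ matrix playing the role of a hypothetical Jacobian:

```python
def det3(m):
    # cofactor expansion of a 3x3 determinant along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cofactor(m, i, j):
    # signed 2x2 minor of entry (i, j)
    rows = [r for k, r in enumerate(m) if k != i]
    sub = [[v for l, v in enumerate(r) if l != j] for r in rows]
    sign = -1.0 if (i + j) % 2 else 1.0
    return sign * (sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0])

def inverse_from_cofactors(m):
    # entry (i, j) of the inverse is cofactor (j, i) divided by the determinant
    d = det3(m)
    return [[cofactor(m, j, i) / d for j in range(3)] for i in range(3)]
```

Multiplying the matrix by the result recovers the identity, which is precisely the mechanism by which the minors $D^j_i$, divided by $D$, yield the derivatives of the inverse change of variables.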
\section{Study of $\sigma$ and its derivatives} To begin the study of $\sigma$ and its derivatives we first need to revert to the functions $\sigma^r_s$ and to study their partial derivatives with respect to the $x^i$. The previous results and the identity ($\ref{eq:4.40}$) show then that the quantities $\frac{1}{\lambda_1} \frac{\partial}{\partial x^i} ((\lambda_1)^2 D)$, $\frac{1}{\lambda_1} \frac{\partial}{\partial x^i} \sum\limits_{j=1}^3 ((\lambda_1)^2p_j^0 D^j_i)$, $\sum\limits_{i,j=1}^3 \frac{\partial}{\partial x^i} (\lambda_1 (\delta^h_j - p_j^0 p_h^0)D_i^j)$ are rational fractions with denominator $D$ of the functions $X(p_i, y^j_i, z^j_i)$, $\tilde{X}\big{(} \frac{y^j_i}{\lambda_1}, \frac{y^j_{ih}}{\lambda_1}\big{)}$, $[A^{\lambda \mu}]$, $\big{[} \frac{\partial A^{\lambda \mu}}{\partial x^\alpha} \big{]}$, $p_i^0$. They are therefore continuous and bounded functions of $\lambda_1$, $\lambda_2$, $\lambda_3$ in $W$. In the study of the second partial derivatives of the function $\sigma$ with respect to the $x^i$ we will use the second partial derivatives $\frac{\partial^2 ((\lambda_1)^2 D)}{\partial x^i \partial x^j}$. The first-order partial derivatives of $(\lambda_1)^2D$ can be written \begin{equation*} \frac{\partial ((\lambda_1)^2D)}{\partial x^i}= \frac{ P_1}{(\lambda_1)^2D}, \end{equation*} where $P_1$ is a polynomial of the functions $X(p_i, y^j_i, z^j_i, y^j_{ih})$, $[A^{\lambda \mu}]$, $\big{[} \frac{\partial A^{\lambda \mu}}{\partial x^\alpha}\big{]}$, $p_i^0$ whose terms are of the third degree with respect to the set of functions $y^j_i$, $y^j_{ih}$.
As a matter of fact, the partial derivatives $\frac{\partial p_h}{\partial x^i}$ and $\frac{\partial p_h^0}{\partial x^i}$ can be put in the form of rational fractions, by multiplying denominator and numerator of the right-hand side of the equations by $(\lambda_1)^2$, with denominator $(\lambda_1)^2D$ and whose numerators are polynomials of the functions $X(p_i, y^j_i, z^j_i)$, $[A^{\lambda \mu}]$, $p_i^0$ whose terms are of first degree with respect to the $y^j_i$; and the partial derivatives $\frac{\partial y^k_h}{\partial x^i}$ can be put in the form of rational fractions with denominator $(\lambda_1)^2D$ and whose numerators are polynomials of the functions $X(p_i, y^j_i, z^j_i, y^j_{hk})$, $[A^{\lambda \mu}]$, $\big{[} \frac{\partial A^{\lambda \mu}}{\partial x^\alpha}\big{]}$, $p_i^0$ homogeneous of second degree with respect to the set of functions $y^j_i$, $y^i_{hk}$. The polynomial $(\lambda_1)^2D$ being homogeneous of second degree with respect to the $y^j_i$, its first partial derivatives indeed have the desired form. Let us then consider the second partial derivatives: \begin{equation*} \frac{\partial^2((\lambda_1)^2D)}{\partial x^i \partial x^j} = \frac{1}{(\lambda_1)^2D}\frac{\partial P_1}{\partial x^j} - \frac{P_1}{((\lambda_1)^2D)^2} \frac{\partial ((\lambda_1)^2D)}{\partial x^j}. \end{equation*} It turns out from the form of the polynomial $P_1$ and from the previous results that: \begin{description} \item[(1)] $\frac{P_1}{(\lambda_1)^3}$ is a polynomial of the functions $X(p_i, y^j_i, z^j_i, y^j_{ih})$, $\tilde{X} \big{(} \frac{y^j_i}{\lambda_1}, \frac{y^j_{ih}}{\lambda_1}\big{)}$, $[A^{\lambda \mu}]$, $\big{[} \frac{\partial A^{\lambda \mu}}{\partial x^\alpha} \big{]}$, $p_i^0$.
\item[(2)]$\frac{1}{(\lambda_1)^2}\frac{\partial P_1}{\partial x^i}$ is a rational fraction with denominator $D$ of the functions $X(p_i, y^j_i, z^j_i, y^j_{ih}, z^j_{ih}, y^j_{ihk})$, $\tilde{X} \big{(} \frac{y^j_i}{\lambda_1}, \frac{y^j_{ih}}{\lambda_1}, \frac{y^j_{ihk}}{\lambda_1}\big{)}$, $[A^{\lambda \mu}]$, $\big{[} \frac{\partial A^{\lambda \mu}}{\partial x^\alpha} \big{]}$, $\big{[} \frac{\partial^2 A^{\lambda \mu}}{\partial x^\alpha \partial x^\beta} \big{]}$, $p_i^0$. \end{description} The derivatives $\frac{\partial^2 ((\lambda_1)^2 D)}{\partial x^i \partial x^j}$ are therefore rational fractions with denominator $D^3$ of the functions we have just listed. To pursue our aim, let us proceed with the study of $\sigma$ and its derivatives. The auxiliary function $\sigma$ was defined by $\sigma= \bigg{|}{\frac{\sin(\lambda_2)}{J_{x \lambda}} \bigg{|}}^\frac{1}{2}$. Since $J_{x \lambda}=D(\lambda_1)^2 \sin(\lambda_2)$, we have \begin{equation*} \sigma= \frac{1}{{|(\lambda_1)^2D|}^\frac{1}{2}}. \end{equation*} Thus we deduce that, in the domain $W$, the function $\sigma \lambda_1=\frac{1}{{|D|}^\frac{1}{2}}$ is the square root of a rational fraction, bounded and non-vanishing, of the functions $X$, $\tilde{X}$, $[A^{\lambda \mu}]$, $p_i^0$; it is a continuous and bounded function of the three variables $\lambda_i$, whose value for $\lambda_1=0$ is $\lim_{\lambda_1 \to 0} \sigma \lambda_1=1$. The first partial derivatives of $\sigma$ with respect to the $x^i$ are \begin{equation*} \frac{\partial \sigma}{\partial x^i} = - \frac{\sigma}{2} \frac{1}{(\lambda_1)^2D} \frac{\partial ((\lambda_1)^2D)}{\partial x^i}.
\end{equation*} Thus we can conclude that, in the domain $W$, the function \begin{equation*} (\lambda_1)^2\frac{\partial \sigma}{\partial x^i}= - \frac{\sigma}{2}\frac{\lambda_1}{D} \frac{1}{\lambda_1}\frac{\partial ((\lambda_1)^2D)}{\partial x^i} \end{equation*} is the product of the square root of a non-vanishing bounded rational fraction with a bounded rational fraction of the functions $X$, $\tilde{X}$, $[A^{\lambda \mu}]$, $\big{[} \frac{\partial A^{\lambda \mu}}{\partial x^\alpha} \big{]}$, $p_i^0$. It is a continuous and bounded function of $\lambda_1$, $\lambda_2$, $\lambda_3$, whose value for $\lambda_1=0$ we are now going to compute. The identities $\frac{\partial \sigma}{\partial \lambda_1}=\sum\limits_{i=1}^3T^i \frac{\partial \sigma}{\partial x^i}$ and $\frac{\partial \sigma}{\partial p_h^0}= \sum\limits_{i=1}^3 \frac{\partial \sigma}{\partial x^i}y^i_h$ show that the functions $(\lambda_1)^2 \frac{\partial \sigma}{\partial \lambda_1}$ and $\lambda_1 \frac{\partial \sigma}{\partial p_h^0}$ are continuous and bounded in $W$. We can therefore differentiate $\lim_{\lambda_1 \to 0} \sigma \lambda_1 = 1$ with respect to $p_h^0$, and we find $\lim_{\lambda_1 \to 0} \lambda_1 \frac{\partial \sigma}{\partial p_h^0} = 0$. Furthermore, we can write \begin{equation*} \frac{\partial (\sigma (\lambda_1)^2)}{\partial \lambda_1} = 2 \lambda_1 \sigma + (\lambda_1)^2 \frac{\partial \sigma}{\partial \lambda_1} \end{equation*} and $\lim_{\lambda_1 \to 0} \frac{\partial (\sigma (\lambda_1)^2)}{\partial \lambda_1} =\lim_{\lambda_1 \to 0} \lambda_1 \sigma$, from which \begin{equation*} \lim_{\lambda_1 \to 0}(\lambda_1)^2 \frac{\partial \sigma}{\partial \lambda_1}= - \lim_{\lambda_1 \to 0} \lambda_1 \sigma = -1.
\end{equation*} In order to compute the value for $\lambda_1=0$ of the function $(\lambda_1)^2 \frac{\partial \sigma}{\partial x^i}$ we shall use the identity \begin{equation*} (\lambda_1)^2 \frac{\partial \sigma}{\partial x^i} = (\lambda_1)^2 \frac{\partial \sigma}{\partial \lambda_1} \frac{J_1^i}{J_{x \lambda}} + \lambda_1 \sum\limits_{h=1}^3 \sum\limits_{u=2}^3 \frac{\partial \sigma}{\partial p_h^0} \lambda_1 \frac{\partial p_h^0}{\partial \lambda_u} \frac{J_u^i}{J_{x \lambda}}, \end{equation*} from which we have $\lim_{\lambda_1 \to 0} (\lambda_1)^2 \frac{\partial \sigma}{\partial x^i} = p_i^0$. The second partial derivatives of $\sigma$ with respect to the $x^i$ are \begin{equation*}\begin{split} \frac{\partial^2 \sigma}{\partial x^i \partial x^j}=& - \frac{\sigma}{2} \frac{1}{((\lambda_1)^2 D)} \frac{\partial^2( (\lambda_1)^2D)}{\partial x^i \partial x^j} + \frac{\sigma}{2((\lambda_1)^2 D)^2} \frac{\partial( (\lambda_1)^2D)}{\partial x^i} \frac{\partial((\lambda_1)^2 D)}{\partial x^j} \\ &- \frac{1}{2(\lambda_1)^2 D} \frac{\partial \sigma}{\partial x^j} \frac{\partial ((\lambda_1)^2D)}{\partial x^i}. \end{split}\end{equation*} It is easily seen that in the domain $W$ the function $(\lambda_1)^3 \frac{\partial^2 \sigma}{\partial x^i \partial x^j}$ is the product of the square root of a non-vanishing bounded rational fraction with a bounded rational fraction, having denominator $D^4$, of the functions $X$, $\tilde{X}$, $[A^{\lambda \mu}]$, $\big{[} \frac{\partial A^{\lambda \mu}}{\partial x^\alpha} \big{]}$, $ \big{[} \frac{\partial^2 A^{\lambda \mu}}{\partial x^\alpha \partial x^\beta} \big{]}$, $p_i^0$. It is a continuous and bounded function of the three variables $\lambda_i$.
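The two limits computed above for $\sigma$ can be sanity-checked on a minimal one-variable model. The sketch below assumes, hypothetically, $D = -1 + c\lambda_1$ near the vertex (only the value $D \to -1$ matters), so that $\sigma\lambda_1 = |D|^{-1/2}$, and verifies $\lim_{\lambda_1 \to 0}\sigma\lambda_1 = 1$ and $\lim_{\lambda_1 \to 0}(\lambda_1)^2\,\partial\sigma/\partial\lambda_1 = -1$ symbolically:

```python
import sympy as sp

# Hypothetical one-variable model of the determinant near the vertex:
# D = -1 + c*lambda_1, so |D| = 1 - c*lambda_1 for small lambda_1 and
# sigma = 1/(lambda_1 * |D|^(1/2)).
l1, c = sp.symbols('lambda_1 c', positive=True)
sigma = 1/(l1*sp.sqrt(1 - c*l1))

lim_sigma = sp.limit(l1*sigma, l1, 0)                   # sigma*lambda_1 -> 1
lim_dsigma = sp.limit(l1**2*sp.diff(sigma, l1), l1, 0)  # lambda_1^2 dsigma/dlambda_1 -> -1
print(lim_sigma, lim_dsigma)  # 1 -1
```

The model reproduces both limits independently of the constant $c$, which mirrors the fact that only $D \to -1$ enters the computation.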
We are going to compute the value for $\lambda_1=0$ of the function $(\lambda_1)^3 \sum\limits_{i=1}^3\frac{\partial^2 \sigma}{\partial {x^i}^2}$, which alone we shall need: the second partial derivatives of $\sigma$ actually occur in the fundamental equations only through the quantity $\sum\limits_{i,j=1}^3 [A^{ij}] \frac{\partial^2 \sigma}{\partial x^i \partial x^j}$, and one has \begin{equation*} \lim_{\lambda_1 \to 0} \sum\limits_{i,j=1}^3 [A^{ij}](\lambda_1)^3 \frac{\partial^2 \sigma}{\partial x^i \partial x^j} = - \lim_{\lambda_1 \to 0}(\lambda_1)^3 \sum\limits_{i=1}^3 \frac{\partial^2 \sigma}{\partial ({x^i})^2}. \end{equation*} Furthermore, by differentiating $\lim_{\lambda_1 \to 0} (\lambda_1)^2 \frac{\partial \sigma}{\partial x^i}=p_i^0$ with respect to $p_h^0$ we have \begin{equation*} \lim_{\lambda_1 \to 0} (\lambda_1)^2 \frac{\partial}{\partial p_h^0} \bigg{(}\frac{\partial \sigma}{\partial x^i}\bigg{)}=\delta^i_h; \end{equation*} but, on the other hand, we also have \begin{equation*} \frac{\partial}{\partial \lambda_1} \bigg{[}(\lambda_1)^3 \frac{\partial \sigma}{\partial x^i} \bigg{]} = 3 (\lambda_1)^2 \frac{\partial \sigma}{\partial x^i} + (\lambda_1)^3 \frac{\partial}{\partial \lambda_1} \bigg{(}\frac{\partial \sigma}{\partial x^i}\bigg{)}, \end{equation*} from which \begin{equation*} \lim_{\lambda_1 \to 0} (\lambda_1)^3 \frac{\partial}{\partial \lambda_1} \bigg{(} \frac{\partial \sigma}{\partial x^i} \bigg{)} = \lim_{\lambda_1 \to 0} \bigg{(} - 2 (\lambda_1)^2 \frac{\partial \sigma}{\partial x^i} \bigg{)} = - 2 p_i^0.
\end{equation*} We find therefore, by using the identity \begin{equation*} \sum\limits_{i=1}^3 (\lambda_1)^3 \frac{\partial^2 \sigma}{\partial (x^i)^2} = (\lambda_1)^3 \sum\limits_{i=1}^3 \frac{\partial}{\partial \lambda_1}\bigg{(} \frac{\partial \sigma}{\partial x^i} \bigg{)} \frac{J_i^1}{J_{x \lambda}} + (\lambda_1)^3 \sum\limits_{i=1}^3 \sum\limits_{h=1}^3 \sum\limits_{u=2}^3 \frac{\partial}{\partial p_h^0} \bigg{(}\frac{\partial \sigma}{\partial x^i} \bigg{)} \frac{\partial p_h^0}{\partial \lambda_u} \frac{J_i^u}{J_{x \lambda}} \end{equation*} and the previous results, that \begin{equation*} \lim_{\lambda_1 \to 0} \sum\limits_{i=1}^3 (\lambda_1)^3 \frac{\partial^2 \sigma}{\partial (x^i)^2} =0. \end{equation*} Let us show that the function \begin{equation*} (\lambda_1)^2 \sum\limits_{i,j=1}^3 [A^{ij}]\frac{\partial^2 \sigma}{\partial x^i \partial x^j} \end{equation*} is a continuous and bounded function of the three variables $\lambda_i$, in the neighbourhood of $\lambda_1=0$. We have seen that $(\lambda_1)^3\sum\limits_{i,j=1}^3 \frac{\partial^2 \sigma}{\partial x^i \partial x^j}[A^{ij}]$ is the product of the square root of a non-vanishing bounded rational fraction $\big{(} \frac{1}{D} \big{)}$ with a rational fraction having denominator $D^4$, whose numerator, a polynomial of the functions $X$, $\tilde{X}$, $[A^{\lambda \mu}]$, $\big{[} \frac{\partial A^{\lambda \mu}}{\partial x^\alpha} \big{]}$, $ \big{[} \frac{\partial^2 A^{\lambda \mu}}{\partial x^\alpha \partial x^\beta} \big{]}$, $p_i^0$, vanishes for the values of these functions corresponding to $\lambda_1=0$.
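The intermediate limit used above can also be checked on a model: assume, hypothetically, that $(\lambda_1)^2\,\partial\sigma/\partial x^i = p + \lambda_1\sin\lambda_1$, a smooth function tending to the constant $p$ (standing in for $p_i^0$); then $\lim_{\lambda_1 \to 0}(\lambda_1)^3\,\frac{\partial}{\partial\lambda_1}\big(\frac{\partial\sigma}{\partial x^i}\big) = -2p$:

```python
import sympy as sp

# Model: lambda_1^2 * dsigma/dx^i = p + lambda_1*sin(lambda_1), with p a
# constant playing the role of p_i^0 and lambda_1*sin(lambda_1) a hypothetical
# bounded remainder vanishing at the vertex.
l1, p = sp.symbols('lambda_1 p', positive=True)
f = (p + l1*sp.sin(l1))/l1**2   # the model for dsigma/dx^i

lim3 = sp.limit(l1**3*sp.diff(f, l1), l1, 0)
print(lim3)  # -2*p
```

Any bounded remainder in place of $\lambda_1\sin\lambda_1$ gives the same limit, since only the leading term $p/(\lambda_1)^2$ survives the differentiation and rescaling.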
We have \begin{equation*} \sum\limits_{i,j=1}^3 (\lambda_1)^3 [A^{ij}] \frac{\partial^2 \sigma}{\partial x^i \partial x^j} = \frac{P \bigg{(} X, \tilde{X}, [A^{\lambda \mu}], \big{[} \frac{\partial A^{\lambda \mu}}{\partial x^\alpha} \big{]}, \big{[} \frac{\partial^2 A^{\lambda \mu}}{\partial x^\alpha \partial x^\beta} \big{]}, p_i^0 \bigg{)}}{D^4} \frac{1}{{|D|}^\frac{1}{2}} \end{equation*} with \begin{equation*} P_0 = P \bigg{(} X_0, \tilde{X}_0, \pm \delta^\mu_\lambda, \bigg{[} \frac{\partial A^{\lambda \mu}}{\partial x^\alpha} \bigg{]}_0, \bigg{[} \frac{\partial^2 A^{\lambda \mu}}{\partial x^\alpha \partial x^\beta} \bigg{]}_0, p_i^0 \bigg{)}=0. \end{equation*} We then write: \begin{equation} \label{eq:4.46} (\lambda_1)^3 \sum\limits_{i,j=1}^3 [A^{ij}] \frac{\partial^2 \sigma}{\partial x^i \partial x^j}= \frac{(P - P_0)}{D^4} \frac{1}{{|D|}^\frac{1}{2}}. \end{equation} By applying the Taylor formula to $P$ one sees that the numerator $P - P_0$ of the quantity ($\ref{eq:4.46}$) is a polynomial of the functions $X- X_0$, $\tilde{X} - \tilde{X}_0$, $[A^{\lambda \mu}] \mp \delta^\mu_\lambda$, ..., whose terms are of first degree with respect to the set of these functions. To show that $(\lambda_1)^2 \sum\limits_{i,j=1}^3[A^{ij}]\frac{\partial^2 \sigma}{\partial x^i \partial x^j}$ is a continuous and bounded function of $\lambda_1$, $\lambda_2$, $\lambda_3$ in the domain $W$, it is enough to show that the same holds for the functions \begin{equation*} \frac{X- X_0}{\lambda_1}, \; \frac{\tilde{X}- \tilde{X}_0}{\lambda_1}, \; \frac{[A^{\lambda \mu}] \mp \delta^\mu_\lambda}{\lambda_1}, \; ..., \; \frac{ \big{[} \frac{\partial^2 A^{\lambda \mu}}{\partial x^\alpha \partial x^\beta} \big{]} - \big{[} \frac{\partial^2 A^{\lambda \mu}}{\partial x^\alpha \partial x^\beta} \big{]}_0}{\lambda_1}.
\end{equation*} The functions $X$ verify \begin{equation*} X= X_0 + \int\limits_0^{\lambda_1} E(X) d \lambda, \end{equation*} hence $\frac{X- X_0}{\lambda_1}$ is a continuous and bounded function of the $\lambda_i$ in $V$: $$|X-X_0| \leq \lambda_1 M.$$ The coefficients $A^{\lambda \mu}$ possessing in $D$ partial derivatives continuous and bounded up to the fourth order with respect to the $x^\alpha$, and the $x^\alpha$ fulfilling the previous inequalities, we see that \begin{equation} \label{eq:4.47} \big{|}[A^{\lambda \mu}] \mp \delta^\mu_\lambda\big{|} \leq \lambda_1 A, \; ..., \; \bigg{|}\bigg{[} \frac{\partial^3 A^{\lambda \mu}}{\partial x^\alpha \partial x^\beta \partial x^\gamma} \bigg{]} - \bigg{[} \frac{\partial^3 A^{\lambda \mu}}{\partial x^\alpha \partial x^\beta \partial x^\gamma} \bigg{]}_0\bigg{|} \leq \lambda_1 A. \end{equation} Let us now consider $\frac{\tilde{X} - \tilde{X}_0}{\lambda_1}$. The corresponding $X$ functions are $y^j_i$, $y^j_{ih}$, $y^j_{ihk}$, which verify the equation $X= \int\limits_{0}^{\lambda_1} E(X) d \lambda$, $E(X)$ being a polynomial of the functions $X$, of the $A^{\lambda \mu}$ and of their partial derivatives up to the third order $\big{[}\frac{\partial A^{\lambda \mu}}{\partial x^\alpha} \big{]}$, ..., $\big{[} \frac{\partial^3 A^{\lambda \mu}}{\partial x^\alpha \partial x^\beta \partial x^\gamma} \big{]}$. We have \begin{equation*} \frac{\tilde{X} - \tilde{X}_0}{\lambda_1} = \frac{ \int\limits_{0}^{\lambda_1} (E(X) - E(X)_0) d \lambda}{(\lambda_1)^2}.
\end{equation*} The Taylor formula applied to the polynomial $E$ shows that $E(X) - E(X)_0$ is a polynomial of the functions $X_0$, $\delta^\mu_\lambda$, ..., $\big{[}\frac{\partial^3 A^{\lambda \mu}}{\partial x^\alpha \partial x^\beta \partial x^\gamma} \big{]}_0$ and of the functions $X - X_0$, $[A^{\lambda \mu}] \mp \delta^\mu_\lambda$, ..., $\big{(}\big{[} \frac{\partial^3 A^{\lambda \mu}}{\partial x^\alpha \partial x^\beta \partial x^\gamma} \big{]} - \big{[} \frac{\partial^3 A^{\lambda \mu}}{\partial x^\alpha \partial x^\beta \partial x^\gamma} \big{]}_0\big{)}$, whose terms are of first degree with respect to this last set of functions. All these functions being bounded in $V$ and satisfying \begin{equation*} |X - X_0| \leq \lambda_1 M, \; \big{|}[A^{\lambda \mu}] \mp \delta^\mu_\lambda\big{|} \leq \lambda_1 A, \; ..., \; \bigg{|}\bigg{[} \frac{\partial^3 A^{\lambda \mu}}{\partial x^\alpha \partial x^\beta \partial x^\gamma} \bigg{]} - \bigg{[} \frac{\partial^3 A^{\lambda \mu}}{\partial x^\alpha \partial x^\beta \partial x^\gamma} \bigg{]}_0\bigg{|} \leq \lambda_1 A, \end{equation*} we easily see that $\frac{\tilde{X} - \tilde{X}_0}{\lambda_1}$ is continuous and bounded in $V$. The function $(\lambda_1)^2 \sum\limits_{i,j=1}^3[A^{ij}]\frac{\partial^2 \sigma}{\partial x^i \partial x^j}$ is therefore continuous and bounded in $W$. \section{Derivatives of the $\omega^r_s$} Let us now study the first and second partial derivatives of the $\omega^r_s$ with respect to the $x^i$. Our aim is to prove that these derivatives are, like $\sigma$ and its partial derivatives, simple algebraic functions of the functions $X$ and $\Omega$, $\tilde{X}$ and $\tilde{\Omega}$, and of the values on the conoid $\Sigma_0$ of the coefficients of the given equations and of their partial derivatives.
The first partial derivatives of the $\omega^r_s$ with respect to the $x^i$ are expressed as functions of their partial derivatives with respect to the $\lambda_i$ by \begin{equation*} \frac{\partial \omega^r_s}{\partial x^i} = \sum\limits_{j=1}^3 \frac{\partial \omega^r_s}{\partial \lambda_j} \frac{J_i^j}{J_{x \lambda}}, \end{equation*} therefore \begin{equation} \label{eq:4.48} \frac{\partial \omega^r_s}{\partial x^i} = \bigg{(} \sum\limits_{t=1}^n \mathcal{Q}^r_t \omega^t_s + \mathcal{Q}\omega^r_s \bigg{)} \sum\limits_{j=1}^3 \frac{p_j^0 D^j_i}{D} + \sum\limits_{h,j=1}^3 \frac{\omega^r_{sh}}{\lambda_1} \frac{(\delta^h_j - p_j^0 p_h^0)D^j_i}{D}. \end{equation} The first partial derivatives of the $\omega^r_s$ with respect to the $x^i$ are therefore rational fractions with denominator $D$ of the functions $X(p_i, y^j_i )$, $\Omega(\omega^r_s)$, $\tilde{X}\big{(} \frac{y^j_i}{\lambda_1} \big{)}$, $\tilde{\Omega} \big{(}\frac{\omega^r_{sh}}{\lambda_1} \big{)}$, $[A^{\lambda \mu}]$, $\big{[} \frac{\partial A^{\lambda \mu}}{\partial x^\alpha} \big{]}$, $[B^{s \lambda}_r]$ and $p_i^0$. These are continuous and bounded functions in $W$. The second partial derivatives of the $\omega^r_s$ with respect to the $x^i$ can be evaluated by writing $\frac{\partial \omega^r_s}{\partial x^i}$ in the form $\frac{\partial \omega^r_s}{\partial x^i} = \frac{P_2}{(\lambda_1)^2D}$. The equality ($\ref{eq:4.48}$) and the previous remarks show that $P_2$ is a homogeneous polynomial of second degree with respect to the set of functions $y^j_i$, $\omega^r_s$. By differentiating the previous equality, we have \begin{equation*} \frac{\partial^2 \omega^r_s}{\partial x^i \partial x^j} = \frac{1}{(\lambda_1)^2D} \frac{\partial P_2}{\partial x^j} - \frac{P_2}{((\lambda_1)^2D)^2} \frac{\partial ((\lambda_1)^2D)}{\partial x^j}.
\end{equation*} The functions $\lambda_1 \frac{\partial^2 \omega^r_s}{\partial x^i \partial x^j} $ are rational fractions with denominator $D^3$ of the functions $X$, $\Omega$, $\tilde{X}$, $\tilde{\Omega}$, $[A^{\lambda \mu}]$, $\big{[}\frac{\partial A^{\lambda \mu}}{\partial x^\alpha} \big{]}$, $\big{[}\frac{\partial^2 A^{\lambda \mu}}{\partial x^\alpha \partial x^\beta} \big{]}$, $[B^{s \lambda}_r]$, $\big{[} \frac{\partial B^{s \lambda}_r}{\partial x^\alpha} \big{]}$. These are therefore continuous and bounded functions in $W$. \section{Kirchhoff Formulae} We can now study in a more precise way the fundamental equations \begin{equation*} \begin{split}& \underbrace{\int \int \int}_{V_\eta} \sum\limits_{r=1}^n \{ [u_r]L^r_s + \sigma^r_s[f_r] \} dV + \underbrace{\int \int}_{S_\eta} \sum\limits_{i=1}^3 E^i_s \cos(n,x^i)dS \\ & - \underbrace{\int \int}_{S_0} \sum\limits_{i=1}^3 E^i_s \cos(n,x^i)dS=0, \end{split}\end{equation*} and look for their limit as $\eta$ approaches zero. We have seen that the functional determinant $D= \frac{D(x^i)}{D(\lambda_j)}$ is equal to $-1$ for $\lambda_1=0$. The correspondence between the parameters $x^i$ and $\lambda_j$ is therefore one-to-one in a neighbourhood of the vertex $M_0$ of $\Sigma_0$. One derives from this that the correspondence between the parameters $x^i$ and $\lambda_j$ is one-to-one in a domain $\Lambda_\eta$ defined by \begin{equation*} \eta \leq \lambda_1 \leq \epsilon_3, \hspace{0.5cm} 0 \leq \lambda_2 \leq \pi, \hspace{0.5cm} 0 \leq \lambda_3 \leq 2 \pi, \end{equation*} where $\epsilon_3$ is a given number and where $\eta$ is arbitrarily small. To the domain $\Lambda_\eta$ of variation of the $\lambda_i$ parameters there corresponds, in a one-to-one way, a domain $W_\eta$ of $\Sigma_0$, because the correspondence between $(x^4, \lambda_2, \lambda_3)$ and $(\lambda_1, \lambda_2, \lambda_3)$ is one-to-one.
We shall then assume that the coordinate $x^4_0$ of the vertex $M_0$ of $\Sigma_0$ is sufficiently small to ensure that the domain $V_\eta \subset V$, previously considered, is interior to the domains $W$ and $W_\eta$. We can, under these conditions, compute the integrals by means of the parameters $\lambda_i$, the integrals that we are going to obtain being convergent. For this purpose, let us evaluate the area and volume elements. We have $dV= dx^1\; dx^2\; dx^3 = J_{x \lambda}\; d\lambda_1 \; d\lambda_2 \;d\lambda_3$. We then compute the area element $dS$. The surfaces $S_0$ and $S_\eta$ are $x^4={\rm const.}$ surfaces on the characteristic conoid $\Sigma_0$. Thus, they satisfy the differentiation relation \begin{equation*} \sum\limits_{i=1}^3 p_i dx^i =0, \end{equation*} from which we have \begin{equation*} \cos(n, x^i)= \frac{p_i}{{\bigg{(} \sum\limits_{j=1}^3 (p_j)^2} \bigg{)}^\frac{1}{2}}. \end{equation*} In order to evaluate $dS$ we shall write an alternative expression of the volume element $dV$ in which the surfaces $S$ ($x^4= {\rm const.}$) and the bicharacteristics are involved \begin{equation*}dV = \cos (\nu) {|T|}^\frac{1}{2} d\lambda_1 dS, \end{equation*} where ${|T|}^\frac{1}{2} d\lambda_1$ denotes the length element of the bicharacteristic, and $\nu$ is the angle formed by the bicharacteristic with the normal to the surface $S$ at the point considered.
A system of directional parameters of the tangent to the bicharacteristic being \begin{equation*} T^h = \sum\limits_{j=1}^3 [A^{hj}]p_j + [A^{h4}], \end{equation*} we have \begin{equation*} \cos(\nu) {|T|}^\frac{1}{2} = \sum\limits_{h=1}^3 \Biggl\{ \sum\limits_{j=1}^3 [A^{hj}]p_j + [A^{h4}] \Biggr\} \cos (n, x^h), \end{equation*} from which, by comparing the two expressions of $dV$, \begin{equation*} \cos(n, x^i)dS= \frac{J_{x \lambda}p_i d\lambda_2 d\lambda_3}{\sum\limits_{h,j=1}^3[A^{hj}]p_j p_h + \sum\limits_{h=1}^3[A^{h4}]p_h} = \frac{- J_{x \lambda} p_i}{[A^{44}] + \sum\limits_{j=1}^3 [A^{j4}]p_j} d\lambda_2 d\lambda_3. \end{equation*} Hence the integral relations read in terms of the $\lambda_i$ as \begin{equation} \begin{split} \label{eq:4.49} & \int \int \int_{V_\eta} \sum\limits_{r=1}^n ([u_r]L^r_s + \sigma^r_s[f_r]) J_{x \lambda} d\lambda_1 d\lambda_2 d\lambda_3 - \int\limits_{0}^{2 \pi} \int\limits_{0}^{\pi} \Biggl\{ \frac{\sum\limits_{i=1}^3 E^i_s J_{x \lambda} p_i}{[A^{44}] + \sum\limits_{i=1}^3 [A^{i4}]p_i}\Biggr\}_{x^4=0}d\lambda_2 d\lambda_3 \\ &= - \int\limits_{0}^{2 \pi} \int\limits_{0}^{\pi} \Biggl\{ \frac{\sum\limits_{i=1}^3 E^i_s J_{x \lambda} p_i}{[A^{44}] + \sum\limits_{i=1}^3 [A^{i4}]p_i}\Biggr\}_{x^4 = x^4_0 - \eta} d\lambda_2 d\lambda_3. \end{split} \end{equation} The previous results prove that the quantities to be integrated are continuous and bounded functions of the variables $\lambda_i$. They read actually as \begin{equation*} (\lambda_1)^2 \sum\limits_{r=1}^n \{ [u_r]L^r_s + \sigma^r_s[f_r] \} \frac{J_{x \lambda}}{(\lambda_1)^2} \; {\rm and} \; (\lambda_1)^2 \sum\limits_{i=1}^3 E^i_s \frac{J_{x \lambda}}{(\lambda_1)^2} \frac{p_i}{T^4}. \end{equation*} $E^i_s$ and $L^r_s$ being given by the equalities $(\ref{eq:4.28})$ and ($\ref{eq:4.29}$), the quantities considered are continuous and bounded in $W$ if the functions $u_r$ and $\frac{\partial u_r}{\partial x^\alpha}$ are continuous and bounded in $D$.
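The second form of $\cos(n, x^i)dS$ rests only on the characteristic condition $\sum_{\lambda,\mu=1}^4 [A^{\lambda \mu}]p_\lambda p_\mu = 0$ with $p_4 = 1$. This algebraic step can be verified symbolically with generic symmetric coefficients (the symbols below are generic placeholders, not the actual coefficients of the system):

```python
import sympy as sp

# Generic symmetric coefficients A^{lm} (indices 1..4 mapped to 0..3) and
# p_4 = 1, as on the characteristic conoid.
p1, p2, p3 = sp.symbols('p1 p2 p3')
p = [p1, p2, p3, sp.Integer(1)]
A = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'A{min(i, j)+1}{max(i, j)+1}'))

# Full characteristic quadratic form sum_{l,m} A^{lm} p_l p_m (vanishes on the conoid).
quad = sp.expand(sum(A[l, m]*p[l]*p[m] for l in range(4) for m in range(4)))

# Denominator appearing in cos(n, x^i) dS, and its claimed rewriting.
den = sum(A[h, j]*p[j]*p[h] for h in range(3) for j in range(3)) \
    + sum(A[h, 3]*p[h] for h in range(3))
target = -(A[3, 3] + sum(A[j, 3]*p[j] for j in range(3)))

# den - target is exactly the quadratic form, hence den = target on the conoid.
print(sp.expand(den - target - quad))  # 0
```

The difference between the two denominators is exactly the characteristic quadratic form, so the substitution is legitimate precisely on the conoid.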
Thus, when $\eta$ approaches zero, the two sides of $(\ref{eq:4.49})$ tend towards a finite limit. In particular, the triple integral tends to the value of this integral taken over the portion $V_0$ of the hypersurface of the conoid $\Sigma_0$ in between the vertex $M_0$ and the initial surface $x^4=0$. Let us evaluate the limit of the double integral on the right-hand side. All terms of the quantity $(\lambda_1)^2E^i_s$ approach zero uniformly with $\lambda_1$, except \begin{equation*} - (\lambda_1)^2 \sum\limits_{r=1}^n \sum\limits_{j=1}^3 [u_r][A^{ij}]\omega^r_s \frac{\partial \sigma}{\partial x^j}, \end{equation*} which tends, when $\lambda_1$ approaches zero, to \begin{equation*} \sum\limits_{r=1}^n \sum\limits_{j=1}^3[u_r(x^\alpha_0)]\delta^j_i \delta^r_s p_j^0 = u_s (x^\alpha_0)p_i^0. \end{equation*} Hence we obtain \begin{equation*} \lim_{\lambda_1 \to 0} \frac{\sum\limits_{i=1}^3 E^i_s J_{x \lambda} p_i}{[A^{44}] + \sum\limits_{i=1}^3 [A^{i4}]p_i}= - u_s(x^\alpha_0) \sin(\lambda_2). \end{equation*} The right-hand side of Eq. ($\ref{eq:4.49}$), when $\eta$ approaches zero, tends therefore to \begin{equation*} \int\limits_{0}^{2 \pi} \int\limits_{0}^{\pi} u_s(x^\alpha_0) \sin(\lambda_2) d\lambda_2 d\lambda_3 = 4 \pi u_s(x^\alpha_0). \end{equation*} Eventually, in the limit $\eta \rightarrow 0$, the Eqs. ($\ref{eq:4.49}$) become \begin{equation} \begin{split} \label{eq:4.50} 4 \pi u_s(x^\alpha_0) &= \int \int \int_{V_0} \sum\limits_{r=1}^n ([u_r]L^r_s + \sigma^r_s[f_r]) J_{x \lambda} d\lambda_1 d\lambda_2 d\lambda_3 \\ & +\int\limits_{0}^{2 \pi} \int\limits_{0}^{\pi} \Biggl\{ \frac{\sum\limits_{i=1}^3 E^i_s J_{x \lambda}p_i}{T^4} \Biggr\}_{x^4=0} d\lambda_2 d\lambda_3, \end{split} \end{equation} known as the \textit{Kirchhoff formulae}. In order to compute their right-hand sides, it will be convenient to take for parameters, on the hypersurface of the conoid $\Sigma_0$, the three independent variables $x^4$, $\lambda_2$, $\lambda_3$.
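The factor $4\pi$ in front of $u_s(x^\alpha_0)$ is simply the full solid angle obtained by integrating $\sin\lambda_2$ over $0 \leq \lambda_2 \leq \pi$, $0 \leq \lambda_3 \leq 2\pi$; a one-line symbolic check:

```python
import sympy as sp

# The double integral of sin(lambda_2) over the parameter ranges of the
# conoid gives the full solid angle 4*pi multiplying u_s(x^alpha_0).
l2, l3 = sp.symbols('lambda_2 lambda_3')
solid_angle = sp.integrate(sp.sin(l2), (l2, 0, sp.pi), (l3, 0, 2*sp.pi))
print(solid_angle)  # 4*pi
```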
Thus, the previous formulae read as \begin{equation} \begin{split} \label{eq:4.51} 4 \pi u_s(x^\alpha_0) &= \int_{x^4_0}^{0} \int_{0}^{2 \pi} \int_{0}^\pi \sum\limits_{r=1}^n ([u_r]L^r_s + \sigma^r_s[f_r]) \frac{J_{x \lambda}}{T^4} dx^4 d\lambda_2 d\lambda_3 \\ & +\int\limits_{0}^{2 \pi} \int\limits_{0}^{\pi} \Biggl\{ \frac{\sum\limits_{i=1}^3 E^i_s J_{x \lambda}p_i}{T^4} \Biggr\}_{x^4=0} d\lambda_2 d\lambda_3. \end{split} \end{equation} The quantity under the triple integral sign is expressed by means of the functions $[u]$ and of the functions $X(\lambda_1, \lambda_2, \lambda_3)$ and $\Omega(\lambda_1, \lambda_2, \lambda_3)$, solutions of the integral equations ($\ref{eq:4.7}$), ($\ref{eq:4.8}$), ($\ref{eq:4.9}$) and ($\ref{eq:4.35}$). We shall obtain the expression of the $X$ and $\Omega$ as functions of the new variables $x^4$, $\lambda_2$, $\lambda_3$ by replacing $\lambda_1$ with its value defined by Eq. ($\ref{eq:4.16}$), a function of $x^4$, $\lambda_2$, $\lambda_3$. These functions satisfy the integral equations \begin{equation*} X(x^4, \lambda_2, \lambda_3)= X_0(x^4_0, \lambda_2, \lambda_3) + \int_{x^4_0}^{x^4} \frac{E(X)}{T^4} d \omega^4, \end{equation*} \begin{equation*} \Omega(x^4, \lambda_2, \lambda_3)= \Omega_0(x^4_0, \lambda_2, \lambda_3) + \int_{x^4_0}^{x^4} \frac{F(X,\Omega)}{T^4} d \omega^4.\end{equation*} The quantity under the double integral sign is expressed by means of the values for $x^4=0$ of the Cauchy data, $[u]$ and $\big{[} \frac{\partial u}{\partial x^\alpha}\big{]}$, and of the values for $x^4=0$ of the functions $X$ and $\Omega$. Thus, it is possible to conclude that: $\mathbf{Conclusion}$ Every solution of the equations \begin{equation*}E_r = \sum\limits_{\lambda,\mu=1}^4 A^{\lambda \mu} \frac{\partial^2 u_r}{\partial x^\lambda \partial x^\mu} + \sum\limits_{s=1}^n \sum\limits_{\mu=1}^4 {{B^s}_r}^\mu \frac{\partial u_s}{\partial x^\mu} + f_r =0, \hspace{1cm} r=1, 2, ..., n,
\end{equation*} continuous, bounded and with first partial derivatives continuous and bounded in $D$, verifies the integral relations ($\ref{eq:4.51}$) if the coordinates $x^\alpha_0$ of $M_0$ satisfy inequalities of the form \begin{equation*} |x^4_0| \leq \epsilon_0, \hspace{1cm} |x^i_0 - \tilde{x}^i_0| \leq d, \end{equation*} defining a domain $D_0 \subset D$. \section{Application of the Results} We are going to establish formulae analogous to ($\ref{eq:4.51}$), verified by the solutions of the given equations $[E]$ at every point of a domain $D_0$ of space-time, where the coefficients will be restricted only by the requirement that they verify certain conditions of normal hyperbolicity and differentiability. Let us consider the system $[E]$ of equations \begin{equation*} E_s = \sum\limits_{\lambda,\mu=1}^4 A^{\lambda \mu} \frac{\partial^2 u_s}{\partial x^\lambda \partial x^\mu} + \sum\limits_{r=1}^n \sum\limits_{\lambda=1}^4 {B^{r \lambda}_s} \frac{\partial u_r}{\partial x^\lambda} + f_s =0, \hspace{0.5cm} s=1, 2, ..., n. \end{equation*} We assume that in the space-time domain $D$, defined by \begin{equation*} |x^4| \leq \epsilon, \hspace{1cm} |x^i - \tilde{x}^i| \leq d, \end{equation*} where the three $\tilde{x}^i$ are given numbers, the equations $[E]$ are of the normal hyperbolic type, i.e. \begin{equation*} A^{44}>0, \; \text{the quadratic form} \; \sum\limits_{i,j=1}^3A^{ij}X_iX_j \; \text{is negative-definite}.
\end{equation*} To every point $M_0(x^\alpha_0)$ of the domain $D$ we can associate, starting from the values $A^{\lambda \mu}(x^\alpha_0)$ of the coefficients $A$, a system of real numbers $a^{\alpha \beta}_0$, algebraic functions of the $A^{\lambda \mu}_0= A^{\lambda \mu}(x^\alpha_0)$, defined and indefinitely differentiable, satisfying the identity \begin{equation*}\sum\limits_{\lambda,\mu=1}^4 A^{\lambda \mu}_0 X_\lambda X_\mu = \bigg{(}\sum\limits_{\alpha=1}^4 a^{4 \alpha}_0 X_\alpha \bigg{)}^2 - \sum\limits_{i=1}^3\bigg{(}\sum\limits_{\alpha=1}^4 a^{i \alpha}_0 X_\alpha \bigg{)}^2. \end{equation*} We shall denote by $a^0_{\alpha \beta}$ the quotient, by the determinant $a_0$ of the elements $a^{\alpha \beta}_0$, of the minor relative to the element $a^{\alpha \beta}_0$ of this determinant. The quantities $a^0_{\alpha \beta}$ are algebraic functions of the $A^{\lambda \mu}_0$, defined and indefinitely differentiable in $D$. The square of the determinant $a_0$ being equal to the absolute value $A$ of the determinant with elements $A^{\lambda \mu}$, $a_0$ is different from zero in $D$. Let us perform the linear change of variables \begin{equation*} y_\alpha \equiv \sum\limits_{\beta=1}^4 a^0_{\alpha \beta} x^\beta. \end{equation*} The partial derivatives of the unknown functions $u_s$ are covariant under such a change of variables, hence the equations $[E]$ read as \begin{equation} \label{eq:4.52} \sum\limits_{\alpha ,\beta=1}^4 A^{*\alpha \beta} \frac{\partial^2 u_s}{\partial y^\alpha \partial y^\beta} + \sum\limits_{r=1}^n \sum\limits_{\alpha=1}^4 {{B_s}^{*r \alpha}} \frac{\partial u_r}{\partial y^\alpha} + f_s =0, \end{equation} with \begin{equation} \label{eq:4.53} A^{* \alpha \beta}= \sum\limits_{\lambda, \mu=1}^4 A^{\lambda \mu} a^0_{\alpha \lambda} a^0_{\beta \mu}, \end{equation} \begin{equation} \label{eq:4.54} B_s^{*r\alpha} = \sum\limits_{\lambda=1}^4 B_s^{r \lambda}a^0_{\alpha \lambda}. \end{equation} The coefficients of Eq. ($\ref{eq:4.52}$) take at the point $M_0$ the values ($\ref{eq:4.11}$).
As a matter of fact: \begin{equation*} \begin{split} A_0^{* \alpha \beta} &= \sum\limits_{\lambda, \mu=1}^4 A^{\lambda \mu}_0 a^0_{\alpha \lambda} a^0_{\beta \mu}= - \sum\limits_{\lambda, \mu, \gamma=1}^4 a_0^{\gamma \lambda } a_0^{\gamma \mu} a^0_{\alpha \lambda} a^0_{\beta \mu} + 2 \sum\limits_{\lambda, \mu=1}^4 a_0^{4 \lambda} a_0^{4 \mu} a^0_{\alpha \lambda} a^0_{\beta \mu} \\ &= - \delta^\beta_\alpha + 2 \delta^4_\alpha \delta^4_\beta, \end{split}\end{equation*} hence one has \begin{equation*} A^{*44}=1, \hspace{0.5cm} A^{*i4}= 0, \hspace{0.5cm} A^{*ij}= - \delta^{ij}. \end{equation*} We can apply to the equations $[E]$, written in the form $(\ref{eq:4.52})$, in the variables $y^\alpha$ and for the corresponding point $M_0$, the results that we obtained before. The integration parameters so introduced will be $y^4$, $\lambda_2$, $\lambda_3$ but, the surface carrying the Cauchy data being always $x^4 = \sum\limits_{\alpha=1}^4 a_0^{\alpha 4} y_\alpha=0$, the integration domains will be determined from $M_0$ and the intersection of this surface with the characteristic conoid of vertex $M_0$. We see that it will be convenient, in order to evaluate these integrals, to choose the variables $y^\alpha$ relative to any point $M_0$ in such a way that the initial space section, $x^4=0$, is a hypersurface $y^4=0$. It will be enough for that purpose to choose the coefficients $a_0^{\alpha \beta}$ in such a way that $a_0^{i4}=0$. We shall then have \begin{equation*} a^0_{4i}=0, \; a^0_{44}=\frac{1}{a_0^{44}}= (A^{44}_0)^{-\frac{1}{2}} \; {\rm and} \; y_4 = a^0_{44} x^4, \end{equation*} where $a^0_{44}$ is a bounded positive number.
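The reduction $A^{*44}=1$, $A^{*i4}=0$, $A^{*ij}=-\delta^{ij}$ at $M_0$ can be illustrated numerically. The sketch below uses hypothetical coefficients $A^{\lambda \mu}_0$ with $A^{44}>0$ and a negative-definite spatial block, builds a factorization $A_0 = a^T \eta\, a$ with $\eta = {\rm diag}(-1,-1,-1,+1)$ through the eigendecomposition (one admissible choice of the $a^{\alpha \beta}_0$, not the only one), takes $a^0_{\alpha \beta}$ as the inverse transpose, and checks that the transformed coefficients reduce to the Minkowski values:

```python
import numpy as np

# Hypothetical coefficients A^{lambda mu}_0 (indices 1..4 -> 0..3) with
# A^{44} > 0 and a negative-definite spatial block; an illustrative choice,
# not taken from the text.
A = np.array([[-2.0,  0.1,  0.0, 0.2],
              [ 0.1, -1.5,  0.0, 0.0],
              [ 0.0,  0.0, -1.0, 0.1],
              [ 0.2,  0.0,  0.1, 1.3]])

# One admissible choice of the a^{alpha beta}_0: factor A = a^T eta a with
# eta = diag(-1,-1,-1,+1), via the eigendecomposition (eigh returns the
# eigenvalues in ascending order, so the three negative ones come first).
w, Q = np.linalg.eigh(A)
eta = np.diag([-1.0, -1.0, -1.0, 1.0])
a = np.diag(np.sqrt(np.abs(w))) @ Q.T
assert np.allclose(a.T @ eta @ a, A)

# a^0_{alpha beta} = cofactor/determinant, i.e. the inverse transpose, so that
# sum_lambda a^{gamma lambda}_0 a^0_{alpha lambda} = delta^gamma_alpha.
a0 = np.linalg.inv(a).T

# Transformed coefficients A*^{alpha beta} = sum A^{lambda mu} a0_{alpha lambda} a0_{beta mu}.
Astar = a0 @ A @ a0.T
print(np.round(Astar, 10))   # diag(-1, -1, -1, +1)

# The remark on the determinant: a_0^2 equals the absolute value of det(A).
print(np.isclose(np.linalg.det(a)**2, abs(np.linalg.det(A))))  # True
```

Any other factorization of the quadratic form (e.g. by completing squares) would serve equally well; only the identity $A_0 = a^T \eta\, a$ is used.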
The application of the results then proves the existence of a domain $D_0 \subset D$, defined by $|x^4_0| \leq \epsilon$, which implies at every point $M_0 \in D_0$, $|y^4_0| \leq \eta$, such that one can write at every point $M_0$ of $D_0$ a Kirchhoff formula whose left-hand side is the value at $M_0$ of the unknown $u_s$, in terms of the quantities $y^\alpha_0 = \sum\limits_{\beta=1}^4 a^0_{\alpha \beta} x_0^\beta$, and whose right-hand side consists of a triple integral and of a double integral. The quantities to be integrated are expressed by means of the functions $X(y^4, \lambda_2, \lambda_3, y^\alpha_0)$ representing $(y^\alpha, p_i, y^j_i, z^j_i, ..., z^j_{ihk})$ and $\Omega(y^4, \lambda_2, \lambda_3)$ representing $(\omega^r_s, ..., \omega^r_{sij})$, solutions of equations of the kind \begin{equation} \label{eq:4.55} X = X_0 + \int\limits_{y^4_0}^{y^4} E^*(X) dY^4, \hspace{0.5cm} \Omega = \Omega_0 + \int\limits_{y^4_0}^{y^4} F^*(X, \Omega) dY^4, \end{equation} where the functions $E^*$ and $F^*$ are the functions $E$ and $F$ considered before, but evaluated starting from the coefficients ($\ref{eq:4.53}$) and ($\ref{eq:4.54}$) and from their partial derivatives with respect to the $y^\alpha$, and where $X_0$, $\Omega_0$ denote the values for $y^4=y^4_0$ of the corresponding functions $X$, $\Omega$. In order to obtain, in a simpler form, some integral equations holding in the whole domain $D_0$, we will take $x^4$ as integration parameter in place of $y^4$. Also, we shall replace those of the auxiliary unknown functions $X$ which are the values (in terms of the three parameters) of the coordinates $y^\alpha$ of a point of the conoid $\Sigma_0$ of vertex $M_0$ with the values of the original coordinates $x^\alpha$ of a point of this conoid. We shall replace, for that purpose, those of the integral equations which have $y^\alpha$ on the left-hand side with their linear combinations of coefficients $a^{\alpha \beta}_0$, i.e.
with the equations of the same kind \begin{equation*} \sum\limits_{\beta=1}^4 a^{\alpha \beta}_0 y^\beta = x^\alpha = x^\alpha_0 + \int\limits_{x^4_0}^{x^4} \sum\limits_{\beta=1}^4 a^{\alpha \beta}_0 \frac{T^{*\beta}}{T^{*4}} a^0_{44} d\omega^4, \end{equation*} and we will express the quantities under the integration signs of all our equations in terms of the $x^\alpha$ in place of the $y^\beta$, by replacing in these equations the $y^\beta$ with the linear combinations $\sum\limits_{\alpha=1}^4 a^0_{\beta \alpha}x^\alpha$. The system of integral equations obtained in such a way has, for every point $M_0$ of the domain $D$, solutions which are of the form $X(x^\alpha_0, x^4, \lambda_2, \lambda_3)$. At this stage of our argument, we are able to consider a more complex case, namely the study of non-linear systems of partial differential equations. Since this is the aim of the next Chapter, we state here the results of our study of linear systems of partial differential equations which will be useful for that purpose. $\mathbf{Conclusion.}$ Every solution of Eqs. $[E]$, possessing in $D$ first partial derivatives with respect to the $x^\alpha$ continuous and bounded, verifies, if $x^\alpha_0$ are the coordinates of a point $M_0$ of the domain $D_0$ defined by \begin{equation*} |x^4_0| \leq \epsilon_0 \leq \epsilon; \hspace{2cm} |x^i_0 - \tilde{x}^i | \leq d_0 \leq d, \end{equation*} some Kirchhoff formulae whose left-hand sides are the values at the point $M_0$ of the unknown functions $u_s$ and whose right-hand sides consist of a triple integral in the parameters $x^4$, $\lambda_2$ and $\lambda_3$, and of a double integral in the parameters $\lambda_2$ and $\lambda_3$.
The quantities to be integrated are expressed by means of the functions $X(x^\alpha_0, x^4, \lambda_2, \lambda_3)$ and $\Omega(x^\alpha_0, x^4, \lambda_2, \lambda_3)$, themselves solutions of given integral equations ($\ref{eq:4.55}$), and of the unknown functions $[u_s]$; the quantity under the double integral sign, which is taken for the zero value of the $x^4$ parameter, contains, besides the previous functions, the first partial derivatives of the unknown functions $\big{[}\frac{\partial u_s}{\partial x^\alpha} \big{]}$ (values on $\Sigma_0$ of the Cauchy data). We obtain in such a way a system of integral equations verified in $D_0$ by the solutions of Eqs. $[E]$. We write this system in the following reduced form \cite{foures1952theoreme}: \begin{equation*} X= X_0 + \int\limits_{x^4_0}^{x^4} E(X) d\omega^4, \end{equation*} \begin{equation*} 4 \pi U= \int\limits_{x^4_0}^0 \int\limits_{0}^{2 \pi} \int\limits_{0}^{\pi} H dx^4 d\lambda_2 d \lambda_3 + \int\limits_{0}^{2 \pi} \int\limits_{0}^{\pi} I d\lambda_2 d\lambda_3. \end{equation*} \chapter{Linear System from a Non-linear Hyperbolic System} \chaptermark{Non-linear hyperbolic systems} \epigraph{Curiouser and curiouser.}{Lewis Carroll, Alice's Adventures in Wonderland and Through the Looking-Glass} At this point of our analysis, we focus our attention on non-linear hyperbolic systems of partial differential equations. We will prove that it is possible to begin with a non-linear system and turn it into a linear system for which the results obtained in the previous chapter hold. In particular, we consider a \textit{non-linear} system $[F]$ of $n$ second-order partial differential equations, with $n$ unknown functions and four variables, of the following type: $$ \sum\limits_{\lambda, \mu=1}^4 A^{\lambda \mu} \frac{\partial^2 W_s}{\partial x^\lambda \partial x^\mu} + f_s =0, \hspace{3.5cm} s=1, 2, ..., n. 
\hspace{2cm} [F]$$ The coefficients $A^{\lambda \mu}$, which are the same for the $n$ equations, and $f_s$ are given functions of the four variables $x^\alpha$, of the unknown functions $W_s$ and of their first derivatives $\frac{\partial W_s}{\partial x^\alpha}$. The calculations made in the previous chapter for the linear equations $[E]$ remain valid for the non-linear equations $[F]$: it suffices to regard in these calculations the functions $W_s$ as functions of the four variables $x^\alpha$; the coefficients $A^{\lambda \mu}$ and $f_s$ are then functions of these four variables and the previous calculations hold, provided that, in all formulae where partial derivatives of the coefficients with respect to the $x^\alpha$ occur, these differentiations are regarded as having been carried out. However, we will not apply the results of the previous chapters directly to the equations $[F]$; instead, we are going to show that, by differentiating five times with respect to the variables $x^\alpha$ the given equations $[F]$, and by applying to the resulting equations the result of Chapter 4, one obtains a system of integral equations whose left-hand sides are the unknown functions $W_s$, their partial derivatives with respect to the $x^\alpha$ up to the fifth order and some auxiliary functions $X$, $\Omega$, and whose right-hand sides contain only these functions and the integration parameters. Then, in order to solve the Cauchy problem for the non-linear equations $[F]$, we will try to solve, independently of these equations, the system of integral equations verified by their solutions. 
Unfortunately, some difficulties arise in solving this system: we have seen in the previous chapter that the quantities occurring under the integral sign are continuous and bounded upon assuming differentiability of the coefficients $A^{\lambda \mu}$, viewed as given functions of the variables $x^\alpha$; these conditions are no longer guaranteed when the functions $W_s$, $W_{s \alpha}$, ..., $U_S$ are regarded as independent, and the quantity $[A^{ij}] \frac{\partial^2 \sigma}{\partial x^i \partial x^j}J_{x \lambda}$ will then fail to be bounded and continuous. Moreover, to pursue our purpose, we will have to pass through the intermediate stage of approximate equations $[F_1]$, where the coefficients $A^{\lambda \mu}$ will be some given functions of the $x^\alpha$. We will then be in a position to solve the integral equations, to show that their solution is a solution of the equations $[F_1]$, and to show that $W_{s \alpha}$, ..., $U_S$ are the partial derivatives of $W_s$; but we will see that the solution $W_s$ obtained is only five times differentiable, so that the method we are going to use is applicable only if the $A^{\lambda \mu}$ depend only on the $W_s$ and not on the $W_{s \alpha}$: it will then be enough to assume that the approximation function is five times differentiable. Finally, we will solve the Cauchy problem for the system $[G]$ $$ \sum\limits_{\lambda, \mu=1}^4 A^{\lambda \mu} \frac{\partial^2 W_s}{\partial x^\lambda \partial x^\mu} + f_s =0, \hspace{3.5cm} s=1, 2, ..., n. \hspace{2cm} [G] $$ where the coefficients $A^{\lambda \mu}$ do not depend on the first partial derivatives of the unknown functions. It will be enough to apply the results of Chapter 4 to the equations $[G']$ deduced from the equations $[G]$ by four differentiations with respect to the variables $x^\alpha$ in order to obtain a system of integral equations whose right-hand sides do not contain other functions than those occurring on the left-hand sides. 
The integral equations $[J]$, verified by the bounded solutions, with bounded first derivatives, of equations $[G']$, will only involve the coefficients $A^{\lambda \mu}$ and $B^{T \lambda}_S$ and their partial derivatives up to the orders four and two, respectively, as well as the coefficients $F_S$. In order to solve the system of integral equations $[J]$ directly, we would clearly face the same difficulty as in the general case: the quantity under the sign of triple integral is not bounded in general if $W_s$, $W_{s \alpha}$, ..., $U_S$ are independent functions. We shall be able however, in the case in which the $A^{\lambda \mu}$ do not depend on the first derivatives of the $W_s$, to solve the Cauchy problem by using the results obtained on the system of integral equations verified in a certain domain by the solutions of the given equations $[G]$, by considering a system $[G_1]$, which is the approximate version of $[G]$. This system is obtained by substitution in the $A^{\lambda \mu}$ of the $W_s$ with their approximate values $W_s^{(1)}$. We will prove that the system of integral equations $[J_1]$, verified by the solutions of the Cauchy problem assigned with respect to the equations $[G_1]$, admits of a unique, continuous and bounded solution in a domain $D$. Then, we will prove that the solutions of $[J_1]$ are solutions of the Cauchy problem given for the equations $[G_1]$ in the whole domain $D$, and that the functions $W_s$ obtained admit of partial derivatives up to the fourth order equal to $W_{s \alpha}$, ..., $U_S$. Finally, since the solution of the Cauchy problem given for the equations $[G_1]$ defines a mapping of the space of the functions $W_s^{(1)}$ into itself, we will prove that this mapping admits a fixed point belonging to the space. The corresponding $W_s$ are solutions of the given equations $[G]$. This solution is unique and possesses partial derivatives continuous and bounded up to the fourth order. 
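The fixed-point argument just outlined can be summarized schematically; the symbol $\mathcal{T}$ for the solution mapping is our shorthand and does not occur in the original text:

```latex
% Solving the Cauchy problem for the approximate system [G_1], built
% from a given approximation W^{(1)}, produces new functions W_s; this
% defines a mapping T of the function space into itself, and a fixed
% point of T is a solution of [G].
\begin{equation*}
W_s^{(1)} \longmapsto \mathcal{T}\bigl(W_s^{(1)}\bigr) = W_s , \qquad
\mathcal{T}(W_s) = W_s \; \Longrightarrow \; W_s \ \text{solves} \ [G].
\end{equation*}
```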
\section{The Equations $[F]$} Let us consider the system of $n$ second-order partial differential equations, with $n$ unknown functions and four variables $$ \sum\limits_{\lambda, \mu=1}^4 A^{\lambda \mu} \frac{\partial^2 W_s}{\partial x^\lambda \partial x^\mu} + f_s =0, \hspace{3.5cm} s=1, 2, ..., n. \hspace{2cm} [F]$$ We assume that, in a space-domain $D$ centered at the point $\bar{M}$ with coordinates $(\bar{x}^i, 0)$ and defined by \begin{equation*} |x^i - \bar{x}^i | \leq d, \hspace{2cm} |x^4| \leq \epsilon \end{equation*} and for values of the unknown functions $W_s$ and their first partial derivatives satisfying \begin{equation} \label{eq:5.1} |W_s - \bar{W}_s | \leq l, \hspace{2cm} \bigg{|} \frac{\partial W_s}{\partial x^\alpha} -\frac{\overline{\partial W_s}}{\partial x^\alpha} \bigg{|} \leq l, \end{equation} where $\bar{W}_s$ and $\frac{\overline{\partial W_s}}{\partial x^\alpha}$ are the values of the functions $W_s$ and $\frac{\partial W_s}{\partial x^\alpha}$ at the point $\bar{M}$, the coefficients $A^{\lambda \mu}$ and $f_s$ admit of partial derivatives with respect to all their arguments up to the fifth order. We shall then obtain, by differentiating five times the equations $[F]$ with respect to the variables $x^\alpha$, a system of $N$ equations, where $N$ is the product by $n$ of the number of derivatives of order five of a function of four variables, verified in the domain $D$ by the solutions of the equations $[F]$ which satisfy the inequalities $(\ref{eq:5.1})$ and possess derivatives with respect to the $x^\alpha$ up to the seventh order. Let us write this system of $N$ equations. 
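For concreteness (this count is ours, not stated explicitly in the text), the number of distinct partial derivatives of order five of a function of four variables equals the number of multisets of size five drawn from four symbols, so that

```latex
\begin{equation*}
N = n \binom{5+4-1}{5} = n \binom{8}{5} = 56\, n ,
\end{equation*}
```

since a fifth-order derivative is determined by how many times each of the four variables occurs in the differentiation.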
We set \begin{equation*} \frac{\partial W_s}{\partial x^\alpha}=W_{s \alpha}, \hspace{2cm} \frac{\partial^2 W_s}{\partial x^\alpha \partial x^\beta}= W_{s \alpha \beta} \end{equation*} and we denote by $U_S$ the partial derivatives of order five of $W_s$ \begin{equation*} \frac{\partial^5 W_s}{\partial x^\alpha \partial x^\beta \partial x^\gamma \partial x^\delta \partial x^\epsilon} = W_{s \alpha \beta \gamma \delta \epsilon} =U_S, \hspace{2cm} S=1, 2, ..., N. \end{equation*} Let us differentiate the given equations $[F]$ with respect to any whatsoever of the variables $x^\alpha$; we obtain $n$ equations of the form \begin{equation*} \begin{split} & A^{\lambda \mu} \frac{\partial^2 W_{s \alpha}}{\partial x^\lambda \partial x^\mu} + \Biggl\{\frac{\partial A^{\lambda \mu}}{\partial W_r}W_{r \alpha} + \frac{\partial A^{\lambda \mu}}{\partial W_{r \nu}}\frac{\partial W_{r \nu}}{\partial x^\alpha} + \frac{\partial A^{\lambda \mu}}{\partial x^\alpha} \Biggr\} \frac{\partial W_{s \mu}}{\partial x^\lambda} + \frac{\partial f_s}{\partial W_r}W_{r \alpha} \\ &+ \frac{\partial f_s}{\partial W_{r \nu}}\frac{\partial}{\partial x^\alpha}W_{r \nu} + \frac{\partial f_s}{\partial x^\alpha}=0. 
\end{split} \end{equation*} If we differentiate the previous equations four times, we obtain the following system of $N$ equations: \begin{equation} \begin{split} \label{eq:5.2}& A^{\lambda \mu} \frac{\partial^2 W_{s \alpha \beta \gamma \delta \epsilon}}{\partial x^\lambda \partial x^\mu} + \Biggl\{ \frac{\partial A^{\lambda \mu}}{\partial W_r}W_{r \alpha} + \frac{\partial A^{\lambda \mu}}{\partial W_{r \nu}}W_{r \nu \alpha} + \frac{\partial A^{\lambda \mu}}{\partial x^\alpha} \Biggr\} \frac{\partial}{\partial x^\lambda} W_{s \beta \gamma \delta \epsilon \mu} \\ & + \Biggl\{ \frac{\partial A^{\lambda \mu}}{\partial W_r}W_{r \beta} + \frac{\partial A^{\lambda \mu}}{\partial W_{r \nu}}W_{r \nu \beta} +\frac{\partial A^{\lambda \mu}}{\partial x^\beta} \Biggr\} \frac{\partial}{\partial x^\lambda} W_{s \alpha \gamma \delta \epsilon \mu} \dots \\ &+ \Biggl\{ \frac{\partial A^{\lambda \mu}}{\partial W_r}W_{r \epsilon} + \frac{\partial A^{\lambda \mu}}{\partial W_{r \nu}}W_{r \nu \epsilon} +\frac{\partial A^{\lambda \mu}}{\partial x^\epsilon} \Biggr\} \frac{\partial}{\partial x^\lambda} W_{s \alpha \beta \gamma \delta \mu} + \frac{\partial A^{\lambda \mu}}{\partial W_{r \nu}} \frac{\partial W_{r \nu \alpha \beta \gamma \delta}}{\partial x^\epsilon} \\ & + \frac{\partial f_s}{\partial W_{r \nu}}\frac{\partial W_{r \nu \alpha \beta \gamma \delta}}{\partial x^\epsilon} + F_S=0, \end{split} \end{equation} where $F_S$ is a function of the variables $x^\alpha$, of the unknown functions $W_s$ and of their partial derivatives up to the fifth order included, but not of the derivatives of higher order. The fifth derivatives $U_S$ of the functions $W_s$ satisfy therefore, in the domain $D$ and under the conditions specified, a system of $N$ equations $[F']$ of the following type: \begin{equation} \label{eq:5.3} A^{\lambda \mu} \frac{\partial^2U_S}{\partial x^\lambda \partial x^\mu} + B_S^{T \lambda} \frac{\partial U_T}{\partial x^\lambda} + F_S=0. 
\end{equation} The coefficients $A^{\lambda \mu}$, $B_S^{T \lambda}$ and $F_S$ of these equations are polynomials of the coefficients $A^{\lambda \mu}$ and $f_s$ of the given equations $[F]$ and of their partial derivatives with respect to all arguments up to the fifth order, as well as of the unknown functions $W_s$ and of their partial derivatives with respect to the $x^\alpha$ up to the fifth order. The coefficients $A^{\lambda \mu}$ depend only on the variables $x^\alpha$, the unknown functions $W_s$ and their first partial derivatives $W_{s \alpha}$, whereas the coefficients $B_S^{T \lambda}$ depend only on the variables $x^\alpha$, on the unknown functions $W_s$ and on their first and second partial derivatives $W_{s \alpha}$ and $W_{s \alpha \beta}$. Thus, we apply the result of the previous chapter to Eqs. ($\ref{eq:5.3}$), which form a system of $N$ linear second-order equations in the unknown functions $U_S$. We obtain a system of integral equations whose left-hand sides will be some auxiliary functions $\Omega$, $X$, and the unknown functions $U_S$, whereas their right-hand sides have, under the integral sign, quantities expressed by means of the auxiliary functions $X$, of the unknown functions $U_S$ and of the value for $x^4=0$ of their first partial derivatives $\frac{\partial U_S}{\partial x^\alpha}$, of the integration parameters, as well as of the coefficients $A^{\lambda \mu}$, $B^{T \lambda}_S$ and $F_S$ and of their partial derivatives up to the orders four, three and zero, respectively. 
Since $A^{\lambda \mu}$, $B^{T \lambda}_S$ and $F_S$ involve the partial derivatives of the functions $W_s$ only up to the orders one, two and five, respectively, the right-hand sides of the integral equations considered contain, besides the auxiliary functions $X$, $\Omega$, the functions $U_S$ and the value for $x^4=0$ of their first derivatives, and the integration parameters, nothing but the unknown functions $W_s$ and their partial derivatives up to the fifth order included. If the functions $W_s$ and their partial derivatives up to the fifth order $W_{s\alpha}$, $W_{s\alpha\beta}$, ..., $W_{s \alpha \beta \gamma \delta \epsilon}=U_S$ are continuous and bounded in the spacetime domain $D$ defined by $|x^i - \bar{x}^i| \leq d$, $|x^4| \leq \epsilon$, they verify in this domain the integral relations \begin{equation} \begin{split}\label{eq:5.4} &W_s(x^\alpha)= \int_0^{x^4} W_{s 4} (x^i, t) dt + W_s(x^i, 0), \\ & \dots \\ &W_{s \alpha \beta \gamma \delta \epsilon} (x^\alpha)= \int_0^{x^4} W_{s \alpha \beta \gamma \delta 4} (x^i, t) dt + W_{s\alpha \beta \gamma \delta \epsilon}(x^i, 0). \end{split} \end{equation} By adjoining this system to the system of integral equations, we obtain a system of integral equations, verified, under certain assumptions, by the solutions of the given equations $[F]$, whose right-hand sides contain only the functions occurring on the left-hand sides. We search for solutions $W_s$ of the equations $[F]$ which take, as well as their first partial derivatives, some values given in a domain $(d)$ of the initial hypersurface $x^4=0$: \begin{equation*} W_s(x^i, 0)= \varphi_s(x^i), \hspace{2cm} \frac{\partial W_s}{\partial x^4}(x^i, 0)= \psi_s(x^i); \end{equation*} where $\varphi_s$ and $\psi_s$ are given functions of the three variables $x^i$ in the domain $(d)$. 
We will prove that the data $\varphi_s$ and $\psi_s$ determine the values in $(d)$ of the partial derivatives up to the sixth order of the solution $W_s$ of the equations $[F]$. $\mathbf{Assumptions}$ \begin{description} \item[(1)] In the domain $(d)$, defined by \begin{equation*} |x^i - \bar{x}^i| \leq d, \end{equation*} the functions $\varphi_s$ and $\psi_s$ admit of partial derivatives continuous and bounded with respect to the three variables $x^i$ and satisfy the inequalities \begin{equation} \label{eq:5.5} |\varphi_s - \bar{\varphi}_s | \leq l_0 \leq l, \; \; | \psi_s - \bar{\psi}_s| \leq l_0 \leq l, \; \; \bigg{|} \frac{\partial \varphi_s}{\partial x^i} - \frac{\overline{\partial \varphi_s}}{\partial x^i}\bigg{|} \leq l_0 \leq l. \end{equation} \item[(2)] In the domain $(d)$ and for values of the functions \begin{equation*} W_s= \varphi_s, \hspace{0.5cm} \frac{\partial W_s}{\partial x^4}= \psi_s , \hspace{0.5cm} \frac{\partial W_s}{\partial x^i}= \frac{\partial \varphi_s}{\partial x^i}, \end{equation*} satisfying the inequalities ($\ref{eq:5.5}$), the coefficients $A^{\lambda \mu}$ and $f_s$ have partial derivatives continuous and bounded with respect to all their arguments, up to the fifth order. \item[(3)] In the domain $(d)$ and for the functions $\varphi_s$ and $\psi_s$ considered, the coefficient $A^{44}$ is different from zero. \end{description} It follows from the assumption $\textbf{(1)}$ that the values in $(d)$ of the partial derivatives up to the sixth order, involving at most one differentiation with respect to $x^4$, of the solutions $W_s$ of the assigned Cauchy problem are equal to the corresponding partial derivatives of the functions $\varphi_s$ and $\psi_s$, and they are continuous and bounded in $(d)$. 
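Assumption (3) can be made concrete (the displayed rearrangement is ours): since $A^{44} \neq 0$, the equations $[F]$ evaluated on $x^4=0$ can be solved for the second $x^4$-derivative,

```latex
\begin{equation*}
W_{s 44} = -\frac{1}{A^{44}} \Biggl( \sum_{(\lambda, \mu) \neq (4,4)} A^{\lambda \mu} W_{s \lambda \mu} + f_s \Biggr),
\end{equation*}
```

where every quantity on the right-hand side is expressed in $(d)$ through the Cauchy data $\varphi_s$, $\psi_s$ and their derivatives.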
The values in $(d)$ of the partial derivatives up to the sixth order of the functions $W_s$, involving more than one differentiation with respect to $x^4$, are expressed in terms of the previous ones, of the coefficients $A^{\lambda \mu}$ and $f_s$ of the equations $[F]$ and of their partial derivatives up to the fourth order. Moreover, from the assumption $\textbf{(3)}$, it follows that the equations $[F]$ make it possible to evaluate (being given within $(d)$ the values of the functions $W_s$, $W_{s \alpha}$, $W_{s \alpha i}$) the value in $(d)$ of $W_{s 44}$, from which one deduces by differentiation the value in $(d)$ of the partial derivatives corresponding to two differentiations with respect to $x^4$. The equations that are derivatives of the equations $[F]$ with respect to the variables $x^\alpha$, up to the fourth order, make it possible to evaluate in $(d)$ the values of the partial derivatives up to the sixth order of the functions $W_s$. It follows from the three previous assumptions that all the functions obtained are continuous and bounded in $(d)$. We shall set \begin{equation*} W_{s j}(x^i,0)=\varphi_{s j}(x^i), \hspace{0.5cm} U_S(x^i,0)=\Phi_S(x^i), \hspace{0.5cm} \frac{\partial U_S}{\partial x^4}(x^i,0)=\Psi_S(x^i). \end{equation*} At this stage, it is useful to summarize the assumptions made and the results obtained. 
$\mathbf{Assumptions}$ \begin{description} \item[(a)] In the domain $D$ defined by $|x^i - \bar{x}^i | \leq d$, $|x^4| \leq \epsilon$ and for values of the unknown functions satisfying \begin{equation*}|W_s - \bar{\varphi}_s | \leq l, \hspace{0.5cm} \bigg{|} \frac{\partial W_s}{\partial x^i} - \frac{\overline{\partial \varphi_s}}{\partial x^i} \bigg{|} \leq l, \hspace{0.5cm} \bigg{|} \frac{\partial W_s}{\partial x^4} - \bar{\psi}_s \bigg{|} \leq l;\end{equation*} \item[(b)] In the domain of the initial surface $x^4=0$, defined by $|x^i - \bar{x}^i | \leq d$, the Cauchy data $\varphi_s$ and $\psi_s$ admit of partial derivatives continuous and bounded up to the orders six and five, respectively. \end{description} It follows from the assumption $\textbf{(a)}$ that the coefficients $A^{\lambda \mu}$ and $f_s$ have partial derivatives with respect to all their arguments up to the fifth order, continuous and bounded, the derivatives of order five satisfying some Lipschitz conditions. Furthermore, the quadratic form $A^{\lambda \mu}X_\lambda X_\mu$ is of normal hyperbolic type, i.e. $A^{44} >0$ and the form $A^{ij}X_iX_j$ is negative definite. In conclusion, we have seen that if we consider a solution $W_s$ of the assigned Cauchy problem, seven times differentiable, possessing partial derivatives with respect to the $x^\alpha$ up to the sixth order, continuous and bounded, and satisfying the inequalities $(\ref{eq:5.1})$ in $D$, it satisfies in this domain the equations $(\ref{eq:5.3})$, which are linear equations in the unknown functions $U_S$. These equations satisfy the assumptions of Chapter 4 and therefore there exists a domain $D_0 \subset D$ in which the functions $W_s$ verify the following system of integral equations. 
This system consists of equations having the form \begin{description} \item[(1)] \begin{equation*} X= X_0 + \int_{x^4_0}^{x^4} E(X) d\omega^4 \end{equation*} where $X$ is a function of the three parameters $x^4$, $\lambda_2$ and $\lambda_3$, representatives of a point of the characteristic conoid of vertex $M_0(x_0)$, and of the four coordinates $x^\alpha_0$ of a point $M_0 \in D_0$. These functions $X$ are the functions $x^i$, $p_i$, $y^j_i$, $z^j_i$, $y^j_{ih}$, $z^j_{ih}$, $y^j_{ihk}$, $z^j_{ihk}$, whereas $X_0$ is the value of $X$ for $x^4=x^4_0$ and it is a given function of $x^\alpha_0$, $\lambda_2$, $\lambda_3$. \item[(2)] \begin{equation*} \Omega = \Omega_0 + \int_{x^4_0}^{x^4} F(X, \Omega) d\omega^4, \end{equation*} where $\Omega$ is a function of $x^\alpha_0$, $x^4$, $\lambda_2$ and $\lambda_3$. These functions are $\omega^r_s$, $\omega^r_{si}$ and $\omega^r_{sij}$, whereas $\Omega_0$ is the value of $\Omega$ for $x^4=x^4_0$ and it is a function of $x^\alpha_0$, $\lambda_2$ and $\lambda_3$. \item[(3)] \begin{equation*} W= W_0 + \int_{0}^{x^4} G(W, U) d\omega^4, \end{equation*} where $W$ is a function of the four coordinates $x^\alpha$ of a point $M \in D$. These functions are $W_s$, $W_{s \alpha}$, $W_{s \alpha \beta}$, $W_{s \alpha \beta \gamma}$ and $W_{s \alpha \beta \gamma \delta}$, whereas $W_0$ is the value of $W$ for $x^4=0$ and it is a given function of the three variables $x^i$. \item[(4)] \begin{equation*} 4 \pi U = \int_{x^4_0}^{0} \int_{0}^{2 \pi}\int_{0}^{\pi} H d\omega^4 d\lambda_2 d\lambda_3 + \int_{0}^{2 \pi}\int_{0}^{\pi} I d\lambda_2 d\lambda_3, \end{equation*} the Kirchhoff formulae, where $U$ is a function of the four coordinates $x^\alpha_0$ of a point $M_0 \in D_0$. These functions are the functions $U_S$. \end{description} The quantities $E$, $F$, $G$, $H$ and $I$ are formally identical to the corresponding quantities evaluated for the equations $[E]$. The quantity $G$ is a function $W$ or a function $U$. 
All these quantities are therefore expressed by means of the functions $X$, $\Omega$, $W$ and $U$, occurring on the left-hand sides of the integral equations considered, and involve the partial derivatives of the $A^{\lambda \mu}$ and $f_s$ with respect to all their arguments, up to the fifth order, and the partial derivatives of the Cauchy data $\varphi_s$ and $\psi_s$ up to the orders six and five, which occur in the quantity $I$ and through $W_0$. Let us now try to solve the system of integral equations verified by the solutions of the non-linear equations $[F]$. We have seen in Chapter 4 that the quantities occurring under the integral sign, in particular $H$, are continuous and bounded upon assuming differentiability of the coefficients $A^{\lambda \mu}$, viewed as given functions of the variables $x^\alpha$; these conditions are no longer guaranteed when the functions $W_s$, $W_{s \alpha}$, ..., $U_S$ are regarded as independent, and the quantity $[A^{ij}]\frac{\partial^2 \sigma}{\partial x^i \partial x^j} J_{x \lambda}$ will then fail to be bounded and continuous. It is possible to overcome this difficulty on the way towards solving the Cauchy problem by passing through the intermediate stage of approximate equations $[F_1]$, where the coefficients $A^{\lambda \mu}$ are some given functions of the $x^\alpha$, obtained by replacing $W_s$ with a given function $W_s^{(1)}$. The quantities occurring under the integration signs of the integral equations verified by the solutions will be continuous and bounded if the same holds for the functions $W_s$, ..., $U_S$ considered as independent. We will then be in a position to solve the integral equations and show that their solution $W_s$, ..., $U_S$ is a solution of the equations $[F_1]$, and that $W_{s \alpha}$, ..., $U_S$ are the partial derivatives of $W_s$; but we need for that purpose to take as a function $W_s^{(1)}$ a function six times differentiable, because the integral equations involve the fifth derivatives of the $A^{\lambda \mu}$. 
Since the obtained solution $W_s$ is merely five times differentiable, it will be impossible for us to iterate the procedure. The method described is therefore applicable only if the $A^{\lambda \mu}$ depend only on the $W_s$ and not on their first derivatives with respect to the $x^\alpha$. Hence, from now on, it will be enough for us to assume that the approximation function is five times differentiable. In the general case, where the $A^{\lambda \mu}$ are functions of $W_s$ and $W_{s \alpha}$, it is possible to solve the Cauchy problem by passing through the intermediate step of approximate forms not of the equations $[F]$ themselves, but of equations previously differentiated with respect to the $x^\alpha$ and viewed as integro-differential equations in the unknown functions $W_{s \alpha}$. \section{Solution of the Cauchy problem for the system $[G]$ in which the coefficients $A^{\lambda \mu}$ do not depend on first partial derivatives of the unknown functions} Following Bruhat \cite{foures1952theoreme}, we will now present the solution of the Cauchy problem for the system $[F]$ when the coefficients $A^{\lambda \mu}$ depend only on the variables $x^\alpha$ and on the functions $W_s$, but not on their first derivatives with respect to the $x^\alpha$, i.e. the $W_{s \alpha}$. Let us consider the system $[G]$ of $n$ partial differential equations of second order with $n$ unknown functions and four variables $$ A^{\lambda \mu} \frac{\partial^2 W_s}{\partial x^\lambda \partial x^\mu} + f_s =0, \hspace{8cm} [G] $$ where the coefficients $A^{\lambda \mu}$ depend only on the variables $x^\alpha$ and on the unknown functions $W_s$, and not on the first partial derivatives $W_{s \alpha}$ of these functions. The coefficients $f_s$ are functions of the variables $x^\alpha$, of the unknown functions $W_s$ and of their first partial derivatives $W_{s \alpha}$. 
We shall obtain a system of integral equations verified by the solutions of the equations $[G]$ by applying the methods used for the equations $[F]$. Since the $A^{\lambda \mu}$ do not contain the $W_{s \alpha}$, it will be enough to apply the results of Chapter 4 to the equations $[G']$ deduced from the $[G]$ by four differentiations with respect to the variables $x^\alpha$ in order to obtain a system of integral equations whose right-hand sides do not contain other functions than those which occur on the left-hand sides. If we denote by $U_S$ any whatsoever of the fourth derivatives of the unknown functions $W_s$, the equations obtained with the previous calculations read as $$A^{\lambda \mu} \frac{\partial^2 U_S}{\partial x^\lambda \partial x^\mu} + B^{T \lambda}_S \frac{\partial U_T}{\partial x^\lambda} + F_S =0. \hspace{6cm} [G']$$ The $A^{\lambda \mu}$ depend only on the variables $x^\alpha$ and the functions $W_{s}$. The $B^{T \lambda}_S$, which are sums of first partial derivatives of the functions $A^{\lambda \mu}$, viewed as functions of the variables $x^\alpha$, and of first partial derivatives of a function $f_s$ with respect to the first partial derivatives $W_{s \alpha}$ of the unknown functions, depend on the variables $x^\alpha$, on the unknown functions $W_s$ and on their first partial derivatives $W_{s \alpha}$. Finally, $F_S$ is a polynomial of the coefficients $A^{\lambda \mu}$, of $f_s$ and of their partial derivatives with respect to all their arguments up to the fourth order, as well as of the functions $W_s$ and of their partial derivatives with respect to the variables $x^\alpha$ up to the fourth order. In order to solve the Cauchy problem we proceed as follows. 
$\mathbf{Assumptions}$ \begin{description} \item[(1)] In the domain $D$, defined by $|x^i - \bar{x}^i | \leq d$, $|x^4| \leq \epsilon$, and for values of the unknown functions satisfying \begin{equation} \label{eq:5.6} |W_s - \bar{\varphi}_s| \leq l, \hspace{0.5cm}\bigg{|} \frac{\partial W_s}{\partial x^i} - \frac{\overline{\partial \varphi_s}}{\partial x^i} \bigg{|} \leq l, \hspace{0.5cm} \bigg{|} \frac{\partial W_s}{\partial x^4} - \bar{\psi}_s \bigg{|} \leq l, \end{equation} one has that \begin{description} \item[(a)] The coefficients $A^{\lambda \mu}$ and $f_s$ admit partial derivatives with respect to all their arguments up to the fourth order, continuous, bounded and satisfying Lipschitz conditions. \item[(b)] The quadratic form $A^{\lambda \mu}X_\lambda X_\mu$ is of normal hyperbolic type, i.e. $A^{44} >0$ and $\sum_{i, j=1}^3 A^{ij}X_i X_j$ is negative-definite. \end{description} \item[(2)] In the domain $(d)$ of the initial surface, defined by $|x^i - \bar{x}^i | \leq d$, the Cauchy data $\varphi_s$ and $\psi_s$ possess partial derivatives continuous and bounded up to the orders five and four, respectively, satisfying some Lipschitz conditions. \end{description} The integral equations $[J]$, verified by the bounded solutions, with bounded first derivatives, of equations $[G']$, only involve the coefficients $A^{\lambda \mu}$ and $B^{T \lambda}_S$ and their partial derivatives up to the orders four and two respectively, as well as the coefficients $F_S$. These equations $[J]$ contain no partial derivatives of the functions $W_s$ of order higher than four. When we try to solve the system of integral equations $[J]$ directly, we see that the quantity $H$ under the sign of triple integral is not bounded in general if $W_s$, $W_{s \alpha}$, ... $U_S$ are independent functions. 
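The normal hyperbolicity required in assumption (1)(b) can be illustrated by a standard example of ours, not taken from the text: the ordinary wave operator, for which

```latex
\begin{equation*}
(A^{\lambda \mu}) = \mathrm{diag}(-1,-1,-1,1), \qquad
A^{\lambda \mu} X_\lambda X_\mu = (X_4)^2 - \sum_{i=1}^{3} (X_i)^2 ,
\end{equation*}
```

so that $A^{44} = 1 > 0$ and the form $\sum_{i,j=1}^3 A^{ij} X_i X_j = -\sum_{i=1}^3 (X_i)^2$ is negative definite.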
We shall be able however, in the case in which the $A^{\lambda \mu}$ do not depend on the $W_{s \alpha}$, to solve the Cauchy problem by using the results obtained on the system of integral equations verified in a certain domain by the solutions of the given equations $[G]$. We shall consider a system $[G_1]$, which is the approximation of $[G]$, obtained by replacing in $A^{\lambda \mu}$, and not in $f_s$, the unknowns $W_s$ with some approximate values $W_s^{(1)}$ which admit of partial derivatives ${W_{s \alpha}}^{(1)}$, ..., ${U_S}^{(1)}$ continuous and bounded up to the fourth order in the domain $D$ and satisfy the inequalities \begin{equation*} |W_s^{(1)} - \bar{\varphi}_s| \leq l, \hspace{0.5cm} \bigg{|} \frac{\partial W_s^{(1)}}{\partial x^i} - \frac{\overline{\partial \varphi_s}}{\partial x^i} \bigg{|} \leq l, \hspace{0.5cm} \bigg{|} \frac{\partial W_s^{(1)}}{\partial x^4} - \bar{\psi}_s \bigg{|} \leq l. \end{equation*} This approximated system reads as $$ {A^{\lambda \mu}}^{(1)} \frac{\partial^2 W_s}{\partial x^\lambda \partial x^\mu} + f_s =0. \hspace{7cm} [G_1] $$ A solution $W_s$, six times differentiable and satisfying the inequalities ($\ref{eq:5.6}$), of the equations $[G_1]$ verifies therefore, in $D$, the following equations: $$ {A^{\lambda \mu}}^{(1)} \frac{\partial^2 U_S}{\partial x^\lambda \partial x^\mu} + {B^{T \lambda}_S}^{(1)} \frac{\partial U_T}{\partial x^\lambda} + {F_S}^{(1)} =0. \hspace{4cm} [{G'}_1] $$ ${A^{\lambda \mu}}^{(1)}$ is a function of the variables $x^\alpha$ and of the unknown functions ${W_s}^{(1)}$. ${B^{T \lambda}_S}^{(1)}$ is a sum of the first partial derivatives of the ${A^{\lambda \mu}}^{(1)}$, viewed as functions of the variables $x^\alpha$ (hence as functions of the variables $x^\alpha$ and of the functions ${W_s}^{(1)}$ and ${W_{s \alpha}}^{(1)}$), and of the first partial derivatives of a function $f_s$ with respect to the functions $W_{r \nu}$ (therefore functions of $x^\alpha$, ${W_s}$ and $W_{s \alpha}$). 
Finally, $F^{(1)}_S$ is a polynomial of the coefficients ${A^{\lambda \mu}}^{(1)}$, of $f_s$ and of their partial derivatives with respect to all their arguments up to the fourth order, as well as of the functions ${W_s}^{(1)}$ and ${W_{s \alpha}}^{(1)}$ and of their partial derivatives with respect to the $x^\alpha$ up to the fourth order. All these coefficients of the equations $[G'_1]$, viewed as linear equations of type $[E]$ in the unknown functions $U_S$, satisfy in the domain $D$ the assumptions made in Chapter 4. Thus, there exists a domain $D_0 \subset D$ in which the fourth derivatives $U_S$ of a solution $W_s$ of the given Cauchy problem, which possesses partial derivatives continuous and bounded up to the sixth order and satisfies the inequalities ($\ref{eq:5.6}$), verify some Kirchhoff formulae, whose left-hand sides are the values at the point $M_0 \in D_0$ of those functions $U_S$. These equations, together with the integral equations having on the left-hand side some auxiliary functions $X$ and $\Omega$, and with some integral equations analogous to the previous ones, form a system of integral equations that we denote by $[J_1]$. \subsection{The integral equations $[J_1]$} Let us consider the set of integral equations $[J_1]$ as a system of integral equations with four groups of unknown functions $X$, $\Omega$, $W$ and $U$. The system consists of the following four groups of equations: \begin{description} \item[(1)] Equations having on the left-hand side a function $X$ of the four coordinates $x^\alpha_0$ and of three parameters $x^4$, $\lambda_2$ and $\lambda_3$. These functions $X$ are $x^i$, $p_i$, $y^j_i$, $z^j_i$, ..., $z^j_{ihk}$, which define the characteristic conoids. 
These equations are of the form $$ X = X_0 + \int_{x^4_0}^{x^4} E(X) d\omega^4, \hspace{6cm} [1] $$ where $X_0$ is the value of $X$ for $x^4=x^4_0$, whereas $E$ is a rational function, with denominator \begin{equation*} T^{*4}= {A^{*44}}^{(1)} + {A^{*i4}}^{(1)}p_i, \end{equation*} of the following quantities: \begin{description} \item[(a)] The coefficients ${A^{\lambda \mu}}^{(1)}$ and their partial derivatives with respect to all their arguments up to the fourth order, i.e. functions of ${W_s}^{(1)}(x^\alpha)$ and of the $x^\alpha$, where the $x^i$ are replaced by the corresponding functions $X$, as well as the functions ${W_s}^{(1)}$ and their partial derivatives up to the fourth order; \item[(b)] The functions $X$; \item[(c)] The quantities ${a^{\alpha \beta}_0}^{(1)}$ and ${a_{\alpha \beta}^0}^{(1)}$, which are algebraic functions of the values of the coefficients ${A^{\lambda \mu}}^{(1)}$ for the values $x^\alpha_0$ and ${W_s}^{(1)}(x^\alpha_0)$ of their arguments. \end{description} \item[(2)] Equations having on the left-hand side a function $\Omega$ of the $x^\alpha_0$ and of the parameters $x^4$, $\lambda_2$, $\lambda_3$. These functions $\Omega$ correspond to $\omega^r_s$, $\omega^r_{si}$ and $\omega^r_{sij}$. These equations are of the form $$ \Omega = \Omega_0 + \int_{x^4_0}^{x^4} F(X, \Omega) d\omega^4, \hspace{6cm} [2] $$ where $\Omega_0$ is the value of $\Omega$ for $x^4=x^4_0$, whereas $F$ is a rational fraction, with denominator \begin{equation*} T^{*4} = {A^{*44}}^{(1)} + {A^{*i4}}^{(1)}p_i, \end{equation*} of the following quantities: \begin{description} \item[(a)] The coefficients ${A^{\lambda \mu}}^{(1)}$ and ${B^{T \lambda}_S}^{(1)}$ and their partial derivatives with respect to all their arguments up to the orders three and two, respectively, i.e. 
coefficients ${A^{\lambda \mu}}^{(1)}$, $f_s$ and their partial derivatives up to the third order; \item[(b)] The functions ${W_s}^{(1)}(x^\alpha)$ and their partial derivatives up to the third order and functions $W_{s \alpha}(x^\alpha)$, $W_{s \alpha \beta}(x^\alpha)$ and $W_{s \alpha \beta \gamma}(x^\alpha)$. The $x^i$ are always replaced by the corresponding functions $X$; \item[(c)] The functions $X$ and $\Omega$; \item[(d)] The quantities ${a^{\alpha \beta}_0}^{(1)}$ and ${a_{\alpha \beta}^0}^{(1)}$. \end{description} \item[(3)] Equations having on the left-hand side a function $W$ of the four coordinates $x^\alpha$. These equations are of the form $$ W(x^\alpha) = W_0 (x^i) + \int_{0}^{x^4} G d\omega^4, \hspace{6cm} [3] $$ where $W_0$ is the value of $W$ for $x^4=0$, whereas $G$ is a function $W$ or a function $U$. \item[(4)] Equations having on the left-hand side a function $U$ of the four coordinates $x^\alpha_0$, known as Kirchhoff formulae, of the form $$ 4 \pi U (x^\alpha_0) = \int_{x^4_0}^{0} \int_0^{2 \pi} \int_0^\pi H d\omega^4 d \lambda_2 d \lambda_3 + \int_0^{2 \pi} \int_0^\pi I d \lambda_2 d\lambda_3, \hspace{1cm} [4] $$ where $H$ is the product of the square root of a rational fraction with denominator $D^*$, which is a polynomial of ${A^{\lambda \mu}}^{(1)}$, $X$, $\tilde{X}$ and $p_i^0$, and numerator 1, with the sum of the two following rational fractions: \begin{description} \item[(A)] A rational fraction $H_a$ with denominator $(D^*)^3(x^4_0 - x^4)T^{*4}$, which results only from those terms of the operator $L^r_s$ which contain the second partial derivatives of the function $\sigma$, whereas its numerator is a polynomial of the functions: ${A^{\lambda \mu}}^{(1)}$ and their first and second partial derivatives with respect to all their arguments, i.e. 
functions of ${W_s}^{(1)}(x^\alpha)$ and of $x^\alpha$, where the $x^i$ are replaced by the corresponding $X$ functions; ${W_{s}}^{(1)}(x^\alpha)$, ${W_{s \alpha}}^{(1)}(x^\alpha)$ and ${W_{s \alpha \beta}}^{(1)}(x^\alpha)$; $X$ and $\tilde{X}$, where $\tilde{X}$ is the quotient by $(x^4_0 - x^4)$ of the functions $X$ for which $X_0=0$; $U(x^\alpha)$ and $\Omega$, which only occur in the product $[U_r]\omega^r_s$ in the polynomial considered. This polynomial, which is a function of $x^\alpha_0$, $x^4$, $\lambda_2$ and $\lambda_3$, vanishes for $x^4=x^4_0$. \item[(B)] A rational fraction $H_b$ with denominator $(D^*)^2T^{*4}$ of the following quantities: The coefficients ${A^{\lambda \mu}}^{(1)}$, ${B^{T \lambda}_S}^{(1)}$ and ${F_S}^{(1)}$, and their partial derivatives up to the orders two and one, respectively, with respect to the $x^\alpha$. More precisely, the quantities involved are: the coefficients ${A^{\lambda \mu}}^{(1)}$ and $f_s$ and their partial derivatives with respect to all their arguments up to the fourth order; the functions ${W_s}^{(1)}(x^\alpha)$, $\dots$, ${U_S}^{(1)}(x^\alpha)$, ${W_s}(x^\alpha)$, $\dots$, ${U_S}(x^\alpha)$; the functions $X$ and $\tilde{X}$; the functions $\Omega$, $\tilde{\Omega}$, where $\tilde{\Omega}$ is the quotient by $(x^4_0 - x^4)$ of the functions $\Omega$ for which $\Omega_0=0$.
\end{description} Finally, $I$ is the value for $x^4=0$ of the product of the square root of a rational fraction with denominator $D^*$ and numerator 1, with a rational fraction having denominator $(D^*)^2{A^{*44}}^{(1)}T^{*4}$ of the following functions: ${A^{\lambda \mu}}^{(1)}$ and their first partial derivatives with respect to all their arguments; the first partial derivatives of $f_s$ with respect to $W_{r \nu}$, which contribute through ${B^{T \lambda}_S}^{(1)}$, functions of $W_s(x^\alpha)$, $W_{s \alpha}(x^\alpha)$ and $x^{\alpha}$; ${W_s}^{(1)}(x^\alpha)$ and ${W_{s \alpha}}^{(1)}(x^\alpha)$, $X$ and $\tilde{X}$, $\Omega$ and $\tilde{\Omega}$; the Cauchy data $\varphi_s (x^i)$ and $\psi_s (x^i)$ and their partial derivatives with respect to the $x^i$ up to the orders five and four, respectively. \end{description} Since the equations $[1]$ do not contain other unknown functions besides the functions $X$, we shall solve them first. Furthermore, the quantity $H_a$ is then a known function once the $X$ are known. We shall then be in a position to bound the quantity $H$ without making assumptions on the derivatives of the functions $U$ and $W$, viewed as independent, and to solve the remaining equations $[2]$, $[3]$ and $[4]$. Hence, we are going to prove that the system of integral equations $[J_1]$ admits a unique solution, by making use of the assumptions made on the coefficients $A^{\lambda \mu}$ and $f_s$ and of the assumptions on the functions ${W_s}^{(1)}$.
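For orientation, the triangular structure just described can be displayed schematically; the arguments written for the kernels $E$, $F$, $G$, $H$ and $I$ below are not part of the original notation and merely record which groups of unknowns each kernel involves:

```latex
% Schematic form of the system [J_1]; each kernel depends only on the
% unknowns displayed, which is why the equations [1] can be solved first.
\begin{align*}
X &= X_0 + \int_{x^4_0}^{x^4} E(X)\, d\omega^4, &&[1]\\
\Omega &= \Omega_0 + \int_{x^4_0}^{x^4} F(X, \Omega)\, d\omega^4, &&[2]\\
W &= W_0 + \int_{0}^{x^4} G(W, U)\, d\omega^4, &&[3]\\
4 \pi U &= \int_{x^4_0}^{0}\int_0^{2 \pi}\int_0^{\pi}
  H(X, \Omega, W, U)\, d\omega^4 d\lambda_2 d\lambda_3
  + \int_0^{2 \pi}\int_0^{\pi} I(X, \Omega)\, d\lambda_2 d\lambda_3. &&[4]
\end{align*}
```

Once the $X$ are known from $[1]$, the equations $[2]$, $[3]$ and $[4]$ form a closed system in $\Omega$, $W$ and $U$ alone, which is how they are treated below.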
\subsection{Assumptions on the coefficients $A^{\lambda \mu}$, $f_s$ and on the functions ${W_s}^{(1)}$} $\textbf{Assumptions B}$ \begin{description} \item[($B_1$)] In the domain $D$ defined by $ |x^i - \bar{x}^i| \leq d$, $|x^4| \leq \epsilon$ and for the values of the functions $W_s$ and $W_{s \alpha}$ satisfying: \begin{equation} \label{eq:5.7} | W_s - \varphi_s | \leq l, \hspace{0.5cm} |W_{si} - \varphi_{si}| \leq l, \hspace{0.5cm} |W_{s 4} - \psi_s | \leq l: \end{equation} \begin{description} \item[(a)] The coefficients $A^{\lambda \mu}$ and $f_s$ admit partial derivatives with respect to all their arguments up to the fourth order, continuous and bounded by a given number. \item[(b)] The quadratic form $ \sum_{\lambda, \mu=1}^4 A^{\lambda \mu}X_\lambda X_\mu$ is of normal hyperbolic type. The coefficient $A^{44}$ is greater than a given positive number. \end{description} The coefficients ${a^{\alpha \beta}_0}$ and ${a_{\alpha \beta}^0}$ relative to the values of the coefficients $A^{\lambda \mu}$ at a point of the previous domain are bounded by a given number. \item[($B_2$)] The approximating functions ${W_s}^{(1)}$ admit in the domain $D$ partial derivatives up to the fourth order, continuous, bounded and satisfying the inequalities \begin{equation*} | {W_s}^{(1)} - \varphi_s | \leq l, \hspace{0.5cm}|{W_{si}}^{(1)} - \varphi_{si}| \leq l, \hspace{0.5cm} |{W_{s 4}}^{(1)} - \psi_s | \leq l \end{equation*} and the analogous inequalities $$| {W}^{(1)} - W_0| \leq l \hspace{0.5cm} {\rm up} \; {\rm to} \hspace{0.5cm} | {U_S}^{(1)} - \Phi_S | \leq l.$$ \item[($B_3$)] In the domain $(d)$ defined by $ |x^i - \bar{x}^i| \leq d$, the Cauchy data $\varphi_s(x^i)$ and $\psi_s(x^i)$ possess partial derivatives continuous and bounded with respect to the variables $x^i$ up to the orders five and four, respectively.
\end{description} $\textbf{Assumptions B'}$ \begin{description} \item[($B'_1$)] In the domain $D$ and for the values of the functions $W_s$ and $W_{s \alpha}$ satisfying the inequalities ($\ref{eq:5.7}$), the partial derivatives of order four of the coefficients $A^{\lambda \mu}$ and $f_s$ satisfy a given Lipschitz condition with respect to all their arguments. \item[($B'_2$)] The assumptions B imply that, in the domain $D$ and for the values of the functions $W_s$ satisfying ($\ref{eq:5.7}$), the coefficients ${a^{\alpha \beta}_0}$ and ${a_{\alpha \beta}^0}$, as well as their partial derivatives up to the fourth order, verify a given Lipschitz condition with respect to their arguments $x^\alpha_0$, $W_s(x^\alpha_0)$. \item[($B'_3$)] The partial derivatives of order four of the functions ${W_s}^{(1)}$ satisfy a Lipschitz condition with respect to the three arguments $x^i$. From the assumptions B one obtains the inequality \begin{equation*} \big{|} {W_s}^{(1)}(x'^{\alpha}) - {W_s}^{(1)}(x^\alpha) \big{|} \leq l' \sum | x'^{\alpha} - x^{\alpha} | \end{equation*} and the analogous inequalities for the partial derivatives of the ${W_s}^{(1)}$ up to the third order. We shall have in addition: \begin{equation*} \big{|} {U_S}^{(1)}(x'^{i}, x^4) - {U_S}^{(1)}(x^i, x^4) \big{|} \leq l \sum |x'^{i} - x^i |. \end{equation*} \item[($B'_4$)] In the domain $(d)$ the partial derivatives of the Cauchy data $\varphi_s$ and $\psi_s$ of orders five and four, respectively, satisfy a Lipschitz condition with respect to the variables $x^i$. From the assumptions B, one finds the inequality \begin{equation*} \big{|} {\varphi_s}(x'^{i}) - {\varphi_s}(x^i) \big{|} \leq l'_0 \sum | x'^{ i} - x^{i} | \end{equation*} and the analogous inequalities for the functions $\psi_s$ and the partial derivatives of $\psi_s$ and $\varphi_s$ up to the orders three and four.
We have in addition: \begin{equation*} \big{|} {\varphi_{sj}}(x'^{i}) - {\varphi_{sj}}(x^i) \big{|} \leq l' \sum | x'^{i} - x^{i} |, \end{equation*} \begin{equation*} \big{|} {\psi_{s}}(x'^{i}) - {\psi_{s}}(x^i) \big{|} \leq l'_0 \sum | x'^{ i} - x^{i} |, \end{equation*} where $l'$ and $l'_0$ are given numbers which satisfy $l' >l'_0$. \end{description} We are now able to proceed with the calculation of the solution of equations $[1]$. \subsection{Solution of equations $[1]$} We shall solve first the equations $[1]$ defining the characteristic conoid $$ X= X_0 + \int_{x^4_0}^{x^4} E(X) d\omega^4. \hspace{7cm} [1] $$ These non-linear integral equations, having on the left-hand side a function $X$, do not contain other unknown functions besides the functions $X$. We shall solve these equations by considering a functional space $\Upsilon$, the $m$ coordinates of a point of $\Upsilon$ (where $m$ is the number of functions $X$) being functions $X_1$, continuous and bounded, of $x^\alpha_0$, $x^4$, $\lambda_2$ and $\lambda_3$ in the domain $\Lambda$ defined by \begin{equation*}\begin{split}& |x^i_0 - \bar{x}^i| \leq d, \hspace{0.5cm} |x^4_0| \leq \epsilon(x^i_0), \\ & 0 \leq x^4 \leq x^4_0, \hspace{0.5cm} 0 \leq \lambda_2 \leq \pi, \hspace{0.5cm} 0 \leq \lambda_3 \leq 2 \pi, \end{split}\end{equation*} with $\epsilon(x^i_0) \leq \epsilon$. The functions $X_1$ take for $x^4=x^4_0$ the assigned values $X_0$. We denote by $\bar{M}_0$ the point of $\Upsilon$ having coordinates $\bar{X}_0$, which are the values of the functions $X_0$ for $x^i_0=\bar{x}^i$ and $x^4_0=0$, and we assume that the functions $X_1$ satisfy the inequalities \begin{equation} \label{eq:5.8} |X_1 - \bar{X}_0| \leq d \; {\rm and} \; |X_1 - X_0| \leq M |x^4_0 - x^4 |, \end{equation} where $M$ is a given number.
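The fixed-point mechanism about to be used can be illustrated on a drastically simplified toy model: a single scalar integral equation of the same form as $[1]$, with kernel $E(x) = \cos x$, so that $E$ is Lipschitz with constant $M' = 1$. All names in the sketch below (apply_map, sup_dist, eps) are illustrative and not part of the text; the point is only that the representation reduces the sup-distance as soon as the interval length is smaller than $1/M'$, so that iterating it converges to the unique fixed point.

```python
import math

# Toy model (an illustration, not the system of the text): solve the
# scalar integral equation  X(t) = X0 + \int_0^t E(X(s)) ds,  E = cos,
# by iterating the representation M1 -> M2 on a grid of [0, eps].
# Since |cos x' - cos x| <= |x' - x|, the map contracts the sup-distance
# whenever the interval length eps is < 1/M' = 1.

def apply_map(X, ts, X0):
    """One application of X -> X0 + \int E(X), trapezoidal rule."""
    out = [X0]
    for k in range(1, len(ts)):
        h = ts[k] - ts[k - 1]
        out.append(out[-1] + 0.5 * h * (math.cos(X[k - 1]) + math.cos(X[k])))
    return out

def sup_dist(X, Y):
    """Sup-distance between two points of the functional space."""
    return max(abs(a - b) for a, b in zip(X, Y))

eps, n, X0 = 0.5, 200, 0.3          # eps < 1/M' guarantees contraction
ts = [eps * k / n for k in range(n + 1)]

X = [X0] * (n + 1)                  # initial point of the functional space
dists = []
for _ in range(40):
    Y = apply_map(X, ts, X0)
    dists.append(sup_dist(X, Y))
    X = Y

# distances between successive iterates shrink geometrically, so the
# iterates converge to the unique fixed point of the representation
print(dists[0], dists[5], dists[-1])
```

The geometric decay of the successive distances mirrors the estimate $d(M_2, M_2') \leq m M' \epsilon(x^i_0)\, d(M_1, M_1')$ obtained below for the system $[1]$ itself.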
We shall define in the space $\Upsilon$ the distance of two points $M_1$ and $M_1'$ by the maximum in the domain $\Lambda$ of the sum of the absolute values of the differences of their coordinates: \begin{equation*} d(M_1, M_1')= Max_{\Lambda} \sum |X_1' - X_1 |. \end{equation*} The norm introduced in such a way endows the space $\Upsilon$ with the topology of uniform convergence, and then $\Upsilon$ is a normed, complete and compact space. To the point $M_1$ of $\Upsilon$ having coordinates $X_1$ we associate a point $M_2$ whose coordinates $X_2$ are defined by \begin{equation} \label{eq:5.9} X_2 = X_0 + \int_{x^4_0}^{x^4} E_1 d\omega^4, \end{equation} where $E_1$ denotes the quantity $E$ occurring in the equations $[1]$ in which the functions $X$ are replaced by the corresponding coordinates $X_1$ of $M_1$. This is a representation of $\Upsilon$ into itself: the $X_2$ are continuous and bounded functions of their arguments, take for $x^4=x^4_0$ the values $X_0$ and satisfy the same inequalities ($\ref{eq:5.8}$) fulfilled by the $X_1$, provided that $\epsilon(x^i_0)$, which defines the domain of variation of the argument $x^4_0$ of the $X_1$, is suitably chosen. The $E_1$ are indeed expressed rationally by means of the ${W_s}^{(1)}$, of the ${A^{\lambda \mu}}^{(1)}$ and of their partial derivatives up to the fourth order, in which the $x^i$ are replaced by the corresponding functions $X_1$, as well as of the $X_1$ themselves and of the quantities ${a^{\alpha \beta}_0}^{(1)}$, ${a_{\alpha \beta}^0}^{(1)}$. All these functions are, by virtue of the assumptions B and of the assumptions made upon the $X_1$, continuous and bounded functions of $x^\alpha_0$, $x^4$, $\lambda_2$ and $\lambda_3$. On the other hand the denominator of the functions $E_1$ is \begin{equation*} {T^{*4}_1}^{(1)}= \bigg{(} {A^{*44}}^{(1)} + {A^{*i4}}^{(1)}p_i \bigg{)}_1 \end{equation*} and takes the value 1 for $x^4=x^4_0$, $X_1=X_0$.
It follows from the assumptions B and B' and from the inequalities verified by the $X_1$ that ${T^{*4}_1}^{(1)}$ satisfies the Lipschitz conditions \begin{equation*} \big{|} {T^{*4}_1}^{(1)} - 1 \big{|} \leq T' \Biggl\{ \sum \big{|} X_1 - X_0 \big{|} + |x^4 - x^4_0 | \Biggr\} \leq T' (m M + 1) |x^4_0 - x^4|, \end{equation*} where $T'$ depends only on the bounds B and B'. Therefore, we shall be in a position to choose $\epsilon (x^i_0)$ sufficiently small so that the denominator considered differs from zero in $\Lambda$. The quantities $E_1$ are then continuous functions of their seven arguments in the domain $\Lambda$, and are bounded by a number $M$ which depends only on the bounds B: $|E_1| \leq M$. This implies that the functions $X_2$ are continuous and bounded functions of their seven arguments. They fulfill the inequalities \begin{equation} \label{eq:5.10} |X_2 - X_0| \leq M |x^4_0 - x^4 |, \end{equation} where $M$ has been chosen in such a way that the functions $X_2$ verify the same inequality as the functions $X_1$. It will therefore be enough to take $\epsilon(x^i_0)$ in such a way that \begin{equation} \label{eq:5.11} \epsilon (x^i_0) \leq \frac{ d - |x^i_0 - \bar{x}^i|}{M}, \end{equation} in order to obtain $|X_2 - \bar{X}_0| \leq d$. The point $M_2$ will therefore be a point of $\Upsilon$ if $\epsilon(x^i_0)$ verifies the inequality ($\ref{eq:5.11}$). Let us now show that the distance of two points $M_2$, $M_2'$ is less than the distance of the initial points $M_1$, $M_1'$ if $\epsilon(x^i_0)$ is suitably chosen. From the equations ($\ref{eq:5.9}$) there follows the inequality \begin{equation} \label{eq:5.12} |X_2' - X_2 | \leq |x^4_0 - x^4 | \cdot Max|E_1' - E_1|, \end{equation} where the $E_1$ are rational fractions, with non-vanishing denominators, of bounded functions verifying Lipschitz conditions with respect to the $X_1$.
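Explicitly, for the coordinate functions $x^i$ among the $X$ (for which $X_0 = x^i_0$ and $\bar{X}_0 = \bar{x}^i$), the bound $|X_2 - \bar{X}_0| \leq d$ follows from ($\ref{eq:5.10}$) and ($\ref{eq:5.11}$) by the triangle inequality:

```latex
% combining (5.10) with the choice (5.11) of \epsilon(x^i_0):
|X_2 - \bar{X}_0| \leq |X_2 - X_0| + |x^i_0 - \bar{x}^i|
                  \leq M \epsilon(x^i_0) + |x^i_0 - \bar{x}^i| \leq d .
```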
We have on the other hand \begin{equation*} |E_1' - E_1| \leq M' \cdot \sum |X_1' - X_1|, \end{equation*} where $M'$ is a number which depends only on the bounds B and B'. From which \begin{equation*} d(M_2, M_2') \leq mM' \epsilon (x^i_0) \cdot d(M_1, M_1'). \end{equation*} In order for the representation ($\ref{eq:5.9}$) of the space $\Upsilon$ into itself to reduce the distance it will then be enough that \begin{equation} \label{eq:5.13} \epsilon(x^i_0) < \frac{1}{m M'}. \end{equation} We shall therefore choose $\epsilon(x^i_0)$ as satisfying the inequalities ($\ref{eq:5.11}$) and ($\ref{eq:5.13}$). The representation ($\ref{eq:5.9}$) of the normed, complete and compact space $\Upsilon$ into itself, which reduces distances, will then admit a unique fixed point belonging to this space. $\textbf{Conclusion.}$ In the domain \begin{equation} \begin{split} \label{eq:5.14} &|x^i_0 - \bar{x}^i | \leq d, \hspace{0.5cm}|x^4_0| < \epsilon(x^i_0), \\ & 0 \leq x^4 \leq x^4_0, \hspace{0.5cm} 0 \leq \lambda_2 \leq \pi, \hspace{0.5cm} 0 \leq \lambda_3 \leq 2\pi \end{split}\end{equation} the system of integral equations $[1]$ admits a solution that is unique, continuous and bounded, verifying the inequalities \begin{equation} \label{eq:5.15} |X - \bar{X}_0| \leq d, \end{equation} where the three functions $X$ corresponding to the $x^i$ define, with the variable $x^4$, a point belonging to the domain $D$. Hence, having shown that there exists a unique solution of the equations $[1]$, and recalling that the quantities $E$ occurring on the right-hand side of $[1]$ involve only the ${A^{\lambda \mu}}^{(1)}$ and their partial derivatives, which possess the same properties as in Chapter 4, it is possible to apply the same method and to see that: \begin{description} \item[(1)] The functions $\frac{X- X_0}{x^4_0 - x^4}$ are continuous and bounded in $\Lambda$.
The functions $\tilde{X}$, quotients by $x^4_0 - x^4$ of the $X$ which vanish for $x^4_0=x^4$, are continuous and bounded in $\Lambda$: \begin{equation*} |X - X_0| < M|x^4_0 - x^4|, \hspace{0.5cm} |\tilde{X}| \leq M. \end{equation*} \item[(2)] The functions \begin{equation*} \frac{ \tilde{X} - \tilde{X}_0}{x^4_0 - x^4} = \frac{\int_{x^4_0}^{x^4} (E - E_0)d\omega^4}{x^4_0 - x^4}, \end{equation*} where $\tilde{X}_0$ and $E_0$ denote the values for $x^4_0=x^4$ of $\tilde{X}$ and $E$, are continuous and bounded in $\Lambda$. The bound on these functions is deduced from the Lipschitz conditions verified by $E$ with respect to the $X$ and $x^4$: \begin{equation*} |E - E_0| \leq M'' \Biggl\{ \sum |X- X_0| + |x^4 - x^4_0| \Biggr\}, \end{equation*} where $M''$ depends only on the bounds B and B'. Thus, we have \begin{equation} \label{eq:5.16} | \tilde{X} - \tilde{X}_0| \leq \frac{M''}{2}(Mm+1) |x^4 - x^4_0|. \end{equation} \item[(3)] The functions $X$ verify Lipschitz conditions with respect to the $x^i_0$. In order to prove it, it is sufficient to impose on the space $\Upsilon$ the following supplementary assumption: the functions $X_1$ verify a Lipschitz condition with respect to the $x^i_0$, \begin{equation} \label{eq:5.17} \big{|} X_1(x'^{i}_0, x^4_0, ...) - X_1(x^i_0, x^4_0, ...)\big{|} \leq d' \sum |x'^{i}_0 - x^i_0|, \end{equation} where $d'$ is a given number. We have \begin{equation*} X_2(x'^{i}_0, ...) - X_2(x^i_0, ...) = \int_{x^4_0}^{x^4} \bigg{(} E_1(x'^{i}_0, ...) - E_1(x^i_0, ...) \bigg{)} d\omega^4, \end{equation*} where $ E_1(x'^{i}_0, ...)$ and $ E_1(x^i_0, ...)$ are evaluated with the help of the functions $X_1(x'^{i}_0, ...)$ (in particular of the $x^i_1(x'^{i}_0, ...)$) and $X_1(x^{i}_0, ...)$, respectively. Since the quantities $E_1$ verify a Lipschitz condition with respect to the $X_1$, one deduces from ($\ref{eq:5.17}$): \begin{equation*} |X_2(x'^{i}_0, ...) - X_2(x^i_0, ...)
| \leq |x^4_0 - x^4| M'd' \sum |x'^{i}_0 - x^i_0|, \end{equation*} from which, for $\epsilon(x^i_0) \leq \frac{1}{M'}$, one has \begin{equation*} |X_2(x'^{i}_0, ...) - X_2(x^i_0, ...) | \leq d' \sum |x'^{i}_0 - x^i_0|. \end{equation*} The point $M_2$, representative of $M_1$ by virtue of ($\ref{eq:5.9}$), with the supplementary assumption made, is still a point of $\Upsilon$, and the fixed point has coordinates verifying \begin{equation*} |X(x'^{i}_0, ...) - X(x^i_0, ...) | \leq d' \sum |x'^{i}_0 - x^i_0|, \end{equation*} and \begin{equation*} |X(x'^{i}_0, ...) - X(x^i_0, ...) | \leq |x^4_0 - x^4| M'd' \sum |x'^{i}_0 - x^i_0|, \end{equation*} from which we have \begin{equation*} |\tilde{X}(x'^{i}_0, ...) - \tilde{X}(x^i_0, ...) | \leq M' d' \sum |x'^{i}_0 - x^i_0|. \end{equation*} \end{description} \subsection{Solution of equations $[2]$, $[3]$ and $[4]$} Let us now consider the system of integral equations with the unknown functions $\Omega$, $W$ and $U$, obtained by replacing in the equations $[2]$, $[3]$ and $[4]$ the functions $X$ with the solutions of equations $[1]$ found above: $$ \Omega = \Omega_0 + \int_{x^4_0}^{x^4} F(X, \Omega) d\omega^4, \hspace{7cm} [2] $$ $$ W = W_0 + \int_{0}^{x^4} G d\omega^4, \hspace{8cm} [3] $$ $$ 4 \pi U (x^\alpha_0) = \int_{x^4_0}^{0} \int_0^{2 \pi} \int_0^\pi H d\omega^4 d \lambda_2 d \lambda_3 + \int_0^{2 \pi} \int_0^\pi I d \lambda_2 d\lambda_3. 
\hspace{2cm} [4] $$ We shall solve these equations by considering a functional space $\mathcal{F}$, the coordinates of a point of $\mathcal{F}$ being defined by: \begin{description} \item[(1)] $m_1$ of these coordinates, $m_1$ being the number of functions $\Omega$, are functions $\Omega_1$, continuous and bounded, of $x^\alpha_0$, $x^4$, $\lambda_2$ and $\lambda_3$ in the domain $\Lambda$: \begin{equation*} \begin{split} & |x^i_0 - \bar{x}^i | \leq d, \hspace{0.5cm} |x^4_0| \leq \epsilon(x^i_0), \\ & 0 \leq x^4 \leq x^4_0, \hspace{0.5cm} 0 \leq \lambda_2 \leq \pi, \hspace{0.5cm} 0 \leq \lambda_3 \leq 2 \pi. \end{split} \end{equation*} These functions take for $x^4_0=x^4$ the given values $\Omega_0$ and satisfy the inequalities \begin{equation} \label{eq:5.18} |\Omega_1 - \Omega_0| \leq h, \end{equation} where $h$ is a given number. We shall suppose in addition \begin{equation*} |\Omega_1 - \Omega_0| \leq N |x^4 - x^4_0|, \end{equation*} where $N$ is a given number. The functions $\tilde{\Omega}_1$, quotients by $x^4 - x^4_0$ of the functions $\Omega_1$ that vanish identically for $x^4=x^4_0$, are then bounded in the domain $\Lambda$: \begin{equation} \label{eq:5.19} |\tilde{\Omega}_1| \leq N. \end{equation} The functions $\tilde{\Omega}_1$ will be assumed continuous in $\Lambda$. \item[(2)] $m_2$ of these coordinates, $m_2$ being the number of functions $W$ and $U$, are functions $W_1$ and $U_1$, continuous and bounded, of the four variables $x^\alpha$ in the domain $D$: $|x^i - \bar{x}^i| \leq d$, $|x^4| \leq \epsilon(x^i_0)$. These functions take for $x^4=0$ the values $W_0$ and $U_0$ defined by the Cauchy data and satisfy the inequalities \begin{equation} \label{eq:5.20} |W_1 - W_0| \leq l, \hspace{0.5cm} |U_1 - U_0| \leq l, \end{equation} where $l$ is the same number occurring in the assumptions B. The functions $\Omega_0$, $W_0$ and $U_0$ define a point $M_0 \in \mathcal{F}$.
\end{description} Let us now define in the space $\mathcal{F}$ the distance of two points $M_1$ and $M_1'$ by the sum of the upper bounds of the absolute values of the differences of their coordinates: \begin{equation*} d(M_1, M_1')=Max \Biggl\{ \sum |\Omega_1' - \Omega_1| + \sum |W_1' - W_1 | + \sum |U_1' - U_1| \Biggr\}. \end{equation*} The space $\mathcal{F}$ is then a normed, complete and compact space. To the point $M_1$ of the space $\mathcal{F}$ we associate a point $M_2$ whose coordinates $\Omega_2$, $W_2$, $U_2$ are defined by \begin{equation} \begin{split} \label{eq:5.21} &\Omega_2 = \Omega_0 + \int_{x^4_0}^{x^4} F_1 d\omega^4, \\ & W_2 = W_0 + \int_{0}^{x^4} G_1 d\omega^4, \\ & 4 \pi U_2 (x^\alpha_0) = \int_{x^4_0}^{0} \int_0^{2 \pi} \int_0^\pi H_1 d\omega^4 d \lambda_2 d \lambda_3 + \int_0^{2 \pi} \int_0^\pi I_1 d \lambda_2 d\lambda_3, \end{split}\end{equation} where $F_1$, $G_1$, $H_1$ and $I_1$ denote the quantities $F$, $G$, $H$ and $I$ occurring in the equations $[2]$, $[3]$ and $[4]$, evaluated with the help of the functions $X$, solutions of the equations $[1]$, and by replacing the unknown functions $\Omega$, $W$ and $U$ with the coordinates $\Omega_1$, $W_1$ and $U_1$ of the point $M_1$. Let us now prove that the representation ($\ref{eq:5.21}$) is a representation of the space $\mathcal{F}$ into itself if $\epsilon(x^i_0)$ is suitably chosen. \begin{description} \item[(1)] $F_1$ is expressed rationally by means of ${A^{\lambda \mu}}^{(1)}$, $f_s$, ${W_s}^{(1)}$ and of their partial derivatives up to the third order, as well as of the ${a^{\alpha \beta}_0}$, ${a_{\alpha \beta}^0}$, and of $\Omega_1$. All these functions are continuous and bounded functions of $x^\alpha_0$, $x^4$, $\lambda_2$ and $\lambda_3$. The denominator ${T^{*4}}^{(1)}$ of the fractions $F_1$ being nonvanishing, the $F_1$ are continuous and bounded functions of $x^\alpha_0$, $x^4$, $\lambda_2$ and $\lambda_3$: $|F_1| \leq N$, where $N$ depends only on the bounds B and on $h$.
Hence, $\Omega_2$ and $\tilde{\Omega}_2$ are continuous and bounded functions of their arguments and verify \begin{equation} \label{eq:5.22} |\Omega_2 - \Omega_0| \leq N |x^4_0 - x^4|, \hspace{0.5cm} |\tilde{\Omega}_2| \leq N. \end{equation} If $\epsilon(x^i_0)\leq \frac{h}{N}$, we shall have $|\Omega_2 - \Omega_0| \leq h$. Then $\Omega_2$ satisfies the same conditions as $\Omega_1$; the number $N$, which is the upper bound of the $F_1$ in $\Lambda$ and occurs in the inequality ($\ref{eq:5.19}$), has been chosen for this purpose. \item[(2)] $G_1$ being a $U_1$ or a $W_1$, the $W_2$ are continuous and bounded in $D$ by a number $P$ which depends only on the bounds B: $$|W_2 - W_0| \leq |x^4| P,$$ from which, for $\epsilon(x^i_0) \leq \frac{l}{P}$, we have $$|W_2 - W_0| \leq l.$$ \item[(3)] Let us show that the functions $H_1$ are bounded by a number which only depends on the bounds B, B' and on $h$. \begin{description} \item[(a)] Let us consider the quantity ${D^*}^{(1)}$ occurring in the denominator: It is a polynomial of the functions ${A^{*\lambda \mu}}^{(1)}$, $X$, $\tilde{X}$ and $p_i^0$ which takes the value $-1$ for $x^4=x^4_0$ and $X=X_0$. By virtue of the inequalities ($\ref{eq:5.17}$) and ($\ref{eq:5.14}$), verified by the functions $x^i$ and the variable $x^4$ in the domain $\Lambda$, ${A^{*\lambda \mu}}^{(1)}$ verifies Lipschitz conditions with respect to the $x^\alpha$ in $\Lambda$. Hence, from the inequalities verified by the functions $X$ and $\tilde{X}$ and from the assumptions B, we obtain \begin{equation*} \bigg{|} {D^*}^{(1)} + 1 \bigg{|} \leq D' \Biggl\{ \sum |X - X_0| + |x^4 - x^4_0| \Biggr\} \leq D'(mM+1)\epsilon(x^i_0), \end{equation*} where $D'$ is a number which depends only on the bounds B and B'. Thus, we are able to choose $\epsilon(x^i_0)$ sufficiently small so that ${D^{*}}^{(1)}$ does not vanish.
\item[(b)] Let us consider the rational fraction $H_{1a}$ with denominator $\big{(}{D^{*}}^{(1)}\big{)}^3 (x^4_0 - x^4){T^{*4}}^{(1)}$. Its numerator is the product by $\big{(}[U_R]_1 \omega^{R}_{s_1} \big{)}$ of a polynomial $p$ of the functions $X$, $\tilde{X}$ and $p_i^0$: quantities that are all known, possessing the same properties as in Chapter 4. Thus, the quotient by $x^4_0 - x^4$ of the polynomial $p$ is a function continuous and bounded in $\Lambda$. The bound on this function is deduced from the Lipschitz conditions verified by $p$: $$|p| \leq P' \Bigl\{ \sum |X - X_0| + |x^4 - x^4_0| \Bigr\},$$ where $P'$ is a number which depends only on the bounds B and B'. Thus, we have \begin{equation*} \bigg{|} \frac{p}{x^4_0 - x^4} \bigg{|} \leq P' (mM + 1). \end{equation*} The $H_{1 a}$ can therefore be put in the form of fractions with numerator \begin{equation*} [U_R]_1 \omega^R_{s_1} \frac{p}{x^4_0 - x^4} \end{equation*} continuous and bounded in $\Lambda$, and with denominator ${D^{*}}^{(1)}{T^{*4}}^{(1)}$ continuous and bounded in $\Lambda$. Hence, the $H_{1a}$ are continuous and bounded in $\Lambda$ and their bound depends only on the bounds B, B' and $h$. \item[(c)] The $H_{1b}$, which are rational fractions, with nonvanishing denominator, of functions continuous and bounded in $\Lambda$, are continuous and bounded in $\Lambda$. Finally, we see that the $H_1$ are continuous and bounded in $\Lambda$: $$ |H_1| \leq Q,$$ where $Q$ depends only on B, B' and on $h$. \end{description} \item[(4)] Let us consider $I_1$. We have \begin{equation} \label{eq:5.23} I = \Biggl\{ E^{*i}_S \frac{D^*p_i}{T^{*4}}(x^4_0 - x^4)^2 \sin \lambda_2 \Biggr\}_{x^4=0}, \end{equation} where the $E^{*i}_S$ involve the partial derivatives of the $\sigma^R_S$ with respect to the $x^i$ of first order only, and linearly.
Moreover, if we apply the results of Chapter 4, we see that the $(E^{*i}_S)_1(x^4_0 - x^4)^2$ are continuous and bounded in $\Lambda$, because $X$, $\tilde{X}$, ${D}^{(1)}$ and ${D^*}^{(1)}$ and their partial derivatives possess the same properties as in Chapter 4, and the $\Omega_1$ and $\tilde{\Omega}_1$ are continuous and bounded. Furthermore, the products of all terms of $(E^{*i}_S)_1$ by $x^4_0 - x^4$ are bounded by a number $R_1$ depending only on the bounds B, B' and on $h$, with the exception of the term \begin{equation} \label{eq:5.24} - [U_R]_1 \omega^R_{S_1} \big{[}{A^{ij}}^{(1)} \big{]} \frac{\partial {\sigma}^{(1)}}{\partial x^j}. \end{equation} Thus, we have \begin{equation} \label{eq:5.25} I_1 \leq R_1 |x^4_0| + \Phi_R \big{(}\omega^R_{S_1} \big{)}_{x^4=0} \Biggl\{ {A^{ij}}^{(1)}\frac{\partial {\sigma}^{(1)}}{\partial x^j}p_i \frac{{D^*}^{(1)}}{{T^{*4}}^{(1)}}(x^4_0 - x^4)^2 \Biggr\}_{x^4=0} \sin \lambda_2, \end{equation} where $J= \Bigl\{ {A^{ij}}^{(1)}\frac{\partial {\sigma}^{(1)}}{\partial x^j}p_i \frac{{D^*}^{(1)}}{{T^{*4}}^{(1)}}(x^4_0 - x^4)^2 \Bigr\}$ is a known quantity, which verifies a Lipschitz condition with respect to the functions $X$, $\tilde{X}$ and the variable $x^4$ and which takes the value 1 for $x^4=x^4_0$. Therefore we have in $\Lambda$: \begin{equation} \label{eq:5.26} |J - 1|\leq R_2 |x^4 - x^4_0|\hspace{0.5cm} {\rm and} \hspace{0.5cm} |(J)_{x^4=0} - 1 | \leq R_2 |x^4_0|, \end{equation} where $R_2$ is a number that depends only on the bounds B, B' and on $h$. Furthermore, from the inequality ($\ref{eq:5.18}$), it follows that \begin{equation} \label{eq:5.27} \bigg{|} \big{(} \omega^R_{S_1} \big{)}_{x^4=0} - \delta^R_S \bigg{|} \leq N |x^4_0|, \end{equation} and the inequalities ($\ref{eq:5.25}$), ($\ref{eq:5.26}$) and ($\ref{eq:5.27}$) imply that \begin{equation*} |I_1 - \Phi_S \sin \lambda_2 | \leq R_3 |x^4_0|, \end{equation*} where $R_3$ is a number which depends only on the bounds B, B' and on $h$.
The previous inequality is verified at every point $x^i(x^i_0, 0, \lambda_2, \lambda_3)$ of the domain $(d)$. We have assumed in B' that the $\Phi_S$ satisfy Lipschitz conditions with respect to the $x^i$: \begin{equation*} |\Phi_S(x^i) - \Phi_S(x^i_0)| \leq l'_0 |x^i - x^i_0|. \end{equation*} The $x^i$ verify $$|x^i - x^i_0| \leq M |x^4_0 - x^4|$$ and, taking here the value $x^4=0$, we have \begin{equation} \label{eq:5.28} |\Phi_S(x^i) - \Phi_S(x^i_0)| \leq l'_0 M|x^4_0|. \end{equation} Finally, we see that there exists a number $R$, which depends only on the bounds B, B' and on $h$, such that \begin{equation*} |I_1 - \Phi_S(x^i_0) \sin \lambda_2| \leq R|x^4_0|. \end{equation*} \end{description} The functions \begin{equation*} U_2 = \frac{1}{4 \pi} \int_{x^4_0}^{0} \int_0^{2 \pi} \int_{0}^{\pi} H_1 d\omega^4 d\lambda_2 d \lambda_3 + \frac{1}{4 \pi} \int_0^{2 \pi} \int_{0}^{\pi} I_1 d\lambda_2 d \lambda_3 \end{equation*} are hence continuous and bounded functions of the $x^\alpha_0$ and verify, denoting $\Phi_S(x^i_0)$ by $U_0$, the inequality \begin{equation*} |U_2 - U_0| \leq |x^4_0| \frac{\pi}{2} (Q + R), \end{equation*} from which, for \begin{equation} \label{eq:5.29} \epsilon(x^i_0)\leq \frac{2l}{\pi(Q+R)} \end{equation} we shall have $$|U_2 - U_0| \leq l.$$ The functions $\Omega_2$, $W_2$ and $U_2$ then possess the same properties as $\Omega_1$, $W_1$ and $U_1$. Thus, the point $M_2$ is a point of $\mathcal{F}$ if $\epsilon(x^i_0)$ verifies, besides the inequalities that were imposed upon it in the solution of the equations $[1]$, the inequalities $\epsilon(x^i_0) \leq \frac{h}{N}$, $\epsilon(x^i_0) \leq \frac{l}{P}$ and ($\ref{eq:5.29}$). At this stage, it is possible to evaluate the distance of the points $M_2$ and $M_2'$ representative of $M_1$ and $M_1'$. From the Eqs. ($\ref{eq:5.21}$), defining the representation, we have in the domain $\Lambda$ $$ |\Omega_2' - \Omega_2| \leq |x^4_0 - x^4| Max_{\Lambda} |F'_1 - F_1|. 
\hspace{5cm} (1)$$ It turns out from the expression of $F_1$, from the assumptions B and the assumptions made on $\Omega_1$ and $W_1$, that $F_1$ verifies a Lipschitz condition with respect to the functions $\Omega_1$ and $W_1$, with a coefficient $N'$ which depends only on the bounds B and on $h$. It implies the inequality \begin{equation} \label{eq:5.30} |\Omega_2' - \Omega_2| \leq N'|x^4_0 - x^4| Max \Biggl\{ \sum |\Omega_1' - \Omega_1| + \sum |W_1' - W_1| \Biggr\}, \end{equation} $$ |W_2' - W_2| \leq |x^4|Max_D |G_1' - G_1|, \hspace{5cm} (2)$$ where $G_1$ is a function $W_1$ or $U_1$; we have \begin{equation*} |W_2' - W_2| \leq |x^4|Max_D \Biggl\{ \sum |W_1' - W_1| + \sum |U_1' - U_1| \Biggr\}, \end{equation*} $$|U_2' - U_2| \leq \frac{\pi}{2} |x^4_0| Max_D |H_1' - H_1| + \frac{\pi}{2} Max_d |I_1' - I_1|. \hspace{1.5cm} (3)$$ \begin{description} \item[(a)] It turns out, from the fact that the polynomial $p$ occurring in the numerator of the function $H_{1a}$ is independent of the point $M_1$ of $\mathcal{F}$ that we consider, from the assumptions B and from the previous inequalities, that $H_1$ verifies a Lipschitz condition with respect to the functions $\Omega_1$, $\tilde{\Omega}_1$, $W_1$ and $U_1$, with a coefficient $R'_1$ which depends only on the bounds $B$, $B'$ and on $h$: \begin{equation*}\begin{split} |H_1' - H_1| \leq R_1' \Biggl\{ & \sum |\Omega_1' - \Omega_1| + \sum |{\tilde{\Omega}}_1' - {\tilde{\Omega}}_1|\\ & + \sum |W_1' - W_1| + \sum |U_1' - U_1| \Biggr\}. \end{split}\end{equation*} \item[(b)] Let us consider the quantity $I_1$, given by ($\ref{eq:5.23}$), where the only unknown functions are the functions $(\Omega_1)_{x^4=0}$.
The expression of the $E^i_S$, the results of the Chapter 4 and those obtained from the solution of the equations $[1]$, the assumptions B and those made upon $\Omega_1$, show that the product $$\Bigl\{ E^i_{S_1}(x^4_0-x^4)^2 \Bigr\}_{x^4=0}$$ verifies a Lipschitz condition with respect to the functions $(\Omega_1)_{x^4=0}$ whose coefficient $R'_2$ depends only on the bounds B, B' and $h$: \begin{equation*} |I_1' - I_1| \leq R_2' \sum |\Omega_1' - \Omega_1|_{x^4=0}. \end{equation*} Therefore, we have \begin{equation} \begin{split} \label{eq:5.31} & |U_2' - U_2| \leq \frac{\pi}{2} R_1' |x^4_0| Max_D \Biggl\{ \sum|\Omega_1' - \Omega_1| + \sum |{\tilde{\Omega}}_1' - {\tilde{\Omega}}_1| \\ & + \sum |W_1' - W_1| + \sum |U_1' - U_1| \Biggr\} + \frac{\pi}{2}R_2' Max_d \sum |\Omega_1' - \Omega_1|_{x^4=0}. \end{split}\end{equation} Let us then consider the point $M_3$ representative of the point $M_2$. The transformation mapping $M_1$ into $M_3$ is a representation of the space $\mathcal{F}$ into itself. Let us compute the distance of two representative points. We shall deduce from the inequality ($\ref{eq:5.30}$) that \begin{equation} \label{eq:5.32} |{\tilde{\Omega}}_2' - {\tilde{\Omega}}_2| \leq N' Max_{\Lambda} \Biggl\{ \sum |{\Omega}_1' - {\Omega}_1| + \sum |W_1' - W_1| \Biggr\}. \end{equation} The inequalities $(\ref{eq:5.30}$), ($\ref{eq:5.31}$) and ($\ref{eq:5.32}$), written for the representations $M_1 \rightarrow M_2$ and $M_2 \rightarrow M_3$, show that there exists a non-vanishing number $\alpha$, depending only on the bounds B, B' and on $h$, such that, for $$\epsilon(x^i_0) < \alpha,$$ one has \begin{equation*} d(M_3, M_3') < kd(M_1, M_1'), \end{equation*} where $k$ is a given number less than 1. Hence, the representation of the space $\mathcal{F}$ into itself which leads from $M_1$ to $M_3$ admits a unique fixed point, and the same holds for the representation ($\ref{eq:5.21})$ originally given.
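As an aside, the contraction argument used here (a representation that shrinks distances by a factor $k<1$, hence admits a unique fixed point) can be illustrated on a one-dimensional model of the integral equations $[1]$. The following Python sketch is our own illustration, not part of the text: it applies Picard iteration to $x(t) = x_0 + \int_0^t f(x(s))\,ds$ with the Lipschitz choice $f(x)=x$, on an interval short enough that (Lipschitz constant)$\,\times\,$(length) $= k < 1$.

```python
import numpy as np

def picard(f, x0, t, n_iter=30):
    """Iterate the map x -> x0 + integral of f(x) on a grid t,
    using trapezoidal quadrature for the integral."""
    x = np.full_like(t, x0, dtype=float)
    for _ in range(n_iter):
        integrand = f(x)
        # cumulative trapezoidal integral from t[0] to each t[i]
        integral = np.concatenate(([0.0],
            np.cumsum((integrand[1:] + integrand[:-1]) * np.diff(t) / 2)))
        x = x0 + integral
    return x

t = np.linspace(0.0, 0.5, 201)       # interval length 0.5, so k = 1 * 0.5 < 1
x = picard(lambda x: x, 1.0, t)      # model equation x' = x, x(0) = 1
err = np.max(np.abs(x - np.exp(t)))  # the fixed point is exp(t)
```

The iterates converge geometrically at rate $k$; the remaining error is due only to the quadrature.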
\end{description} $\textbf{Conclusion.}$ There exists a non-vanishing number $\epsilon(x^i_0)$, which depends only on the bounds $B$, $B'$ and on $h$, such that, in the representative domains $$|x^i_0 - \bar{x}^i| \leq d, \;|x^4_0| \leq \epsilon(x^i_0), \; 0 \leq x^4 \leq x^4_0, \; 0 \leq \lambda_2 \leq \pi, \; 0 \leq \lambda_3 \leq 2 \pi; \hspace{0.5cm} (1) $$ $$ |x^i - \bar{x}^i| \leq d, \; |x^4| \leq \epsilon(x^i_0), \hspace{3cm} (2) $$ the equations $[2]$, $[3]$ and $[4]$ have a unique solution, continuous and bounded, $\Omega(x^\alpha_0, x^4, \lambda_2, \lambda_3)$ and $W(x^\alpha)$, $U(x^\alpha)$, verifying the inequalities \begin{equation*} |\Omega - \Omega_0| \leq l, \hspace{0.5cm}|W-W_0| \leq l, \hspace{0.5cm} |U-U_0| \leq l. \end{equation*} We shall prove in addition that the functions $W$ and $U$ obtained satisfy, as ${W^{(1)}}$ and $U^{(1)}$, some Lipschitz conditions with respect to the variables $x^i$. In order to prove it, it is enough to make on the functional space $\mathcal{F}$ the following assumptions: $\mathbf{Assumptions}$ \begin{description} \item[(1)] The functions $\Omega_1$ and $\tilde{\Omega}_1$ satisfy Lipschitz conditions with respect to the three arguments $x^i_0$ \begin{equation} \label{eq:5.33} \bigg{|} \Omega_1(x^i_0, x^4_0, x^4, \lambda_2, \lambda_3) - \Omega_1(x'^{i}_0, x^4_0,x^4, \lambda_2, \lambda_3) \bigg{|} \leq h' \sum_{i=1}^3 | x'^{i}_0 - x^i_0| \end{equation} with $h' \leq |x^4_0 - x^4| N'$; in particular \begin{equation} \label{eq:5.34} \bigg{|} \tilde{\Omega}_1(x^i_0, ...) - \tilde{\Omega}_1(x'^{i}_0, ...) \bigg{|} \leq N' \sum_{i=1}^3 |x'^{i}_0 - x^i_0|, \end{equation} where $h'$ is an arbitrarily given number and $N'$ is a function of the previous bounds.
\item[(2)] The functions $W_1$ and $U_1$ satisfy Lipschitz conditions with respect to the $x^i$: \begin{equation} \begin{split} \label{eq:5.35} &|W_1(x'^{i},x^4) - W_1(x^i,x^4)| \leq l \sum_{i=1}^3 |x'^{i} - x^i|,\\ &|U_1(x'^{i},x^4) - U_1(x^i, x^4)| \leq l \sum_{i=1}^3 |x'^{i} - x^i|. \end{split} \end{equation} \end{description} Hence $\mathcal{F}$, endowed with the previous norm, is still a normed, complete and compact space. Let us now show that the representative points $M_2$ of the points $M_1 \in \mathcal{F}$ are still points of $\mathcal{F}$ if $\epsilon(x^i_0)$ is suitably chosen. We have \begin{equation} \label{eq:5.36} \Omega_2(x'^{i}_0, ...) - \Omega_2(x^i_0, ...) = \int_{x^4_0}^{x^4} \bigg{(} F_1(x'^{i}_0, ...) - F_1(x^i_0, ...) \bigg{)} d \omega^4, \end{equation} where the quantities $F_1(x'^{i}_0,...)$ and $F_1(x^i_0, ...)$ are evaluated with the help of the functions $X(x'^{i}_0, ...)$, more precisely of $x^i(x'^{i}_0, ...)$, $\Omega_1(x'^{i}_0, ...)$ and $x^i(x^i_0, ...)$, $\Omega_1(x^i_0, ...)$, respectively. It turns out from the expression of $F_1$ and from the inequalities $(\ref{eq:5.17}$) and $(\ref{eq:5.22}$) that \begin{equation} \begin{split} \label{eq:5.37} &|F_1(x'^{i}_0, ...) - F_1(x^i_0, ...)| \leq N' \sum_{i=1}^3 |x'^{i}_0 - x^i_0|, \\ &|\Omega_2(x'^{i}_0, ...) - \Omega_2(x^i_0, ...)| \leq |x^4_0 - x^4| N'\sum_{i=1}^3 |x'^{i}_0 - x^i_0|, \end{split} \end{equation} and hence, if $$\epsilon(x^i_0) \leq \frac{h'}{N'},$$ we will have \begin{equation} \label{eq:5.38}|\Omega_2(x'^{i}_0, ...) - \Omega_2(x^i_0, ...)| \leq h' \sum_{i=1}^3|x'^{i}_0 - x^i_0|. \end{equation} If $N'$ denotes the number, that depends only on the bounds B, B' and $h$, occurring in Eq. ($\ref{eq:5.34}$), we will have \begin{equation*} |{\tilde{\Omega}}_2(x'^{i}_0, ...) - {\tilde{\Omega}}_2(x^i_0, ...)| \leq N'\sum_{i=1}^3 |x'^{i}_0 - x^i_0|.
\end{equation*} Furthermore, if we consider \begin{equation} \begin{split} \label{eq:5.39} W_2(x'^{i},x^4) - W_2(x^i, x^4) =& \int_{0}^{x^4} \bigg{(} G_1(x'^{i}, x^4) - G_1(x^i, x^4) \bigg{)} dt \\ & + W_0( x'^{i}) - W_0(x^i), \end{split} \end{equation} where $G_1$ is a function $W_1$ or $U_1$, the Eq. ($\ref{eq:5.35}$) shows that, under the assumptions B' on the Cauchy data, we have \begin{equation*} |W_2(x'^{i},x^4) - W_2(x^i, x^4)| \leq |x^4| l \sum_{i=1}^3 |x'^{i} - x^i| + l_0 \sum_{i=1}^3 | x'^{i} - x^i|. \end{equation*} Hence, we see that $$\epsilon(x^i_0) \leq \frac{ l - l_0}{l}$$ implies \begin{equation*} |W_2(x'^{i},x^4) - W_2(x^i, x^4)| \leq l \sum_{i=1}^3 |x'^{i} - x^i|. \end{equation*} Similarly, we have \begin{equation*} \begin{split} U_2(x'^{i},x^4) - U_2(x^i, x^4) =& \frac{1}{4\pi}\int_{x^4_0}^0 \int_0^{2 \pi} \int_0^{\pi} [H_1(x'^{i}_0, ...) - H_1(x^i_0, ...)] d\omega^4 d\lambda_2 d \lambda_3 \\ &+ \frac{1}{4\pi}\int_0^{2 \pi} \int_0^{\pi} [I_1(x'^{i}_0, ...) - I_1(x^i_0, ...)] d\lambda_2 d \lambda_3. \end{split} \end{equation*} The quantities $H_1(x'^{i}_0)$, $I_1(x'^{i}_0)$ and $H_1(x^{i}_0)$, $I_1(x^{i}_0)$ are evaluated by means of the functions $X(x'^{i}_0, ...)$, $\Omega_1(x'^{i}_0, ...)$ and $X(x^{i}_0, ...)$, $\Omega_1(x^{i}_0, ...)$, respectively. $\mathbf{Quantity \; H_1}$ \begin{description} \item[(a)] Let us consider the polynomial $p$ occurring in the denominator of $H_{1a}$; $p$ is a polynomial of the functions $[{A^{\lambda \mu}}^{(1)}]$, ${W_s}^{(1)}(x^\alpha)$, of their first and second partial derivatives, of the functions $X$, $\tilde{X}$ and $p_i^0$.
The Taylor series expansion of this polynomial, starting from the values \begin{equation*} \begin{split}& [{A^{\lambda \mu}}^{(1)}]_0= \delta^\mu_\lambda, \; \bigg{[} \frac{\partial {A^{\lambda \mu}}^{(1)}}{\partial x^\alpha} \bigg{]} = \bigg{[} \frac{\partial {A^{\lambda \mu}}^{(1)}}{\partial x^\alpha} \bigg{]}_0,\; \dots,\;\\ & {W_s}^{(1)}(x^\alpha) ={W_s}^{(1)}(x^\alpha_0), \;{W_{s \alpha}}^{(1)}(x^\alpha) ={W_{s \alpha}}^{(1)}(x^\alpha_0), \; \dots, \\ &\; X=X_0,\; \tilde{X} = \tilde{X}_0 \end{split}\end{equation*} for which the polynomial $p$ vanishes, shows that $p$ is a polynomial of the functions already listed, and of the functions $[{A^{\lambda \mu}}^{(1)}]- \delta^\mu_\lambda$,$\dots$, ${W_s}^{(1)}(x^\alpha) -{W_s}^{(1)}(x^\alpha_0)$, ..., $\tilde{X} - \tilde{X}_0$, $X-X_0$ whose terms are at least of first degree with respect to the set of these last functions. The quantity $\frac{p}{x^4_0 - x^4}$ is therefore a polynomial of the functions $${A^{\lambda \mu}}^{(1)}, \; \dots, \; {W_s}^{(1)}(x^\alpha), \; \dots, \; X, \; \tilde{X}, \; p_i^0$$ and of the functions $$\frac{\big{[} {A^{*\lambda \mu}}^{(1)}\big{]} - \delta^{\mu}_\lambda}{x^4_0 -x^4}, \; \frac{{W_s}^{(1)}(x^\alpha) - {W_s}^{(1)}(x^\alpha_0)}{x^4_0 - x^4}, \; \dots, \;\frac{X - X_0}{x^4_0 - x^4}, \; \frac{\tilde{X} - \tilde{X}_0}{x^4_0 - x^4}.$$ Since the coefficients ${A^{*\lambda \mu}}^{(1)}$ and the functions ${W_s}^{(1)}$ admit bounded derivatives with respect to the $x^\alpha$ up to the fourth order, whereas the functions considered involve only derivatives of the first two orders, it turns out from the assumptions B and the inequalities ($\ref{eq:5.15}$) and ($\ref{eq:5.16}$) that all the listed functions are bounded in $\Lambda$ by a number which only depends on the bounds B and B'. Thus, the polynomial $\frac{p}{x^4_0 - x^4}$ verifies a Lipschitz condition with respect to each of these functions, whose coefficient depends only on the bounds B and B'. 
Then, we are going to prove that these functions themselves verify Lipschitz conditions with respect to the $x^i_0$. It will be enough for us, by virtue of the assumptions B and the previous inequalities, to prove this result for: \begin{description} \item[(1)] the functions $\frac{\big{[} {A^{*\lambda \mu}}^{(1)}\big{]} - \delta^{\mu}_\lambda}{x^4_0 -x^4}$ and $\frac{{W_s}^{(1)}(x^\alpha) - {W_s}^{(1)}(x^\alpha_0)}{x^4_0 - x^4}$ and the analogous functions written with first and second partial derivatives of ${A^{*\lambda \mu}}^{(1)}$ and ${W_s}^{(1)}$ with respect to the $x^\alpha$; \item[(2)] the functions $\frac{X - X_0}{x^4_0 - x^4}$. \end{description} Let us begin with $\mathbf{(1)}$ by setting \begin{equation*} F(x^i_0, x^4_0, x^4, \lambda_2, \lambda_3) = \frac{{A^{*\lambda \mu}}^{(1)} - \delta^\mu_\lambda}{x^4_0 - x^4}, \end{equation*} where ${A^{*\lambda \mu}}^{(1)} - \delta^\mu_\lambda = {A^{*\lambda \mu}}^{(1)}({W_s}^{(1)}(x^i, x^4), {W_s}^{(1)}(x^i_0, x^4_0), x^i, x^4, x^i_0, x^4_0)$\\ $- {A^{*\lambda \mu}}^{(1)}({W_s}^{(1)}(x^i, x^4), {W_s}^{(1)}(x^i_0, x^4_0), x^i_0, x^4_0, x^i_0, x^4_0)$, with $$x^i = x^i(x^i_0, x^4_0, x^4, \lambda_2, \lambda_3).$$ Let us consider the quantity $F(x'^{i}_0, ...)- F(x^i_0, ...)$. The function occurring in the numerator vanishes for $x^4=x^4_0$, because the numerators of the two functions $F(x'^{i}_0, ...)$ and $F(x^i_0, ...)$ vanish, and it admits a derivative with respect to $x^4$ continuous and bounded in the domain $\Lambda$. Thus, we have \begin{eqnarray} \;&\;& F(x'^{i}_0, ...)- F(x^i_0, ...) \nonumber \\ &\;& = \Biggl\{ \frac{\partial}{\partial x^4} \bigg{[} ({A^{*\lambda \mu}}^{(1)}(x'^{i}_0, ...) - \delta^\mu_\lambda) - ({A^{*\lambda \mu}}^{(1)}(x^{i}_0, ...) - \delta^\mu_\lambda) \bigg{]}\Biggr\}_{x^4=x^4_0 + \theta(x^4 - x^4_0)} \end{eqnarray} where $0 \leq \theta \leq 1$.
Since the derivative of the function ${A^{*\lambda \mu}}^{(1)}(x'^{i}_0, ...)$ with respect to the parameter $x^4$ verifies a Lipschitz condition with respect to the $x^i_0$, whose coefficient depends only on the bounds B and B', we finally see that \begin{equation*} |F(x'^{i}_0, ...)- F(x^i_0, ...)| \leq L_1 \sum_{i=1}^3 |x'^{i}_0 - x^i_0|, \end{equation*} where $L_1$ depends only on the bounds B and B'. The same proof holds for the function $\frac{{W_s}^{(1)}(x^\alpha) - {W_s}^{(1)}(x^\alpha_0)}{x^4_0 - x^4}$ and for the functions built with the partial derivatives of the ${A^{*\lambda \mu}}^{(1)}$ or ${W_s}^{(1)}$ up to the third order included. Finally, we can prove the same result for $\mathbf{(2)}$. We have \begin{equation*} \tilde{X} - \tilde{X}_0 = \frac{ \int_{x^4_0}^{x^4} (E- E_0)d\omega^4}{x^4_0 - x^4}, \end{equation*} from which \begin{equation*} (\tilde{X} - \tilde{X}_0)_{x'^{i}_0} - (\tilde{X} - \tilde{X}_0)_{x^i_0} = \frac{\int_{x^4_0}^{x^4} [ (E- E_0)_{x'^{i}_0} - (E- E_0)_{x^i_0} ] d \omega^4}{x^4_0 - x^4}, \end{equation*} where $E$ is a rational fraction with denominator ${T^{*4}}^{(1)}$ of the coefficients ${A^{*\lambda \mu}}^{(1)}$ and of their partial derivatives up to the third order and of the functions $X$. We can write $E-E_0$ in the form of a rational fraction with denominator ${T^{*4}}^{(1)}$, because ${T^{*4}}^{(1)}=1$ for $x^4=x^4_0$, of the previous functions and of the functions $X-X_0$, ${A^{*\lambda \mu}}^{(1)} -{ {\delta}^\mu}_\lambda$, $\dots$, whose numerator has all its terms of first degree at least with respect to the set of these functions.
Then, we can write $$E-E_0=(x^4_0 - x^4)F,$$ where $F$ is a rational fraction with denominator ${T^{*4}}^{(1)}$ of the previous functions and of the functions $$\frac{X - X_0}{x^4_0 - x^4}, \; \frac{{A^{*\lambda \mu}}^{(1)} - \delta^\mu_\lambda}{x^4_0 - x^4}, \; \dots .$$ Since all these functions verify Lipschitz conditions with respect to the $x^i_0$, we have \begin{equation*} |(E- E_0)_{x'^{i}_0} -(E- E_0)_{x^{i}_0} | \leq L_2 |x^4_0 - x^4| \sum_{i=1}^3 |x'^{i}_0 - x^i_0|, \end{equation*} from which \begin{equation*} |(X- X_0)_{x'^{i}_0} - (X- X_0)_{x^{i}_0}| \leq \frac{L_2}{2} |x^4_0 - x^4|^2 \sum_{i=1}^3 |x'^{i}_0 - x^i_0| \end{equation*} and \begin{equation*} \bigg{|} \bigg{(} \frac{X - X_0}{x^4_0 - x^4} \bigg{)}_{x'^{i}_0} - \bigg{(} \frac{X - X_0}{x^4_0 - x^4} \bigg{)}_{x^{i}_0} \bigg{|} \leq \frac{L_2}{2} |x^4_0 - x^4| \sum_{i=1}^3 |x'^{i}_0 - x^i_0|. \end{equation*} Thus, we have proven that the quantity $\frac{p}{x^4_0 - x^4}$ verifies a Lipschitz condition with respect to the $x^i_0$, whose coefficient depends only on the bounds B and B'. \item[(b)] It remains to prove that the quantity $H_1$, that is, the product of the square root of a rational fraction, with numerator 1 and non-vanishing denominator, by a rational fraction with non-vanishing denominator of the bounded functions all verifying Lipschitz conditions with respect to the $x^i_0$, verifies in $\Lambda$ a Lipschitz condition with respect to the $x^i_0$ whose coefficient $Q'$ depends only on the bounds B, B', $h$ and on $h'$. Hence we have \begin{equation*} |H'_1 - H_1| \leq Q' \sum_{i=1}^3 |x'^{i}_0 - x^i_0|. \end{equation*} \end{description} $\mathbf{Quantity\; I_1}$ By considering the expression of $I_1$ and the previous inequalities, we can prove that all terms of $I_1$, with the exception of the term ($\ref{eq:5.23}$), verify Lipschitz conditions with respect to the $x^i_0$ whose coefficient is of the form $R'_1|x^4_0|$, where $R'_1$ is a number that depends only on the bounds B and B'. Let us consider $(\ref{eq:5.23}$).
We find that $\frac{J(x^i_0) - 1}{x^4_0 - x^4}$ verifies a Lipschitz condition with respect to the variables $x^i_0$, from which \begin{equation*} |J(x'^{i}_0) -J(x^{i}_0)|_{x^4=0} \leq R'_1|x^4_0| \sum_{i=1}^3 |x'^{i}_0 - x^i_0|, \end{equation*} from which, by using the inequality $(\ref{eq:5.33}$) and the inequalities on $I$, we have \begin{equation*} |I_1(x'^{i}_0) - I_1(x^{i}_0)| \leq R''_0|x^4_0| \sum_{i=1}^3 |x'^{i}_0 - x^i_0| + |U_0(x'^{i}_0) - U_0(x^i_0)|(1+ R''_2|x^4_0|). \end{equation*} Then, by using the Lipschitz conditions verified by $U_0$, we obtain \begin{equation*} |I_1(x'^{i}_0) -I_1(x^{i}_0)| \leq (R'|x^4_0| + l_0) \sum_{i=1}^3 |x'^{i}_0 - x^i_0|, \end{equation*} where $R'$ is a number that depends only on the bounds B and B'. Finally, we shall deduce from the Lipschitz conditions verified by $H_1$ and $I_1$ \begin{equation*} |U_2(x'^{i}_0, x^4) - U_2(x^{i}_0, x^4)| \leq \frac{\pi}{2} [(Q' + R')|x^4_0| + l_0]\sum_{i=1}^3|x'^{i}_0 - x^i_0|, \end{equation*} hence the inequality $$\epsilon(x^i_0) \leq \frac{l - l_0}{Q' + R'} \frac{2}{\pi}$$ implies \begin{equation*} |U_2(x'^{i}_0, x^4) - U_2(x^{i}_0, x^4)| \leq l \sum_{i=1}^3 |x'^{i}_0 - x^i_0|. \end{equation*} $\mathbf{Conclusion.}$ The previous inequalities prove that, if $\epsilon(x^i_0)$ satisfies the corresponding inequalities, the point $M_2$ is, under the assumptions made, a point of $\mathcal{F}$. The application of the fixed-point theorem shows that, in the domain $D$, the functions $W$ and $U$ satisfy Lipschitz conditions with respect to the $x^i$ with coefficient $l$. The functions $W$ and $U$, solutions of the integral equations $[I_1]$, satisfy therefore, in $D$, the same inequalities holding for the functions $W^{(1)}$, $\dots$, $U^{(1)}$.
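The Lipschitz dependence on the data established above is the integral-equation analogue of the classical continuous-dependence (Grönwall) estimate for ordinary differential equations. The following Python sketch is our own numerical check, on a model equation chosen here ($x' = \sin x$, whose right-hand side has Lipschitz constant $L=1$): two solutions with nearby initial values remain within the bound $e^{Lt}\,|x'_0 - x_0|$.

```python
import numpy as np

def solve(x0, t):
    """Integrate x' = sin(x) from initial value x0 with a midpoint scheme."""
    x = np.empty_like(t)
    x[0] = x0
    for i in range(1, len(t)):
        h = t[i] - t[i - 1]
        k = np.sin(x[i - 1])                              # slope at left endpoint
        x[i] = x[i - 1] + h * np.sin(x[i - 1] + 0.5 * h * k)  # midpoint step
    return x

t = np.linspace(0.0, 1.0, 1001)
a = solve(1.0, t)
b = solve(1.001, t)
# discrete Lipschitz coefficient: largest spread relative to the initial spread
ratio = np.max(np.abs(a - b)) / 0.001
```

The observed ratio stays below the Grönwall bound $e^{L}$ on $[0,1]$, while exceeding 1: nearby data separate, but in a controlled, Lipschitz way.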
\subsection{Solution of the equations $[G_1]$} We will now prove that the functions $W_s$, which are solutions of the equations $[I_1]$, solve the equations $[G_1]$, and that the functions $W_{s \alpha}$, $\dots$, $U_S$, which are solutions of the equations $[I_1]$, are the partial derivatives up to the fourth order of the $W_s$ in a domain $D$ that depends only on the bounds B and B'. We shall use for the proof the approximation of continuous functions by means of analytic functions. Let us consider some equations $[G_1]$: \begin{equation*} {A^{\lambda \mu}}^{(1)} \frac{\partial^2 W_s}{\partial x^\lambda \partial x^\mu} + f_s =0, \hspace{7cm} [G_1] \end{equation*} where the coefficients $A^{\lambda \mu}$, $f_s$, ${W_s}^{(1)}$ and the Cauchy data $\psi_s$, $\varphi_s$ are analytic functions of their arguments. The Cauchy problem for the equations $[G_1]$ admits an analytic solution in a neighbourhood $V$ of the domain $(d)$ of the surface $x^4=0$ carrying the initial data. If the coefficients and the Cauchy data satisfy the assumptions made for the system $[F_1]$, there exists a neighbourhood $V$ of $(d)$ where this solution satisfies the integral equations $[I_1]$. Furthermore, let us consider, independently of equations $[G_1]$, the integral equations $[I_1]$. We shall prove that they admit, within a domain $D$ that depends only on the bounds B and B', a unique analytic solution, which coincides therefore, in the part shared by the domains $V$ and $D$, with the solution of equations $[G_1]$. The principle of analytic continuation then shows that this solution of equations $[I_1]$ is a solution of equations $[G_1]$ in the whole of $D$.
Let us prove for example the analyticity in $D$ of the solution of equations $[1]$ \begin{equation*} X= \int_{x^4_0}^{x^4} E d\omega^4 + X_0, \end{equation*} when $E$ is an analytic function of the quantities $X$, $x^\alpha_0$ and $x^4$, bounded by $M$ in the domain \begin{equation*} R: \; |X - \bar{X}_0| \leq d, \hspace{0.5cm} |x^i_0 - \bar{x}^i| \leq d, \hspace{0.5cm} |x^4| \leq |x^4_0| \leq \epsilon(x^i_0) \end{equation*} of variation of its real arguments, and is expandable in an absolutely convergent series in the neighbourhood of every point of $R$. Thus, we can extend the definition of $E$ to a domain of variation of the complex arguments $Z= x +iy$, $z^\alpha_0 =x^\alpha_0 + i y^\alpha_0$, $z^4=x^4+iy^4$ by expressing it in the form of a convergent series, hence holomorphic, in the $m$ cylinders $v$, centered at an arbitrary point of $R$ and defined by \begin{equation*} |Z' - X| \leq a_X, \hspace{0.5cm} |z^{' \alpha}_0 - x^\alpha_0|\leq b_{x^\alpha_0}, \hspace{0.5cm} |z^4 - x^4| \leq C_{x^4}. \end{equation*} The partial derivatives $\frac{\partial E}{\partial X_1}$ being bounded by $M'$ in $R$, one can choose the bounds $a_X$, $b_{x^\alpha_0}$ and $C_{x^4}$ in such a way that in $v$ one has \begin{equation*}\bigg{|} \frac{\partial E}{\partial Z_1} \bigg{|} \leq M' + \alpha', \end{equation*} where $\alpha'$ is an arbitrarily small number. One can also choose the bounds $b_{x^\alpha_0}$ and $C_{x^4}$ so that in $v$ one has \begin{equation*} | \Im\; E(X_1, z^\alpha_0, z^4)| \leq \beta, \hspace{0.5cm} |\Re\;E(X_1, z^\alpha_0, z^4)| \leq M + \beta, \end{equation*} where $\beta$ is an arbitrarily small number. One can build, on the other hand, a cover of the domain $R$ by means of a finite number of projections in $R$ of the $m$ previous cylinders.
The corresponding $m$ cylinders determine a domain $\bar{R}$ of the space of complex arguments $Z$, $z_0$, $z^4$, which fulfills the inequalities \begin{equation*} \begin{split} &|X - \bar{X}_0| \leq d, \hspace{0.5cm} |x^i_0 - \bar{x}^i| \leq d, \hspace{0.5cm} |x^4| \leq |x^4_0| \leq \epsilon(x^i_0); \\ & |Y| \leq a, \hspace{0.5cm} |y^\alpha_0| \leq b, \hspace{0.5cm} |y^4| \leq c, \end{split} \end{equation*} where $a$, $b$ and $c$ are non-vanishing numbers, and in which the complex function $E$ is defined and analytic. Let us write: \begin{equation*} E(Z_1, z^\alpha_0, z^4) = E(Z_1, z^\alpha_0, z^4) - E(X_1, z^\alpha_0, z^4) +E(X_1, z^\alpha_0, z^4), \end{equation*} from which \begin{equation*} \begin{split} & | \Im\; E(Z_1, z^\alpha_0, z^4)| \leq \beta + m(M' + \alpha')a, \\ & |\Re\;E(Z_1, z^\alpha_0, z^4)| \leq M + \beta + m(M' + \alpha')a. \end{split} \end{equation*} Now, let us consider the equations $[1]$ extended to the complex domain $\bar{R}$ \begin{equation*} Z = \int_{z^4_0}^{z^4} E(Z, z^\alpha_0, z^4) d\omega^4 + Z_0. \hspace{6cm} [\bar{1}] \end{equation*} In order to solve it, we consider, as in the real case, a functional space $\Upsilon$ defined by the functions of complex variables $Z_1(z^\alpha_0, z^4)$, real for $z^\alpha_0$ and $z^4$ real, analytic in the domain $\bar{D}$ defined by \begin{equation*} |x^i_0 - \bar{x}^i| \leq d, \hspace{0.5cm} |x^4| \leq |x^4_0| \leq \epsilon(x^i_0), \hspace{0.5cm} |y^\alpha_0| \leq b, \hspace{0.5cm} |y^4| \leq c, \end{equation*} and satisfying $|X_1 - X_0| \leq d$ and $|Y_1| \leq a$. \begin{description} \item[(a)] The representation \begin{equation*} Z_2 = Z_0 + \int_{z^4_0}^{z^4} E(Z_1, z^\alpha_0, z^4)d \omega^4 \end{equation*} is a representation of the space into itself if $\epsilon(x^i_0)$, $b$ and $c$ are suitably chosen. As a matter of fact: \begin{description} \item[(1)] $Z_2$ is an analytic function of $z^\alpha_0$, $z^4$, because this holds for $E$, and is real for $z^\alpha_0$ and $z^4$ real.
\item[(2)] The equality \begin{equation*} Z_2 = - \int_{x^4_0}^{x^4_0 + i y^4_0}E d\omega^4 + \int_{x^4_0}^{x^4}E d\omega^4 + \int_{x^4}^{x^4+ i y^4}E d\omega^4 + Z_0\end{equation*} implies that \begin{equation*}\begin{split} |X_2 - X_0| \leq &(b+c)[m(M' + \alpha')a + \beta] \\ &+ |x^4_0 - x^4|[m(M' + \alpha')a + \beta + M],\\ |Y_2| \leq &(b+c) [m(M'+ \alpha')a + \beta + M] \\ &+ |x^4_0 - x^4|[m(M' + \alpha') a + \beta].\end{split}\end{equation*} Thus, if $\epsilon(x^i_0) \leq \frac{d-(b+c)[m(M' + \alpha')a + \beta]}{M+ m(M'+\alpha')a+ \beta}$, we have \begin{equation*} |X_2 - \bar{X}_0| \leq d, \end{equation*} and if $b+c \leq \frac {a[1- mM'(x^4_0 -x^4)]- (m \alpha' a+ \beta)(x^4_0 - x^4)}{M + m(M' + \alpha')a + \beta}$, we have \begin{equation*} |Y_2|\leq a. \end{equation*} Let us recall that the number $\epsilon(x^i_0)$ satisfies \begin{equation*}\epsilon(x^i_0) \leq \frac{1}{mM}. \hspace{6cm} (A) \end{equation*} Therefore, we have \begin{equation*}1 - mM'(x^4_0 - x^4) >0. \hspace{5cm} (B) \end{equation*} Thus, we have to choose $\epsilon(x^i_0)$ as satisfying $(A)$, and the inequality $(B)$ shows that one can find, without supplementary assumptions upon $\epsilon(x^i_0)$, the numbers $b$ and $c$ defining $\bar{D}$, so that $M_2$ is a point of $\Upsilon$. The domain $\bar{D}$ has for real part a domain as close as one wants to $D$. \end{description} \item[(b)] Let us prove that the representation reduces the distances. We have seen that, in $\bar{R}$, one has $\big{|}\frac{\partial E}{\partial Z_1} \big{|} \leq M' + \alpha'$, from which \begin{equation*}|E(Z_1', z^\alpha_0, z^4) - E(Z_1, z^\alpha_0, z^4) | \leq |Z'_1 - Z_1|(M' + \alpha').
\end{equation*} Thus, we shall have \begin{equation*} d(M_2, M_2') \leq m(M' + \alpha')|z^4_0 - z^4| d(M_1, M_1'), \end{equation*} from which, if $|z^4_0 - z^4| < \frac{1}{m(M' + \alpha')}$, $\epsilon(x^i_0) \leq \frac{1}{m(M' + \alpha')} - \eta $ and $b+c < \eta$, we have \begin{equation*} d(M_2, M_2') < d(M_1, M_1'), \end{equation*} where $\eta$ is an arbitrarily small number. Thus, the real part of the domain $\bar{D}$ is as close as one wants to $D$. \end{description} We can conclude, as in the real case, that the representation $[\bar{1}]$ admits a unique fixed point. The corresponding $Z$ functions are solutions of equations $[1]$, and analytic in the domain $\bar{D}$. The functions $X$, the values of these functions $Z$ for real arguments $x^4_0$ and $x^4$, are analytic functions, solutions in a domain as close as one wants to $D$ of equations $[1]$. Analogous results can be proved in the same way for equations $[2]$, $[3]$ and $[4]$. \subsection{Coefficients and Cauchy data satisfying only the assumptions $B$ and $B'$} If the coefficients $A^{\lambda \mu}$, $f_s$, the given functions ${W_s}^{(1)}$ and the Cauchy data satisfy only the assumptions B and B', we shall approximate uniformly these quantities, and at the same time their partial derivatives up to the fourth order, by means of analytic functions $A^{\lambda \mu}_{(n)}$, $f_{s(n)}$, ${W_{s(n)}}^{(1)}$, $\varphi_{s(n)}$ and $\psi_{s(n)}$ verifying the assumptions B and B'. We shall build in this way a family of functions $W_{s(n)}$, ..., $U_{S(n)}$, which are solutions in $D$ of equations $[I_{1(n)}]$ and solutions in $D$ of the Cauchy problem, relatively to the equations $[G_{1(n)}]$: \begin{equation*} {A^{\lambda \mu}}_{(n)} \frac{ \partial^2 W_{s(n)}}{\partial x^\lambda \partial x^\mu} + f_{s(n)} =0. \hspace{6cm} [G_{1(n)}] \end{equation*} These functions $W_{s(n)}$ possess partial derivatives up to the fourth order and satisfy the assumptions B and B'.
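The uniform approximation by analytic functions invoked above rests on the Weierstrass approximation theorem. A minimal Python sketch (our own illustration; the test function $\sin 2t$ and the use of Bernstein polynomials on $[0,1]$ as the analytic approximants are choices made here, not in the text):

```python
import numpy as np
from math import comb

def bernstein(f, n, x):
    """Degree-n Bernstein polynomial of f on [0, 1]: an analytic
    (polynomial) approximant converging uniformly to f."""
    k = np.arange(n + 1)
    values = f(k / n)                        # samples of f at the nodes k/n
    binom = [comb(n, int(j)) for j in k]     # binomial coefficients C(n, k)
    return sum(values[j] * binom[j] * x**j * (1 - x)**(n - j) for j in k)

x = np.linspace(0.0, 1.0, 101)
f = lambda t: np.sin(2 * t)                  # a smooth model coefficient
err = np.max(np.abs(bernstein(f, 200, x) - f(x)))   # uniform error on [0, 1]
```

For a twice-differentiable function the uniform error decays like $1/n$; the analogous uniform convergence of derivatives is what the argument in the text uses for the coefficients and the Cauchy data.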
We want to prove that the functions $W_{s(n)}$, ..., $U_{S(n)}$ converge uniformly to some functions $W_s$, ..., $U_S$, when the functions $A^{\lambda \mu}_{(n)}$, ${W_{s(n)}}^{(1)}$, $\varphi_{s(n)}$, $\psi_{s(n)}$ and their partial derivatives converge uniformly to the given functions $A^{\lambda \mu}$, ${W_s}^{(1)}$, $\varphi_s$, $\psi_s$. This is possible by applying the same method we used before and the fact that the functions $W_{(n)}$ and $U_{(n)}$ verify a Lipschitz condition with respect to the $x$ variables (that one has to replace by $X_{(n)}$ in the integral equations $[I_{1(n)}]$ verified by these functions). Thus, we will have \begin{equation} \begin{split} \label{eq:5.40}& |X_{(n)} - X_{(m)}| \leq Max_{\Lambda} \Biggl\{ \alpha \bigg{(}\sum |A^{\lambda \mu}_{(n)} - A^{\lambda \mu}_{(m)} | + \dots \\ &+ \sum |{W_{s(n)}}^{(1)} - {W_{s(m)}}^{(1)}| + \dots \bigg{)} + M' \sum |X_{(n)} - X_{(m)}|\Biggr\}|x^4_0 - x^4|,\\ &|\Omega_{(n)} - \Omega_{(m)}| \leq Max_{\Lambda} \Biggl\{ \beta \bigg{(} \sum |A^{\lambda \mu}_{(n)} - A^{\lambda \mu}_{(m)}| + \dots \\ & + \sum |{W_{(n)}}^{(1)} - {W_{(m)}}^{(1)}| + \sum |X_{(n)} - X_{(m)}|\bigg{)} \\ &+ N' \bigg{(}|\Omega_{(n)} - \Omega_{(m)}| + \sum |W_{(n)} - W_{(m)}|\bigg{)} \Biggr\} |x^4_0 - x^4|, \end{split} \end{equation} and \begin{equation} \begin{split} \label{eq:5.41} & |W_{(n)} - W_{(m)}| \leq Max \Biggl\{ \sum |W_{(n)}- W_{(m)}| + \sum |U_{(n)} - U_{(m)}| \Biggr\} |x^4| \\ &+ |W_{0(n)}- W_{0(m)}|, \\ & |U_{(n)} - U_{(m)}| \leq Max \Biggl\{ \gamma \bigg{(} \sum |A^{\lambda \mu}_{(n)} - A^{\lambda \mu}_{(m)}| + \dots + \sum |f_{s(n)} - f_{s(m)}|\\ & + \dots + \sum|{W_{(n)}}^{(1)} - {W_{(m)}}^{(1)}| + \sum |X_{(n)} - X_{(m)}| \bigg{)} \\ &+ R'_1 \bigg{(} \sum |U_{(n)} - U_{(m)}| + \sum |W_{(n)} - W_{(m)}| + \sum |\Omega_{(n)} - \Omega_{(m)}|\\ & +\sum |\tilde{\Omega}_{(n)} - \tilde{ \Omega}_{(m)}| \bigg{)} \Biggr\} |x^4_0| + Max \Biggl\{ \delta \bigg{(} \sum |X_{(n)} - X_{(m)}| + \sum |A^{\lambda \mu}_{(n)} - 
A^{\lambda \mu}_{(m)}| \\ & + \dots + \sum | \Phi_{s(n)} - \Phi_{s(m)}| \bigg{)} + R'_2 \sum |\Omega_{(n)} - \Omega_{(m)}| \Biggr\}_{x^4=0}, \end{split}\end{equation} where $\alpha$, $\beta$, $\gamma$, $\delta$ are bounded numbers which only depend on the bounds B, B' and on $h$ and $h'$. The previous inequalities show that the functions $X_{(n)}$, $\Omega_{(n)}$, $W_{(n)}$ and $U_{(n)}$ converge uniformly towards functions $X$, $\Omega$ and $W$, $U$ in their respective domains of definition, $\Lambda$ and $D$, when the approximating functions converge uniformly towards the given functions. These functions $W$, $U$, uniform limits of the functions $W_{s(n)}$, $U_{(n)}$, satisfy the following properties. \begin{description} \item[(p.1)] The functions $W_{s \alpha}$, ..., $U_S$ are partial derivatives up to the fourth order of the functions $W_s$, and all these functions satisfy the same assumptions B and B' as the functions ${W_s}^{(1)}$ in $D$. \item[(p.2)] The functions $W_s$ verify the partial differential equations $[G_1]$ in the domain $D$. \end{description} \subsection{Solution of the equations $[G]$} We consider the functional space $W$ defined by the functions ${W_s}^{(1)}$ and satisfying the assumptions B and B' in the domain $D$. We have just proved that the solution, evaluated above, of the Cauchy problem for the equations $[G_1]$ defines a representation of this space into itself. Let us denote by ${W_s}^{(2)}$ this solution. The space $W$ is a normed, complete and compact space if one defines the distance of two of its points by \begin{equation*} d(M_1, M_1')=Max_{D} \bigg{(} \sum |{W_s}^{(1)}- {W'_s}^{(1)}| + ... + |{U_S}^{(1)} - {U'_S}^{(1)}| \bigg{)}. \end{equation*} The distance of two representative points $M_2$, $M_2'$ of $M_1$, $M_1'$ will be compared to the distance of these points with the help of inequalities analogous to ($\ref{eq:5.40}$) and $(\ref{eq:5.41}$).
Then, there exists a bounded, non-vanishing number $\eta$ such that, if $$\epsilon(x^i_0)< \eta,$$ the distance of two representative points $$\big{(} {W'_s}^{(2)}, \dots, {U'_S}^{(2)} \big{)} \hspace{0.5cm} {\rm and} \hspace{0.5cm} \big{(} {W_s}^{(2)}, \dots, {U_S}^{(2)} \big{)}$$ is less than the distance of the initial points. The representation considered then admits a unique fixed point $(W_s, ..., U_S)$ which belongs to the space. The functions $W_s$ corresponding to this fixed point are solutions of the Cauchy problem associated with the equations $[G]$, in the domain $D$. They possess partial derivatives up to the fourth order, continuous, bounded and satisfying some Lipschitz conditions with respect to the variables $x^i$. Thus, the Cauchy problem relative to the system of non-linear partial differential equations $[G]$ admits, in the domain $D$ and under the assumptions $H$, a solution possessing partial derivatives up to the fourth order, continuous, bounded and satisfying Lipschitz conditions with respect to the variables $x^i$. This concerns the existence of the solution. Another implication of our argument is the uniqueness of this solution. As a matter of fact, if we consider the system of integral equations verified by the solutions of the given equations $[G]$, it has only one solution $W_s$, $W_{s \alpha}$, ..., $U_S$, where the $W_{s\alpha}$, ..., $U_S$ are partial derivatives of the $W_s$. In this case it is possible to write inequalities analogous to the inequalities for $[G_{1(n)}]$, where $W_{(n)}$, ..., $U_{(n)}$; ${W_{(n)}}^{(1)}$, ..., ${U_{(n)}}^{(1)}$ and $W_{(m)}$, ..., $U_{(m)}$; ${W_{(m)}}^{(1)}$, ..., ${U_{(m)}}^{(1)}$ are replaced by two solutions of equations $[G]$, respectively. From these inequalities one derives the coincidence of these two solutions.
$\mathbf{Conclusion.}$ We consider a system of non-linear, second-order, hyperbolic partial differential equations with $n$ unknown functions $W_s$ and four variables $x^\alpha$, of the form $$A^{\lambda \mu} \frac{\partial^2 W_s}{\partial x^\lambda \partial x^\mu} + f_s=0, \hspace{2cm} \lambda,\mu=1, ...,4, \; s=1, 2, ..., n. \hspace{1cm} [G] $$ \\ The $f_s$ are given functions of the unknowns $W_s$, $W_{s \alpha}$ and of the variables $x^\alpha$. The $A^{\lambda \mu}$ are given functions of the $W_s$ and of the $x^\alpha$. The Cauchy data are, on the initial surface $x^4=0$, $$W_s(x^i, 0)= \varphi_s(x^i), \hspace{1cm} W_{s 4}(x^i, 0)=\psi_s(x^i).$$ On the system $[G]$ and the Cauchy data we make the following assumptions: \begin{description} \item[(1)] In the domain $(d)$, defined by $|x^i - \bar{x}^i| \leq d$, $\varphi_s$ and $\psi_s$ possess partial derivatives up to the orders five and four, continuous, bounded and satisfying Lipschitz conditions. \item[(2)] For the values of the $W_s$ satisfying \begin{equation*} |W_s - \varphi_s| \leq l,\hspace{0.5cm} |W_{si} - \varphi_{si}| \leq l, \hspace{0.5cm} |W_{s4} - \psi_s|\leq l \end{equation*} \end{description} and in the domain $D$, defined by $$|x^i - \bar{x}^i| \leq d, \hspace{1cm}|x^4| \leq \epsilon:$$ \begin{description} \item[(a)] $A^{\lambda \mu}$ and $f_s$ possess partial derivatives up to the fourth order, continuous, bounded and satisfying Lipschitz conditions. \item[(b)] The quadratic form $A^{\lambda \mu}X_\lambda X_\mu$ is of the normal hyperbolic type, i.e. $A^{44}>0$ and $A^{ij}X_i X_j$ is negative-definite. \end{description} Then the Cauchy problem relative to equations $[G]$ admits a unique solution, possessing partial derivatives continuous and bounded up to the fourth order, in a domain $\Delta$, which is a truncated cone with base $(d)$, defined by \begin{equation*} |x^i - \bar{x}^i| \leq d, \hspace{1cm} |x^4| \leq \eta(x^i).
\end{equation*} Once we have proved the existence and uniqueness of the solution of the Cauchy problem for non-linear, second-order, hyperbolic partial differential equations, we are now able to apply these results to General Relativity. In the next Chapter, we will present the solution of the Cauchy problem for the field equations, which are ten partial differential equations of second-order, linear in the second derivatives of the gravitational potentials and non-linear in their first derivatives. \chapter{General Relativity and the Causal structure of Space-Time} \chaptermark{General Relativity and the Causal structure } \epigraph{There are more things in Heaven and Earth, Horatio, than are dreamt of in your philosophy.}{William Shakespeare, Hamlet } Once we have argued about the existence and uniqueness of the solution of the Cauchy problem for systems of linear and non-linear equations, we are ready to discuss the applications to General Relativity. This will be the object of the first part of this Chapter. More precisely, we will discuss how it is possible to use the results obtained in the previous chapters to solve the Cauchy problem for the field equations. The gravitational potentials, in a domain without matter and in the absence of electromagnetic field, must verify the ten partial differential equations of second-order of the exterior case, $R_{\alpha \beta}=0$, which are not independent because of the Bianchi identities. We will formulate the Cauchy problem relative to this system of equations, with initial data on a hypersurface $S$. The study of the values on $S$ of the consecutive partial derivatives of the potentials shows that, if $S$ is nowhere tangent to the characteristic manifold, and if the Cauchy data satisfy four given conditions, the Cauchy problem admits, with respect to the system of equations $R_{\alpha \beta}=0$, in the analytic case, a solution, and this solution is unique.
Thus, if there exist two solutions, they coincide up to a change of coordinates, conserving $S$ point-wise and the values on $S$ of the Cauchy data. Hence, by making use of $\textit{isothermal coordinates}$, we will solve the Cauchy problem for the equations $\mathcal{G}_{\alpha \beta}=0$. After we have seen under which assumptions this is possible, we will define, in the second part, the causal structure of space-time. We will give the definition of $\textit{strong causality}$, and, since this is not enough to ensure that space-time is not just about to violate causality, we will define $\textit{stable causality}$. Eventually, we will deal with $\textit{global hyperbolicity}$ and its meaning in relation to Cauchy surfaces. \section{Cauchy Problem for General Relativity} The ten potentials, which are the metric components $g_{\alpha \beta}$ of an Einstein universe, satisfy, in the domains without matter and in the absence of electromagnetic field, ten partial differential equations of second-order of the exterior case \begin{equation*} \begin{split} R_{\alpha \beta} \equiv &\sum_{\lambda=1}^4 \Biggl\{ \partial_\lambda \Gamma \{ \lambda, [\alpha, \beta] \} - \partial_\alpha \Gamma \{ \lambda, [\lambda, \beta] \} \Biggr\} + \sum_{\lambda, \mu=1}^4 \Biggl\{\Gamma \{ \lambda, [\lambda, \mu] \} \Gamma \{ \mu, [\alpha, \beta] \} \\ & - \Gamma \{ \mu, [\lambda, \alpha] \} \Gamma \{ \lambda, [\mu, \beta] \} \Biggr\} =0, \end{split} \end{equation*} where $ \partial_\lambda = \frac{\partial}{\partial x^\lambda}$, the $x^\lambda$ are an arbitrary system of four space-time coordinates, and we have denoted by $\Gamma \{ \lambda, [\alpha, \beta] \}$ the $\Gamma^{\lambda}_{\alpha \beta}$ to stress the non-tensorial behaviour of the Christoffel symbols.
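The non-tensorial behaviour just mentioned is expressed by the classical transformation law of the Christoffel symbols under a change of coordinates $\tilde{x}^\lambda = \tilde{x}^\lambda(x^\mu)$, recalled here for completeness: $$\tilde{\Gamma} \{ \lambda, [\alpha, \beta] \} = \sum_{\rho, \mu, \nu=1}^4 \frac{\partial \tilde{x}^\lambda}{\partial x^\rho} \frac{\partial x^\mu}{\partial \tilde{x}^\alpha} \frac{\partial x^\nu}{\partial \tilde{x}^\beta} \, \Gamma \{ \rho, [\mu, \nu] \} + \sum_{\rho=1}^4 \frac{\partial \tilde{x}^\lambda}{\partial x^\rho} \frac{\partial^2 x^\rho}{\partial \tilde{x}^\alpha \partial \tilde{x}^\beta}.$$ The second, inhomogeneous term is the obstruction to tensoriality; it is also what makes it possible to impose conditions on the Christoffel symbols, such as the conditions of isothermy introduced below, by a mere choice of coordinates.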
These ten equations are not independent because the Einstein tensor satisfies the four contracted Bianchi identities \begin{equation*} \sum_{\lambda=1}^{4} \nabla_\lambda G^{\lambda \mu} \equiv 0, \end{equation*} where $G^{\lambda \mu} \equiv R^{\lambda \mu} - \frac{1}{2} (g^{-1})^{\lambda \mu} R$ is the Einstein tensor, and $R$ is the scalar curvature. The problem of determinism is here formulated, for an exterior space-time, in the form of the Cauchy problem relative to the system of partial differential equations $R_{\alpha \beta}=0$, with initial data carried by any hypersurface $S$. The study of the values on $S$ of the partial derivatives of $g_{\alpha \beta}$ shows that, if $S$ is nowhere tangent to a characteristic manifold, and if the Cauchy data satisfy four given conditions, the Cauchy problem for $R_{\alpha \beta}=0$ admits in the analytic case a unique solution. Thus, if $S$ is defined by the equation $x^4=0$, the four conditions that the initial data must verify are the four equations $G^4_\lambda=0$, which are expressed in terms of the data only. We remark that $G^{4}_\lambda$ is obtained from the $(1, 1)$ form of the Einstein tensor, by fixing the contravariant index to the component 4 and letting the covariant index vary. It is possible to use the results of Chapter 5, since once a space-time and a hypersurface $S$ are given, there always exists a coordinate change $\tilde{x}^{\lambda}=f^\lambda(x^\mu)$, with $\tilde{x}^4=0$ for $x^4=0$, such that every equation $R_{\alpha \beta}=0$ does not contain, in the new coordinates, second derivatives besides those of ${g}_{\alpha \beta}$; the system of Einstein equations then takes the form of the systems studied in Chapter 5.
The vacuum Einstein equations, in arbitrary coordinates (Levi-Civita \cite{levi2013n}), read as \begin{equation*} R_{\alpha \beta} \equiv - \mathcal{G}_{\alpha \beta} - L_{\alpha \beta} =0 \end{equation*} where $\mathcal{G}_{\alpha \beta}$ is \begin{equation*} \mathcal{G}_{\alpha \beta} \equiv \frac{1}{2} \sum_{\lambda, \mu=1}^4 (g^{-1})^{\lambda \mu} \frac{\partial^2 g_{\alpha \beta}}{\partial x^\lambda \partial x^\mu} + H_{\alpha \beta} \end{equation*} with $H_{\alpha \beta}$ a polynomial of the $g_{\lambda \mu}$, of the $(g^{-1})^{\lambda \mu}$ and of the first derivatives of the $g_{\lambda \mu}$; and $L_{\alpha \beta}$ is \begin{equation} \label{eq:6.1} L_{\alpha \beta} \equiv \frac{1}{2} \sum_{\mu=1}^4 \bigg{[} g_{\beta \mu} \partial_{\alpha} F^{\mu} + g_{\alpha \mu} \partial_{\beta} F^{\mu} \bigg{]}. \end{equation} We see that with a suitable choice of coordinates, more precisely if the $x^\lambda$ are four $\textit{isothermal}$ $\textit{coordinates}$, it is possible to assume, without restricting the generality of the hypersurface $S$, that the initial data satisfy, besides the four conditions $G^4_\lambda=0$, the so-called $\textit{conditions of isothermy}$: \begin{equation} \label{eq:6.2} F^{\mu} \equiv \frac{1}{\sqrt{- g}} \sum_{\lambda=1}^4 \frac{ \partial (\sqrt{- g} (g^{-1})^{\lambda \mu})}{\partial x^\lambda}=0 \;{\rm for}\; x^4=0, \end{equation} which are first-order partial differential equations satisfied by the potentials. Thus, as we desired, every equation $R_{\alpha \beta}=0$ does not contain second derivatives besides those of $g_{\alpha \beta}$. The reason why these coordinates are called $\textit{isothermal}$ is that they satisfy the wave equation associated with the metric.
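This last statement can be checked directly (a short computation, using the classical identity for the contracted Christoffel symbols). Applying the wave operator of the metric to a coordinate function $x^\mu$, one finds $$\sum_{\lambda, \nu=1}^4 (g^{-1})^{\lambda \nu} \bigg{(} \frac{\partial^2 x^\mu}{\partial x^\lambda \partial x^\nu} - \sum_{\rho=1}^4 \Gamma \{ \rho, [\lambda, \nu] \} \frac{\partial x^\mu}{\partial x^\rho} \bigg{)} = - \sum_{\lambda, \nu=1}^4 (g^{-1})^{\lambda \nu} \Gamma \{ \mu, [\lambda, \nu] \} = \frac{1}{\sqrt{-g}} \sum_{\lambda=1}^4 \frac{\partial (\sqrt{-g}\, (g^{-1})^{\lambda \mu})}{\partial x^\lambda} = F^\mu,$$ the last equality being the standard identity $\sum_{\lambda} \partial_\lambda \big( \sqrt{-g}\, (g^{-1})^{\lambda \mu} \big) = - \sqrt{-g} \sum_{\alpha, \beta} (g^{-1})^{\alpha \beta} \Gamma \{ \mu, [\alpha, \beta] \}$. The conditions of isothermy $F^\mu = 0$ thus say precisely that each coordinate function satisfies the homogeneous wave equation of the metric.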
A function $u$ solving the Laplace equation in the Euclidean setting can be thought of as a static solution of the heat equation, and the surfaces of constant $u$ are thus isothermal; thinking of the wave equation associated with the metric as the analogue of the Laplace equation, surfaces on which an isothermal coordinate is constant are thus $\textit{isothermal}$ with respect to that coordinate. We shall solve this Cauchy problem for the equations $\mathcal{G}_{\alpha \beta}=0$, verified by the potentials in isothermal coordinates, and we shall prove afterwards that the potentials obtained define indeed a space-time, related to isothermal coordinates, and verify the equations of gravitation $R_{\alpha \beta}=0$. \subsection{Solution of the Cauchy Problem for the Equations $\mathcal{G}_{\alpha \beta}=0$} We shall apply to the system \begin{equation*} \mathcal{G}_{\alpha \beta} \equiv \frac{1}{2} \sum_{\lambda, \mu=1}^4 (g^{-1})^{\lambda \mu} \frac{\partial^2 g_{\alpha \beta}}{\partial x^\lambda \partial x^\mu} + H_{\alpha \beta}=0 \end{equation*} the results of Chapter 5, by setting $(g^{-1})^{\lambda \mu}=A^{\lambda \mu}$, $g_{\alpha \beta}=W_s$, $H_{\alpha \beta}= f_s$, while on the Cauchy data we make two assumptions. $\mathbf{Assumptions}$\\ In a domain $(d)$ of the initial surface $S$, $x^4=0$, defined by $$|x^i - \bar{x}^i | \leq d:$$ \begin{description} \item[(1)] The Cauchy data $\varphi_s$ and $\psi_s$ possess partial derivatives continuous and bounded up to the orders five and four, respectively. \item[(2)] The quadratic form $\sum_{\lambda, \mu=1}^4 (g^{-1})^{\lambda \mu} X_\lambda X_\mu$ is of normal hyperbolic type, i.e. $(g^{-1})^{44}>0$ and $\sum_{i,j=1}^3 (g^{-1})^{ij}X_i X_j$ is negative-definite. In particular, $g=\det(g_{\lambda \mu}) \neq 0$.
\end{description} We deduce from these assumptions the existence of a number $l$ such that for $|g_{\alpha \beta} - \bar{\varphi}_s| \leq l $ one has $g \neq 0$, and we consider unknown functions $g_{\alpha \beta}=W_s$ for which the inequalities \begin{equation} \label{eq:6.3} |W_s - \bar{\varphi}_s| \leq l, \hspace{0.5cm} \bigg{|}\frac{\partial W_s}{\partial x^i} - \frac{\partial \bar{\varphi}_s}{\partial x^i} \bigg{|}\leq l, \hspace{0.5cm} \bigg{|} \frac{\partial W_s}{\partial x^4} - \bar{\psi}_s \bigg{|} \leq l \end{equation} are satisfied. The coefficients of the equations $\mathcal{G}_{\alpha \beta}=0$ (which are here independent of the variables $x^\alpha$) satisfy, as do the Cauchy data, the assumptions of Chapter 5, i.e.: \begin{description} \item[(1)] The coefficients $A^{\lambda \mu}= (g^{-1})^{\lambda \mu}$ and $f_s=H_{\alpha \beta}$ are rational fractions with denominator $g$, the former of the $g_{\lambda \mu}= W_s$, the latter of the $W_s$ and of the $\frac{\partial W_s}{\partial x^\alpha}$; they admit partial derivatives with respect to all their arguments up to the fourth order, continuous, bounded and satisfying Lipschitz conditions. \item[(2)] The quadratic form $\sum_{\lambda, \mu=1}^4 A^{\lambda \mu} X_\lambda X_\mu$ is of normal hyperbolic type, i.e. $A^{44}>0$ and $\sum_{i,j=1}^3 A^{ij}X_i X_j$ is negative-definite. \end{description} Hence, we can apply to the system $\mathcal{G}_{\alpha \beta}=0$ the conclusions of Chapter 5. $\mathbf{Conclusion}$ There exists a number $\epsilon (x^i_0) \neq 0$ such that, in the domain \begin{equation*} |x^i - \bar{x}^i | < d, \hspace{1cm} |x^4| \leq \epsilon(x^i_0) \end{equation*} the Cauchy problem relative to the equations $\mathcal{G}_{\alpha \beta}=0$ admits a solution which has partial derivatives continuous and bounded up to the fourth order and which verifies the inequalities (\ref{eq:6.3}). Once the solution has been found, it remains to prove that it verifies the conditions of isothermy.
Thus, let us show that \begin{description} \item[(1°)] $\textit{The solution found of the system}$ $\mathcal{G}_{\alpha \beta}=0$ $\textit{verifies the four equations}$ \begin{equation*} \partial_4 F^\mu =0 \; {\rm for} \; x^4=0. \end{equation*} Indeed, we have assumed that the initial data satisfy the conditions \begin{equation} \label{eq:6.4} G^4_\lambda=0 \; {\rm and} \; F^\mu=0 \hspace{1cm} {\rm for}\; x^4=0.\end{equation} Hence, we have \begin{equation*} \begin{split} G^4_\lambda \equiv & - \sum_{\mu=1}^4 (g^{-1})^{4 \mu} \Biggl\{ \mathcal{G}_{\lambda \mu} - \frac{1}{2} g_{\lambda \mu} \sum_{\alpha, \beta=1}^4 (g^{-1})^{\alpha \beta} \mathcal{G}_{\alpha \beta} + L_{\lambda \mu}\\ & - \frac{1}{2} g_{\lambda \mu}\sum_{\alpha, \beta=1}^4 (g^{-1})^{\alpha \beta} L_{\alpha \beta}\Biggr\}, \end{split}\end{equation*} where $L_{\alpha \beta}$ is defined by (\ref{eq:6.1}). Thus, the solution of the system $\mathcal{G}_{\alpha \beta}=0$ verifies the equations \begin{equation*} - \frac{1}{2} \sum_{\alpha, \mu=1}^4(g^{-1})^{4 \mu}g_{\lambda \alpha} \partial_\mu F^\alpha - \frac{1}{2}\partial_\lambda F^4 + \frac{1}{2} \sum_{\alpha=1}^4 \delta^4_\lambda \partial_\alpha F^\alpha =0 \; {\rm for}\; x^4=0, \end{equation*} from which, since $F^\mu=0$ on $x^4=0$ implies that the tangential derivatives $\partial_i F^\mu$ vanish there, we have \begin{equation*} - \frac{1}{2} (g^{-1})^{44} \sum_{\alpha=1}^4 g_{\lambda \alpha} \partial_4 F^\alpha=0. \end{equation*} Since $(g^{-1})^{44} \neq 0$ and the matrix $(g_{\lambda \alpha})$ is invertible, we see that the solution found verifies the four equations $\partial_4 F^\mu=0$, for $x^4=0$. \item[(2°)] $\textit{The solution found of}$ $\mathcal{G}_{\alpha \beta}=0$ $\textit{verifies}$ \begin{equation*} F^\mu =0. \end{equation*} This property will result from the conservation conditions.
Indeed, the metric components $g_{\alpha \beta}$ satisfy the four Bianchi identities \begin{equation*} \sum_{\lambda=1}^4 \nabla_\lambda \bigg{(} R^{\lambda \mu} - \frac{1}{2} (g^{-1})^{\lambda \mu} R \bigg{)}=0, \end{equation*} where $ R^{\lambda \mu}$ is the Ricci tensor corresponding to this metric. Thus, a solution of the system $\mathcal{G}_{\alpha \beta}=0$ verifies the four equations \begin{equation*} \sum_{\lambda=1}^4 \nabla_\lambda \bigg{(} L^{\lambda \mu} - \frac{1}{2} (g^{-1})^{\lambda \mu} L \bigg{)}=0, \end{equation*} where $ L^{\lambda \mu} = \sum_{\alpha, \beta=1}^4 (g^{-1})^{\alpha \lambda}(g^{-1})^{\beta \mu}L_{\alpha \beta}$ and $L = \sum_{\alpha, \beta=1}^4 (g^{-1})^{\alpha \beta}L_{\alpha \beta}$. It follows from the expression (\ref{eq:6.1}) that these equations read as \begin{equation*} \begin{split}& \frac{1}{2} \sum_{\alpha, \lambda=1}^4(g^{-1})^{\alpha \lambda} \nabla_\lambda (\partial_\alpha F^\mu) + \frac{1}{2} \sum_{\beta, \lambda=1}^4 (g^{-1})^{\beta \mu} \nabla_\lambda (\partial_\beta F^\lambda) \\ &- \frac{1}{2}\sum_{\alpha, \lambda=1}^4 (g^{-1})^{\lambda \mu} \nabla_\lambda (\partial_\alpha F^\alpha)=0, \end{split} \end{equation*} from which, by developing and simplifying, we obtain \begin{equation*} \frac{1}{2} \sum_{\alpha, \lambda=1}^4 (g^{-1})^{\alpha \lambda} \frac{\partial^2 F^\mu}{\partial x^\alpha \partial x^\lambda} + P^\mu(\partial_\alpha F^\lambda)=0, \end{equation*} where $P^\mu$ is a linear combination of the $\partial_\alpha F^\lambda$ whose coefficients are polynomials of the $(g^{-1})^{\alpha \beta}$, the $g_{\alpha \beta}$ and their first derivatives. Hence, the four quantities $F^\mu$, formed with the solutions $g_{\alpha \beta}$ of $\mathcal{G}_{\alpha \beta}=0$, verify four partial differential equations of the type previously studied. The coefficients $A^{\lambda \mu}=(g^{-1})^{\lambda \mu}$ and $f_s=P^\mu$ verify, in $D$, the assumptions of Chapter 5.
The quantities $F^\mu$ are by hypothesis vanishing on the domain $(d)$ of $x^4=0$, and we have proved that the same is true of their first derivatives $\partial_\alpha F^\mu$. \end{description} We then deduce from the uniqueness theorem that, in $D$, we have \begin{equation*} F^\mu=0, \hspace{0.5cm} {\rm and} \hspace{0.5cm} \partial_\alpha F^\mu=0. \end{equation*} Therefore, the metric components verify effectively in $D$ the conditions of isothermy and represent the potentials of an Einstein space-time, solutions of the vacuum Einstein equations $R_{\alpha \beta}=0$. \subsection{Uniqueness of the Solution} In order to prove that there exists only one exterior space-time corresponding to the initial conditions given on $S$, one has to prove that every solution of the Cauchy problem formulated in such a way with respect to the equations $R_{\alpha \beta}=0$ can be deduced by a change of coordinates from the solution of this Cauchy problem relative to the equations $\mathcal{G}_{\alpha \beta}=0$; this last solution is unique. Thus, let us consider a solution $g_{\alpha \beta}$ of the Cauchy problem relative to the equations $R_{\alpha \beta}=0$ and look for a transformation of coordinates \begin{equation*} \tilde{x}^\alpha = f^\alpha (x^\beta), \end{equation*} conserving $S$ point-wise, such that the potentials in the new system of coordinates $\tilde{g}_{\alpha \beta}$ verify the four equations \begin{equation*} \tilde{F}^\lambda =0. \end{equation*} We know that the four quantities $\tilde{F}^\lambda$ verify the identities \begin{equation*} \tilde{F}^\lambda \equiv \tilde{\Delta}_2 \tilde{x}^\lambda = \Delta_2 f^\lambda.
\end{equation*} In order for the equations $\tilde{F}^\lambda =0$ to be verified it is therefore necessary and sufficient that the functions $f^\alpha$ satisfy the equations \begin{equation} \label{eq:6.5} \Delta_2 f^\alpha \equiv \sum_{\lambda, \mu=1}^4 (g^{-1})^{\lambda \mu} \bigg{(} \frac{\partial^2 f^\alpha}{\partial x^\lambda \partial x^\mu} - \sum_{\rho=1}^4 \Gamma \{ \rho, [\lambda, \mu] \} \frac{\partial f^\alpha}{\partial x^\rho} \bigg{)}=0, \end{equation} which are second-order, linear partial differential equations, normal hyperbolic in the domain $D$. If we take as values of the functions $f^\alpha$ and of their first derivatives upon $S$ \begin{equation} \begin{split} \label{eq:6.6} & f^4=0, \hspace{1cm} \partial_\alpha f^4= \delta^4_\alpha, \\ & f^i=x^i, \hspace{1cm} \partial_\alpha f^i = \delta^i_\alpha, \end{split} \end{equation} we see that the Cauchy problems formulated in such a way admit in $D$ solutions possessing partial derivatives continuous and bounded up to the fourth order. Thus, we have defined a change of coordinates $\tilde{x}^\lambda=f^\lambda (x^\alpha)$ such that, in the new system of coordinates, the potentials $\tilde{g}_{\alpha \beta}$ verify the conditions of isothermy $\tilde{F}^\lambda=0$. It remains to prove that this change of coordinates determines in a unique way the Cauchy data $\tilde{g}_{\alpha \beta}$ and $\tilde{\partial}_4 \tilde{g}_{\alpha \beta}$ for $x^4=0$, in terms of the original data $g_{\alpha \beta}$ and $\partial_4 g_{\alpha \beta}$ for $x^4=0$.
We know that the $g_{\alpha \beta}$ are the components of a covariant rank-two tensor, \begin{equation} \label{eq:6.7} g_{\alpha \beta}= \sum_{\lambda, \mu=1}^4 \tilde{g}_{\lambda \mu} \bigg{(} \partial_\alpha f^\lambda \bigg{)} \bigg{(} \partial_\beta f^\mu \bigg{)}, \end{equation} from which, by making use of (\ref{eq:6.6}), we have \begin{equation*} g_{\alpha \beta}= \tilde{g}_{\alpha \beta}, \hspace{0.5cm} \partial_i g_{\alpha \beta}= \tilde{\partial}_i \tilde{g}_{\alpha \beta} \hspace{1cm} {\rm for}\; x^4= \tilde{x}^4= 0.\end{equation*} It remains to evaluate the derivatives of the potentials with respect to $x^4$ and $\tilde{x}^4$ for $x^4= \tilde{x}^4=0$. Since $\varphi$ is an arbitrary function of a space-time point we have \begin{equation*} \partial_4 \varphi= \sum_{\lambda=1}^4 \bigg{(} \tilde{\partial}_{\lambda} \varphi \bigg{)} \bigg{(} \partial_4 f^\lambda \bigg{)},\end{equation*} from which \begin{equation} \label{eq:6.8} \partial_4 \varphi= \tilde{\partial}_4 \varphi. \end{equation} Furthermore, we find by differentiating the equality (\ref{eq:6.7}) with respect to $x^4$ \begin{equation*} \partial_4 g_{\alpha \beta}= \sum_{\lambda, \mu=1}^4 \bigg{[} (\partial_4 \tilde{g}_{\lambda \mu})(\partial_\alpha {f}^\lambda)(\partial_\beta f^\mu) + \tilde{g}_{\lambda \mu} \bigg{(} (\partial^2_{\alpha 4} f^\lambda)(\partial_\beta f^\mu) + (\partial^2_{\beta 4} f^\mu)(\partial_\alpha f^\lambda ) \bigg{)} \bigg{]}, \end{equation*} from which \begin{equation} \label{eq:6.9} \partial_4 g_{\alpha \beta}= \partial_4 \tilde{g}_{\alpha \beta} + \sum_{\lambda=1}^4 (\tilde{g}_{\lambda \beta}) \bigg{(} \partial^2_{\alpha 4} f^\lambda \bigg{)} + \sum_{\mu=1}^4 (\tilde{g}_{\mu \alpha}) \bigg{(} \partial^2_{\beta 4} f^\mu \bigg{)}. \end{equation} We deduce also from the initial values (\ref{eq:6.6}): \begin{equation*} \partial^2_{\alpha i} f^\lambda =0.
\end{equation*} The $f^\lambda$ verify on the other hand the equations (\ref{eq:6.5}), from which \begin{equation*} (g^{-1})^{44}\partial^2_{44}f^\lambda= \sum_{\alpha, \beta=1}^4 (g^{-1})^{\alpha \beta} \Gamma \{ \lambda, [\alpha, \beta] \}. \end{equation*} Hence, $\partial^2_{44}f^\lambda$ is determined in a unique way by the original Cauchy data; the same is then true of $\partial_4 \tilde{g}_{\alpha \beta}$ for $x^4=0$. Thus, we have \begin{thm} Once a solution $g_{\alpha \beta}$ of the Cauchy problem is given relative to the equations $R_{\alpha \beta}=0$, with the initial data satisfying upon $S$ the stated assumptions, there exists a change of coordinates, conserving $S$ point-wise, such that the potentials $\tilde{g}_{\alpha \beta}$ in the new system of coordinates verify everywhere the conditions of isothermy and represent the unique solution of a Cauchy problem, determined in a unique way, relative to the equations $\mathcal{G}_{\alpha \beta}=0$. \end{thm} Therefore, we can conclude that, in gravitational physics: \begin{thm} There exists one and only one exterior space-time corresponding to the initial conditions assigned upon $S$. \end{thm} Once we have proved that there exists a unique solution of the Cauchy Problem for the Einstein equations, we will proceed, in the next part of this Chapter, with the study of the causal structure of space-time. \section{Causal Structure of Space-Time} Given a space-time, from a physical point of view it would seem reasonable to suppose that there is a local thermodynamic arrow of time defined continuously at every point of it, but for our purpose we shall only require that it should be possible to define continuously a division of the non-spacelike vectors into two classes, which we arbitrarily label future-directed and past-directed. If this is the case, we shall say that space-time is $\textit{time-orientable}$.
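As an elementary illustration of time-orientability (Minkowski space, with $x^4$ as the time coordinate, in the signature conventions used in this Chapter): for a non-vanishing, non-spacelike vector $V$ one necessarily has $V^4 \neq 0$, and one may call $V$ future-directed when $$V^4 > 0,$$ past-directed when $V^4 < 0$. This labelling varies continuously with the point, so Minkowski space is time-orientable; a time-orientable space-time is precisely one on which such a continuous global division of the non-spacelike vectors can be made.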
Thus, following Hawking-Ellis \cite{hawking1973large}, by letting $(M, g)$ be a space-time which is time-orientable in this sense, and given two sets $\mathcal{L}, \mathcal{U} \subset M$, we can give the following definitions: The $\textit{chronological future}$ $I^+(\mathcal{L}, \mathcal{U})$ of $\mathcal{L}$ relative to $\mathcal{U}$ is the set of all points in $\mathcal{U}$ which can be reached from $\mathcal{L}$ by a future-directed timelike curve in $\mathcal{U}$. We shall denote $I^+(\mathcal{L}, M)$ by $I^+(\mathcal{L})$; it is an open set, since if $p \in M$ can be reached by a future-directed timelike curve from $\mathcal{L}$, then there is a small neighbourhood of $p$ which can be so reached. Hence, if $p \in M$ we can define: \begin{equation} \label{eq:6.10} I^{+} (p) \equiv \{ q \in M : p \ll q \}, \end{equation} i.e. $I^{+}(p)$ is the set of all points $q$ of $M$ such that there is a future-directed timelike curve from $p$ to $q$. Similarly, one defines the $\textit{chronological past}$ of $p$, \begin{equation} \label{eq:6.11} I^{-}(p) \equiv \{ q \in M: q \ll p \}. \end{equation} The $\textit{causal future}$ of $\mathcal{L}$ relative to $\mathcal{U}$ is denoted by $J^+(\mathcal{L}, \mathcal{U})$ and is defined as the union of $\mathcal{L} \cap \mathcal{U}$ with the set of all points in $\mathcal{U}$ which can be reached from $\mathcal{L}$ by a future-directed non-spacelike curve in $\mathcal{U}$. We denote $J^+(\mathcal{L}, M)$ by $J^+(\mathcal{L})$; it is the region of space-time which can be causally affected by events in $\mathcal{L}$. It is not necessarily a closed set even when $\mathcal{L}$ is a single point.
Therefore, if $p \in M$ we can define \begin{equation} \label{eq:6.12} J^{+}(p) \equiv \{ q \in M: p \leq q \}, \end{equation} and similarly for the $\textit{causal past}$ \begin{equation} \label{eq:6.13} J^{-}(p) \equiv \{ q \in M: q \leq p \}, \end{equation} where $a \leq b$ means that there exists a future-directed non-spacelike curve from $a$ to $b$. \begin{figure} \centering \includegraphics{fig6mod2.png} \caption{When a point has been removed from Minkowski space, the causal future $J^+(\mathcal{L})$ of a closed set $\mathcal{L}$ is not necessarily closed. Further parts of the boundary of the future of $\mathcal{L}$ may be generated by null geodesic segments which have no past endpoints in $M$.}\label{fig:6} \end{figure} A non-spacelike curve between two points which is not a null geodesic can be deformed into a timelike curve between the same two points. Thus, if $\mathcal{U}$ is an open set and $p$, $q$ and $r \in \mathcal{U}$, then \begin{equation*} \left\{ \begin{array}{l} q \in J^+(p, \mathcal{U}), r \in I^+(q, \mathcal{U}) \\ q \in I^+(p, \mathcal{U}), r \in J^+(q, \mathcal{U}) \end{array}\right\} \end{equation*} both imply $r \in I^+(p, \mathcal{U})$. From this it follows that $\overline{I^+}(p, \mathcal{U})= \overline{J^+}(p, \mathcal{U})$ and $\dot{I^+}(p, \mathcal{U})= \dot{J^+}(p, \mathcal{U})$, where for any set $I$, $\bar{I}$ is the $\textit{closure}$ of $I$ and $\dot{I} \equiv \overline{I} \cap \overline{(M - I)}$ denotes the $\textit{boundary}$ of $I$. The example of figure \ref{fig:6} illustrates a useful technique for constructing space-times with given causal properties: one starts with some simple space-time, such as Minkowski space, cuts out any closed set and, if desired, pastes it together again in an appropriate way. The result is still a manifold with a Lorentz metric and therefore still a space-time, even though it may look incomplete where the points have been cut out.
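These notions can be made completely explicit in two-dimensional Minkowski space (a standard illustrative case, used here only as an example): for $M = \mathbb{R}^2$ with metric $ds^2 = (dx^4)^2 - (dx^1)^2$ and $p = (x^1_0, x^4_0)$, $$I^{+}(p) = \{ (x^1, x^4) : x^4 - x^4_0 > |x^1 - x^1_0| \}, \hspace{1cm} J^{+}(p) = \{ (x^1, x^4) : x^4 - x^4_0 \geq |x^1 - x^1_0| \},$$ so that $I^+(p)$ is open, $J^+(p)$ is closed and $J^{+}(p) = \overline{I^{+}}(p)$. If now a single point of one of the boundary null rays is removed from $M$, the portion of that ray lying beyond the removed point can no longer be reached from $p$ by any non-spacelike curve, while it still consists of limit points of $I^+(p)$: the causal future $J^+(p)$ is then no longer closed, which is the phenomenon illustrated in figure \ref{fig:6}.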
This incompleteness can be resolved by a conformal transformation which sends the cut out points to infinity. For our purpose, we give a few more definitions. \begin{defn} The $\textit{future horismos}$ of $\mathcal{L}$ relative to $\mathcal{U}$, denoted by $E^+(\mathcal{L}, \mathcal{U})$, is defined as \begin{equation} E^+(\mathcal{L}, \mathcal{U}) \equiv J^+(\mathcal{L}, \mathcal{U}) - I^+(\mathcal{L}, \mathcal{U}); \end{equation} we write $E^+(\mathcal{L})$ for $E^+(\mathcal{L}, M)$. \end{defn} If $\mathcal{U}$ is an open set, points of $E^+(\mathcal{L}, \mathcal{U})$ must lie on future-directed null geodesics from $\mathcal{L}$. Similarly, we can define the $\textit{past horismos}$ $E^-(\mathcal{L}, \mathcal{U})$. \begin{defn} A point $p$ is a $\textit{future endpoint}$ of a future-directed non-spacelike curve $\lambda: F \rightarrow M$ if for every neighbourhood $V$ of $p$ there is a $t \in F$ such that $\lambda(t_1) \in V$ for every $t_1 \in F$ with $t_1 \geq t$. \end{defn} \begin{defn} A non-spacelike curve is future-inextendible in a set $\mathcal{L}$ if it has no future endpoint in $\mathcal{L}$. \end{defn} At this stage, to derive the properties of the boundaries, we introduce the concepts of achronal and future sets. \begin{defn} A set $\mathcal{L}$ is said to be $\textit{achronal}$ if $I^+(\mathcal{L}) \cap \mathcal{L}$ is empty, in other words if no two points of $\mathcal{L}$ can be joined by a timelike curve. \end{defn} \begin{defn} A set $\mathcal{L}$ is said to be a $\textit{future set}$ if $I^+(\mathcal{L}) \subset \mathcal{L}$. If $\mathcal{L}$ is a future set, $M - \mathcal{L}$ is a $\textit{past set}$. \end{defn} Examples of future sets include $I^+(\mathcal{N})$ and $J^+(\mathcal{N})$ where $\mathcal{N}$ is any set. The causal structure of $(M, g)$ is the collection of past and future sets at all points of $M$ together with their properties, as illustrated in figure \ref{fig:6}.
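A concrete example (again in Minkowski space, and given only for illustration): the set $\mathcal{L} = I^+(p)$ of a point $p$ is a future set, since $I^+(I^+(p)) \subset I^+(p)$; its boundary is the future null cone of $p$, $$\dot{\mathcal{L}} = \{ q : q \in J^+(p), \; q \not\in I^+(p) \} = E^+(p),$$ a closed, achronal, three-dimensional hypersurface, smooth away from the vertex $p$ and generated by the future-directed null geodesics issued from $p$.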
\begin{prop} \label{prop:6.2} If $\mathcal{L}$ is a future set then $\dot{\mathcal{L}}$ is a closed, imbedded, achronal three-dimensional $C^1$ submanifold. \end{prop} We shall call a set with the properties of $\dot{\mathcal{L}}$ an $\textit{achronal boundary}$. Such a set can be divided into four disjoint subsets $\dot{\mathcal{L}}_N$, $\dot{\mathcal{L}}_+$, $\dot{\mathcal{L}}_-$ and $\dot{\mathcal{L}}_0$. For a point $q \in \dot{\mathcal{L}}$ there may or may not exist points $p, r \in \dot{\mathcal{L}}$ with $p \in E^-(q)-q$, $r \in E^+(q)-q$. The different possibilities define the subsets of $\dot{\mathcal{L}}$ according to the scheme in table \ref{table:1}. \begin{table} \caption{Scheme of the subsets of $\dot{\mathcal{L}}$; see the discussion below.} \label{table:1} \center $q \in$ \begin{tabular}{|c|c|c|} \hline $ \exists p$ & $\not \exists p$ & $\null$ \\ \hline $\dot{\mathcal{L}}_N$ & $\dot{\mathcal{L}}_+$ & $\exists r$ \\ \hline $\dot{\mathcal{L}}_-$ & $\dot{\mathcal{L}}_0$ & $\not \exists r$ \\ \hline \end{tabular} \end{table} If $q \in \dot{\mathcal{L}}_N$, then $r \in E^+(p)$ since $r \in J^+(p)$ and $r \not \in I^+(p)$. This means that there is a null geodesic segment in $\dot{\mathcal{L}}$ through $q$. If $q \in \dot{\mathcal{L}}_+$ (respectively $\dot{\mathcal{L}}_-$), then $q$ is the future (past) endpoint of a null geodesic in $\dot{\mathcal{L}}$. The subset $\dot{\mathcal{L}}_0$ is spacelike. A useful condition for a point to lie in $\dot{\mathcal{L}}_N$, $\dot{\mathcal{L}}_+$ or $\dot{\mathcal{L}}_-$ is given by the following lemma due to Penrose: \begin{lem} Let $W$ be a neighbourhood of $q \in \dot{\mathcal{L}}$, where $\mathcal{L}$ is a future set. Then \begin{description} \item[(i)] $I^+(q) \subset I^+ (\mathcal{L} - W)$ implies $q \in \dot{\mathcal{L}}_N \cup \dot{\mathcal{L}}_+$, \item[(ii)] $I^-(q) \subset I^- (M - \mathcal{L} - W)$ implies $q \in \dot{\mathcal{L}}_N \cup \dot{\mathcal{L}}_-$.
\end{description} \end{lem} An example is given by $\dot{J}^+(K)= \dot{I}^+(K)$, that is, the boundary of the future of a closed set $K$. It is an achronal manifold and, by the above lemma, every point of $\dot{J}^+(K) - K$ belongs to $[\dot{J}^+(K)]_N$ or $[\dot{J}^+(K)]_+$. This means that $\dot{J}^+(K) - K$ is generated by null geodesic segments which may have future endpoints in $\dot{J}^+(K) - K$ but which, if they do have past endpoints, can have them only on $K$ itself. We shall say that an open set $\mathcal{U}$ is $\textit{causally simple}$ if for every compact set $K \subset \mathcal{U}$, \begin{equation*} \dot{J}^+(K) \cap \mathcal{U} = E^+(K) \cap \mathcal{U} \hspace{0.5cm} {\rm and} \hspace{0.5cm} \dot{J}^-(K) \cap \mathcal{U} = E^-(K) \cap \mathcal{U}. \end{equation*} This is equivalent to saying that $J^+(K)$ and $J^-(K)$ are closed in $\mathcal{U}$. \subsection{Causality conditions} Since causality has so far been imposed only locally, the global question is left open. Thus, we have not ruled out the possibility that on a large scale there might be closed timelike curves. However, the existence of such curves would seem to lead to the possibility of logical paradoxes. Thus, we are more ready to believe that space-time satisfies the $\textit{chronology condition}$, i.e. that there are no closed timelike curves. However, we must bear in mind the possibility that there might be points of space-time at which this condition does not hold. The set of all such points will be called the $\textit{chronology violating}$ set of $M$, and it is characterized as follows: \begin{prop} The chronology violating set of $M$ is the disjoint union of sets of the form $I^+(q) \cap I^-(q)$, with $q \in M$. \end{prop} \begin{prop} If $M$ is compact, the chronology violating set of $M$ is non-empty. \end{prop} From this result it would seem reasonable to assume that space-time is non-compact.
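A standard example showing that the last proposition has real content (given here purely for illustration): take the flat Lorentzian torus obtained from two-dimensional Minkowski space, with metric $ds^2 = (dx^4)^2 - (dx^1)^2$, by identifying $x^4$ with $x^4 + 1$ and $x^1$ with $x^1 + 1$. This space-time is compact, and the curves $x^1 = {\rm const}$ are closed timelike curves through every point, so that $I^+(q) \cap I^-(q) = M$ for every $q \in M$: the chronology violating set is the whole of $M$, in accordance with the proposition.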
Another argument against compactness is that any compact, four-dimensional manifold on which there is a Lorentz metric cannot be simply connected. Thus, a compact space-time is really a non-compact manifold in which points have been identified. It would seem physically reasonable to regard the covering manifold as representing space-time. We shall say that the $\textit{causality condition}$ holds if there are no closed non-spacelike curves. \begin{prop} The set of points at which the causality condition does not hold is the disjoint union of sets of the form $J^-(q) \cap J^+(q)$, with $q \in M$. \end{prop} In particular, if the causality condition is violated at $q \in M$ but the chronology condition holds, there must be a closed null geodesic curve $\gamma$ through $q$. For physically realistic solutions, the causality and chronology conditions are equivalent. It would seem reasonable also to exclude situations in which there are non-spacelike curves which return arbitrarily close to their point of origin, or which pass arbitrarily close to other non-spacelike curves which in turn pass arbitrarily close to the origin of the first curve, and so on. This leads to a hierarchy of progressively stronger causality conditions, of which we shall describe the first three. \begin{defn}The $\textit{future distinguishing condition}$ is said to hold at $p \in M$ if every neighbourhood of $p$ contains a neighbourhood of $p$ which no future-directed non-spacelike curve from $p$ intersects more than once. An equivalent statement is that $I^+(q)=I^+(p)$ implies $q=p$. \end{defn} Similarly, it is possible to define the $\textit{past distinguishing condition}$ by exchanging the future with the past in the previous definition. \begin{defn}The $\textit{strong causality condition}$ is said to hold at $p$ if every neighbourhood of $p$ contains a neighbourhood of $p$ which no non-spacelike curve intersects more than once. \end{defn} Another definition of strong causality can be given, following Penrose, if we exclude the null curves.
It is defined as follows: \begin{defn} $\textit{Strong causality}$ holds at $p \in M$ if arbitrarily small neighbourhoods of $p$ exist which each intersect no timelike curve in a disconnected set. \end{defn} \begin{cor} The past and the future distinguishing conditions also hold on $M$, since they are implied by strong causality. \end{cor} Closely related to these three higher-degree causality conditions is the phenomenon of $\textit{imprisonment}$. A non-spacelike curve $\lambda$ that is future-inextendible can do one of three things as one follows it to the future. It can \begin{description} \item[(i)] enter and remain within a compact set $\mathcal{L}$; \item[(ii)] not remain within any compact set $\mathcal{L}$ but continually re-enter a compact set $\mathcal{L}$; \item[(iii)] not remain within any compact set $\mathcal{L}$ and not re-enter any such set more than a finite number of times. \end{description} In the third case, $\lambda$ can be thought of as going off to the edge of space-time, that is, either to infinity or to a singularity. In the first and second cases we shall say that $\lambda$ is $\textit{totally}$ and $\textit{partially future imprisoned}$ in $\mathcal{L}$, respectively. Furthermore, we have the following result: \begin{prop} \label{prop:6.6} If the strong causality condition holds on a compact set $\mathcal{L}$, there can be no future-inextendible non-spacelike curve totally or partially future imprisoned in $\mathcal{L}$. \end{prop} and \begin{prop} If the future or past distinguishing condition holds on a compact set $\mathcal{L}$, there can be no future-inextendible non-spacelike curve totally future imprisoned in $\mathcal{L}$. \end{prop} The causal relations on $(M, g)$ may be used to put a topology on $M$ called the $\textit{Alexandrov topology}$. \begin{defn} The Alexandrov topology is a topology in which a set is defined to be open if and only if it is the union of one or more sets of the form $I^+(p) \cap I^-(q)$, with $p, q \in M$.
\end{defn} As $I^+(p) \cap I^-(q)$ is open in the manifold topology, any set which is open in the Alexandrov topology will be open in the manifold topology, though the converse is not necessarily true. \begin{thm}The following three requirements on a space-time $(M,g)$ are equivalent: \begin{description} \item[(1)] $(M,g)$ is strongly causal; \item[(2)] the Alexandrov topology agrees with the manifold topology; \item[(3)] the Alexandrov topology is Hausdorff. \end{description} \end{thm} Suppose now that the strong causality condition holds on $M$. Then, about any point $p \in M$ one can find a local causality neighbourhood $\mathcal{U}$. The Alexandrov topology of $(\mathcal{U}, {g|}_{\mathcal{U}})$, regarded as a space-time in its own right, is the same as the manifold topology of $\mathcal{U}$. Thus the Alexandrov topology of $M$ is the same as the manifold topology, since $M$ can be covered by local causality neighbourhoods. This means that if strong causality holds, one can determine the topological structure of space-time by observation of causal relationships. However, even the imposition of the strong causality condition does not rule out all causal pathologies: it does not ensure that space-time is not just about to violate the chronology condition. Thus, in order to be physically significant, a property of space-time ought to have some form of stability. The situation is considerably improved if the $\textit{stable causality condition}$ holds. To define this concept properly, one has to define a topology on the set of all space-times, that is, all non-compact four-dimensional manifolds and all Lorentz metrics on them. Essentially, three topologies seem of major interest: the compact-open topology, the open topology and the fine topology. \begin{description} \item[(1)] $\textbf{Compact-Open Topology}$ $\forall n=0, 1, \ldots, r$, let $\epsilon_n$ be a set of continuous positive functions on $M$, $\mathcal{U} \subset M$ a compact set and $g$ the Lorentz metric under study.
We then define $G(\mathcal{U}, \epsilon_n, g)$ as the set of all Lorentz metrics $\tilde{g}$ such that \begin{equation} |g - \tilde{g}|_n < \epsilon_n \; {\rm on} \; \mathcal{U} \; \forall n, \end{equation} where \begin{eqnarray*} \; & \; & | g - \tilde{g}|_n \nonumber \\ & \equiv & \sqrt{\sum\limits_{a_i, b_j, r, s, u, v} \bigg{[} \nabla_{a_1} ...\nabla_{a_n} (g_{rs} - \tilde{g}_{rs}) \bigg{]}\bigg{[} \nabla_{b_1} ...\nabla_{b_n} (g_{uv} - \tilde{g}_{uv}) \bigg{]} h^{a_1 b_1}...h^{a_n b_n}\, h^{ru}\, h^{sv}}, \end{eqnarray*} where $\nabla_a$ is the covariant derivative operator on $M$ and $\sum_{a,b=1}^4 h_{ab} dx^a \otimes dx^b$ is any positive-definite metric on $M$. In the compact-open topology, open sets are obtained from the $G(\mathcal{U}, \epsilon_i, g)$ through the operations of arbitrary union and finite intersection. \item[(2)] $\textbf{Open Topology}$ We no longer require $\mathcal{U}$ to be compact, and we take $\mathcal{U}=M$ in the construction of item (1). \item[(3)] $\textbf{Fine Topology}$ We define $H(\mathcal{U}, \epsilon_i, g)$ as the set of all Lorentz metrics $\tilde{g}$ such that \begin{equation} |g - \tilde{g}|_i < \epsilon_i, \end{equation} and $\tilde{g}=g$ outside the compact set $\mathcal{U}$. Moreover, we set $G'(\epsilon_i, g) = \bigcup_{\mathcal{U}} H(\mathcal{U}, \epsilon_i, g)$, the union being taken over the compact sets $\mathcal{U}$. A sub-basis for the fine topology is then given by the neighbourhoods $G'(\epsilon_i, g)$. \end{description} Now, the underlying idea of stable causality is that space-time must not contain closed timelike curves, and that we should still fail to find closed timelike curves if we open out the null cones slightly. Thus, for our purpose, we are interested in the $C^0$ open topology. \begin{defn} The stable causality condition holds on $M$ if the space-time metric $g$ has an open neighbourhood in the $C^0$ open topology such that there are no closed timelike curves in any metric belonging to the neighbourhood.
\end{defn} In other words, this condition means that one can expand the light cones slightly at every point without introducing closed timelike curves. The Minkowski, Friedmann-Robertson-Walker, Schwarzschild and Reissner-Nordström space-times are all stably causal. If the stable causality condition holds, the differentiable and conformal structure can be determined from the causal structure, and space-time cannot be compact (because in a compact space-time there exist closed timelike curves). A very important characterization of stable causality is given by the following proposition: \begin{prop} The stable causality condition holds everywhere on $M$ if and only if there is a function $f$ on $M$ whose gradient is everywhere timelike. \end{prop} The function $f$ can be thought of as a sort of $\textit{cosmic time}$, in the sense that it increases along every future-directed non-spacelike curve. Moreover, if the stable causality condition holds, one can find a family of $C^r$ Lorentz metrics $h(a)$, with $a \in [0, 3]$, such that \cite{hawking1973large}: \begin{description} \item[(1)] $h(0)$ is the space-time metric $g$; \item[(2)] there are no closed timelike curves in the metric $h(a)$ for each $a \in [0, 3]$; \item[(3)] if $a_1$, $a_2 \in [0, 3]$ with $a_1 < a_2$, then every non-spacelike vector in the metric $h(a_1)$ is timelike in the metric $h(a_2)$. \end{description} \subsection{Cauchy developments} In Newtonian theory there is instantaneous action-at-a-distance, and hence in order to predict events at future points in space-time one has to know the state of the entire universe at the present time, and also to assume boundary conditions at infinity, such as that the potential goes to zero. On the other hand, in relativity theory, events at different points of space-time can be causally related only if they can be joined by a non-spacelike curve.
Thus a knowledge of the appropriate data on a closed set $\mathcal{L}$ would determine events in a region $D^+(\mathcal{L})$ to the future of $\mathcal{L}$, called the $\textit{future Cauchy}$ $\textit{development}$ or $\textit{domain of dependence}$ of $\mathcal{L}$, which is defined as the set of all points $p\in M$ such that every past-inextendible non-spacelike curve through $p$ intersects $\mathcal{L}$. Similarly, the $\textit{past Cauchy development}$, $D^-(\mathcal{L})$, is defined by exchanging the past with the future in the previous definition. The $\textit{total Cauchy development}$ is given by $D(\mathcal{L})=D^+(\mathcal{L}) \cup D^-(\mathcal{L})$. Penrose defines the Cauchy development of $\mathcal{L}$ slightly differently, as the set of all points $p \in M$ such that every past-inextendible timelike curve through $p$ intersects $\mathcal{L}$. We shall denote this set $\tilde{D}^+(\mathcal{L})$. One then has $\tilde{D}^+(\mathcal{L}) = \overline{D^+(\mathcal{L})}$. The future boundary of ${D}^+(\mathcal{L})$, that is $\overline{D^+(\mathcal{L})} - I^-({D}^+(\mathcal{L}))$, marks the limit of the region that can be predicted from knowledge of data on $\mathcal{L}$. We call this closed achronal set the $\textit{future Cauchy horizon}$ of $\mathcal{L}$ and denote it by $H^+(\mathcal{L})$. \begin{defn}The future Cauchy horizon $H^+(\mathcal{L})$ of $\mathcal{L}$ is given by \begin{equation} H^+(\mathcal{L}) \equiv \{ p: p \in \overline{D^+(\mathcal{L})},\; I^+(p) \cap D^+(\mathcal{L})= \emptyset \}. \end{equation} Similarly, the past Cauchy horizon $H^-(\mathcal{L})$ is defined as \begin{equation} H^-(\mathcal{L}) \equiv \{ p: p \in \overline{D^-(\mathcal{L})},\; I^-(p) \cap D^-(\mathcal{L})= \emptyset \}. \end{equation} \end{defn} The future Cauchy horizon of $\mathcal{L}$ will intersect $\mathcal{L}$ if $\mathcal{L}$ is null or if $\mathcal{L}$ has an edge. To make this precise we define ${\rm edge}(\mathcal{L})$ as follows.
\begin{defn} The edge($\mathcal{L}$) for an achronal set $\mathcal{L}$ is the set of all points $q \in \bar{\mathcal{L}}$ such that in every neighbourhood $\mathcal{U}$ of $q$ there are points $p \in I^-(q, \mathcal{U})$ and $r \in I^+(q, \mathcal{U})$ which can be joined by a timelike curve in $\mathcal{U}$ which does not intersect $\mathcal{L}$. \end{defn} It follows that if edge($\mathcal{L}$) is empty for a non-empty achronal set $\mathcal{L}$, then $\mathcal{L}$ is a three-dimensional imbedded $C^1$-submanifold. \begin{prop} For a closed achronal set $\mathcal{L}$, \begin{equation*} {\rm edge}(H^+(\mathcal{L}))={\rm edge}(\mathcal{L}). \end{equation*} \end{prop} \subsection{Global Hyperbolicity} Closely related to Cauchy developments is the property of global hyperbolicity. The notion of $\textit{global hyperbolicity}$ was introduced by Leray in order to deal with questions of existence and uniqueness of solutions of hyperbolic differential equations on a manifold. It plays a key role in developing a rigorous theory of geodesics in Lorentzian geometry and in proving singularity theorems, and its ultimate meaning can be seen as requiring the existence of Cauchy surfaces, i.e. spacelike hypersurfaces which each non-spacelike curve intersects exactly once. We shall here follow Geroch \cite{geroch1970domain} and Hawking-Ellis \cite{hawking1973large}, defining, and in part proving, what follows. \begin{defn} A space-time $(M, g)$ is said to be globally hyperbolic if \begin{description} \item[(1)]the strong causality assumption holds on $(M, g)$; \item[(2)]for any two points $p$, $q \in M$, $J^+(p) \cap J^-(q)$ is compact and contained in $M$. \end{description} \end{defn} Condition $\textbf{(2)}$ can be thought of as saying that $J^+(p) \cap J^-(q)$ does not contain any points on the edge of space-time, i.e. at infinity or at a singularity.
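As a concrete check of condition (2), added here as our own illustration in flat space: Minkowski space is globally hyperbolic, while deleting a single point destroys the property.

```latex
% In Minkowski space, for q \in J^+(p), the set
\[
  J^+(p) \cap J^-(q)
\]
% is a closed double cone ("causal diamond"): it is closed and bounded in
% \mathbb{R}^4, hence compact, and strong causality clearly holds, so
% Minkowski space is globally hyperbolic. If instead one works in
% M' = M - \{r\}, with r a point in the interior of the diamond, then
\[
  J^+(p) \cap J^-(q) = ({\rm diamond}) - \{r\},
\]
% which is no longer compact: a sequence of points converging to the deleted
% point r has no limit in M'. Condition (2) fails precisely because the
% diamond now contains a "point on the edge of space-time".
```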
The reason for the nomenclature $\textit{global hyperbolicity}$ is that on $M$, the wave equation for a $\delta$-function source at $p \in M$ has a unique solution which vanishes outside $J^+(p, M)$. Recall that $M$ is said to be causally simple if, for every compact set $\mathcal{K}$ contained in $M$, $J^+(\mathcal{K}) \cap M$ and $J^-(\mathcal{K}) \cap M$ are closed in $M$. \begin{prop} An open globally hyperbolic set $M$ is causally simple. \end{prop} Leray did not give the above definition of global hyperbolicity but an equivalent one, which rests on the following notion. \begin{defn} Given two points $p$, $q \in M$ such that strong causality holds on $J^+(p) \cap J^-(q)$, we define $C(p, q)$ to be the space of all non-spacelike curves from $p$ to $q$, regarding two curves $\gamma(t)$ and $\lambda(u)$ as representing the same point of $C(p,q)$ if one is a reparametrization of the other, i.e. if there exists a continuous monotonic function $f(u)$ such that $\gamma(f(u))=\lambda(u)$. \end{defn} The topology of $C(p, q)$ is defined by saying that a neighbourhood of $\gamma$ in $C(p, q)$ consists of all curves in $C(p, q)$ whose points in $M$ lie in a neighbourhood $W$ of the points of $\gamma$ in $M$. Leray's definition is that $M$ is globally hyperbolic if $C(p, q)$ is compact for all $p$, $q \in M$. These definitions are equivalent, as shown by condition $\textbf{(2)}$ of the following theorem. \begin{thm}In a globally hyperbolic space-time $(M, g)$, the following properties hold: \begin{description} \item[(1)] $J^+(p)$ and $J^-(p)$ are closed $\forall p \in M$; \item[(2)] strong causality holds on $M$, with \begin{equation*} M = J^-(M) \cap J^+(M), \end{equation*} and, $\forall p, q \in M$, the space $C(p, q)$ of all non-spacelike curves from $p$ to $q$ is compact in the topology just described; \item[(3)] there exist Cauchy surfaces.
\end{description} \end{thm} \begin{proof}[$\textbf{Proof (1)}$] If $(X, \mathcal{F})$ is a Hausdorff space and $A \subset X$ is compact, then $A$ is closed. In our case, this implies that $J^+(p) \cap J^-(q)$ is closed. Moreover, it is not difficult to see that $J^+(p)$ itself must be closed. Indeed, otherwise we could find a point $r \in \overline{J^+(p)}$ such that $r \not \in J^+(p)$. Let us now choose $q \in I^+(r)$. We would then have $r \in \overline{J^+(p) \cap J^-(q)}$ but $r \not \in J^+(p) \cap J^-(q)$, which implies that $J^+(p) \cap J^-(q)$ is not closed, contradicting what we found before. Similarly, one proves that $J^-(p)$ is closed. \end{proof} \begin{proof}[$\textbf{Proof (2)}$] Suppose that $C(p, q)$ is compact. Let $r_n$ be an infinite sequence of points in $J^+(p) \cap J^-(q)$ and let $\lambda_n$ be a sequence of non-spacelike curves from $p$ to $q$ through the corresponding $r_n$. As $C(p, q)$ is compact, there will be a curve $\lambda$ to which some subsequence $\lambda'_n$ converges in the topology on $C(p, q)$. Let $\mathcal{U}$ be a neighbourhood of $\lambda$ in $M$ such that $\bar{\mathcal{U}}$ is compact. Then $\mathcal{U}$ will contain all $\lambda'_n$, and hence all the corresponding points $r'_n$, for $n$ sufficiently large, and so there will be a point $r \in \mathcal{U}$ which is a limit point of the $r'_n$. Clearly $r$ lies on $\lambda$. Thus, every infinite sequence in $J^+(p) \cap J^-(q)$ has a limit point in $J^+(p) \cap J^-(q)$. Therefore, $J^+(p) \cap J^-(q)$ is compact. Conversely, suppose $J^+(p) \cap J^-(q)$ is compact. Let $\lambda_n$ be an infinite sequence of non-spacelike curves from $p$ to $q$. A lemma (see \cite{hawking1973large}) assures that, given an open set, in our case $M - q$, there will be a future-directed non-spacelike curve $\lambda$ from $p$ which is inextendible in $M- q$, and such that there is a subsequence $\lambda'_n$ which converges to $r$ for every $r \in \lambda$.
The curve $\lambda$ must have a future endpoint at $q$ since, by proposition (\ref{prop:6.6}), it cannot be totally future imprisoned in the compact set $J^+(p) \cap J^-(q)$, and it cannot leave the set except at $q$. Let $\mathcal{U}$ be any neighbourhood of $\lambda$ in $M$ and let $r_i$, with $1 \leq i \leq k$, be a finite set of points on $\lambda$ such that $r_1=p$, $r_k=q$ and each $r_i$ has a neighbourhood $V_i$ with $J^+(V_i) \cap J^-(V_{i+1})$ contained in $\mathcal{U}$. Then, for sufficiently large $n$, $\lambda'_n$ will be contained in $\mathcal{U}$. Thus, $\lambda'_n$ converges to $\lambda$ in the topology on $C(p, q)$, and so $C(p, q)$ is compact. \end{proof} \begin{proof}[$\textbf{Proof (3)}$] We put a measure $\mu$ on $M$ such that the total volume of $M$ in this measure is equal to 1. For $p \in M$, we define $f^+ : p \in M \rightarrow V$ to be the volume $V$ of $J^+(p, M)$ in the measure $\mu$. Clearly, $f^+(p)$ is a bounded function on $M$ which decreases along every future-directed non-spacelike curve. We shall show that global hyperbolicity implies that $f^+(p)$ is continuous on $M$. To do this, it will be sufficient to show that $f^+(p)$ is continuous on any non-spacelike curve $\lambda$. Let $r \in \lambda$ and let $x_n$ be an infinite sequence of points on $\lambda$ strictly to the past of $r$. Let $T\equiv \cap_n J^+(x_n, M)$. Suppose that $f^+(p)$ were not upper semi-continuous on $\lambda$ at $r$. Then there would be a point $q \in T - J^+(r, M)$. Then $r \not \in J^-(q, M)$; but each $x_n \in J^-(q, M)$, and so $r \in \overline{J^-(q, M)}$, which is impossible as $J^-(q, M)$ is closed in $M$. The proof that $f^+(p)$ is lower semi-continuous is similar. As $p$ is moved to the future along an inextendible non-spacelike curve $\lambda$ in $M$, the value of $f^+(p)$ must tend to zero. For suppose there were some point $q$ which lies to the future of every point of $\lambda$.
Then the future-directed curve $\lambda$ would enter and remain within the compact set $J^+(r) \cap J^-(q)$ for every $r \in \lambda$, which would be impossible, by proposition (\ref{prop:6.6}), as the strong causality condition holds on $M$. It then becomes trivial to prove the continuity of the function $f^+ : p \in M \rightarrow V^1$, where $V^1$ is the volume of $I^+(p, M)$. From now on, we shall mean by $f^+$ the volume function of $I^+(p, M)$, and by $f^-$ the analogous volume function of $I^-(p, M)$. Now we consider a function $f(p)$ defined on $M$ by \begin{equation*} f: p \in M \rightarrow f(p) \equiv \frac{f^-(p)}{f^+(p)}. \end{equation*} Any surface of constant $f$ will be an acausal set and, by proposition (\ref{prop:6.2}), will be a three-dimensional $C^1$-manifold imbedded in $M$. The function $f(p)$ is also continuous, and it is strictly decreasing along each past-directed timelike curve. Let $\mathcal{L}$ be the set of points at which $f=1$; since $f$ is strictly decreasing along past-directed timelike curves, $\mathcal{L}$ is achronal. To show that $\mathcal{L}$ is also a $\textit{Cauchy surface}$, we shall prove the following: \begin{prop} Let $\mathcal{L}$ be the set of points where $f=1$. If $p \in M$ is such that $f(p) >1$, then $p \in D^+(\mathcal{L})$; similarly, if $f(p) <1$, then $p \in D^-(\mathcal{L})$. \end{prop} This proposition implies that $\mathcal{L}$ is indeed a Cauchy surface. To prove it, we consider any past-directed timelike curve $\gamma$ without past endpoint from $p$. In view of the continuity of $f$, such a curve $\gamma$ must intersect $\mathcal{L}$, provided one can show that $f^-$ takes values arbitrarily close to zero along $\gamma$. Furthermore, given $q \in M$, we denote by $U$ a subset of $M$ such that $U \subset I^+(q)$; subsets of this form cover $M$.
Moreover, the point $q$ cannot lie in $I^-(r)$ for every $r \in \gamma$: this is forbidden by global hyperbolicity. Suppose, on the contrary, that $q \in \cap_{ r \in \gamma} I^-(r)$. Then we choose a sequence of points $\{t_i\}$ on $\gamma$ such that $t_{i+1} \in I^-(t_i)$ and such that every point of $\gamma$ lies to the past of at least one $t_i$. For each $i$, draw a timelike curve $\gamma'_i$ which: \begin{description} \item[(a)] begins at $p$; \item[(b)] coincides with $\gamma$ as far as $t_i$; \item[(c)] then continues to $q$. \end{description} Since $M$ is globally hyperbolic, this sequence has a limit curve, $\Gamma$. The limit curve evidently contains $\gamma$. But this is impossible, for $\gamma$, if it were contained in a compact causal curve from $p$ to $q$, would then have a past endpoint. Hence, there must be some point $r$ of $\gamma$ such that $q \not \in I^-(r)$, and consequently $U \not \subset I^-(r)$. Since $M$ may be covered by such $U$'s, we conclude that $f^-(r)$ approaches zero as $r$ continues into the past on $\gamma$, and, therefore, that $\gamma$ intersects $\mathcal{L}$. We have shown that every past-directed timelike curve from $p$ intersects $\mathcal{L}$, i.e., that $p \in D^+(\mathcal{L})$. Similarly, if $f(p) < 1$, then $p \in D^-(\mathcal{L})$. Hence, $\mathcal{L}$ is a $\textit{Cauchy surface}$. \end{proof} Global hyperbolicity is a stable property of space-times, i.e., arbitrary, sufficiently small variations in the metric will not destroy global hyperbolicity. The proof can be found in Geroch \cite{geroch1970domain}. A useful example of globally hyperbolic manifolds is given by the $\textit{hyperbolic Riemannian manifolds}$ \cite{lichnerowicz2018republication}. $\textbf{Example.}$ Let us consider an oriented differentiable manifold $V_n$ of dimension $n$ and class $C^{\infty}$, endowed with a volume element $\eta$, and let us introduce orthonormal frames, the elements of a principal fibre bundle $\mathcal{E}(V_n)$ over $V_n$, with structure group the Lorentz group $L(n)$.
With respect to these frames $(e_0, e_A)$, where $A=1, \ldots, (n-1)$ and $\alpha=0, 1, \ldots, (n-1)$, the metric can be written locally on an open neighbourhood as \begin{equation*} ds^2= g_{\alpha \beta} \theta^\alpha \theta^\beta, \end{equation*} where the $\theta^\alpha$ are $1$-forms and $g_{\alpha \beta}= \eta_{\alpha \beta}$, with $\eta_{\alpha \beta}=0$ for $\alpha \neq \beta$, $\eta_{00}=1$, $\eta_{AA}=-1$. We assume that the metric $ds^2$ of $V_n$ is normal hyperbolic. This metric defines in the tangent space at each point $x$ a convex cone of second order $C_x$. If $A=(A^{\lambda'}_{\alpha})$ is a matrix in $L(n)$, the time signature $\rho_{A}$ of the matrix $A$ is equal to $\pm 1$ depending on the sign of $A^{0'}_0$. A $\textit{time orientation}$ $\rho$ is defined on $V_n$, with respect to the frames $y \in \mathcal{E}(V_n)$, by an indicator $\rho_y= \pm 1$ such that, if $y=y'A$, one has \begin{equation*} \rho_y=\rho_{y'} \rho_A. \end{equation*} We shall assume that $V_n$ admits a time orientation $\rho$. A vector $e_0$, with $e^2_0=1$, is $\textit{future-oriented}$ if the component of $\rho$ with respect to the orthonormal frames $(e_0, e_A)$ is equal to 1. Similarly, a vector $e_0$ is $\textit{past-oriented}$ if the component of $\rho$ with respect to the orthonormal frames $(e_0, e_A)$ is equal to $-1$. Thus, the time orientation $\rho$ makes it possible to distinguish the half-cones of $C$, the $\textit{future half-cone}$ $C^+$ and $\textit{the past half-cone}$ $C^-$. We want to stress that an orientable hyperbolic manifold may not admit a time orientation. A timelike path of $V_n$ is a path whose tangent at every point $x$ lies within or on $C_x$.
If $\mathcal{U}$ is a set in $V_n$, the $\textit{future}$ ${I}^+(\mathcal{U})$ is the set of points on timelike paths emanating from the points $x$ of $\mathcal{U}$ and lying in the future of $x$, the $\textit{past}$ ${I}^-(\mathcal{U})$ being the set of points on timelike paths leading to the points $x$ of $\mathcal{U}$ and lying in the past of $x$. These definitions hold in particular in the case $\mathcal{U}= \{x'\}$. The emission ${I}(x')$ of a point $x'$ is the union of its future ${I}^+(x')$ and its past ${I}^-(x')$. The boundary $\partial {I}(x')$ of this emission is characteristic with respect to the field of cones, i.e. it is tangent at each of its points $x$ to the cone $C_x$. The boundary $\partial {I}(x')= \Gamma_{x'}$ is said to be the $\textit{characteristic conoid of vertex x'}$. This conoid consists of the bicharacteristics, or null geodesics, emanating from $x'$. By the use of geodesic normal coordinates centred at $x'$, one finds that, as one approaches its vertex, the conoid $\Gamma_{x'}$ is diffeomorphic to a neighbourhood of the vertex of a cone, the bicharacteristics corresponding to the generators of the cone. That is no longer so away from the vertex $x'$, even under the global assumptions made below; in particular, the null geodesics emanating from $x'$ can intersect. In the theory of hyperbolic linear systems, Leray has introduced some global assumptions which ensure the existence of elementary solutions, even in the presence of singularities of the characteristic conoid. According to Leray and Madame Choquet-Bruhat, a hyperbolic manifold $V_n$ satisfying the previous assumptions is said to be $\textit{globally hyperbolic}$ if the set of timelike paths joining two points is always either empty or compact: from every infinite set of timelike paths joining the two points, one can always extract a sequence that converges to a timelike path. If this condition is satisfied, no timelike line can ever be closed.
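As a sketch of these notions in the flat case (our own addition, using the signature $\eta_{00}=1$, $\eta_{AA}=-1$ adopted above):

```latex
% For V_4 = \mathbb{R}^4 with ds^2 = dt^2 - dx^2 - dy^2 - dz^2, the emission
% I(x') of a point x' = (t', x', y', z') is the open solid light cone, and the
% characteristic conoid of vertex x' is its boundary,
\[
  \Gamma_{x'} : \quad (t - t')^2 = (x - x')^2 + (y - y')^2 + (z - z')^2,
\]
% generated by the null geodesics (bicharacteristics)
\[
  t = t' \pm s, \qquad \vec{x} = \vec{x}\,' + s\,\vec{n}, \qquad
  |\vec{n}| = 1, \quad s \ge 0.
\]
% In flat space distinct generators meet only at the vertex x'; in a curved
% V_n they can focus and intersect again, which is why, away from its vertex,
% \Gamma_{x'} is in general no longer diffeomorphic to a cone.
```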
On a globally hyperbolic manifold, a set $\mathcal{U}$ is said to be $\textit{compact towards}$ $\textit{the past}$ if the intersection of $\mathcal{U}$ with ${I}^-(x)$ is compact or empty for all $x$; ${I}^+(\mathcal{U})$ and every closed subset of ${I}^+(\mathcal{U})$ are then also compact towards the past. Similarly, one says that $\mathcal{U}$ is $\textit{compact towards the future}$ if the intersection of $\mathcal{U}$ with ${I}^+(x)$ is compact or empty for all $x$; ${I}^-(\mathcal{U})$ and every closed subset of ${I}^-(\mathcal{U})$ are then also compact towards the future. From a fundamental lemma of Leray, it turns out that if $\mathcal{U}$ is compact towards the past and ${\mathcal{U}}'$ is compact, the intersection ${I}^+(\mathcal{U}) \cap {I}^-({\mathcal{U}}')$ is compact. Every point of a locally hyperbolic manifold admits a neighbourhood $\Omega$ which is homeomorphic to an open ball and globally hyperbolic, in such a way that the previous results hold on $\Omega$. This example is interesting because it also provides an alternative definition of the $\textit{characteristic conoid}$ to that given in the first chapters; its interest lies in the use of causal structure concepts, and hence it can be seen as more fundamental. Finally, global hyperbolicity plays a key role in proving singularity theorems because of the following proposition: \begin{prop} Let $p$ and $q$ lie in a globally hyperbolic set $M$ with $q \in J^+(p)$. Then, there exists a non-spacelike geodesic from $p$ to $q$ whose length is greater than or equal to that of any other non-spacelike curve from $p$ to $q$. \end{prop}
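The content of this last proposition can be verified directly in Minkowski space; the following computation is our own illustration.

```latex
% Take p = (0, \vec{0}) and q = (T, \vec{X}) with q \in I^+(p), i.e.
% T > |\vec{X}|. Any non-spacelike curve from p to q may be written as
% \lambda(t) = (t, \vec{x}(t)), with |\dot{\vec{x}}| \le 1, \vec{x}(0) = \vec{0}
% and \vec{x}(T) = \vec{X}. Its length (proper time) is
\[
  L[\lambda] = \int_0^T \sqrt{1 - |\dot{\vec{x}}(t)|^2}\; dt
  \;\le\; T\,\sqrt{1 - \Bigl|\frac{\vec{X}}{T}\Bigr|^2}
  = \sqrt{T^2 - |\vec{X}|^2},
\]
% where the inequality follows from Jensen's inequality applied to the concave
% function u \mapsto \sqrt{1 - |u|^2}, with equality precisely for the straight
% geodesic \vec{x}(t) = (t/T)\,\vec{X}. Thus the geodesic from p to q maximizes
% length among non-spacelike curves, as the proposition asserts in general.
```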
\chapter{Application: Green functions of Gravitational Radiation Theory} \chaptermark{Green functions of Gravitational Radiation Theory} \epigraph{The heavens and all the constellations rung, \\ The planets in their station listening stood.}{John Milton, Paradise Lost} In the previous Chapters, it has been shown how the Riemann function solves a characteristic initial-value problem. Our aim is to use this method to study gravitational radiation in black hole collisions at the speed of light; more precisely, we analyse the Green function for the perturbative field equations by studying the corresponding second-order hyperbolic operator with variable coefficients. After reducing this hyperbolic operator to canonical form, the integral representation of the solution in terms of the Riemann kernel is obtained. The study of the axisymmetric collision of two black holes at the speed of light is useful in order to understand the more realistic collision of two black holes with a large but finite incoming Lorentz factor $\gamma$. The curved radiative region of the space-time, produced after the two incoming impulsive plane-fronted shock waves have collided, is treated using perturbation theory. To proceed with the study of the Green functions of the gravitational radiation in black hole collisions at the speed of light, following D'Eath \cite{d1992gravitational1, d1992gravitational2}, we first give an introduction to its main features. \section{Black Hole Collisions at the speed of light} Since the time when general relativity was originally formulated by Einstein, no analytic solution has been found which does not possess a large number of simplifying symmetries. To study the generation of gravitational radiation by realistic physical sources it is necessary to consider isolated gravitating systems that are time dependent and which can have no simplifying features apart from axisymmetry. This can be done by making use of approximation procedures.
There are two alternatives, namely $\textit{numerical simulation}$ and $\textit{perturbation theory}$. In the latter case, one assumes that the space-time metric differs only very slightly from some fixed background. The field equations for the metric perturbations are linear at lowest order and mathematically tractable, owing to the simple nature of the background metric. However, since the time-dependent perturbations must be small, the gravitational radiation produced is almost always correspondingly weak. To deduce the behaviour of gravitating systems when the perturbations are not small, one cannot in general rely on the weak-field limit, which can provide physical insight but not quantitative results. In fact, there is only one physical process in which perturbation methods have proved successful in describing truly strong-field gravitational radiation, namely the high-speed collision of two black holes. The success of perturbation theory in these space-times is due to certain special features of their geometry. More precisely, owing to special-relativistic effects, the gravitational field of a black hole travelling close to the speed of light becomes concentrated in the vicinity of its trajectory, which lies close to a null plane in the surrounding nearly Minkowskian space-time. At precisely the speed of light, the black hole turns into a particular sort of impulsive gravitational plane-fronted wave. Then the curvature is zero except on the null plane of its trajectory, and there is a massless particle travelling along the axis of symmetry at the center of this null plane. An important property of this sort of gravitational shock wave is that geodesics crossing it are not only bent inwards, but also undergo an instantaneous translation along the null surface that describes the trajectory of the wave.
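The impulsive plane-fronted wave referred to here can be written down explicitly. The following line element is a sketch in one common convention (signs and normalizations vary between references); it is the Aichelburg-Sexl limit of a Schwarzschild black hole boosted to the speed of light with fixed energy $\mu$:

```latex
% Aichelburg-Sexl impulsive wave, with u = t - z, v = t + z and transverse
% coordinates (x, y):
\[
  ds^2 = -\,du\,dv + dx^2 + dy^2 - 4\mu \,\ln(x^2 + y^2)\,\delta(u)\,du^2 .
\]
% The curvature vanishes except on the null plane u = 0, which carries the
% delta-function profile, and the source is a massless particle of energy \mu
% at x = y = 0 on that plane. Geodesics crossing u = 0 at transverse radius
% \rho = \sqrt{x^2 + y^2} suffer an instantaneous translation in v
% proportional to \mu \ln \rho^2, together with an inward transverse
% deflection of order \mu / \rho; the logarithmic dependence of the shift on
% \rho is what delays geodesics crossing near the axis relative to those
% crossing far from it.
```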
The nature of this translation is such that geodesics crossing the shock close to the axis of symmetry are delayed relative to those which cross the shock far out from the axis. Hence, when two such waves pass through each other in a head-on collision, the far-field region of each wave is given a large head start over its near-field counterpart, in addition to being bent slightly inwards. Because of this, the self-interaction of the far field of each wave as it propagates out towards null infinity takes place without interference from the highly nonlinear region near the axis of symmetry; and because gravity is weak in the far-field region, perturbation theory can be used to study this process. However, the radiation produced in the forward and backward null directions is not weak, for although the far fields contain only a fraction of the total energy, the solid angle into which they are focused is small, and hence the energy flux per unit solid angle in these directions is not small. Thus, perturbation methods can successfully describe the generation of truly strong-field gravitational radiation in these space-times. There are two different perturbation methods that one can use to treat these high-speed collisions. In one approach, the collision is studied at large but finite $\gamma$, where $\gamma$ is the Lorentz factor of the incoming holes. It was shown that the metric of a single high-speed hole, and hence also the precollision metric in the high-speed collision, can be expressed as a perturbation series in ${\gamma}^{-1}$. Then, it is possible to use a method of matched asymptotic expansions to investigate the space-time geometry to the future of the collision. It is necessary to use a number of different asymptotic expansions to allow for the various length and time scales characteristic of the gravitational field in different parts of the space-time.
One expects that expansions holding in adjacent regions will match smoothly on to each other; the regions to the past thereby providing boundary conditions for those neighbouring regions to the future. Following this approach, it is possible to calculate the radiation on angular scales of $O({\gamma}^{-1})$ produced by the focusing of the far fields of the waves as they pass through each other during the collision. In this region the news function has an asymptotic expansion of the form \begin{equation} \label{eq:7.1} c_0(\bar{\tau}, \hat{\theta}={\gamma}^{-1}\psi) \sim \sum_{n=0}^{\infty} {\gamma}^{-2n} \mathcal{Q}_{2n}(\bar{\tau}, \psi) \end{equation} valid as $\gamma \rightarrow \infty$ with $\bar{\tau}$, $\psi$ fixed, where $\bar{\tau}$ is a suitable $\textit{retarded time}$ coordinate and $\hat{\theta}$ is the angle from the symmetry axis in the center-of-mass frame. The calculation of the leading term $\mathcal{Q}_0(\bar{\tau}, \psi)$ shows that it does not vanish and is a regular function of $\bar{\tau}$. Since $\mathcal{Q}_0(\bar{\tau}, \psi)$ is not damped by any power of ${\gamma}^{-1}$, the news function is of order 1, and therefore describes truly strong-field gravitational radiation. On angular scales of order 1, the news function should have an asymptotic expansion of the form \begin{equation} \label{eq:7.2}c_0(\hat{\tau}, \hat{\theta}) \sim \sum_{n=0}^{\infty} {\gamma}^{-n} {S}_{n}(\hat{\tau}, \hat{\theta}) \end{equation} valid as $\gamma \rightarrow \infty$ with $\hat{\tau}$, $\hat{\theta}$ fixed. The retarded time variables in $(\ref{eq:7.1})$ and $(\ref{eq:7.2})$ are not the same, since they refer to varying time delays suffered by different parts of the shocks when they collide. Here ${S}_{0}(\hat{\tau}, \hat{\theta})$ must be the news function for the collision at the speed of light, i.e. $\gamma=\infty$. 
If the two asymptotic expansions $(\ref{eq:7.1})$ and $(\ref{eq:7.2})$ both hold in the intermediate region where ${\gamma}^{-1} \ll \hat{\theta} \ll 1$, then matching enables us to gain information about the angular dependence of ${S}_{0}(\hat{\tau}, \hat{\theta})$ near the axis $\hat{\theta}=0$. Furthermore, if ${S}_{0}(\hat{\tau}, \hat{\theta})$ is sufficiently regular it will possess a convergent series of the form \begin{equation} \label{eq:7.3}{S}_{0}(\hat{\tau}, \hat{\theta})= \sum_{n=0}^{\infty} a_{2n} (\hat{\tau}) (\sin\hat{\theta})^{2n}, \end{equation} since it is symmetrical about $\hat{\theta}= \frac{\pi}{2}$ in the center-of-mass frame. Since $\hat{\theta}={\gamma}^{-1}\psi$ in Eq. $(\ref{eq:7.1})$, the ${\hat{\theta}}^{2n}$ part of Eq. $(\ref{eq:7.3})$ will be found from the $({\gamma}^{-1}\psi)^{2n}={\gamma}^{-2n}\psi^{2n}$ part of $(\ref{eq:7.1})$, and then finding $\mathcal{Q}_{2n}(\bar{\tau}, \psi)$ enables one to determine the coefficient $a_{2n}(\hat{\tau})$ of $(\sin\hat{\theta})^{2n}$ in Eq. ($\ref{eq:7.3})$. In this way, $a_{0}(\hat{\tau})$ was found, given by the limiting form of $\mathcal{Q}_{0}(\bar{\tau}, \psi)$ as $\psi \rightarrow \infty$. Hence, perturbation methods can in principle be used to determine the entire news function of the highly nonlinear speed-of-light collision. But the calculation of the higher-order $\mathcal{Q}_{2n}(\bar{\tau}, \psi)$ requires the solution of inhomogeneous flat-space wave equations with complicated source terms, and along this route it has not proved possible to determine the nonisotropic part of ${S}_{0}(\hat{\tau}, \hat{\theta})$. We will follow another way of calculating ${S}_{0}(\hat{\tau}, \hat{\theta})$ using perturbation methods, which deals directly with the collision at the speed of light. Starting with the speed-of-light collision of two shocks, each of energy $\mu$, we make a large Lorentz boost away from the center-of-mass frame. 
There the energy $\nu=\mu e^\alpha$ of the incoming shock 1, which initially lies on the null hyperplane $z+t=0$ between two portions of Minkowski space, obeys $\nu \gg \lambda$, where $\lambda= \mu e^{- \alpha}$ is the energy of the incoming shock 2, which initially lies on the null hyperplane $z-t=0$. In the boosted frame, the metric describing the scattering of the weak shock off the strong one possesses a perturbation expansion in powers of $\frac{\lambda}{\nu}$, that is \begin{equation} \label{eq:7.4} g_{ab} \sim \nu^2 \bigg{[} \eta_{ab} + \sum_{i=1}^\infty \bigg{(}\frac{\lambda}{\nu} \bigg{)}^i h_{ab}^{(i)} \bigg{]}, \end{equation} with respect to suitable coordinates, where $\eta_{ab}$ is the Minkowski metric. The problem of solving the Einstein field equations becomes a singular perturbation problem of finding $h_{ab}^{(1)}$, $h_{ab}^{(2)}$, $\dots$, by successively solving the linearized field equations at first, second, ... order in $\frac{\lambda}{\nu}$, given the characteristic initial data on the surface $u=0$ just to the future of the strong shock 1. On boosting back to the center-of-mass frame, one finds that the perturbation series $(\ref{eq:7.4})$ gives an accurate description of the space-time geometry in the region in which gravitational radiation propagates at small angles away from the forward symmetry axis $\hat{\theta}=0$. By reflection symmetry, an analogous series also gives a good description near the backward axis $\hat{\theta}=\pi$. 
The news function $c_0(\hat{\tau}, \hat{\theta})$, which describes the gravitational radiation arriving at future null infinity $J^+$ in the center-of-mass frame, is expected to have the convergent series expansion \begin{equation} \label{eq:7.5} {c}_{0}(\hat{\tau}, \hat{\theta})= \sum_{n=0}^{\infty} a_{2n} \bigg{(}\frac{\hat{\tau}}{\mu}\bigg{)} (\sin\hat{\theta})^{2n}, \end{equation} where $\hat{\tau}$ is a suitable retarded time coordinate and where we replaced $a_{2n}(\hat{\tau})$ with $a_{2n}\big{(}\frac{\hat{\tau}}{\mu}\big{)}$, since $\hat{\tau}$ will always appear in the dimensionless combination $\frac{\hat{\tau}}{\mu}$. The first-order perturbation calculation of $h_{ab}^{(1)}$, on boosting back to the center-of-mass frame, yields $a_{0} \big{(}\frac{\hat{\tau}}{\mu}\big{)}$, in agreement with the expression of the isotropic part of the news function of the collision of two black holes at large but finite incoming Lorentz factor $\gamma$ on angular scales of order 1. The second-order calculation of $h_{ab}^{(2)}$, which consists in solving the second-order field equations in the boosted frame, which take the form of inhomogeneous flat-space wave equations with complicated source terms, gives an integral expression for the first nonisotropic coefficient $a_{2} \big{(}\frac{\hat{\tau}}{\mu}\big{)}$ which, as it stands, cannot be evaluated numerically. In what follows, we are going to show how the calculation of $a_{2} \big{(}\frac{\hat{\tau}}{\mu}\big{)}$ can be simplified analytically so as to enable us to compute this function numerically. This is of interest since, if all the gravitational radiation in the space-time is accurately described by Eq. ($\ref{eq:7.5})$, then the mass of the assumed final static Schwarzschild black hole remaining after the collision can be determined from knowledge only of $a_{0} \big{(}\frac{\hat{\tau}}{\mu}\big{)}$ and $a_{2} \big{(}\frac{\hat{\tau}}{\mu}\big{)}$. 
To begin the process of finding a simpler form for $a_{2} \big{(}\frac{\hat{\tau}}{\mu}\big{)}$, we note that, because of the conformal symmetry at each order in perturbation theory, the field equations obeyed by the metric perturbations $h_{ab}^{(1)}$, $h_{ab}^{(2)}$, $\dots$ in Eq. ($\ref{eq:7.4})$ may all be reduced to equations in two independent variables. Indeed, a conformal transformation does not affect the intrinsic nature of the perturbation problem; it merely alters the value of the perturbation parameter. Then, once a conformal transformation is performed, in an appropriate gauge, the field equations for the $h_{ab}^{(i)}$ are all of the form \begin{equation} \label{eq:7.6} \Box h_{ab}^{(i)} = S_{ab}^{(i)}, \end{equation} where $S_{ab}^{(i)}$ is a function of $h_{ab}^{(i-1)}$, ..., $h_{ab}^{(1)}$ and their derivatives. Since each $h_{ab}^{(i)}$ is, at this stage, of the form $fn(q, r){\rho}^{-k}$, its corresponding $S_{ab}^{(i)}$ must be of the form $fn(q, r){\rho}^{-(k+2)}$. This indicates that it is possible to eliminate $\rho$ from the field equations by separation of variables, thereby reducing them to two-dimensional differential equations. \section{Reduction to two dimensions} Let us now perform the reduction to two dimensions explicitly (D'Eath \cite{d1992gravitational2}), starting with the first-order perturbations $h_{ab}^{(1)}$. 
These are particular cases of the general problem given by the flat-space wave equation \begin{equation} \label{eq:7.7} \Box \psi \equiv 2 \frac{\partial^2 \psi}{\partial u \partial v} + \frac{1}{\rho} \frac{\partial}{\partial \rho} \bigg{[} \rho \frac{\partial \psi}{\partial \rho} \bigg{]} + \frac{1}{\rho^2} \frac{\partial^2 \psi}{\partial \phi^2}=0, \end{equation} supplemented by the boundary condition \begin{equation} \label{eq:7.8} \begin{split} &\psi|_{u=0} = e^{im\phi} \rho^{-n}f [8 \ln(v \rho) - \sqrt{2} v ], \\ &f(x)=0, \hspace{1cm} \forall x <0, \end{split} \end{equation} where $m$ and $n$ are integers and, apart from the above restriction, $f(x)$ is arbitrary. We know from our previous arguments that $\psi$ must be of the form $e^{i m \phi} \rho^{-n}\chi(q, r)$ if $u \geq 0$, where \begin{equation} \begin{split} \label{eq:7.9} & q \equiv u \rho^{-2}, \\ & r \equiv 8 \ln(v \rho) - \sqrt{2} v. \end{split} \end{equation} From Eq. ($\ref{eq:7.9})$ we find \begin{equation} \begin{split} \label{eq:7.10} & \bigg{[} \frac{\partial}{\partial u} \bigg{]}_{v, \rho, \phi}= \frac{1}{\rho^2} \bigg{[} \frac{\partial}{\partial q} \bigg{]}_{r, \rho, \phi}, \\ & \bigg{[} \frac{\partial}{\partial v} \bigg{]}_{u, \rho, \phi}= - \sqrt{2} \bigg{[} \frac{\partial}{\partial r} \bigg{]}_{q, \rho, \phi}, \\ & \bigg{[} \frac{\partial}{\partial \rho} \bigg{]}_{u, v, \phi}= \bigg{[} \frac{\partial}{\partial \rho} \bigg{]}_{q, r, \phi} - \frac{2 q}{\rho} \bigg{[} \frac{\partial}{\partial q} \bigg{]}_{r, \rho, \phi} + \frac{8}{\rho} \bigg{[} \frac{\partial}{\partial r} \bigg{]}_{q, \rho, \phi}, \end{split} \end{equation} and therefore \begin{equation}\begin{split}\label{eq:7.11} 2 \frac{\partial^2}{\partial u \partial v} + \frac{1}{\rho} \frac{\partial}{\partial \rho} \bigg{[} \rho \frac{\partial}{\partial \rho} \bigg{]} + \frac{1}{\rho^2}\frac{\partial^2}{\partial \phi^2} =& \frac{1}{\rho^2} \Biggl\{ - 2\sqrt{2} \frac{\partial^2}{\partial q \partial r} + \bigg{[} \rho 
\frac{\partial}{\partial \rho} - 2q \frac{\partial}{\partial q} \\ &+ 8 \frac{\partial}{\partial r} \bigg{]} \bigg{[} \rho \frac{\partial}{\partial \rho} - 2q \frac{\partial}{\partial q} + 8 \frac{\partial}{\partial r} \bigg{]} + \frac{\partial^2}{\partial \phi^2} \Biggr\}. \end{split} \end{equation} Thus, $\chi$ is the solution of \begin{equation} \begin{split}\label{eq:7.12} \mathcal{L}_{m, n} \chi \equiv & \Biggl\{ - 2 \sqrt{2} \frac{\partial^2}{\partial q \partial r} + \bigg{[} -n -2q \frac{\partial}{\partial q} + 8 \frac{\partial}{\partial r} \bigg{]} \bigg{[} -n - 2q \frac{\partial}{\partial q} \\ &+ 8 \frac{\partial}{\partial r} \bigg{]} - m^2 \Biggr\} \chi=0, \end{split} \end{equation} where the boundary condition is $\chi|_{q=0}= f(r)$. For the homogeneous wave equation where the solution has a simple integral form, there is no advantage in eliminating $\rho$ and $\phi$ from the differential equation. However, the higher-order metric perturbations $h_{ab}^{(i)}$ with $i \geq 2$ turn out to obey inhomogeneous flat-space wave equations of the form \begin{equation} \label{eq:7.13} \Box \psi = S \end{equation} where $S$ is a source term given by $S=e^{im \phi}\rho^{- (n +2)}H(q, r)$ and the boundary condition may be taken to be $\psi|_{u=0}=0$. This leads to the following equation for $\chi\equiv e^{-im \phi} \rho^n \psi$: \begin{equation} \label{eq:7.14} \mathcal{L}_{m, n} \chi(q, r) = H(q, r), \end{equation} where $\mathcal{L}_{m, n}$ is a $\textit{hyperbolic operator}$ in the independent variables $q$ and $r$. In contrast with the homogeneous case, the benefits gained in the reduction of Eq. ($\ref{eq:7.13})$ to Eq. ($\ref{eq:7.14})$ are not insignificant. Previously, to calculate the solution at some space-time point $P$ we would have had to integrate the source term $S$, suitably weighted, over the past null cone of $P$. 
Now, once the Green's function for the differential operator $\mathcal{L}_{m, n}$, defined by \begin{equation}\label{eq:7.15} \mathcal{L}_{m, n}G_{m, n}(q, r; q_0, r_0)= \delta(q-q_0)\delta(r-r_0),\end{equation} (where $\mathcal{L}_{m, n}$ acts on the $(q, r)$ part of $G_{m, n}$) has been found, we simply need to integrate the product of $H$ and this Green's function over some two-dimensional region in the $(q, r)$-plane, i.e. \begin{equation} \label{eq:7.16} \chi(q, r)= \int \int G_{m, n} (q, r; q_0, r_0) H(q_0, r_0)dq_0 dr_0, \end{equation} subject to suitable boundary conditions. This makes it much easier to estimate the various contributions to the solution from different parts of the integration region. \section{Reduction to canonical form and the Riemann function} It is more convenient to reduce Eq. ($\ref{eq:7.14})$ to canonical form, and then to find an integral representation of the solution. But first, we want to demonstrate that the differential operator $\mathcal{L}_{m, n}$ is hyperbolic. To this end, we define new coordinates \begin{equation} \label{eq:7.17} x=x(q, r) \hspace{2cm} y=y(q, r). \end{equation} Now, \begin{equation} \label{eq:7.18} \mathcal{L}_{m, n} = - ( 2 \sqrt{2} + 32 q) \frac{\partial^2}{\partial q \partial r} + 4q^2 \frac{\partial^2}{\partial q^2} + 64 \frac{\partial^2}{\partial r^2} + 4(n+1)q \frac{\partial}{\partial q} - 16n \frac{\partial}{\partial r} + n^2 - m^2. \end{equation} We choose $x$ and $y$ so that the coefficients of $\frac{\partial^2}{\partial x^2}$ and $\frac{\partial^2}{\partial y^2}$ vanish and $\mathcal{L}_{m, n}$ is transformed to normal hyperbolic form, in which (see Chapter 1, Eq. ($\ref{eq:60})$) \begin{equation} \label{eq:7.19} \mathcal{L}_{m, n}= f(x, y) \frac{\partial^2}{\partial x \partial y} + g(x, y) \frac{\partial}{\partial x} + h(x, y) \frac{\partial}{\partial y} + n^2 - m^2. 
\end{equation} Expressing $\mathcal{L}_{m, n}$ in terms of $\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial y}$ we find that \begin{equation} \begin{split} \label{eq:7.20} \mathcal{L}_{m, n}= & \Biggl\{ - (2 \sqrt{2} + 32 q) \bigg{[}\frac{\partial x}{\partial q} \bigg{]}\bigg{[}\frac{\partial x}{\partial r} \bigg{]} + 4q^2 \bigg{[}\frac{\partial x}{\partial q} \bigg{]}^2 + 64\bigg{[}\frac{\partial x}{\partial r} \bigg{]}^2 \Biggr\} \frac{\partial^2}{\partial x^2} \\ & + \Biggl\{ - (2 \sqrt{2} + 32 q) \bigg{[}\frac{\partial y}{\partial q} \bigg{]}\bigg{[}\frac{\partial y}{\partial r} \bigg{]} + 4q^2 \bigg{[}\frac{\partial y}{\partial q} \bigg{]}^2 + 64\bigg{[}\frac{\partial y}{\partial r} \bigg{]}^2 \Biggr\} \frac{\partial^2}{\partial y^2} \\ &+ \Biggl\{ - (2 \sqrt{2} + 32 q) \bigg{[} \bigg{[}\frac{\partial x}{\partial q} \bigg{]}\bigg{[}\frac{\partial y}{\partial r} \bigg{]} + \bigg{[}\frac{\partial y}{\partial q} \bigg{]}\bigg{[}\frac{\partial x}{\partial r} \bigg{]} \bigg{]}+ 8q^2 \bigg{[}\frac{\partial x}{\partial q} \bigg{]}\bigg{[}\frac{\partial y}{\partial q} \bigg{]} \\ &+ 128\bigg{[}\frac{\partial x}{\partial r} \bigg{]}\bigg{[}\frac{\partial y}{\partial r} \bigg{]} \Biggr\} \frac{\partial^2}{\partial y \partial x} + ... \end{split}\end{equation} where we have omitted the terms of first and zeroth order in $\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial y}$. In order that Eq. 
($\ref{eq:7.19})$ be satisfied, we must have \begin{equation} \label{eq:7.21} - (2 \sqrt{2} + 32 q)\bigg{[}\frac{\partial x}{\partial q} \bigg{]}\bigg{[}\frac{\partial x}{\partial r} \bigg{]} + 4q^2 \bigg{[}\frac{\partial x}{\partial q} \bigg{]}^2 + 64 \bigg{[}\frac{\partial x}{\partial r} \bigg{]}^2 =0, \end{equation} \begin{equation} \label{eq:7.22} - (2 \sqrt{2} + 32 q)\bigg{[}\frac{\partial y}{\partial q} \bigg{]}\bigg{[}\frac{\partial y}{\partial r} \bigg{]} + 4q^2 \bigg{[}\frac{\partial y}{\partial q} \bigg{]}^2 + 64 \bigg{[}\frac{\partial y}{\partial r} \bigg{]}^2 =0. \end{equation} This means that $\frac{\partial x }{\partial q}/ \frac{\partial x}{ \partial r}$ and $\frac{\partial y }{\partial q}/ \frac{\partial y}{ \partial r}$ must be the two real roots of the quadratic equation \begin{equation} \label{eq:7.23} 4 q^2 x^2 - (2 \sqrt{2} + 32 q) x + 64 =0. \end{equation} The discriminant of this equation is positive, hence $\mathcal{L}_{m, n}$ is hyperbolic and its characteristic coordinates $x$ and $y$ satisfy \begin{equation} \label{eq:7.24} \bigg{[}\frac{\partial x}{\partial q}\bigg{]} = \bigg{[} \frac{ 1 + 8q \sqrt{2} + \sqrt{(1 + 16 q \sqrt{2})}}{2 q^2 \sqrt{2}} \bigg{]} \bigg{[} \frac{\partial x}{\partial r} \bigg{]}, \end{equation} and \begin{equation} \label{eq:7.25} \bigg{[}\frac{\partial y}{\partial q}\bigg{]} = \bigg{[} \frac{ 1 + 8q \sqrt{2} - \sqrt{(1 + 16 q \sqrt{2})}}{2 q^2 \sqrt{2}} \bigg{]} \bigg{[} \frac{\partial y}{\partial r} \bigg{]}, \end{equation} where we have arbitrarily assigned the plus sign to $x$ and the minus sign to $y$. Hence, we have shown the hyperbolic nature of $\mathcal{L}_{m, n}$ and we have reduced Eq. ($\ref{eq:7.14})$ to the form ($\ref{eq:7.19})$. For ease of calculation we now choose \cite{Esposito:2001ry} \begin{equation} \label{eq:7.26} \frac{\partial x}{\partial r}=1, \hspace{2cm} \frac{\partial y}{\partial r}=1. \end{equation} If we solve Eqs. 
$(\ref{eq:7.24})$ and $(\ref{eq:7.25})$, by making use of $(\ref{eq:7.26})$, we find \begin{equation} \label{eq:7.27} x = r + 8 \ln \bigg{[} \frac{\sqrt{(1+ 16 q \sqrt{2})} -1}{2} \bigg{]} - \frac{8}{\big{[}\sqrt{(1+16q\sqrt{2})} - 1\big{]}}-4, \end{equation} \begin{equation} \label{eq:7.28} y = r + 8 \ln \bigg{[} \frac{\sqrt{(1+ 16 q \sqrt{2})} +1}{2} \bigg{]} + \frac{8}{\big{[}\sqrt{(1+16q\sqrt{2})} + 1\big{]}}-4, \end{equation} where the constants of integration have been chosen for future convenience. To simplify these formulae we define \begin{equation} \label{eq:7.29} t\equiv \sqrt{1+16 q \sqrt{2}} = t(x, y). \end{equation} Then Eqs. ($\ref{eq:7.27})$ and ($\ref{eq:7.28})$ read as \begin{equation} \label{eq:7.30} x = r + 8 \ln \bigg{(} \frac{t -1}{2} \bigg{)} - \frac{8}{\big{(}t - 1\big{)}}-4, \end{equation} \begin{equation} \label{eq:7.31} y = r + 8 \ln \bigg{(} \frac{t +1}{2} \bigg{)} + \frac{8}{\big{(}t+ 1\big{)}}-4. \end{equation} If we subtract Eq. ($\ref{eq:7.31})$ from Eq. ($\ref{eq:7.30})$, we have \begin{equation*} (x-y)= 8 \ln \bigg{(} \frac{t-1}{t+1} \bigg{)} - 8 \frac{2t}{(t^2 - 1)}.\end{equation*} From this we find \begin{equation*} \ln \bigg{(} \frac{t-1}{t+1} \bigg{)} - \frac{2t}{(t^2 - 1)} = \frac{(x-y)}{8}, \end{equation*} which can be written in the form \begin{equation*} \frac{(t-1)}{(t+1)}e^{\frac{2t}{(1-t^2)}} = e^{\frac{(x-y)}{8}}. \end{equation*} If we define \begin{equation*} \omega \equiv \frac{(t-1)}{(t+1)} \rightarrow t= \frac{(1 + \omega )}{(1 - \omega)} \end{equation*} we have to solve the transcendental equation \begin{equation*} \omega e^{\frac{(\omega^2 -1)}{2 \omega}}= e^\frac{(x-y)}{8} \end{equation*} to obtain $\omega = \omega (x-y)$, from which we have $t= t(x-y)$. 
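Since the left-hand side of the transcendental equation is strictly monotonic in $\omega$ on $(0, 1)$, the equation can be solved to machine precision by bisection on its logarithm. The following minimal sketch (in Python; the function names are ours) recovers $t(x-y)$ and checks the result by reconstructing $x-y$ from Eqs. ($\ref{eq:7.30})$ and ($\ref{eq:7.31})$:

```python
import math

def omega_of(x_minus_y, iters=200):
    """Solve omega * exp((omega**2 - 1)/(2 omega)) = exp((x - y)/8) for
    omega in (0, 1) by bisection on the monotonic function
    g(w) = ln w + (w**2 - 1)/(2 w) - (x - y)/8."""
    s = x_minus_y / 8.0
    g = lambda w: math.log(w) + (w * w - 1.0) / (2.0 * w) - s
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def t_of(x_minus_y):
    """t = (1 + omega)/(1 - omega), giving t = t(x - y)."""
    w = omega_of(x_minus_y)
    return (1.0 + w) / (1.0 - w)

# Round-trip check: pick q > 0, form t and x - y from Eqs. (7.30)-(7.31),
# then recover t from the transcendental equation.
q = 0.3
t_true = math.sqrt(1.0 + 16.0 * q * math.sqrt(2.0))
x_minus_y = 8.0 * math.log((t_true - 1.0) / (t_true + 1.0)) \
            - 16.0 * t_true / (t_true ** 2 - 1.0)
assert abs(t_of(x_minus_y) - t_true) < 1e-9
```

Note that $x - y < 0$ for every $t > 1$, so the root always lies in $(0, 1)$ and bisection cannot fail.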
Now, if we exploit the formulae \begin{equation} \label{eq:7.32} \frac{\partial x}{\partial q}= \frac{64 \sqrt{2}}{(t-1)^2} , \end{equation} \begin{equation} \label{eq:7.33}\frac{\partial y}{\partial q}= \frac{64 \sqrt{2}}{(t+1)^2} , \end{equation} we find that the coefficients $f(x, y)$, $g(x, y)$ and $h(x, y)$ of Eq. ($\ref{eq:7.19})$ are \begin{equation} \begin{split} \label{eq:7.34} f(x, y) &=- (2 \sqrt{2} + 32 q) \bigg{(} \frac{\partial x}{\partial q} + \frac{\partial y}{\partial q} \bigg{)} + 8 q^2 \frac{\partial x}{\partial q}\frac{\partial y}{\partial q} + 128 \\ &= 256 \bigg{[} 1 - \frac{(t^2 + 1)^2}{(t-1)^2 (t+1)^2} \bigg{]} = - \frac{1024 \, t^2}{(t-1)^2 (t+1)^2}, \end{split} \end{equation} \begin{equation} \label{eq:7.35} g(x, y)= 4(n+1) q \frac{\partial x}{\partial q} -16n= 16 \bigg{[} 1 + \frac{2(n+1)}{(t-1)} \bigg{]}, \end{equation} \begin{equation} \label{eq:7.36} h(x, y)= 4(n+1) q \frac{\partial y}{\partial q}- 16n= 16 \bigg{[} 1 - \frac{2(n+1)}{(t+1)} \bigg{]}. \end{equation} The resulting canonical form of Eq. ($\ref{eq:7.14})$ is \begin{equation} \label{eq:7.37} \mathcal{L}[\chi] = \bigg{(} \frac{\partial^2}{\partial x \partial y} + a(x, y) \frac{\partial}{\partial x} + b(x, y) \frac{\partial}{\partial y} + c(x, y) \bigg{)} \chi (x, y) = \tilde{H}(x, y), \end{equation} where \begin{equation} \label{eq:7.38} a(x, y) \equiv \frac{g(x, y)}{f(x, y)} = \frac{1}{64} \frac{(1-t)(t+1)^2 (2n + 1 + t)}{t^2}, \end{equation} \begin{equation} \label{eq:7.39} b(x, y) \equiv \frac{h(x, y)}{f(x, y)} = \frac{1}{64} \frac{(t+1)(t-1)^2 (2n + 1 - t)}{t^2},\end{equation} \begin{equation} \label{eq:7.40} c(x, y) \equiv \frac{n^2 - m^2}{f(x, y)} = \frac{(m^2 - n^2)}{1024} \frac{(t-1)^2 (t+1)^2}{t^2},\end{equation} \begin{equation} \label{eq:7.41} \tilde{H}(x, y) \equiv \frac{H(x, y)}{f(x, y)} = -\frac{H(x, y)}{1024} \frac{(t-1)^2 (t+1)^2}{t^2}.\end{equation} Note that $a(-t) =b(t)$, $b(-t)=a(t)$, $c(-t)=c(t)$ and $\tilde{H}(-t)=\tilde{H}(t)$. 
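The symmetry just noted can be checked numerically from the defining relations alone, i.e. from the first equalities in Eqs. ($\ref{eq:7.34})$-($\ref{eq:7.36})$, together with $q = (t^2-1)/(16\sqrt{2})$ from Eq. ($\ref{eq:7.29})$ and the derivatives ($\ref{eq:7.32})$-($\ref{eq:7.33})$, without invoking any closed form in $t$. A minimal sketch in Python (the function name is ours):

```python
import math

SQRT2 = math.sqrt(2.0)

def coefficients(t, m, n):
    """a = g/f, b = h/f, c = (n^2 - m^2)/f, evaluated from the defining
    relations (first equalities of Eqs. (7.34)-(7.36)), with
    q = (t^2 - 1)/(16 sqrt 2) and dx/dq, dy/dq from Eqs. (7.32)-(7.33)."""
    q = (t * t - 1.0) / (16.0 * SQRT2)
    xq = 64.0 * SQRT2 / (t - 1.0) ** 2          # Eq. (7.32)
    yq = 64.0 * SQRT2 / (t + 1.0) ** 2          # Eq. (7.33)
    f = -(2.0 * SQRT2 + 32.0 * q) * (xq + yq) + 8.0 * q * q * xq * yq + 128.0
    g = 4.0 * (n + 1) * q * xq - 16.0 * n
    h = 4.0 * (n + 1) * q * yq - 16.0 * n
    return g / f, h / f, (n * n - m * m) / f

# Symmetry check: a(-t) = b(t), b(-t) = a(t), c(-t) = c(t).
a1, b1, c1 = coefficients(2.5, m=2, n=1)
a2, b2, c2 = coefficients(-2.5, m=2, n=1)
assert abs(a2 - b1) < 1e-12 and abs(b2 - a1) < 1e-12 and abs(c2 - c1) < 1e-12
```

Since $q$, $f$ and the pair $(\partial x/\partial q, \partial y/\partial q)$ are respectively even, even and swapped under $t \rightarrow -t$, the symmetry holds identically, as the check confirms.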
For a hyperbolic equation in the form ($\ref{eq:7.37})$, we can use the Riemann integral representation of the solution. For this purpose, on denoting by $\mathcal{L}^{*}$ the adjoint of the operator $\mathcal{L}$ in $(\ref{eq:7.37})$, which acts according to \begin{equation} \label{eq:7.42} \mathcal{L}^{*} [R]= \frac{\partial^2 R}{\partial x \partial y} - \frac{\partial (a R)}{\partial x} - \frac{\partial (b R)}{\partial y} + c R \end{equation} we have to find the Riemann kernel $R(x, y; \xi, \eta)$ ($(\xi, \eta)$ are the coordinates of a point $P$ such that the characteristics through it intersect a curve $C$ at points $A$ and $B$) subject to the following conditions: \begin{description} \item[(a)] As a function of $x$ and $y$, $R$ satisfies the adjoint equation \begin{equation} \label{eq:7.43} {\mathcal{L}^*}_{(x, y)}[R]=0. \end{equation} \item[(b)] $R_x=bR$ on $AP$, i.e. \begin{equation} \label{eq:7.44} \frac{\partial R}{\partial x}(x, y; \xi, \eta)=b(x, \eta)R(x, y; \xi, \eta) \hspace{1cm} {\rm on}\; y= \eta, \end{equation} and $R_y=aR$ on $BP$, i.e. \begin{equation} \label{eq:7.45} \frac{\partial R}{\partial y}(x, y; \xi, \eta)=a(\xi, y)R(x, y; \xi, \eta) \hspace{1cm} {\rm on} \; x= \xi. \end{equation} \item[(c)] $R=1$ at $P$, i.e. \begin{equation} \label{eq:7.46} R(\xi, \eta; \xi, \eta)=1. \end{equation} \end{description} \begin{figure} \centering \includegraphics{fig7mod.png} \caption{Geometry of the characteristic initial-value problem in two independent variables.}\label{fig:7} \end{figure} Then, according to the formula ($\ref{eq:83})$ we have obtained in Chapter 1, it is possible to express the solution of Eq. 
($\ref{eq:7.37})$ in the form \begin{equation}\begin{split} \label{eq:7.47} \chi(P)= &\frac{ \chi(A)R(A) + \chi(B)R(B)}{2} + \int_A^B \bigg{(}\bigg{[}bR \chi +\frac{1}{2} \bigg{(} R \frac{\partial \chi}{\partial x} - \frac{\partial R}{\partial x}\chi \bigg{)} \bigg{]}dx \\ &- \bigg{[} aR \chi + \frac{1}{2}\bigg{(} R \frac{\partial \chi}{\partial y} - \frac{\partial R}{\partial y} \chi \bigg{)} \bigg{]}dy \bigg{)}+ \int\int_\Omega R(x, y; \xi, \eta)\tilde{H}(x, y) dx dy \end{split}\end{equation} where the path of integration is the one in Fig. \ref{fig:7} and $\Omega$ is the region bounded by the characteristics $AP$ and $BP$ and by the arc $AB$ of the curve $C$. We note that the main difference between Eq. ($\ref{eq:83})$ and Eq. ($\ref{eq:7.47})$ is that ($\ref{eq:83})$ refers to the equation $L[z]=f$ with $f=0$, whereas in our case $f=\tilde{H} \neq 0$. Eqs. ($\ref{eq:7.44})$ and $(\ref{eq:7.45})$ are ordinary differential equations for the Riemann kernel $R(x, y; \xi, \eta)$ along the characteristics parallel to the coordinate axes. By virtue of ($\ref{eq:7.46})$, their integration yields \begin{equation} \label{eq:7.48} R(x, \eta; \xi, \eta)= \exp \bigg{(}\int_\xi^x b(\lambda, \eta) d\lambda\bigg{)}, \end{equation} \begin{equation} \label{eq:7.49} R(\xi, y; \xi, \eta)= \exp \bigg{(}\int_\eta^y a(\xi, \lambda) d\lambda \bigg{)},\end{equation} which are the values of $R$ along the characteristics through $P$. By contrast, Eq. ($\ref{eq:7.47})$ yields the solution of Eq. ($\ref{eq:7.37})$ for arbitrary initial values given along an arbitrary non-characteristic curve $C$, by means of a solution $R$ of the adjoint equation ($\ref{eq:7.43})$ which depends on $x$, $y$ and two parameters $\xi$ and $\eta$. Unlike $\chi$, $R$ solves a characteristic initial-value problem. \section{Goursat problem for the Riemann function} The reduction to canonical form of Eq. ($\ref{eq:7.14})$ previously performed is based on novel features with respect to the analysis of D'Eath \cite{d1992gravitational2}, since Eq. 
($\ref{eq:7.47})$ also contains the integral along $AB$ and the term $\frac{1}{2}[\chi(A)R(A) + \chi(B)R(B)]$. This representation of the solution might be more appropriate for numerical purposes, but the task of finding the Riemann function $R$ remains extremely difficult. However, it is possible to use approximate methods for solving Eq. ($\ref{eq:7.43})$. For this purpose, by virtue of Eq. ($\ref{eq:7.42})$, equation ($\ref{eq:7.43})$ is an equation of the form \cite{Esposito:2001ry} \begin{equation} \label{eq:7.50} \bigg{(} \frac{\partial^2}{\partial x \partial y} - a \frac{\partial}{\partial x} - b \frac{\partial}{\partial y} + c - \frac{\partial a}{\partial x} - \frac{\partial b}{\partial y}\bigg{)} R(x, y; \xi, \eta)=0. \end{equation} Eq. ($\ref{eq:7.50})$ can be written in the form of a canonical hyperbolic equation \begin{equation} \label{eq:7.51} \bigg{(} \frac{\partial^2}{\partial x \partial y} +A \frac{\partial}{\partial x} +B \frac{\partial}{\partial y} + C \bigg{)} R(x, y; \xi, \eta)=0, \end{equation} where \begin{equation} \left\{ \label{eq:7.52} \begin{array}{l} A \equiv - a = \frac{1}{64} \frac{(t-1)(t+1)^2 (2n + 1 + t)}{t^2};\\ B \equiv -b = - \frac{1}{64} \frac{(t+1)(t-1)^2 (2n + 1 - t)}{t^2};\\ C \equiv c - \frac{\partial a}{\partial x} - \frac{\partial b}{\partial y}= \frac{(m^2 - n^2)}{1024} \frac{(t-1)^2 (t+1)^2}{t^2} - \frac{\partial}{\partial x} \bigg{[}\frac{1}{64} \frac{(1-t)(t+1)^2 (2n + 1 + t)}{t^2}\bigg{]} \\ - \frac{\partial}{\partial y} \bigg{[} \frac{1}{64} \frac{(t+1)(t-1)^2 (2n + 1 - t)}{t^2}\bigg{]} . \end{array}\right. \end{equation} Therefore, on defining \begin{equation} \left\{ \label{eq:7.53} \begin{array}{l} U \equiv R,\\ V \equiv \frac{\partial R}{\partial x} + BR, \end{array}\right. \end{equation} we have \begin{equation*} V \equiv \frac{\partial U}{\partial x} + B U \rightarrow \frac{\partial U}{\partial x} = V - BU. 
\end{equation*} If we replace this expression of $\frac{\partial U}{\partial x}$ in Eq. $(\ref{eq:7.51})$ we have \begin{equation*} \begin{split}& \frac{\partial}{\partial y} \frac{\partial U}{\partial x} + A \frac{\partial U}{\partial x} + B \frac{\partial U}{\partial y} + CU = \frac{\partial}{\partial y} (V - BU) + A(V- BU) + B \frac{\partial U}{\partial y} + CU\\ &= \frac{\partial V}{\partial y} - \frac{\partial B}{\partial y} U - B \frac{\partial U}{\partial y} + AV - ABU + B\frac{\partial U}{\partial y} + CU \\ &= \frac{\partial V}{\partial y} - \frac{\partial B}{\partial y}U + AV - ABU + CU=0. \end{split}\end{equation*} Then, Eq. ($\ref{eq:7.51})$ is equivalent to the hyperbolic canonical system \begin{equation} \label{eq:7.54} \frac{\partial U}{\partial x} = f_1(x, y)U + f_2(x, y)V, \end{equation} \begin{equation} \label{eq:7.55} \frac{\partial V}{\partial y} = g_1(x, y)U + g_2(x, y)V, \end{equation} where \begin{equation} \label{eq:7.56} \left\{ \begin{array} {l} f_1 \equiv - B= b, \\ f_2 \equiv 1, \\ g_1 \equiv AB - C + \frac{\partial B}{\partial y} = ab - c + \frac{\partial a}{\partial x}, \\ g_2 \equiv -A= a. \end{array} \right. \end{equation} An existence and uniqueness theorem holds for the system described by Eqs. ($\ref{eq:7.54})$ and ($\ref{eq:7.55})$ with boundary data ($\ref{eq:7.48})$ and ($\ref{eq:7.49})$ and hence we can exploit the finite-difference method to find approximate solutions for the Riemann function $R(x, y; \xi, \eta)$ and eventually $\chi(P)$ by Eq. ($\ref{eq:7.47})$. \section{Solution of the characteristic initial-value problem for the homogeneous hyperbolic equation} At this stage, we have to solve a characteristic initial-value problem for a homogeneous hyperbolic equation in canonical form in two independent variables, for which we have developed formulae to be used for numerical solution with the help of a finite-differences scheme. 
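The equivalence can be verified numerically on arbitrary smooth test data: with $U \equiv R$ and $V \equiv R_x + BR$, the residual of Eq. ($\ref{eq:7.55})$, with $g_1$ and $g_2$ as in Eq. ($\ref{eq:7.56})$, must reproduce the left-hand side of Eq. ($\ref{eq:7.51})$ identically. A short sketch in Python (the coefficient and test functions below are arbitrary choices of ours, not those of the Riemann-function problem):

```python
import math

# Arbitrary smooth test data (our choice, for illustration only)
R   = lambda x, y: math.sin(x) * math.cos(y)
Rx  = lambda x, y: math.cos(x) * math.cos(y)
Ry  = lambda x, y: -math.sin(x) * math.sin(y)
Rxy = lambda x, y: -math.cos(x) * math.sin(y)

A  = lambda x, y: x + 2.0 * y
B  = lambda x, y: x * y
By = lambda x, y: x            # dB/dy
C  = lambda x, y: x - y

def lhs(x, y):
    """Left-hand side of Eq. (7.51) applied to R."""
    return Rxy(x, y) + A(x, y) * Rx(x, y) + B(x, y) * Ry(x, y) + C(x, y) * R(x, y)

def residual_755(x, y, eps=1e-6):
    """Residual of Eq. (7.55) for U = R, V = R_x + B R, with
    g1 = AB - C + dB/dy and g2 = -A as in Eq. (7.56);
    dV/dy is taken by central difference."""
    V = lambda xx, yy: Rx(xx, yy) + B(xx, yy) * R(xx, yy)
    Vy = (V(x, y + eps) - V(x, y - eps)) / (2.0 * eps)
    g1 = A(x, y) * B(x, y) - C(x, y) + By(x, y)
    g2 = -A(x, y)
    return Vy - g1 * R(x, y) - g2 * V(x, y)

# The two quantities agree at any point, so R solves Eq. (7.51)
# exactly when (U, V) solves the first-order system.
assert abs(lhs(0.7, -0.4) - residual_755(0.7, -0.4)) < 1e-6
```

The first equation of the system, Eq. ($\ref{eq:7.54})$, needs no check, since $U_x - f_1 U - f_2 V = R_x + BR - V = 0$ holds by the very definition of $V$.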
For this purpose, we study the canonical system ($\ref{eq:7.54})$ and ($\ref{eq:7.55})$ written as \begin{equation} \label{eq:7.57} \frac{\partial U}{\partial x} = F(x, y, U, V),\end{equation} \begin{equation} \label{eq:7.58} \frac{\partial V}{\partial y}= G(x, y, U, V) \end{equation} in the rectangle $R \equiv \{x, y: x \in [x_0, x_0 + a], y \in [y_0, y_0 + b] \}$, with known values of $U$ on the side $AD$ where $x=x_0$, and known values of $V$ on the side $AB$ where $y=y_0$. Then, the segments $AB$ and $AD$ are divided into $m$ and $n$ equal parts, respectively. By setting $\frac{a}{m}\equiv h$ and $\frac{b}{n} \equiv k$, the original differential equations become equations relating values of $U$ and $V$ at three intersection points of the resulting lattice, the index $s$ counting steps of size $h$ along the $x$-direction and $r$ counting steps of size $k$ along the $y$-direction, i.e. \begin{equation} \label{eq:7.59} \frac{U(P_{r, s+1}) - U(P_{r, s})}{h}=F, \end{equation} \begin{equation} \label{eq:7.60} \frac{V(P_{r+1, s}) - V(P_{r, s})}{k}=G. \end{equation} It is now convenient to set $U_{r,s}\equiv U(P_{r, s})$ and $V_{r,s} \equiv V(P_{r,s})$, hence these equations read \begin{equation} \label{eq:7.61} U_{r, s+1}= U_{r, s}+ hF(P_{r,s}, U_{r, s}, V_{r, s}), \end{equation} \begin{equation} \label{eq:7.62} V_{r+1, s}= V_{r, s}+ kG(P_{r,s}, U_{r, s}, V_{r, s}). \end{equation} Then, if both $U$ and $V$ are known at $P_{r, s}$, it is possible to evaluate $U$ at $P_{r, s+1}$ and $V$ at $P_{r+1, s}$. The evaluation at subsequent intersection points of the lattice goes on along horizontal or vertical segments. 
In the former case, the resulting algorithm is \begin{equation} \label{eq:7.63} U_{r, s}= U_{r, 0}+ h \sum_{i=0}^{s-1}F(P_{r, i}, U_{r, i}, V_{r, i}), \end{equation} \begin{equation} \label{eq:7.64} V_{r, s}= V_{r-1, s}+ kG(P_{r-1, s}, U_{r-1, s}, V_{r-1, s}), \end{equation} while in the latter case we have the algorithm expressed by the equations \begin{equation} \label{eq:7.65} V_{r, s}= V_{0, s}+ k \sum_{i=0}^{r-1}G(P_{i, s}, U_{i, s}, V_{i, s}), \end{equation} \begin{equation} \label{eq:7.66} U_{r, s}= U_{r, s-1}+ hF(P_{r, s-1}, U_{r, s-1}, V_{r, s-1}). \end{equation} The stability of such solutions is closely linked with the geometry of the associated characteristics. $\mathbf{Conclusion.}$ It is possible to evaluate the coefficient $a_2$ which appears in the news function ($\ref{eq:7.5}$) by solving the equation \begin{equation*}\omega e^{\frac{(\omega^2 -1)}{2 \omega}}= e^\frac{(x-y)}{8} \end{equation*} numerically for $\omega= \omega(x-y)$, from which it is possible to obtain $t(x-y)$. This yields $a$, $b$, $c$ and $\tilde{H}$ as functions of $(x, y)$ according to $(\ref{eq:7.38})$, $(\ref{eq:7.39})$, $(\ref{eq:7.40})$ and $(\ref{eq:7.41})$, and hence $A$, $B$ and $C$ in the equation for the Riemann function are obtained according to $(\ref{eq:7.52})$, where the derivatives with respect to $x$ and $y$ are evaluated numerically. Eventually, the system given by $(\ref{eq:7.54})$ and $(\ref{eq:7.55})$ is solved according to the finite-differences scheme with $F= f_1 U + f_2 V$ and $G=g_1U + g_2V$. Once the Riemann function $R=U$ is obtained with the desired accuracy, numerical evaluation of the integral $(\ref{eq:7.47})$ yields $\chi(P)$, and $\chi(q, r)$ is obtained upon using equations $(\ref{eq:7.30})$ and $(\ref{eq:7.31})$ for the characteristic coordinates. 
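As an illustration of the sweep defined by Eqs. ($\ref{eq:7.61})$-($\ref{eq:7.66})$, the sketch below (in Python; $F$, $G$ and the boundary data are a toy choice of ours, not the coefficients of the Riemann-function problem) solves the model system $U_x = V$, $V_y = U$ on the unit square, whose exact solution for the data chosen is $U = V = e^{x+y}$; the expected first-order accuracy is visible on refinement:

```python
import math

def goursat_sweep(F, G, U_on_AD, V_on_AB, x0, y0, a, b, m, n):
    """Explicit sweep (7.61)-(7.62) for U_x = F, V_y = G on the rectangle
    [x0, x0+a] x [y0, y0+b]; U is given on the side x = x0 (AD) and V on
    the side y = y0 (AB).  Lattice point P_{r,s} = (x0 + s*h, y0 + r*k)
    with h = a/m, k = b/n."""
    h, k = a / m, b / n
    U = [[0.0] * (m + 1) for _ in range(n + 1)]
    V = [[0.0] * (m + 1) for _ in range(n + 1)]
    for r in range(n + 1):
        U[r][0] = U_on_AD(y0 + r * k)
    for s in range(m + 1):
        V[0][s] = V_on_AB(x0 + s * h)
    for r in range(n + 1):          # row r uses V[r][.], set at step r-1
        for s in range(m + 1):
            x, y = x0 + s * h, y0 + r * k
            if s < m:               # Eq. (7.61)
                U[r][s + 1] = U[r][s] + h * F(x, y, U[r][s], V[r][s])
            if r < n:               # Eq. (7.62)
                V[r + 1][s] = V[r][s] + k * G(x, y, U[r][s], V[r][s])
    return U, V

# Toy problem U_x = V, V_y = U with exact solution U = V = exp(x + y).
F = lambda x, y, U, V: V
G = lambda x, y, U, V: U
err = []
for m in (100, 200):
    U, _ = goursat_sweep(F, G, math.exp, math.exp, 0.0, 0.0, 1.0, 1.0, m, m)
    err.append(abs(U[m][m] - math.exp(2.0)))
assert err[1] < err[0] < 0.2   # first-order scheme: error shrinks on refinement
```

In the intended application, $F = f_1 U + f_2 V$ and $G = g_1 U + g_2 V$ with the coefficients ($\ref{eq:7.56}$), and the boundary data are the characteristic values ($\ref{eq:7.48}$)-($\ref{eq:7.49}$) of the Riemann kernel.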
\chapter*{Conclusions} \addcontentsline{toc}{chapter}{Conclusions} \markboth{}{} The study of the Fourès-Bruhat proof of existence and uniqueness of the solution of Cauchy's problem for the Einstein vacuum field equations has been the main aim of the present work. This has been achieved by first considering systems of $m$ partial differential equations in $n$ unknown functions of $n + 1$ independent variables, for which we have given the definition of $\textit{characteristic manifolds}$ and introduced the concept of wavelike propagation. Then, we have introduced the theory of hyperbolic equations, giving the definition of hyperbolic equation, first on a vector space and then on a manifold, and hence we have considered a second-order linear hyperbolic equation in two variables to discuss Riemann's method. More precisely, we have given the proof of existence of $\textit{Riemann's kernel function}$ and stressed its importance in solving hyperbolic equations subject to characteristic initial-value problems. Our argument then proceeds by studying the $\textit{fundamental solutions}$ and their relation with Riemann's kernel. A first definition of $\textit{characteristic conoid}$ has been given by noticing that the fundamental solution is singular not only at a point but along a certain surface. Since any singular surface of a solution of a linear differential equation must be a characteristic, such a singular surface must satisfy a first-order differential equation. Among the solutions, the one we have considered has a given point as a conic point and is called the $\textit{characteristic conoid}$; then, upon introducing on a connected, Hausdorff four-manifold $M$ the $\textit{characteristic polynomial}$ of a linear partial differential operator $L$, the characteristic conoid has been defined as the cone in the cotangent space at $x \in M$.
Moreover, we have studied the fundamental solution with an algebraic singularity and introduced the concept of $\textit{geodesics}$ as auto-parallel curves. To conclude the discussion of the fundamental solution, we have seen how to build fundamental solutions, by showing some examples with an odd or even number of variables and by studying the case of the scalar wave equation. The discussion then moves towards the study of linear systems of normal hyperbolic form. We have seen that every solution of a system $[E]$ of $n$ second-order partial differential equations, with $n$ unknown functions and four variables $x$, hyperbolic and linear, which possesses in a domain $D$ continuous and bounded first partial derivatives with respect to the four variables $x$, $$ \sum_{\lambda, \mu=1}^4 A^{\lambda \mu} \frac{\partial^2 u_r}{\partial x^\lambda \partial x^\mu} + \sum_{s=1}^n \sum_{\mu=1}^4 {B^s_r}^\mu \frac{\partial u_s}{\partial x^\mu} + f_r=0 \hspace{5cm} [E]$$ verifies some $\textit{Kirchhoff formulae}$. We have then obtained a system of integral equations verified in a domain $D_0$ by these solutions. Then, we have considered a system $[F]$ of non-linear, second-order, hyperbolic partial differential equations with $n$ unknown functions $W_s$ and four variables $x^\alpha$, $$\sum_{\lambda, \mu=1}^4 A^{\lambda \mu}(x^\alpha, W_s, W_{s\alpha}) \frac{\partial^2 W_s}{\partial x^\mu \partial x^\lambda} + f_s(x^\alpha, W_s, W_{s\alpha}) =0 \hspace{3cm} [F]$$ to show under which assumptions it is possible to turn it into a linear system, for which the results previously obtained hold. For this purpose, we have considered the functions $W_s$ as functions of the four variables $x^\alpha$; the coefficients $A^{\lambda \mu}$ and $f_s$ are then functions of these four variables. We have applied these results to the equations $[F']$ obtained by differentiating five times with respect to the variables $x^\alpha$ the given equations $[F]$.
Thus, we obtain a system of integral equations whose left-hand sides are the unknown functions $W_s$, their partial derivatives with respect to the $x^\alpha$ up to the fifth order and some auxiliary functions $X$, $\Omega$, and whose right-hand sides contain only these functions and the integration parameters. Then, in order to solve the Cauchy problem for the nonlinear equations $[F]$, we have tried to solve the system of integral equations verified by the solutions. Some difficulties arise since the quantities occurring under the integral sign must be continuous and bounded upon assuming differentiability of the coefficients $A^{\lambda \mu}$, viewed as functions of the variables $x^\alpha$. This does not hold when the functions $W_s$, $W_{s \alpha}$, ..., $U_S$ are independent; thus the quantity $[A^{ij}]\frac{\partial^2 \sigma}{\partial x^i \partial x^j} J_{x \lambda}$ fails to be bounded and continuous. Moreover, we have passed through the intermediate stage of the approximate equations $[F_1]$, where the coefficients $A^{\lambda \mu}$ are some functions of $x^\alpha$. Therefore, we have tried to solve the integral equations and to show that their solution is a solution of the equations $[F_1]$, but we have noticed that the obtained solution $W_s$ is only five times differentiable and our method is applicable only if the $A^{\lambda \mu}$ depend on the $W_s$ and not on the $W_{s \alpha}$. Hence, we have solved the Cauchy problem for the system $[G]$, $$ \sum_{\lambda, \mu=1}^4 A^{\lambda \mu}(x^\alpha, W_s) \frac{\partial^2 W_s}{\partial x^\lambda \partial x^\mu}+ f_s(x^\alpha, W_s, W_{s\alpha}) =0 \hspace{4cm} [G]$$ where the coefficients $A^{\lambda \mu}$ do not depend on the $W_{s \alpha}$.
It is enough to apply the results for the equations $[E]$ to the equations $[G']$, deduced from the equations $[G]$ by four differentiations with respect to the variables $x^\alpha$, in order to obtain a system of integral equations whose right-hand sides contain only functions that are the same as those occurring on the left-hand sides. The integral equations $[J]$, verified by the bounded solutions with bounded first derivatives of the equations $[G']$, only involve the coefficients $A^{\lambda \mu}$ and $B^{T \lambda}_S$ and their partial derivatives up to the orders four and two, respectively, and the coefficients $F_S$. In solving the integral equations $[J]$ we have found the same difficulty as in the general case. Hence, to solve the Cauchy problem we have studied the approximate system $[G_1]$ of $[G]$, obtained by substituting, in $A^{\lambda \mu}$ (and not in $f_s$), the $W_s$ with some approximate values ${W_s}^{(1)}$. Then, we have studied the equations $[G_1']$, obtained by differentiating $[G_1]$ five times with respect to the variables $x^\alpha$, viewed as linear equations of type $[E]$ in the unknown functions $U_S$, and we have proved that the corresponding system of integral equations $[J_1]$ admits a unique, continuous and bounded solution in a domain $D$. Eventually, since the solution of the Cauchy problem given for the equations $[G_1]$ defines a representation of the space of the functions ${W_S}^{(1)}$ into itself, we have proved that this representation admits a fixed point belonging to the space. The corresponding $W_s$ are solutions of the given equations $[G]$. This solution is unique and possesses continuous and bounded partial derivatives up to the fourth order. At this stage, having shown the existence and uniqueness of the solution of the Cauchy problem for systems of linear and non-linear equations, we have seen how these results can finally be used to solve the Cauchy problem for the Einstein field equations.
The Cauchy problem for the vacuum field equations, $R_{\alpha \beta}=0$, with initial data on a hypersurface $S$ has been formulated and it has been shown under which conditions this problem admits, in the analytic case, a solution and this solution is unique. Therefore, we refer to the vacuum field equations written in arbitrary coordinates and, by making use of $\textit{isothermal coordinates}$, we have seen that they are of the type of the nonlinear equations previously studied, i.e. $$ \mathcal{G}_{\alpha \beta}= \sum_{\lambda, \mu=1}^4(g^{-1})^{\lambda \mu} \frac{\partial^2 g_{\alpha \beta}}{\partial x^\lambda \partial x^\mu} + H_{\alpha \beta}=0.$$ Thus, the Cauchy problem for the Einstein vacuum field equations can be solved, if we identify $(g^{-1})^{\lambda \mu} =A^{\lambda \mu}$, $g_{\alpha \beta}= W_s$ and $H_{\alpha \beta}=f_s$, by using the same method. Eventually, we have studied the causal structure of space-time, giving the conditions under which causality holds locally, and hence we have given the definitions of $\textit{strong causality}$, $\textit{stable causality}$ and $\textit{global hyperbolicity}$. Moreover, we have seen the relation between global hyperbolicity and the existence of Cauchy surfaces, and hence we have given an alternative, and more fundamental, definition of the $\textit{characteristic conoid}$ that derives directly from the causal structure of space-time. To conclude our argument, we have studied, as an application of Riemann's kernel, the axisymmetric black hole collisions at the speed of light. More precisely, we have analyzed the Green function for the perturbative field equations by studying the corresponding second-order hyperbolic operator with variable coefficients. Then, we have seen that the inversion of the hyperbolic operator for the inhomogeneous wave equations occurring in the perturbative analysis can be accomplished with the help of the Riemann integral representation, after solving the equation for the Riemann function.
Hence, it is necessary to solve a characteristic initial-value problem for a homogeneous hyperbolic equation in canonical form in two independent variables, for which we have developed formulae to be used for the numerical solution with the help of a finite-differences scheme.
\section{Introduction} In this paper we aim at solving an inverse problem concerning a mass moving in a bounded domain with finite velocity. We assume that the mass moves following an unknown velocity field and that the evolution of the mass density can be described by an unknown PDE. The input data of the problem are given by some snapshots of the mass distribution at certain times, while the sought information is the velocity field that drives the mass along its displacement. The basic idea can be summarized as follows: given two snapshots of the mass distribution at two instants of time, we want to understand where each portion of the mass (which is assumed to be conserved from one instant of time to the other) is transported from/to, i.e.\ how the first spatial distribution is rearranged in the second one. The goal is pursued by computing a numerical approximation of the Wasserstein distance (also known as earth mover's distance or Mallows distance) between the two consecutive density profiles, specifying a suitable cost function which measures the ``energy'' consumed by the system for moving forward. The computation of the Wasserstein distance gives, as a by-product, the minimum-cost mass flow from the first to the second configuration, i.e.\ how the mass distributes in space and time. Despite the good preliminary results obtained by the above-described Wasser\-stein-based approach \cite{balzotti2018IFAC, zhu2018IJGIS}, the algorithm is found to be excessively expensive both in terms of CPU time and memory requirements. This fact strongly restricts the applicability of the method. To overcome this limitation, in this paper we propose to couple the method with the Dynamic Mode Decomposition (DMD): a data-driven technique that takes in input the snapshots of the mass distribution and returns an analytical approximation of the dynamics underlying the mass transfer.
More precisely, it provides a system of ODEs which describes the evolution of the mass at any point of the domain. Solving the ODEs, we are able to recover the mass distribution \emph{at any time}, thus increasing at will the number of available snapshots or, analogously, decreasing at will the time frame between them. Controlling the time frame between two consecutive snapshots is the key to simplifying the computation of the Wasserstein distance and makes the computation of the flows feasible even on large domains. Finally, a real-world application of the proposed methodology is illustrated. We are interested in inferring activity-based human mobility flows from mobile phone data. We assume that mobile devices are not singularly tracked, but their logs are aggregated in order to obtain the total number of users in a given area. In this way we get the density profiles (i.e.\ the spatial distribution) of people in a given area at various instants of time. The dataset we have at our disposal is provided by the Italian telecommunication company TIM. The time frame between two consecutive snapshots is 15 minutes. As before, the goal is to ``assign a direction'' to the presence data. In fact, the mere representation of the time-varying density of people clearly differentiates attractive from repulsive or neutral areas, but does not provide any information about the directions of the flows of people. In other words, we are interested in a ``where-from-where-to'' type of information, which reveals travel flows and patterns of people, providing a sort of \emph{origin-destination} matrix. \paragraph{Paper organization} The paper is organized as follows. In Section \ref{sec:background} we present the DMD method and the Wasserstein distance. In Section \ref{sec:coupling} we show how to couple those methods to obtain an efficient algorithm. In Section \ref{sec:TIM} we apply the proposed approach to real-life data. In Section \ref{sec:conclusions} we draw our conclusions.
Finally, in Appendix \ref{sec:toyexampleDMD} we apply the DMD method to the viscous Burgers' equation. \section{Mathematical background}\label{sec:background} In this section we recall the building blocks of the methodology proposed in the paper, namely the DMD method used to build a data-driven model, and the Wasserstein distance used to determine the transport map driving the moving mass. \subsection{DMD method}\label{sec:introDMD} DMD is a data-driven method capable of providing accurate assessments of the spatio-temporal coherent structures in a given complex system, or short-time future estimates of such a system. Although a complete list of references for DMD goes beyond the scope of this work, we would like to mention \cite{BN15, HH18b, HH18a, S10,TCBN14}. In the current work we use the DMD algorithm as in \cite{TRLBK14}. To begin, we suppose we are given a set of data $\mathcal{X}=\{{\bf y}(t_0),\ldots,{\bf y}(t_{n_{\textup{f}}})\}$ for some time instances $\{t_j\}_{j=0}^{n_{\textup{f}}}$ with ${\bf y}(t_j)\in\mathbb R^N, j=0, \ldots, n_{\textup{f}}$, and $\Delta t = t_{j+1}-t_j$ for $j=0,\dots,n_{\textup{f}}-1$. The goal of the method is to build a mathematical model upon the dataset $\mathcal{X}\in\mathbb R^{N\times (n_{\textup{f}}+1)}$. The DMD procedure thus constructs the approximate linear evolution $\widehat{\bf y}(t)$ for the dataset $\mathcal{X}$ exploiting its low-rank structure: \begin{equation} \frac{d \widehat{\bf y}}{dt} = \widehat{\bf A} \widehat{\bf y} \label{eq:dA} \end{equation} where $\widehat{\bf A}\in\mathbb R^{N\times N}$ is unknown, $\widehat{\bf y}(0) = \widehat{\bf y}_0$, and the solution has the form \begin{equation} \widehat{\bf y}(t)= \sum_{i=1}^r \beta_i {\boldsymbol{\psi}}_i \exp(\omega_i t) \, , \label{eq:omegaj} \end{equation} where $r<N$, and ${\boldsymbol{\psi}}_i$ and $\omega_i$ are the eigenvectors and eigenvalues of the unknown matrix $\widehat{\bf A}$.
The coefficients $\beta_i$ of the vector ${\boldsymbol \beta}$ can be determined from the initial data. For example, at $t=t_0$ we have ${\bf y}(t_0)={\bf y}_0$ so that \eqref{eq:omegaj} gives ${\boldsymbol \beta}={\bf \Psi}^\dag {\bf y}_0$, where ${\bf \Psi}$ is a matrix comprised of the DMD modes $\boldsymbol{\psi}_i$ and $\dag$ denotes the Moore-Penrose pseudo-inverse. To compute the matrix $\widehat{\bf A}$, we first split the dataset into two snapshot matrices \begin{equation}\label{inp_out} {\bf Y} \!=\! \begin{bmatrix} \vline & \vline & & \vline \\ {\bf y}(t_0) & {\bf y}(t_1) & \cdots & {\bf y}(t_{n_{\textup{f}}-1})\\ \vline & \vline & & \vline \end{bmatrix}, \hspace{0.1in} {\bf Y}' \!=\! \begin{bmatrix} \vline & \vline & & \vline \\ {\bf y}(t_1) & {\bf y}(t_2) & \cdots & {\bf y}(t_{n_{\textup{f}}})\\ \vline & \vline & & \vline \end{bmatrix} \end{equation} and suppose the following linear relation holds true: \begin{equation} {\bf Y}'={\bf AY}, \end{equation} where ${\bf A}:=\exp(\widehat{\bf A}\Delta t)$. Specifically, we assume that ${\bf y}(t_j)$ is an initial condition to obtain ${\bf y}(t_{j+1})$, i.e. its corresponding output after some prescribed evolution time $\Delta t>0$. Thus, the DMD method computes the best linear operator ${\bf A}$ relating the matrices above: \begin{equation} {\bf A} = {\bf Y}' {\bf Y}^\dag. \label{eq:newDMD} \end{equation} We refer to ${\bf Y}$ and ${\bf Y}'$ as the input and output snapshot matrices, respectively. The DMD algorithm aims at optimally constructing the matrix ${\bf A}$ so that the error between the true and approximate solution is small in a least-squares sense, i.e. $\| {\bf y}(t) - \widehat{\bf y}(t) \| \ll 1$. Of course, the optimality of the approximation holds only over the sampling window where ${\bf A}$ is constructed, but the approximate solution can be used to make future state predictions, and to decompose the dynamics into various time-scales.
The matrix ${\bf A}$ is often highly ill-conditioned and, when the state dimension $N$ is large, the aforementioned matrix may even be intractable to analyze directly. Instead, DMD circumvents the eigen-decomposition of ${\bf A}$ by considering a low-rank representation in terms of a matrix $\widehat{\bf A}$ projected with the Proper Orthogonal Decomposition (POD). Although the description of the POD method goes beyond the scope of this paper, we recall that the POD projection solves the following optimization problem \begin{equation}\label{pbmin} \min_{ {\boldsymbol{\varphi}}_1,\ldots,{\boldsymbol{\varphi}}_r\in\mathbb R^N} \sum_{j=0}^{n_{\textup{f}}-1} \left\|{\bf y}(t_j)-\sum_{i=1}^r \langle {\bf y}(t_j),{\boldsymbol{\varphi}}_i\rangle{\boldsymbol{\varphi}}_i\right\|^2\quad \mbox{such that }\langle {\boldsymbol{\varphi}}_i,{\boldsymbol{\varphi}}_j\rangle=\delta_{ij}, \end{equation} where $\{{\boldsymbol\varphi}_i\}_{i=1}^r$ are the POD basis vectors. The solution of the optimization problem \eqref{pbmin} is obtained by means of a Singular Value Decomposition (SVD) of the dataset ${\bf Y}$, where the first $r\ll N$ columns of the left singular vectors form the required POD basis. We refer to \cite{Vol11} for a complete description of the POD method. We also mention that the POD method is equivalent to Principal Component Analysis (PCA) or the Karhunen-Lo\`eve expansion in other contexts (see e.g. \cite{J02,P01}). The exact DMD algorithm proceeds as follows \cite{TRLBK14}: first, we collect data ${\bf Y, Y'}$ as in \eqref{inp_out} and compute the reduced, or economy, singular value decomposition of ${\bf Y}$, $${\bf Y}={\bf U}{\bf \Sigma}{\bf V}^T.$$ We note that the use of the economy SVD is suggested since the matrices ${\bf Y}, {\bf Y}'\in\mathbb R^{N\times n_{\textup{f}}}$ with $N\gg n_{\textup{f}}$, and the economy ${\bf U}\in\mathbb R^{N\times r}$ is sufficient to provide the same approximation as the regular SVD given the limited amount of snapshots.
Furthermore, the DMD typically makes use of low-rank structure so that the total number of modes, $r\ll N$, allows for dimensionality reduction of the dynamical system. Then, we compute the least-squares fit ${\bf A}$ that satisfies ${\bf Y}'={\bf AY}$ and project onto the POD modes ${\bf U}$. Specifically, the Moore-Penrose pseudo-inverse of ${\bf Y}$ allows us to compute ${\bf A}={\bf Y}'{\bf Y}^\dag$, where the Moore-Penrose algorithm provides the least-squares fitting procedure. In terms of its low-rank projection, this yields $${\bf \widehat{A}}={\bf U}^T {\bf AU}={\bf U}^T{\bf Y}'{\bf V}{\bf \Sigma}^{-1},$$ and then we compute the eigen-decomposition of ${\bf \widehat{A}}\in\mathbb R^{r\times r}$, $${\bf\widehat A}{\bf W}={\bf W \Lambda},$$ where ${\bf \Lambda}$ is the diagonal matrix of the DMD eigenvalues. Finally, the DMD modes ${\bf \Psi}^{\mbox{\tiny DMD}}$ are given by \begin{equation}\label{dmd_basis} {\bf\Psi}^{\mbox{\tiny DMD}}={\bf Y}'{\bf V}{\bf \Sigma}^{-1}{\bf W}. \end{equation} The algorithm is summarized in Algorithm \ref{Alg_DMD}. \begin{algorithm}[H] \caption{Exact DMD} \label{Alg_DMD} \begin{algorithmic}[1] \REQUIRE Snapshots $\{{\bf y}(t_0),\ldots,{\bf y}(t_{n_{\textup{f}}})\}$, Time step $\Delta t$. \STATE Set ${\bf Y}=[{\bf y}(t_0),\dots, {\bf y}(t_{n_{\textup{f}}-1})]$ and ${\bf Y'}=[{\bf y}(t_1),\dots, {\bf y}(t_{n_{\textup{f}}})]$. \STATE Compute the reduced SVD of rank $r$ of ${\bf Y}$, ${\bf Y}={\bf U}{\bf \Sigma}{\bf V}^T$. \STATE Define $\widehat{{\bf A}}:={\bf U}^T{\bf Y}'{\bf V}{\bf \Sigma}^{-1}$. \STATE Compute eigenvalues and eigenvectors of $\widehat{{\bf A}} {\bf W}={\bf W}{\bf \Lambda}$. \STATE Set ${\bf \Psi}^{\mbox{\tiny DMD}}={\bf Y}'{\bf V}{\bf \Sigma}^{-1}{\bf W}$, as in \eqref{dmd_basis}. \STATE Set $\omega_i = \frac{\log(\lambda_i)}{\Delta t}$ for use in \eqref{eq:omegaj}. \end{algorithmic} \end{algorithm} In Appendix \ref{sec:toyexampleDMD} we provide a numerical experiment to show the effectiveness of the DMD method.
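Algorithm \ref{Alg_DMD} can be written down compactly in a few lines of NumPy. The sketch below is our illustration (function names are ours; the paper does not prescribe an implementation): it returns the DMD modes, the continuous-time eigenvalues $\omega_i$ and the amplitudes ${\boldsymbol\beta}$, so that the reconstruction \eqref{eq:omegaj} is a simple matrix-vector product:

```python
import numpy as np

def exact_dmd(snapshots, dt, r):
    """Exact DMD following Algorithm 1 (a minimal sketch).

    snapshots: (N, n_f + 1) array whose columns are y(t_0), ..., y(t_nf).
    Returns DMD modes Psi, continuous-time eigenvalues omega, amplitudes beta.
    """
    Y, Yp = snapshots[:, :-1], snapshots[:, 1:]
    # economy SVD of the input snapshot matrix, truncated to rank r
    U, S, Vh = np.linalg.svd(Y, full_matrices=False)
    U, S, V = U[:, :r], S[:r], Vh[:r].conj().T
    # low-rank projection A_hat = U^T Y' V Sigma^{-1}
    A_hat = U.conj().T @ Yp @ V / S
    lam, W = np.linalg.eig(A_hat)
    Psi = Yp @ V @ np.diag(1.0 / S) @ W      # DMD modes, eq. (dmd_basis)
    omega = np.log(lam) / dt                 # continuous-time eigenvalues
    beta = np.linalg.pinv(Psi) @ snapshots[:, 0]   # beta = Psi^dag y_0
    return Psi, omega, beta

def dmd_reconstruct(Psi, omega, beta, t):
    """Evaluate the DMD approximation y_hat(t) = sum_i beta_i psi_i e^{omega_i t}."""
    return (Psi * np.exp(omega * t)) @ beta
```

On snapshots generated by an exactly linear flow, e.g. ${\bf y}(t) = (e^{-t}, e^{-2t})$, the routine recovers the frequencies $\omega = \{-1, -2\}$ and reproduces the state at any intermediate time, which is precisely the property exploited in Section \ref{sec:coupling} to refine the time frame between snapshots.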
We compute the data from a nonlinear PDE, i.e. the viscous Burgers' equation. \subsection{Wasserstein distance and optimal mass transfer problem}\label{sec:wassersteindistance} The notion of Wasserstein distance is strictly related to the Monge--Kanto\-rovich optimal mass transfer problem \cite{villani2008SSBM}, which can be easily explained as follows: given a sandpile with mass distribution $\rho^\textup{i}$ and a pit with equal volume and mass distribution $\rho^\textup{f}$, find a way to minimize the cost of transporting sand into the pit. The cost for moving mass depends on both the distance from the point of origin to the point of arrival and the amount of mass moved along that path. We are interested in minimizing this cost by finding the optimal path to transport the mass from the initial to the final configuration. Given two density functions $\rho^\textup{i}, \rho^\textup{f}:\Omega\to\mathbb R$ for some bounded $\Omega\subset\mathbb R^n$ such that $\int_{\mathbb R^{n}}\rho^{\textup{i}}=\int_{\mathbb R^{n}}\rho^{\textup{f}}$, we define the $L^p$-Wasserstein distance between $\rho^\textup{i}$ and $\rho^\textup{f}$ as \begin{equation}\label{def:WassDist} W_p(\rho^\textup{i},\rho^\textup{f})=\bigg(\min_{T\in\mathcal{T}}\int_{\Omega}c(\bm{\xi},T(\bm{\xi}))^p \, \rho^\textup{i}(\bm{\xi})d\bm{\xi}\bigg)^{\frac{1}{p}} \end{equation} where \begin{equation*} \mathcal{T}:=\Biggr\{T\colon\Omega\to\Omega \, : \, \int\displaylimits_B \rho^\textup{f}(\bm{\xi})d\bm{\xi} =\int\displaylimits_{\{\bm{\xi}:T(\bm{\xi})\in B\}} \rho^\textup{i}(\bm{\xi})d\bm{\xi}, \quad \forall \, B\subset\Omega\Biggr\} \end{equation*} and $c:\Omega\times\Omega\to\mathbb R$ is a given cost function, which defines the cost of transferring a unit mass between any two points in $\Omega$. Note that $\mathcal{T}$ is the set of all possible maps which transfer the mass from one configuration to the other.
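As a simple worked example (ours, added for illustration): suppose the final density is a rigid translation of the initial one, $\rho^{\textup{f}}(\bm{\xi}) = \rho^{\textup{i}}(\bm{\xi} - \bm{a})$ for a fixed vector $\bm{a}$, with the Euclidean cost $c(\bm{\xi},\bm{\eta}) = \|\bm{\xi}-\bm{\eta}\|$ (and $\Omega$ large enough that the translation stays inside it). The translation map belongs to $\mathcal{T}$ and can be shown to be optimal, so that

```latex
\[
T^*(\bm{\xi}) = \bm{\xi} + \bm{a},
\qquad
W_p(\rho^{\textup{i}},\rho^{\textup{f}})
= \Big( \int_{\Omega} \|\bm{a}\|^p \, \rho^{\textup{i}}(\bm{\xi})\, d\bm{\xi} \Big)^{1/p}
= \|\bm{a}\|\, M^{1/p},
\qquad
M := \int_{\Omega} \rho^{\textup{i}}(\bm{\xi})\, d\bm{\xi},
\]
```

which reduces to $W_p = \|\bm{a}\|$ when the densities are normalized to unit mass. This already illustrates the remark below: the distance itself only records how far the mass travels, while the map $T^*$ records where each portion of it goes.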
It is important to note here that we are not really interested in the actual value of the Wasserstein distance $W_p$; instead, we look for the \emph{optimal map} $T^*$ which realizes the $\arg\min$ in \eqref{def:WassDist}, and represents the paths along which the mass is transferred. \subsection{Numerical approximation of the Wasserstein distance}\label{sec:hitchcock} A direct numerical approximation of definition \eqref{def:WassDist} is infeasible, but a discrete approach is still possible. Indeed, we can resort to classical problems (see Hitchcock's paper \cite{hitchcock1941SAM}) and methods (see e.g., \cite[Sec.\ 6.4.1]{santambrogio2015SPRINGER} and \cite[Chap.\ 19]{sinha2005ELSEVIER}) to recast the original mass transfer problem in the framework of linear programming (LP). We also refer to \cite{briani2017CMS} for a recent application of this methodology to traffic flow problems. The idea consists in approximating the set $\Omega$ with a structured grid with $N$ square cells $C_1,\ldots,C_N$, as is commonly done for the numerical approximation of PDEs. We denote by $\Delta x$ the length of each side of the cells. Then, we define a graph $\mathcal{G}$ whose nodes coincide with the centers of the $N$ cells. The graph's edges are defined in such a way that every node is directly connected to every other node, including itself. Introducing a numerical error (controlled by the parameter $\Delta x$), we are allowed to assume that, $\forall j=1,\ldots, N$, all the mass distributed in the cell $C_j$ is concentrated in its center, i.e.\ in a node of the graph. We come up with an initial mass $m^\textup{i}_j:=\int_{C_j}\rho^\textup{i} dx$ and a final mass $m^\textup{f}_j:=\int_{C_j}\rho^\textup{f} dx$, for $j=1,\dots,N$, distributed on the graph nodes. Now, we simply aim at optimally rearranging the first mass into the second one by moving it among the graph's nodes.
We denote by $c_{jk}$ the cost to transfer a unit mass from node $j$ to node $k$, and by $x_{jk}$ the (unknown) mass moving from node $j$ to node $k$. The problem is then formulated as \begin{equation*} \text{minimize }\mathcal{H}:= \sum_{j,k=1}^N c_{jk}x_{jk} \end{equation*} subject to \[\displaystyle\sum_k x_{jk}=m_j^\textup{i} \quad \forall j, \quad \displaystyle\sum_j x_{jk}=m_k^\textup{f} \quad \forall k \quad\text{and}\quad x_{jk}\geq 0.\] Defining \begin{flalign*} &{\bf x} = (x_{11}, x_{12},\dots,x_{1N},x_{21},\dots,x_{2N},\dots,x_{N1},\dots,x_{NN})^\textup{T},\\ &{\bf c}= (c_{11}, c_{12},\dots,c_{1N},c_{21},\dots,c_{2N},\dots,c_{N1},\dots,c_{NN})^\textup{T},\\ &{\bf b} = (m^\textup{i}_1,\dots,m^\textup{i}_N,m^\textup{f}_1,\dots,m^\textup{f}_N)^\textup{T}, \end{flalign*} and the $2N\times N^2$ matrix \[\bf M =\begin{bmatrix} \boldsymbol\bbone_N & 0 & 0 & \dots & 0\\[0.3em] 0 & \boldsymbol\bbone_N & 0 & \dots & 0\\[0.3em] 0 & 0 & \boldsymbol\bbone_N & \dots & 0\\[0.3em] \vdots & \vdots & \vdots & \ddots & \vdots\\[0.3em] 0 & 0 & 0 & \dots & \boldsymbol\bbone_N\\[0.3em] {\bf I}_N &{\bf I}_N &{\bf I}_N &{\bf I}_N &{\bf I}_N\\[0.3em] \end{bmatrix},\] where ${\bf I}_N$ is the $N\times N$ identity matrix and ${\boldsymbol\bbone_N}=\displaystyle\underbrace{(1\ 1 \dots \ 1)}_{N\text{ times}}$, our problem is written as a standard LP problem: \begin{equation}\label{LPglobal} \begin{tabular}{ r l } $\min$ & ${\bf c}^\textup{T} {\bf x}$, \\ \text{subject to} & ${\bf Mx}={\bf b}$, \\ & ${\bf x}\geq 0.$ \end{tabular} \end{equation} The result of the algorithm is a vector ${\bf x}^*:=\arg\min {\bf c}^\textup{T} {\bf x}$ whose elements $x^*_{jk}$ represent how much mass moves from node $j$ to node $k$ employing the minimum-cost mass rearrangement. 
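As an illustration of how problem \eqref{LPglobal} can be solved in practice (ours; the paper does not prescribe a solver), the constraint matrix ${\bf M}$ can be assembled exactly as above and handed to an off-the-shelf LP routine such as \texttt{scipy.optimize.linprog}. For clarity the sketch below builds ${\bf M}$ densely, which is only viable for small $N$:

```python
import numpy as np
from scipy.optimize import linprog

def optimal_mass_flow(mi, mf, centers):
    """Solve the global LP: move mass mi into mf at minimum total cost.

    mi, mf: length-N vectors of initial/final cell masses (equal totals).
    centers: (N, 2) array of cell centers; cost c_jk = Euclidean distance.
    Returns the N x N matrix x*[j, k] = mass moved from node j to node k.
    """
    N = len(mi)
    # cost vector c, ordered as (x_11, ..., x_1N, x_21, ..., x_NN)
    c = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1).ravel()
    # equality constraints: sum_k x_jk = mi_j and sum_j x_jk = mf_k
    M = np.zeros((2 * N, N * N))
    for j in range(N):
        M[j, j * N:(j + 1) * N] = 1.0   # row sums: outgoing mass of node j
        M[N + j, j::N] = 1.0            # column sums: incoming mass of node j
    b = np.concatenate([mi, mf])
    res = linprog(c, A_eq=M, b_eq=b, bounds=(0, None), method="highs")
    return res.x.reshape(N, N)
```

For two nodes a unit distance apart, with all the mass initially on the first node and finally on the second, the solver returns the obvious plan $x^*_{12} = 1$, everything else zero. The $2N \times N^2$ size of ${\bf M}$ visible here is exactly the scaling issue quantified in Remark \ref{dimensionGLP}.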
\begin{remark}\label{dimensionGLP} The dimension of the LP problem \eqref{LPglobal} is given by the dimensions of the matrix/vectors involved: $$ \dimvett {\bf x}= \dimvett {\bf c} = N^2, \qquad \dimvett {\bf b}=2N, \qquad \dimvett {\bf M}=2N^3. $$ \end{remark} \begin{remark} Hereafter we will refer to problem \eqref{LPglobal} as \emph{global}, in order to stress the fact that it is possible to move the mass from and to any node of the graph. \end{remark} \section{Coupling DMD and optimal mass transfer problem}\label{sec:coupling} In this section we describe how we can drastically reduce the size of the LP problem \eqref{LPglobal} by using the DMD method. The resulting algorithm will allow us to study the mass transfer problem on large domains. \subsection{Main idea}\label{sec:completeAlgorithm} The large size of the LP problem \eqref{LPglobal}, stressed in Remark \ref{dimensionGLP}, mainly comes from the fact that the mass is allowed to be transferred from any graph node to any other graph node. However, physical constraints prevent this from happening: assuming that the maximal velocity of the mass is $V_{\textup{max}}$ and denoting by $\Delta t$ the time frame from one snapshot to the following one, the maximal distance travelled is $V_{\textup{max}}\Delta t$. In order to reduce the size of the LP problem \eqref{LPglobal}, we restrict the set of all possible movements. The ideal time frame $\delta t$ of the snapshots would be the one which guarantees that the CFL-like condition (see, e.g., \cite[Chapter 14]{QV94}) \begin{equation}\label{CFL} V_{\textup{max}}\delta t \leq \Delta x \end{equation} holds true. Indeed, under condition \eqref{CFL} the \emph{mass is allowed to move only towards the adjacent cells or not move at all}.
Consider now a generic set of mass distributions $\mathcal{X}=\{{\bf m}(t_0),\ldots,{\bf m}(t_{n_{\textup{f}}})\}$, where $t_{j}=j\Delta t$, $j=0,\dots,n_{\textup{f}}$, and $\Delta t$ is the time frame of the snapshots of the set $\mathcal{X}$. \emph{A priori}, the time step $\Delta t$ does not necessarily satisfy condition \eqref{CFL}. In particular, a large distance in time between the snapshots means that there is not enough information for reducing the set of possible movements. However, with the DMD we are able to reconstruct the state of the system at any time instance, even if it is not provided in the original dataset. The coupling of the DMD and the optimal mass transfer problem is done in the following order: \begin{enumerate}[label=\emph{\roman*)}] \item We first fix $\delta t<\Delta t$ such that it satisfies condition \eqref{CFL} and then we reconstruct the solution for each $\widetilde{t}_{j}=j\delta t$ via DMD, using Algorithm \ref{Alg_DMD}. The new set of snapshots we work with is $\widetilde{\mathcal{X}}=\{{\bf m}(\widetilde{t}_{0}),\ldots,{\bf m}(\widetilde{t}_{\widetilde{n_{\textup{f}}}})\}$, where ${\bf m}(\widetilde{t}_{j})$ is computed from \eqref{eq:omegaj}, $j=0,\dots,\widetilde{n_{\textup{f}}}$. We observe that $\widetilde{n_{\textup{f}}}>n_{\textup{f}}$. \item We recover the flows from the new set $\widetilde{\mathcal{X}}$ by means of an approximation of the Wasserstein distance similar to the one of Section \ref{sec:hitchcock}, but with reduced size, as described in detail below. Note that, since $\widetilde{n_{\textup{f}}}> n_{\textup{f}}$, we have to solve more LP problems than with the global approach but, despite this, the new approach remains advantageous. \end{enumerate} Let us denote by $d$ the maximum number of neighbors per cell. The new unknown $\widetilde{x}_{jk}$, corresponding to the mass to be moved from node $j$ to node $k$, is defined only if $j$ and $k$ are adjacent or $j$ is equal to $k$.
Analogously we define the cost function $\widetilde{c}_{jk}$. We denote by $\bf \widetilde{x}$ the vector of the unknowns and by $\bf \widetilde{c}$ the vector associated to the cost function. We introduce the vector ${\bf s}$ whose components are indexes of nodes: the first components are the indexes of the nodes adjacent to node 1, then those adjacent to node 2, and so on until node $N$. Vectors ${\bf\widetilde{x}}$ and ${\bf\widetilde{c}}$ are ordered similarly to ${\bf s}$. Since the mass can move only towards a maximum of $d$ nodes, the dimensions of $\bf\widetilde{x}$ and $\bf\widetilde{c}$ are at most $d N$. We introduce the matrix \[ \bf \widetilde{M} = \begin{bmatrix} \bbone_{r_{1}} & 0 & 0 & \dots & 0\\[0.3em] 0 & \bbone_{r_{2}}& 0 & \dots & 0\\[0.3em] 0 & 0 & \bbone_{r_{3}} & \dots & 0\\[0.3em] \vdots & \vdots & \vdots & \ddots & \vdots\\[0.3em] 0 & 0 & 0 & \dots & \bbone_{r_{N}}\\[0.3em] &&\bf\widetilde{D} \end{bmatrix}, \] where ${\bf \bbone}_{r_{i}}=\displaystyle\underbrace{(1\ 1 \dots \ 1)}_{r_{i}\text{ times}}$, with $r_{i}$ the number of adjacent nodes for each node $i$, $r_{i}\leq d$, and ${\bf\widetilde{D}}_{ij}=1$ if the $j$-th element of ${\bf s}$ is equal to $i$, otherwise ${\bf\widetilde{D}}_{ij}=0$. Defining the vector ${\bf \widetilde{b}} = (m^\textup{i}_1,\dots,m^\textup{i}_N,m^\textup{f}_1,\dots,m^\textup{f}_N)^\textup{T}$, the LP problem becomes \begin{equation}\label{LPlocal} \begin{tabular}{ r l } $\min$ & ${\bf\widetilde c}^\textup{T} \bf\widetilde x$, \\ \text{subject to} & ${\bf\widetilde M\widetilde x}=\bf \widetilde b$, \\ & ${\bf \widetilde x}\geq 0.$ \end{tabular} \end{equation} \begin{remark}\label{dimension_global_pb} The dimension of the LP problem \eqref{LPlocal} is given by the dimensions of the matrix/vectors involved: $$ \dimvett {\bf\widetilde x}= \dimvett {\bf\widetilde c} \leq dN, \qquad \dimvett {\bf\widetilde b}=2N, \qquad \dimvett {\bf\widetilde M} \leq 2dN^2.
$$ \end{remark} \begin{remark}\label{def:global_pb} Hereafter we will refer to problem \eqref{LPlocal} as \emph{local}, in order to stress that the mass can be moved only between adjacent nodes of the graph. \end{remark} \subsection{A toy example for the complete algorithm: the advection equation}\label{sec:toyexamplefull} In this test we propose an example for the complete algorithm described in Section \ref{sec:completeAlgorithm}. Let us consider the advection equation: \begin{equation}\label{advection} \begin{cases} \partial_{t}u({\bf x},t) + {\bf v}\cdot\nabla u({\bf x},t) =0 &\quad {\bf x}\in \Omega, t\in[0,T]\\ u({\bf x},t) = 0 &\quad {\bf x}\in \partial\Omega\\ u({\bf x},0)=u_0({\bf x}) &\quad\, {\bf x}\in\Omega, \end{cases} \end{equation} with ${\bf x}=(x_1,x_2)$, $ \Omega = [-2,2]\times[-2,2]$, $u_0({\bf x})=\max(0.5-x_1^2-x_2^2,0)$ and constant velocity ${\bf v}=(0.5,0.5)$. It is well known that the analytical solution of \eqref{advection} is $u({\bf x},t)=u_0({\bf x}-{\bf v}t)$, provided that we set $T$ small enough to have inactive boundary conditions, as is the case here. Hereafter we denote by \emph{reference solution} the analytical solution $u$ of \eqref{advection}. \begin{figure}[h!] \centering \includegraphics[scale=0.14]{grafici/trasporto1} \includegraphics[scale=0.14]{grafici/trasporto2} \caption{Reference solution of equation \eqref{advection} at time $t=0$ on the left and time $t=2$ on the right.} \label{fig:trasportoPlot} \end{figure} Starting from some snapshots of the solution, we aim to reconstruct the velocity field driving the dynamics, i.e.\ the vector ${\bf v}=(0.5,0.5)$. In doing this, we will also compare the global and the local problem in terms of CPU time, see \eqref{LPglobal} and \eqref{LPlocal} respectively. We choose a time step $\delta t$, collect snapshots with a larger temporal step size $\Delta t = \kappa\delta t$, with $\kappa>1$, and reconstruct the solution via the DMD method.
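The reconstruction step performed by Algorithm \ref{Alg_DMD} is the standard exact DMD. A minimal sketch in our own notation (snapshots stored as the columns of a matrix, uniformly spaced in time) of how the state can be evaluated at an arbitrary time $t$, as in \eqref{eq:omegaj}:

```python
import numpy as np

def dmd_reconstruct(X, dt, t, r):
    """Exact DMD: fit r modes to the snapshot matrix X (one snapshot per
    column, spacing dt) and evaluate the reconstruction at time t."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, S, Vh = np.linalg.svd(X1, full_matrices=False)
    U, S, Vh = U[:, :r], S[:r], Vh[:r, :]               # rank-r truncation
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / S)
    lam, W = np.linalg.eig(A_tilde)                     # discrete eigenvalues
    Phi = X2 @ Vh.conj().T @ np.diag(1.0 / S) @ W       # DMD modes
    omega = np.log(lam) / dt                            # continuous frequencies
    # amplitudes fitted on the first snapshot
    b = np.linalg.lstsq(Phi, X[:, 0].astype(complex), rcond=None)[0]
    return (Phi @ (b * np.exp(omega * t))).real
```

Because the frequencies $\omega$ are continuous in time, the reconstruction can be evaluated between the original snapshots, which is exactly what is needed to pass from the coarse step $\Delta t$ to the fine step $\delta t$.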
We note that here the snapshots are computed from the analytical solution of \eqref{advection}. In particular, we work on a grid $40\times40$ and we choose $T=2$, $\Delta x = 0.1$, $\delta t = 0.05$ and $\Delta t = 2\,\delta t$. We observe that, since $V_{\textup{max}}=\norm{\bf v}=\sqrt{2}/2$, this choice of $\delta t$ fulfills condition \eqref{CFL}. The number of snapshots is 40 and the rank $r$ in Algorithm \ref{Alg_DMD} for the DMD reconstruction is 20. Moreover, we identify the nodes of the graph $\mathcal{G}$ used in the numerical approximation of the Wasserstein distance with the cells of the grid. \paragraph{Choice of the cost function} Since the cost function represents the ``price to pay'' for moving the mass from one node of the graph to another, the most intuitive definition of $c$ is the Euclidean distance in $\mathbb R^2$. This choice proved to be unsuitable for the global problem, see \cite{balzotti2018IFAC}. Indeed, since the global algorithm allows arbitrary movements between the nodes of the graph, using the Euclidean distance for the function $c$ we lose the uniqueness of the optimal transfer map ${T}^*$. To see this, let us assume that we have to shift three unit masses one cell to the right. In the picture on the left of Figure \ref{fig:uniqueness} we move each of the three unit masses one cell to the right, while in the picture on the right we move only the first mass three cells to the right. The Wasserstein distance between the two configurations is clearly the same and equal to three. \begin{figure}[h!] \centering \includegraphics[scale=1.1]{grafici/tre_movimenti} \includegraphics[scale=1.1]{grafici/un_movimento} \caption{Three small movements versus one large movement.} \label{fig:uniqueness} \end{figure} The solution proposed in \cite{balzotti2018IFAC} to fix this issue was to force the minimization algorithm to primarily select the small movements by penalizing the large ones.
To get this, the cost function was defined as \begin{equation} c(\boldsymbol{\xi}_1,\boldsymbol{\xi}_2)=\|\boldsymbol{\xi}_1-\boldsymbol{\xi}_2\|_{\mathbb R^2}^{1+\varepsilon}, \label{eq:costoOld} \end{equation} where $\boldsymbol{\xi}_1$ and $\boldsymbol{\xi}_2$ are the coordinates of the nodes of the graph and $\varepsilon>0$ is a small parameter which accounts for the penalization.\\ In Figure \ref{fig:trasporto} we show the level sets of the solution to \eqref{advection} together with some arrows indicating the reconstructed velocity field $\bf v$. More precisely, in the left column of the figure we show the results obtained with the local algorithm \eqref{LPlocal}, in the central column those obtained with the global algorithm with the Euclidean distance, and in the right column those obtained with the global algorithm with the cost function defined in \eqref{eq:costoOld} (with $\varepsilon = 0.1$). Similarly, the top panels show the results obtained with the reference solution, whereas the bottom panels show those obtained with the DMD solution. \begin{figure}[h!] \centering \subfloat[][{Local approach with reference solution.}] {\includegraphics[scale=0.186]{grafici/exactNew}}\quad \subfloat[][{Global approach with reference solution and Euclidean distance.}] {\includegraphics[scale=0.186]{grafici/exactAlpha1}}\quad \subfloat[][{Global approach with reference solution and cost defined in \eqref{eq:costoOld}.}] {\includegraphics[scale=0.186]{grafici/exactOld}}\\ \subfloat[][{Local approach with DMD solution.}] {\includegraphics[scale=0.186]{grafici/dmdNew}}\quad \subfloat[][{Global approach with DMD solution and Euclidean distance.}] {\includegraphics[scale=0.186]{grafici/dmdAlpha1}}\quad \subfloat[][{Global approach with DMD solution and cost defined in \eqref{eq:costoOld}.}] {\includegraphics[scale=0.186]{grafici/dmdOld}}\\ \caption{Reconstructed flows between $t_1=0.725$ and $t_2 = t_1+\delta t$ superimposed on the level sets of the solution to \eqref{advection} at $t_2$. Left column: local algorithm.
Central column: global algorithm with $c$ equal to the Euclidean distance. Right column: global algorithm with $c$ defined in \eqref{eq:costoOld}. Top row: the reference solution to \eqref{advection} is used for computation. Bottom row: the DMD reconstruction of the solution to \eqref{advection} is used for computation. } \label{fig:trasporto} \end{figure} The local algorithm and the global one with the correction in \eqref{eq:costoOld} are able to reconstruct the velocity field ${\bf v}=(0.5,0.5)$ with good accuracy. The global algorithm with the Euclidean distance, instead, is less precise, since the optimal flow does not correspond to the velocity field. Moreover, the algorithm based on the DMD reconstruction of the solution introduces small oscillations in the solution to \eqref{advection}. This is expected in such hyperbolic problems, since the decay of the singular values of the dataset is very slow and the initial condition is non-smooth. However, such oscillations do not affect the reconstruction of the flow. \medskip To further validate our approach we compute the numerical error of the proposed algorithm. Since the results obtained with the global algorithm \eqref{LPglobal} with the Euclidean distance as cost function are the least accurate, we focus only on the other two approaches. For each time step $t_{n}=n\delta t$ we define ${\bf x}^*_{E}(t_{n})$ and ${\bf\widetilde{x}}^*_{E}(t_{n})$ as the solutions to the LP problems \eqref{LPglobal} and \eqref{LPlocal} respectively at time $t_{n}$, when the known terms ${\bf b}(t_{n})$ and ${\bf\widetilde{b}}(t_{n})$ are chosen as the reference solution to \eqref{advection} at time $t_{n}$. Analogously, ${\bf x}^*_{D}$ and ${\bf\widetilde{x}}^*_{D}$ are the solutions when the known terms are obtained with the DMD method.
The vectors ${\bf x}^*_{E}(t_{n})$, ${\bf\widetilde{x}}^*_{E}(t_{n})$, ${\bf x}^*_{D}(t_{n})$ and ${\bf\widetilde{x}}^*_{D}(t_{n})$, for $n=1,\dots,\floor{\frac{T}{\delta t}}+1$, are finally collected as the columns of the matrices ${\bf X}^*_{E}$, ${\bf\widetilde{X}}^*_{E}$, ${\bf X}^*_{D}$ and ${\bf\widetilde{X}}^*_{D}$ respectively. We define the errors \begin{equation}\label{eq:erroreTrasporto1} E^{G} := \frac{\norm{{\bf X}^*_{E}-{\bf X}^*_{D}}_F}{\norm{{\bf X}^*_{E}}_F}, \qquad E^{L} := \frac{\norm{{\bf\widetilde{X}}^*_{E}-{\bf\widetilde{X}}^*_{D}}_F}{\norm{{\bf\widetilde{X}}^*_{E}}_F}, \end{equation} where $\norm{\cdot}_{F}$ is the Frobenius norm. In Table \ref{tab:trasporto} we compare the errors defined in \eqref{eq:erroreTrasporto1} obtained from the simulations on a grid with $N\times N$ nodes, for $N=20,\,30$ and $40$. As we can see from the table, increasing the number of nodes reduces the space step $\Delta x = 4/N$ and the time step $\Delta t=\Delta x/2$, and the error between the reference solution and the DMD reconstruction decreases accordingly. Moreover, the error obtained with the local algorithm is significantly smaller than the one obtained with the global approach. \begin{table}[tbhp] {\footnotesize \caption{Comparison of the errors defined in \eqref{eq:erroreTrasporto1}.}\label{tab:trasporto} \begin{center} \begin{tabular}{|c|l|l|c|l|}\hline \multicolumn{1}{|c|}{$N$} & \multicolumn{1}{c|}{$\Delta x$} &\multicolumn{1}{c|}{$\Delta t$} & \multicolumn{1}{c|}{$E^G$} & \multicolumn{1}{c|}{$E^L$}\\ \hline 20 & 0.20 & 0.100 & 0.185 & 0.030\\ 30 & 0.13 & 0.067 & 0.159 & 0.028\\ 40 & 0.10 & 0.050 & 0.122 & 0.020\\ \hline \end{tabular} \end{center} } \end{table} Finally, in Table \ref{tab:trasportoCosto} we compare the computational time between the global approach \eqref{LPglobal}, with the cost function as in \eqref{eq:costoOld}, and the local approach \eqref{LPlocal} with respect to the number of nodes of the graph.
We observe that the local algorithm is always faster than the global one. The difference between the two approaches becomes more relevant when we refine the grid by increasing the number of nodes, and thus the number of time steps. Specifically, for a grid $40\times 40$, the local algorithm required a few seconds whereas the global one required more than three hours. \begin{table}[tbhp] {\footnotesize \caption{Computational time.}\label{tab:trasportoCosto} \begin{center} \begin{tabular}{|c|r|r|r|r|} \hline $N$ & \bf Global Exact & \bf Global DMD & \bf Local Exact & \bf Local DMD \\ \hline 20 & 18.10 s &18.89 s & 0.28 s &0.36 s\\ 30 & 11 min & 11 min & 0.93 s & 1.41 s\\ 40 & 3 h 6 min & 3 h 7 min & 2.40 s & 4.85 s\\ \hline \end{tabular} \end{center} } \end{table} \section{Application to real mobile phone data}\label{sec:TIM} In this section we focus on a specific application of the proposed approach. The real dataset gives information about the spatial distribution of people in a large populated area. The goal is to understand the travel flows of people, focusing in particular on recurring patterns and daily flows of commuters. \subsection{Dataset} The Italian telecommunication company TIM provides estimates of mobile phone presence in a given area in raster form: the area under analysis is split into a number of elementary territory units (ETUs) of the same size (about 130 $\times$ 140 m$^2$ in urban areas). The estimation algorithm does not recognize individual users and does not track them using GPS. It simply counts the number of phones attached to network nodes and, knowing the location and radio coverage of the nodes, estimates the number of TIM users within each ETU at any time. TIM now has a market share of 30\% with about 29.4 million mobile lines in Italy (AGCOM, Osservatorio sulle comunicazioni 2/2017). The data we considered refer to the area of the province of Milan (Italy), which is divided into 198,779 ETUs, distributed in a rectangular grid 389 $\times$ 511.
Data span six months (February, March and April 2016 and 2017). The entire period is divided into time intervals of 15 minutes; therefore, we have 96 data points per day per ETU. In Figure \ref{fig:presenze3D} we graphically represent presence data at a fixed time. We observe that the peak of presences corresponds to the Milan city area. % \begin{figure}[h!] \begin{center} \includegraphics[width=8cm]{grafici/datiPresenze3D} \caption{3D-plot of the number of TIM users in each ETU of Milan's province on April 18, 2017.} \label{fig:presenze3D} \end{center} \end{figure} The left panel of Figure \ref{fig:presenze} shows the presences in the province of Milan during a typical working day. The curve decreases during the night, increases in the day-time and decreases again in the evening. These variations are due to two main reasons: first, the arrival to and departure from Milan's province of visitors and commuters; second, the fact that when a mobile phone is switched off or is not communicating for more than six hours, its localization is lost. The presence value that best represents the population of the province is observed around 9 pm, when an equilibrium between traveling and phone usage is reached. This value changes between working days and weekends, but it is always of the order of $1.3\times10^6$. \begin{figure}[h!] \begin{center} \includegraphics[scale=.27 ]{grafici/datiProvinciaGiornoFeriale} \includegraphics[scale = .27]{grafici/datiProvinciaMese} \caption{Trend of presences in the province of Milan during a typical working day (left). Trend of presences in the province of Milan during April 2017 (right).} \label{fig:presenze} \end{center} \end{figure} On the right panel of Figure \ref{fig:presenze} we show the trend of presence data during April 2017. We can observe a cyclical behavior: on working days the number of presences in the province is significantly higher than during the weekends.
It is interesting to note the presence of two low-density periods on April 15-18 and on April 22-26, 2017, determined respectively by Easter and by the long weekend of Italy's Liberation Day holiday. \subsection{DMD approach on TIM data} As explained in Section \ref{sec:introDMD}, we can reconstruct the presence data in each cell at any time. We denote by ${\bf m}(t^\textup{i})\in\mathbb R^N$ the vector containing the number of people present in the $N$ cells at a certain quarter of an hour and by ${\bf m}(t^\textup{i}+15\min)\in\mathbb R^N$ the same quantities at the consecutive time step. Applying the DMD method we are able to calculate ${\bf m}(t)\in\mathbb R^N$ for any $t\in[t^\textup{i},t^\textup{i}+15\min]$, see \eqref{eq:omegaj}. We also note that DMD can be applied to higher-dimensional datasets through randomized methods, as explained in \cite{AK19}. To validate the DMD approach on the TIM dataset we study the daily error using only half of the data at our disposal in the DMD reconstruction. More precisely, we denote by ${\bf P^{\mathrm{data}}}$ the matrix whose rows correspond to the real data of presences stored every $15$ minutes. Then, we reconstruct the data of presences every minute with the DMD algorithm, using snapshots every 30 minutes. In other words, we exploit only one row out of two of ${\bf P^{\mathrm{data}}}$ to build the matrix ${\bf {\widetilde{P}}}^{DMD}$, which collects the reconstructed data every minute. Since a day contains 96 quarters of an hour and 1440 minutes, ${\bf P^{\mathrm{data}}}$ is a matrix $96\times N$, while ${\bf {\widetilde{P}}}^{DMD}$ is a matrix $1440\times N$.
Finally, to compare the real data with the DMD reconstruction, we extract from ${\bf {\widetilde{P}}}^{DMD}$ the rows corresponding to the original interval of $15$ minutes into the matrix ${\bf\widetilde{P}}^{\mathrm{DMD}}$, of dimensions $96\times N$, and then we define the error as: \begin{equation} E = \frac{\norm{{\bf P^{\mathrm{data}}}-{\bf\widetilde{P}}^{\mathrm{DMD}}}_F}{\norm{{\bf P^{\mathrm{data}}}}_F}. \label{eq:erroreDMD} \end{equation} In Figure \ref{DMD_TIM} we show the daily error \eqref{eq:erroreDMD} for an entire month of data in the whole area of the province of Milan. The daily error is of order $10^{-2}$, which confirms the accuracy of the DMD method. \begin{figure}[h!] \centering \includegraphics[scale = .32]{grafici/erroreGlobaleProvincia} \caption{Daily error defined in \eqref{eq:erroreDMD} where DMD has been computed with $r=95$ in step 2 of Algorithm \ref{Alg_DMD}.} \label{DMD_TIM} \end{figure} \subsection{Understanding human mobility flows} Following the approach described in Section \ref{sec:coupling}, we define the graph $\mathcal{G}$ by exploiting the subdivision of the area of the province of Milan (Italy) into ETUs; we identify the $N$ nodes of the graph with the centers of the corresponding ETUs, ordered from left to right and from top to bottom. The result is a rectangular graph $\mathcal{G}$ divided into $N_R$ rows and $N_C$ columns ($N=N_R\times N_C$). The mass $m_j(t_{n})$ is defined as the average number of presences in node $j$ at time $t_n=n\, 15\min$. Let us assume that $V_{\textup{max}}$ in \eqref{CFL} is equal to 50 km/h. Since the dimension of the ETUs is around 150 m, to apply the DMD we fix the new time step $\delta t$ equal to 10 seconds. With this choice we assume that people can move only towards the eight adjacent nodes of the rectangular graph, or not move at all.
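The choice $\delta t = 10$ s can be checked against condition \eqref{CFL} with a one-line computation (the numerical values are those quoted in the text; variable names are ours):

```python
v_max = 50.0 / 3.6           # 50 km/h expressed in m/s, about 13.9 m/s
cell_size = 150.0            # approximate ETU side length, in metres
dt_max = cell_size / v_max   # time to cross one cell at v_max: 10.8 s
dt = 10.0                    # chosen step, in seconds: dt <= dt_max holds

# each 15-minute data interval then contains 90 ten-second steps
steps_per_quarter = (15 * 60) // 10
```

Since $\delta t \leq$ 10.8 s, in one step the mass cannot travel farther than one cell, which justifies restricting the movements to the eight adjacent nodes; moreover, $96\times 90 = 8640$ local steps per day, consistent with the number of local LP problems mentioned below.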
We observe that the mass in the nodes on the four corners of the graph can move only in four directions (adjacent nodes or no movement), while the mass along the boundaries can move only in six directions. In this way, the total number of possible movements between the cells is given by \begin{equation} \widetilde{N}= \underbrace{4\cdot 4}_{\text{corners}}+ \underbrace{6\cdot 2\,(N_R+N_C-4)}_{\text{boundaries}}+\underbrace{9\,(N-4-2\,(N_R+N_C-4))}_{\text{internal nodes}}<9N. \label{eq:Ntot} \end{equation} The vectors $\bf\widetilde{x}$ and $\bf \widetilde{c}$ associated to the unknown moving mass and to the cost function have length $\widetilde{N}$, while the matrix $\bf\widetilde{M}$ of the LP problem \eqref{LPlocal} has dimension $2N\times \widetilde{N}$. In Table \ref{tab:confronto}, we compare the dimensions of the vectors and the matrices of the two different LP problems: it is clear that the computational time to solve problem \eqref{LPlocal} is significantly reduced with respect to problem \eqref{LPglobal}.
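Formula \eqref{eq:Ntot} is easy to verify numerically; a minimal sketch (function name is ours), checked on a $3\times3$ grid and on the $389\times511$ Milan grid:

```python
def n_moves(n_rows, n_cols):
    """Number of admissible movements on an n_rows x n_cols grid with
    8-neighbour moves plus 'no movement', as in eq. (Ntot)."""
    n = n_rows * n_cols
    corners = 4 * 4                                      # 3 neighbours + stay
    boundary = 6 * 2 * (n_rows + n_cols - 4)             # 5 neighbours + stay
    internal = 9 * (n - 4 - 2 * (n_rows + n_cols - 4))   # 8 neighbours + stay
    return corners + boundary + internal

# 3x3 grid: 4 corners * 4 + 4 edge nodes * 6 + 1 centre * 9 = 49 moves
# Milan grid 389 x 511: N = 198779 nodes and n_moves < 9 * N
```

For the Milan grid this gives $\widetilde{N}=1{,}783{,}615 < 9N = 1{,}789{,}011$, so the local unknown vector is almost two orders of magnitude shorter than the $N^2$ unknowns of the global problem.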
\begin{table}[tbhp] {\footnotesize \caption{Comparison between dimensions of matrices and vectors for the two methods.}\label{tab:confronto} \begin{center} \begin{tabular}{|c|c|c|} \hline \bf Algorithm & \bf Vectors Dimension & \bf Matrix Dimension \\ \hline Global & $N^2$ & $2N^3$\\ Local & $\widetilde{N}\, (<9N)$ & $2N\times\widetilde{N} \,(<18N^2)$\\ \hline \end{tabular} \end{center} } \end{table} \paragraph{Choice of the cost function $c$} Since the ETUs are rectangles of length $\ell_x=130$ m and $\ell_y=140$ m, we define the cost function $c_{jk}$ for the local algorithm as follows: \begin{equation*} c_{jk}=\begin{cases} 0 &\quad\text{if $j=k$ or $j$ and $k$ are not adjacent}\\ \ell_x &\quad\text{if $j$ and $k$ are horizontally adjacent}\\ \ell_y &\quad\text{if $j$ and $k$ are vertically adjacent}\\ \sqrt{\ell_x^2+\ell_y^2} &\quad\text{if $j$ and $k$ are diagonally adjacent.} \end{cases} \end{equation*} For the global approach, we use the cost function defined in \eqref{eq:costoOld} with $\varepsilon = 0.1$. To sum up, in order to solve the mass transfer problem for a whole day using snapshots taken every 15 minutes (real data) we have to solve 96 global LP problems \eqref{LPglobal}, whereas with the DMD algorithm we have to solve 8640 local LP problems \eqref{LPlocal}. As we will see in the following section, despite the larger number of LP problems, the local approach is more convenient than the global one. \begin{remark} The Wasserstein distance is defined between two distributions of equal mass (see \eqref{def:WassDist}). In our case the conservation of mass between two consecutive snapshots is not guaranteed. Let us consider a couple of snapshots with different total mass $\sum_{j} m_{j}^{0} \neq\sum_{j} m_{j}^{1}$. To correctly apply the algorithms for the identification of flows, we compute the mass in excess between the two snapshots and then distribute it uniformly among the nodes of the graph with lower mass.
A more sophisticated approach could be the one suggested in \cite{piccoli2014ARMA}, where a definition of Wasserstein distance between two distributions with different mass is proposed. \end{remark} \subsection{Numerical results} In this section we show the results obtained with the local algorithm \eqref{LPlocal} to study the flows of commuters and the influence of great events on human mobility. In both cases the rank $r$ in step 2 of Algorithm \ref{Alg_DMD}, used for the construction of the DMD solution, is 95. The flows are represented by arrows; we draw only those which correspond to the most significant movements, and we associate a darker color to the arrows corresponding to a larger movement of people. For graphical purposes, in the following plots we aggregate 6 time steps $\delta t$ to show 1-minute mass transport. \subsubsection{Flows of commuters} In the following simulations we consider the area of the Province of Milan during a generic working day. Milan is one of the biggest Italian cities and it attracts many workers from outside. The city of Milan is located in the right part of the Province, therefore we mainly see movements from/to the left part of the analyzed area. In the top panel of Figure \ref{fig:provincia} we show the morning flows of a working day: we clearly see that the arrows are directed towards the city of Milan. In particular, we zoom in on the arrows which overlap the roads heading to Milan. In the bottom panel of Figure \ref{fig:provincia} we show the opposite phenomenon: in the evening people leave work and return home across the Province of Milan. \begin{figure}[h!] \centering \includegraphics[scale=0.56]{grafici/provinciaMattina} \includegraphics[scale=0.56]{grafici/provinciaSera} \caption{Flows of commuters during the morning (09:00-09:01) of a generic working day (top).
Flows of commuters during the evening (18:00-18:01) of a generic working day (bottom).} \label{fig:provincia} \end{figure} \paragraph{CPU times} Considering data over a 6-hour time frame on an area of $144\times 240$ ETUs, the local algorithm requires 144 hours of CPU time and works with 360 snapshots. The global approach \eqref{LPglobal} is not able to analyze such an area, since the matrix $\bf M$ in \eqref{LPglobal} has a computationally unmanageable dimension. \subsubsection{Flows influenced by a great event} In this test we show how the algorithm is able to capture the way a big event influences human flows. The event we have considered is the exhibition of the \emph{Salone del Mobile}, held every April at the Fiera Milano exhibition center in Rho, near Milan. We analyze a square area of $31\times 31$ ETUs centered around Fiera Milano. In the left panel of Figure \ref{fig:fiera} we show the morning fluxes directed to the exhibition area, whereas in the right panel we show the evening flows directed from the exhibition area to the outside. \begin{figure}[h!] \centering \includegraphics[width=.48\columnwidth]{grafici/fieraMattinaNew} \includegraphics[width=.48\columnwidth]{grafici/fieraSeraNew} \caption{Main flows directed to the exhibition area in the morning (09:45-09:46) (left). Main flows from the exhibition area in the evening (18:00-18:01) (right).} \label{fig:fiera} \end{figure} \paragraph{CPU times} For a simulation of 18 hours, from 06:00 to 23:45, on an area of $31\times 31$ ETUs, the local algorithm requires 6 minutes of CPU time while the global approach requires 30 minutes. Furthermore, we observe that the local algorithm works with $1065$ snapshots of data, whereas the global one with $266$ snapshots. \section{Conclusions} \label{sec:conclusions} In this paper we have proposed an efficient method to solve an inverse mass transfer problem, which consists in recovering the dynamics underlying the mass displacement.
The proposed algorithm avoids handling large displacements of the mass, thus saving CPU time and memory allocation with respect to other recently proposed solutions. The application of the methodology to a real dataset describing the movement of people was also investigated. It is useful to note here that the applicability of the Wasserstein-based methodology was not at all obvious, since it is based on assumptions which are not totally fulfilled. Indeed, the choice of the cost function $c$ does not take into account the fact that people move mainly along roads, are stopped by obstacles, buildings, etc., and in general are not free to move in all directions. Moreover, and most important, the computation of the Wasserstein distance stems from a global optimization problem in which the mass is considered as a whole. In other words, the optimal transport map $T^*$ is found by minimizing the cost of the displacement of the whole crowd, without any distinction among people, and with no regard for individual targets. Despite this, the results we have obtained (see especially those in Figures \ref{fig:provincia} and \ref{fig:fiera}) are exactly what one would expect, meaning that the method is, overall, robust enough to work well even if the constitutive assumptions are not totally fulfilled.
\section{Introduction} The way people form an opinion about a given issue, such as making a political decision or choosing a product, is a complex social phenomenon. An individual's opinion can be influenced by economic factors, advertising, mass media, as well as the opinions of others. When opinion changes occur only through interactions between individuals, a natural model for this dynamics is the voter model (VM)~\cite{clifford1973model,holley1975ergodic,Cox,Liggett,krapivsky1992kinetics,dornic2001critical,RevModPhys.81.591,R01,krapivsky2010kinetic,baronchelli2018emergence}. In the VM, each individual, or voter, can assume one of two states, denoted as $+$ and $-$, with one voter at each node of an arbitrary network. A voter is selected at random and it adopts the state of a randomly chosen neighboring voter. This update is repeated at a fixed rate until a population of $N$ voters necessarily reaches consensus. Each voter is influenced only by its neighbors and has no self-confidence in its own opinion. The paradigmatic nature of the VM has sparked much research in probability theory~\cite{clifford1973model,holley1975ergodic,Cox,Liggett} and statistical physics~\cite{krapivsky1992kinetics,dornic2001critical,RevModPhys.81.591,krapivsky2010kinetic,baronchelli2018emergence,M03}. Because of its flexibility and utility, the VM has been applied to diverse problems, such as population genetics~\cite{Blythe}, ecology~\cite{Zillio,Maritan}, epidemics~\cite{Pinto}, and voting behavior in elections~\cite{Gracia}. However, consensus is not the typical outcome for many decision-making processes. This fact has motivated a variety of extensions of the VM to include realistic elements of opinion formation that can forestall consensus.
Examples include: stochastic noise~\cite{Manfred,Boris,Adrian}, the influence of multiple neighbors~\cite{Castellano-q}, self confidence~\cite{Volovik}, heterogeneity~\cite{Masuda}, partisanship~\cite{XSK11,Masuda2}, and multiple opinion states~\cite{deffuant2002can,hegselmann2002opinion,ben2003bifurcations}. An important extension of the VM that is relevant to this work arises when either the underlying network or the decision-making rule of each voter changes with time~\cite{gross2006epidemic,holme2006nonequilibrium,kozma2008consensus,shaw2008fluctuating,durrett2012graph,rogers2013consensus,Woolcock}. The latter scenario represents an attempt to account for the feature that the influence of individuals may be time dependent---some individuals may become more influential and others less so as the opinions of the population evolve. A natural way to account for this feature is to assign each individual a fitness that can change with time. In a single update, the higher-fitness voter imposes its opinion on its neighbor and, correspondingly, the fitness of the influencer increases by a fixed amount, while the fitness of the influenced voter does not change. This adaptive voter model (AVM), introduced in~\cite{Woolcock}, leads to a consensus time on the complete graph that appears to scale as $N^{\alpha}$, with $\alpha\approx 1.45$, a slower approach to consensus compared to the classic VM. We will argue, however, that this model exhibits a very slow crossover that masks the asymptotic approach to consensus. This AVM also provides the motivation for our \emph{reputational voter model} (RVM) to help understand the role of individual reputation changes on the consensus dynamics. In the RVM, each voter is endowed with a unique integer-valued reputation that ranges from 1, for the voter with the best reputation, to $N$, for the voter with the worst reputation, in addition to its voting state.
In an update, two voters in different opinion states are selected at random and the voter with the higher reputation imposes its voting state on the voter with the lower reputation. After this interaction, only the reputation of the influencer voter rises, in analogy with the AVM. As we will show, the effect of these reputational changes significantly hinders the approach to consensus. When the population initially contains equal numbers in the two voting states and the average ranks of these two subpopulations are the same, the time to reach consensus scales as $\exp(\sqrt{N})$. This slow approach to consensus arises because, close to consensus, the average rank of the minority population is typically higher than that of the majority. This imbalance tends to drive the population away from consensus and thereby leads to a long consensus time. In Sec.~\ref{sec:models}, we define the models under study: (i) the fitness voter model (FVM), where each voter is assigned a unique and unchanging fitness value, (ii) the adaptive voter model (AVM)~\cite{Woolcock}, and (iii) the reputational voter model (RVM). In Sec.~\ref{sec:fvm}, we will show that the FVM has the same dynamics as the classic VM. In Sec.~\ref{sec:avm}, we will argue that the consensus time scaling as $N^{\alpha}$, with $\alpha\approx 1.45$ in the AVM~\cite{Woolcock}, is a finite-time artifact and that the dynamics of the AVM eventually crosses over to that of the FVM. In Sec.~\ref{RVM}, we introduce the RVM and discuss the role of the time-dependent individual reputations on the opinion dynamics. In Sec.~\ref{conclude}, we give some concluding remarks. \section{MODELS} \label{sec:models} We begin by defining a set of voter models that culminate with the RVM, which is the focus of this work. All our models are defined on the complete graph; this structure is assumed throughout.
\subsection{Classic Voter Model (VM)} \label{subsec:VM} We define the classic VM in a form that is convenient for our subsequent extensions. In the VM, voters are situated on a complete graph of $N$ nodes, with one voter per node. Each voter is initially assigned to one of two opinion states, $+$ or $-$. The numbers of voters in the $+$ and $-$ states are denoted by $N_+$ and $N_-$. The opinion update is the following: \begin{enumerate} \itemsep -0.5ex \item[(a)] Pick two random voters in opposite opinion states. \item[(b)] One of these two voters changes its opinion. \item[(c)] Repeat steps (a) and (b) until consensus is necessarily reached. \end{enumerate} Figuratively, each agent has no self-confidence and merely adopts the state of one of its neighbors. After each update, the time is incremented by an exponential random variable with mean value $\delta t\equiv N/(N_+N_-)$. There are two basic observables in the VM: the consensus time and the exit probability. The consensus time, $T_N(m)$, is the average time for a population of $N$ voters to reach unanimity when the initial magnetization, which is the difference in the densities of $+$ and $-$ voters, equals $m$. For the complete graph, the consensus time is (see e.g.,~\cite{R01}) \begin{subequations} \begin{align} T_N(m) =-N\Big\{(1+m)\ln \big[\tfrac{1}{2}(1+m)\big] +(1-m)\ln \big[\tfrac{1}{2}(1-m)\big]\Big\}\,. \end{align} We are often interested in the zero-magnetization initial condition, in which case, we write the consensus time as $T_N$. The main feature of the consensus time on the complete graph is that it grows linearly with $N$. The exit probability $E(m)$ is defined as the probability that a population of $N$ voters with initial magnetization $m$ reaches $+$ consensus. The form of the exit probability is especially simple because the average magnetization is conserved: \begin{align} E(m)= \tfrac{1}{2}(1+m)\,.
\label{vm-ep} \end{align} \end{subequations} In voter models where the magnetization is not conserved, the exit probability is a non-linear function of $m$. \subsection{Fitness Voter Model (FVM)} \label{subsec:fvm} In the FVM, each voter is assigned an opinion state as well as a unique and fixed fitness that is drawn from a uniform distribution in the range $[0,F_0]$. A voter with a larger fitness value is regarded as more fit. The opinion update is now: \begin{enumerate} \itemsep -0.5ex \item[(a)] Pick two random voters in opposite opinion states. \item[(b)] The less fit voter changes its opinion. \item[(c)] Repeat steps (a) and (b) until consensus is necessarily reached. \end{enumerate} The time increment for each update is again an exponential random variable with mean value $\delta t$. The crucial feature of the FVM is the \emph{unique} fitness of each voter; the actual fitness values are immaterial. We will show below that the dynamics of the FVM is the same as the VM. \subsection{Adaptive Voter Model (AVM)} \label{subsec:avm} In our version of the AVM, each voter is assigned a unique fitness that is drawn from the uniform distribution $[0,F_0]$. The fitness of each voter also changes as a result of opinion updates. The opinion update is given by: \begin{enumerate} \itemsep -0.5ex \item[(a)] Pick two random voters in opposite opinion states, with fitnesses $f_i$ and $f_j$. \item[(b)] The less fit voter changes its opinion. \item[(c)] For the fitter voter $i$, $f_i\to f_i+\delta\! f$. \item[(d)] Repeat steps (a)--(c) until consensus is necessarily reached. \end{enumerate} After each update, the time is incremented by an exponential random variable with mean value $\delta t$. As we shall see, the initial fitness range $F_0$, the fitness increment $\delta\!f$ in each voting event, and $N$ play important roles in determining the long-time dynamics. 
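The update rules of the VM, FVM, and AVM above can be condensed into a short Monte Carlo sketch. This is a hypothetical illustration written for this text, not the code used to generate the figures; the function name and parameters are our own:

```python
import random

def run(N, model="VM", F0=1.0, df=1.0, m0=0.0, rng=None):
    """One realization on the complete graph; returns (consensus time, winning opinion)."""
    rng = rng or random.Random()
    n_plus = round(N * (1 + m0) / 2)
    state = [1] * n_plus + [-1] * (N - n_plus)
    fit = [rng.uniform(0, F0) for _ in range(N)]  # distinct with probability 1
    t = 0.0
    while 0 < n_plus < N:
        # (a) pick two random voters in opposite opinion states
        while True:
            i, j = rng.randrange(N), rng.randrange(N)
            if state[i] != state[j]:
                break
        # time advances by an exponential variable with mean dt = N/(N+ N-)
        t += rng.expovariate(n_plus * (N - n_plus) / N)
        # (b) who flips: a random member of the pair (VM), or the less fit voter (FVM/AVM)
        if model == "VM":
            loser, winner = (i, j) if rng.random() < 0.5 else (j, i)
        else:
            loser, winner = (i, j) if fit[i] < fit[j] else (j, i)
        n_plus -= state[loser]        # the loser adopts the opposite opinion
        state[loser] = -state[loser]
        if model == "AVM":
            fit[winner] += df         # (c) only the influencer's fitness rises
    return t, state[0]

# crude check against T_N = 2 ln(2) N (about 22 for N = 16) at zero magnetization
rng = random.Random(1)
mean_T = sum(run(16, "VM", rng=rng)[0] for _ in range(400)) / 400
```

Running the same routine with `model="FVM"` should reproduce the VM statistics, per the equivalence argued in Sec.~\ref{sec:fvm}.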
\subsection{Reputational Voter Model (RVM)} \label{subsec:rvm} \begin{figure}[ht] \centerline{\includegraphics[width=0.425\textwidth]{rvm+}} \caption{\small Update event in the RVM. Voters are arranged in rank order. The voter with rank 3 changes the opinion of the voter with rank 6. After the voting event, the ranks of the influencer and an adjacently ranked voter are shuffled to avoid ties.} \label{rvm} \end{figure} In the RVM, each voter is assigned a unique and integer-valued reputation, or rank, between 1 and $N$, with 1 corresponding to the best-ranked voter and $N$ to the worst-ranked. The opinion update is given by: \begin{enumerate} \itemsep -0.5ex \item[(a)] Pick two random voters in opposite opinion states, with ranks $r_i$ and $r_j$. \item[(b)] The lower-ranked voter changes its opinion. \item[(c)] The higher-ranked voter $i$ gains rank, $r_i\to r_i-1$. \item[(d)] The rank of the voter with rank adjacent to $i$ is adjusted to eliminate ties (Fig.~\ref{rvm}). \item[(e)] Repeat steps (a)--(d) until consensus is necessarily reached. \end{enumerate} As we will see, when the population is close to consensus, minority-species voters are typically well ranked and more likely to influence the majority than to be influenced. This effective bias drives the population back to equal densities of $+$ and $-$ voters, and leads to a large consensus time. \section{Dynamics of the Fitness Voter Model (FVM)} \label{sec:fvm} The main feature of the FVM is that its dynamics is identical to that of the VM. This equivalence will be important for understanding the dynamics of the AVM, which will be treated in the next section. First consider the dependence of the exit probability $E(m)$ on the initial magnetization $m$. By construction, the fittest voter in the population can never change its opinion. Consequently, the final consensus state coincides with the initial voting state of this fittest voter. 
The probability that the fittest voter is in the $+$ state equals $\frac{1}{2}(1+m)$. Thus $E(m)=\frac{1}{2}(1+m)$, as in the VM. Let us now treat the consensus time. For the VM on the complete graph, the initial magnetization uniquely specifies the system. From this initial state, there are many trajectories that eventually take the system to consensus. To compute fundamental quantities like the exit probability and the consensus time, we need to average over all stochastic trajectories of the voting dynamics. For the FVM on the complete graph, the initial state is specified by both the magnetization and the fitness of each voter. The computation of the exit probability and the consensus time requires averaging over all stochastic trajectories and over all fitness values. Thus let us compare the fate of a single pair of voters $ij$ in the state $+-$ in the VM and in the FVM. For the VM, this pair changes to either $++$ or $--$ equiprobably. In the FVM, if voter $i$ is the fitter of the pair, $f_i>f_j$, then this pair changes from the state $+-$ to $++$. However, if $f_i<f_j$, then this pair changes from $+-$ to $--$. Since it is equally likely that $f_i>f_j$ or $f_i<f_j$, in averaging over all stochastic trajectories \emph{and} over all fitness assignments, the $ij$ pair in the FVM equally likely changes to $++$ or to $--$. Thus the dynamics of the VM, averaged over all stochastic trajectories, is the same as that of the FVM, when averaged over all stochastic trajectories and over all initial fitness assignments. A detailed microscopic derivation of this equivalence is given in \ref{phi}. \section{Dynamics of the Adaptive Voter Model (AVM)} \label{sec:avm} \subsection{Consensus Time and Exit Probability} \label{sec:ctep-avm} In Ref.~\cite{Woolcock}, it was reported that the consensus time scales as $T_N \sim N^{\alpha}$, with $\alpha\approx 1.45$. Instead, we will argue that this exponent estimate is a finite-time effect. 
To support this assertion, we show simulation data for the dependence of $T_N$ versus $N$ in Fig.~\ref{avm-T} for representative parameter values: (a) initial width of the fitness distribution $F_0=1$, (b) $F_0=N$, and (c) $F_0=N^2$, with $\delta\! f$, the change in individual fitness in a voting event, fixed to be 1. The data in the figure are based on $10^4$ realizations for $N$ up to $2^{14}=16384$. On a double logarithmic scale, the data of $T_N$ versus $N$ appear relatively straight, which suggests that a linear fit is warranted. However, there is a small but consistent downward curvature in the data, a feature that becomes apparent by studying local slopes of $T_N$ versus $N$ based on $k$ successive data points (insets to Fig.~\ref{avm-T}). The choice of $k$ is important: for too-small $k$ values, successive local slopes fluctuate strongly and cannot be reliably extrapolated, while for $k$ too large, the systematic trend in the local slope is averaged away. We find that $k=10$ provides a good compromise between minimizing statistical fluctuations and uncovering systematic local trends. \begin{figure}[ht] \centerline{ \subfigure[]{\includegraphics[width=0.33\textwidth]{MFPT-adap-0}} \subfigure[]{\includegraphics[width=0.33\textwidth]{MFPT-adap-1}} \subfigure[]{\includegraphics[width=0.33\textwidth]{MFPT-adap-2}}} \caption{Average consensus time $T_N$ versus $N$ for the AVM on the complete graph of $N$ sites with: (a) $F_0=1$, (b) $F_0=N$, and (c) $F_0=N^{2}$, with $\delta\! f=1$. The insets show local 10-point slopes as a function of $1/\ln N$. The error bars are the standard deviation in a linear least-squares fit. } \label{avm-T} \end{figure} In Figs.~\ref{avm-T}(a) \& (b), corresponding to $F_0=1$ and $F_0=N$ respectively, the local slope is non-monotonic in $N$. The source of this crossover behavior appears to be the broadening of the fitness distribution as a function of time. This leads to rank-changing events becoming progressively less frequent. 
When rank changes stop occurring, the dynamics should be the same as that of the FVM, for which $T_N\sim N$. However, consensus interrupts this gradual crossover. Conversely, for $F_0=N^2$, the initial fitness distribution is sufficiently broad that rank-changing events never occur. The dynamics thus coincides with that of the FVM, for which $T_N\sim N$. For this case, the simulation data for the local slope appears to extrapolate to a value that is close to the expected value of 1 (Fig.~\ref{avm-T}(c)). \begin{figure}[h] \centerline{ \subfigure[]{\includegraphics[width=0.33\textwidth]{E-adap-0}} \subfigure[]{\includegraphics[width=0.33\textwidth]{E-adap-1}} \subfigure[]{\includegraphics[width=0.33\textwidth]{E-adap-2}} } \caption{Exit probability as a function of initial magnetization $m$ for the AVM on the complete graph of $N$ sites with: (a) $F_0=1$, (b) $F_0=N$, and (c) $F_0=N^{2}$, for $\delta\! f=1$ in (a)--(c). These data are obtained by averaging over $10^5$ trajectories.} \label{avm-E} \end{figure} Simulation results for the exit probability are shown in Fig.~\ref{avm-E} for: (a) $F_0=1$, (b) $F_0=N$, and (c) $F_0=N^2$, with $\delta f=1$ in all cases. In (a) and (b), the exit probability $E(m)$ is a non-linear function of $m$, which means that the magnetization is not conserved. The non-linearity indicates that there is an effective bias in the dynamics that tends to drive a population with non-zero magnetization back to the zero-magnetization state and thus forestalls consensus. A curious feature, for which we have no explanation, is that $E(m)$ is non-monotonic in $m$ for small $N$. When $F_0=N^2$, the exit probability is linear in $m$. As discussed above, rank-changing events no longer occur for $F_0=N^2$, so that the dynamics should be the same as the VM. To summarize, in spite of the simplicity of the AVM update rule, its basic properties are surprisingly complex. 
When the initial fitness distribution is sufficiently broad or, equivalently, the fitness increment $\delta\!f$ in a single voting event is sufficiently small, rank-changing events do not occur, so that the dynamics is the same as that of the FVM, which, in turn, is the same as that of the classic VM. The dynamics of the AVM has a paradoxical character in the time range where rank-changing events do occur. Figure~\ref{avm-E} shows that the average magnetization is not conserved because $E(m)$ strongly deviates from the form $E(m)=\frac{1}{2}(1+m)$ that arises in the magnetization-conserving VM. The non-linear dependence of $E(m)$ in this figure indicates the presence of an underlying bias that tends to drive the system to zero magnetization whenever $m\ne 0$. In other examples of voter-like models with non-conserved magnetization~\cite{Lambiotte,Lambiotte3}, a similar non-linearity for $E(m)$ was observed. As a result of the effective bias that drives the system to zero magnetization, the consensus times in these models were found to grow faster than a power law in $N$~\cite{Lambiotte,Lambiotte3}. The observation of an apparent power-law dependence of $T_N$ on $N$ found above and in Ref.~\cite{Woolcock} is possibly a manifestation of the gradually diminishing effective bias. The main message from our analysis is that the exponent $\alpha$ in $T_N\sim N^\alpha$ is strongly $N$-dependent and less than the value 1.45 reported in~\cite{Woolcock}. \subsection{Dynamical Non-Stationarity} By directly adapting the theory given in~\cite{ben2005dynamics,Ben-Naim2006} for the fitness distribution in a model of social competition, one finds that the distribution of individual fitnesses in the AVM approaches a uniform distribution on $[0,F(t)]$, with $F(t)=F_0+\,\delta\! f\, t/2$. Consequently, as the fitness distribution broadens, changes in fitness rank become more rare. When rank changes can no longer occur, the subsequent dynamics approaches that of the FVM. 
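This broadening can be checked numerically with a quick sketch (our own implementation, written under the assumption that the magnetization stays near zero, so the top of the fitness distribution should advance at speed $\approx \delta\!f/2$):

```python
import random

def avm_fitness_growth(N=2000, t_max=200.0, F0=1.0, df=1.0, seed=0):
    """Simulate AVM updates; return the growth rate of the maximum fitness.

    The fittest voter wins every comparison it enters and is picked at
    rate ~ N_-/N per unit time at m ~ 0, so max(fit) should grow at
    speed ~ df/2, consistent with F(t) = F0 + df*t/2."""
    rng = random.Random(seed)
    n_plus = N // 2
    state = [1] * n_plus + [-1] * (N - n_plus)
    fit = [rng.uniform(0, F0) for _ in range(N)]
    top0, t = max(fit), 0.0
    while 0 < n_plus < N and t < t_max:
        while True:  # pick an opposite-state pair
            i, j = rng.randrange(N), rng.randrange(N)
            if state[i] != state[j]:
                break
        t += rng.expovariate(n_plus * (N - n_plus) / N)
        loser, winner = (i, j) if fit[i] < fit[j] else (j, i)
        n_plus -= state[loser]
        state[loser] = -state[loser]
        fit[winner] += df  # only the influencer's fitness rises
    return (max(fit) - top0) / t

rate = avm_fitness_growth()  # theory suggests roughly df/2 = 0.5
```

The measured rate fluctuates with the wandering magnetization, since the top voter is chosen at a rate set by the instantaneous size of the opposing subpopulation.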
To understand this transition, we estimate the time dependence of fitness-rank changes. Consider two voters $i$ and $j$ of adjacent ranks, with $f_i(0)>f_j(0)$; that is, voter $i$ is initially fitter than voter $j$. Their fitnesses $f_i$ and $f_j$ at a later time $t$ are \begin{align} \label{fij} \begin{split} f_i(t) &=f_i(0)+v_it \pm \sqrt{Dt}\,,\\ f_j(t) &=f_j(0)+v_jt \pm \sqrt{Dt}\,. \end{split} \end{align} Here $v_i$ is the systematic rate of change of the fitness of voter $i$, which arises because a higher-ranked voter is typically more influential than a lower-ranked voter. The ``speed'' $v_i$ at which the $i^{\rm th}$ voter gains fitness is proportional to the fraction of voters with lower fitness. For a uniform fitness distribution, $v_i = f_i\, \delta\! f/F$. Thus the speed of the best-ranked voter is $\delta\! f$ and that of the worst-ranked voter is 0. The term $\pm\sqrt{Dt}$ denotes the change in fitness due to stochastic effects, which give rise to rank-changing events. \begin{figure}[ht] \centerline{\includegraphics[width=0.45\textwidth]{fitness-cartoon}} \caption{Schematic of the left-hand and right-hand sides of Eq.~\eqref{criterion} (red and blue respectively). For this example, rank changes can occur only in the intermediate time regime between $t_*$ and $t^*$.} \label{cartoon} \end{figure} In the absence of stochasticity, no rank-changing events occur. To assess the role of stochasticity on rank changes, we assume a negative stochastic term for $f_i$ and a positive stochastic term for $f_j$ and find the condition under which the ranks of these two voters can switch~\cite{TW83,KR84}. That is, suppose that at some time $t$, $f_i(t)<f_j(t)$. From Eq.~\eqref{fij}, this criterion gives \begin{subequations} \begin{align} \label{ci} f_i(0)-f_j(0)+(v_i-v_j)t < \sqrt{4Dt}\,. \end{align} Now $v_i-v_j=\delta\!f/N$, while the diffusion coefficient associated with the stochasticity is proportional to $(\delta\! f)^2$. 
Thus Eq.~\eqref{ci} becomes \begin{align} \label{criterion} \frac{F_0}{N}+\frac{\delta\! f}{N} t < \delta\! f \,\sqrt{4t}\,. \end{align} \end{subequations} Dividing through by $2\,\delta\! f$ and defining $a={1}/{(2N)}$ and $b=F_0/(2N \delta\! f)$, the solution to \eqref{criterion} is \begin{align} t=\frac{1}{2a^2}\big[(1-2ab)\pm\sqrt{1-4ab}\big]\,. \end{align} There are no solutions for $4ab>1$, which translates to $F_0/\delta\!f>N^2$. That is, for a given $N$, if either the initial fitness range is sufficiently large or the fitness change in a single voting event is sufficiently small, no rank changes occur. In this limit, the dynamics of the AVM reduces to that of the FVM, which, in turn, is the same as the VM. For $4ab<1$, the physically relevant situation is $4ab\ll 1$. Now there are two solutions: \begin{align} t_* \approx \left(\frac{F_0}{N\delta\!f}\right)^2\,,\qquad\qquad t^*\approx N^2\,. \end{align} Between these two times, rank-changing events occur. We may estimate the time dependence of the number of rank changes as follows. The typical fitness difference of neighboring-ranked voters at time $t$ is $\Delta\equiv F(t)/N$. In a single voting event, the typical number of rank changes is $dr\approx {\delta\! f}/{\Delta}= N\delta\! f/F(t)$, as long as $\delta\!f>\Delta$. Thus we estimate the number of rank changes per unit time as \begin{subequations} \begin{align} \label{rankchange} N_r(t) \simeq \frac{N\delta\! f/F(t)}{\delta t}=\frac{N\delta\!f/\delta t}{F_0 + \,\delta\! f\,t/2}\,. \end{align} \begin{figure}[ht] \centerline{\includegraphics[width=0.5\textwidth]{rank-change-chopped}} \caption{Time dependence of the number of rank changes per unit time, $N_r(t)$, scaled by $N^2$ in the AVM for $F_0=1$ and $\delta f =1$. Data for $t>T_N$ are dominated by noise and are not shown. The data are generated by averaging over $10^5$ realizations. 
The dashed line is the prediction in Eq.~\eqref{rankchange2}.} \label{avm-rank} \end{figure} We can make this estimate more precise by computing the number of rank changes averaged over the uniform distribution of fitnesses. Consider the case where $F_0=1$ and $\delta\! f =1$. For the first voting event between two voters with fitnesses $f_i$ and $f_j<f_i$, their fitnesses after the voting event will be $f_i+1$ and $f_j$ respectively. The number of rank changes caused by this update is $dr=N(1-f_i)$. Averaging this expression over the uniform distribution of fitnesses subject to the constraint $f_i>f_j$ gives $dr=N/3$. Then using $\delta t= 4/N$ as the time increment for this first voting event, the initial number of rank changes per unit time is $N^2/12$. Using this for $N_r(t\!=\!0)$ in \eqref{rankchange}, the number of rank changes per unit time at any later time is \begin{align} \label{rankchange2} N_r(t) =\frac{N^2\delta\! f/12}{F_0 + \,\delta\! f\,t/2}\,. \end{align} \end{subequations} This prediction is consistent with the simulations shown in Fig.~\ref{avm-rank}. The simple reasoning given above shows that the dynamics of the AVM is non-stationary. At early times, rank-changing events occur frequently (as long as $\delta\! f$ is not pathologically small) and these rank changes are responsible for the slow approach to consensus. However, at sufficiently long times, rank-changing events stop occurring and the dynamics crosses over to that of the FVM. Thus over a substantial time range the dynamics of the AVM is governed by crossover effects. \subsection{Magnetization Zero Crossings} The non-stationarity of the AVM also manifests itself in the times between successive zero crossings of the magnetization. For a system that starts at zero initial magnetization, there are typically multiple instances when the magnetization returns to zero before consensus is reached. 
We define $\tau_n$ as the average time between the $(n-1)^{\rm st}$ and $n^{\rm th}$ zero crossing, with the $0^{\rm th}$ crossing occurring at $t=0$. Each $\tau_n$ is averaged over those trajectories that have not yet reached consensus by the $n^{\rm th}$ crossing. A basic feature of these magnetization zero crossings for the AVM is that $\tau_n$ varies non-monotonically with $n$ (Fig.~\ref{avm-zero}). In this plot, the number of ``surviving'' trajectories decreases as $n$ increases (roughly a fraction $10^{-3}$ of all realizations survive until $n=2500$), and the behavior of $\tau_n$ becomes progressively noisier. In contrast, the dynamics of the classic VM is stationary and successive zero-crossing times are all the same; the derivation of the crossing time for the VM is given in \ref{zct-vm}. \begin{figure}[ht] \centerline{\includegraphics[width=0.5\textwidth]{nth-0-chopped}} \caption{Dependence of $\tau_n$, the $n^{\rm th}$ zero-crossing time, on $n$ for $10^6$ realizations with $N=256$. The data are smoothed by averaging over 15 successive points. The parameters are $F_0=1$ and $\delta\! f=1$. The dashed line is the exact zero-crossing time for the VM (\ref{zct-vm}). The average number of zero crossings is 900.3. } \label{avm-zero} \end{figure} We can qualitatively understand the non-monotonicity of the AVM zero-crossing times in terms of the time dependence of the rank changes of the voters. As derived in Eq.~\eqref{rankchange2}, rank changes are frequent at early times and become progressively less common. These rank changes give rise to an effective bias $v(m)$ towards zero magnetization (see also the next section). At early times, these frequent rank changes imply a strong bias to zero magnetization; this leads to zero-crossing times that are smaller than in the VM. 
At later times, we can assess the role of the bias on magnetization trajectories in terms of the P\'eclet number~\cite{P03}, $P\!e\equiv |v(m)m|/D(m)$, where $D(m)$ is the diffusion coefficient associated with the trajectories. As time increases and the bias becomes weaker, only those trajectories that approach close to $m=\pm 1$ experience a P\'eclet number $P\!e>1$ and get driven back towards zero magnetization. These large-deviation trajectories lead to a zero-crossing time that is larger than that of the VM. Finally, at late times ($t \gg N^2$), rank changes become sufficiently rare that the dynamics approaches that of the VM and the zero-crossing times also approach those of the VM. This asymptotic limit will be reached only when the number of zero crossings $n$ is of the order of $N^2$, at which point rank changes no longer occur. \section{Dynamics of the Reputational Voter Model (RVM)} \label{RVM} \subsection{Effective Potential} In the RVM, each voting event leads to a fixed number of rank changes (Fig.~\ref{rvm}). This implies that the dynamics is stationary, which simplifies the analysis of this model. We will argue that the dynamics of the magnetization is equivalent to that of a random walk that is confined to an effective potential well, leading to an anomalously long consensus time compared to the VM and the AVM. In a single voting event, the magnetization $m$ changes by $\delta m\equiv \pm 2/N$ and the average time for such a voting event is $\delta t=N/(N_+N_-)$, where $N_\pm$ are the numbers of voters in the $+$ and $-$ states, respectively. We define $w(m\to m')$ as the probability that the magnetization changes from $m$ to $m'$ in a single voting event and $P(m,t)\delta m$ as the probability that the system has a magnetization between $m$ and $m+\delta m$. The Chapman-Kolmogorov equation for the time dependence of $P(m,t)$ is \begin{subequations} \begin{align} P(m,t\!+\!\delta t)\!=\! 
w(m\!-\!\delta m\to m)P(m\!-\!\delta m,t) + w(m\!+\!\delta m\to m)P(m\!+\!\delta m,t). \end{align} Expanding this equation to second order in a Taylor series gives the Fokker-Planck equation \begin{align} \frac{\partial}{\partial t} P(m,t)=-\frac{\partial }{\partial m}[v(m)P] + \frac{\partial^2 }{\partial m^2}[D(m)P]\,, \label{FP} \end{align} \end{subequations} where the drift velocity and diffusion coefficient are given by \begin{align*} v(m)&=2[2w(m\!\to\! m+\delta m)-1]/(N\delta t)=[2w(m\to m+\delta m)-1](1-m^2)/2\,,\\ D(m)&= 2/(N^2 \delta t)=(1-m^2)/2N\,, \end{align*} and where the second equalities follow by expressing the time step $\delta t =N/(N_+N_-)$ in terms of the magnetization, $\delta t= 4/[N(1-m^2)]$. \begin{figure}[ht] \centerline{\includegraphics[width=0.6\textwidth]{VD-vs-m}} \caption{Dependence of $v(m)/D(m)$ for the RVM on $m$ for different $N$. The inset shows the data collapse when $v/D$ is divided by $\sqrt{N}$. The solid curve is the empirical fit $f(m)=-0.65\, \text{tanh}^{-1}\,m$ (see text). The data represent averages over $10^4$ realizations.} \label{rvm-bias} \end{figure} In Fig.~\ref{rvm-bias}, we plot the ratio $v(m)/D(m)$ versus $m$. For these data, we take the initial magnetization to be zero, and define the initial average ranks of voters in the $+$ and $-$ states to be equal. The quantity $w(m \to m+\delta m)$ is measured as the probability that the magnetization of the system increases from $m$ to $m+\delta m$. The important feature is the non-zero drift velocity that drives the population \emph{away} from consensus and ultimately leads to a long consensus time. Empirically, we also find that the curves of $v/D$ for different $N$ all collapse onto a single universal curve when the data are scaled by $\sqrt{N}$ (inset to Fig.~\ref{rvm-bias}). The resulting scaled curve has a sigmoidal shape that is turned on its side. 
We find, therefore, that this curve is well fit by the archetypal sigmoidal function $f(m)= - 0.65\,\text{tanh}^{-1}\,m$, where the amplitude 0.65 gives the minimum deviation between the data for $v/D$ and the fit. \subsection{Consensus Time and Exit Probability} Because the drift velocity drives the system away from consensus, we anticipate that the consensus time will grow faster than a power law in $N$, as shown in Fig.~\ref{rvm-fpt}. For these data, the initial magnetization is set to $m=0$ and the voter ranks are chosen so that the average ranks of $+$ and $-$ voters are, on average, equal. The data in this figure indicate that $T_N$ grows faster than a power law in $N$. There is also an extremely slow crossover to the asymptotic behavior (inset to Fig.~\ref{rvm-fpt}(a)) and it is not possible to determine the functional form of $T_N$ based on simulation data up to $N=1024$. \begin{figure}[ht] \centerline{\subfigure[]{\includegraphics[width=0.475\textwidth]{MFPT-ni}}\quad \subfigure[]{\includegraphics[width=0.475\textwidth]{E-vs-m0}}} \caption{(a) Dependence of $\ln T_N$ versus $N$ on a double logarithmic scale based on: (i) $10^4$ realizations of the RVM (red triangles), and (ii) numerical integration of Eq.~\eqref{mfpt-rvm} (blue circles). The inset shows the local slopes of these two datasets as a function of $1/\ln N$. (b) Exit probability of the RVM as a function of initial magnetization $m$ for different $N$. These data are based on $10^5$ realizations. } \label{rvm-fpt} \end{figure} To give a more principled and reliable estimate for the $N$ dependence of $T_N$, we write the backward Kolmogorov equation for the consensus time~\cite{R01,Gardiner} \begin{subequations} \begin{align} T_N(m)=w(m\to m+\delta m)T_N(m+\delta m)+w(m\to m-\delta m)T_N(m-\delta m)+\delta t\,. 
\end{align} In the continuum limit this recursion becomes \cite{R01,Gardiner} \begin{align} \label{T-RVM} \frac{v(m)}{D(m)}\frac{\partial T_N(m)}{\partial m} + \frac{\partial^2 T_N(m)}{\partial m^2}=-\frac{1}{D(m)}\,. \end{align} \end{subequations} For arbitrary functional forms of $v(m)$ and $D(m)$, the formal solution of \eqref{T-RVM} is~\cite{Gardiner} \begin{align} \label{mfpt-rvm} T_N(m)=\int^{1}_m e^{-{A}(m')} \left[\int^{m'}_0 \frac{e^{{A}(m'')}}{D(m'')}dm''\right]dm'\,, \end{align} where ${A}(m)=\int^{m}_0 [v(m')/D(m')] dm'$. While it is generally not possible to solve \eqref{mfpt-rvm} analytically, we can numerically integrate this equation. Here, we use our empirical observation that $v(m)/D(m) = \sqrt{N} f(m)$, with $f(m)= -0.65\,\text{tanh}^{-1}\,m$ (inset to Fig.~\ref{rvm-bias}). The outcome of this numerical integration for $N$ up to $10^6$ is also shown in Fig.~\ref{rvm-fpt}(a). The simulation data and the integration data are nearly the same, and the local slopes of these two datasets show similar behaviors. However, since we can obtain integration data up to $N=10^6$, we can now see the asymptotic trend in the local slope, which indicates that it eventually converges to $\frac{1}{2}$ (inset to this figure). Thus we argue that the consensus time for the RVM has the dependence $T_N\sim \exp(\sqrt{N})$. Due to the non-zero drift velocity in the RVM, the magnetization is not conserved, a feature that again manifests itself in the non-linear dependence of the exit probability $E(m)$ on initial magnetization (Fig.~\ref{rvm-fpt}(b)). We again define the initial state so that the ranks of $+$ and $-$ voters are equal, on average. As $N$ increases, $E(m)$ gradually approaches a step function; this is a consequence of $v(m)/D(m)$ being an increasing function of $N$. The step-like form of $E(m)$ is also consistent with the consensus time growing faster than any power law in $N$ as shown in Fig.~\ref{rvm-fpt}(a). 
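The numerical integration of Eq.~\eqref{mfpt-rvm} can be sketched as follows (our own script, not the code behind the figure). It uses the empirical fit $v/D=-0.65\sqrt{N}\,\tanh^{-1}m$ from the text, for which the exponent $A(m)=-0.65\sqrt{N}\,[m\tanh^{-1}m+\tfrac12\ln(1-m^2)]$ is available in closed form:

```python
import numpy as np

def T_rvm(N, M=4000):
    """Midpoint-rule evaluation of Eq. (mfpt-rvm) at m = 0 with
    A(m) = -0.65*sqrt(N)*(m*arctanh(m) + 0.5*ln(1 - m^2)) and
    D(m) = (1 - m^2)/(2N); midpoints avoid the m = 1 endpoint."""
    h = 1.0 / M
    m = (np.arange(M) + 0.5) * h
    A = -0.65 * np.sqrt(N) * (m * np.arctanh(m) + 0.5 * np.log1p(-m * m))
    D = (1.0 - m * m) / (2.0 * N)
    inner = np.cumsum(np.exp(A) / D) * h   # inner integral from 0 to m'
    return float(np.sum(np.exp(-A) * inner) * h)

Ns = [100, 400, 1600, 6400]
Ts = [T_rvm(N) for N in Ns]
# the effective power-law exponent d(ln T)/d(ln N) keeps growing,
# signaling faster-than-power-law growth of T_N
slopes = [np.log(Ts[k + 1] / Ts[k]) / np.log(4) for k in range(3)]
```

Note that $A(m)$ remains bounded as $m\to 1$ (it tends to $-0.65\sqrt{N}\ln 2$), so the quadrature is well behaved; the growing local slope is the numerical counterpart of $T_N\sim\exp(\sqrt{N})$.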
\section{Summary and Discussion} \label{conclude} We studied a set of voter-like models on the complete graph, corresponding to the mean-field limit, in which each voter has a characteristic fitness that is a measure of its influence on others. Our motivation for investigating these models is that, in real life, some individuals are influential and others less so; moreover, the influence of each individual can change with time as opinions evolve. While our models are highly stylized, perhaps they provide a useful first step toward understanding the role of individual persuasiveness in how opinions change in a population. For the fitness voter model (FVM), where the fitness of each voter is distinct and fixed, a simple, but striking result is that its voting dynamics turns out to be identical to that of the classic VM. Our main focus was on voter models in which the fitness of each voter, as well as its opinion, can change in an elemental update event. We found that the coupled dynamics of the fitness and voting state of each voter leads to rich dynamics and also to very slow and subtle crossover effects. This type of coupled dynamics between voting state and fitness also shares some conceptual commonality with voter models in which the connections between voters can change along with their opinions in each update~\cite{gross2006epidemic,holme2006nonequilibrium,kozma2008consensus,shaw2008fluctuating,durrett2012graph,rogers2013consensus,Woolcock}. We investigated two examples in which changing individual voter fitness controls the consensus dynamics. In the adaptive voter model (AVM), the fitness of the influencer voter increases by a fixed amount while the fitness of the influenced voter is unchanged in a single voting event. This same model was recently investigated in Ref.~\cite{Woolcock}, where it was reported that the consensus time $T_N\sim N^\alpha$, with $\alpha\approx 1.45$. We argued instead that the dynamics of the AVM is more subtle than this simple power law. 
In particular, the dynamics has a non-stationary character, in which the fitness distribution of the population broadens in time. Eventually, the width of this distribution broadens to the point where fitness updates no longer change the relative ranks of individual voters. When this occurs, the opinion dynamics slowly crosses over to that of the FVM, which in turn is the same as the classic VM. This crossover is interrupted by consensus, and the dependence of $T_N$ on $N$ appears to be a power law, but with an exponent that is smaller than 1.45 (Fig.~\ref{avm-T}). We also introduced the reputational voter model (RVM), which has the advantage that its dynamics is stationary. In the RVM, each voter is assigned a unique integer-valued rank that ranges from 1, for the best-ranked voter, to $N$, for the worst-ranked voter. In an update, the influencer voter moves up by 1 in rank while the rank of the influenced voter is unchanged. A salient feature of this dynamics is that the voters in the minority opinion tend to be higher ranked than those with the majority opinion. \begin{figure}[ht] \centerline{\includegraphics[width=0.475\textwidth]{rpm-vs-m}} \caption{Difference between the average rank of $+$ and $-$ voters, $R_+-R_-$, as a function of $m$ for different $N$. The inset shows the data collapse when $R_+-R_-$ is scaled by $\sqrt{N}$. The data represent averages over $10^4$ realizations.} \label{rvm-ep} \end{figure} Consider a single update event, in which a voter with a $-$ opinion is converted to $+$. For this to occur, the reputation of this $-$ voter must be lower than that of the $+$ voter. After this update, the average rank of the $+$ voters becomes a bit worse: the influencer voter moves up by one rank, but the influenced voter, whose rank is typically much lower, now joins the list of $+$ voters. Concomitantly, the $-$ voters have lost one voter whose rank is low, so that the average rank of this group improves. 
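The single-event bookkeeping above can be illustrated with a small RVM simulation (our own sketch; the sampling threshold and stride are arbitrary choices). We record the rank gap $(R_+-R_-)\,\mathrm{sign}(m)$ whenever $|m|$ exceeds a threshold; a positive average means the minority is better (lower-numbered) ranked:

```python
import random

def rvm_rank_gap(N=200, events=200_000, thresh=0.2, stride=100, seed=3):
    """Simulate the RVM; sample (R+ - R-)*sign(m) when |m| > thresh."""
    rng = random.Random(seed)
    state = [1] * (N // 2) + [-1] * (N // 2)
    rng.shuffle(state)                    # equal average ranks, on average
    rank = list(range(1, N + 1))          # voter v holds rank rank[v]; 1 = best
    byrank = {rank[v]: v for v in range(N)}
    n_plus = N // 2
    samples = []
    for step in range(events):
        if n_plus in (0, N):
            break                          # consensus (rare on this timescale)
        while True:                        # (a) opposite-state pair
            i, j = rng.randrange(N), rng.randrange(N)
            if state[i] != state[j]:
                break
        # (b) the lower-ranked voter (larger rank number) changes opinion
        loser, winner = (i, j) if rank[i] > rank[j] else (j, i)
        n_plus -= state[loser]
        state[loser] = -state[loser]
        # (c)+(d) winner moves up one rank by swapping with its neighbor;
        # rank 1 cannot improve (a boundary choice in this sketch)
        r = rank[winner]
        if r > 1:
            u = byrank[r - 1]
            rank[winner], rank[u] = r - 1, r
            byrank[r - 1], byrank[r] = winner, u
        if step % stride == 0:
            m = 2.0 * n_plus / N - 1.0
            if abs(m) > thresh:
                rp = sum(rank[v] for v in range(N) if state[v] > 0) / n_plus
                rm = sum(rank[v] for v in range(N) if state[v] < 0) / (N - n_plus)
                samples.append((rp - rm) * (1 if m > 0 else -1))
    return samples

gap = rvm_rank_gap()
mean_gap = sum(gap) / max(len(gap), 1)  # should be positive if the minority is better ranked
```

The dictionary `byrank` keeps the rank-to-voter map in sync with `rank`, so step (d) is a constant-time swap with the adjacently ranked voter.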
We have not been able to go beyond this heuristic observation to compute the magnitude of the rank difference as a function of the magnetization. Nevertheless, the trend from Fig.~\ref{rvm-ep} is clear: for nonzero $m$, the minority voters are better ranked, and for fixed $m$ this rank difference appears to grow as $\sqrt{N}$ (inset to Fig.~\ref{rvm-ep}). This rank difference is the mechanism underlying the drift velocity that drives the system away from consensus. The primary consequence of this bias is that $T_N$ grows faster than a power law in $N$ and the numerical evidence suggests that $T_N\sim\exp(\sqrt{N})$ (Fig.~\ref{rvm-fpt}(a)). There are multiple ways in which the fitness or rank changes of voters can be implemented; we treated only the case where the influencer voter becomes ``stronger'', while the influenced voter is not affected. It is also natural to consider the cases where: (i) the influencer voter becomes stronger and the influenced voter becomes weaker, and (ii) the influencer voter is unaffected and only the influenced voter becomes weaker. In case (i), simulations indicate that the dynamical behavior is similar to the situation where only the influencer voter becomes stronger. In case (ii), however, the dynamics appears to be in the same universality class as the VM. Namely, the consensus time $T_N\sim N$ and the exit probability $E(m)=(1+m)/2$. The latter behavior arises because the highest-ranked voter does not change its opinion throughout the dynamics, a situation that also arises in the FVM. \section*{Acknowledgments} We gratefully acknowledge financial support from NSF grant DMR-1608211.
\section{Introduction} For a prime power $q$, let $\mathbb{F}_q$ be the finite field of order $q$. Let $n > k \geq 1$ and $m \geq 1$ be integers. A subset $S \subseteq \mathbb{F}_q^n$ is a {\em $(k,m)$-Furstenberg} set if, for each rank $k$ subspace $W$ of $\mathbb{F}_q^n$, there is a translate of $W$ that intersects $S$ in at least $m$ points. For a prime power $q$ and integers $n,k,$ and $m$ with $1 \leq k < n$ and $m \leq q^k$, let $K(q,n,k,m)$ be the least $t$ such that there exists a $(k,m)$-Furstenberg set in $\mathbb{F}_q^n$ of cardinality $t$. A $(1,q)$-Furstenberg set is called a Kakeya set. The question of determining $K(q,n,1,q)$ was originally posed by Wolff \cite{wolf1999} as a toy version of the Euclidean Kakeya conjecture. For this case, the polynomial method \cite{dvir2009size,saraf2008,dvir2013extensions} gives the bound \begin{equation}\label{eq:kakeya} K(q,n,1,q) \geq 2^{-n}q^n, \end{equation} which is tight up to a factor of 2. The same techniques also handle the more general case of $K(q,n,1,m)$ for arbitrary $m$. The approach used to prove (\ref{eq:kakeya}) was generalized to all $k$ when $m=q^k$ by Kopparty, Lev, Saraf, and Sudan \cite{kopparty2011kakeya}, who improved earlier work by Ellenberg, Oberlin, and Tao \cite{ellenberg_oberlin_tao_2010}. They show \begin{equation}\label{eq:fullFurstenberg} K(q,n,k,q^k) \geq \left( \frac{q^{k+1}}{q^k + q - 1} \right)^n = \left (1+\frac{q-1}{q^k} \right )^{-n} q^n. \end{equation} For fixed $k \geq 2$, fixed $n$, and $q$ large, (\ref{eq:fullFurstenberg}) states that a $(k,q^k)$-Furstenberg set in $\mathbb{F}_q^n$ must contain nearly all of the points of $\mathbb{F}_q^n$. For fixed $k \geq 2$, fixed $q$, and $n$ large, (\ref{eq:fullFurstenberg}) states that a $(k,q^k)$-Furstenberg set in $\mathbb{F}_q^n$ must have size at least $C^{-n} q^n$, for some constant $C>1$ depending on $q$ and $k$. Kopparty, Lev, Saraf, and Sudan also described several ways to construct small Furstenberg sets when $m=q^k$. 
We include only one of these here: \begin{equation}\label{eq:furstConstruction} K(q,n,k,q^k) \leq \left (1 - \frac{q-3}{2 q^k} \right )^{\lfloor n/(k+1) \rfloor} q^n. \end{equation} Other constructions described in \cite{kopparty2011kakeya} give better bounds for large $k$, and for some explicit, small values of $q$. Furstenberg sets with $k \geq 2$ and $m < q^k$ are not understood as well. The first progress on the general case was by Ellenberg and Erman \cite{ellenberg2016furstenberg}, who used a sophisticated algebraic argument to prove \begin{equation}\label{eq:generalFurstenberg} K(q,n,k,m) \geq C_{n,k}m^{n/k}. \end{equation} Ellenberg and Erman did not explicitly specify the value of $C_{n,k}$ obtained, but a close inspection of the proof shows that it is $C_{n,k} = (1/n)^{\Omega(n \ln(n/k))}$. Recent work of the current authors \cite{DDL-1} gives a slightly more streamlined version of the Ellenberg and Erman proof to obtain (\ref{eq:generalFurstenberg}) with $C_{n,k} = \Omega((1/16)^{n \ln (n/k)})$. The contribution of this paper is to improve (\ref{eq:generalFurstenberg}) using much simpler and more elementary arguments. Our first main result deals with the case of general $k$ and $m \leq q^k$: \begin{theorem}\label{thm:FurstenbergRecurse} Let $q$ be a prime power, and let $n,k,$ and $m$ be positive integers such that $m \leq q^k$. Then $$K(q,n,k,m)\ge \frac{1}{2^n} m^{n/k}. $$ \end{theorem} This statement is better than the bound of Ellenberg-Erman (and its improvement in \cite{DDL-1}) as long as $k\le n/2$. When $k>n/2$, Theorem \ref{th:pureIncidences} (below) gives us a bound superior to the Ellenberg-Erman one for all parameter regimes. We note that the method of Ellenberg-Erman can be used to prove Furstenberg-style bounds involving hypersurfaces, which we are not able to replicate using the methods here. The proof of Theorem~\ref{thm:FurstenbergRecurse} relies on a new equivalent formulation of the problem using the notion of min-entropy.
This new formulation (or a slight generalization thereof), described in Section~\ref{sec:entropy}, allows us to derive the bound for general $k$ easily using a recursive argument, starting with $k=1$ as a base case (proved using the polynomial method). We now describe several other results that deal with more restricted parameter regimes. Let $S$ be any set of $mq^{n-k}$ points in $\mathbb{F}_q^n$. A simple pigeonholing argument shows that $S$ is a $(k,m)$-Furstenberg set: each rank $k$ subspace partitions $\mathbb{F}_q^n$ into $q^{n-k}$ parallel $k$-flats, so at least one of these flats must contain at least $m$ points of $S$. Our next result is that when $m$ (and hence $q$) is sufficiently large relative to $n$, there are no Furstenberg sets much smaller than this trivial construction. \begin{theorem}\label{th:largeMFurst} Let $\varepsilon > 0$, let $q$ be a prime power, and let $n,k,$ and $m$ be integers with $2 \leq k < n$ and $m \leq q^k$. If $m \geq 2^{n+7-k}q \varepsilon^{-2}$, then \[K(q,n,k,m) \geq (1-\varepsilon)m q^{n-k}.\] \end{theorem} Since $q^k \geq m$, Theorem \ref{th:largeMFurst} applies only when $q^{k-1} \geq 2^{n+7-k}$. When $k > n/2$ and $m > q^{n-k}$, we can remove the assumption that the $k$-flats are in different directions and still prove a stronger bound than previously known. The number of rank $k$ subspaces in $\mathbb{F}_q^n$ is given by the $q$-binomial coefficient $\binom{n}{k}_q$ (see Section~\ref{sec:finiteGeometry} for details). \begin{theorem}\label{th:pureIncidences} Let $q$ be a prime power, and let $n,k,$ and $m$ be integers with $n/2 < k < n$ and $q^{n-k} < m \leq q^k$. Let $S \subseteq \mathbb{F}_q^n$. Let $L$ be a set of $k$-flats that each contain at least $m$ points of $S$, with $|L| = \binom{n}{k}_q$. Then, \[|S| \geq \left (1-q^{n-2k} - \sqrt{q^{n-k} m^{-1}} \right ) mq^{n-k}.\] In particular, the same lower bound holds for $K(q,n,k,m)$. \end{theorem} The proof of Theorem \ref{th:largeMFurst} combines (\ref{eq:kakeya}) with incidence estimates for large sets in finite fields.
The proof of Theorem \ref{th:pureIncidences} relies only on incidence estimates for large sets in finite fields, and does not depend even indirectly on the polynomial method. Lastly, when $n$ is divisible by $k$, we can give a remarkably simple proof (a few lines) of the following bound. \begin{theorem}\label{thm:FurstenbergDiv} Let $q$ be a prime power, and let $n,k$ and $m$ be positive integers such that $m \leq q^k$ and $n$ is divisible by $k$. Then $$K(q,n,k,m)\ge \frac{1}{2^{n/k}} m^{n/k} .$$ \end{theorem} \paragraph{Organization:} We begin in Section~\ref{sec:prelim} with some preliminaries on finite geometry and polynomials over finite fields. In Section~\ref{sec:entropy} we discuss the equivalent entropic formulation to the problem of bounding the size of Furstenberg sets. In Section~\ref{sec:entropykakeya} we prove the one dimensional case of the entropic version using the polynomial method, and in Section~\ref{sec:entorpyfurst} we prove the general case (Theorem~\ref{thm:FurstenbergRecurse}) using recursion. Theorem~\ref{thm:FurstenbergDiv} is proved in Section~\ref{sec:divide} and Theorems \ref{th:largeMFurst} and \ref{th:pureIncidences} are proved in Section~\ref{sec:geometric}. \section{Entropy formulation for the Kakeya problem}\label{sec:entropy} Let $X$ be a random variable (r.v.) taking values in $\mathbb{F}_q^n$. The $q$-ary {\em min entropy} of $X$ (or just min-entropy if $q$ is clear from the context) is defined as \[ {\mathbf H}^{q}_{\infty}(X) = -\log_q \left( \max_{w \in \mathbb{F}_q^n}\mathbf{Pr}[ X = w ] \right). \] For example, if $X$ is distributed uniformly on a set of size $q^k$ then its min-entropy will be exactly $k$. In general, an r.v.\ with min-entropy $k$ must have support size at least $q^k$. We first consider a class of statements which state Furstenberg bounds in the usual manner. \begin{definition}(Furstenberg set bound, $A(n,k)$) Let $1 \leq k < n$ be integers.
We say that the statement $A(n,k)$ holds with constant $C_{n,k}$ if the following is true: \begin{quote} If $S \subset \mathbb{F}_q^n$ is $(k,m)$-Furstenberg then $|S| \geq C_{n,k}\cdot m^{n/k}$. \end{quote} In other words, $A(n,k)$ is the statement that $K(q,n,k,m)\geq C_{n,k}\cdot m^{n/k}$. \end{definition} Note that, as mentioned earlier, the proof of the Kakeya bound in~\cite{dvir2013extensions} shows that for all $n$, $A(n,1)$ holds with $C_{n,1} = 2^{-n}$. We now define a seemingly different statement involving min-entropy of linear maps. \begin{definition}(Linear maps with high min-entropy, $B(n,k)$) Let $1 \leq k < n$ be integers. We say that the statement $B(n,k)$ holds with constant $D_{n,k}$ if the following is true: \begin{quote} For all $\delta \in [0,1]$: if $S \subset \mathbb{F}_q^n$ is of size $|S| = q^{\delta n}$ then there exists an onto linear map $\varphi : \mathbb{F}_q^n \mapsto \mathbb{F}_q^{n-k}$ such that ${\mathbf H}^{q}_{\infty}(\varphi(U_S)) \geq \delta(n-k) - D_{n,k}$, where $U_S$ is a random variable distributed uniformly over $S$. \end{quote} \end{definition} In other words, $B(n,k)$ says that given the random variable $U_S$, which is uniform over a set $S$ of size $q^{\delta n}$ and hence has min-entropy $\delta n$, one can find a linear map that keeps the same {\em relative} min-entropy (the ratio between min-entropy and dimension) up to some small loss $D_{n,k}$. Surprisingly, these two statements turn out to be equivalent for $C_{n,k}\in (0,1]$ and $D_{n,k}\ge 0$, with a simple formula relating $C_{n,k}$ and $D_{n,k}$. The statement $B(n,k)$ is easily generalizable with $U_S$ replaced by a general random variable. The generalization of the statement $B(n,1)$ can be proven using a simple generalization of the proof in \cite{dvir2013extensions}. This generalized statement will allow us to perform induction to prove Furstenberg set bounds. First, we prove the equivalence between $A(n,k)$ and $B(n,k)$ in two lemmas below.
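Before turning to the lemmas, a small worked example may help to illustrate $B(n,k)$; the example is ours (it does not appear in the cited works), and its point is that the map $\varphi$ must be chosen depending on $S$.

```latex
% Illustrative example (added here for exposition, not from the
% cited works): checking B(n,k) on a coordinate subspace.
Take $S=\mathbb{F}_q^k\times\{0\}^{n-k}$, so that $|S|=q^k$ and
$\delta=k/n$. The projection onto the last $n-k$ coordinates sends
every point of $S$ to $0$, so for that map
${\mathbf H}^{q}_{\infty}(\varphi(U_S))=0$, which is a poor choice.
The projection onto the first $n-k$ coordinates, however, makes
$\varphi(U_S)$ uniform over $q^{\min(k,\,n-k)}$ points, and
\[
{\mathbf H}^{q}_{\infty}(\varphi(U_S))=\min(k,\,n-k)
\;\ge\;\frac{k(n-k)}{n}=\delta(n-k),
\]
so for this particular $S$ the statement $B(n,k)$ holds even with
$D_{n,k}=0$.
```

The inequality in the display follows since $\min(k,n-k)\ge k(n-k)/n$ for all $1\le k<n$.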
\begin{lemma}\label{lem-AtoB} For integers $1 \leq k < n$, if $A(n,k)$ holds with constant $0<C_{n,k}\le 1$ then $B(n,k)$ holds with constant $$ D_{n,k} = \frac{k}{n}\cdot \log_q\left(\frac{1}{C_{n,k}}\right) .$$ \end{lemma} \begin{proof} Let $n > k$ and suppose for contradiction that $B(n,k)$ does not hold. This means that there is a set $S \subset \mathbb{F}_q^n$ of size $|S| = q^{\delta n}$ such that for any onto linear map $\varphi : \mathbb{F}_q^n \mapsto \mathbb{F}_q^{n-k}$ we have ${\mathbf H}^{q}_{\infty}(\varphi(U_S)) < \delta(n-k) - D_{n,k}$. By the definition of min-entropy this means that for all $\varphi$ there must exist some $v = v_{\varphi} \in \mathbb{F}_q^{n-k}$ such that \begin{equation}\label{eq-varphi-me} \mathbf{Pr}\left[ \varphi(U_S) = v_\varphi \right] = \frac{ | \varphi^{-1}(v_\varphi) \cap S |}{|S|} > \frac{q^{D_{n,k}}}{q^{\delta(n-k)}}. \end{equation} Let $K_\varphi \subset \mathbb{F}_q^n$ denote the $k$-dimensional kernel of $\varphi$. Then, Eq.~\ref{eq-varphi-me} implies that there is a shift $w_{\varphi} \in \mathbb{F}_q^n$ so that \begin{equation} | (K_{\varphi} + w_{\varphi}) \cap S | > |S|\cdot \frac{q^{D_{n,k}}}{q^{\delta(n-k)}} \geq q^{\delta k + D_{n,k}}. \end{equation} Since $K_\varphi$ can be any $k$-dimensional linear subspace, $S$ is $(k,m)$-Furstenberg with $m> q^{\delta k + D_{n,k}}$. Since $A(n,k)$ holds with constant $C_{n,k}$ we get that \begin{equation} |S| > C_{n,k} \cdot \left( q^{\delta k + D_{n,k}} \right)^{n/k} = C_{n,k}\cdot q^{\frac{n}{k}D_{n,k}} \cdot |S|. \end{equation} Cancelling $|S|$ from both sides and using the expression for $D_{n,k}$, we get a contradiction. \end{proof} \begin{lemma}\label{lem-BtoA} For integers $1 \leq k < n$, if $B(n,k)$ holds with constant $0\le D_{n,k}$ then $A(n,k)$ holds with constant $$ C_{n,k} = q^{-\frac{n}{k}D_{n,k}}.$$ \end{lemma} \begin{proof} Let $S \subset \mathbb{F}_q^n$ be $(k,m)$-Furstenberg.
By definition of Furstenberg set, for any onto linear map $\varphi :\mathbb{F}_q^n \mapsto \mathbb{F}_q^{n-k}$ with $k$-dimensional kernel $K \subset \mathbb{F}_q^n$, there is a shift $w \in \mathbb{F}_q^n$ so that $|(K + w)\cap S| \geq m$. This implies that the random variable $\varphi(U_S)$ obtains the value $\varphi(w) \in \mathbb{F}_q^{n-k}$ with probability at least $m/|S|$. Hence, \begin{equation}\label{eq-varphi-me2} {\mathbf H}^{q}_{\infty}(\varphi(U_S)) \leq \log_q\left(\frac{|S|}{m}\right). \end{equation} Let $\delta$ be defined as \begin{equation} \delta = \frac{1}{n-k}\cdot \log_q\left( \frac{|S|q^{D_{n,k}}}{m}\right) \end{equation} so that \begin{equation} \log_q\left(\frac{|S|}{m}\right) = \delta(n-k) - D_{n,k}. \end{equation} We first consider the case when $\delta\not\in [0,1]$. If $\delta<0$ then $|S|<q^{-D_{n,k}}m$, which is impossible as $D_{n,k}\ge 0$. If $\delta>1$ then we must have $$|S|>q^{-D_{n,k}}q^{n-k}m,$$ which suffices to show $A(n,k)$ holds with $C_{n,k}=q^{-\frac{n}{k}D_{n,k}}$, since $m\le q^k$ gives $m^{n/k}\le q^{n-k}m$ and $q^{-\frac{n}{k}D_{n,k}}\le q^{-D_{n,k}}$. Now we can assume $\delta\in [0,1]$. Applying $B(n,k)$ we see that $|S| \leq q^{\delta n}$ since, if we had $|S| = q^{\delta' n}$ with $\delta' > \delta$, then there would exist a linear map $\varphi$ contradicting Eq.~\ref{eq-varphi-me2}. Plugging in the value for $\delta$ we get $$ |S| \leq q^{\delta n} = \left( \frac{ |S|q^{D_{n,k}}}{m} \right)^{\frac{n}{n-k}}$$ which gives $$ |S| \geq q^{-\frac{n}{k}D_{n,k}} \cdot m^{n/k}. $$ This proves $A(n,k)$ with the claimed expression for $C_{n,k}$. \end{proof} As mentioned earlier we can generalize the statement $B(n,k)$ to statements about general random variables. In particular we will prove the following theorem.
\begin{restatable}[Entropic-Furstenberg bound]{theorem}{entropicFurstenbergRecurse} \label{thm:entropicFurstenbergRecurse} For any random variable $R$ supported over $\mathbb{F}_q^n$ there exists an onto linear map $\phi: \mathbb{F}^n_q\rightarrow \mathbb{F}^{n-k}_q$ such that $$\mathbf{H}^q_\infty(\phi(R))\ge \frac{n-k}{n}\mathbf{H}^q_\infty(R)-\log_q(2-q^{-1})k.$$ \end{restatable} The theorem above proves the statement $B(n,k)$ with constant $D_{n,k}=\log_q(2-q^{-1})k\le k\log_q(2)$. Lemma \ref{lem-BtoA} then proves Theorem \ref{thm:FurstenbergRecurse}. We will prove Theorem \ref{thm:entropicFurstenbergRecurse} using the polynomial method for the case $k=1$; the general case will follow from an inductive argument by composing a sequence of onto maps. For that reason, let us restate the $k=1$ case separately. \begin{restatable}[Entropic bound for $k=1$]{theorem}{entropicKakeya} \label{thm:entropicKakeya} For any random variable $R$ supported over $\mathbb{F}_q^n$ there exists an onto linear map $\phi: \mathbb{F}^n_q\rightarrow \mathbb{F}^{n-1}_q$ such that $$\mathbf{H}^q_\infty(\phi(R))\ge \frac{n-1}{n}\mathbf{H}^q_\infty(R)-\log_q(2-q^{-1}).$$ \end{restatable} \section{Preliminaries}\label{sec:prelim} \subsection{Facts from finite geometry}\label{sec:finiteGeometry} In this section, we review a few basic facts from finite geometry, as well as the results we need from incidence geometry. A {\em $k$-flat} is a translate of a rank $k$ linear subspace. The span of a set $X \subseteq \mathbb{F}_q^n$ is the smallest flat that contains $X$, and is denoted $\overline{X}$. For flats $\Lambda,\Gamma$ in $\mathbb{F}_q^n$, we denote by $\overline{\Lambda,\Gamma}$ the span of $\Lambda \cup \Gamma$. If $\Lambda$ and $\Gamma$ are subspaces ({\em i.e.} they each contain the origin), then \begin{equation}\label{eq:dimSpan} \dim(\overline{\Lambda, \Gamma}) = \dim(\Lambda) + \dim(\Gamma) - \dim(\Lambda \cap \Gamma).
\end{equation} For integers $1 \leq k < n$, the number of rank $k$ subspaces of $\mathbb{F}_q^n$ is given by the $q$-binomial coefficient $\binom{n}{k}_q$. As with ordinary binomial coefficients, the $q$-binomial coefficients are centrally symmetric: \begin{equation} \label{eq:centrallySymmetric} \binom{n}{k}_q = \binom{n}{n-k}_q. \end{equation} The Pascal identities for $q$-binomial coefficients are \begin{align} \label{eq:pascal1} \binom{n}{k}_q &= q^k \binom{n-1}{k}_q + \binom{n-1}{k-1}_q, \text{ and} \\ \label{eq:pascal2} \binom{n}{k}_q &= \binom{n-1}{k}_q + q^{n-k} \binom{n-1}{k-1}_q. \end{align} A direct expression is given by \begin{equation}\label{eq:exactQBinom} \binom{n}{k}_q = \frac{(1 - q^n)(1- q^{n-1}) \ldots (1 - q^{n-k+1})}{(1-q)(1-q^2)\ldots (1-q^k)}.\end{equation} The number of $k$-flats in $\mathbb{F}_q^n$ is $ q^{n-k}\binom{n}{k}_q$. A point is {\em incident} to a flat if the point is contained in the flat. Given a set $L$ of flats, and a set $S$ of points, both in $\mathbb{F}_q^n$, we denote by \[I(S,L) = | \{(p,\ell) \in S \times L : p \in \ell \} | \] the number of incidences between $S$ and $L$. The following bound on the number of incidences between points and $k$-flats was first proved by Haemers \cite[Theorem 3.1.1]{haemers1980eigenvalue}. \begin{lemma}\label{th:incidenceBound} If $S$ is a set of points and $L$ a set of $k$-flats, both in $\mathbb{F}_q^n$, then \[I(S,L) \leq q^{k-n} |S|\, |L| + \sqrt{q^k \binom{n-1}{k}_q |S| \, |L|}. \] \end{lemma} Given a set $S$ of points, a flat is {\em $(S,t)$-rich} if it contains at least $t$ points of $S$. A flat is {\em $(S,t)$-poor} if it contains fewer than $t$ points of $S$. The following upper bound on the number of $(S,t)$-poor flats is a slight reformulation of \cite[Corollary 5]{lund2016incidence}. A slightly weaker bound was proved earlier by Alon \cite{alon1986eigenvalues}. \begin{lemma}\label{th:becks} Let $S \subset \mathbb{F}_q^k$ be a set of $m$ points.
Let $0 < \delta < 1$ and $1 \leq \ell \leq k-1$. The number of $(S,\delta m q^{\ell - k} + 1)$-poor $\ell$-flats is at most \[\left (1 + mq^{\ell-k}(1-\delta)^2 \right )^{-1} q^{k-\ell} \binom{k}{\ell}_q.\] \end{lemma} \subsection{Method of multiplicities}\label{prelim:multiplicities} The results here are from a paper by Dvir, Kopparty, Saraf, and Sudan~\cite{dvir2013extensions}. We state the theorems we need; the proofs can be found in the aforementioned paper. \begin{definition}[Hasse Derivatives] Given a polynomial $P\in \mathbb{F}[x_1,\hdots,x_n]$ and an $i\in \mathbb{Z}_{\ge 0}^n$, the $i$th {\em Hasse derivative} of $P$ is the polynomial $P^{(i)}$ in the expansion $P(x+z)=\sum_{i\in \mathbb{Z}_{\ge 0}^n} P^{(i)}(x)z^i$ where $x=(x_1,...,x_n)$, $z=(z_1,...,z_n)$ and $z^i=\prod_{j=1}^n z_j^{i_j}$. \end{definition} Hasse derivatives satisfy some useful identities. We state the only one we will need. \begin{lemma}\label{lem:chainRule} Given a polynomial $P\in \mathbb{F}[x_1,\hdots,x_n]$ and $i,j\in \mathbb{Z}_{\ge 0}^n$ we have, $$(P^{(i)})^{(j)}=P^{(i+j)}\prod\limits_{k=1}^n\binom{i_k+j_k}{i_k}$$ \end{lemma} We make precise what it means for a polynomial to vanish on a point $a\in \mathbb{F}^n$ with multiplicity. First we recall that for a point $i$ in the non-negative lattice $\mathbb{Z}^n_{\ge 0}$, its weight is defined as $\text{wt}(i)=\sum_{k=1}^n i_k$. \begin{definition}[Multiplicity] For a polynomial $P$ and a point $a$ we say $P$ vanishes on $a$ with {\em multiplicity} $N$, if $N$ is the largest integer such that all Hasse derivatives of $P$ of weight strictly less than $N$ vanish on $a$. We use $\text{mult}(P,a)$ to refer to the multiplicity of $P$ at $a$. \end{definition} Notice that $\text{mult}(P,a)\ge 1$ just means $P(a)=0$. We will use the following simple property concerning multiplicities of composition of polynomials.
\begin{lemma}\label{lem:multComp} Given a polynomial $P\in \mathbb{F}[x_1,\hdots,x_n]$ and a tuple $Q=(Q_1,\hdots,Q_n)$ of polynomials in $\mathbb{F}[y_1,\hdots,y_m]$, and $a\in \mathbb{F}^m$ we have, $$\text{mult}(P\circ Q, a)\ge \text{mult}(P,Q(a)).$$ \end{lemma} The key lemma here is an extended Schwartz-Zippel bound~\cite{schwartz1979probabilistic,ZippelPaper} which leverages multiplicities. \begin{lemma}[Schwartz-Zippel with multiplicity]\label{lem:multSchwartz} Let $f\in \mathbb{F}[x_1,\hdots,x_n]$, with $\mathbb{F}$ an arbitrary field, be a nonzero polynomial of degree at most $d$. Then for any finite subset $U\subseteq \mathbb{F}$, $$\sum\limits_{a\in U^n} \text{mult}(f,a) \le d|U|^{n-1}.$$ \end{lemma} We will also need the following lemma, which lets us find polynomials which vanish on different points with differing multiplicities. \begin{lemma}\label{lem:findPolyVanishMult} Given a non-negative integer $d$ and a set of non-negative integers $N_x$ indexed by elements $x\in \mathbb{F}^n_q$ which satisfy the following bound, $$\sum\limits_{x\in \mathbb{F}^n_q}\binom{N_x+n-1}{n}< \binom{d+n}{n},$$ we can find a non-zero polynomial $P$ of total degree at most $d$ such that for all $x\in \mathbb{F}_q^n$, $P$ vanishes on $x$ with multiplicity at least $N_x$. \end{lemma} \begin{proof} Note $ \binom{d+n}{n}$ is the vector space dimension of the space of polynomials in $n$ variables with total degree at most $d$. The condition of a polynomial vanishing on a point $x$ with multiplicity $N_x$ is defined by $\binom{N_x+n-1}{n}$ many linear equations in the coefficients of the polynomial. The condition of vanishing on $x$ with multiplicity $N_x$ for all $x$ is then defined by at most $\sum_{x\in \mathbb{F}^n_q} \binom{N_x+n-1}{n}$ many linear equations. The condition in the statement of the lemma implies that there are fewer linear equations than the dimension of the space, so we can find a non-zero polynomial which satisfies all these conditions.
\end{proof} \section{Proof of the entropic bound when $k=1$}\label{sec:entropykakeya} We will prove Theorem \ref{thm:entropicKakeya} by first proving an estimate for the $\ell^n$ norm of integer valued functions over $\mathbb{F}^n_q$ and reducing Theorem \ref{thm:entropicKakeya} to it. \begin{theorem}\label{thm:normBound} Given $r\in \mathbb{Z}_{\ge 0}$ and a function $f:\mathbb{F}^n_q\rightarrow \mathbb{Z}$ such that for every direction $\gamma$ there exists a line $E_\gamma$ in that direction with $\sum_{x\in E_\gamma }|f(x)| \ge r$, we have the following bound, $$ \|f\|_{\ell^n}^n=\sum\limits_{x\in \mathbb{F}^n_q} |f(x)|^n\ge \frac{r^n}{\left(2-q^{-1}\right)^n}.$$ \end{theorem} Note that if $f$ is an indicator function for a subset of $\mathbb{F}^n_q$ then the theorem above is simply the Kakeya bound in \cite{dvir2013extensions}. Also note that this theorem can easily be generalized to real valued functions and positive real $r$ by taking ratios and limits. Our proof is a simple modification of the proof of the Kakeya theorem in~\cite{dvir2013extensions}. Indeed, the modification appears in~\cite{ellenberg_oberlin_tao_2010} and is used to prove a more general distributional Kakeya estimate for curves in a slightly different setting with unspecified constants. Their proof does not use the extended method of multiplicities and would require projective transformations and random rotations to reduce to this setting, which would prevent us from getting the best constant. \begin{proof} Fix $m$ to be a positive multiple of $r$. Let $d=mq-r$ and $N=m(2q-1)/r$.
We want to prove the following for large enough values of $m$, \begin{align}\sum\limits_{x\in \mathbb{F}^n_q}\binom{N|f(x)|+n-1}{n}\ge \binom{d+n}{n}.\label{eq:keyBound} \end{align} Dividing by $\binom{d+n}{n}$ on both sides and substituting for $d$ and $N$ gives us, $$\sum\limits_{x\in \mathbb{F}^n_q} \frac{((2q-1)m|f(x)|/r+n-1)\hdots((2q-1)m|f(x)|/r)}{(mq-r+n)\hdots (mq-r+1)}\ge 1.$$ As $m$ can be arbitrarily large, we let it grow towards infinity, which gives us, $$\sum\limits_{x\in \mathbb{F}^n_q}|f(x)|^n\ge \frac{r^n}{(2-q^{-1})^{n}},$$ which is exactly what we want to prove. Hence, we only need to prove \eqref{eq:keyBound} now. Suppose \eqref{eq:keyBound} is false. Using Lemma \ref{lem:findPolyVanishMult}, we can find a non-zero polynomial $P$ of total degree at most $d$ such that it vanishes on each point $x$ of $\mathbb{F}^n_q$ with multiplicity $N|f(x)|$. Let $P^H$ refer to the homogeneous part of $P$ of highest degree. We make the following claim. \begin{claim} For all $b\in \mathbb{F}^n_q$, $$\text{mult}(P^H,b)\ge m.$$ \end{claim} \begin{proof} It is easy to see the statement is true for $b=0$ because $P^H$ is a homogeneous polynomial of degree $d>m$. Recall that for any point $\alpha \in \mathbb{Z}_{\ge 0}^n$ its weight is defined as the sum of its coordinates. Fix any $\alpha\in \mathbb{Z}_{\ge 0}^n$ such that $\text{wt}(\alpha)=m'<m$. Let us consider $Q=P^{(\alpha)}$, that is, the $\alpha$th Hasse derivative of $P$. It will have degree at most $d-m'$ and it will vanish on $x$ with multiplicity at least $\max(N|f(x)|-m',0)$. For any direction $b\in \mathbb{F}^n_q \setminus \{0\}$ we can find a point $a\in \mathbb{F}^n_q$ such that the line $L=\{x:x=a+bt,t\in \mathbb{F}_q\}$ satisfies, \begin{align} \sum\limits_{x\in L} \text{mult}(P,x)\ge \sum\limits_{x\in L} N|f(x)|\ge Nr.\label{eq:multPoly1} \end{align} Then Lemma \ref{lem:chainRule} implies, $$\sum\limits_{x\in L} \text{mult}(Q,x)\ge \sum\limits_{x\in L} \max(N|f(x)|-m',0)\ge Nr-qm'.
$$ $Q(a+bt)$ will be a degree at most $d-m'$ univariate polynomial in $t$. Lemma \ref{lem:multComp} and \eqref{eq:multPoly1} imply, \begin{align} \sum\limits_{t\in \mathbb{F}_q} \text{mult}(Q(a+bt),t)\ge \sum\limits_{x\in L} \text{mult}(Q,x)\ge Nr-qm'.\label{eq:multpoly2} \end{align} If $Q(a+bt)$ is non-zero then Lemma \ref{lem:multSchwartz} and \eqref{eq:multpoly2} give us the bound $Nr-qm'\le d-m'$, which implies $m(q-1)+r\le m'(q-1)$. This leads to a contradiction, proving $Q(a+bt)$ is identically zero. We note $(P^H)^{(\alpha)}$ is precisely the homogeneous part of highest degree of $Q$. $Q(a+bt)$ being identically zero implies $(P^H)^{(\alpha)}$ vanishes on $b$. This proves the claim. \end{proof} Putting everything together we now know that $P^H$, which has total degree at most $d$, vanishes on all values in $\mathbb{F}^n_q$ with multiplicity at least $m$. Lemma \ref{lem:multSchwartz} now implies $mq \le d$, leading to a contradiction. This finishes the proof of the theorem. \end{proof} We are now ready to prove Theorem \ref{thm:entropicKakeya}. \begin{proof}[Proof of Theorem \ref{thm:entropicKakeya}] We will prove this theorem for random variables $R$ such that $\mathbf{Pr}(R=x)$ is a rational number for all $x\in \mathbb{F}^n_q$. A simple limiting argument then gives the statement for all random variables $R$. As mentioned earlier, we will reduce to Theorem \ref{thm:normBound}. We let $\mathbf{Pr}(R=x)=f(x)/S$ for some positive integer $S$ and non-negative integers $f(x)$ for all $x\in \mathbb{F}^n_q$. It is clear that $S=\sum_{x\in \mathbb{F}^n_q} f(x)$. We note that $\mathbf{H}^q_\infty(R)$ is simply $-\log_q(f(v)/S)$, where $v\in \mathbb{F}^n_q$ is the mode of $R$. Given any onto linear map $\phi:\mathbb{F}^n_q\rightarrow \mathbb{F}_q^{n-1}$ we note its kernel will be some line passing through the origin with direction $\gamma$.
It is easy to check that the values $\mathbf{Pr}(\phi(R)=x)$ for $x\in \mathbb{F}^{n-1}_q$ are exactly the sums of $\mathbf{Pr}(R=y)$ over $y$ in the lines in the direction $\gamma$. We denote the set of lines in direction $\gamma$ by $\mathbf{L}_\gamma$. This means we can write $\mathbf{H}^q_\infty(\phi(R))$ as, $$\mathbf{H}^q_\infty(\phi(R))=-\log_q\left(\max\limits_{\ell \in \mathbf{L}_\gamma} \sum\limits_{x\in \ell} \mathbf{Pr}(R=x)\right).$$ We now pick the $\phi$ for which $\mathbf{H}^q_\infty(\phi(R))$ is the largest. This is done by picking the direction $\gamma$ such that $\max_{\ell \in \mathbf{L}_\gamma} \sum\limits_{x\in \ell} \mathbf{Pr}(R=x)$ is the smallest. Let $\gamma_0$ be that direction and let $\max_{\ell \in \mathbf{L}_{\gamma_0}} \sum\limits_{x\in \ell} \mathbf{Pr}(R=x)$ equal $r/S$, where $r$ is some non-negative integer. We can now rewrite the statement of the theorem as follows, \begin{align*} &-\log_q\left(\frac{r}{S}\right)\ge -\frac{n-1}{n}\log_q\left(\frac{f(v)}{S}\right)-\log_q(2-q^{-1})\\ \iff& \frac{S}{r}\ge \frac{1}{2-q^{-1}}\left(\frac{S}{f(v)}\right)^{1-1/n}\\ \iff& \left(\frac{S}{f(v)}\right)^{1/n}\ge \frac{1}{2-q^{-1}}\frac{r}{f(v)}\\ \iff& \sum\limits_{x\in \mathbb{F}^n_q} f(x)f(v)^{n-1}\ge \frac{1}{(2-q^{-1})^n}r^n\numberthis \label{eq:KakeyaMult} \end{align*} Noting that $f(v)\ge f(x)\ge 0$ for all $x$, \eqref{eq:KakeyaMult} immediately follows from Theorem \ref{thm:normBound}. \end{proof} \section{Proving the general entropic bound}\label{sec:entorpyfurst} Let us first prove Theorem \ref{thm:entropicFurstenbergRecurse}, which is obtained from Theorem \ref{thm:entropicKakeya} by a simple recursion. \begin{proof}[Proof of Theorem \ref{thm:entropicFurstenbergRecurse}] We induct over $k$. Theorem \ref{thm:entropicKakeya} is precisely the case $k=1$. Now, let it be true for some fixed $k$.
This means given any random variable $R$ supported over $\mathbb{F}_q^n$ we can find an onto linear map $\phi:\mathbb{F}_q^n\rightarrow \mathbb{F}_q^{n-k}$ such that, \begin{align} \mathbf{H}^q_\infty(\phi(R))\ge \frac{n-k}{n}\mathbf{H}^q_\infty(R)-\log_q(2-q^{-1})k.\label{eq:entrkInduct} \end{align} Applying Theorem \ref{thm:entropicKakeya} to $\phi(R)$ we can find another onto linear map $\psi:\mathbb{F}_q^{n-k}\rightarrow \mathbb{F}_q^{n-k-1}$ such that, \begin{align} \mathbf{H}^q_\infty(\psi(\phi(R))) \ge \frac{n-k-1}{n-k}\mathbf{H}^q_\infty(\phi(R))-\log_q(2-q^{-1}).\label{eq:entrk1} \end{align} Substituting \eqref{eq:entrkInduct} in \eqref{eq:entrk1} proves the required statement. \end{proof} \section{Better bounds when $n$ is divisible by $k$}\label{sec:divide} In this section we will prove Theorem \ref{thm:FurstenbergDiv} which gives us much better bounds in the case when $n$ is divisible by $k$. \begin{proof}[Proof of Theorem \ref{thm:FurstenbergDiv}] As $k$ is a factor of $n$ we can find a positive integer $r$ such that $n=rk$. Note there exists an $\mathbb{F}_q$-linear isomorphism between $\mathbb{F}_q^{n}$ and $\mathbb{F}_{q^k}^{r}$. This quickly follows from the fact that $\mathbb{F}_{q^k}$ is by definition $\mathbb{F}_q[x]/I$ where $I$ is a principal ideal generated by a degree $k$ irreducible polynomial in $\mathbb{F}_q[x]$. This allows us to treat a point set $S$ in $\mathbb{F}_q^n$ as a point set in $\mathbb{F}_{q^k}^r$. Also, it is easy to see that any line in $\mathbb{F}_{q^k}^r$ is a $k$-flat in $\mathbb{F}_q^n$, with lines through the origin corresponding to rank $k$ subspaces. This means that a $(k,m)$-Furstenberg set $S$ in $\mathbb{F}_q^n$ is a $(1,m)$-Furstenberg set in $\mathbb{F}_{q^k}^r$. Using the generalization of the Kakeya bound \eqref{eq:kakeya} to arbitrary $m$ from \cite{dvir2013extensions}, namely $K(q',r,1,m)\ge 2^{-r}m^{r}$, applied over the field of order $q'=q^k$, we have $$|S|\ge \frac{1}{2^{n/k}}m^{n/k},$$ which is precisely what we wanted. \end{proof} One could use a similar argument to prove bounds in the style of Theorem \ref{thm:entropicFurstenbergRecurse} with better constants.
In fact, when $n-k$ has a factor smaller than $k$ we can combine the recursive argument of Theorem \ref{thm:entropicFurstenbergRecurse} and the argument presented in this section to obtain slightly better constants for Furstenberg set bounds. \section{Proof of Theorems \ref{th:largeMFurst} and \ref{th:pureIncidences}}\label{sec:geometric} We start by proving three lemmas. The proof of Theorem \ref{th:pureIncidences} depends only on Lemma \ref{th:heavyFlatsCoverManyPoints}. The other two lemmas are only needed in the proof of Theorem \ref{th:largeMFurst}. The first lemma shows that a set of flats witnessing a Furstenberg set contains many flats of lower dimension. \begin{lemma}\label{th:kakeyaForFlats} Let $F$ be a set of $k$-flats in $\mathbb{F}_q^n$, one parallel to each rank $k$ subspace, with $2 \leq k < n$. Let $1 \leq \ell < k$. The number of $\ell$-flats that are each contained in some flat of $F$ is at least $K(q,n-\ell,k-\ell, q^{k - \ell}) \binom{n}{\ell}_q$. In other words, the proportion of $\ell$-flats contained in some flat of $F$ is at least $ q^{\ell-n} K(q,n-\ell,k-\ell,q^{k-\ell})$. \end{lemma} \begin{proof} The basic observation behind this lemma is that the $\ell$-flats that are contained in $F$ and are parallel to a fixed rank $\ell$ subspace are the points in a lower dimensional Furstenberg set. The bound in the conclusion of the lemma comes from summing over all rank $\ell$ subspaces of $\mathbb{F}_q^n$. For each rank $\ell$ subspace $\Lambda$, choose a rank $n-\ell$ subspace $P_\Lambda$ so that $\Lambda \cap P_\Lambda$ is the origin. Since $\dim(\Lambda \cap P_\Lambda) = 0$, equation (\ref{eq:dimSpan}) implies that $\overline{\Lambda,P_\Lambda} = \mathbb{F}_q^n$. Let $F_\Lambda \subset F$ be those flats of $F$ that contain a translate of $\Lambda$. We will show that $K_\Lambda = \bigcup_{\Gamma \in F_{\Lambda}} (\Gamma \cap P_{\Lambda})$ is a $(k-\ell, q^{k-\ell})$-Furstenberg set in $P_\Lambda$.
Let $g$ be the map from $k$-dimensional subspaces of $\mathbb{F}_q^n$ that contain $\Lambda$ to subspaces of $P_\Lambda$ defined by $g(\Gamma) = P_\Lambda \cap \Gamma$. Since $\overline{\Gamma, P_\Lambda} = \mathbb{F}_q^n$ for any subspace $\Gamma$ that contains $\Lambda$, (\ref{eq:dimSpan}) implies that every subspace in the image of $g$ has rank $k-\ell$. In addition, any rank $k-\ell$ subspace $H$ contained in $P_\Lambda$ intersects $\Lambda$ only at the origin, so $\dim(\overline{\Lambda, H}) = k$. Consequently, $g$ is bijective. For any vector $v \in \mathbb{F}_q^n$, let $v = v_\Lambda + v_{P_\Lambda}$, where $v_\Lambda \in \Lambda$ and $v_{P_\Lambda} \in P_\Lambda$. Since $\overline{\Lambda,P_\Lambda} = \mathbb{F}_q^n$, this is always possible. Let $\Gamma$ be a rank $k$ subspace that contains $\Lambda$. Since $\Lambda \subset \Gamma$, we have $\Gamma + v = \Gamma + v_\Lambda + v_{P_\Lambda} = \Gamma + v_{P_\Lambda}$. For any $u \in \Gamma \cap P_\Lambda$, we have $u + v_{P_\Lambda} \in (\Gamma + v_{P_\Lambda}) \cap (P_\Lambda + v_{P_\Lambda}) = (\Gamma + v_{P_\Lambda}) \cap P_\Lambda$. Hence, $(\Gamma + v) \cap P_\Lambda = (\Gamma \cap P_\Lambda) + v_{P_\Lambda}$. We are now ready to show that $K_\Lambda$ is a $(k-\ell, q^{k-\ell})$-Furstenberg set. Let $H$ be a $(k-\ell)$-dimensional subspace contained in $P_\Lambda$. By the hypothesis on $F$, there is $v \in \mathbb{F}_q^n$ such that $g^{-1}(H) + v \in F_\Lambda$. Hence, $H + v_{P_\Lambda} \subseteq K_\Lambda$. By definition, $|K_\Lambda| \geq K(q,n-\ell,k-\ell, q^{k-\ell})$. Each point in $K_\Lambda$ is the intersection of $P_\Lambda$ with an $\ell$-flat parallel to $\Lambda$ that is contained in some flat of $F$. Let $L_\Lambda$ be the set of $\ell$-flats corresponding to points of $K_\Lambda$. Since each $\ell$-flat is a translate of exactly one rank $\ell$ subspace, we have that $L_\Lambda \cap L_{\Lambda'} = \emptyset$ for $\Lambda \neq \Lambda'$.
Hence, \[ \left| \bigcup_{\Lambda} L_\Lambda \right | = \sum_{\Lambda} |L_\Lambda| = \sum_{\Lambda} |K_\Lambda| \geq \binom{n}{\ell}_q K(q,n-\ell,k-\ell,q^{k-\ell}), \] where $\Lambda$ ranges over all rank $\ell$ subspaces of $\mathbb{F}_q^n$. \end{proof} For the proof of Theorem \ref{th:largeMFurst}, we only need the case $\ell = k-1$ of Lemma \ref{th:kakeyaForFlats}. The application of (\ref{eq:kakeya}) to obtain an explicit bound on $K(q,n,1,q)$ for use with Lemma \ref{th:kakeyaForFlats} is the only application in this section of any result proved using the polynomial method. \begin{lemma}\label{th:kakeyaBecks} Let $S$ be a $(k,m)$-Furstenberg set in $\mathbb{F}_q^n$. Let $\delta < 1$. Let $G_r$ be the set of $(k-1)$-flats that are each incident to at least $r = \delta mq^{-1} + 1$ points of $S$. If $m \geq 2^{n+3-k} q (1-\delta)^{-2}$, then $|G_r| > 2^{k-2-n}q^{n-k+1} \binom{n}{k-1}_q$. \end{lemma} \begin{proof} Let $F$ be a set of $k$-flats that each intersect $S$ in at least $m$ points, such that one flat of $F$ is parallel to each rank $k$ subspace. By Lemma \ref{th:kakeyaForFlats}, there is a set $G$ of $(k-1)$-flats contained in the flats of $F$, with $|G| \geq 2^{k-1-n} q^{n-k+1} \binom{n}{k-1}_q$. Let $G_p \subseteq G$ be those flats of $G$ that are $(S,r)$-poor. We will show that $|G_p| < 2^{-1}|G|$, which implies the conclusion of the lemma. Applying Lemma \ref{th:becks}, the number of $(S,r)$-poor $(k-1)$-flats contained in any given $k$-flat is at most $(1+mq^{-1}(1-\delta)^2)^{-1}q^k$.
Hence, \begin{equation}\label{eq:firstGpBound} |G_p| \leq |F| (1+mq^{-1}(1-\delta)^2)^{-1}q^k = (1+mq^{-1}(1-\delta)^2)^{-1} \binom{n}{k}_q q^k.\end{equation} Using the exact expression (\ref{eq:exactQBinom}) for $q$-binomial coefficients, \[\frac{\binom{n}{k}_q}{\binom{n}{k-1}_q} = \frac{1-q^{n-k+1}}{1-q^k} < \frac{q^k}{q^k-1} \frac{q^{n-k+1}}{q^k} < 2 \, \frac{q^{n-k+1}}{q^k}.\] Combining this with (\ref{eq:firstGpBound}), \begin{equation}|G_p| < 2(1+mq^{-1}(1-\delta)^2)^{-1} \binom{n}{k-1}_q q^{n-k+1}.\end{equation} Hence, if $(1+mq^{-1}(1-\delta)^2)^{-1} \leq 2^{k-3-n}$, then $|G_p| < 2^{-1}|G|$; this follows directly from the hypothesis on $m$. \end{proof} The next lemma is essentially a reformulation of Lemma \ref{th:incidenceBound}. \begin{lemma}\label{th:heavyFlatsCoverManyPoints} Let $P \subseteq \mathbb{F}_q^n$ be a set of points. Let $\delta, \gamma > 0$, and let $L$ be a set of $\ell$-flats that each contain at least $\delta q^\ell$ points of $P$, and suppose that $|L| = \gamma q^{n-\ell} \binom{n}{\ell}_q$. Let $\kappa = \gamma q^\ell$. Then, \[|P| \geq \left ( \delta \kappa (\kappa+1)^{-1} - \sqrt{\delta(1-\delta) \kappa^{-1}} \right ) q^n. \] \end{lemma} \begin{proof} Let $\varepsilon = |P| q^{-n} < \delta$. Then by Lemma \ref{th:incidenceBound}, we have \[\delta q^\ell |L| \leq \varepsilon q^\ell |L| + \sqrt{q^\ell \binom{n-1}{\ell}_q |P| \, |L| \left (1 - q^{-n}|P| \right )}. \] Rearranging, \[(\delta - \varepsilon)^2 q^\ell |L| \leq \varepsilon q^n (1-\varepsilon) \binom{n-1}{\ell}_q. \] Since $\binom{n}{\ell}_q > q^\ell \binom{n-1}{\ell}_q$, applying the hypothesis on $|L|$ gives \begin{equation}\label{eq:quad}(\delta - \varepsilon)^2 q^\ell \gamma - \varepsilon(1-\varepsilon) < 0. \end{equation} Since the coefficient of $\varepsilon^2$ in (\ref{eq:quad}) is positive, $\varepsilon$ must be greater than the smaller root of (\ref{eq:quad}).
Hence, \begin{align*} \varepsilon &> \frac{1 + 2 \delta \kappa - \sqrt{(2\delta \kappa + 1)^2 - 4(\kappa + 1) \delta^2 \kappa}}{2(\kappa + 1)} \\ &= \frac{1 + 2 \delta \kappa - \sqrt{1 + 4 \delta \kappa (1 - \delta)}}{2(\kappa + 1)}\\ &> \frac{\delta \kappa - \sqrt{\delta \kappa (1 - \delta)}}{\kappa + 1} \\ &> \delta \kappa (\kappa+1)^{-1} - \sqrt{\delta(1-\delta) \kappa^{-1}}. \end{align*} \end{proof} We are now ready to prove Theorems \ref{th:largeMFurst} and \ref{th:pureIncidences}. \begin{proof}[Proof of Theorem \ref{th:pureIncidences}] Apply Lemma \ref{th:heavyFlatsCoverManyPoints} with $\delta = m q^{-k}$. \end{proof} \begin{proof}[Proof of Theorem \ref{th:largeMFurst}] Apply Lemma \ref{th:kakeyaBecks} to $S$ with $\delta = 1-\varepsilon/4$. This gives a set $G_r$ of $(k-1)$-flats, each incident to more than $(1-\varepsilon/4)mq^{-1}$ points of $S$, with $|G_r| > 2^{k-2-n}q^{n-k+1}\binom{n}{k-1}_q$. Next apply Lemma \ref{th:heavyFlatsCoverManyPoints} to $G_r$ with $\delta = (1-\varepsilon/4)mq^{-k}$, $\ell = k-1$, and $\gamma = 2^{k-2-n}$. As in Lemma \ref{th:heavyFlatsCoverManyPoints}, let $\kappa = \gamma q^{k-1}$. Note that $q^k \geq m \geq 2^{n+7-k} q \varepsilon^{-2}$, and hence \begin{align*} \kappa (1 + \kappa)^{-1} &\geq 1- \varepsilon^2 2^{-5} > 1-\varepsilon/4, \text{ and} \\ \kappa^{-1} &\leq 2^{-5} \varepsilon^2. \end{align*} Thus we have \begin{align*} |S|q^{-n} &\geq \delta \kappa (\kappa+1)^{-1} - \sqrt{\delta(1-\delta) \kappa^{-1}} \\ &> \delta (1-\varepsilon/4) - \sqrt{\delta} (\varepsilon / 4) \\ &> (1 - \varepsilon/4)(1-\varepsilon/2)mq^{-k} \\ &> (1-\varepsilon)mq^{-k}. \end{align*} \end{proof}
\section{Introduction}\label{sec:intro} \input{introduction} \section{Background}\label{sec:back} We characterize the performance of integrating RADICAL-Pilot (RP) and PMIx Reference RunTime Environment (PRRTE), and RP and IBM Job Step Manager (JSM)\@. These enable the concurrent execution of thousands of application tasks on Summit. \subsection{Process Management Interface for Exascale}\label{ssec:pmix} The Process Management Interface for Exascale~(PMIx)~\cite{www:pmix-standard} is an open source standard that extends the prior PMI~v1~\&~v2 interfaces used to launch tasks on compute resources. PMIx provides a method for tools and applications to interact with system-level resource managers and process launch mechanisms. PMIx provides a bridge between such clients and underlying execution services, e.g., process launch, signaling, event notification. The clients communicate with PMIx-enabled servers, which may support different versions of the standard. PMIx can also be used as a coordination and resource discovery mechanism, e.g., to obtain machine topology information. \subsection{IBM Job Step Manager}\label{ssec:jsrun} The IBM Spectrum Load Sharing Facility~(LSF) software stack is used to manage the resources of the Summit system. This includes a job scheduler that manages resources and provides allocations based on user-provided submissions. The Job Step Manager~(JSM) provides services for starting tasks on compute resources within an allocation~\cite{www:jsm}. The \texttt{jsrun} command enables a user to launch an executable on the remote nodes within a user's job allocation. When the user is allocated a collection of compute nodes by the batch system, a daemon (\texttt{jsmd}) is launched on each of the compute nodes in the allocation. These daemons are then responsible for launching the user's processes on their respective nodes in response to future \texttt{jsrun} commands.
There are two startup modes for launching the \texttt{jsmd} daemons: \emph{SSH mode} and \emph{non-SSH mode}~\cite{www:jsm}. As the name suggests, when running in \emph{SSH mode}, Secure Shell is used for launching the \texttt{jsmd} processes on the remote nodes of the allocation. The other option leverages the IBM Cluster Systems Manager~(CSM) infrastructure to bootstrap the JSM daemons within the allocation. Currently, the default mode on Summit is to use CSM\@. Once the JSM daemons are launched, they internally use PMIx~\cite{www:pmix-standard} to launch, signal, and manage processes on the compute resources. \subsection{PMIx Reference RunTime Environment}\label{ssec:prrte} A reference implementation of the PMIx server-side capabilities is available via the PMIx Reference RunTime Environment~(PRRTE)~\cite{castain:pc18:pmix}. PRRTE leverages the modular component architecture~(MCA) that was developed for Open~MPI~\cite{gabriel04:_open_mpi}, which enables execution-time customization of the runtime capabilities. The PRRTE implementation provides a portable runtime layer that users can leverage to launch a PMIx server. PRRTE includes a persistent mode called the Distributed Virtual Machine~(DVM), which uses system-native launch mechanisms to bootstrap an overlay runtime environment that can then be used to launch tasks via the PMIx interface. This removes the need to bootstrap the runtime layer on each task launch invocation. Instead, after the launch of the DVM, a tool connects and sends a request to start a task. The request is processed and generates a launch message that is sent to the PRRTE daemons. These daemons then launch the task. Internally, this task tracking is referred to as a \emph{PRRTE job}, not to be confused with the batch job managed by the system-wide scheduler. The stages of each PRRTE job are tracked from initialization through completion.
We can roughly divide the lifetime of a PRRTE job into the following stages (marked by internal PRRTE state change events): (i) \texttt{init\_complete} to \texttt{pending\_app\_launch}---time to set up the task and prepare launch details; (ii) \texttt{sending\_launch\_msg} to \texttt{running}---time to send the process launch request to the PRRTE daemons and to enact it on the target nodes; and (iii) \texttt{running} to \texttt{notify\_complete}---duration of the application plus the time to collect the task completion notification. In our experiments, we record the time for the transition between these stages to provide insight into the time spent in the runtime layer when running tasks driven by RP. It should be noted that these phases do not include the time between the user launching a PRRTE task and PRRTE initiating processing for this task (e.g., due to file system delays or dynamic library loading). \subsection{RADICAL-Pilot}\label{ssec:rp} \begin{figure} \centering \includegraphics[trim=0 0 0 0,clip,width=0.45\textwidth]{rp-jsrun-prrte-summit.pdf} \Description{An architecture block diagram describing the integration between RP and PRRTE.} \caption{\footnotesize Deployment of RP on Summit with PRRTE/DVM.}\label{fig:rp-on-summit} \end{figure} RP~\cite{merzky2018using} is a runtime system designed to decouple resource acquisition from task execution. Like every pilot system, RP acquires resources by submitting a batch job, then bootstraps dedicated software components on those resources to schedule, place and launch application tasks, independent from the machine batch system~\cite{turilli2018comprehensive}. Scheduling, placing and launching capabilities are specific to each HPC platform, which makes supporting diverse platforms with the same code base challenging. RP can execute single- or multi-core/GPU tasks within a single compute node, or across multiple nodes.
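The stage timings above can be derived by grouping state-change timestamps pairwise. The sketch below illustrates the idea; the event names follow the stages described in the text, while the helper function and the timestamp values are ours, not part of PRRTE.

```python
# Sketch: deriving the three PRRTE job stage durations from state-change
# timestamps. The event names follow the stages described in the text;
# the timestamp values (seconds, relative to job init) are hypothetical.

def stage_durations(ts):
    return {
        "setup":  ts["pending_app_launch"] - ts["init_complete"],
        "launch": ts["running"] - ts["sending_launch_msg"],
        "run":    ts["notify_complete"] - ts["running"],
    }

events = {
    "init_complete":      0.00,
    "pending_app_launch": 0.02,
    "sending_launch_msg": 0.03,
    "running":            0.07,
    "notify_complete":    900.10,  # application ran for ~900s
}

durations = stage_durations(events)
```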
RP isolates the execution of each task into a dedicated process, enabling concurrent and sequential execution of heterogeneous tasks by design. RP is a distributed system designed to instantiate its components across available resources, depending on the platform specifics. Each component can be individually configured so as to enable further tailoring while minimizing code refactoring. RP uses RADICAL-SAGA~\cite{saga-x} to support all the major batch systems, including Slurm, PBSPro, Torque and LSF\@. RP also supports many methods to perform node and core/GPU placement, process pinning and task launching, such as aprun, JSM, PRRTE, mpirun, mpiexec and ssh. RP is composed of two main components: Client and Agent. Client executes on any machine while Agent bootstraps on one of Summit's batch nodes. Agent is launched by a batch job submitted to Summit's LSF batch system via RADICAL-SAGA\@. After bootstrapping, Agent pulls bundles of tasks from Client, manages the tasks' data dependences if any, and then schedules tasks for execution via either JSM/LSF or PRRTE/DVM\@. How Agent deploys on Summit depends on several configurable parameters, such as the number of sub-agents, the number of schedulers and executors per sub-agent, the method of communication between agent and sub-agents, and the method of placing and launching tasks for each executor of each sub-agent. A default deployment of Agent instantiates a single sub-agent, scheduler and executor on a batch node of Summit. The executor calls one \texttt{jsrun} command for each task, and each \texttt{jsrun} uses the \texttt{jsmd} daemon to place and launch the task on worker node resources (threads, cores, GPUs). Fig.~\ref{fig:rp-on-summit} shows the deployment of RP with PRRTE: Agent on a batch node and one sub-agent on a compute node. In this configuration, RP uses SSH to launch sub-agents on compute nodes and then PRRTE/DVM to place and launch tasks across compute nodes.
This configuration enables the sub-agent to use more resources and, as shown in the next section, improves scalability and performance of task execution. Note that, independent from the configuration and methods used, RP can concurrently place and launch different types of tasks that use different amounts and types of resources. \section{Performance Characterization}\label{sec:performance} The performance space of RP is vast, including the execution of both homogeneous and heterogeneous tasks, with and without data dependences and at both small and large scales. We thus divide our performance characterization into three phases: (1) scaling the concurrent execution of short, single-core tasks with both JSM and PRRTE\@; (2) optimizing baseline performance for homogeneous real-life workloads with the better performing of JSM and PRRTE\@; (3) tailoring performance to data-intensive, compute-intensive and GPU-intensive workloads. We present the results of the first phase, offering a baseline that we use to drive our development process. Task here indicates a self-contained executable, executed as one or more processes on the operating system of a Summit compute node. RP, JSM and PRRTE introduce time overheads in task execution. These systems require time to schedule, place and launch the task executions. We quantify and compare these overheads, measuring how they change with the scaling of the number of concurrently executed tasks and the amount of used resources. We differentiate between individual overheads and the integration of these overheads over the execution of the workload. Individual overheads account for the amount of time that single operations add to the execution time of a task. For example, how much time RP takes to schedule a task or PRRTE takes communicating to launch that task. Aggregated overheads indicate how much time performing a group of operations adds to the execution of all the workload tasks.
Aggregated overheads account for the overlapping of multiple concurrent operations. For example, given 10 tasks, a scheduling rate of 1 task/s and a scheduling time of 5s per task, the aggregated scheduling overhead would be 15s for full concurrency, while the individual scheduling overhead for each task would be 5s. The aggregation of the individual overheads across the entire execution determines how available resources are utilized when executing the workload. RP, JSM and PRRTE require resources to perform their operations and some of these operations may partially or globally block the use of available resources. We measure resource utilization showing the portion of resources used or blocked by each system and the percentage of available resources utilized to execute the workload. \subsection{Experiments Design}\label{ssec:exp_design} We perform 4 experiments to measure and compare the performance of RP, JSM and PRRTE when concurrently executing many-task workloads on Summit. Task execution requires assigning suitable resources to the tasks, placing them on resources (i.e., a specific compute node, core, GPU or hardware thread) and then launching the execution of those tasks. RP tracks both tasks and available resources, scheduling the former onto the latter; JSM or PRRTE enact task placement and launching. \begin{figure} \centering \includegraphics[trim=0 0 0 80,clip,width=0.49\textwidth,valign=t]{exp1-2-ttc.pdf} \includegraphics[trim=0 0 0 80,clip,width=0.49\textwidth,valign=t]{exp3-4-ttc.pdf} \Description{Bar plot.} \caption{\footnotesize Measured and ideal total execution time (TTX) of the workloads of Experiments 1--3 (green) and 4 (gray).}\label{fig:exp-ttc} \vspace*{-1.7em} \end{figure} Experiment 1 quantifies the aggregated overhead of RP, measured as the time required to acquire the workload and schedule its tasks on the available resources. Experiment 1 measures this overhead with both JSM and PRRTE\@.
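The 10-task example above can be reproduced with a minimal model; this is a sketch, assuming tasks enter the scheduler at a fixed rate and operations on different tasks fully overlap.

```python
# Sketch of the aggregated vs. individual overhead example from the text:
# 10 tasks, a scheduling rate of 1 task/s, and a scheduling time of 5s per
# task. Because operations on different tasks overlap, the aggregated
# overhead is far less than 10 * 5s.

def aggregated_scheduling_overhead(n_tasks, rate, per_task):
    # Tasks enter the scheduler at `rate` tasks/s; the last one enters at
    # n_tasks / rate and leaves per_task seconds later (full concurrency).
    return n_tasks / rate + per_task

individual = 5.0                                                  # seconds per task
aggregated = aggregated_scheduling_overhead(10, 1.0, individual)  # 15.0 seconds
```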
Experiment 2 quantifies the aggregated overheads of JSM and PRRTE, measured as the time from when they receive the tasks from RP to when the tasks start to execute. Experiment 3 measures RP and PRRTE aggregated overheads and resource utilization at scales beyond those currently supported by JSM\@. Experiment 4 shows the performance improvement obtained by reducing the overheads measured in Experiments 1--3. In Experiments 1--4 we execute workloads with between 2 and 16384 tasks, each requiring 1 core and executing for 900s. Given Summit's architecture, we utilize between 1 and 410 compute nodes, i.e., 42 to 17220 cores, using the SMT1 setting for Summit nodes. Our experiments maximize execution concurrency and therefore the pressure on RP, JSM and PRRTE capabilities. Any lower degree of execution concurrency would require fewer scheduling, placement and execution operations, resulting in a better performance of our systems. As such, our experiments measure the baseline of the combined scaling performance of RP, JSM or PRRTE on Summit for homogeneous compute-intensive, many-task workloads. Experiments 1--4 make a set of reasonable simplifications: each task executes the \texttt{stress} command for exactly 900s, a trade-off between core allocation cost and the need to stress RP, JSM and PRRTE performance. We do not perform I/O operations as they would be irrelevant to our characterization. JSM and PRRTE do not manage task-level data while RP only links data on the available filesystems and ensures locality of output data. In this context, data performance depends on the storage systems and the executable capabilities and should be independently characterized. Analogously, in our experiments we do not use real workload executables. RP, JSM and PRRTE make sure that the executable of a task is launched on the required resources but play no role in its ongoing execution. The executable is self-contained and completely independent from RP, JSM and PRRTE\@.
Thus, the measurements we present apply to every homogeneous workload, independent of the scientific domain and the specifics of the code executed. For our experiments we use JSM/jsrun 10.03.00, built for PMIx 3.1.3rc1; PRRTE v1.0.0 with 2 minor patches, built for PMIx 3.1.3; and RADICAL-Cybertools v0.70.3. The data, analysis and code of our experiments are available at~\cite{turilli2019prrte-exp}. Fig.~\ref{fig:exp-ttc} shows the total execution time (TTX) of the workloads of Experiments 1--4. The black line indicates the ideal execution time, when both software and hardware overheads are assumed to be zero. As expected, the aggregated overheads of the execution increase with the number of tasks and compute nodes utilized. The last column shows Experiment 4 and the marked improvement made by addressing the overheads measured in Experiments 1--3. \subsection{Experiment 1: RP Aggregated Overhead}\label{ssec:exp1} Fig.~\ref{fig:exp1} shows the RP aggregated overhead when using either JSM or PRRTE to place and launch between 2 and 1024 single-core tasks on between 1 and 49 Summit compute nodes. \begin{figure} \centering \includegraphics[trim=0 10 0 0,clip,width=0.49\textwidth]{exp1.pdf} \Description{Bar plot.} \caption{\footnotesize Experiment 1: RP aggregated overheads when scheduling 2--1024 single-core tasks on 1--49 compute nodes on Summit.}\label{fig:exp1} \end{figure} The mean of the aggregated overhead of RP grows exponentially with scale, but differences can be noted between JSM and PRRTE when executing 2 to 8 tasks. With JSM, the aggregated overhead is relatively stable but with large variability at 2 and 32 tasks. With PRRTE, the aggregated overhead grows with minimal variability. Assuming ideal concurrency and resource availability, all the tasks would concurrently execute in 900 seconds. Comparatively, the mean aggregated overhead of RP is \(<5\%\) of the ideal total execution time (TTX) with JSM, and \(<25\%\) with PRRTE\@.
Across all scales, the mean of the aggregated overhead of RP is consistently higher with PRRTE than with JSM\@. This is due to a communication delay we introduced when pushing tasks to PRRTE\@. RP's task scheduling rate is higher than PRRTE's task ingestion rate, and exceeding the latter causes PRRTE to fail. We thus slow down RP's scheduling rate by introducing an artificial 0.1s delay per task. PRRTE Wait in Fig.~\ref{fig:exp1} shows the portion of the RP aggregated overhead which is due to the delay we introduced. As we are measuring an aggregated overhead, the delays accumulate across the whole execution. PRRTE Wait dominates the RP overhead, showing how, in relative terms, the PRRTE overhead is smaller than that of JSM\@. Accounting for PRRTE Wait, the mean aggregated overhead of RP is below 3\% of the ideal execution time with PRRTE\@. We used test runs to empirically derive the amount of delay to introduce in RP when communicating with PRRTE\@. Exceeding the sustainable submission rate with PRRTE leads to task submission errors from which RP could recover, at the cost of reduced utilization. At a rate of 10 tasks/second we observe stable operation of the PRRTE DVM\@. Similar test runs uncovered the failure modes of JSM\@. Originally, exceeding the sustainable rate of calls to JSM caused the LSF daemon to unrecoverably fail. This crashed the LSF job with which RP acquired its resources, causing the failure of the workload execution. Recent updates to LSF on Summit resolved those issues, and no delays are required for sequential JSM invocations.
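A minimal sketch of the throttling just described follows; the `submit` callable and the loop are illustrative stand-ins, not the actual RP or PRRTE API, and only the 0.1s per-task delay comes from our measurements.

```python
import time

# Sketch of the artificial submission throttle: a fixed 0.1s delay per
# task keeps the submission rate at <= 10 tasks/s, at the cost of a wait
# that accumulates linearly over the run. `submit` is a hypothetical
# stand-in for the actual RP-to-PRRTE submission call.

DELAY = 0.1  # seconds of wait added per task

def submit_throttled(tasks, submit, delay=DELAY, sleep=time.sleep):
    for task in tasks:
        submit(task)
        sleep(delay)              # cap the rate at 1/delay tasks per second
    return len(tasks) * delay     # aggregated wait added to the run

launched = []
# sleep is stubbed out here so the sketch runs instantly
added_wait = submit_throttled(range(1024), launched.append, sleep=lambda s: None)
```

At 1024 tasks the throttle alone adds roughly 102s of aggregated wait, which is why PRRTE Wait dominates the RP overhead at scale.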
This is because the delay introduced in RP makes the aggregation of the PRRTE individual overheads additive: the delay is longer than the PRRTE task placement and launch time. \begin{figure} \centering \includegraphics[trim=0 13 0 8,clip,width=0.49\textwidth]{exp2.pdf} \Description{Bar plot.} \caption{\footnotesize Experiment 2: JSM and PRRTE aggregated overheads when placing and launching 1--2048 single-core tasks on 1--49 nodes.}\label{fig:exp2} \end{figure} The aggregated overheads of RP, JSM and PRRTE do not sum to the total overhead as measured in Fig.~\ref{fig:exp-ttc}. RP can schedule tasks while other tasks are being placed and launched by JSM or PRRTE\@. Thus, the aggregated overhead of RP and those of JSM or PRRTE may overlap during execution, resulting in a single overhead when projected onto TTX\@. Both aggregated overheads plateau just below 1024 tasks and 25 nodes. This is due to task failure: both JSM and PRRTE can execute up to 967 tasks. Above that, the remaining tasks fail, creating a steady overhead. This upper bound depends on the limit of 4096 open files imposed by the system on a batch node. This results in a maximum of O(1000) tasks as each task consumes at least three file descriptors: standard input, output, and error. \subsection{Experiment 3: RP and PRRTE at Scale}\label{ssec:exp3} We overcome the limit of O(1000) concurrent tasks by running multiple instances of the RP executor on compute nodes and reconfiguring the open files limit to 65536. This allows up to \({\sim}22000\) concurrent tasks per executor, but this approach works only with PRRTE\@. The open files limit cannot be increased with JSM, and JSM becomes unstable with concurrent RP executors. Thus, we could not execute \(>967\) concurrent tasks with JSM\@. Fig.~\ref{fig:exp3} shows the behavior of the aggregated overheads of both RP and PRRTE at scale.
As already observed in Experiment 2, these overheads grow exponentially and, for RP, the waiting time introduced when communicating with PRRTE remains dominant. \begin{figure} \centering \includegraphics[trim=0 13 0 10,clip,width=0.49\textwidth,valign=t]{exp3-rp-prrte.pdf} \Description{Two bar plots.} \caption{\footnotesize Experiment 3: Aggregated overhead of RP and of PRRTE when executing 1024--16384 single-core tasks on 26--410 nodes.}\label{fig:exp3} \end{figure} 16384/410 (single-core) tasks/nodes is the limit currently supported by the RP/PRRTE integration. At 32768/815 tasks/nodes, the DVM of PRRTE crashes, likely due to the excessive number of communication channels concurrently opened. Fig.~\ref{fig:exp-ttc} (bottom) shows that the combination of the RP and PRRTE aggregated overheads becomes dominant over TTX at 8192/202 tasks/nodes, so scaling beyond 16384/410 tasks/nodes would not be effective. The aggregated overheads of RP and PRRTE, alongside the TTX of the workload execution, are stable. The variance across runs is minimal, indicating consistent executions across time. Further, Experiments 1--4 executed \({\sim}200000\) tasks without failure, a measure of the robustness of the integration between RP and PRRTE and, up to 967 concurrent tasks, of RP and JSM\@. Unlike JSM and LSF, PRRTE is an open source project. This allows us to profile the execution of each task inside PRRTE's DVM\@. We collect 40 timestamps that can be grouped pairwise to provide up to 20 sequential durations for each task execution. Profiles allow us to isolate overheads both at the individual and aggregated levels, enabling us to separate in Fig.~\ref{fig:exp3} (bottom) the aggregated task execution overhead and that of the DVM\@. Fig.~\ref{fig:exp3-prrte} (top) shows the breakdown of the aggregated overhead of PRRTE's DVM for the execution of the Experiment 3 workloads.
The dominant aggregated overhead of PRRTE is the time taken to communicate to the daemons on each compute node that a task is ready to execute. As already noted, this overhead is the sum of the individual overheads: the rate at which RP queues tasks for execution with PRRTE is too slow to create any overlap among the communications that initiate the execution of tasks. Fig.~\ref{fig:exp3-prrte} (bottom) shows the time taken to communicate the execution of each task within PRRTE's DVM\@. The mean individual overhead is 0.034s, with a standard deviation of 0.047s. The outliers are likely produced by an accumulation in the communication buffer but, over the 16384 tasks, the time taken by this communication is mostly stable around the mean. For 16384 tasks, the individual overheads sum up to 570s, \({\sim}17\%\) of the TTX of the workload (as shown in Fig.~\ref{fig:exp-ttc} (bottom)). \subsection{Resource Utilization}\label{ssec:utilization} \begin{figure*} \centering \includegraphics[trim=0 30 0 80,clip,width=0.99\textwidth]{ru.png} \Description{Area plot describing RP, PRRTE and workload resource utilization} \caption{\footnotesize Experiment 3: Resource utilization as the time in which each available core has been utilized or blocked by RP, PRRTE or the workload execution.}\label{fig:ru} \end{figure*} We measure for how much time each available computing resource is used, blocked or left idling during the execution of a workload. We focus only on the runs with RP/PRRTE as the relevant behavior is measured only at the largest scales of our experiments. Resources become available as soon as Summit's scheduler bootstraps the job submitted to Summit's batch system on one of the requested compute nodes. From then on, we distinguish among three main consumers of those resources: RP, PRRTE and the workload.
Each consumer can use the resources to perform computations, block the resources while a computation is performed, or resources can idle because they have no active consumers. The percentage of resources consumed indicates how much of the resources has actually been used to execute the scientific payload and, where applicable, how many resources have been wasted while blocked by other operations. It is the most relevant measure of the baseline performance of the RP and PRRTE integration. Fig.~\ref{fig:ru} shows resource utilization (RU) for Experiment 3. \I{Agent Nodes} (dark orange) indicates the resources allocated exclusively to RP\@. \I{Pilot Startup} (blue) shows the time in which the resources are blocked while RP starts up; and \I{Warmup} (light orange) the time in which resources are blocked by RP while collecting tasks and scheduling them for execution. \I{Prepare Execution} (light green) indicates the resources blocked while waiting to communicate with PRRTE for task execution; \I{Execution Cmd} (black) marks the time in which tasks use resources for execution. \I{Draining} (dark green) indicates the resources blocked while waiting to drain tasks upon their completion; and \I{Pilot Termination} (light gray) shows the resources blocked while terminating the pilot. Consistent with the overhead analysis, execution preparation (light green) and execution draining (dark green) progressively dominate RU with scale. Execution preparation corresponds to the wait time introduced in RP, and draining time mirrors the wait time: the slower the rate at which tasks are started, the slower the rate at which they can be drained. Both starting and draining times are blocking operations that, at the currently supported launching rates, result in a large amount of available resources not being used for the execution of the workload. \I{Execution RP}, \I{Execution PRRTE} and \I{Unschedule} are too small to be seen when compared to the other measures of RU\@.
This indicates that PRRTE has no appreciable impact on RU during the workload execution. RP's impact is noticeable for the exclusive allocation of resources (a whole compute node), and for blocking available resources while bootstrapping and preparing the execution. \I{Pilot termination} is visible only at the lower scales as it has a mostly constant impact on RU\@. Fig.~\ref{fig:ru} shows several horizontal green lines, cutting across each plot, indicating resources idling across the whole execution. In Experiment 3, these resources are GPUs as our workload does not use them.

\begin{figure} \centering \includegraphics[trim=0 0 0 10,clip,width=0.49\textwidth]{exp3-prrte-detailed.pdf} \includegraphics[trim=0 10 0 80,clip,width=0.49\textwidth,valign=t]{exp3-prrte-p2.pdf} \Description{Two bar plots.} \caption{\footnotesize Experiment 3: Dominant aggregated overheads of PRRTE when executing 1024/26--16384/410 tasks/nodes on Summit.}\label{fig:exp3-prrte} \end{figure}

Table~\ref{tab:ru} details our measures of RU as percentages of the available resources. The resources used by RP are independent of the resource size, thus the percentage of resources utilized by RP decreases with the size of the pilot. Similarly, the percentage of available resources blocked while starting the pilot decreases with scale as the startup time is relatively invariant across pilot sizes. The resources blocked while ``warming up'', i.e., collecting and scheduling tasks, increase significantly from 8192 tasks onwards. This is mainly due to scheduling efficiency as the RP scheduler's performance depends on the amount of available resources. Consistent with what is observed in Figs.~\ref{fig:exp3-prrte} and~\ref{fig:ru}, PRRTE has a negligible impact on RU across all scales.

\begin{table*} \caption{\footnotesize Experiments 3--4: Resource utilization (RU) expressed as the percentage of resources used or blocked by RP, PRRTE and the workload.
The last line shows the optimized run of Experiment 4.}\label{tab:ru} \centering \resizebox{\textwidth}{!}{
\begin{tabular}{l c c c c c c c c c c c }
\toprule
\B{Tasks / Nodes} & \B{Agent Nodes} & \B{Pilot Startup} & \B{Warmup} & \B{Prep. Execution} & \B{Exec. RP} & \B{Exec. PRRTE} & \B{Exec. Cmd} & \B{Unschedule} & \B{Draining} & \B{Pilot Termination} & \B{Idle} \\
\midrule
1024 / 26 & 3.846\% & 3.630\% & 1.680\% & 4.510\% & 0.016\% & 0.002\% & \B{73.999}\% & 0.001\% & 6.149\% & 0.812\% & 5.355\% \\
2048 / 51 & 1.961\% & 3.622\% & 1.603\% & 9.800\% & 0.011\% & 0.004\% & \B{65.313}\% & 0.000\% & 11.356\% & 0.867\% & 5.462\% \\
4096 / 101 & 0.990\% & 2.698\% & 1.398\% & 16.178\% & 0.013\% & 0.002\% & \B{54.797}\% & 0.000\% & 17.798\% & 0.534\% & 5.593\% \\
8192 / 202 & 0.495\% & 2.076\% & 1.954\% & 23.375\% & 0.021\% & 0.002\% & \B{39.990}\% & 0.001\% & 25.570\% & 0.396\% & 6.120\% \\
16384 / 410 & 0.244\% & 1.271\% & 3.309\% & 28.779\% & 0.021\% & 0.002\% & \B{25.596}\% & 0.001\% & 32.752\% & 0.256\% & 7.771\% \\
\midrule
16384 / 410 & 1.013\% & 3.265\% & 6.314\% & 2.345\% & 2.421\% & 4.988\% & \B{63.557}\% & 0.286\% & 11.526\% & 0.800\% & 3.485\% \\
\bottomrule
\end{tabular} } \end{table*}

\subsection{Discussion}\label{ssec:discussion}

Experiments 1--3 and the metrics time to execution (TTX) and resource utilization (RU) offer three main insights: (1) a performance baseline of RP with JSM or PRRTE on Summit up to the scale currently supported; (2) a characterization of the aggregated and individual overheads of RP, JSM and PRRTE\@; and (3) a performance evaluation of RP in terms of TTX and RU\@. One of the main goals of our performance baseline is to guide the engineering with which we integrate RP, JSM and PRRTE\@. The analysis we presented shows that the waiting time in the communication between RP and PRRTE is the main barrier to scalability. Figs.~\ref{fig:exp1} and~\ref{fig:exp2} show that PRRTE Wait is the dominant aggregated overhead.
The analysis of Fig.~\ref{fig:exp3-prrte} showed how this waiting time determines the aggregation of PRRTE overheads into the sum of non-overlapping task overheads. Further, Fig.~\ref{fig:ru} shows that the waiting time reduces by up to 3/4 the resources that the workload can utilize for execution. The delay we introduced is conservative so as to guarantee that no task execution fails. In many real-life use cases, task failure can be managed with fault tolerance as done, for example, with RADICAL Ensemble Toolkit (EnTK) and RP when executing seismic inversion on Titan~\cite{balasubramanian2018harnessing}. There, we scaled the execution up to 131,000 cores, resubmitting \({\sim}15\)\% of tasks due to diverse types of failure. We used the same approach on Summit: eliminating the RP waiting time with PRRTE led to task failure rates between 3 and 10\%, with 2 total execution failures out of 8 runs. Based on the analysis of the failures we recorded, we configured PRRTE to use a flat communication hierarchy and \texttt{ssh} as its communication channel. This reduced the internal performance of PRRTE and limited the total number of concurrent tasks that it can handle to \({\sim}20000\), but it also allowed a more aggressive communication rate between RP and PRRTE\@. In Experiment 4, we were able to reduce the waiting time from 0.1s to 0.01s and to use 4 concurrent sub-agents for RP\@. This increased the rate of communication between RP and PRRTE both for each single sub-agent and globally, due to the concurrency among sub-agents. Fig.~\ref{fig:ru_opt} shows how this dramatically improved RU\@. Compared to the same run in Experiment 3, Experiment 4 reduced the mean TTX from 3236s to 1296s, the mean aggregated RP overhead from 2648s to 522s, and the mean aggregated PRRTE overhead from 2228s to 341s.
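The relative improvements implied by these means can be computed directly (numbers taken from the text above; the loop is only a convenience for printing):

```python
# Experiment 3 vs. Experiment 4 means at 16384 tasks, in seconds.
exp3 = {"TTX": 3236.0, "RP overhead": 2648.0, "PRRTE overhead": 2228.0}
exp4 = {"TTX": 1296.0, "RP overhead": 522.0, "PRRTE overhead": 341.0}

for name in exp3:
    reduction = 100.0 * (1.0 - exp4[name] / exp3[name])
    print(f"{name}: {reduction:.0f}% reduction")
# TTX: 60% reduction
# RP overhead: 80% reduction
# PRRTE overhead: 85% reduction
```

That is, the optimized RP/PRRTE integration roughly halves TTX and cuts both aggregated overheads by more than three quarters.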
\begin{figure} \centering \includegraphics[trim=0 13 0 0,clip,width=0.49\textwidth]{ru_opt.png} \Description{Area plot describing RP, PRRTE and workload resource utilization} \caption{\footnotesize Experiment 4: Resource utilization of a 16384/404 tasks/nodes run with optimized RP/PRRTE integration.}\label{fig:ru_opt} \end{figure}

The last line of Table~\ref{tab:ru} shows the details of RU for Experiment 4. The RU of the workload improved from 25\% to 63\%, while the RU of preparing the execution decreased from 29\% to 2\% and the RU of idling resources decreased from 8\% to 3\%. The RU of both RP and PRRTE grows: RP deployment requires more time due to the increased number of instantiated components and the higher scheduling rate; and the increased rate of task scheduling, placement and execution further stresses the capabilities of the RP and PRRTE implementations. Eliminating the delay between RP and PRRTE requires introducing concurrency between the two systems. We prototyped a version of RP that partitions the available resources and uses a DVM for each partition. Tasks will be scheduled across partitions, adding a minimal overhead for meta-scheduling operations. Each DVM will operate on smaller portions of resources, lowering the number of tasks scheduled by RP\@. In turn, this will eliminate the need to add a waiting time, further reducing the overheads measured in our baseline and improving both TTX and RU\@. The main area for further improvement in RP is scheduling efficiency. RP already supports a scheduling framework in which algorithms tailored to the executed workload can be used to optimize performance. Nonetheless, in RP the scheduling algorithms are implemented in Python, leading to an unavoidable ceiling on the amount of optimization we can implement at large scales. Prototypes implemented in C show the near-complete elimination of scheduling overheads when executing both heterogeneous and homogeneous workloads.
Integration with third-party tools like Flux~\cite{ahn2014flux} is another promising approach to solve this problem. PRRTE overheads for individual tasks are both stable and small. This leaves little room for optimization in the execution of many-task workloads such as those scheduled by RP\@. The main improvement for PRRTE would be to increase the number of concurrent tasks that can be managed by the DVM, but the importance of such an improvement will decrease once RP can utilize multiple DVMs within the same pilot.

\section{Related Work}\label{sec:related}

Pilot systems like GlideinWMS~\cite{sfiligoi2008glideinwms}, PanDA~\cite{de2014panda} and DIRAC~\cite{tsaregorodtsev2004dirac} are used to implement late binding and multi-level scheduling on a variety of platforms. While these systems have been successfully used on HPC machines~\cite{hufnagel2017cms,maeno2014evolution,fifield2011integration}, including on the former ORNL leadership-class machine Titan~\cite{htchpc2017converging}, they are currently not available on Summit and do not support either JSM or PRRTE\@. Both PRRTE~\cite{castain2018pmix} and JSM~\cite{quintero2019ibm} rely on PMIx to place and launch processes on Summit's nodes. Many application developers are actively working to use PMIx directly to interface with the system management stack and benefit from portable process and resource management capabilities~\cite{vallee2018improving}. While PMIx explicitly supports interfacing with command line tools, there are no other pilot systems using PMIx via JSM or PRRTE\@. MPICH and Hydra~\cite{balaji2014mpich} offer capabilities similar to PRRTE but are not supported on Summit. Pilot systems are not the only way to execute many-task applications on HPC machines. JSM and LSF natively support this capability but, as seen in~\S\ref{sec:performance}, in their current deployment on Summit they cannot scale beyond 1000 concurrent task executions.
Flux~\cite{ahn2014flux} is a resource manager that provides users with private schedulers on pools of dedicated resources. This enables the task scheduling capabilities of a pilot system, including RP, but Flux must either be adopted as the main job manager of the machine or be deployed as part of a pilot system. METAQ~\cite{metaq,metaq-2} is a set of shell scripts that forms a ``middle layer'' between the batch scheduler and the user's computational job scripts and supports task packing. METAQ requires a separate invocation of \texttt{mpirun} (or equivalent) for each task. METAQ has been superseded by \texttt{mpi\_jm}~\cite{mpi-jm} --- a Python library that is linked to applications. In addition to intelligent backfilling and task packing, \texttt{mpi\_jm} allows the executable to be launched based upon an affinity with the hardware. In Refs.~\cite{cug-2016,merzky2018using} we investigated the performance of RP with ORTE --- a precursor to PRRTE. Using ORTE, RP was capable of spawning more than 100 tasks/second and of sustaining the steady-state execution of up to 16K concurrent tasks. Resource utilization was significantly lower than with PRRTE and more sensitive to the number of units and unit duration.

\section{Conclusions}\label{sec:conclusions}

We characterized the performance of the integration between RP, JSM and PRRTE on Summit when executing many-task workloads. Our baseline characterizes aggregated and individual overheads for each system, measuring resource utilization for each available computing core. Our baseline measures the performance for the worst-case scenario in which single-core, 15-minute-long tasks are executed on up to 410 compute nodes of Summit. Further, based on the insights gained from our characterization, we showed the results of our optimization when executing 16384 single-core, 15-minute-long tasks on up to 404 compute nodes.
Our experiments show that, on Summit: (1) PRRTE enables better scalability than JSM when executing homogeneous many-task applications at scales larger than 987 concurrent task executions; (2) up to the scale currently supported, PRRTE individual overheads are negligible when compared to other overheads; and (3) PRRTE's open source code enables optimizations that lower the impact of aggregated overheads over the execution of the considered workloads. Further, we show that RP can effectively integrate with both JSM and PRRTE, imposing manageable aggregated overheads while offering high degrees of configurability. Specifically, we show that, once optimized, at the largest scale supported and for the considered workload, the integration between RP and PRRTE imposes an overall aggregated overhead of 35\% over the total time to execution of the workload. This enables the utilization of 63\% of the available resources to execute the given workload. The presented performance characterization, its analysis, and the implemented optimizations are the foundation of future work with RP, JSM, PRRTE and Summit. The current scale at which RP/PRRTE operates supports the development of three use cases: machine learning driven molecular dynamics simulations; machine learning driven drug discovery protocols; and seismic inversion workflows. RP and PRRTE are poised to support several INCITE and Exascale computing projects, accounting for a significant portion of the available allocation on Summit in the coming years. To this end, we will enable RP to partition both the available resources and the workload execution. As seen in~\S\ref{ssec:discussion}, this will greatly reduce aggregated overheads and improve resource utilization efficiency. Further work will be needed to optimize the RP scheduler when managing workloads with both spatial and temporal heterogeneity, i.e., those in which task execution times are drawn from a broad distribution.
The next step will be to characterize performance at increasingly large scales, while measuring and addressing the bottlenecks for heterogeneous workloads executed both concurrently and sequentially on the same pilot.

\section{Acknowledgments}

We would like to thank the other members of the PMIx community, and Ralph Castain in particular, for the excellent work that we build upon. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No.~DE-AC05-00OR22725. At Rutgers, this work was also supported by NSF ``CAREER'' ACI-1253644, RADICAL-Cybertools NSF 1440677 and 1931512, and DOE Award DE-SC0016280. We also acknowledge DOE INCITE awards for allocations on Summit. \bibliographystyle{acm}
\section{Conclusion} \label{sec_concl} In this contribution we outlined a mathematical model of MPI taking into account relaxation effects, which led us to the LLG equation describing the behavior of the magnetic material inside the particles on a microscale level. For calibrating the MPI device it is necessary to compute the system function, which mathematically can be interpreted as an inverse parameter identification problem for an initial boundary value problem based on the LLG equation. To this end we carried out a detailed analysis of the forward model, i.e., of the operator mapping the coefficients to the solution of the PDE, as well as of the underlying inverse problem. The inverse problem itself was investigated in an all-at-once and in a reduced approach. The analysis includes representations of the respective adjoint operators and Fr\'{e}chet derivatives. These results are necessary for a subsequent robust numerical computation of the system function, which will be the subject of future research. Even beyond this, the analysis might be useful for the development of solution methods for other inverse problems that are connected to the LLG equation.
\subsection{All-at-once formulation} \label{sec:aao} We split the magnetization additively into its given initial value $\mathbf{m}_0$ and the unknown rest $\hat{\mathbf{m}}$, so that the forward operator reads \[ \begin{aligned} &\mathbb{F}(\hat{\mathbf{m}},\hat{\alpha}_1,\hat{\alpha}_2)= \left(\begin{array}{l} \mathbb{F}_0(\hat{\mathbf{m}},\hat{\alpha}_1,\hat{\alpha}_2)\\[2ex] \Bigl(\mathbb{F}_{k\ell}(\hat{\mathbf{m}},\hat{\alpha}_1,\hat{\alpha}_2)\Bigr)_{k=1,\ldots,K\,, \ \ell=1,\ldots,L} \end{array}\right)\\ &:= \left(\begin{array}{l} \hat{\alpha}_1 \hat{\mathbf{m}}_t-\Delta_N (\mathbf{m}_0+\hat{\mathbf{m}}) - \hat{\alpha}_2 (\mathbf{m}_0+\hat{\mathbf{m}})\times \hat{\mathbf{m}}_t \\ \hspace*{1.5cm} -|\nabla (\mathbf{m}_0+\hat{\mathbf{m}})|^2(\mathbf{m}_0+\hat{\mathbf{m}}) - \mathbf{h} +((\mathbf{m}_0+\hat{\mathbf{m}})\cdot\mathbf{h})(\mathbf{m}_0+\hat{\mathbf{m}})\\[2ex] \Bigl(\int_0^T \int_\Omega \mathbf{K}_{k\ell}(t,\tau,x)\cdot \mathbf{m}_t(x,\tau)\, dx\, d\tau\Bigr)_{k=1,\ldots,K\,, \ \ell=1,\ldots,L} \end{array}\right)\,, \end{aligned} \] for given $\mathbf{h}\in L^2(0,T;L^p(\Omega;\mathbb{R}^3))$, $p\geq2$, where $\Delta_N:H^1(\Omega)\to H^1(\Omega)^*$ and, using the same notation, $\Delta_N:H^2_N(\Omega)\to L^2(\Omega)(\subseteq H^1(\Omega)^*)$ with $H^2_N(\Omega)=\{u\in H^2(\Omega)\, : \, \partial_\nu u=0\mbox{ on }\partial\Omega\}$ \footnote{Note that as opposed to $H^1(\Omega)$ functions, $H^2(\Omega)$ functions do have a Neumann boundary trace.} is equipped with homogeneous Neumann boundary conditions, i.e., it is defined by \[ \langle -\Delta_N u,v\rangle_{H^1(\Omega)^*,H^1(\Omega)}= (\nabla u, \nabla v)_{L^2(\Omega)} \quad \forall u,v\in H^1(\Omega) \] and thus satisfies \begin{equation}\label{Laplace} (-\Delta_N u,v)_{L^2(\Omega)}=\int_\Omega \nabla u\cdot \nabla v\, dx \quad \forall u\in H^2_N(\Omega)\,, \ v\in H^1(\Omega)\,.
\end{equation} The forward operator is supposed to act between Hilbert spaces \[ \mathbb{F}:\mathcal{U}\times \mathbb{R}^2 \to \mathcal{W}\times L^2(0,T)^{K L} \] with the linear space \begin{equation}\label{U} \begin{aligned} \mathcal{U}&=\{ \mathbf{u}\in L^2(0,T;H^2_N(\Omega;\mathbb{R}^3))\cap H^1(0,T;L^2(\Omega;\mathbb{R}^3))\, : \, \mathbf{u}(0)=0\}\\ &\subseteq C(0,T;H^1(\Omega))\cap H^s(0,T;H^{2-2s}(\Omega))\,, \end{aligned} \end{equation} for $s\in[0,1]$, where the latter embedding is continuous by, e.g., \cite[Lemma 7.3]{Roubicek}, applied to $\frac{\partial u_i}{\partial x_j}$, and interpolation, as well as \begin{equation}\label{W} \mathcal{W}=H^1(0,T;H^1(\Omega;\mathbb{R}^3))^* \mbox{ or, in case $p>2$, } \mathcal{W}=H^1(0,T;L^2(\Omega;\mathbb{R}^3))^*\,. \end{equation} We equip $\mathcal{U}$ with the inner product \[ (\mathbf{u}_1,\mathbf{u}_2)_\mathcal{U}:=\int_0^T\int_\Omega \Bigl( (-\Delta_N\mathbf{u}_1)\cdot(-\Delta_N\mathbf{u}_2) +\mathbf{u}_{1t}\cdot\mathbf{u}_{2t}\Bigr)\, dx\, dt +\int_\Omega \nabla\mathbf{u}_1(T) \colon \nabla\mathbf{u}_2(T)\, dx\,, \] which, in spite of the nontrivial nullspace of the Neumann Laplacian $-\Delta_N$, defines a norm equivalent to the usual norm on $L^2(0,T;H^2(\Omega;\mathbb{R}^3))\cap H^1(0,T;L^2(\Omega;\mathbb{R}^3))$, due to the estimates \[ \begin{aligned} \|\mathbf{u}\|_{L^2(0,T;L^2(\Omega))}^2&= -\int_0^T\int_\Omega\int_0^t\mathbf{u}(s)\, ds\, \mathbf{u}_t(t)\, dx\, dt+\int_\Omega \int_0^T\mathbf{u}(s)\, ds\, \mathbf{u}(T)\, dx\\ &\leq \Bigl(T \|\mathbf{u}_t\|_{L^2(0,T;L^2(\Omega))} +\sqrt{T}\|\mathbf{u}(T)\|_{L^2(\Omega)}\Bigr) \|\mathbf{u}\|_{L^2(0,T;L^2(\Omega))}\\ \|\mathbf{u}(T)\|_{L^2(\Omega)}&=\|\int_0^T\mathbf{u}_t(t)\, dt\|_{L^2(\Omega)}\leq \sqrt{T}\|\mathbf{u}_t\|_{L^2(0,T;L^2(\Omega))}\,.
\end{aligned} \] This, together with the definition of the Neumann Laplacian \eqref{Laplace}, and the use of solutions $\mathbf{z}$, $\mathbf{v}$ to the auxiliary problems \begin{equation}\label{auxprob} \left\{\begin{array}{rcl} \mathbf{z}_t-\Delta \mathbf{z}&=&\mathbf{v}\mbox{ in }(0,T)\times\Omega\\ \partial_\nu\mathbf{z}&=&0\mbox{ on }(0,T)\times\partial\Omega\\ \mathbf{z}(0)&=&0\mbox{ in }\Omega \end{array}\right.\,, \quad \left\{\begin{array}{rcl} -\mathbf{v}_t-\Delta \mathbf{v}&=&\mathbf{f}\mbox{ in }(0,T)\times\Omega\\ \partial_\nu\mathbf{v}&=&0\mbox{ on }(0,T)\times\partial\Omega\\ \mathbf{v}(T)&=&\mathbf{g}\mbox{ in }\Omega \end{array}\right.\,, \end{equation} allows us to derive the identity \begin{equation}\label{id_adjU} \begin{aligned} (\mathbf{u},\mathbf{z})_\mathcal{U} =& \int_0^T\int_\Omega \Bigl( \nabla\mathbf{u}\colon\nabla(-\Delta_N\mathbf{z})-\mathbf{u}\cdot\mathbf{z}_{tt}\Bigr) dx\, dt +\int_\Omega \mathbf{u}(T)\cdot\Bigl(\mathbf{z}_t(T)-\Delta_N\mathbf{z}(T)\Bigr)\, dx\\ =& \int_0^T\int_\Omega \Bigl( \nabla\mathbf{u}\colon\nabla(\mathbf{v}-\mathbf{z}_t)-\mathbf{u}\cdot(\mathbf{v}_t+\Delta_N\mathbf{z}_t)\Bigr) dx\, dt +\int_\Omega \mathbf{u}(T)\cdot\mathbf{v}(T)\, dx\\ =& \int_0^T\int_\Omega \mathbf{u}\cdot\Bigl(-\Delta_N\mathbf{v}-\mathbf{v}_t\Bigr) dx\, dt +\int_\Omega \mathbf{u}(T)\cdot\mathbf{v}(T)\, dx\\ =&\int_0^T\int_\Omega \mathbf{u}\cdot \mathbf{f} \, dx\, dt + \int_\Omega \mathbf{u}(T)\cdot \mathbf{g} \, dx\,, \end{aligned} \end{equation} which will be needed later on for deriving the adjoint.
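For later use, note that the two norm estimates preceding \eqref{auxprob} combine into an explicit bound: dividing the first estimate by $\|\mathbf{u}\|_{L^2(0,T;L^2(\Omega))}$ and inserting the bound on $\|\mathbf{u}(T)\|_{L^2(\Omega)}$ gives
\[
\|\mathbf{u}\|_{L^2(0,T;L^2(\Omega))}\leq T\,\|\mathbf{u}_t\|_{L^2(0,T;L^2(\Omega))}+\sqrt{T}\,\|\mathbf{u}(T)\|_{L^2(\Omega)}\leq 2T\,\|\mathbf{u}_t\|_{L^2(0,T;L^2(\Omega))}\,,
\]
so the $L^2(0,T;L^2(\Omega))$ norm of $\mathbf{u}$ is indeed controlled by the terms appearing in $(\cdot,\cdot)_\mathcal{U}$.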
On $\mathcal{W}=H^1(0,T;H^1(\Omega;\mathbb{R}^3))^*$ we use the inner product \[ \begin{aligned} (\mathbf{w}_1,\mathbf{w}_2)_\mathcal{W} &:=\int_0^T\int_\Omega \Bigl( I_1[\nabla(-\Delta_N+\mbox{id})^{-1}\mathbf{w}_1](t)\colon I_1[\nabla(-\Delta_N+\mbox{id})^{-1}\mathbf{w}_2](t)\\ &\qquad\qquad+I_1[(-\Delta_N+\mbox{id})^{-1}\mathbf{w}_1](t)\cdot I_1[(-\Delta_N+\mbox{id})^{-1}\mathbf{w}_2](t)\Bigr) \, dx\, dt\,, \end{aligned} \] with the isomorphism $-\Delta_N+\mbox{id}:H^1(\Omega)\to (H^1(\Omega))^*$ and the time integral operators \[ \begin{aligned} I_1[w](t):=\int_0^t w(s)\, ds-\frac{1}{T}\int_0^T(T-s)w(s)\, ds\,,\\ I_2[w](t):=-\int_0^t (t-s)w(s)\, ds + \frac{t}{T}\int_0^T(T-s)w(s)\, ds\,, \end{aligned} \] so that $I_2[w]_t(t)=-I_1[w](t)$, $I_1[w]_t(t)=-I_2[w]_{tt}(t)=w(t)$ and $I_2[w](0)=I_2[w](T)=0$, hence \[ \int_0^T I_1[w_1](t)\,I_1[w_2](t)\, dt=\int_0^T I_2[w_1](t)\, w_2(t)\, dt, \] so that in case $\mathbf{w}_2\in L^2(0,T;L^2(\Omega;\mathbb{R}^3))$, \begin{equation}\label{id_adjW} \begin{aligned} (\mathbf{w}_1,\mathbf{w}_2)_\mathcal{W} &=\int_0^T\int_\Omega \Bigl( I_2[\nabla(-\Delta_N+\mbox{id})^{-1}\mathbf{w}_1](t)\colon [\nabla(-\Delta_N+\mbox{id})^{-1}\mathbf{w}_2](t)\\ &\qquad\qquad+I_2[(-\Delta_N+\mbox{id})^{-1}\mathbf{w}_1](t)\cdot [(-\Delta_N+\mbox{id})^{-1}\mathbf{w}_2](t)\Bigr) \, dx\, dt\\ &=\int_0^T\int_\Omega I_2[(-\Delta_N+\mbox{id})^{-1}\mathbf{w}_1](t)\cdot\mathbf{w}_2(t) \, dx\, dt\,. \end{aligned} \end{equation} In case $p>2$ in the assumption on $\mathbf{h}$, we can set $\mathcal{W}=H^1(0,T;L^2(\Omega;\mathbb{R}^3))^*$ and use the simpler inner product \[ (\mathbf{w}_1,\mathbf{w}_2)_\mathcal{W} :=\int_0^T\int_\Omega I_1[\mathbf{w}_1](t)\cdot I_1[\mathbf{w}_2](t)\, dx\, dt\,, \] which in case $\mathbf{w}_2\in L^2(0,T;L^2(\Omega;\mathbb{R}^3))$ satisfies \[ (\mathbf{w}_1,\mathbf{w}_2)_\mathcal{W} =\int_0^T\int_\Omega I_2[\mathbf{w}_1](t)\cdot \mathbf{w}_2(t)\, dx\, dt\,.
\] \subsubsection{Well-definedness of the forward operator} Indeed it can be verified that $\mathbb{F}$ maps between the function spaces introduced above, cf. \eqref{U}, \eqref{W}. For the linear (with respect to $\hat{\mathbf{m}}$) parts $\hat{\alpha}_1 \hat{\mathbf{m}}_t$, $-\Delta_N \hat{\mathbf{m}}$, and $\int_0^T \int_\Omega \mathbf{K}_{k\ell}(t,\tau,x)\cdot \mathbf{m}_t(x,\tau)\, dx\, d\tau$ of $\mathbb{F}$, this is obvious and for the nonlinear terms $\hat{\alpha}_2 (\mathbf{m}_0+\hat{\mathbf{m}})\times \hat{\mathbf{m}}_t$, $|\nabla (\mathbf{m}_0+\hat{\mathbf{m}})|^2(\mathbf{m}_0+\hat{\mathbf{m}})$, $((\mathbf{m}_0+\hat{\mathbf{m}})\cdot\mathbf{h})(\mathbf{m}_0+\hat{\mathbf{m}})$ we use the following estimates, holding for any $\mathbf{u},\mathbf{w},\mathbf{z}\in \mathcal{U}$. For the term $\hat{\alpha}_2 (\mathbf{m}_0+\hat{\mathbf{m}})\times \hat{\mathbf{m}}_t$, we estimate \begin{equation}\label{est_nl1} \begin{aligned} \|\mathbf{u}\times\mathbf{w}_t\|_{H^1(0,T;H^1(\Omega;\mathbb{R}^3))^*} &\leq \|\mathbf{u}\times\mathbf{w}_t\|_{L^2(0,T;(H^1(\Omega;\mathbb{R}^3))^*)}\\ &\leq C_{H^1\to L^3}^\Omega \|\mathbf{u}\times\mathbf{w}_t\|_{L^2(0,T;L^{3/2}(\Omega;\mathbb{R}^3))}\\ &\leq C_{H^1\to L^3}^\Omega \|\mathbf{u}\|_{C(0,T;L^6(\Omega;\mathbb{R}^3))} \|\mathbf{w}_t\|_{L^2(0,T;L^2(\Omega;\mathbb{R}^3))}\\ &\leq C_{H^1\to L^3}^\Omega C_{H^1\to L^6}^\Omega\|\mathbf{u}\|_{C(0,T;H^1(\Omega;\mathbb{R}^3))} \|\mathbf{w}_t\|_{L^2(0,T;L^2(\Omega;\mathbb{R}^3))}\,, \end{aligned} \end{equation} where we have used duality and continuity of the embeddings $H^1(0,T;H^1(\Omega;\mathbb{R}^3))\hookrightarrow L^2(0,T;H^1(\Omega;\mathbb{R}^3))\hookrightarrow L^2(0,T;L^3(\Omega))$ in the first and second estimate, and H\"older's inequality with exponent $4$ in the third estimate;\\ For the term $|\nabla (\mathbf{m}_0+\hat{\mathbf{m}})|^2(\mathbf{m}_0+\hat{\mathbf{m}})$, we use \begin{equation}\label{est_nl2} \begin{aligned} 
&\|(\nabla\mathbf{u}\colon\nabla\mathbf{w})\mathbf{z}\|_{H^1(0,T;H^1(\Omega;\mathbb{R}^3))^*}\\ &\leq C_{H^1\to L^\infty}^{(0,T)} \|(\nabla\mathbf{u}\colon\nabla\mathbf{w})\mathbf{z}\|_{L^1(0,T;(H^1(\Omega;\mathbb{R}^3))^*)}\\ &\leq C_{H^1\to L^\infty}^{(0,T)} C_{H^1\to L^6}^\Omega \|(\nabla\mathbf{u}\colon\nabla\mathbf{w})\mathbf{z}\|_{L^1(0,T;L^{6/5}(\Omega;\mathbb{R}^3))}\\ &\leq C_{H^1\to L^\infty}^{(0,T)} C_{H^1\to L^6}^\Omega \|\nabla\mathbf{u}\|_{L^2(0,T;L^6(\Omega;\mathbb{R}^3))}\|\nabla\mathbf{w}\|_{L^2(0,T;L^6(\Omega;\mathbb{R}^3))} \|\mathbf{z}\|_{C(0,T;L^2(\Omega;\mathbb{R}^3))}\\ &\leq C_{H^1\to L^\infty}^{(0,T)} (C_{H^1\to L^6}^\Omega)^3 \|\mathbf{u}\|_{L^2(0,T;H^2(\Omega;\mathbb{R}^3))} \|\mathbf{w}\|_{L^2(0,T;H^2(\Omega;\mathbb{R}^3))} \|\mathbf{z}\|_{C(0,T;H^1(\Omega;\mathbb{R}^3))}\,, \end{aligned} \end{equation} again using duality and the embeddings $H^1(0,T;H^1(\Omega;\mathbb{R}^3))\hookrightarrow L^\infty(0,T;H^1(\Omega))\hookrightarrow L^\infty(0,T;L^6(\Omega))$;\\ For the term $((\mathbf{m}_0+\hat{\mathbf{m}})\cdot\mathbf{h})(\mathbf{m}_0+\hat{\mathbf{m}})$, we estimate \begin{equation}\label{est_nl3} \begin{aligned} &\|(\mathbf{u}\cdot\mathbf{h})\mathbf{z}\|_{H^1(0,T;H^1(\Omega;\mathbb{R}^3))^*}\\ &\leq C_{H^1\to L^6}^\Omega \|(\mathbf{u}\cdot\mathbf{h})\mathbf{z}\|_{L^2(0,T;L^{6/5}(\Omega;\mathbb{R}^3))}\\ &\leq C_{H^1\to L^6}^\Omega \|\mathbf{u}\|_{C(0,T;L^6(\Omega;\mathbb{R}^3))} \|\mathbf{z}\|_{C(0,T;L^6(\Omega;\mathbb{R}^3))} \|\mathbf{h}\|_{L^2(0,T;L^2(\Omega;\mathbb{R}^3))}\\ &\leq (C_{H^1\to L^6}^\Omega)^3 \|\mathbf{u}\|_{C(0,T;H^1(\Omega;\mathbb{R}^3))} \|\mathbf{z}\|_{C(0,T;H^1(\Omega;\mathbb{R}^3))} \|\mathbf{h}\|_{L^2(0,T;L^2(\Omega;\mathbb{R}^3))} \end{aligned} \end{equation} by duality and the embedding $H^1(0,T;H^1(\Omega;\mathbb{R}^3))\hookrightarrow L^2(0,T;L^6(\Omega))$, as well as H\"older's inequality.
In case $p>2$, $\mathbb{F}$ maps into the somewhat stronger space $\mathcal{W}=H^1(0,T;L^2(\Omega;\mathbb{R}^3))^*$, due to the estimates \begin{equation}\label{est_nl1_L2} \begin{aligned} \|\mathbf{u}\times\mathbf{w}_t\|_{H^1(0,T;L^2(\Omega;\mathbb{R}^3))^*} &\leq C_{H^1\to L^\infty}^{(0,T)} \|\mathbf{u}\times\mathbf{w}_t\|_{L^1(0,T;L^2(\Omega;\mathbb{R}^3))}\\ &\leq C_{H^1\to L^\infty}^{(0,T)} \|\mathbf{u}\|_{L^2(0,T;L^\infty(\Omega;\mathbb{R}^3))} \|\mathbf{w}_t\|_{L^2(0,T;L^2(\Omega;\mathbb{R}^3))}\\ &\leq C_{H^1\to L^\infty}^{(0,T)} C_{H^2\to L^\infty}^\Omega\|\mathbf{u}\|_{L^2(0,T;H^2(\Omega;\mathbb{R}^3))} \|\mathbf{w}_t\|_{L^2(0,T;L^2(\Omega;\mathbb{R}^3))}\,, \end{aligned} \end{equation} \begin{equation}\label{est_nl2_L2} \begin{aligned} &\|(\nabla\mathbf{u}\colon\nabla\mathbf{w})\mathbf{z}\|_{H^1(0,T;L^2(\Omega;\mathbb{R}^3))^*}\\ &\leq C_{H^1\to L^\infty}^{(0,T)} \|(\nabla\mathbf{u}\colon\nabla\mathbf{w})\mathbf{z}\|_{L^1(0,T;L^2(\Omega;\mathbb{R}^3))}\\ &\leq C_{H^1\to L^\infty}^{(0,T)} \|\nabla\mathbf{u}\|_{L^2(0,T;L^6(\Omega;\mathbb{R}^3))}\|\nabla\mathbf{w}\|_{L^2(0,T;L^6(\Omega;\mathbb{R}^3))} \|\mathbf{z}\|_{C(0,T;L^6(\Omega;\mathbb{R}^3))}\\ &\leq C_{H^1\to L^\infty}^{(0,T)} (C_{H^1\to L^6}^\Omega)^3 \|\mathbf{u}\|_{L^2(0,T;H^2(\Omega;\mathbb{R}^3))} \|\mathbf{w}\|_{L^2(0,T;H^2(\Omega;\mathbb{R}^3))} \|\mathbf{z}\|_{C(0,T;H^1(\Omega;\mathbb{R}^3))}\,, \end{aligned} \end{equation} \begin{equation}\label{est_nl3_L2} \begin{aligned} &\|(\mathbf{u}\cdot\mathbf{h})\mathbf{z}\|_{H^1(0,T;L^2(\Omega;\mathbb{R}^3))^*}\\ &\leq C_{H^1\to L^\infty}^{(0,T)} \|(\mathbf{u}\cdot\mathbf{h})\mathbf{z}\|_{L^1(0,T;L^2(\Omega;\mathbb{R}^3))}\\ &\leq C_{H^1\to L^\infty}^{(0,T)} \|\mathbf{u}\|_{L^4(0,T;L^{p^{**}}(\Omega;\mathbb{R}^3))} \|\mathbf{z}\|_{L^4(0,T;L^{p^{**}}(\Omega;\mathbb{R}^3))} \|\mathbf{h}\|_{L^2(0,T;L^p(\Omega;\mathbb{R}^3))}\\ &\leq C_{H^1\to L^\infty}^{(0,T)} (C_{H^{1/4}\to L^4}^{(0,T)})^2 (C_{H^{3/2}\to L^{p^{**}}}^\Omega)^2 \\ & \qquad\qquad
\|\mathbf{u}\|_{H^{1/4}(0,T;H^{3/2}(\Omega;\mathbb{R}^3))} \|\mathbf{z}\|_{H^{1/4}(0,T;H^{3/2}(\Omega;\mathbb{R}^3))} \|\mathbf{h}\|_{L^2(0,T;L^p(\Omega;\mathbb{R}^3))}\,, \end{aligned} \end{equation} for $p^{**}=\frac{4p}{p-2}<\infty$, which can be bounded by the $\mathcal{U}$ norm of $\mathbf{u}$ and $\mathbf{z}$, using interpolation with $s=\frac14$ in \eqref{U}. \subsubsection{Differentiability of the forward operator} Formally, the derivative of $\mathbb{F}$ is given by \[ \begin{aligned} &\mathbb{F}'(\hat{\mathbf{m}},\hat{\alpha}_1,\hat{\alpha}_2)(\mathbf{u},\beta_1,\beta_2)\\ &=\left(\begin{array}{l} \beta_1 \hat{\mathbf{m}}_t - \beta_2 (\mathbf{m}_0+\hat{\mathbf{m}})\times \hat{\mathbf{m}}_t \\ \hspace*{1.5cm} +\hat{\alpha}_1 \mathbf{u}_t-\Delta_N \mathbf{u} - \hat{\alpha}_2 \mathbf{u}\times \hat{\mathbf{m}}_t-\hat{\alpha}_2 (\mathbf{m}_0+\hat{\mathbf{m}})\times \mathbf{u}_t \\ \hspace*{1.5cm}-2(\nabla (\mathbf{m}_0+\hat{\mathbf{m}})\colon\nabla\mathbf{u})(\mathbf{m}_0+\hat{\mathbf{m}})- |\nabla (\mathbf{m}_0+\hat{\mathbf{m}})|^2 \mathbf{u} \\ \hspace*{1.5cm} +((\mathbf{m}_0+\hat{\mathbf{m}})\cdot\mathbf{h}) \mathbf{u} + (\mathbf{u}\cdot\mathbf{h}) (\mathbf{m}_0+\hat{\mathbf{m}}) \\[2ex] \Bigl(\int_0^T \int_\Omega \mathbf{K}_{k\ell}(t,\tau,x) \cdot\mathbf{u}_t(x,\tau)\, dx\, d\tau\Bigr)_{k=1,\ldots,K\,, \ \ell=1,\ldots,L} \end{array}\right)\\ &=\left(\begin{array}{ccc} \frac{\partial\mathbb{F}_0}{\partial\hat{\mathbf{m}}}(\hat{\mathbf{m}},\hat{\alpha})&\frac{\partial\mathbb{F}_0}{\partial\hat{\alpha}_1}(\hat{\mathbf{m}},\hat{\alpha})&\frac{\partial\mathbb{F}_0}{\partial\hat{\alpha}_2}(\hat{\mathbf{m}},\hat{\alpha})\\ (\frac{\partial\mathbb{F}_{k\ell}}{\partial\hat{\mathbf{m}}}(\hat{\mathbf{m}},\hat{\alpha}))_{k=1,\ldots,K,\ell=1,\ldots,L}&0&0 \end{array}\right) \left(\begin{array}{c} \mathbf{u} \\ \beta_1\\ \beta_2\end{array}\right) \end{aligned} \] where
$\frac{\partial\mathbb{F}_0}{\partial\hat{\mathbf{m}}}(\hat{\mathbf{m}},\hat{\alpha}):\mathcal{U}\to\mathcal{W}$, $\frac{\partial\mathbb{F}_0}{\partial\hat{\alpha}_1}(\hat{\mathbf{m}},\hat{\alpha}):\mathbb{R}\to\mathcal{W}$, $\frac{\partial\mathbb{F}_0}{\partial\hat{\alpha}_2}(\hat{\mathbf{m}},\hat{\alpha}):\mathbb{R}\to\mathcal{W}$, $(\frac{\partial\mathbb{F}_{k\ell}}{\partial\hat{\mathbf{m}}}(\hat{\mathbf{m}},\hat{\alpha}))_{k=1,\ldots,K,\ell=1,\ldots,L}:\mathcal{U}\to L^2(0,T)^{KL}$. Fr\'{e}chet differentiability follows from the fact that in \[ \mathbb{F}(\hat{\mathbf{m}}+\mathbf{u},\hat{\alpha}_1+\beta_1,\hat{\alpha}_2+\beta_2)-\mathbb{F}(\hat{\mathbf{m}},\hat{\alpha}_1,\hat{\alpha}_2)-\mathbb{F}'(\hat{\mathbf{m}},\hat{\alpha}_1,\hat{\alpha}_2)(\mathbf{u},\beta_1,\beta_2) \] all linear terms cancel out and the nonlinear ones are given by (abbreviating $\mathbf{m}=\mathbf{m}_0+\hat{\mathbf{m}}$) \[ \begin{aligned} &(\hat{\alpha}_1+\beta_1)(\mathbf{m}_t+\mathbf{u}_t)-\hat{\alpha}_1\mathbf{m}_t-\hat{\alpha}_1\mathbf{u}_t-\beta_1 \mathbf{m}_t = \beta_1 \mathbf{u}_t\\ &(\hat{\alpha}_2+\beta_2)(\mathbf{m}+\mathbf{u})\times(\mathbf{m}_t+\mathbf{u}_t) - \hat{\alpha}_2\mathbf{m}\times\mathbf{m}_t - \beta_2\mathbf{m}\times\mathbf{m}_t - \hat{\alpha}_2\mathbf{u}\times\mathbf{m}_t - \hat{\alpha}_2\mathbf{m}\times\mathbf{u}_t\\ &\qquad= \hat{\alpha}_2\mathbf{u}\times\mathbf{u}_t+\beta_2\mathbf{m}\times\mathbf{u}_t+\beta_2\mathbf{u}\times\mathbf{m}_t+\beta_2\mathbf{u}\times\mathbf{u}_t\\ &|\nabla\mathbf{m}+\nabla\mathbf{u}|^2(\mathbf{m}+\mathbf{u}) - |\nabla\mathbf{m}|^2\mathbf{m} -2 (\nabla\mathbf{m}\colon\nabla\mathbf{u})\mathbf{m}-|\nabla\mathbf{m}|^2\mathbf{u}\\ &\qquad= |\nabla\mathbf{u}|^2(\mathbf{m}+\mathbf{u})+2(\nabla\mathbf{m}\colon\nabla\mathbf{u})\mathbf{u}\\ &((\mathbf{m}+\mathbf{u})\cdot\mathbf{h})(\mathbf{m}+\mathbf{u})-(\mathbf{m}\cdot\mathbf{h})\mathbf{m}-(\mathbf{u}\cdot\mathbf{h})\mathbf{m}-(\mathbf{m}\cdot\mathbf{h})\mathbf{u} = 
(\mathbf{u}\cdot\mathbf{h})\mathbf{u}\,, \end{aligned} \] hence, using again \eqref{est_nl1}--\eqref{est_nl3}, they can be estimated by some constant multiplied by $\|\mathbf{u}\|_{\mathcal{U}}^2+\beta_1^2+\beta_2^2$. \subsubsection{Adjoints} We start with the adjoint of $\frac{\partial\mathbb{F}_0}{\partial\hat{\mathbf{m}}}(\hat{\mathbf{m}},\hat{\alpha})$. For any $\mathbf{u}\in\mathcal{U}$, $\mathbf{y}\in L^2(0,T;L^2(\Omega))$, we have, using the definition of $-\Delta_N$, i.e., \eqref{Laplace}, \[ \begin{aligned} &\int_0^T\int_\Omega (\frac{\partial\mathbb{F}_0}{\partial\hat{\mathbf{m}}}(\hat{\mathbf{m}},\hat{\alpha})\mathbf{u})\cdot \mathbf{y} \,dx\,dt\\ &= \int_0^T\int_\Omega \Bigl(\hat{\alpha}_1 \mathbf{u}_t\cdot\mathbf{y}+\nabla \mathbf{u}\colon\nabla \mathbf{y} - \hat{\alpha}_2 (\mathbf{u}\times \hat{\mathbf{m}}_t)\cdot\mathbf{y}-\hat{\alpha}_2 ((\mathbf{m}_0+\hat{\mathbf{m}})\times \mathbf{u}_t)\cdot\mathbf{y} \\ &\quad-2(\nabla (\mathbf{m}_0+\hat{\mathbf{m}})\colon\nabla\mathbf{u})\, ((\mathbf{m}_0+\hat{\mathbf{m}})\cdot\mathbf{y})- |\nabla (\mathbf{m}_0+\hat{\mathbf{m}})|^2 \, (\mathbf{u}\cdot\mathbf{y}) \\ &\quad+((\mathbf{m}_0+\hat{\mathbf{m}})\cdot\mathbf{h}) \,(\mathbf{u}\cdot\mathbf{y}) + (\mathbf{u}\cdot\mathbf{h}) \,((\mathbf{m}_0+\hat{\mathbf{m}})\cdot\mathbf{y})\Bigr)\,dx\,dt\\ &= \int_0^T\int_\Omega \mathbf{u}\cdot \Bigl(-\hat{\alpha}_1 \mathbf{y}_t+ (-\Delta\mathbf{y}) - \hat{\alpha}_2 \hat{\mathbf{m}}_t\times\mathbf{y}+\hat{\alpha}_2 \mathbf{y}_t\times(\mathbf{m}_0+\hat{\mathbf{m}})+\hat{\alpha}_2 \mathbf{y}\times\hat{\mathbf{m}}_t \\&\qquad\qquad\qquad-2((\mathbf{m}_0+\hat{\mathbf{m}})\cdot\mathbf{y})\, (-\Delta_N(\mathbf{m}_0+\hat{\mathbf{m}})) +2((\nabla (\mathbf{m}_0+\hat{\mathbf{m}})^T(\nabla \mathbf{y}))\, (\mathbf{m}_0+\hat{\mathbf{m}})\\ &\qquad\qquad\qquad +2((\nabla (\mathbf{m}_0+\hat{\mathbf{m}})^T(\nabla (\mathbf{m}_0+\hat{\mathbf{m}})))\, \mathbf{y} -|\nabla (\mathbf{m}_0+\hat{\mathbf{m}})|^2 \mathbf{y} \\ 
&\qquad\qquad\qquad+((\mathbf{m}_0+\hat{\mathbf{m}})\cdot\mathbf{h}) \,\mathbf{y} + ((\mathbf{m}_0+\hat{\mathbf{m}})\cdot\mathbf{y}) \, \mathbf{h} \Bigr)\,dx\,dt\\ &\quad + \int_\Omega \mathbf{u}(T)\cdot\Bigl(\hat{\alpha}_1\mathbf{y}(T) -\hat{\alpha}_2 \mathbf{y}(T)\times(\mathbf{m}_0+\hat{\mathbf{m}}(T))\Bigr)\, dx\\ &=:\int_0^T\int_\Omega \mathbf{u}\cdot \mathbf{f}^{\mathbf{y}}\, dx\, dt + \int_\Omega \mathbf{u}(T)\cdot \mathbf{g}^{\mathbf{y}}_T\, dx \,, \end{aligned} \] where we have integrated by parts with respect to time and used the vector identities \[ \vec{a}\cdot(\vec{b}\times\vec{c})=\vec{b}\cdot(\vec{c}\times\vec{a})=\vec{c}\cdot(\vec{a}\times\vec{b})\,. \] Matching the integrals over $\Omega\times(0,T)$ and $\Omega\times\{T\}$, respectively, and taking into account the homogeneous Neumann boundary conditions implied by the definition of $-\Delta_N$, \eqref{Laplace}, as well as the identities \eqref{id_adjU}, \eqref{id_adjW}, we find that $\frac{\partial\mathbb{F}_0}{\partial\hat{\mathbf{m}}}(\hat{\mathbf{m}},\hat{\alpha})^*\mathbf{y}=:\mathbf{z}$ is the solution of \eqref{auxprob} with $\mathbf{f}=\mathbf{f}^{\mathbf{y}}$, $\mathbf{g}=\mathbf{g}^{\mathbf{y}}_T$, where in case $\mathcal{W}=H^1(0,T;H^1(\Omega;\mathbb{R}^3))^*$, $\mathbf{y}=I_2[\widetilde{y}]$, with $\widetilde{y}(t)$ solving \[ \left\{\begin{array}{rcl} -\Delta \widetilde{y}(t)+\widetilde{y}(t)&=&\mathbf{w}(t)\mbox{ in }\Omega\\ \partial_\nu\widetilde{y}&=&0\mbox{ on }\partial\Omega \end{array}\right. \] for each $t\in(0,T)$, or in case $\mathcal{W}=H^1(0,T;L^2(\Omega;\mathbb{R}^3))^*$, just $\mathbf{y}=I_2[\mathbf{w}]$. 
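The cyclic permutation identity above is what allows moving factors between the slots of the cross products in the integration by parts; a quick numerical sanity check of it (a standalone illustration, not part of the derivation):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c = rng.normal(size=(3, 3))  # three random vectors in R^3

# scalar triple product: a.(b x c) = b.(c x a) = c.(a x b)
t1 = np.dot(a, np.cross(b, c))
t2 = np.dot(b, np.cross(c, a))
t3 = np.dot(c, np.cross(a, b))
assert np.isclose(t1, t2) and np.isclose(t1, t3)
```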
With the same $\mathbf{y}$, after pointwise projection onto the mutually orthogonal vectors $\hat{\mathbf{m}}_t(x,t)$ and $(\mathbf{m}_0(x)+\hat{\mathbf{m}}(x,t))\times\hat{\mathbf{m}}_t(x,t)$ and integration over space and time, we also get the adjoints of $\frac{\partial\mathbb{F}_0}{\partial\hat{\alpha}_1}(\hat{\mathbf{m}},\hat{\alpha})$, $\frac{\partial\mathbb{F}_0}{\partial\hat{\alpha}_2}(\hat{\mathbf{m}},\hat{\alpha})$ \begin{align*} \frac{\partial\mathbb{F}_0}{\partial\hat{\alpha}_1}(\hat{\mathbf{m}},\hat{\alpha})^*\mathbf{w} &= \int_0^T\int_\Omega \hat{\mathbf{m}}_t\cdot \mathbf{y}\, dx\, dt\,, \\ \frac{\partial\mathbb{F}_0}{\partial\hat{\alpha}_2}(\hat{\mathbf{m}},\hat{\alpha})^*\mathbf{w} &= -\int_0^T\int_\Omega ((\mathbf{m}_0+\hat{\mathbf{m}})\times\hat{\mathbf{m}}_t)\cdot \mathbf{y}\, dx\, dt\,. \end{align*} Finally, the fact that for $\mathbf{u}\in\mathcal{U}$, $y\in L^2(0,T)^{KL}$ \begin{equation}\label{adjA21} \begin{aligned} &\left(\Big(\frac{\partial\mathbb{F}_{k\ell}}{\partial\hat{\mathbf{m}}}(\hat{\mathbf{m}},\hat{\alpha})\Big)_{k=1,\ldots,K,\ell=1,\ldots,L}\mathbf{u},y\right)_{L^2(0,T)^{KL}}\\ &=\sum_{k=1}^K\sum_{\ell=1}^L \int_0^T \left(\Big(\frac{\partial\mathbb{F}_{k\ell}}{\partial\hat{\mathbf{m}}}(\hat{\mathbf{m}},\hat{\alpha})\Big)_{k=1,\ldots,K,\ell=1,\ldots,L}\mathbf{u}\right)_{k\ell}(t) y_{k\ell}(t)\, dt\\ &=\sum_{k=1}^K\sum_{\ell=1}^L \int_0^T \int_0^T \int_\Omega \mathbf{K}_{k\ell}(t,\tau,x)\cdot \mathbf{u}_t(x,\tau)\, dx\, d\tau y_{k\ell}(t)\, dt\\ &=\sum_{k=1}^K\sum_{\ell=1}^L \int_0^T \Bigl( -\int_0^T \int_\Omega \frac{\partial}{\partial\tau}\mathbf{K}_{k\ell}(t,\tau,x)\cdot \mathbf{u}(x,\tau)\, dx\, d\tau \\ & \hspace{3cm} + \int_\Omega \mathbf{K}_{k\ell}(t,T,x)\cdot \mathbf{u}(x,T)\, dx \Bigr) y_{k\ell}(t)\, dt\,, \end{aligned} \end{equation} where we have integrated by parts with respect to time, implies that due to \eqref{id_adjU}, \begin{displaymath} 
(\frac{\partial\mathbb{F}_{k\ell}}{\partial\hat{\mathbf{m}}}(\hat{\mathbf{m}},\hat{\alpha}))_{k=1,\ldots,K,\ell=1,\ldots,L}^*y=\mathbf{z}
\end{displaymath}
is obtained by solving another auxiliary problem \eqref{auxprob} with
\begin{equation}\label{fgA21}
\begin{split}
\mathbf{f}(x,\tau)&=-\int_0^T\sum_{k=1}^K\sum_{\ell=1}^L \frac{\partial}{\partial\tau}\mathbf{K}_{k\ell}(t,\tau,x) y_{k\ell}(t)\ dt,\\
\mathbf{g}(x)&=\int_0^T\sum_{k=1}^K\sum_{\ell=1}^L \mathbf{K}_{k\ell}(t,T,x) y_{k\ell}(t)\ dt\,.
\end{split}
\end{equation}
\begin{remark}
In the case of a Landweber--Kaczmarz method iterating cyclically over the equations defined by $\mathbb{F}_0,\mathbb{F}_{k\ell}$, $k=1,\ldots,K$, $\ell=1,\ldots,L$, the adjoints of the derivatives of $\mathbb{F}_0$ remain unchanged, while the adjoints of $(\frac{\partial\mathbb{F}_{k\ell}}{\partial\hat{\mathbf{m}}}(\hat{\mathbf{m}},\hat{\alpha}))_{k=1,\ldots,K,\ell=1,\ldots,L}$ are defined as in \eqref{adjA21}, \eqref{fgA21}, simply by skipping the sums over $k$ and $\ell$ there.
\end{remark}
\subsection{Reduced formulation}\label{sec:red}
We now consider the formulation \eqref{ip_red} with $F$ defined by \eqref{red-F}, \eqref{defS}, \eqref{defK}.
Due to the estimate
\begin{align*}
\|\mathcal{K}_{k\ell}\textbf{m}_t\|^2_{L^2(0,T)}\leq T\|\widetilde{a}_{\ell}\|^2_{L^2(0,T)} \|c_k\textbf{p}_{\ell}^R\|^2_{L^2(\Omega,\R^3)}\|\textbf{m}\|^2_{H^1(0,T;L^2(\Omega,\R^3))}\,,
\end{align*}
which holds if $\widetilde{a}_{\ell}\in L^2(0,T)$ and $c_k\textbf{p}_{\ell}^R\in L^2(\Omega,\R^3)$, we can choose the state space in the reduced setting as
\begin{align}\label{red-spaceU}
\tilde{\mathcal{U}}=H^1(0,T;L^2(\Omega,\R^3)),
\end{align}
which is different from the one in the all-at-once setting.
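The estimate above is simply the Cauchy--Schwarz inequality applied twice, once to the time convolution and once to the spatial integral. It can be reproduced on a crude discretization of a single observation operator $\mathcal{K}_{k\ell}$, checking the bound with $\|\textbf{m}_t\|^2_{L^2(0,T;L^2)}$ in place of the larger $\|\textbf{m}\|^2_{H^1(0,T;L^2)}$; all grids and profiles below are hypothetical placeholders (and a causal kernel is used for simplicity), not the actual model quantities:

```python
import numpy as np

nt, nx = 200, 50
T = 1.0
t = np.linspace(0.0, T, nt); dt = t[1] - t[0]
x = np.linspace(0.0, 1.0, nx); dx = x[1] - x[0]

a = np.exp(-5.0 * t)                         # stand-in for the kernel a_l
cp = np.sin(np.pi * x)                       # scalar stand-in for c_k p_l^R
mt = np.outer(np.cos(3.0 * t), x * (1 - x))  # stand-in for m_t(x, tau)

# (K m_t)(t) = int_0^t a(t - tau) [ int_Omega cp(x) m_t(x, tau) dx ] dtau
inner = mt @ cp * dx                         # spatial integral for each tau
Km = dt * np.array([np.dot(a[:i + 1][::-1], inner[:i + 1]) for i in range(nt)])

lhs = np.sum(Km**2) * dt                     # ||K m_t||^2_{L^2(0,T)}
rhs = (T * np.sum(a**2) * dt                 # T ||a||^2_{L^2(0,T)}
       * np.sum(cp**2) * dx                  # ||c p||^2_{L^2(Omega)}
       * np.sum(mt**2) * dt * dx)            # ||m_t||^2_{L^2(0,T;L^2)}
assert 0 < lhs <= rhs
```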
\subsubsection{Adjoint equation}
From \eqref{red-F}, the derivative of the forward operator takes the form
\begin{align}\label{red-diffF}
F'(\hat{\alpha})\beta= \mathcal{K}\textbf{u}_t,
\end{align}
where $\textbf{u}$ solves the linearized LLG equation
\begin{alignat*}{3}
&\hat{\alpha}_1\textbf{u}_t - \hat{\alpha}_2\textbf{m}\times\textbf{u}_t - \hat{\alpha}_2\textbf{u}\times\textbf{m}_t - \Delta\textbf{u} -2(\nabla\textbf{u}:\nabla\textbf{m})\textbf{m}\\
&\qquad+\textbf{u}(-|\nabla\textbf{m}|^2+(\textbf{m}\cdot \textbf{h})) + (\textbf{u}\cdot \textbf{h})\textbf{m} &&\\
&=-\beta_1\textbf{m}_t+\beta_2\textbf{m}\times\textbf{m}_t \qquad &&\text{in } (0,T)\times\Omega\\
& \partial_\nu\textbf{u}=0 && \text{on } (0,T)\times\partial\Omega\\
& \textbf{u}(0)=0 && \text{in } \Omega,
\end{alignat*}
and $\textbf{m}$ is the solution to \eqref{llg3}-\eqref{llg3_abs}. This equation can be obtained by formally taking directional derivatives of all terms of the LLG equation \eqref{llg3}--\eqref{llg3_abs} with respect to $(\textbf{m},\hat{\alpha})$ in the direction $(\textbf{u},\beta)$, or alternatively by subtracting the boundary value problems defining $S(\hat{\alpha}+\epsilon\beta)$ and $S(\hat{\alpha})$, dividing by $\epsilon$ and then letting $\epsilon$ tend to zero.
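The limiting procedure just described (subtract the two boundary value problems, divide by $\epsilon$, let $\epsilon\to 0$) is what a standard derivative check verifies numerically: the Taylor remainder must decay quadratically in $\epsilon$. A sketch on a scalar toy model $\hat\alpha\, y'=-y$, $y(0)=1$, whose parameter-to-solution map is known in closed form (the toy model is illustrative only, not the LLG system):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)

def S(alpha):
    # parameter-to-state map of the toy model alpha * y' = -y, y(0) = 1
    return np.exp(-t / alpha)

def dS(alpha, beta):
    # its directional derivative at alpha in direction beta
    return beta * t / alpha**2 * np.exp(-t / alpha)

alpha, beta = 0.8, 1.0

def remainder(eps):
    return np.max(np.abs(S(alpha + eps * beta) - S(alpha) - eps * dS(alpha, beta)))

# halving eps shrinks the remainder by a factor of about 4 (second order)
ratio = remainder(1e-2) / remainder(5e-3)
assert 3.5 < ratio < 4.5
```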
The Hilbert space adjoint
\begin{align*}
F'(\hat{\alpha})^*:L^2(0,T)^{KL}\rightarrow\mathbb{R}^2
\end{align*}
of $F'(\hat{\alpha})$ satisfies, for each $z\in L^2(0,T)^{KL}$,
{\allowdisplaybreaks
\begin{align}\label{red-adjoint0}
&(F'(\hat{\alpha})^*z,\beta)_{\mathbb{R}^2} \nonumber\\
&=(z,F'(\hat{\alpha})\beta)_{L^2(0,T)^{KL}} \nonumber\\
&=\sum_{k=1}^K\sum_{\ell=1}^L\int_0^T z_{k\ell}(t) \int_0^T\int_\Omega (-\mu_0)\widetilde{a}_\ell(t-\tau) c_k(x)\textbf{p}_{\ell}^R(x)\cdot\textbf{u}_\tau(\tau,x) dx\,d\tau\,dt \nonumber\\
&=\sum_{k=1}^K\sum_{\ell=1}^L\int_0^T z_{k\ell}(t)\bigg(-\int_0^T\int_\Omega (-\mu_0)\cdot (-1)\widetilde{a}_{\ell\,t}(t-\tau) c_k(x)\textbf{p}_{\ell}^R(x)\cdot\textbf{u}(\tau,x)\,dx\,d\tau \nonumber\\
&\hspace{2.5cm} +\int_\Omega (-\mu_0)\widetilde{a}_{\ell}(t-T)c_k(x)\textbf{p}_{\ell}^R(x)\cdot\textbf{u}(T,x)\,dx \bigg)dt \nonumber\\
&= \int_0^T\int_\Omega\textbf{u}(\tau,x)\cdot \sum_{k=1}^K\sum_{\ell=1}^L \bigg( \int_0^T (-\mu_0)\widetilde{a}_{\ell\,t}(t-\tau)z_{k\ell}(t)\,dt \bigg)\, c_k(x)\textbf{p}_{\ell}^R(x)\,dx\,d\tau \nonumber\\
&\hspace{2.5cm} + \int_\Omega\textbf{u}(T,x)\cdot \sum_{k=1}^K\sum_{\ell=1}^L \bigg(\int_0^T (-\mu_0)\widetilde{a}_{\ell}(t)z_{k\ell}(t)\,dt \bigg)\, c_k(x)\textbf{p}_{\ell}^R(x)\,dx \nonumber\\
&=: (\textbf{u},\tilde{K} z)_{L^2(0,T;L^2(\Omega,\R^3))} + (\textbf{u}(T),\tilde{K}_T z)_{L^2(\Omega,\R^3)}
\end{align}
}
since the transfer function $\tilde{a}$ is periodic with period $T$, and the continuous embedding $H^1(0,T)\hookrightarrow C[0,T]$ allows us to evaluate $\textbf{u}(T)$.\\
Observing
{\allowdisplaybreaks
\begin{alignat*}{3}
&\int_0^T\int_\Omega -\hat{\alpha}_1\pp^z_t\cdot\textbf{u}\,dx\,dt =\int_0^T\int_\Omega\hat{\alpha}_1\textbf{u}_t\cdot\pp^z \,dx\,dt - \int_\Omega\hat{\alpha}_1\pp^z(T)\cdot\textbf{u}(T)\,dx \,, \\
&\int_0^T\int_\Omega -\hat{\alpha}_2(\textbf{m}\times\pp^z)_t\cdot\textbf{u} \,dx\,dt\\
&=\int_0^T\int_\Omega -\hat{\alpha}_2 (\textbf{m}\times\textbf{u}_t)\cdot\pp^z\,dx\,dt
-\int_\Omega\hat{\alpha}_2 (\textbf{m}\times\pp^z)(T)\cdot\textbf{u}(T)\,dx \,,\\ &\int_0^T\int_\Omega\hat{\alpha}_2(\pp^z\times\textbf{m}_t)\cdot\textbf{u} \,dx\,dt =\int_0^T\int_\Omega -\hat{\alpha}_2(\textbf{u}\times\textbf{m}_t)\cdot\pp^z \,dx\,dt \,,\\ &\int_0^T\int_\Omega -\Delta\pp^z\cdot\textbf{u} \,dx\,dt =\int_0^T\int_\Omega -\pp^z\cdot\Delta\textbf{u} \,dx\,dt - \int_0^T\int_{\partial\Omega} \partial_\nu\pp^z\cdot\textbf{u} \,dx\,dt \,,\\ &\int_0^T\int_\Omega \pp^z(-|\nabla\textbf{m}|^2+(\textbf{m}\cdot\textbf{h}))\cdot\textbf{u} \,dx\,dt =\int_0^T\int_\Omega \left(\textbf{u}(-|\nabla\textbf{m}|^2+(\textbf{m}\cdot \textbf{h}))\right)\cdot\pp^z \,dx\,dt \,,\\ &\int_0^T\int_\Omega \left(\pp^z\cdot\textbf{m}\right)\textbf{h}\cdot\textbf{u}\,dx\,dt =\int_0^T\int_\Omega (\textbf{u}\cdot\textbf{h})\textbf{m}\cdot\pp^z\,dx\,dt \,,\\ &\int_0^T\int_\Omega 2(\textbf{m}\cdot\pp^z)\Delta\textbf{m}\cdot\textbf{u} \,dx\,dt\\ &=-\int_0^T\int_\Omega 2(\nabla\textbf{m}:\nabla\textbf{u})(\textbf{m}\cdot\pp^z) \,dx\,dt\\ &\qquad\qquad\qquad +2\int_0^T\int_\Omega -\textbf{u}\cdot((\nabla\textbf{m})^\top\nabla\textbf{m})\pp^z -\textbf{u}\cdot((\nabla\textbf{m})^\top\nabla\pp^z)\textbf{m} \,dx\,dt \,, \end{alignat*} } we see that, if $\pp^z$ solves the adjoint equation \begin{alignat}{3} &-\hat{\alpha}_1\pp^z_t - \hat{\alpha}_2\textbf{m}\times\pp^z_t - 2\hat{\alpha}_2\textbf{m}_t\times\pp^z - \Delta\pp^z \nonumber\\ &\quad +2\left((\nabla\textbf{m})^\top\nabla\textbf{m}\right)\pp^z + 2\left((\nabla\textbf{m})^\top\nabla\pp^z\right)\textbf{m} \nonumber\\ &\quad+(-|\nabla\textbf{m}|^2+(\textbf{m}\cdot \textbf{h}))\pp^z + (\textbf{m}\cdot\pp^z)(\textbf{h} + 2\Delta\textbf{m})=\tilde{K} z \qquad&&\text{in } (0,T)\times\Omega \label{red-adjoint-eq0-1}\\ & \partial_\nu\pp^z=0 && \text{on } (0,T)\times\partial\Omega \label{red-adjoint-eq0-2}\\ & \hat{\alpha}_1\pp^z(T)+\hat{\alpha}_2(\textbf{m}\times\pp^z)(T)=\tilde{K}_T z && \text{in } \Omega \label{red-adjoint-eq0-3} \end{alignat} 
then with \eqref{red-adjoint0}, we have
\begin{align*}
(F'(\hat{\alpha})^*z,\beta)_{\mathbb{R}^2}&=(\textbf{u},\tilde{K} z)_{L^2(0,T;L^2(\Omega,\R^3))} + (\textbf{u}(T),\tilde{K}_T z)_{L^2(\Omega,\R^3)}\\
&=\int_0^T\int_\Omega(-\beta_1\textbf{m}_t+\beta_2\textbf{m}\times\textbf{m}_t)\cdot\pp^z \,dx\,dt \nonumber\\
&=(\beta_1,\beta_2)\cdot\left( \int_0^T\int_\Omega -\textbf{m}_t\cdot\pp^z \,dx\,dt , \int_0^T\int_\Omega(\textbf{m}\times\textbf{m}_t)\cdot\pp^z \,dx\,dt\right),
\end{align*}
which yields the Hilbert space adjoint $F'(\hat{\alpha})^*:\mathcal{Y}\rightarrow\mathbb{R}^2$,
\begin{align} \label{red-adjoint}
F'(\hat{\alpha})^*z=\left( \int_0^T\int_\Omega -\textbf{m}_t\cdot\pp^z \,dx\,dt , \int_0^T\int_\Omega(\textbf{m}\times\textbf{m}_t)\cdot\pp^z \,dx\,dt\right),
\end{align}
provided that the adjoint state $\pp^z$ exists and belongs to a sufficiently smooth space (see Subsection \ref{sec-red-adjoint-solvability} below).
The final condition \eqref{red-adjoint-eq0-3} is equivalent to
\begin{align*}
\begin{pmatrix}
\hat{\alpha}_1 & -\hat{\alpha}_2\textbf{m}_3(T) & \hat{\alpha}_2\textbf{m}_2(T)\\
\hat{\alpha}_2\textbf{m}_3(T) & \hat{\alpha}_1 & -\hat{\alpha}_2\textbf{m}_1(T)\\
-\hat{\alpha}_2\textbf{m}_2(T) & \hat{\alpha}_2\textbf{m}_1(T) & \hat{\alpha}_1
\end{pmatrix}\pp^z(T)=:M^{\hat{\alpha}}_T\pp^z(T)=\tilde{K}_T z,
\end{align*}
where $\textbf{m}_i(T)$, $i=1,2,3$, denotes the $i$-th component of $\textbf{m}(T)$. The matrix $M^{\hat{\alpha}}_T$ with $\det(M^{\hat{\alpha}}_T)=\hat{\alpha}_1(\hat{\alpha}_1^2+\hat{\alpha}_2^2)$ (using $|\textbf{m}(T)|=1$) is invertible if $\hat{\alpha}_1 > 0$, which matches the condition for existence of the solution to the LLG equation.
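For a unit vector $\mathbf{m}(T)$ the cofactor expansion of $M^{\hat\alpha}_T$ indeed collapses to $\hat\alpha_1(\hat\alpha_1^2+\hat\alpha_2^2)$; a numerical spot check with a random unit vector (the numerical values are chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
a1, a2 = 0.7, 1.3                 # arbitrary alpha_1 > 0 and alpha_2
m = rng.normal(size=3)
m /= np.linalg.norm(m)            # |m(T)| = 1, as for the LLG solution

M = np.array([[a1,        -a2 * m[2],  a2 * m[1]],
              [a2 * m[2],  a1,        -a2 * m[0]],
              [-a2 * m[1], a2 * m[0],  a1]])
assert np.isclose(np.linalg.det(M), a1 * (a1**2 + a2**2))
```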
Hence, we are able to rewrite the adjoint equation in the form
\begin{alignat}{3}
&-\hat{\alpha}_1\pp^z_t - \hat{\alpha}_2\textbf{m}\times\pp^z_t - 2\hat{\alpha}_2\textbf{m}_t\times\pp^z - \Delta\pp^z \nonumber\\
&\quad +2\left((\nabla\textbf{m})^\top\nabla\textbf{m}\right)\pp^z + 2\left((\nabla\textbf{m})^\top\nabla\pp^z\right)\textbf{m} \nonumber\\
&\quad+(-|\nabla\textbf{m}|^2+(\textbf{m}\cdot \textbf{h}))\pp^z + (\textbf{m}\cdot\pp^z)(\textbf{h} + 2\Delta\textbf{m})=\tilde{K} z \qquad&&\text{in } (0,T)\times\Omega \label{red-adjoint-eq-1}\\
& \partial_\nu\pp^z=0 && \text{on } (0,T)\times\partial\Omega \label{red-adjoint-eq-2}\\
& \pp^z(T)=(M_T^{\hat{\alpha}})^{-1}\tilde{K}_T z && \text{in } \Omega. \label{red-adjoint-eq-3}
\end{alignat}
\begin{remark}
Formula \eqref{red-adjoint} inspires a Kaczmarz scheme relying on restricting the observation operator to time subintervals for every fixed $k, \ell$: we segment $(0,T)$ into subintervals $(t^j, t^{j+1})$ with break points $0=t^0<\ldots<t^{n}=T$ and set
\begin{align}
F^j_{k\ell}:\mathcal{D}(F)(\subseteq\mathcal{X})\rightarrow\mathcal{Y}^j, \qquad \hat{\alpha} \mapsto y^j :=\mathcal{K}_{k\ell} \frac{\partial}{\partial t} S(\hat{\alpha})|_{(t^j, t^{j+1})}
\end{align}
with
\begin{align}
\mathcal{Y}^j=L^2(t^j, t^{j+1})^{KL} \qquad j=0,\ldots,n-1,
\end{align}
hence
\begin{align}
y^j_{k\ell}(t)=\int_0^T\int_\Omega -\mu_0\widetilde{a}_{\ell}(t-\tau) c_k(x)\textbf{p}_{\ell}^R(x)\cdot\textbf{m}_\tau(x,\tau)\, dx\, d\tau, \qquad t\in(t^j, t^{j+1}).
\end{align}
Here we distinguish between the superscript $j$ for the time subinterval index and the subscripts $k, \ell$ indexing the different receive coils and concentrations.
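The resulting sweep over the operators $F^j_{k\ell}$ is a cyclic (block-Landweber) iteration. Its structure can be illustrated on a generic consistent linear toy system, with row blocks playing the role of the time subintervals; everything below is a hypothetical stand-in, not the operator of this paper:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(40, 5))
x_true = rng.normal(size=5)
b = A @ x_true                              # consistent, noise-free data

blocks = np.array_split(np.arange(40), 4)   # "subintervals" = row blocks
x = np.zeros(5)
for sweep in range(200):
    for J in blocks:                        # one Landweber step per block
        AJ, bJ = A[J], b[J]
        omega = 1.0 / np.linalg.norm(AJ, 2) ** 2   # step size <= 1/||A_J||^2
        x = x + omega * AJ.T @ (bJ - AJ @ x)

assert np.allclose(x, x_true, atol=1e-6)
```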
For $z^j\in\mathcal{Y}^j$,
\begin{align*}
&(\tilde{K} z^j)(x,t)=\sum_{k=1}^K\sum_{\ell=1}^L-\mu_0c_k(x)\textbf{p}_{\ell}^R(x)\int_{t^j}^{t^{j+1}} \widetilde{a}_{\ell\,\tau}(\tau-t)z^j_{k\ell}(\tau)\,d\tau \qquad t\in(0,T) \,,\\
&(\tilde{K}_T z^j)(x)=\sum_{k=1}^K\sum_{\ell=1}^L-\mu_0c_k(x)\textbf{p}_{\ell}^R(x)\int_{t^j}^{t^{j+1}} \widetilde{a}_{\ell}(\tau)z^j_{k\ell}(\tau)\,d\tau
\end{align*}
yield the same Hilbert space adjoint ${F^j}'(\hat{\alpha})^*:\mathcal{Y}^j\rightarrow\mathbb{R}^2$ as in \eqref{red-adjoint}, but the adjoint state $\textbf{q}^{z^j}$ still has to be computed on the whole time interval $[0,T]$, now solving
\begin{alignat}{3}
&-\hat{\alpha}_1\textbf{q}^{z^j}_t - \hat{\alpha}_2\textbf{m}\times\textbf{q}^{z^j}_t - 2\hat{\alpha}_2\textbf{m}_t\times\textbf{q}^{z^j} - \Delta\textbf{q}^{z^j} \nonumber\\
&\quad +2\left((\nabla\textbf{m})^\top\nabla\textbf{m}\right)\textbf{q}^{z^j} + 2\left((\nabla\textbf{m})^\top\nabla\textbf{q}^{z^j}\right)\textbf{m} \nonumber\\
&\quad+(-|\nabla\textbf{m}|^2+(\textbf{m}\cdot \textbf{h}))\textbf{q}^{z^j} + (\textbf{m}\cdot\textbf{q}^{z^j})(\textbf{h} + 2\Delta\textbf{m})=\tilde{K} z^j \quad&&\text{in } (0,T)\times\Omega \label{red-adjoint-eqLK-1}\\
& \partial_\nu\textbf{q}^{z^j}=0 && \text{on } (0,T)\times\partial\Omega \label{red-adjoint-eqLK-2}\\
& \textbf{q}^{z^j}(T)=(M_T^{\hat{\alpha}})^{-1}\tilde{K}_T z^j && \text{in } \Omega.
\label{red-adjoint-eqLK-3}
\end{alignat}
Besides this, the conventional Kaczmarz method resulting from the collection of observation operators $\mathcal{K}_{k\ell}$ with $k=1,\ldots,K$, $\ell=1,\ldots,L$ as in \eqref{forwardoperator_kl} is always applicable, where
\begin{align}
F_{k\ell}:\mathcal{D}(F)(\subseteq\mathcal{X})\rightarrow\mathcal{Y}_{k\ell}, \qquad \hat{\alpha} \mapsto y_{k\ell}:=\mathcal{K}_{k\ell}\frac{\partial}{\partial t}(S(\hat{\alpha}))
\end{align}
with
\begin{align}
\mathcal{Y}_{k\ell}=L^2(0,T),\qquad k=1,\ldots,K, \ \ell=1,\ldots,L.
\end{align}
Thus ${F'_{k\ell}(\hat{\alpha})}^*$ is again given by \eqref{red-adjoint}, where the adjoint state $\pp^z_{k\ell}$ solves \eqref{red-adjoint-eq-1}-\eqref{red-adjoint-eq-3} with corresponding data
\begin{align*}
&\tilde{K}_{k\ell}z(x,t)=-\mu_0c_k(x)\textbf{p}_{\ell}^R(x)\int_0^T \widetilde{a}_{\ell\,\tau}(\tau-t)z(\tau)\,d\tau \qquad t\in(0,T) \,,\\
&\tilde{K}_{T\,k\ell}z(x)=-\mu_0c_k(x)\textbf{p}_{\ell}^R(x)\int_0^T \widetilde{a}_{\ell}(\tau)z(\tau)\,d\tau
\end{align*}
for each $z\in\mathcal{Y}_{k\ell}$.
\end{remark}
\subsubsection{Solvability of the adjoint equation}
\label{sec-red-adjoint-solvability}
We first derive an a priori bound for $\pp^z$. To this end, we substitute $\tau=T-t$ to convert \eqref{red-adjoint-eq-1}-\eqref{red-adjoint-eq-3} into an initial boundary value problem.
Then we test \eqref{red-adjoint-eq-1} with $\pp^z_t$ and obtain the identities and estimates
{\allowdisplaybreaks
\begin{alignat*}{3}
&\int_\Omega \hat{\alpha}_1\pp^z_t(t)\cdot\pp^z_t(t) \,dx=\hat{\alpha}_1\|\pp^z_t(t)\|^2_{L^2(\Omega,\R^3)} \,,\\
&\int_\Omega \hat{\alpha}_2(\textbf{m}(t)\times\pp^z_t(t))\cdot\pp^z_t(t) \,dx=0 \,,\\
&\int_\Omega \hat{\alpha}_2(\textbf{m}_t(t)\times\pp^z(t))\cdot\pp^z_t(t) \,dx \leq |\hat{\alpha}_2| \|\textbf{m}_t(t)\|_{L^3(\Omega,\R^3)}\|\pp^z(t)\|_{L^6(\Omega,\R^3)}\|\pp^z_t(t)\|_{L^2(\Omega,\R^3)} \,,\\
&\int_\Omega -\Delta\pp^z(t)\cdot\pp^z_t(t) \,dx=\frac{1}{2}\frac{d}{dt}\|\nabla\pp^z(t)\|^2_{L^2(\Omega,\R^3)} \,,\\
&\int_\Omega \left(((\nabla\textbf{m}(t))^\top\nabla\textbf{m}(t))\pp^z(t)\right)\cdot\pp^z_t(t) \,dx\\
&\, \leq (C^{\Omega}_{H^1\rightarrow L^6})^2\|\nabla\textbf{m}\|^2_{L^\infty(0,T;H^1(\Omega,\R^3))}\|\pp^z(t)\|_{L^6(\Omega,\R^3)}\|\pp^z_t(t)\|_{L^2(\Omega,\R^3)} \,,\\
&\int_\Omega \left(((\nabla\textbf{m}(t))^\top\nabla\pp^z(t))\textbf{m}(t)\right)\cdot\pp^z_t(t) \,dx \\
&\,\leq C^\Omega_{H^2\rightarrow L^\infty}\|\nabla\textbf{m}(t)\|_{H^2(\Omega,\R^3)}\|\nabla\pp^z(t)\|_{L^2(\Omega,\R^3)}\|\pp^z_t(t)\|_{L^2(\Omega,\R^3)} \,,\\
&\int_\Omega (-|\nabla\textbf{m}(t)|^2+(\textbf{m}(t)\cdot \textbf{h}))\pp^z(t)\cdot\pp^z_t(t) \,dx\\
&\, \leq \left((C^{\Omega}_{H^1\rightarrow L^6})^2 \|\nabla\textbf{m}\|^2_{L^\infty(0,T;H^1(\Omega,\R^3))}+\|\textbf{h}(t)\|_{L^3(\Omega,\R^3)} \right)\|\pp^z(t)\|_{L^6(\Omega,\R^3)}\|\pp^z_t(t)\|_{L^2(\Omega,\R^3)} \,,\\
&\int_\Omega \left(\textbf{m}(t)\cdot\pp^z(t)\right)\textbf{h}(t)\cdot\pp^z_t(t) \,dx \leq \|\textbf{h}(t)\|_{L^3(\Omega,\R^3)}\|\pp^z(t)\|_{L^6(\Omega,\R^3)}\|\pp^z_t(t)\|_{L^2(\Omega,\R^3)} \,,\\
&\int_\Omega (\textbf{m}(t)\cdot\pp^z(t))\Delta\textbf{m}(t)\cdot\pp^z_t(t) \,dx\\
&\,\leq C^\Omega_{H^1\rightarrow L^3}\|\Delta\textbf{m}(t)\|_{H^1(\Omega,\R^3)}\|\pp^z(t)\|_{L^6(\Omega,\R^3)}\|\pp^z_t(t)\|_{L^2(\Omega,\R^3)} \,,\\
&\int_\Omega \tilde{K}
z(t)\cdot\pp^z_t(t) \,dx \leq \|\tilde{K} z(t)\|_{L^2(\Omega,\R^3)}\|\pp^z_t(t)\|_{L^2(\Omega,\R^3)}\,.
\end{alignat*}
}
Above, we have employed the fact that the solution $\textbf{m}$ to the LLG equation satisfies $|\textbf{m}|=1$, as well as the continuity of the embeddings $H^1(\Omega,\R^3)\hookrightarrow L^6(\Omega,\R^3)\hookrightarrow L^3(\Omega,\R^3)$, $H^2(\Omega,\R^3)\hookrightarrow L^\infty(\Omega,\R^3)$, with constants $C^\Omega_{H^1\rightarrow L^6}, C^\Omega_{H^1\rightarrow L^3}$ and $C^\Omega_{H^2\rightarrow L^\infty}$, respectively.\\
Employing Young's inequality, we deduce, for each $t\leq T$ and $\epsilon>0$ sufficiently small,
\begin{align} \label{red-p-bound-0}
&\frac{1}{2}\frac{d}{dt}\|\nabla\pp^z(t)\|^2_{L^2(\Omega,\R^3)}+(\hat{\alpha}_1-\epsilon)\|\pp^z_t(t)\|^2_{L^2(\Omega,\R^3)} \nonumber\\
&\leq \bigg[\left(\|\nabla\textbf{m}\|^4_{L^\infty(0,T;H^1(\Omega,\R^3))}+\|\nabla\textbf{m}(t)\|^2_{H^2(\Omega,\R^3)}+ \|\textbf{m}_t(t)\|^2_{L^3(\Omega,\R^3)}+\|\textbf{h}(t)\|^2_{L^3(\Omega,\R^3)}\right) \nonumber\\
&\qquad\qquad\quad .\|\pp^z(t)\|^2_{H^1(\Omega,\R^3)} +\|\tilde{K} z(t)\|^2_{L^2(\Omega,\R^3)}\bigg]\frac{C}{4\epsilon}.
\end{align}
The generic constant $C$ may take different values at each occurrence.
To have the full $H^1$-norm on the left-hand side of this estimate, we apply the transformation $\tilde{\pp^z}(t)=e^t\pp^z(t)$, which yields $\tilde{\pp^z}_t(t)=e^{t}(\pp^z(t)+\pp^z_t(t))$. After testing by $\pp^z_t$, the term $\int_\Omega \pp^z(t)\cdot\pp^z_t(t) \,dx=\frac{1}{2}\frac{d}{dt}\|\pp^z(t)\|^2_{L^2(\Omega,\R^3)}$ will contribute to $\frac{1}{2}\frac{d}{dt}\|\nabla\pp^z(t)\|^2_{L^2(\Omega,\R^3)}$, forming the full $H^1$-norm on the left-hand side. Alternatively, one can add $\pp^z$ to both sides of \eqref{red-adjoint-eq-1} and evaluate the right-hand side with $\int_\Omega \pp^z(t)\cdot\pp^z_t(t) \,dx\leq \frac{1}{4\epsilon}\|\pp^z(t)\|^2_{H^1(\Omega,\R^3)}+\epsilon\|\pp^z_t(t)\|^2_{L^2(\Omega,\R^3)}$.
Integrating over $(0,t)$, we get
\begin{align*}
&\frac{1}{2}\|\pp^z(t)\|^2_{H^1(\Omega,\R^3)}+(\hat{\alpha}_1-\epsilon)\|\pp^z_t\|^2_{L^2(0,t;L^2(\Omega,\R^3))} \nonumber\\
&\leq \frac{C}{4\epsilon} \bigg[\int_0^t \Big(\|\nabla\textbf{m}\|^4_{L^\infty(0,T;H^1(\Omega,\R^3))}+\|\nabla\textbf{m}(\tau)\|^2_{H^2(\Omega,\R^3)}+ \|\textbf{m}_t(\tau)\|^2_{L^3(\Omega,\R^3)}\\
&\qquad\qquad\qquad +\|\textbf{h}(\tau)\|^2_{L^3(\Omega,\R^3)}\Big).\|\pp^z(\tau)\|^2_{H^1(\Omega,\R^3)}\,d\tau \\
&\qquad\qquad\qquad +\|\tilde{K} z\|^2_{L^2(0,T;L^2(\Omega,\R^3))} + \|(M_T^{\hat{\alpha}})^{-1}\tilde{K}_T z\|^2_{H^1(\Omega,\R^3)}\bigg],
\end{align*}
where the terms $\|\tilde{K} z\|_{L^2(0,T;L^2(\Omega,\R^3))}$ and $\|(M_T^{\hat{\alpha}})^{-1}\tilde{K}_T z\|^2_{H^1(\Omega,\R^3)}$ are estimated via (since no confusion can arise, we omit the subscripts $k,\ell$ indexing concentrations and coil sensitivities)
\begin{alignat*}{3}
&\|\tilde{K} z(t)\|_{L^2(\Omega,\R^3)}^2\leq C\|c\textbf{p}^R\|^2_{L^2(\Omega,\R^3)}\|\tilde{a}\|^2_{H^1(0,T)}\|z\|^2_{L^2(0,T)}\leq C^{\tilde{a},c,\textbf{p}^R}\|z\|^2_{L^2(0,T)} \,,\\[1ex]
&\|(M_T^{\hat{\alpha}})^{-1}\tilde{K}_T z\|^2_{H^1(\Omega,\R^3)}\\
&\leq C^{\hat{\alpha}}\|z\|^2_{L^2(0,T)}\|\tilde{a}\|^2_{L^2(0,T)}\\
&\qquad\quad.\left( \|c\textbf{p}^R\|^2_{H^1(\Omega,\R^3)}+\|c\textbf{p}^R\textbf{m}_i(T)\|^2_{H^1(\Omega,\R^3)}+\|c\textbf{p}^R\textbf{m}_j(T)\textbf{m}_k(T)\|^2_{H^1(\Omega,\R^3)} \right)\\
&\leq C^{\hat{\alpha}_0,\rho, \tilde{a}}\|z\|^2_{L^2(0,T)}\left(\|c\textbf{p}^R\|^2_{H^1(\Omega,\R^3)}+\|c\textbf{p}^R\|^2_{L^6(\Omega,\R^3)}\|\nabla\textbf{m}(T)\|^2_{L^3(\Omega,\R^3)} \right)\\
&\leq C^{\tilde{a}}\|z\|^2_{L^2(0,T)} \\
&\qquad\quad.\left( \|c\textbf{p}^R\|^2_{H^1(\Omega,\R^3)}+(C^{\Omega}_{H^1\rightarrow L^6}C^{\Omega}_{H^1\rightarrow L^3})^2\|c\textbf{p}^R\|^2_{H^1(\Omega,\R^3)}\|\nabla\textbf{m}\|^2_{L^\infty(0,T;H^1(\Omega,\R^3))} \right)\\
&\leq
C^{\tilde{a},c,\textbf{p}^R}\|z\|^2_{L^2(0,T)}\|\nabla\textbf{m}\|^2_{L^\infty(0,T;H^1(\Omega,\R^3))}
\end{alignat*}
for some $i,j,k\in\{1,2,3\}$. This estimate holds for $c\textbf{p}^R\in H^1(\Omega,\R^3)$ and thus requires some smoothness of the concentration $c$, while the coil sensitivity $\textbf{p}^R$ is usually smooth in practice.
Then applying Gr\"onwall's inequality yields
\begin{align*}
&\|\pp^z\|_{L^\infty(0,T;H^1(\Omega,\R^3))}\\
&\leq C\exp\Big(\|\nabla\textbf{m}\|^2_{L^\infty(0,T;H^1(\Omega,\R^3))}+\|\nabla\textbf{m}\|_{L^2(0,T;H^2(\Omega,\R^3))}+\|\textbf{m}_t\|_{L^2(0,T;L^3(\Omega,\R^3))}\\
&\qquad\quad + \|\textbf{h}\|_{L^2(0,T;L^3(\Omega,\R^3))}\Big) .\big(\|\tilde{K} z\|_{L^2(0,T;L^2(\Omega,\R^3))}+\|(M_T^{\hat{\alpha}})^{-1}\tilde{K}_T z\|_{H^1(\Omega,\R^3)} \big)\\
&\leq C^{\tilde{a},c,\textbf{p}^R}\Big(\|\nabla\textbf{m}\|_{L^\infty(0,T;H^1(\Omega,\R^3))\cap L^2(0,T;H^2(\Omega,\R^3))}, \|\textbf{m}_t\|_{L^2(0,T;L^3(\Omega,\R^3))}\\
&\qquad\quad ,\|\textbf{h}\|_{L^2(0,T;L^3(\Omega,\R^3))}\Big).\|z\|_{L^2(0,T)}.
\end{align*}
Integrating \eqref{red-p-bound-0} over $(0,T)$, we also get
\begin{align*}
&\|\pp^z_t\|_{L^2(0,T;L^2(\Omega,\R^3))}\\
&\leq C^{\tilde{a},c,\textbf{p}^R}\Big(\|\nabla\textbf{m}\|_{L^\infty(0,T;H^1(\Omega,\R^3))\cap L^2(0,T;H^2(\Omega,\R^3))}, \|\textbf{m}_t\|_{L^2(0,T;L^3(\Omega,\R^3))}\\
&\qquad\qquad\quad ,\|\textbf{h}\|_{L^2(0,T;L^3(\Omega,\R^3))}\Big) .\|z\|_{L^2(0,T)}.
\end{align*}
Altogether, we obtain
\begin{align} \label{red-p-bound}
&\|\pp^z\|_{L^\infty(0,T;H^1(\Omega,\R^3))}+\|\pp^z_t\|_{L^2(0,T;L^2(\Omega,\R^3))}\nonumber \\
&\leq C^{\tilde{a},c,\textbf{p}^R}\Big(\|\nabla\textbf{m}\|_{L^\infty(0,T;H^1(\Omega,\R^3))\cap L^2(0,T;H^2(\Omega,\R^3))}, \|\textbf{m}_t\|_{L^2(0,T;L^3(\Omega,\R^3))} \nonumber\\
&\qquad\qquad\quad ,\|\textbf{h}\|_{L^2(0,T;L^3(\Omega,\R^3))}\Big) .\|z\|_{L^2(0,T)}.
\end{align}
This result, applied to the Galerkin approximation, implies existence of the solution to the adjoint equation.
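For reference, the version of Gr\"onwall's inequality invoked here is the standard integral form: for nonnegative $\beta\in L^1(0,T)$ and a constant $A\geq 0$,
\[
\varphi(t)\leq A+\int_0^t \beta(\tau)\,\varphi(\tau)\,d\tau \ \mbox{ for all } t\in[0,T]
\quad\Longrightarrow\quad
\varphi(t)\leq A\,\exp\Bigl(\int_0^t \beta(\tau)\,d\tau\Bigr),
\]
applied with $\varphi(t)=\|\pp^z(t)\|^2_{H^1(\Omega,\R^3)}$, with $\beta$ collecting the coefficient functions of $\|\pp^z(\tau)\|^2_{H^1(\Omega,\R^3)}$ in the integrand, and with $A$ given by the data terms; this is where the exponential of the $\textbf{m}$- and $\textbf{h}$-norms in the bound originates.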
Uniqueness also follows from \eqref{red-p-bound}.
\subsubsection{Regularity of the solution to the LLG equation}
For \eqref{red-p-bound}, we first of all need the solution $\textbf{m}\in L^\infty(0,T;H^2(\Omega,\R^3))$\\$\cap L^2(0,T;H^3(\Omega,\R^3))$ to the LLG equation. This can be obtained from the regularity result in \cite[Lemma 2.3]{bgmh93} for $\textbf{m}_0\in{H^2(\Omega,\R^3)}$ with small $\|\nabla\textbf{m}_0\|_{L^2(\Omega,\R^3)}$. It remains to verify that the estimate still holds when the external field $\textbf{h}$ is present, i.e., when the right-hand side of \eqref{llg3} contains the additional term $\mbox{Proj}_{\m^\bot}\textbf{h}$.
Following the lines of the proof in \cite[Lemma 2.3]{bgmh93}, we take second spatial derivatives of $\mbox{Proj}_{\m^\bot}\textbf{h}$ and test with $\Delta\textbf{m}$, obtaining
\begin{alignat*}{4}
&\int_\Omega \Delta\textbf{h}(t)\cdot\Delta\textbf{m}(t) \,dx\\
&\leq \begin{cases}
\|\Delta\textbf{h}(t)\|_{L^2(\Omega,\R^3)}\|\Delta\textbf{m}(t)\|_{L^2(\Omega,\R^3)} &\text{if } \textbf{h}\in L^2(0,T;H^2(\Omega,\R^3))\\
\|\nabla\textbf{h}(t)\|_{L^2(\Omega,\R^3)}\|\nabla^3\textbf{m}(t)\|_{L^2(\Omega,\R^3)} &\text{if } \textbf{h}\in L^2(0,T;H^1(\Omega,\R^3)),\, \partial_\nu\textbf{h}=0 \text{ on } \partial\Omega\\
\end{cases} \,,\\[1ex]
&\int_\Omega \Delta((\textbf{m}(t)\cdot\textbf{h}(t))\textbf{m}(t))\cdot\Delta\textbf{m}(t) \,dx\\
&\leq \begin{cases}
C\|\textbf{h}(t)\|_{H^2(\Omega,\R^3)}\left(1+6\|\nabla\textbf{m}(t)\|_{H^1(\Omega,\R^3)}+2\|\nabla\textbf{m}(t)\|_{H^2(\Omega,\R^3)}\|\nabla\textbf{m}\|_{L^\infty(0,T;L^2(\Omega,\R^3))}\right)\\
\hspace{1cm} .\|\Delta\textbf{m}(t)\|_{L^2(\Omega,\R^3)} \hspace{1.5cm}\, \text{ if } \textbf{h}\in L^2(0,T;H^2(\Omega,\R^3))\\
C\|\textbf{h}(t)\|_{H^1(\Omega,\R^3)}\left( 1+2\|\nabla\textbf{m}(t)\|_{L^3(\Omega,\R^3)} \right)\|\nabla^3\textbf{m}(t)\|_{L^2(\Omega,\R^3)}\\
\hspace{5cm}\text{if } \textbf{h}\in L^2(0,T;H^1(\Omega,\R^3)),\, \partial_\nu\textbf{h}=0 \text{ on }
\partial\Omega\\
\end{cases}
\end{alignat*}
with $C$ depending only on the constants in the embeddings $H^1(\Omega,\R^3)\hookrightarrow L^6(\Omega,\R^3)\hookrightarrow L^3(\Omega,\R^3)$. Then we can proceed similarly to the proof of \cite[Lemma 2.3]{bgmh93} by applying Young's inequality, Gr\"onwall's inequality and time integration to arrive at
\begin{align}\label{red-GHlem-h}
&\|\nabla \textbf{m}\|_{L^\infty(0,T;H^1(\Omega,\R^3))\cap L^2(0,T;H^2(\Omega,\R^3))} \nonumber\\
&\hspace{3cm}\leq \left(\|\nabla \textbf{m}_0\|_{H^1(\Omega,\R^3)}+\|\textbf{h}\|\right)C(\|\nabla \textbf{m}_0\|_{H^1(\Omega,\R^3)},\|\textbf{h}\|),
\end{align}
where $\|\textbf{h}\|$ is evaluated in $L^2(0,T;H^1(\Omega,\R^3))$ or $L^2(0,T;H^2(\Omega,\R^3))$ as in the two cases mentioned above.\\
It remains to prove $\textbf{m}_t\in L^2(0,T;H^1(\Omega,\R^3))\hookrightarrow L^2(0,T;L^3(\Omega,\R^3))$ to validate \eqref{red-p-bound}. For this purpose, instead of working with \eqref{llg3}, we test \eqref{llg2} by $-\Delta\textbf{m}_t$ and obtain
{\allowdisplaybreaks
\begin{align*}
&\int_\Omega \textbf{m}_t(t)\cdot(-\Delta\textbf{m}_t(t)) \,dx=\|\nabla\textbf{m}_t(t)\|^2_{L^2(\Omega,\R^3)} \,,\\
&\int_\Omega -\alpha_1 \Delta\textbf{m}(t)\cdot(-\Delta\textbf{m}_t(t)) \,dx=\frac{\alpha_1}{2}\frac{d}{dt}\|\Delta\textbf{m}(t)\|^2_{L^2(\Omega,\R^3)} \,,\\
&\int_\Omega -\alpha_1 |\nabla\textbf{m}(t)|^2\textbf{m}(t)\cdot(-\Delta\textbf{m}_t(t)) \,dx = -\alpha_1\int_\Omega \nabla\left(|\nabla\textbf{m}(t)|^2\textbf{m}(t)\right):\nabla\textbf{m}_t(t) \,dx\\
&\,\leq \alpha_1\Big(2C^\Omega_{H^1\rightarrow L^6}C^\Omega_{H^1\rightarrow L^3}\|\nabla\textbf{m}\|_{L^\infty(0,T;H^1(\Omega,\R^3))}\|\Delta\textbf{m}(t)\|_{H^1(\Omega,\R^3)}\\
&\qquad\qquad\quad + (C^{\Omega}_{H^1\rightarrow L^6})^3\|\nabla\textbf{m}\|^3_{L^\infty(0,T;H^1(\Omega,\R^3))} \Big).\|\nabla\textbf{m}_t(t)\|_{L^2(\Omega,\R^3)} \,,\\
&\int_\Omega -\alpha_1 (\textbf{h}(t)-(\textbf{m}(t)\cdot\textbf{h}(t))\textbf{m}(t))\cdot(-\Delta\textbf{m}_t(t))
\,dx\\ & =-\alpha_1\int_\Omega \nabla(\textbf{h}(t)-(\textbf{m}(t)\cdot\textbf{h}(t))\textbf{m}(t)):\nabla\textbf{m}_t(t) \,dx\\ &\,\leq 2\alpha_1\Big(\|\nabla\textbf{h}(t)\|_{L^2(\Omega,\R^3)} \\ &\qquad\qquad\quad + C^\Omega_{H^1\rightarrow L^6}\|\textbf{h}(t)\|_{L^3(\Omega,\R^3)}\|\nabla\textbf{m}\|_{L^\infty(0,T;H^1(\Omega,\R^3))} \Big). \|\nabla\textbf{m}_t(t)\|_{L^2(\Omega,\R^3)} \,,\\ &\int_\Omega -\alpha_2(\textbf{m}(t)\times\Delta\textbf{m}(t))\cdot(-\Delta\textbf{m}_t(t)) \,dx =\int_\Omega -\alpha_2\nabla(\textbf{m}(t)\times\Delta\textbf{m}(t)):\nabla\textbf{m}_t(t) \,dx\\ &\,\leq |\alpha_2|\Big(C^\Omega_{H^1\rightarrow L^6}C^\Omega_{H^1\rightarrow L^3}\|\nabla\textbf{m}\|_{L^\infty(0,T;H^1(\Omega,\R^3))}\|\Delta\textbf{m}(t)\|_{H^1(\Omega,\R^3)}\\ &\qquad\qquad\quad + \|\nabla^3\textbf{m}(t)\|_{L^2(\Omega,\R^3)} \Big).\|\nabla\textbf{m}_t(t)\|_{L^2(\Omega,\R^3)} \,,\\ &\int_\Omega -\alpha_2(\textbf{m}(t)\times\textbf{h}(t))\cdot(-\Delta\textbf{m}_t(t)) \,dx=\int_\Omega -\alpha_2\nabla(\textbf{m}(t)\times\textbf{h}(t)):(\nabla\textbf{m}_t(t)) \,dx\\ &\,\leq |\alpha_2|\Big(C^\Omega_{H^1\rightarrow L^6}\|\textbf{h}(t)\|_{L^3(\Omega,\R^3)}\|\nabla\textbf{m}\|_{L^\infty(0,T;H^1(\Omega,\R^3))}\\ &\qquad\qquad\quad + \|\nabla\textbf{h}(t)\|_{L^2(\Omega,\R^3)} \Big).\|\nabla\textbf{m}_t(t)\|_{L^2(\Omega,\R^3)} \,. 
\end{align*}
}
Integrating over $(0,T)$ and employing H\"older's inequality, Young's inequality and \eqref{red-GHlem-h}, we obtain
\begin{align}
&(1-\epsilon)\|\nabla\textbf{m}_t\|_{L^2(0,T;L^2(\Omega,\R^3))}\nonumber\\
&\leq \frac{C}{4\epsilon} \Big(\|\nabla\textbf{m}\|_{L^\infty(0,T;H^1(\Omega,\R^3))}\|\nabla\textbf{m}\|_{L^2(0,T;H^2(\Omega,\R^3))}+\|\nabla\textbf{m}\|^3_{L^\infty(0,T;H^1(\Omega,\R^3))} \nonumber\\
&\qquad\qquad +\|\nabla\textbf{m}\|_{L^2(0,T;H^2(\Omega,\R^3))} + \|\textbf{h}\|_{L^2(0,T;H^1(\Omega,\R^3))}\|\nabla\textbf{m}\|_{L^\infty(0,T;H^1(\Omega,\R^3))} \nonumber\\
&\qquad\qquad +\|\textbf{h}\|_{L^2(0,T;H^1(\Omega,\R^3))}\Big) \nonumber\\
&\leq \left(\|\nabla \textbf{m}_0\|_{H^1(\Omega,\R^3)}+\|\textbf{h}\|\right)C(\|\nabla \textbf{m}_0\|_{H^1(\Omega,\R^3)},\|\textbf{h}\|).
\end{align}
Since moreover $\|\textbf{m}_t\|_{L^2(0,T;L^2(\Omega,\R^3))}\leq C\left(\|\nabla\textbf{m}_0\|_{L^2(\Omega,\R^3)}+\|\textbf{h}\|_{L^2(0,T;L^2(\Omega,\R^3))}\right)$ according to \cite{mkap06}, taking into account the presence of $\textbf{h}$, we arrive at
\begin{align}\label{Mregularity}
\|\textbf{m}_t\|_{L^2(0,T;H^1(\Omega,\R^3))} \leq \left(\|\nabla \textbf{m}_0\|_{H^1(\Omega,\R^3)}+\|\textbf{h}\|\right)C(\|\nabla \textbf{m}_0\|_{H^1(\Omega,\R^3)},\|\textbf{h}\|),
\end{align}
where $\|\textbf{h}\|$ is evaluated in $L^2(0,T;H^1(\Omega,\R^3))$ or $L^2(0,T;H^2(\Omega,\R^3))$.
In conclusion, the fact that $\textbf{m}\in L^\infty(0,T;H^2(\Omega,\R^3))\cap L^2(0,T;H^3(\Omega,\R^3))\cap H^1(0,T;H^1(\Omega,\R^3))$ for $\textbf{m}_0\in H^2(\Omega,\R^3)$ with small $\|\nabla\textbf{m}_0\|_{L^2(\Omega,\R^3)}$, and\\ $\textbf{h}\in L^2(0,T;H^1(\Omega,\R^3)), \partial_\nu\textbf{h}=0$ on $\partial\Omega$ or $\textbf{h}\in L^2(0,T;H^2(\Omega,\R^3))$, guarantees unique existence of the adjoint state $\pp^z\in L^\infty(0,T;H^1(\Omega,\R^3))\cap H^1(0,T;L^2(\Omega,\R^3))$.
This regularity of $\pp^z$ ensures that the adjoint $F'(\hat{\alpha})^*$ in \eqref{red-adjoint} is well-defined. \begin{remark}\text{ } \begin{enumerate}[label=$\bullet$] \item The LLG equation \eqref{llg3}--\eqref{llg3_abs} is uniquely solvable for $\hat{\alpha}_1>0$ and arbitrary $\hat{\alpha}_2$. Therefore, the regularization problem should be solved locally within the ball $\mathcal{B}_\rho(\hat{\alpha}^0)$ with center $\hat{\alpha}^0$, where $\hat{\alpha}^0_1>0$, and radius $\rho<\hat{\alpha}_1^0$. \item \cite[Lemma 2.3]{bgmh93} requires the smallness condition $\|\nabla\textbf{m}_0\|_{L^2(\Omega,\R^3)}\leq\lambda$, and this smallness depends on $\hat{\alpha}$ through the relation $C^I\left(\lambda^2+2\lambda+\frac{\hat{\alpha}_2}{\hat{\alpha}_1}\lambda \right)<1$ with $C^I$ depending on the constants in the interpolation inequalities. \end{enumerate} \medskip Altogether, we arrive at \begin{align} \mathcal{D}(F)=\left\{\hat{\alpha}=(\hat{\alpha}_1,\hat{\alpha}_2)\in\mathcal{B}_\rho(\hat{\alpha}^0): 0<\hat{\alpha}^0_1,\rho<\hat{\alpha}_1^0, C^I\left(\lambda^2+2\lambda+\frac{\hat{\alpha}_2}{\hat{\alpha}_1}\lambda \right)<1 \right\}. \end{align} \end{remark} \subsubsection{Differentiability of the forward operator} \label{sec-red-diff} Since the observation operator $\mathcal{K}$ is linear, differentiability of $F$ reduces to differentiability of $S$. Let us rewrite the LLG equation \eqref{llg3} in the form \begin{align*} \tilde{g}(\hat{\alpha},\textbf{m})-\Delta\textbf{m} = \tilde{f}(\textbf{m}) \end{align*} and denote \begin{align*} \mathbf{\tilde{v}}^\epsilon:=\frac{S(\hat{\alpha}+\epsilon\beta)-S(\hat{\alpha})}{\epsilon}-\textbf{u} =:\frac{\textbf{n}-\textbf{m}}{\epsilon}-\textbf{u}=:\mathbf{v^\epsilon}-\textbf{u}.
\end{align*} Considering the system of equations \begin{alignat*}{4} &\tilde{g}(\hat{\alpha}+\epsilon\beta,\textbf{n}) &&-\Delta\textbf{n} &&=\tilde{f}(\textbf{n}),\\ &\tilde{g}(\hat{\alpha},\textbf{m}) &&-\Delta\textbf{m} &&=\tilde{f}(\textbf{m}),\\ &\tilde{g}'_\textbf{m}(\hat{\alpha},\textbf{m})\textbf{u} + \tilde{g}'_{\hat{\alpha}}(\hat{\alpha},\textbf{m})\beta &&-\Delta\textbf{u} && =\tilde{f}'_\textbf{m}(\textbf{m})\textbf{u}, \end{alignat*} with the same boundary and initial data for each, we see that $\mathbf{\tilde{v}}^\epsilon$ solves \begin{alignat}{3} &\tilde{g}'_\textbf{m}(\hat{\alpha},\textbf{m})\mathbf{\tilde{v}}^\epsilon -\Delta\mathbf{\tilde{v}}^\epsilon -\tilde{f}'_\textbf{m}(\textbf{m})\mathbf{\tilde{v}}^\epsilon \nonumber\\ &= \frac{\tilde{f}(\textbf{n})-\tilde{f}(\textbf{m})}{\epsilon}-\tilde{f}'_\textbf{m}(\textbf{m})\mathbf{v^\epsilon} - \frac{\tilde{g}(\hat{\alpha}+\epsilon\beta,\textbf{n})-\tilde{g}(\hat{\alpha},\textbf{m})}{\epsilon}&& \label{red-vtil-1}\\ &\qquad + \tilde{g}'_\textbf{m}(\hat{\alpha},\textbf{m})\mathbf{v^\epsilon} + \tilde{g}'_{\hat{\alpha}}(\hat{\alpha},\textbf{m})\beta &&\text{in } (0,T)\times\Omega \nonumber\\ & \partial_\nu\mathbf{\tilde{v}}^\epsilon=0 && \text {on } [0,T]\times\partial\Omega \label{red-vtil-2}\\ & \mathbf{\tilde{v}}^\epsilon(0)=0 && \text{in } \Omega,\label{red-vtil-3} \end{alignat} explicitly \begin{alignat}{3} &\hat{\alpha}_1\mathbf{\tilde{v}}^\epsilon_t - \hat{\alpha}_2\textbf{m}\times\mathbf{\tilde{v}}^\epsilon_t - \hat{\alpha}_2\mathbf{\tilde{v}}^\epsilon\times\textbf{m}_t - \Delta\mathbf{\tilde{v}}^\epsilon \nonumber\\ &\quad -2(\nabla\mathbf{\tilde{v}}^\epsilon:\nabla\textbf{m})\textbf{m} +\mathbf{\tilde{v}}^\epsilon(-|\nabla\textbf{m}|^2+(\textbf{m}\cdot \textbf{h})) + (\mathbf{\tilde{v}}^\epsilon\cdot \textbf{h})\textbf{m} \nonumber\\ &=\frac{1}{\epsilon}\left( |\nabla\textbf{n}|^2\textbf{n}+\mbox{Proj}_{\n^\bot}\textbf{h} -|\nabla\textbf{m}|^2\textbf{m}-\mbox{Proj}_{\m^\bot}\textbf{h} \right) 
\label{red-vtil-4}\\ &\quad-2(\nabla\mathbf{v^\epsilon}:\nabla\textbf{m})\textbf{m}+\mathbf{v^\epsilon}(-|\nabla\textbf{m}|^2+(\textbf{m}\cdot \textbf{h})) + (\mathbf{v^\epsilon}\cdot \textbf{h})\textbf{m} \nonumber\\ &\quad -\frac{1}{\epsilon}\Big( (\hat{\alpha}_1+\epsilon\beta_1)\textbf{n}_t - (\hat{\alpha}_2+\epsilon\beta_2)\textbf{n}\times\textbf{n}_t -\hat{\alpha}_1\textbf{m}_t + \hat{\alpha}_2\textbf{m} \times &&\textbf{m}_t\Big) \nonumber\\ &\quad +\hat{\alpha}_1\mathbf{v_t^\epsilon} - \hat{\alpha}_2\textbf{m}\times\mathbf{v_t^\epsilon} - \hat{\alpha}_2\mathbf{v^\epsilon}\times\textbf{m}_t \nonumber\\ &\quad +\beta_1\textbf{m}_t-\beta_2\textbf{m}\times\textbf{m}_t \qquad &&\text{in } (0,T)\times\Omega \nonumber\\ & \partial_\nu\mathbf{\tilde{v}}^\epsilon=0 &&\text{on } [0,T]\times\partial\Omega \label{red-vtil-5}\\ & \mathbf{\tilde{v}}^\epsilon(0)=0 &&\text{in } \Omega.\label{red-vtil-6} \end{alignat} Observing the similarity of \eqref{red-vtil-4}--\eqref{red-vtil-6} to the adjoint equation \eqref{red-adjoint-eq-1}--\eqref{red-adjoint-eq-3} with $\mathbf{\tilde{v}}^\epsilon$ in place of $\pp^z$ and denoting by $\mathbf{b^\epsilon}$ the right-hand side of \eqref{red-vtil-1} or \eqref{red-vtil-4}, one can evaluate $\|\mathbf{\tilde{v}}^\epsilon\|$ using the same technique as in Section \ref{sec-red-adjoint-solvability}. In this way, one obtains, for each $\epsilon\in[0,\bar{\epsilon}]$, \begin{align*} \|\mathbf{\tilde{v}}^\epsilon\|_{L^\infty(0,T;H^1(\Omega,\R^3))\cap H^1(0,T;L^2(\Omega,\R^3))}\leq C\|\mathbf{b^\epsilon}\|_{L^2(0,T;L^2(\Omega,\R^3))} \end{align*} where $\mathbf{b^\epsilon}\in L^2(0,T;L^2(\Omega,\R^3))$ follows from analogous estimates, employing $\textbf{m},\textbf{n}\in L^\infty(0,T;H^2(\Omega,\R^3))\cap L^2(0,T;H^3(\Omega,\R^3))\cap H^1(0,T;H^1(\Omega,\R^3))$.
We note that the constant $C$ here is independent of $\epsilon$.\\ Next, letting $\mathcal{V}:={L^\infty(0,T;H^1(\Omega,\R^3))\cap H^1(0,T;L^2(\Omega,\R^3))}$, we have {\allowdisplaybreaks \begin{alignat*}{3} &\|\mathbf{b^\epsilon}\|_{L^2(0,T;L^2(\Omega,\R^3))}=\bigg\lVert \frac{\tilde{f}(\textbf{n})-\tilde{f}(\textbf{m})}{\epsilon}-\tilde{f}'_\textbf{m}(\textbf{m})\mathbf{v^\epsilon} - \frac{\tilde{g}(\hat{\alpha}+\epsilon\beta,\textbf{n})-\tilde{g}(\hat{\alpha},\textbf{m})}{\epsilon}\\ &\hspace{4cm}+ \tilde{g}'_\textbf{m}(\hat{\alpha},\textbf{m})\mathbf{v^\epsilon} + \tilde{g}'_{\hat{\alpha}}(\hat{\alpha},\textbf{m})\beta \bigg\rVert_{L^2(0,T;L^2(\Omega,\R^3))}\\ &\leq \bigg\lVert \int_0^1 \Big( \big(\tilde{f}'_\textbf{m}(\textbf{m}+\lambda\epsilon\mathbf{v^\epsilon})-\tilde{f}'_\textbf{m}(\textbf{m}) \big)\mathbf{v^\epsilon} - \big( \tilde{g}'_\textbf{m}(\hat{\alpha}+\lambda\epsilon\beta,\textbf{m}+\lambda\epsilon\mathbf{v^\epsilon}) - \tilde{g}'_\textbf{m}(\hat{\alpha},\textbf{m}) \big)\mathbf{v^\epsilon}\\ &\hspace{2.5cm} - \big( \tilde{g}'_{\hat{\alpha}}(\hat{\alpha}+\lambda\epsilon\beta,\textbf{m}+\lambda\epsilon\mathbf{v^\epsilon}) - \tilde{g}'_{\hat{\alpha}}(\hat{\alpha},\textbf{m}) \big)\beta \Big)\,d\lambda \bigg\rVert_{L^2(0,T;L^2(\Omega,\R^3))}\\ &\leq 2\sup_{\substack{\lambda\in[0,1]\\ \epsilon\in[0,\bar{\epsilon}]}}\Big( \|\tilde{f}'_\textbf{m}(\textbf{m}+\lambda\epsilon\mathbf{v^\epsilon})\|_{\mathcal{V}\rightarrow L^2(0,T;L^2(\Omega,\R^3))}\|\mathbf{v^\epsilon}\|_{\mathcal{V}}\\ &\hspace{2.5cm} + \|\tilde{g}'_\textbf{m}(\hat{\alpha}+\lambda\epsilon\beta,\textbf{m}+\lambda\epsilon\mathbf{v^\epsilon})\|_{\mathcal{V}\rightarrow L^2(0,T;L^2(\Omega,\R^3))}\|\mathbf{v^\epsilon}\|_{\mathcal{V}} \\ &\hspace{2.5cm} + \|\tilde{g}'_{\hat{\alpha}}(\hat{\alpha}+\lambda\epsilon\beta,\textbf{m}+\lambda\epsilon\mathbf{v^\epsilon})\|_{\mathbb{R}^2\rightarrow L^2(0,T;L^2(\Omega,\R^3))}|\beta| \Big).
\end{alignat*} } In order to prove uniform boundedness of the derivatives of $\tilde{f}, \tilde{g}$ w.r.t.\ $\lambda, \epsilon$ in the above estimate, we again proceed in a similar manner as in Section \ref{sec-red-adjoint-solvability}, since the space for $\pp^z$ in Section \ref{sec-red-adjoint-solvability} (cf. \eqref{red-GHlem-h}) coincides with $\mathcal{V}$ here and by the fact that \begin{align} \max\{\|\textbf{m}\|,\|\textbf{n}\|\} &\leq \max\Big\{\frac{1}{\hat{\alpha}_1},\frac{1}{\hat{\alpha}_1+\epsilon\beta_1}\Big\} C\left(\|\textbf{m}_0\|_{H^2(\Omega,\R^3)},\|\textbf{h}\|_{L^2(0,T;H^2(\Omega,\R^3))}\right) \nonumber\\ &\leq\frac{C}{{\hat{\alpha}^0_1}-\rho} \end{align} for $\textbf{m},\textbf{n}\in L^\infty(0,T;H^2(\Omega,\R^3))\cap L^2(0,T;H^3(\Omega,\R^3))\cap H^1(0,T;H^1(\Omega,\R^3))$. \\ If $\partial_\nu\textbf{h}=0$ on $\partial\Omega$, we just need the $\|\cdot\|_{L^2(0,T;H^1(\Omega,\R^3))}$-norm for $\textbf{h}$ as claimed in \eqref{red-GHlem-h}. This estimate holds for any $\epsilon\in[0,\bar{\epsilon}]$, and the constant $C$ is independent of $\epsilon$.\\ To accomplish uniform boundedness of $\|\mathbf{b^\epsilon}\|_{L^2(0,T;L^2(\Omega,\R^3))}$, we need to show that $\|\mathbf{v^\epsilon}\|_{\mathcal{V}}$ is also uniformly bounded w.r.t.\ $\epsilon$.
It is seen from \begin{alignat*}{4} &\tilde{g}(\hat{\alpha}+\epsilon\beta,\textbf{n}) &&-\Delta\textbf{n} &&=\tilde{f}(\textbf{n}),\\ &\tilde{g}(\hat{\alpha},\textbf{m}) &&-\Delta\textbf{m} &&=\tilde{f}(\textbf{m}) \end{alignat*} that $\mathbf{v^\epsilon}$ solves \begin{alignat}{3} &\int_0^1 \tilde{g}'_\textbf{m}(\hat{\alpha}+\lambda\epsilon\beta,\textbf{m}+\lambda\epsilon\mathbf{v^\epsilon})\mathbf{v^\epsilon} + \tilde{g}'_{\hat{\alpha}}(\hat{\alpha}+\lambda\epsilon\beta,\textbf{m}+\lambda\epsilon\mathbf{v^\epsilon})\beta \,d\lambda && - \Delta\mathbf{v^\epsilon} \nonumber\\ &\qquad= \int_0^1 \tilde{f}'_\textbf{m}(\textbf{m}+\lambda\epsilon\mathbf{v^\epsilon})\mathbf{v^\epsilon}\, d\lambda \qquad&&\text{in } (0,T)\times\Omega \label{red-ve-1}\\ & \partial_\nu\mathbf{v^\epsilon}=0 \qquad &&\text {on } [0,T]\times\partial\Omega \label{red-ve-2}\\ & \mathbf{v^\epsilon}(0)=0 && \text{in } \Omega\label{red-ve-3}. \end{alignat} Noting that $\textbf{M}:=\textbf{m}+\lambda\epsilon\mathbf{v^\epsilon}=\lambda\textbf{n}+(1-\lambda)\textbf{m}$ has $\|\textbf{M}\|\leq \frac{C}{{\hat{\alpha}^0_1}-\rho}$ for all $\lambda\in[0,1]$ with $C$ being independent of $\epsilon$, and $\tilde{g}$ is first order in $\hat{\alpha}$, we can rewrite \eqref{red-ve-1} into the linear equation \begin{align} \tilde{G}(\hat{\alpha}+\lambda\epsilon\beta,\textbf{M})\mathbf{v^\epsilon} -\Delta\mathbf{v^\epsilon} + \tilde{F}(\textbf{M})\mathbf{v^\epsilon}=\tilde{B}(\textbf{M})\beta. 
\end{align} Following the lines of the proof in Section \ref{sec-red-adjoint-solvability}, boundedness of the terms $-\Delta, \tilde{F}(\textbf{M})$, $\tilde{B}(\textbf{M})$ is straightforward, while the main term in $\tilde{G}(\hat{\alpha}+\lambda\epsilon\beta,\textbf{M})$ producing the single square norm of $\mathbf{v_t^\epsilon}$, after being tested with $\mathbf{v_t^\epsilon}$, is \begin{align*} \int_0^1 (\hat{\alpha}_1+\lambda\epsilon\beta_1)\int_\Omega \mathbf{v_t^\epsilon}(t)\cdot\mathbf{v_t^\epsilon}(t)\,dx\,d\lambda &= \|\mathbf{v_t^\epsilon}(t)\|^2_{L^2(\Omega,\R^3)}\left(\hat{\alpha}_1+\frac{\epsilon\beta_1}{2}\right)\\ &\geq \|\mathbf{v_t^\epsilon}(t)\|^2_{L^2(\Omega,\R^3)}(\hat{\alpha}^0_1-\rho). \end{align*} Consequently, one gets, for all $\epsilon\in[0,\bar{\epsilon}]$, \begin{align} \label{red-ve-bound} \|\mathbf{v^\epsilon}\|_\mathcal{V} \leq C|\beta|\|\tilde{B}(\textbf{M})\|_{\mathbb{R}^2\rightarrow L^2(0,T;L^2(\Omega,\R^3))}\leq |\beta|C \end{align} with $C$ depending only on $\textbf{m}_0, \textbf{h}, \hat{\alpha}^0,\rho$.\\ Since $\mathbf{b^\epsilon}\rightarrow 0$ pointwise and $\|\mathbf{b^\epsilon}\|_{L^2(0,T;L^2(\Omega,\R^3))}\leq C$ for all $\epsilon\in[0,\bar{\epsilon}]$, applying Lebesgue's Dominated Convergence Theorem yields convergence of $\|\mathbf{b^\epsilon}\|_{L^2(0,T;L^2(\Omega,\R^3))}$, thus of $\|\mathbf{\tilde{v}}^\epsilon\|_\mathcal{V}$, to zero. Fr\'echet differentiability of the forward operator in the reduced setting is therefore proved. \section{Derivatives and adjoints} Motivated by their need in iterative reconstruction methods, we now derive and rigorously justify derivatives of the forward operators as well as their adjoints, both in an all-at-once and in a reduced setting. To simplify notation in the following analysis sections, the subscript ``ext'' of the external magnetic field will be omitted. Moreover, to avoid confusion with the dual pairing, we will use the dot notation for the Euclidean inner product.
\input{diffadj_aao} \input{diffadj_red} \section{Introduction} Magnetic particle imaging (MPI) is a dynamic imaging modality for medical applications that was first introduced in 2005 by B.~Gleich and J.~Weizenecker \cite{bgjw05}. Magnetic nanoparticles, consisting of a magnetic iron oxide core and a nonmagnetic coating, are inserted into the body to serve as a tracer. The key idea is to measure the nonlinear response of the nanoparticles to a temporally changing external magnetic field in order to draw conclusions on the spatial concentration of the particles inside the body. Since the particles are distributed along the bloodstream of a patient, the particle concentration yields information on the blood flow and is thus suitable for cardiovascular diagnosis or cancer detection \cite{tktb12, tkngmm17}. An overview of MPI basics is given in \cite{tktb12}. Since MPI requires the nanoparticles as a tracer, it mostly yields quantitative information on their distribution, but does not image the morphology of the body, such as the tissue density. The latter can be visualized using computerized tomography (CT) \cite{natterer86} or magnetic resonance imaging (MRI) \cite{HINSHAW;LENT:83}. These do not require a tracer, but involve ionizing radiation in the case of CT or, in the case of MRI, a strong magnetic field and a potentially long acquisition time. Other tracer-based methods are, e.g., single photon emission computerized tomography (SPECT) and positron emission tomography (PET) \cite{natterer_buch_2, SHEPP;VARDI:82}, which both involve radioactive radiation. The magnetic nanoparticles that are used in MPI, on the other hand, are not harmful to organisms. For a more detailed comparison of these methods, we refer the reader to \cite{tktb12}.
At this point there have been promising preclinical studies on the performance of MPI, showing that this imaging modality has great potential for medical diagnosis since it is highly sensitive with a good spatial and temporal resolution, and the data acquisition is very fast \cite{tkngmm17}. However, particularly in view of an application to imaging the human body, there remain some obstacles. One obstacle is the time-consuming calibration process. In this work, we assume that the concentration of the nanoparticles inside the body remains static throughout both the calibration process and the actual image acquisition. Mathematically, the forward problem of MPI can then essentially be formulated as an integral equation of the first kind for the particle concentration (or distribution) $c$, \begin{displaymath} u(t) = \int_{\Omega} c(x) s(x,t) \, \mathrm{d} x, \end{displaymath} where the integration kernel $s$ is called the \emph{system function}. The system function encodes some geometrical aspects of the MPI scanner, such as the coil sensitivities of the receive coils in which the particle signal $u$ is measured, but mostly it is determined by the particle behavior in response to the applied external magnetic field. The actual inverse problem in MPI is to reconstruct the concentration $c$ under the knowledge of the system function $s$ from the measured data $u$. To this end, the system function has to be determined prior to the scanning procedure. This is usually done by evaluating a series of full scans of the field of view, where in each scan a delta sample is placed in a different pixel until the entire field of view is covered \cite{tktb12}. Another option is a model-based approach for $s$ (see for example \cite{tkpm17}), which basically involves a model for the particle magnetization. Since this model often depends on unknown parameters, the model-based determination of the system function itself can again be formulated as an inverse problem.
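Once discretized, the first-kind integral equation above reduces to a matrix-vector product, which also makes visible why the imaging step is ill-conditioned. The following minimal sketch uses made-up grid sizes, quadrature weights and system-function samples (all placeholders, not scanner quantities):

```python
import numpy as np

# Hypothetical discretization of u(t) = ∫_Ω c(x) s(x,t) dx:
# pixels x_1..x_N with quadrature weights w_j, sampling times t_1..t_M.
rng = np.random.default_rng(0)
N, M = 16, 32                      # number of pixels and time samples (toy sizes)
w = np.full(N, 1.0 / N)            # uniform quadrature weights over Ω
S = rng.standard_normal((M, N))    # stand-in for calibrated samples s(x_j, t_i)
c = rng.random(N)                  # particle concentration per pixel

u = S @ (w * c)                    # u[i] ≈ ∫ c(x) s(x, t_i) dx
```

Reconstructing $c$ from $u$ then amounts to solving this linear system, which in practice requires regularization.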
This article now addresses this latter type of inverse problem, i.e., the identification of the system function for a known set of concentrations from calibration measurements. More precisely, our goal is to find an adequate model for the time derivative of the particle magnetization $\mathbf{m}$, which is proportional to $s$. So far, in model-based approaches for the system function, the particle magnetization $\mathbf{m}$ is not modeled directly. Instead, one describes the mean magnetization $\overline{\mathbf{m}}$ of the particles via the Langevin function, i.e., the response of the particles is modeled on the mesoscopic scale \cite{tktb12, tk18}. This approach is based on the assumption that the particles are in thermodynamic equilibrium and respond directly to the external field. For this reason, the mean magnetization is assumed to be a function of the external field, such that it is always aligned with the external field, and its magnitude is calculated via the Langevin function. This model, however, neglects some properties of the particle behavior. In particular, the magnetic moments of the particles do not align instantly with the external field \cite{cgc12}. In this work, we thus address an approach from micromagnetics, which models the time-dependent behavior of the magnetic material inside the particles' cores on the micro scale and allows one to take into account various additional physical properties such as particle-particle interaction. For an overview, see for example \cite{mkap06}.
Since the core material is iron oxide, which is a ferrimagnetic material that shows a behavior similar to that of ferromagnets \cite{cg11imm, demt2}, we use the \emph{Landau-Lifshitz-Gilbert (LLG) equation} \begin{displaymath} \frac{\partial}{\partial t} \mathbf{m} = - \widetilde{\alpha}_1 \mathbf{m} \times \left( \mathbf{m} \times \mathbf{H}_{\mathrm{eff}} \right) + \widetilde{\alpha}_2 \mathbf{m} \times \mathbf{H}_{\mathrm{eff}}, \end{displaymath} see also \cite{tlg04, llel92}, for the evolution of the magnetization $\mathbf{m}$ of the core material. The field $\mathbf{H}_{\mathrm{eff}}$ incorporates the external magnetic field together with other relevant physical effects. According to the LLG equation, the magnetization $\mathbf{m}$ performs a damped precession around the field vector of the external field, which leads to a relaxation effect. The LLG equation has been widely applied to describe the time evolution in micromagnetics \cite{bsffgppr14, bgmh93, dkps15}. In contrast to the imaging problem of MPI, the inverse problem of determining the magnetization $\mathbf{m}$ along with the constants $\widetilde{\alpha}_1, \widetilde{\alpha}_2$ turns out to be a nonlinear inverse problem, which is typical for parameter identification problems for partial differential equations, for example, electrical impedance tomography \cite{borcea02}, terahertz tomography \cite{awts17}, ultrasound imaging \cite{ck13} and other applications from imaging and nondestructive testing \cite{akar16}.\\ We use the \emph{all-at-once} as well as the \emph{reduced} formulation of this inverse problem in a Hilbert space setting, see also \cite{bk16, bk17, ttnn19}, analyze both cases, including well-definedness of the forward mapping, continuity, and Fr\'echet differentiability, and calculate the adjoint mappings for the Fr\'echet derivatives.
Consequently, iterative methods such as the Landweber method \cite{HaNeSc95, Landweber}, also in combination with Kaczmarz' method \cite{HKLS07,HLS07}, Newton methods (see, e.g., \cite{rieder99}), or subspace techniques \cite{ws16} can be applied for the numerical solution. An overview of suitable regularization techniques is given in \cite{kns08, kss09}. We begin with a detailed introduction to the modeling in MPI. In particular, we describe the full forward problem and present the initial boundary value problem for the LLG equation that we use to describe the magnetization evolution. In Section \ref{sec_inverse}, we formulate the inverse problem of calibration both in the all-at-once and in the reduced setting to obtain the final operator equation that is analyzed in the subsequent sections. First, in Section \ref{sec:aao}, we present an analysis for the all-at-once setting. The inverse problem in the reduced setting is then addressed in Section \ref{sec:red}. Finally, we conclude our findings in Section \ref{sec_concl} and give an outlook on further research. \vspace*{2ex} Throughout the article, we make use of the following notation: The differential operators $-\Delta$ and $\nabla$ are applied componentwise to a vector field. In particular, this means that by $\nabla\mathbf{u}$ we denote the transpose of the Jacobian of $\mathbf{u}$. Moreover, $\langle\mathbf{a},\mathbf{b}\rangle$ or $\mathbf{a}\cdot\mathbf{b}$ denotes the Euclidean inner product between two vectors, and $A\colon B$ the Frobenius inner product between two matrices. \section{An inverse problem for the calibration process in MPI} \label{sec_inverse} Apart from the obvious inverse problem of determining the concentration $c$ of magnetic particles inside a body from the measurements $v_{\ell}$, $\ell = 1,...,L$, MPI gives rise to a range of further parameter identification problems of an entirely different nature.
In this work, we are not addressing the imaging process itself, but consider an inverse problem that is essential for the calibration process. Here, calibration refers to determining the system function $s_{\ell}$, which serves as an integral kernel in the imaging process. The system function includes all system parameters of the tomograph and encodes the physical behavior of the magnetic material in the cores of the magnetic particles inside a temporally changing external magnetic field. Experiments show that a simple model for the magnetization, based on the assumption that the particles are in their equilibrium state at all times, is insufficient for the imaging, see, e.g., \cite{tkpm17}. A model-based approach with an enhanced physical model has so far not been pursued due to the complexity of the involved physics, and the system function is usually measured in a time-consuming calibration process \cite{tktb12, tkngmm17}. In this work, we address the inverse problem of calibrating an MPI system for a given set of standard calibration concentrations $c_k$, $k=1,...,K$, for which we measure the corresponding signals and obtain the data ${v}_{k\ell}(t)$, $k=1,...,K$, $\ell=1,...,L$. Here we assume that the coil sensitivity $\mathbf{p}^{\mathrm{R}}_{\ell}$ as well as the transfer function $\widetilde{a}_{\ell}$ are known. This, together with the fact that $\mathbf{m}$ is supposed to satisfy the LLG equation \eqref{llg3}--\eqref{llg3_abs}, is used to determine the system function \eqref{systemfunction}. Actually, since $\mathbf{p}^{\mathrm{R}}$ is known, the inverse problem under consideration here consists of reconstructing $\mathbf{m}$ from \eqref{forwardoperator_kl}, \eqref{llg3}--\eqref{llg3_abs}. As the initial boundary value problem \eqref{llg3}--\eqref{llg3_abs} has a unique solution $\mathbf{m}$ for given $\hat{\alpha}_1$, $\hat{\alpha}_2$, it actually suffices to determine these two parameters.
This is the point of view that we take when using a classical reduced formulation of the calibration problem \begin{equation}\label{ip_red} F(\hat{\alpha})=y \end{equation} with the data $y_{k\ell}= v_{k\ell}$ and the forward operator \begin{align} \label{red-F} F:\mathcal{D}(F)(\subseteq\mathcal{X})\rightarrow\mathcal{Y}, \qquad \hat{\alpha}=(\hat{\alpha}_1,\hat{\alpha}_2) \mapsto \mathcal{K} \frac{\partial}{\partial t} S(\hat{\alpha}) \end{align} containing the parameter-to-state map \begin{align} \label{defS} S:\mathcal{X}\rightarrow\tilde{\mathcal{U}} \end{align} that maps the parameters $\hat{\alpha}$ into the solution $\textbf{m}:=S(\hat{\alpha})$ of the LLG initial boundary value problem \eqref{llg3}--\eqref{llg3_abs}. The linear operator $\mathcal{K}$ is the integral operator defined by the kernels $\mathbf{K}_{k\ell}$, $k=1,...,K$, $\ell=1,...,L$, i.e., \begin{align} \label{defK} \mathcal{K}_{k\ell} \mathbf{u} = \int_0^T \int_\Omega \mathbf{K}_{k\ell}(t,\tau,\mathbf{x})\cdot \mathbf{u}(\mathbf{x},\tau)\, \mathrm{d} \tau \, \mathrm{d} \mathbf{x}\,. \end{align} Here, the preimage and image spaces are defined by \begin{align}\label{defXY} \mathcal{X} = \mathbb{R}^2, \qquad \mathcal{Y}=L^2(0,T)^{KL} \end{align} and the state space $\tilde{\mathcal{U}}$ will be chosen appropriately below, see Section \ref{sec:red}. 
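For later numerical use, the action of $\mathcal{K}_{k\ell}$ in \eqref{defK} can be approximated by a quadrature in $\tau$ and $\mathbf{x}$, i.e., by a tensor contraction. The following sketch assumes uniform quadrature weights and synthetic kernel samples (illustrative placeholders only, not the kernels $\mathbf{K}_{k\ell}$ of an actual scanner):

```python
import numpy as np

# Toy discretization of (K_{kl} u)(t) = ∫_0^T ∫_Ω K_{kl}(t,τ,x) · u(x,τ) dτ dx.
# Grids: output times t_i, integration times τ_j, pixels x_p, 3 vector components.
rng = np.random.default_rng(1)
nt, ntau, npix = 8, 8, 10
dtau, dx = 1.0 / ntau, 1.0 / npix              # uniform quadrature weights (assumption)
K = rng.standard_normal((nt, ntau, npix, 3))   # kernel samples K_{kl}(t_i, τ_j, x_p)
u = rng.standard_normal((npix, ntau, 3))       # vector field u(x_p, τ_j)

# Contract over τ, x and the vector component; the result is a function of t.
Ku = np.einsum('ijpc,pjc->i', K, u) * dtau * dx
```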
Alternatively, we also consider the all-at-once formulation of the inverse problem as a simultaneous system \begin{equation}\label{ip_aao} \mathbb{F}(\mathbf{m},\hat{\alpha})=\mathbf{y}:=(0,y)^T \end{equation} for the state $\mathbf{m}$ and the parameters $\hat{\alpha}$, with the forward operator \[ \mathbb{F}(\mathbf{m},\hat{\alpha})= \left(\begin{array}{l} \mathbb{F}_0(\mathbf{m},\hat{\alpha})\\ \Bigl(\mathbb{F}_{k\ell}(\mathbf{m},\hat{\alpha})\Bigr)_{k=1,\ldots,K\,, \ \ell=1,\ldots,L} \end{array}\right), \] where \[ \mathbb{F}_0(\mathbf{m},\hat{\alpha}_1,\hat{\alpha}_2):= \hat{\alpha}_1 \mathbf{m}_t-\Delta \mathbf{m} - \hat{\alpha}_2 \mathbf{m}\times \mathbf{m}_t -|\nabla \mathbf{m}|^2\mathbf{m} - \fh_{\mathrm{ext}} +(\mathbf{m}\cdot\fh_{\mathrm{ext}})\mathbf{m} \] and \[ \mathbb{F}_{k\ell}(\mathbf{m},\hat{\alpha}_1,\hat{\alpha}_2) = \mathcal{K}_{k\ell} \mathbf{m}_t \] with $\mathcal{K}_{k\ell}$ as in \eqref{defK}. Here $\mathbb{F}$ maps between $\mathcal{U}\times\mathcal{X}$ and $\mathcal{W}\times\mathcal{Y}$ with $\mathcal{X}$, $\mathcal{Y}$ as in \eqref{defXY}, and $\mathcal{U}$, $\mathcal{W}$ appropriately chosen function spaces, see Section \ref{sec:aao}. \medskip Iterative methods for solving inverse problems usually require the linearization $F'(\hat{\alpha})$ of the forward operator $F$ and its adjoint $F'(\hat{\alpha})^*$ (and likewise for $\mathbb{F}$) in the given Hilbert space setting. For example, consider the Landweber iteration (cf., e.g., \cite{Landweber,HaNeSc95}), defined by a gradient descent method for the least squares functional $\|F(\hat{\alpha})-y\|_\mathcal{Y}^2$ as \[ \hat{\alpha}_{n+1}=\hat{\alpha}_n-\mu_n F'(\hat{\alpha}_n)^*(F(\hat{\alpha}_n)-y) \] with an appropriately chosen step size $\mu_n$.
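To make the update rule concrete, the following sketch runs the Landweber iteration on a toy linear forward operator $F(\hat{\alpha})=A\hat{\alpha}$; this linearity is an assumption for illustration only, whereas in our setting $F$ is nonlinear and $A^T$ is replaced by the adjoint $F'(\hat{\alpha}_n)^*$:

```python
import numpy as np

# Landweber iteration α_{n+1} = α_n − μ F'(α_n)^* (F(α_n) − y) on a
# toy linear problem F(α) = A α (illustrative assumption).
A = np.array([[2.0, 0.5],
              [0.5, 1.0],
              [0.0, 1.0]])
alpha_true = np.array([1.0, 0.3])
y = A @ alpha_true                      # synthetic, noise-free data

alpha = np.zeros(2)
mu = 1.0 / np.linalg.norm(A, 2) ** 2    # constant step size ≤ 1/||F'||^2
for _ in range(5000):
    alpha = alpha - mu * A.T @ (A @ alpha - y)
```

For noisy data, the iteration would be stopped early according to a discrepancy principle rather than run to convergence.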
Alternatively, one can split the forward operator into a system by considering it row-wise $F_k(\hat{\alpha})=y_k$ with $F_k=(F_{k\ell})_{\ell=1,\ldots,L}$ or column-wise $F_\ell(\hat{\alpha})=y_\ell$ with $F_\ell=(F_{k\ell})_{k=1,\ldots,K}$, or even element-wise $F_{k\ell}(\hat{\alpha})=y_{k\ell}$, and cyclically iterate over these equations with gradient descent steps in a Kaczmarz version of the Landweber iteration (cf., e.g., \cite{HLS07,HKLS07}). The same can be done with the respective all-at-once versions \cite{bk16}. These methods extend to Banach spaces as well by using duality mappings (cf., e.g., \cite{SKHK12}); however, for the sake of simplicity of exposition and implementation, we will concentrate on a Hilbert space setting here; in particular, all adjoints will be Hilbert space adjoints. \section{The underlying physical model for MPI} The basic physical principle that is exploited in MPI is Faraday's law of induction, which states that whenever the magnetic flux density $\mathbf{B}$ through a coil changes in time, this change induces an electric current in the coil. This current, or rather the respective voltage, can be measured. In MPI, the magnetic flux density $\mathbf{B}$ consists of the externally applied magnetic field $\fH_{\mathrm{ext}}$ and the particle magnetization $\mathbf{M}^{\mathrm{P}}$, i.e., \begin{displaymath} \mathbf{B} = \mu_0 \left( \fH_{\mathrm{ext}} + \mathbf{M}^{\mathrm{P}} \right), \end{displaymath} where $\mu_0$ is the magnetic permeability in vacuum. The particle magnetization $\mathbf{M}^{\mathrm{P}}(x,t)$ in $x \in \Omega \subseteq \mathbb{R}^3$ depends linearly on the concentration $c(x)$ of magnetic material in $x \in \Omega$, which corresponds to the particle concentration, and on the magnetization $\mathbf{m}(x,t)$ of the magnetic material.
We thus have \begin{displaymath} \mathbf{M}^{\mathrm{P}}(x,t) = c(x) \mathbf{m}(x,t), \end{displaymath} where $\lvert \mathbf{m} \rvert = m_{\mathrm{S}} > 0$, i.e., the vector $\mathbf{m}$ has the fixed length $m_{\mathrm{S}}$ that depends on the magnetic core material inside the particles. At this point it is important to remark that we use a slightly different approach to separate the particle concentration, which carries the spatial information on the particles, from the magnetization behavior of the magnetic material and the measuring process. In our approach, the concentration is a dimensionless quantity, whereas in most models, it is defined as the number of particles per unit volume (see, e.g., \cite{tktb12}). \vspace*{2ex} A detailed derivation of the forward model in MPI, based on the equilibrium model for the magnetization, can be found in \cite{tktb12}. The steps that are related to the measuring process can be adapted to our approach. For the reader's convenience, we give a short overview and introduce the parameters related to the scanner setup. \\ If the receive coil is a simple conductor loop enclosing a surface $S$, the induced voltage can be expressed by \begin{equation} \label{faraday} u(t) = -\frac{\mathrm{d}}{\mathrm{d}t} \int_{S} \mathbf{B}(x,t) \cdot \, \mathrm{d} \mathbf{A} = -\mu_0 \frac{\mathrm{d}}{\mathrm{d}t} \int_{S} \left( \fH_{\mathrm{ext}} + \mathbf{M}^{\mathrm{P}} \right) \cdot \, \mathrm{d} \mathbf{A}.
\end{equation} The signal that is recorded in the receive coil thus originates from temporal changes of the external magnetic field $\fH_{\mathrm{ext}}$ as well as of the particle magnetization $\mathbf{M}^{\mathrm{P}}$, \begin{align} u(t) &= -\mu_0 \left( \int_{\Omega} \mathbf{p}^{\mathrm{R}}(x) \cdot \frac{\partial}{\partial t} \fH_{\mathrm{ext}}(x,t) \, \mathrm{d} x + \int_{\Omega} \mathbf{p}^{\mathrm{R}}(x) \cdot \frac{\partial}{\partial t} \mathbf{M}^{\mathrm{P}}(x,t) \, \mathrm{d} x \right) \\ &=: u^{\mathrm{E}}(t) + u^{\mathrm{P}}(t). \end{align} For the signal that is caused by the change in the particle magnetization we obtain \begin{displaymath} \begin{split} u^{\mathrm{P}}(t) &= -\mu_0 \frac{\mathrm{d}}{\mathrm{d}t} \int_{\Omega} \mathbf{p}^{\mathrm{R}}(x) \cdot \mathbf{M}^{\mathrm{P}}(x,t) \, \mathrm{d} x \\ &= -\mu_0 \int_{\Omega} \mathbf{p}^{\mathrm{R}}(x) \cdot \frac{\partial}{\partial t} \mathbf{M}^{\mathrm{P}}(x,t) \, \mathrm{d} x \\ &= -\mu_0 \int_{\Omega} c(x) \mathbf{p}^{\mathrm{R}}(x) \cdot \frac{\partial}{\partial t} \mathbf{m}(x,t) \, \mathrm{d} x \\ &= -\mu_0 \int_{\Omega} c(x) s(x,t) \, \mathrm{d} x. \end{split} \end{displaymath} The function \begin{equation}\label{systemfunction} s(x,t) := \mathbf{p}^{\mathrm{R}}(x) \cdot \frac{\partial}{\partial t} \mathbf{m}(x,t) = \left\langle \mathbf{p}^{\mathrm{R}}(x), \frac{\partial}{\partial t} \mathbf{m}(x,t) \right\rangle_{\mathbb{R}^3} \end{equation} is called the \emph{system function} and can be interpreted as a potential to induce a signal in the receive coil. The function $\mathbf{p}^{\mathrm{R}}$ is called the coil sensitivity and is determined by the architecture of the respective receive coil. For our purposes, we assume that $\mathbf{p}^{\mathrm{R}}$ is known. The measured signal that originates from the magnetic particles can thus essentially be calculated via an integral equation of the first kind with a time-dependent integration kernel $s$.
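Numerically, $u^{\mathrm{P}}$ can be approximated by combining a spatial quadrature with a finite-difference time derivative of $\mathbf{m}$. The sketch below uses synthetic fields and grid sizes (placeholders, not measured data):

```python
import numpy as np

mu0 = 4e-7 * np.pi                         # vacuum permeability μ0 in SI units

# Toy evaluation of u^P(t) = -μ0 ∫_Ω c(x) p^R(x) · ∂_t m(x,t) dx on a grid.
rng = np.random.default_rng(2)
npix, nt = 12, 50
dt, dx = 1e-3, 1.0 / npix                  # time step and quadrature weight
c = rng.random(npix)                       # concentration c(x_p)
pR = rng.standard_normal((npix, 3))        # coil sensitivity p^R(x_p)
m = rng.standard_normal((nt, npix, 3))     # magnetization samples m(x_p, t_i)

m_t = np.gradient(m, dt, axis=0)           # finite-difference ∂_t m
s = np.einsum('pc,ipc->ip', pR, m_t)       # system function s(x_p, t_i)
uP = -mu0 * (s @ c) * dx                   # quadrature over Ω
```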
\vspace*{2ex} The particle magnetization, however, changes in time in response to changes of the external field. It is thus an important objective to encode the interplay of the external field and the particles in a sufficiently accurate physical model. The magnetization of the magnetic particles that are used in MPI can be considered on different scales. The following characterization from ferromagnetism has been taken from \cite{mkap06}: \\ On the \emph{atomic level}, one can describe the behavior of a magnetic material as a spin system and take into account stochastic effects that arise, for example, from Brownian motion. \\ On the \emph{microscopic scale}, continuum physics is applied to work with deterministic equations describing the magnetization of the magnetic material. \\ On the \emph{mesoscopic scale}, we can describe the magnetization behavior via a mean magnetization, which is an average particle magnetic moment. \\ Finally, on a \emph{macroscopic scale}, all aspects that arise from the microstructure are neglected and the magnetization is described by phenomenological constitutive laws. \vspace*{2ex} In this work, we intend to use a model from micromagnetism, allowing us to work with a deterministic equation to describe the magnetization of the magnetic material. The core material of the nanoparticles consists of iron oxide, usually magnetite, which is a ferrimagnetic material. The magnetization curve of ferrimagnetic materials is similar to the curve that is observed for ferromagnets, but with a lower saturation magnetization (see, e.g., \cite{cg11imm, demt2}). This approach has also been suggested in \cite{drjb14}.
The evolution of the magnetization in time is described by the \emph{Landau-Lifshitz-Gilbert (LLG) equation} \begin{equation} \mathbf{m}_t := \frac{\partial}{\partial t} \mathbf{m} = - \widetilde{\alpha}_1 \mathbf{m} \times \left( \mathbf{m} \times \fH_{\mathrm{eff}} \right) + \widetilde{\alpha}_2 \mathbf{m} \times \fH_{\mathrm{eff}}, \end{equation} see \cite{tlg04, mkap06} and the literature cited therein. The coefficients \begin{displaymath} \widetilde{\alpha}_1 := \frac{\gamma \alpha_{\mathrm{D}}}{m_{\mathrm{S}}(1+\alpha_{\mathrm{D}}^2)} > 0, \quad \widetilde{\alpha}_2 := \frac{\gamma}{(1+\alpha_{\mathrm{D}}^2)} > 0 \end{displaymath} are material parameters that contain the gyromagnetic constant $\gamma$, the saturation magnetization $m_{\mathrm{S}}$ of the core material and a damping parameter $\alpha_{\mathrm{D}}$. The vector field $\fH_{\mathrm{eff}}$ is called the \emph{effective magnetic field}. It is defined as the negative gradient $-D\mathcal{E}(\mathbf{m})$ of the \emph{Landau energy} $\mathcal{E}(\mathbf{m})$ of a ferromagnet, see, e.g., \cite{mkap06}. Taking into account only the interaction with the external magnetic field $\mathbf{H}$ and particle-particle interactions, this energy is given by \begin{displaymath} \mathcal{E}_{A}(\mathbf{m}) = A \int_{\Omega} \lvert \nabla \mathbf{m} \rvert^2 \, \mathrm{d} x - \mu_0 m_{\mathrm{S}} \int_{\Omega} \left\langle \mathbf{H}, \mathbf{m} \right\rangle_{\mathbb{R}^3} \, \mathrm{d} x, \end{displaymath} where $A \geq 0$ is a scalar parameter (the exchange stiffness constant \cite{tlg04}). We thus have \begin{equation} \fH_{\mathrm{eff}} = 2A \Delta \mathbf{m} + \mu_0 m_{\mathrm{S}}\fH_{\mathrm{ext}}.
\end{equation} Together with Neumann boundary conditions and a suitable initial condition, our model for the magnetization thus reads \begin{align} \mathbf{m}_t &= - \alpha_1 \mathbf{m} \times \left( \mathbf{m} \times (\Delta \mathbf{m} + \fh_{\mathrm{ext}}) \right) + \alpha_2 \mathbf{m} \times (\Delta \mathbf{m} + \fh_{\mathrm{ext}}) \ && \text{in} \ [0,T] \times \Omega, \label{llg1}\\ 0 &= \partial_{\nu} \mathbf{m} && \text{on} \ [0,T] \times \partial\Omega, \label{llg1_bc}\\ \mathbf{m}_0 &= \mathbf{m}(t=0), \ \lvert \mathbf{m}_0 \rvert = m_{\mathrm{S}} && \text{in} \ \Omega, \label{llg1_abs} \end{align} where $\fh_{\mathrm{ext}} = \frac{\mu_0 m_{\mathrm{S}}}{2A} \fH_{\mathrm{ext}}$ and $\alpha_1 := 2A\widetilde{\alpha}_1 > 0$, $\alpha_2 := 2A\widetilde{\alpha}_2 > 0$. The initial value $\mathbf{m}_0 = \mathbf{m}(t=0)$ corresponds to the magnetization of the magnetic material at the beginning of the measurement. To obtain a reasonable value for $\mathbf{m}_0$, we take into account that the external magnetic field is switched on before the measuring process starts, i.e., $\mathbf{m}_0$ is the state of the magnetization that is acquired when the external field is static. This allows us to precompute $\mathbf{m}_0$ as the solution of the stationary problem \begin{equation} \label{llg_initial_value} \alpha_1 \mathbf{m}_0 \times \left( \mathbf{m}_0 \times (\Delta \mathbf{m}_0 + \fh_{\mathrm{ext}}(t=0)) \right) = \alpha_2 \mathbf{m}_0 \times (\Delta \mathbf{m}_0 + \fh_{\mathrm{ext}}(t=0)) \end{equation} with Neumann boundary conditions.
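For a single, spatially constant moment the exchange term $\Delta\mathbf{m}$ drops out of \eqref{llg1}, and the dynamics can be integrated directly. The following macrospin sketch uses explicit Euler steps with renormalization; the coefficients, field and step size are illustrative values, not physical parameters.

```python
import numpy as np

def llg_step(m, h, a1, a2, dt):
    """One explicit Euler step of m_t = -a1 m x (m x h) + a2 m x h,
    followed by renormalization (|m| is conserved by the exact flow)."""
    prec = np.cross(m, h)       # precession term
    damp = np.cross(m, prec)    # damping (double cross product) term
    m_new = m + dt * (-a1 * damp + a2 * prec)
    return m_new * (np.linalg.norm(m) / np.linalg.norm(m_new))

def relax(m0, h, a1=0.5, a2=1.0, dt=1e-2, steps=5000):
    """Integrate until the moment has (numerically) settled."""
    m = np.asarray(m0, dtype=float)
    for _ in range(steps):
        m = llg_step(m, h, a1, a2, dt)
    return m
```

For a constant field the moment precesses about the field direction while the damping term drives it towards the stationary state $\mathbf{m} \parallel \fh_{\mathrm{ext}}$, which is the situation used above to precompute $\mathbf{m}_0$.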
\begin{remark} In the stationary case, damping does not play a role, and if we additionally neglect particle-particle interactions, we obtain the approximative equation \begin{displaymath} \hat{\mathbf{m}}_0 \times \left( \hat{\mathbf{m}}_0 \times \fh_{\mathrm{ext}}(t=0) \right) = 0 \end{displaymath} with an approximation $\hat{\mathbf{m}}_0$ to $\mathbf{m}_0$, since $\alpha_2 \approx 0$ and $\fH_{\mathrm{eff}} \approx \mu_0 m_{\mathrm{S}}\fH_{\mathrm{ext}}$. The above equation yields $\hat{\mathbf{m}}_0 \parallel \fh_{\mathrm{ext}}(t=0)$. Together with $\lvert \hat{\mathbf{m}}_0 \rvert = m_{\mathrm{S}}$, this yields \begin{displaymath} \hat{\mathbf{m}}_0 = m_{\mathrm{S}} \frac{\fh_{\mathrm{ext}}(t=0)}{\lvert \fh_{\mathrm{ext}}(t=0) \rvert}. \end{displaymath} This represents a good approximation to $\mathbf{m}_0$ wherever $\fh_{\mathrm{ext}}$ is strong at time $t=0$: \begin{displaymath} \mathbf{m}_0 \approx \hat{\mathbf{m}}_0 = m_{\mathrm{S}} \frac{\fh_{\mathrm{ext}}(t=0)}{\lvert \fh_{\mathrm{ext}}(t=0) \rvert}. \end{displaymath} \end{remark} \subsection{The observation operator in MPI} Faraday's law states that a temporally changing magnetic field induces an electric current in a conductor loop or coil, which yields the relation \eqref{faraday}. As a consequence, not only the change in the particle magnetization contributes to the induced current, but also the dynamic external magnetic field $\fH_{\mathrm{ext}}$. Since the particle signal is needed for the determination of the particle magnetization, it has to be separated from the excitation signal due to the external field. This is realized by processing the signal in a suitable way using filters. \\ MPI scanners usually use multiple receive coils to measure the induced particle signal at different positions in the scanner.
We assume that we have $L \in \mathbb{N}$ receive coils with coil sensitivities $\mathbf{p}^{\mathrm{R}}_\ell$, $\ell=1,...,L$, and the measured signal is given by \begin{equation} \widetilde{v}_\ell(t) = -\mu_0 \int_{0}^T \widetilde{a}_\ell(t - \tau ) \int_{\Omega} c(x) \mathbf{p}^{\mathrm{R}}_\ell(x) \cdot \frac{\partial}{\partial \tau} \mathbf{m}(x,\tau) \, \mathrm{d} x \, \mathrm{d} \tau, \end{equation} where $T$ is the repetition time of the acquisition process, i.e., the time that is needed for one full scan of the object, and $a_\ell \, : \, [0,T] \rightarrow \mathbb{R}$ is the transfer function with periodic continuation $\widetilde{a}_\ell \, : \, \mathbb{R} \rightarrow \mathbb{R}$. The transfer function serves as a filter to separate particle and excitation signal, i.e., it is chosen such that \begin{displaymath} \widetilde{v}^{\mathrm{E}}_\ell(t) := \big(\widetilde{a}_\ell * u^{\mathrm{E}}_\ell\big)(t) = -\mu_0\int_0^T \widetilde{a}_\ell(t-\tau)\int_{\Omega} \mathbf{p}^{\mathrm{R}}_\ell(x) \cdot \frac{\partial}{\partial \tau} \fH_{\mathrm{ext}}(x,\tau) \, \mathrm{d} x \, \mathrm{d} \tau \approx 0. \end{displaymath} In practice, $\widetilde{a}_\ell$ is often a band-pass filter. For a more detailed discussion of the transfer function, see also \cite{tktb12}. In this work, we assume that the transfer function is known analytically. We define \begin{displaymath} \mathbf{K}_\ell(t,\tau,x) := -\mu_0\widetilde{a}_\ell(t - \tau ) c(x) \mathbf{p}^{\mathrm{R}}_\ell(x), \end{displaymath} such that the measured particle signals are given by \begin{equation} \label{forwardoperator} {v}_\ell(t) = \int_{0}^T \int_{\Omega} \mathbf{K}_\ell(t,\tau,x) \cdot \frac{\partial}{\partial \tau} \mathbf{m}(x,\tau) \, \mathrm{d} \tau \, \mathrm{d} x, \end{equation} where $\mathbf{m}$ fulfills \eqref{llg1}, \eqref{llg1_bc}, \eqref{llg1_abs}.
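Discretizing both integrals in \eqref{forwardoperator} by Riemann sums yields a simple reference implementation. The sketch below assumes the transfer function is available as a callable and that all quantities are sampled on uniform grids; the shapes and the constant test data are illustrative.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability

def measured_signal(a_tilde, c, p, dm_dtau, t, tau, dx):
    """Sketch of v(t_i) = sum_j sum_k K(t_i,tau_j,x_k) . dm/dtau(x_k,tau_j) dtau dx
    with K(t,tau,x) = -mu_0 a~(t - tau) c(x) p^R(x).

    a_tilde : callable, periodic transfer function
    c       : (nx,)       concentration samples
    p       : (nx, 3)     receive-coil sensitivity samples
    dm_dtau : (nx, nt, 3) time derivative of the magnetization
    t, tau  : (nt_out,), (nt,) time grids;  dx : grid cell size
    """
    s = np.einsum('xd,xtd->xt', p, dm_dtau)       # p^R(x) . dm/dtau -> (nx, nt)
    spatial = (c[:, None] * s).sum(axis=0) * dx   # inner integral over x -> (nt,)
    A = a_tilde(t[:, None] - tau[None, :])        # (nt_out, nt) filter weights
    dtau = tau[1] - tau[0]
    return -MU0 * A @ spatial * dtau              # outer integral over tau
```

Stacking the outputs for several concentrations $c_k$ and coils $\ell$ gives the discrete counterpart of the forward operator used below.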
\vspace*{2ex} To determine $\mathbf{m}$ in $\Omega\times(0,T)$, we use the data ${v}_{k\ell}(t)$, $k=1,...,K$, $\ell=1,...,L$, from the scans that we obtain for different particle concentrations $c_k$, $k=1,...,K$, $K \in \mathbb{N}$. The forward operator thus reads \begin{equation} \label{forwardoperator_kl} \begin{split} & {v}_{k\ell}(t) = \int_{0}^T \int_{\Omega} \mathbf{K}_{k\ell}(t,\tau,x) \cdot \frac{\partial}{\partial \tau}\mathbf{m}(x,\tau) \, \mathrm{d} x \, \mathrm{d} \tau \,, \\ &\mathbf{K}_{k\ell}(t,\tau,x) := -\mu_0\widetilde{a}_\ell(t - \tau ) c_k(x) \mathbf{p}^{\mathrm{R}}_\ell(x). \end{split} \end{equation} \subsection{Equivalent formulations of the LLG equation} In this section, we derive additional formulations of \eqref{llg1} - \eqref{llg1_abs} that are suitable for the analysis. The approach is motivated by \cite{mkap06}, where only particle-particle interactions are taken into account. \\ First of all, we observe that taking the inner product of both sides of \eqref{llg1} with $\mathbf{m}$ yields \begin{equation} \frac{1}{2} \cdot \frac{\mathrm{d}}{\mathrm{d}t} \lvert \mathbf{m}(x,t) \rvert^2 = \mathbf{m}(x,t) \cdot \mathbf{m}_t(x,t) = 0, \end{equation} which shows that the absolute value of $\mathbf{m}$ does not change in time. Since $\lvert \mathbf{m}_0 \rvert = m_{\mathrm{S}}$, we have $\mathbf{m}(x,t) \in m_{\mathrm{S}}\cdot \mathcal{S}^2$, where $\mathcal{S}^2 := \lbrace \mathbf{v} \in \mathbb{R}^3 \, : \, \lvert \mathbf{v} \rvert = 1 \rbrace$ is the unit sphere in $\mathbb{R}^3$. As a consequence, we have $0 = \nabla \lvert \mathbf{m} \rvert^2 = 2\nabla \mathbf{m} \cdot \mathbf{m}$ in $\Omega$, so that, by taking the divergence, we get \begin{equation} \label{grad_abs_m} \langle \mathbf{m}, \Delta \mathbf{m} \rangle = - \langle \nabla \mathbf{m}, \nabla \mathbf{m} \rangle.
\end{equation} Now we make use of the identity \begin{displaymath} \mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \langle \mathbf{a},\mathbf{c} \rangle \mathbf{b} - \langle \mathbf{a},\mathbf{b} \rangle \mathbf{c} \end{displaymath} for $\mathbf{a},\mathbf{b},\mathbf{c} \in \mathbb{R}^3$ to derive \begin{align} \mathbf{m} \times (\mathbf{m} \times \Delta \mathbf{m}) &= \langle \mathbf{m}, \Delta \mathbf{m} \rangle \mathbf{m} - \lvert \mathbf{m} \rvert^2 \Delta \mathbf{m} = -\lvert \nabla \mathbf{m} \rvert^2\mathbf{m}-m_{\mathrm{S}}^2\Delta \mathbf{m}, \label{bac_cab_1} \\ \mathbf{m} \times (\mathbf{m} \times \fh_{\mathrm{ext}}) &= \langle \mathbf{m}, \fh_{\mathrm{ext}} \rangle \mathbf{m} - \lvert \mathbf{m} \rvert^2 \fh_{\mathrm{ext}} =\langle \mathbf{m}, \fh_{\mathrm{ext}} \rangle \mathbf{m} - m_{\mathrm{S}}^2\fh_{\mathrm{ext}} . \label{bac_cab_2} \end{align} Using \eqref{grad_abs_m} together with \eqref{bac_cab_1}, \eqref{bac_cab_2} and $\lvert \mathbf{m} \rvert = m_{\mathrm{S}}$, we obtain from \eqref{llg1} - \eqref{llg1_abs} \begin{align} \begin{split} \mathbf{m}_t - \alpha_1 \, m_{\mathrm{S}}^2\,\Delta \mathbf{m} &= \alpha_1 \lvert \nabla \mathbf{m} \rvert^2 \mathbf{m} + \alpha_2 \mathbf{m} \times \Delta \mathbf{m} \\ &\quad- \alpha_1 \langle \mathbf{m}, \fh_{\mathrm{ext}} \rangle \mathbf{m} + \alpha_1 \,m_{\mathrm{S}}^2\,\fh_{\mathrm{ext}} + \alpha_2 \mathbf{m} \times \fh_{\mathrm{ext}} \end{split} && \text{in} \ [0,T] \times \Omega, \label{llg2}\\ 0 &= \partial_{\nu} \mathbf{m} && \text{on} \ [0,T] \times \partial\Omega, \label{llg2_bc}\\ \mathbf{m}_0 &= \mathbf{m}(t=0), \ \lvert \mathbf{m}_0 \rvert = m_{\mathrm{S}} && \text{in} \ \Omega, \label{llg2_abs} \end{align} Taking the cross product of $\mathbf{m}$ with \eqref{llg2} and multiplying with $-\hat{\alpha}_2$, where $\hat{\alpha}_1=\frac{\alpha_1}{m_{\mathrm{S}}^2\alpha_1^2+\alpha_2^2}$, $\hat{\alpha}_2=\frac{\alpha_2}{m_{\mathrm{S}}^2\alpha_1^2+\alpha_2^2}$, by \eqref{bac_cab_1}, \eqref{bac_cab_2} 
and cancellation of the first and third terms on the right-hand side, we get \[ \begin{aligned} & -\hat{\alpha}_2\mathbf{m}\times\mathbf{m}_t + \alpha_1\hat{\alpha}_2m_{\mathrm{S}}^2\,\mathbf{m}\times\Delta \mathbf{m} \\ &= \frac{\alpha_2^2}{m_{\mathrm{S}}^2\alpha_1^2+\alpha_2^2} \left(\lvert \nabla \mathbf{m} \rvert^2\mathbf{m}+m_{\mathrm{S}}^2\Delta \mathbf{m}\right)\\ &\hspace{2cm} - \alpha_1\hat{\alpha}_2 m_{\mathrm{S}}^2\,\mathbf{m}\times\fh_{\mathrm{ext}} + \frac{\alpha_2^2}{m_{\mathrm{S}}^2\alpha_1^2+\alpha_2^2} \left(m_{\mathrm{S}}^2\fh_{\mathrm{ext}} - \langle \mathbf{m}, \fh_{\mathrm{ext}} \rangle \mathbf{m}\right)\,, \end{aligned} \] where the second term on the left-hand side is $m_{\mathrm{S}}^2$ times the quantity $\alpha_1\hat{\alpha}_2\,\mathbf{m}\times\Delta \mathbf{m}$, which can be expressed via \eqref{llg2} as \[ \alpha_1\hat{\alpha}_2\mathbf{m}\times\Delta \mathbf{m} = \hat{\alpha}_1\mathbf{m}_t + \frac{\alpha_1^2}{m_{\mathrm{S}}^2\alpha_1^2+\alpha_2^2}\left( -m_{\mathrm{S}}^2\Delta \mathbf{m} - \lvert \nabla \mathbf{m} \rvert^2 \mathbf{m} + \langle \mathbf{m}, \fh_{\mathrm{ext}} \rangle \mathbf{m} - m_{\mathrm{S}}^2\fh_{\mathrm{ext}}\right) -\alpha_1\hat{\alpha}_2 \mathbf{m} \times \fh_{\mathrm{ext}}\,. \] This yields the alternative formulation \begin{align} \hat{\alpha}_1 m_{\mathrm{S}}^2 \mathbf{m}_t - \hat{\alpha}_2 \mathbf{m} \times \mathbf{m}_t -m_{\mathrm{S}}^2\Delta \mathbf{m} &= \lvert \nabla \mathbf{m} \rvert^2 \mathbf{m} + m_{\mathrm{S}}^2\fh_{\mathrm{ext}} - \langle \mathbf{m}, \fh_{\mathrm{ext}} \rangle \mathbf{m} && \text{in} \ [0,T] \times \Omega, \label{llg3}\\ 0 &= \partial_{\nu} \mathbf{m} && \text{on} \ [0,T] \times \partial\Omega, \label{llg3_bc}\\ \mathbf{m}_0 &= \mathbf{m}(t=0), \ \lvert \mathbf{m}_0 \rvert = m_{\mathrm{S}} && \text{in} \ \Omega \label{llg3_abs}\,. \end{align}
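The equivalence of \eqref{llg1} and \eqref{llg3} can be sanity-checked numerically for a spatially constant magnetization, where $\Delta\mathbf{m}$ and $\nabla\mathbf{m}$ vanish; the parameter values below are arbitrary test values, not physical constants.

```python
import numpy as np

# For constant-in-space m (Delta m = 0, grad m = 0), the time derivative m_t
# produced by the original LLG form must also satisfy the alternative form.
rng = np.random.default_rng(1)
a1, a2, m_s = 0.7, 1.3, 2.0
D = m_s**2 * a1**2 + a2**2
ha1, ha2 = a1 / D, a2 / D                # hat(alpha)_1, hat(alpha)_2

m = rng.normal(size=3)
m *= m_s / np.linalg.norm(m)             # enforce |m| = m_S
h = rng.normal(size=3)                   # external field h_ext

# original form: m_t = -a1 m x (m x h) + a2 m x h
m_t = -a1 * np.cross(m, np.cross(m, h)) + a2 * np.cross(m, h)
# alternative form: ha1 m_S^2 m_t - ha2 m x m_t = m_S^2 h - <m, h> m
lhs = ha1 * m_s**2 * m_t - ha2 * np.cross(m, m_t)
rhs = m_s**2 * h - np.dot(m, h) * m
```

The two sides agree to machine precision for any $\mathbf{m}$ with $\lvert\mathbf{m}\rvert = m_{\mathrm{S}}$, mirroring the algebraic derivation above.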
\section{Introduction} Glitches are typical events of pulsars, observed as sudden jumps in rotational frequency ($\nu$) and spin-down rate ($\dot\nu$), usually followed by a recovery stage, in which $\nu$ and $\dot\nu$ recover gradually to the extrapolated values of the pre-glitch evolution trend. The post-glitch behavior of the spin frequency can be described by polynomial components and several exponential processes (as described in equation (\ref{eq00})), such as $\sum\Delta\nu_{\rm di}\exp(-t/\tau_{\rm i})$, where $\Delta\nu_{\rm di}$ and $\tau_{\rm i}$ are the amplitude and time scale of the $i^{\rm th}$ component. Most of these exponential processes have positive $\Delta\nu_{\rm di}$ (called ``normal recovery processes''); however, some with negative $\Delta\nu_{\rm di}$ have been observed in the Crab pulsar \citep{Lyne1992,Wong2001,Shaw2018,Zhang2018_2}. Here, an exponential process with negative $\Delta\nu_{\rm di}$ is called a delayed spin-up process, as defined in \cite{Shaw2018}. The delayed spin-up process may dominate the evolution of $\nu$ and $\dot\nu$ immediately after the occurrence of a glitch, but is much more difficult to detect, probably because this process has a much shorter time scale than the recovery process \citep{McCulloch1990,Dodson2002,Palfreyman2018}. The Vela pulsar is famous for its large glitches, some of which have been observed continuously; however, besides the ordinary recovery processes, only upper limits of $12.6$\,s to $2$\,minutes have been obtained for the rising time scales of these glitches before the recovery starts. No delayed spin-up process has been detected in the Vela pulsar \citep{McCulloch1990,Dodson2002,Palfreyman2018,Ashton et al.(2019)}. The Crab pulsar is another important object for pulsar glitch studies, from which 26 glitches have been detected so far \citep{Espinoza2011,Wang2012,Lyne2015,Shaw2018,Shaw2018b}.
Compared with the Vela pulsar, the Crab pulsar has two unique features, although its glitch amplitudes are usually smaller. The first feature is that its $\Delta{\nu}$ and $\Delta{\dot\nu_{p}}$ values are positively and linearly correlated \citep{Lyne2015,Shaw2018}. The second feature is that delayed spin-up processes have been observed in its large glitches with time scales of $0.5-3.0$\,days, such as the glitches of 1989, 1996 and 2017 \citep{Lyne1992,Wong2001,Shaw2018,Zhang2018_2}. At present, there are mainly two proposed trigger mechanisms for pulsar glitches. One is the star quake model, in which the (outer) crystalline crust of a neutron star (NS) breaks as strain in the crust gradually accumulates due to the spin-down of the NS and finally surpasses its maximum sustainable value. The sudden rearrangement of the stellar moment of inertia caused by the star quake results in a glitch \citep{Ruderman1969}. The other mechanism invokes neutron superfluidity in a NS, which is expected when the internal temperature of the star drops below the critical temperature for neutron pairing. The superfluid neutrons rotate by forming quantized vortices, which can get pinned to nuclei in the outer crust. Once the pinned vortices are released suddenly, a glitch results from the angular momentum transfer between the inner superfluid and the outer crust \citep{Anderson1975,Alpar1984a}. After the glitch, the superfluid vortices move outwards as they lose angular momentum, and are subsequently repinned to the outer crust. The superfluid vortex model has its advantage in understanding pulsar glitches, especially the post-glitch recovery process \citep{Baym1969}. In addition, it should be noted that sudden crust breaking may also trigger vortex unpinning avalanches \citep{Alpar1993}.
The delayed spin-up behaviors observed in some glitches of the Crab pulsar may not be well explained by a simple star quake or superfluid vortex model, since the time scale for crust breaking and plate motion, or for unpinned vortices to move radially outward, is less than a minute, which makes it hard to account for the presence of a $\sim2$-day delayed spin-up process \citep{Graber2018}. One possible scenario for the delayed spin-up process is the inward motion, induced at the glitch, of vortex lines pinned to broken crustal plates that move towards the rotation axis \citep{Gugercinoglu2019}. Other possible scenarios are that (1) the excess heating due to a quake in a hot crust induces secular vortex movement \citep{Greenstein1979,Link1996}, or (2) the mutual friction strength in a strongly pinned crustal superfluid region changes due to the propagation of the unpinned vortex front \citep{Haskell2018}. The delayed spin-up processes in glitches thus carry rich information on how the glitches progress, and offer valuable probes of the inner structure of neutron stars \citep{Haskell2014,Haskell2018}. Given the small number of known spin-up processes, any new event of this kind will add precious knowledge about glitches and the physics behind them. The common feature of the three glitches with delayed spin-up processes that occurred in 1989, 1996 and 2017 is that their $\Delta\nu$ values are large compared with those of the other known glitches of the Crab pulsar. We therefore selected two large glitches in 2004 and 2011, performed detailed analyses of their timing behavior, and found that they do contain delayed spin-up processes. We name the five glitches G1, G2, G3, G4 and G5, corresponding to the events in 1989, 1996, 2004, 2011, and 2017, respectively.
In order to describe the different components conveniently, the full spin evolution can be divided into four components: the rapid initial spin-up of the frequency (C1), the delayed spin-up process (C2), exponential decay processes (C3) and the permanent change of the frequency and its derivatives (C4) \citep{Lyne1992,Wong2001,Shaw2018,Zhang2018_2}, which dominate the glitch behavior in different stages accordingly. Based on the parameters of these five glitches, we have also carried out a statistical study of the spin-up processes. This paper is organized as follows. The observations and data reduction are described in Section 2, and the timing analysis results are presented in Section 3. Section 4 includes the physical implications of these results and the main conclusions. \section{Observations and Timing Analysis} The temporal analyses of these glitches use all the radio, X-ray and $\gamma$-ray observations we can access. We use the {\sl RXTE}, {\sl INTEGRAL} and Nanshan 25-m radio telescope observations, together with the spin frequency and its derivative from the Jodrell Bank radio ephemeris\footnote{http://www.jb.man.ac.uk/~pulsar/crab.html} \citep{Lyne1993}, to analyze the timing behavior of G3; due to the low cadence, only the {\sl Fermi-LAT/GBM} observations are used for the analyses of G4. For G5, the observations from {\sl Insight-HXMT}, {\sl Fermi-LAT/GBM}, the Nanshan 25-m radio telescope and the Kunming 40-m (KM40) radio telescope are used. We cite the parameters of G1 and G2 from \cite{Wong2001}. In order to perform the timing analysis, the arrival times are corrected to the Solar System Barycentre (SSB) with the solar system ephemeris DE405, using the pulsar position $\alpha=05^{h}31^{m}31^{s}.972$ and $\delta=22^{\circ}00^{\prime}52^{\prime\prime}.069$ \citep{Lyne1993}. In this section, we first describe the data reduction for the observations.
Then, the calculation of the time of arrival (TOA) and its error is presented. Finally, the description of the timing method for the glitches is given. \subsection{Data Reduction for Radio observations} We utilize the radio observations from the Nanshan 25-m radio telescope located in China \citep{Wang2001} to supply the timing solutions for G3 and G5. We also utilize some observations from the Kunming 40-m (KM40) radio telescope located in China \citep{Wang2001,Xu2018} to supply the timing solution for G5. The Nanshan 25-m radio telescope, operated by Xinjiang Astronomical Observatory (XAO), has observed the Crab pulsar frequently since January 2000 \citep{Wang2001}. The two hands of linear polarization are obtained with a cryogenically cooled receiver at a center frequency of 1540\,MHz with a bandwidth of 320\,MHz. The signals are fed through a digital filter bank with a configuration of 2$\times$1024$\times$0.5\,MHz for pulsar timing. The samples are 8-bit digitized at 64\,$\mu$s intervals and written to PSRFITS files \citep{Hotan2004}. The integration time of each observation of the Crab pulsar is 16\,minutes. The timing observations at 2256\,MHz were conducted with the KM40 telescope \citep{Xu2018}, operated by Yunnan Astronomical Observatory. A room-temperature receiver provides the circularly polarized signal with a bandwidth of 140\,MHz. The digital filter bank divides the intermediate-frequency signal into sub-channels of 1.0\,MHz each. The integration time of each observation of the Crab pulsar is 48\,minutes. For the radio observations, the off-line data reduction is performed in the following two steps using the PSRCHIVE package \citep{Hotan2004}: (1) the data are de-dispersed and summed to produce a total intensity profile; (2) the data are correlated with the standard pulse profile of the Crab pulsar to determine the local TOAs that correspond to the peak of the main pulse. The detailed data reduction process is the same as that described in \cite{Yuan2010}.
\subsection{Data Reduction of X-ray and $\gamma$-ray observations} In this section, we introduce the data reduction processes of the RXTE, INTEGRAL, Insight-HXMT and Fermi observations, respectively. \subsubsection{Data Reduction of the {\sl RXTE} Observations} The {\sl RXTE} observations used in this paper were obtained by both the Proportional Counter Array (PCA) and the High Energy X-ray Timing Experiment (HEXTE). Detailed introductions to PCA and HEXTE can be found in \cite{Rothschild et al.(1998)}, \cite{Jahoda et al.(2006)} and \cite{Yan et al.(2017)}. In this paper, the public data (ObsID P80802 and P90802) in event mode E\_250us\_128M\_0\_1s in 5--60\,keV from PCA and E\_8us\_256\_DX0F in 15--250\,keV from HEXTE are used. The standard {\sl RXTE} data processing method with HEASOFT (v6.25) is used to obtain the timing data (i.e., the arrival time of each photon used in the analyses) as follows: (1) generate the good time intervals with the FTOOL \texttt{maketime} based on the {\sl RXTE} filter file; (2) filter the events with the \texttt{grosstimefilt} tool; (3) convert the arrival time of each photon to the Solar System Barycenter (SSB) with \texttt{faxbary}. The selection criteria and the detailed process can be found in \cite{Yan et al.(2017)}. One TOA is obtained from each typical {\sl RXTE} observation. \subsubsection{Data Reduction for INTEGRAL} The INTEGRAL observations of the Crab pulsar are subdivided into so-called Science Windows (ScWs), each with a typical duration of a few kiloseconds \citep{Winkler2003}. By selecting offset angles to the source of less than 10 degrees between 2004-03-01 and 2004-04-01, 96 public ScWs are selected for the Crab pulsar in the data archive at the INTEGRAL Science Data Centre. The data reduction is performed using the standard Off-line Scientific Analysis (OSA) software, version 10.2. The integration time per TOA for INTEGRAL is about 1\,hour.
\subsubsection{Data Reduction for Insight-HXMT} Launched on June 15, 2017, {\it Insight-HXMT} was originally proposed in the 1990s, based on the Direct Demodulation Method \citep{Li93,Li94}. As the first X-ray astronomical satellite of China, {\it Insight-HXMT} carries three main payloads \citep{Zhang2014,Zhang2017,Liu2019,Chen2019,Cao2019}: the High Energy X-ray telescope (HE, 20--250\,keV, 5100\,cm$^{2}$), the Medium Energy X-ray telescope (ME, 5--30\,keV, 952\,cm$^{2}$), and the Low Energy X-ray telescope (LE, 1--15\,keV, 384\,cm$^{2}$). The data reduction for the Crab observations is done with the HXMTDAS software v1.0, and the data processing is described in \cite{Chen2018}, \cite{Huang2018} and \cite{Tuo2019}. One TOA is obtained from each typical observation. \subsubsection{Data Reduction for Fermi-LAT/GBM} The Large Area Telescope (LAT) is the main instrument of Fermi, which can detect $\gamma$-rays in the energy range from 20\,MeV to 300\,GeV and has an effective area of $\sim 8000$\,cm$^2$. It consists of a high-resolution converter tracker, a CsI(Tl) crystal calorimeter, and an anti-coincidence detector, which perform the direction measurement, the energy measurement of the $\gamma$-rays, and the background discrimination, respectively \citep{Atwood2009}. In this work, we use the LAT data to perform timing analysis with the Fermi Science Tools (v10r0p5)\footnote{https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/pulsar\_analysis\_tutorial.html}. The events are selected within an angular distance of $1^{\circ}$ of the Crab pulsar, with a zenith angle of less than $105^{\circ}$, and in the energy range 0.1 to 10\,GeV \citep{Abdo2010}. After event selection, the arrival time of each event is corrected to the SSB with DE405. One TOA is obtained from every two-day exposure. We also utilize the Gamma-ray Burst Monitor (GBM) data around the glitch epochs to refine the timing results.
Due to the large field of view ($\sim 2\pi$) of GBM and the relatively high count rate of the Crab pulsar ($\sim 30$\,cnts/s), GBM can also be used to monitor the Crab pulsar continuously like LAT, and even has a higher cadence, as shown in Figure \ref{fig0}. Given the periodicity of the pulse signals, they can be detected whenever the pulsar is in the field of view of GBM, even though the overall background is high due to the large field of view. As the volume of the GBM data is very large, we only select one month of data around each of G4 and G5, covering 10\,days before and 20\,days after each glitch epoch. The events with elevation angles greater than 5 degrees are used to perform the timing analysis. One TOA can then be accumulated from every 10\,minutes of observation. \subsection{TOA calculation for X-ray and $\gamma$-ray observations} {\bf The evolutions of the spin frequency and its derivatives are estimated from the TOAs utilizing the timing tool TEMPO2 \citep{Hobbs et al. (2006)}, while the TOAs are obtained in a similar way to that in \cite{Ge2019}: we first obtain a standard pulse profile that contains 100 bins from all observations, then calculate the phase shift $\Phi_{0}$ of each observation using its pulse profile, the standard pulse profile and the cross-correlation method; finally, the TOA is calculated with the formula ${\rm TOA=T_{0}+\Phi_{0}/\nu/86400}$, where ${\rm T_{0}}$ is the start time of the observation and ${\rm \nu}$ is the spin frequency. The uncertainty of a TOA is calculated with a Monte-Carlo method as also described in \cite{Ge2019}.} \subsection{Timing Analysis} \subsubsection{Part--Timing Analysis} We apply the part--timing method to show the spin evolution versus time, as described in \cite{Ferdman2015}. In order to show the spin evolution directly, we divide the data set into several subsets for each glitch. For G3, the time step for $\nu$ and $\dot\nu$ obtained with TEMPO2 \citep{Hobbs et al.
(2006)} is about 15\,days without data overlapping, due to the low cadence of the observations. With the high cadence of the observations for G4 and G5, the time steps of the data sets for G4--5 are chosen as 1.5 and 0.5\,days, respectively. For $\dot\nu$, the time steps of the data sets for G4--5 are chosen to be twice those for $\nu$, and the overlapping time is set equal to the time step in order to show more data points in the figures. For each subset, we have taken the center of the time span as the reference epoch for the timing analysis. \subsubsection{Coherent Timing Analysis} A coherent timing analysis of the data set is performed in order to obtain more precise measurements of the glitch parameters using TEMPO2. The phase evolution across the glitch can be described by equation (\ref{eq00}) with the glitch parameters included \citep{Wong2001}, \begin{equation} \Phi=\Phi_{0} + \nu{(t-t_{0})}+\frac{1}{2}\dot\nu{(t-t_{0})^{2}}+\frac{1}{6}\ddot\nu{(t-t_{0})^{3}} + \Phi_{\rm g}(t), \label{eq00} \end{equation} where $\nu$, $\dot\nu$ and $\ddot\nu$ are the spin parameters at the epoch $t_{0}$. $\Phi_{\rm g}(t)$ is the phase description after the glitch, as defined in equation (\ref{eq01}), \begin{equation} \Phi_{\rm g}(t)=\Delta\nu_{\rm p}\Delta{t}+\frac{1}{2}\Delta\dot\nu_{\rm p}\Delta{t}^2+ \sum_{i=0,1,2}\Delta{\nu}_{\rm di}\tau_{\rm i}\left(1-\exp(-\Delta{t}/\tau_{\rm i})\right), \label{eq01} \end{equation} where $\Delta{t}=t-t_{\rm g}$ is the time after the glitch, $\Delta\nu_{\rm p}$ and $\Delta\dot\nu_{\rm p}$ are the permanent changes of $\nu$ and $\dot\nu$ for C4, $\tau_{0}$, $\tau_{1}$ and $\tau_{2}$ are the time scales for C1, C2 and C3, and $\Delta{\nu}_{\rm d0}$, $\Delta{\nu}_{\rm d1}$ and $\Delta{\nu}_{\rm d2}$ are the amplitudes of the three components: $i=0$ refers to the rapid initial spin-up (C1), $i=1$ to the delayed spin-up process (C2), and $i=2$ to the conventionally observed exponential recovery process (C3).
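The post-glitch phase model of equation (\ref{eq01}) is straightforward to evaluate once the component amplitudes and time scales are given; the sketch below implements it, and the parameter values used for testing are made-up placeholders, not fitted Crab values.

```python
import numpy as np

def glitch_phase(dt, dnu_p, dnudot_p, dnu_d, tau):
    """Post-glitch phase residual Phi_g as a function of dt = t - t_g.

    dnu_p, dnudot_p : permanent frequency / frequency-derivative changes (C4)
    dnu_d, tau      : amplitudes and time scales of the exponential components
                      C1 (i=0), C2 (i=1, delayed spin-up, negative amplitude)
                      and C3 (i=2, normal recovery)
    """
    dt = np.asarray(dt, dtype=float)
    phi = dnu_p * dt + 0.5 * dnudot_p * dt**2
    for a, t_i in zip(dnu_d, tau):
        phi += a * t_i * (1.0 - np.exp(-dt / t_i))
    return phi
```

Differentiating $\Phi_{\rm g}$ with respect to $\Delta t$ recovers the frequency-residual form with the terms $\Delta\nu_{\rm di}\exp(-\Delta t/\tau_{\rm i})$, so at $\Delta t = 0$ the slope equals $\Delta\nu_{\rm p}+\sum_i\Delta\nu_{\rm di}$.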
In the following timing analysis, the effect of C1 is neglected, as its time scale is too short; it will be analyzed in Section 3.2. In order to obtain the net evolution of a glitch, we subtract the pre-glitch spin-down trend and then fit the frequency residuals $\delta\nu$ with equation (\ref{eq0}) \citep{Lyne1992,Wong2001,Xie2013}, which consists of a linear function and two exponential functions, \begin{equation} \delta{\nu}=\Delta\nu_{\rm p}+\Delta\dot\nu_{\rm p}\Delta{t}+\sum_{i=1,2}{\Delta{\nu}_{\rm di}\exp(-\Delta{t}/\tau_{\rm i})}, \label{eq0} \end{equation} where the parameters have the same definitions as in equation (\ref{eq01}). The coherent timing analysis for the different instruments is performed simultaneously, using the parameter `JUMP' to describe the time lags between the different energy bands, because the peak position of the Crab pulsar evolves with energy, as reported in \cite{Kuiper et al.(2003),Molkov et al.(2010),Ge et al.(2012)}. Setting the position of the radio peak as phase 0, the values of JUMP relative to the radio band are $-0.340$\,ms (Insight-HXMT/RXTE/GBM), $-0.275$\,ms (INTEGRAL) and $-0.250$\,ms (LAT), respectively. The residual $\delta\dot{\nu}$ can be described by \begin{equation} \delta\dot{\nu}=\Delta\dot\nu_{\rm p}+\sum_{\rm i=1,2}{\Delta{\dot\nu}_{\rm di}\exp(-\Delta{t}/\tau_{\rm i})}, \label{eq1} \end{equation} where $\Delta{\dot\nu}_{\rm di}=-\Delta{\nu}_{\rm di}/\tau_{\rm i}$. The total frequency and frequency-derivative changes at the time of the glitch are $\Delta\nu_{\rm g}= \Delta\nu_{\rm p}+\Delta\nu_{\rm d1}+\Delta\nu_{\rm d2}$ and $\Delta\dot\nu_{\rm g}= \Delta\dot\nu_{\rm p}+\Delta\dot\nu_{\rm d1}+\Delta\dot\nu_{\rm d2}$, respectively, and the degree of recovery can be described by the parameter $\hat{Q}=\Delta\nu_{\rm d2}/(\Delta\nu_{\rm g}+|\Delta\nu_{\rm d1}|)$, as suggested by \cite{Wong2001}. \section{Results} \subsection{The timing results for G3--5} We first analyze G3, the second largest glitch, by using the coherent timing method.
The timing residuals are shown in Figure \ref{fig0_0}(a) and the timing parameters are listed in Table \ref{table_timing_para}. The parameters of C3 are consistent with the results of \cite{Wang2012}. After subtraction of the pre-glitch evolution, the residuals $\delta{\nu}$ and $\delta{\dot\nu}$ are plotted in Figure \ref{fig1} (a) and (b). Due to the observational coverage, no spin-up process is directly detected in $\delta\nu$, and only marginally in $\delta\dot{\nu}$. However, the observational data cannot be acceptably fitted without C2 (reduced $\chi^{2}=1.3$ for d.o.f.$=54$), which means that the delayed spin-up process is needed. Fitting the data with both C2 and C3 gives the time scale of G3 as $\tau_{1}=1.7\pm0.8$\,days with $\Delta\nu_{\rm d1}=-0.35\pm0.05$\,$\mu${Hz} for the delayed spin-up process. We note here that the delayed spin-up process of G3 could be quantified in more detail by using data such as the daily radio monitoring observations at the Jodrell Bank observatory. G4 is also analyzed using the coherent timing method. The timing residuals are shown in Figure \ref{fig0_0}(b) and the timing parameters are listed in Table \ref{table_timing_para}. As shown in Figure \ref{fig1} (c) and (d) for G4, after subtraction of the pre-glitch evolution, the spin frequency residual $\delta{\nu}$ increases first and then decreases with time, which is exactly the signature of the delayed spin-up process. From the coherent timing analysis, we obtain $\tau_{1}=1.6\pm0.4$\,days with $\Delta\nu_{\rm d1}=-0.43\pm0.05$\,$\mu${Hz}. With the fitted parameters, $\delta{\dot\nu}$ can also be described by equation (\ref{eq1}), as shown in Figure \ref{fig1} (d). G5 is re-analyzed using the coherent timing method as well. The timing residuals are shown in Figure \ref{fig0_0}(c) and the timing parameters are listed in Table \ref{table_timing_para}.
As shown in Figure \ref{fig1} (e) and (f), the evolution of the frequency residual $\delta{\nu}$ is consistent with the results reported by \cite{Shaw2018} and \cite{Zhang2018_2}. From the fitting result, the time scale $\tau_{1}$ of the delayed spin-up process is $2.56\pm0.04$\,days and $\Delta\nu_{\rm d1}=-1.23\pm0.01$\,$\mu${Hz}, which are also consistent with the results of \cite{Shaw2018} and \cite{Zhang2018_2}. The remaining parameters are listed in Table \ref{table_timing_para}. Our analysis shows that G3 and G4 also have delayed spin-up processes. Including G1, G2 and G5, there are five glitches with delayed spin-up processes. From Table \ref{table_timing_para}, the mean time scale $\tau_{1}$ of C2 is $\sim1.4$\,days, while the mean amplitude $\Delta\nu_{\rm d1}$ of C2 is around $-0.6$\,$\mu$Hz \citep{Lyne1992,Wong2001,Shaw2018,Zhang2018_2}. \subsection{The rising time constraint of C1} The rising time scale of C1 is very important for studying the pinning process between the inner superfluid and the outer crust \citep{Haskell2018}. We make use of the observations from {\sl Fermi-GBM} to constrain the rising time of C1 for G5. Unfortunately, the Crab pulsar was occulted by the Earth at the epoch of G5 in the {\sl Fermi-GBM} observations. In order to constrain the rising time scale of C1 of G5, equation (\ref{eq2}) is used to describe the frequency evolution of C1 \citep{Haskell2018}, \begin{equation} \delta{\nu}=\Delta{\nu}_{0}(1-\exp(-\Delta{t}/\tau_{0})), \label{eq2} \end{equation} where $\Delta{\nu}_{0}$ is the amplitude of the frequency jump, $\tau_{0}$ is the rising time scale, and $\Delta{t}=t-t_{\rm g}$ is the time after the glitch. As shown in Figure \ref{fig2} (a) and (b), $\delta{\nu}$ can be fitted with equation (\ref{eq2}). 
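As an illustration of how equation (\ref{eq2}) constrains the rising time scale, the following Python sketch fits the model to synthetic post-glitch residuals (the amplitude, time scale and noise level are invented for demonstration and are not the measured G5 values):

```python
import numpy as np
from scipy.optimize import curve_fit

# Rising model of equation (eq2): delta_nu = dnu0 * (1 - exp(-dt/tau0)).
def rise(dt, dnu0, tau0):
    return dnu0 * (1.0 - np.exp(-dt / tau0))

rng = np.random.default_rng(1)
dt = np.linspace(0.002, 0.2, 40)                             # days after t_g
dnu = rise(dt, 14.0, 0.015) + rng.normal(0.0, 0.2, dt.size)  # muHz, synthetic

popt, _ = curve_fit(rise, dt, dnu, p0=(10.0, 0.01))
dnu0_fit, tau0_fit = popt
```

In practice one can also scan $\chi^{2}$ over a grid of fixed $\tau_{0}$ values, as in Figure \ref{fig2}(b), and quote the value where $\chi^{2}$ starts to grow as an upper limit on the rising time.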
As shown in Figure \ref{fig2} (b), the rising time scale $\tau_{0}$ is less than 0.0202\,day (0.48\,hour), which is much less than the upper limit of 6 hours given by \cite{Shaw2018} but still longer than the theoretical value of 0.1\,hour suggested by \cite{Graber2018} and \cite{Haskell2018}. For G3, the rising time scale could not be constrained because no high cadence observations around the glitch epoch are available in the high energy bands or in the radio band from the Nanshan 25-m radio telescope. For G4, the errors of $\delta{\nu}$ are comparable to the frequency step itself for the short integration time of 10 minutes, so the rising time scale cannot be constrained either. \subsection{The correlations between the parameters of C2 and other components} We first compare G1--5 with the other glitches of the Crab pulsar, to see how these five glitches differ from the others. The most conventional comparison is between the jump amplitudes of their frequencies and frequency derivatives. As shown in Figure \ref{fig3}, the Pearson coefficient between $|\Delta{\dot\nu_{\rm g}}|$ and $\Delta{\nu_{\rm g}}$ is 0.81. Hence, $|\Delta{\dot\nu_{\rm g}}|$ and $\Delta{\nu_{\rm g}}$ show a strong linear correlation for all the glitches of the Crab pulsar, including those with delayed spin-up processes. We also examine the correlation between $|\Delta{\dot\nu_{\rm p}}|$ and $\Delta{\nu_{\rm g}}$, which is similar to Figure 5 in \cite{Lyne2015}. The values of $|\Delta\dot\nu_{p}|$ are taken from \cite{Wong2001}, \cite{Wang2012} and this work, because the calculation procedure in \cite{Lyne2015} differs from that of the other works. As shown in Figure \ref{fig3}, $|\Delta{\dot\nu_{\rm p}}|$ also has a strong linear correlation with $\Delta{\nu_{\rm g}}$, with a Pearson coefficient of 0.98. 
As shown in Figure \ref{fig3_2}, the $\Delta{\nu_{\rm g}}$ and $|\Delta{\dot\nu_{\rm g}}|$ values of the five glitches with delayed spin-up processes lie in the higher wing of the overall distribution of all the glitches and are not well separated from those of the remaining glitches. This unified positive correlation suggests that the physical mechanism of the five glitches with delayed spin-up processes is probably the same as that of all the other glitches, and it is worth checking from the archival data whether the glitches that occurred in 1975, 2000, 2001 and 2006 also have delayed spin-up processes, as they have amplitudes comparable to those of G2. To further characterize G1--5, we examine the Pearson and Spearman correlations between their parameters, as listed in Tables \ref{corre_para0} and \ref{corre_para1} and plotted in Figures \ref{fig4} and \ref{fig5}. The parameters $\tau_{1}$, $|\Delta{\nu}_{\rm d1}|$, $\tau_{2}$, $\Delta{\nu}_{\rm d2}$, $\Delta{\nu}_{\rm p}$, $|\Delta{\dot\nu}_{\rm p}|$ and $\Delta{\nu}_{\rm g}$ show positive mutual correlations, as plotted in Figures \ref{fig4} and \ref{fig5}, some of which are consistent with the results of \cite{Wang2019}. These positive correlations mean that C2 has a larger amplitude and a longer time scale when a glitch has a larger spin frequency jump. If C2 also exists for the smaller glitches, then from the positive correlation between $\tau_{1}$ and $\tau_{2}$, $\tau_{1}$ should be less than 0.5\,days if $\tau_{2}<10$\,days, which indicates that C2 might not be easily observed due to the low cadence of most previous observations. We also find that the correlations between $\Delta{\dot\nu}_{\rm d1}$ and $|\Delta{\dot\nu}_{\rm d2}|$, $|\Delta{\dot\nu}_{\rm p}|$, $\tau_{1}$, $\tau_{2}$, $\hat{\rm Q}$ are weak, as listed in Tables \ref{corre_para0} and \ref{corre_para1}. 
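The Pearson and Spearman coefficients quoted in Tables \ref{corre_para0} and \ref{corre_para1} can be computed directly with \texttt{scipy.stats}; as an example, the $(\tau_{1}, \tau_{2})$ pairs below are read off Table \ref{table_timing_para} for G1--G5 (small rounding differences with respect to the tabulated coefficients are expected):

```python
import numpy as np
from scipy import stats

tau1 = np.array([0.8, 0.5, 1.7, 1.6, 2.56])      # days, G1--G5
tau2 = np.array([18.0, 10.3, 24.0, 10.6, 45.9])  # days, G1--G5

r, _ = stats.pearsonr(tau1, tau2)    # close to the tabulated R = 0.83
rho, _ = stats.spearmanr(tau1, tau2) # matches the tabulated rho = 0.9
```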
\section{Discussions and Summary} It is generally believed that a neutron star has the following interior structure: the outer crust made of degenerate electrons and an ion crystal lattice, the inner crust composed of nuclei, superfluid neutrons, probably superfluid protons, and leptons, the outer core that contains superfluid neutrons, superfluid protons and electrons, and the inner core \citep{Anderson1975,Alpar1984a}. The angular momentum transfer from the inner superfluid component to the outer normal component can explain the observed frequency jumps (glitches) of pulsars \citep{Anderson1975,Alpar1981}. The response of the thermal vortex creep process in the pinned superfluid to the glitch is suggested to be responsible for the post-glitch behaviors \citep{Alpar1984a,Alpar1984b,Larson2002}. A quick rise of the spin rate of the crust, resulting from the initial energy deposition, could be followed by a slower rise as the thermal wave dissipates in the effective crust with a thickness of $\sim$200\,m, depending on the crust equation of state \citep{Link1996,Larson2002}. Another possible scenario is that vortex accumulation in strong pinning regions leads to differential rotation and the propagation of vortex fronts, which naturally produces a slower component of the rise after the initial fast step in the frequency jump \citep{Haskell2014,Khomenko2018,Haskell2018}. Recently, a combination of the crust-quake and vortex unpinning models was proposed by \cite{Gugercinoglu2019} to explain the whole glitch behavior. \cite{Haskell2018} estimated the rising time scale of the rapid initial spin-up of the largest glitch, G5, to be $\sim0.1$\,hours, which is consistent with the upper limit of 0.48\,hours obtained for G5 in this work. We hope that the positive correlations between the amplitudes and time scales of C2 and C3 can also be used to constrain the properties of neutron star structures. 
Figure \ref{fig3} shows the relation between $\Delta\nu$ and $|\Delta\dot\nu|$ for the Crab pulsar \citep{Espinoza2011}. The Crab pulsar, PSR J0537$-$6910 and the Vela pulsar have relatively large glitches, as characterized by both the large $\Delta\nu$ and $|\Delta\dot\nu|$ values. However, the glitch properties of the Crab pulsar are very different from those of PSR J0537$-$6910 and the Vela pulsar. $\Delta\nu$ and $|\Delta\dot\nu|$ of the Crab pulsar's glitches have a strong positive correlation \citep{Lyne2015}, in contrast to the other two pulsars, which show no such correlation \citep{Espinoza2011,Antonopoulou2018}. Five delayed spin-up events are found for the Crab pulsar; however, no similar spin-up phenomenon has been reported for either PSR J0537--6910 or the Vela pulsar. The glitch around MJD 57734 of the Vela pulsar was observed with a rising time scale of less than 12.6\,s \citep{Ashton et al.(2019)} and shows no evidence for a delayed spin-up process. Given the different ages of these three pulsars, we speculate that the states of their crusts and interiors are different, and so the conditions of the physical processes involved in glitches are different for pulsars with different ages. In summary, in this work we have studied the glitches of the Crab pulsar that show delayed spin-up processes. First, in addition to the three glitches that occurred in 1989, 1996 and 2017, we found that the second and fourth largest glitches of the Crab pulsar, detected in 2004 and 2011, also have delayed spin-up processes, with $\tau_{1}=1.7\pm0.8$\,days and $\tau_{1}=1.6\pm0.4$\,days, respectively. 
Using observations from {\sl Insight-HXMT}, the radio telescopes in Xinjiang and Kunming, China, and {\sl Fermi}, we studied the largest glitch, which occurred in 2017, obtained results similar to those of \cite{Shaw2018} and \cite{Zhang2018_2}, and further constrained its rising time to be less than 0.48\,hour. We obtained the correlations among the parameters of the delayed spin-up processes and the parameters of the exponential decay processes: the amplitudes of the delayed spin-up ($|\Delta{\nu}_{\rm d1}|$) and the recovery process ($\Delta{\nu}_{\rm d2}$), their respective time scales ($\tau_{1}$, $\tau_{2}$), and the permanent changes of the spin frequency ($\Delta{\nu}_{\rm p}$) have strong positive correlations, while the remaining parameters do not show any significant correlation with each other. From the positive correlations, we suggest that further analysis of the existing data for smaller glitches is needed to search for any evidence of delayed spin-up processes with possibly shorter spin-up time scales, and that more high cadence observations of the Crab pulsar in the future are also critical for understanding delayed spin-up processes and the interior structure of neutron stars. \acknowledgments{This work is supported by the National Key R\&D Program of China (2016YFA0400800) and the National Natural Science Foundation of China under grants U1838201, U1838202, U1938109, U1838104 and U1838101. This work made use of the data from the HXMT mission, a project funded by China National Space Administration (CNSA) and the Chinese Academy of Sciences (CAS). This work also made use of the radio observations from Yunnan Observatory. The Nanshan 25\,m radio telescope is jointly operated and administrated by Xinjiang Astronomical Observatory and Center for Astronomical Mega-Science, Chinese Academy of Sciences. 
We acknowledge the use of the public data from the {\sl RXTE}, {\sl INTEGRAL} and {\sl Fermi} data archives.} \clearpage \begin{figure} \begin{center} \includegraphics[scale=0.5]{Crab_GBM_G5.eps}\caption{ The pulse profiles detected by Fermi/GBM as a function of time around the epoch of G5. Two periods are plotted in this figure. Each profile is integrated over 10\,minutes. The vertical dashed line around 1.0 marks the peak position. The horizontal line represents the glitch epoch of G5. The pulse signal around 1\,hour and 3\,hours disappears because the Crab pulsar is not always in the field of view of the Fermi satellite and can be occulted by the Earth. The image is smoothed with a Gaussian function of radius 10 pixels to suppress the fluctuations caused by using 1000 bins for each profile. \label{fig0}} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.75]{Crab_Glitch_timre.eps}\caption{ The timing residuals. The timing residuals for G3, G4 and G5 are shown in panels (a), (b) and (c), respectively. The glitch epochs are set to 0 so that the residuals are shown over the same time range, marked by the vertical dashed lines. The circle, square, plus, pentagram and triangle points represent the observations from Fermi-GBM, Fermi-LAT, RXTE, INTEGRAL and the radio telescopes, respectively. \label{fig0_0}} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.6]{Glitch_3.eps}\caption{ The spin evolution of G3 (MJD 53067), G4 (MJD 55875) and G5 (MJD 58064). Panels (a) and (b) show the evolution of the spin frequency $\nu$ and the frequency derivative $\dot{\nu}$ of G3, with the pre-glitch fitting results subtracted. The square points are the timing results from {\sl RXTE}, {\sl INTEGRAL} and the Nanshan radio observations. $\delta{\nu}$ and $\delta{\dot\nu}$ marked by the empty circle points are the results from the monthly ephemerides of Jodrell Bank (http://www.jb.man.ac.uk/pulsar/crab/crab2.txt). 
In both panels, the thin lines represent the fitting results with equations (\ref{eq0}) and (\ref{eq1}), respectively. The dot-dashed lines represent the fitted results without the spin-up process. Panels (c) and (d) are similar to panels (a) and (b), but for G4. The results of G4 are obtained from the {\sl Fermi} data. Panels (e) and (f) are similar to panels (a) and (b), but for G5. The results of G5 are obtained from {\sl Insight-HXMT}, the radio telescopes in Xinjiang and Kunming, China, and {\sl Fermi}. The vertical lines in all panels represent the glitch epochs. \label{fig1}} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.6]{Glitch_58064.eps}\caption{ The spin evolution of G5 just after the glitch. Panel (a): similar to Figure \ref{fig1}(e), $\delta{\nu}$ is the evolution of the spin frequency $\nu$ with the pre-glitch fitting results subtracted. The thin line is the spin evolution with the parameters listed in Table \ref{table_timing_para}. The dotted, dot-dashed and dashed lines represent equation (\ref{eq2}) with time scales $\tau_{0}=0.01, 0.015$ and $0.0202$\,days, respectively. Panel (b): $\chi^{2}$ as a function of $\tau_{0}$ for fitting $\delta{\nu}$ with equation (\ref{eq2}). \label{fig2}} \end{center} \end{figure} \begin{figure} \centering \caption{{The $|\Delta{\dot\nu_{\rm g}}|$ or $|\Delta{\dot\nu_{\rm p}}|$ and $\Delta{\nu_{\rm g}}$ correlations for the Crab pulsar.} The samples are taken from the website http://www.jb.man.ac.uk/pulsar/glitches.html \citep{Espinoza2011}. Black circle points are the glitch events of the Crab pulsar and dual-circles represent G1--5. Black square points are the glitch events of the Crab pulsar, but for $|\Delta{\dot\nu_{\rm p}}|$. The values of $|\Delta{\dot\nu_{\rm p}}|$ are obtained from \cite{Wong2001}, \cite{Wang2012} and this work. Dual-square points represent G1--5, but for $|\Delta{\dot\nu_{\rm p}}|$. 
} \includegraphics[width=0.6\textwidth]{glitch_f0f1.eps} \label{fig3} \end{figure} \begin{figure} \centering \caption{The distributions of $\Delta{\nu_{\rm g}}$ and $|\Delta{\dot\nu_{\rm g}}|$ of the Crab pulsar. Panel (a): the solid-black, dashed-blue and dashed-red lines represent the distributions of $\Delta{\nu_{\rm g}}$ for all glitches, for the glitches with delayed spin-up processes, and for the remaining glitches, respectively. Panel (b): the same description as panel (a), but for $|\Delta\dot{\nu_{\rm g}}|$. } \includegraphics[width=0.6\textwidth]{glitch_f0f1_dis.eps} \label{fig3_2} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.6]{nu0-tau.eps}\caption{ The correlations between the parameters of G1--5 with delayed spin-up processes. The unit of $|\Delta\nu_{\rm d1}|$, $\Delta\nu_{\rm d2}$, $\Delta\nu_{\rm p}$ and $\Delta\nu_{\rm g}$ is $\mu$Hz; $\Delta\dot\nu_{\rm p}$ is in units of pHz\,s$^{-1}$; $\tau_{1}$ and $\tau_{2}$ are in units of day. The dashed lines in panels (a)--(f) are the linear fitting results for the correlations. The errors of $\tau_{1}$ and $\Delta\nu_{\rm d1}$ for 1989 (G1) and 1996 (G2) are taken as zero, since \cite{Lyne1992} and \cite{Wong2001} did not report these values. \label{fig4}} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.6]{nu1-tau.eps}\caption{ The correlations between the parameters of G1--5 with delayed spin-up processes. The dashed lines in panels (a)--(f) are the linear fitting results for the correlations. The unit of $|\Delta\nu_{\rm d1}|$, $\Delta\nu_{\rm d2}$, $\Delta\nu_{\rm p}$ and $\Delta\nu_{\rm g}$ is $\mu$Hz. The unit of $|\Delta\dot\nu_{\rm d2}|$ and $|\Delta\dot\nu_{\rm p}|$ is pHz\,s$^{-1}$. $\tau_{1}$ and $\tau_{2}$ are in units of day. 
\label{fig5}} \end{center} \end{figure} \begin{table} \caption{Parameters of the Crab pulsar for G1--G5.} \scriptsize{} \label{table_timing_para} \begin{center} \begin{tabular}{l l l l l l}
\hline\hline Parameters & G1$^{(a)}$ & G2$^{(a)}$ & G3 & G4 & G5 \\ \hline
Epoch (MJD) & -- & -- & 53067 & 55867 & 58038\\
$\nu$(Hz) & -- & -- & 29.796943484(8) & $29.706916048(2)$ & $29.6375626144(3)$ \\
$\dot\nu$($10^{-10}$\,Hz\,s$^{-1}$)& -- & -- & -3.73308(5) & $-3.70743(3)$ & $-3.686433(3)$ \\
$\ddot{\nu}$($10^{-20}$\,Hz\,s$^{-2}$)& -- & -- & 1.3(2) & $0.96(17)$ & $0.81(3)$\\ \hline
Glitch epoch (MJD) & 47767.4 & 50259.93 & 53067.0780$^{(b)}$ & 55875.67(1) & 58064.548(2)\\
$ \Delta\nu_{p}$ ($\mu$Hz) & $2.38(2)$ & $0.31(3)$ & $0.93(2)$ & $0.49(3)$ & $5.7(9)$ \\
$ \Delta\dot\nu_{p}$ (pHz\,s$^{-1}$) & $-0.155(2)$ & $-0.083(6)$ & $-0.19(1)$ & $-0.09(1)$ & $-0.483(8)$ \\
$ \Delta\ddot\nu_{p}$ (10$^{-20}$Hz\,s$^{-2}$)& -- & $0.09(6)$ & -- & -- & -- \\
$\Delta\nu_{d1}$ ($\mu$Hz) & -0.7 & -0.31 & $-0.35(5)$ & $-0.43(5)$ & $-1.23(1)$\\
$\tau_{1}$ (day) & 0.8 & 0.5 & $1.7(8)$ & $1.6(4)$ & $2.56(4)$\\
$\Delta\nu_{d2}$ ($\mu$Hz) & 2.28 & 0.66 & $5.67(4)$ & $1.26(3)$ & $9.91(9)$\\
$\tau_{2}$ (day) & 18 & 10.3 & $24(1)$ & $10.6(3)$ & $45.9(3)$\\
Residuals ($\mu$s) & -- & -- & 214 & 311 & 113 \\
$\chi^{2}/{\rm d.o.f}$ (d.o.f) & -- & -- & 0.99(52) & 1.17(873) & 1.35(1269) \\ \hline
\end{tabular} \end{center} (a) The parameters are obtained from \cite{Wong2001}.\\ (b) Glitch epoch is adopted from http://www.atnf.csiro.au/people/pulsar/psrcat/glitchTbl.html\\ The confidence interval is 68.3\%. 
\end{table} \begin{table} \caption{Pearson correlations between the parameters for G1--G5.} \scriptsize{} \label{corre_para0} \begin{center} \begin{tabular}{l l l l l l l l l l l}
\hline\hline ${\rm R}$ & $\tau_{1}$ & $|\Delta\nu_{d1}|$ &$\Delta{\dot\nu}_{\rm d1}$& $\tau_{2}$ & $\Delta\nu_{d2}$ & $|\Delta{\dot\nu}_{\rm d2}|$ & $ \Delta\nu_{p}$ & $ \Delta\dot\nu_{p}$ & $ \Delta\nu_{\rm g}$ &$\hat{\rm Q}$ \\ \hline
$\tau_{1}$ & 1 & 0.67 & -0.55 & 0.83 & 0.87 & 0.80 & 0.84 & -0.82 & 0.83 & -0.45 \\
$|\Delta\nu_{d1}|$ & & 1 & 0.24 & 0.87 & 0.78 & 0.44 & 0.88 & -0.90 & 0.90 & -0.42 \\
$\Delta{\dot\nu}_{\rm d1}$ & & & 1 & -0.10& -0.25 & -0.50 & -0.14 & 0.10 & -0.10 & 0.22 \\
$\tau_{2}$ & & & & 1 & 0.98 & 0.76 & 0.95 & -0.99 & 0.99 & -0.43 \\
$\Delta\nu_{d2}$ & & & & & 1 & 0.86 & 0.92 & -0.96 & 0.97 & -0.4\\
$|\Delta{\dot\nu}_{\rm d2}|$ & & & & & & 1 & 0.60 & -0.69 & 0.74 & 0.02 \\
$ \Delta\nu_{p}$ & & & & & & & 1 & -0.98 & 0.95 & -0.67 \\
$ \Delta\dot\nu_{p}$ & & & & & & & & 1 & -0.99 & 0.52 \\
$ \Delta\nu_{\rm g}$ & & & & & & & & & 1 & -0.42 \\
$\hat{\rm Q}$ & & & & & & & & & & 1\\ \hline
\end{tabular} \end{center} \end{table} \begin{table} \caption{Spearman correlations between the parameters for G1--G5.} \scriptsize{} \label{corre_para1} \begin{center} \begin{tabular}{l l l l l l l l l l l}
\hline\hline ${\rm \rho}$ & $\tau_{1}$ & $|\Delta\nu_{d1}|$ &$\Delta{\dot\nu}_{\rm d1}$& $\tau_{2}$ & $\Delta\nu_{d2}$ & $|\Delta{\dot\nu}_{\rm d2}|$ & $ \Delta\nu_{p}$ & $ \Delta\dot\nu_{p}$ & $ \Delta\nu_{\rm g}$ &$\hat{\rm Q}$ \\ \hline
$\tau_{1}$ & 1 & 0.6 & -0.6 & 0.9 & 0.9 & 0.8 & 0.9 & -0.7 & 0.9 & -0.3 \\
$|\Delta\nu_{d1}|$ & & 1 & 0.2 & 0.7 & 0.7 & 0.4 & 0.3 & -0.5 & 0.7 & -0.1 \\
$\Delta{\dot\nu}_{\rm d1}$ & & & 1 & -0.3 & -0.3 & -0.5 & -0.7 & 0.1 & -0.3 & 0.1 \\
$\tau_{2}$ & & & & 1 & 1.0 & 0.9 & 0.7 & -0.9 & 1.0 & -0.1 \\
$\Delta\nu_{d2}$ & & & & & 1 & 0.9 & 0.7 & -0.9 & 1.0 & -0.1\\
$|\Delta{\dot\nu}_{\rm d2}|$ & & & & & & 1 & 0.6 & -0.8 & 0.9 & 0.2 \\
$ \Delta\nu_{p}$ & & & & & & & 1 & -0.6 & 0.7 & -0.6 \\
$ \Delta\dot\nu_{p}$ & & & & & & & & 1 & -0.9 & 0.2 \\
$ \Delta\nu_{\rm g}$ & & & & & & & & & 1 & -0.1 \\
$\hat{\rm Q}$ & & & & & & & & & & 1\\ \hline
\end{tabular} \end{center} \end{table} \clearpage
\section{Introduction} We consider the problem of single rotation averaging, i.e., averaging several estimates of a single rotation to obtain the best estimate. This problem is relevant in many applications such as structure from motion (SfM) \cite{hartley2011L1, tron2016survey}, camera rig calibration \cite{dai2009rotation}, motion capture \cite{inna2010arithmetic}, satellite/spacecraft attitude determination \cite{lam2007precision,markley2007averaging} and crystallography \cite{humbert1996determination, morawiec1998note}. A standard approach for single rotation averaging is to find the rotation that minimizes a cost function based on the distance to the input rotations. We refer to \cite{hartley2013rotation} for an extensive study of various distance functions. The current state-of-the-art method is to minimize the sum of geodesic distances using the Weiszfeld algorithm on $SO(3)$ \cite{hartley2011L1}. In this work, we propose a novel method, also based on the Weiszfeld algorithm \cite{weiszfeld1,weiszfeld2}, that is faster and more robust than \cite{hartley2011L1}. Our contributions are as follows: \begin{enumerate}\itemsep0em \item A robust initialization from the elementwise median of the input rotation matrices (Section \ref{subsec:init}). \item An implicit outlier rejection scheme performed at each iteration of the Weiszfeld algorithm (Section \ref{subsec:outlier}). \item An approximation of the chordal median in $SO(3)$ using the Weiszfeld algorithm (Section \ref{subsec:chordal}). \end{enumerate} We substantiate our claim through extensive evaluation on synthetic data (Section \ref{sec:result}). To download our Matlab code, go to \url{http://seonghun-lee.github.io}. \section{Preliminaries} We denote the vectorization of an $n\times m$ matrix by $\mathrm{vec}(\cdot)$ and its inverse by $\mathrm{vec}^{-1}_{n\times m}(\cdot)$. 
For a 3D vector $\mathbf{v}$, we define $\mathbf{v}^\wedge$ as the corresponding $3\times3$ skew-symmetric matrix, and denote the inverse operator by $(\cdot)^\vee$, i.e., $\left(\mathbf{v}^\wedge\right)^\vee=\mathbf{v}$. The Euclidean, the $L_1$ and the Frobenius norms are denoted by $\lVert \cdot \rVert$, $\lVert\cdot\rVert_1$ and $\lVert \cdot \rVert_F$, respectively. A rotation can be represented by a rotation matrix $\mathbf{R}\in SO(3)$ or a rotation vector $\mathbf{v}=\theta\hat{\mathbf{v}}$, where $\theta$ and $\hat{\mathbf{v}}$ are the angle and the unit axis of the rotation, respectively. The two representations are related by the Rodrigues formula, and we denote the corresponding mappings between them by $\text{Exp}(\cdot)$ and $\text{Log}(\cdot)$ \cite{forster2017onmanifold}: \begin{equation} \mathbf{R}=\mathrm{Exp}(\mathbf{v}) := \mathbf{I}+\frac{\sin\left(\lVert\mathbf{v}\rVert\right)}{\lVert\mathbf{v}\rVert}\mathbf{v}^\wedge+\frac{1-\cos\left(\lVert\mathbf{v}\rVert\right)}{\lVert\mathbf{v}\rVert^2}\left(\mathbf{v}^\wedge\right)^2, \end{equation} \begin{equation} \label{eq:log_map1} \mathbf{v}=\mathrm{Log}(\mathbf{R}):= \frac{\theta}{2\sin(\theta)}\left(\mathbf{R}-\mathbf{R}^\top\right)^\vee \end{equation} \begin{equation} \label{eq:log_map2} \text{with} \quad \theta = \cos^{-1}\left(\frac{\mathrm{tr}(\mathbf{R})-1 }{2}\right). \end{equation} The geodesic distance between two rotations $d_\angle(\mathbf{R}_1, \mathbf{R}_2)$ is obtained by substituting $\mathbf{R}_1\mathbf{R}_2^\top$ into $\mathbf{R}$ in \eqref{eq:log_map2}. In \cite{hartley2013rotation}, it was shown that the chordal distance is related to the geodesic distance by the following equation: \begin{align} d_\text{chord}(\mathbf{R}_1, \mathbf{R}_2):=& \lVert\mathbf{R}_1-\mathbf{R}_2\rVert_F\\ =&2\sqrt{2}\sin\left(d_\angle(\mathbf{R}_1, \mathbf{R}_2)/2\right) \label{eq:chordal}. 
\end{align} We define $\mathrm{proj}_{SO(3)}(\cdot)$ as the projection of a $3\times3$ matrix onto the special orthogonal group $SO(3)$, which gives the closest rotation in the Frobenius norm \cite{arun}: For $\mathbf{M}\in\mathbb{R}^{3\times3}$, \begin{equation} \label{eq:orthogonal_projection} \mathrm{proj}_{SO(3)}(\mathbf{M}):=\mathbf{UWV}^\top, \end{equation} where \begin{align} \mathbf{U}\bm{\Sigma}\mathbf{V}^\top &= \text{SVD}\left(\mathbf{M}\right), \\ \mathbf{W} &= \begin{cases} \text{diag}(1,1,-1) & \text{if} \ \ \det\left(\mathbf{UV}^\top\right) < 0\\ \mathbf{I}_{3\times3} & \text{otherwise} \end{cases}. \end{align} \newpage \section{Method} \subsection{Robust Initialization} \label{subsec:init} In \cite{hartley2011L1}, the chordal $L_2$-mean of the rotations is taken as the starting point of the Weiszfeld algorithm. For input rotations $\{\mathbf{R}_i\}_{i=1}^N$, it is given by $\text{proj}_{SO(3)}\left(\sum_{i=1}^N\mathbf{R}_i\right)$ \cite{hartley2013rotation}. Although this initial solution can be obtained very quickly, it is often inaccurate and sensitive to outliers. To overcome this weakness, we propose to initialize using the following matrix: \begin{equation} \mathbf{S}_0=\argmin_{\textstyle\mathbf{S}\in\mathbb{R}^{3\times3}} \sum_{i=1}^N\sum_{j=1}^3\sum_{k=1}^3 \left|\left(\mathbf{R}_i-\mathbf{S}\right)_{jk}\right|, \end{equation} where the subscript $jk$ denotes the element at the $j$-th row and the $k$-th column of the matrix. Note that $\sum_{j,k}|\mathbf{M}_{jk}|$ is called the elementwise $L_1$ norm of the matrix $\mathbf{M}$. See Fig. \ref{fig:entryL1} for the geometric interpretation of this distance metric. Since the nine entries of $\mathbf{S}$ are independent, we can consider them separately in 1D space. 
Then, the entry of $\mathbf{S}_0$ at location $(j, k)$ minimizes the sum of absolute deviations from the entries of $\mathbf{R}_i$'s at $(j, k)$, meaning that it is simply their median: \begin{equation} \left(\mathbf{S}_0\right)_{jk} = \text{median}\left(\{\left(\mathbf{R}_i\right)_{jk}\}_{i=1}^N\right) \ \ \text{for all}\ \ j,k \in \{1,2,3\}. \end{equation} The initial rotation matrix is then obtained by projecting $\mathbf{S}_0$ onto $SO(3)$: \begin{equation} \label{eq:init_proj} \mathbf{R}_0 = \mathrm{proj}_{SO(3)} \left(\mathbf{S}_0\right). \end{equation} \begin{figure}[t] \centering \includegraphics[width=0.33\textwidth]{entryL1.pdf} \caption{The elementwise $L_1$ norm of $(\mathbf{R}-\mathbf{S})$ is equal to $\sum_{i=1}^3\lVert\mathbf{r}_i-\mathbf{s}_i\rVert_1$ where $\mathbf{R}=[\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}_3]$ and $\mathbf{S}=[\mathbf{s}_1,\mathbf{s}_2,\mathbf{s}_3]$. This can be thought of as the total length of the green lines. } \label{fig:entryL1} \end{figure} \subsection{Outlier Rejection in the Weiszfeld Algorithm} \label{subsec:outlier} The geodesic $L_1$-mean (i.e., median) of the rotations is defined as \begin{equation} \mathbf{R}_\mathrm{gm}=\argmin_{\mathbf{R}\in SO(3)} \sum_{i=1}^N d_\angle(\mathbf{R}_i, \mathbf{R}). \end{equation} In \cite{hartley2011L1}, it was shown that this can be computed using the Weiszfeld algorithm on $SO(3)$ and that it is more robust to outliers than the $L_2$-mean. However, a large number of outliers can still severely degrade the accuracy. To further mitigate the influence of the outliers, we modify the Weiszfeld algorithm of \cite{hartley2011L1} such that the large residuals are given zero weight at each iteration. Specifically, we disregard all the residuals larger than $\text{max}(d_{Q1}, d_\text{max})$, where $d_{Q1}$ is the first quartile of the residuals at each iteration and $d_\text{max}$ is a threshold we set in order to avoid discarding inliers. The details are given in Algorithm \ref{al:geodesic}. 
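A compact sketch of this scheme, with \texttt{scipy}'s \texttt{Rotation} supplying the $\mathrm{Exp}/\mathrm{Log}$ maps, is given below. It is a simplified illustration (fixed $d_\text{max}$, no perturbation step), not our reference implementation:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def proj_so3(M):
    # Closest rotation in the Frobenius norm (SVD projection).
    U, _, Vt = np.linalg.svd(M)
    W = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ W @ Vt

def geodesic_median(Rs, d_max=0.5, iters=10):
    Rs = np.asarray(Rs)
    R = proj_so3(np.median(Rs, axis=0))                 # elementwise-median init
    for _ in range(iters):
        v = Rotation.from_matrix(Rs @ R.T).as_rotvec()  # v_i = Log(R_i R^T)
        d = np.linalg.norm(v, axis=1) + 1e-12
        thr = max(np.percentile(d, 25), d_max)          # max(first quartile, d_max)
        w = (d <= thr) / d                              # zero weight for large residuals
        step = (w[:, None] * v).sum(axis=0) / w.sum()   # Weiszfeld update
        R = Rotation.from_rotvec(step).as_matrix() @ R
        if np.linalg.norm(step) < 1e-3:
            break
    return R

# Demo: 80 noisy inliers around a known rotation plus 20 random outliers.
rng = np.random.default_rng(2)
R_true = Rotation.from_rotvec([0.3, -0.2, 0.5]).as_matrix()
inliers = Rotation.from_rotvec(rng.normal(0.0, 0.05, (80, 3))).as_matrix() @ R_true
outliers = Rotation.random(20, random_state=3).as_matrix()
R_est = geodesic_median(np.concatenate([inliers, outliers]))
err = Rotation.from_matrix(R_est @ R_true.T).magnitude()  # residual angle (rad)
```

With a majority of inliers, the hard zero-weighting behaves like the quartile-based rejection described above while keeping each iteration a plain weighted average of the tangent-space residuals.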
A similar approach of disregarding large residuals was used in \cite{ferraz2014very} for the robust Perspective-$n$-Point (PnP) problem. Note that our approach contrasts with \cite{aftab2015convergence}, where a smooth robust cost function is favored for theoretically guaranteed convergence. In practice, our method is more robust to outliers (see Section \ref{subsec:result_comparison}). \subsection{Approximate Chordal $L_1$-Mean} \label{subsec:chordal} The chordal $L_1$-mean of the rotations is defined as \begin{equation} \mathbf{R}_\mathrm{cm}=\argmin_{\mathbf{R}\in SO(3)} \sum_{i=1}^N d_\text{chord}(\mathbf{R}_i, \mathbf{R}). \end{equation} In \cite{hartley2013rotation}, a locally convergent algorithm on $SO(3)$ is proposed for this problem. In this work, we propose a different approach: instead of iteratively updating the estimate on $SO(3)$, we first embed the rotations in the Euclidean space $\mathbb{R}^9$, find their geometric median in $\mathbb{R}^9$ using the standard Weiszfeld algorithm \cite{weiszfeld1,weiszfeld2} (which is globally convergent), and then project this median onto $SO(3)$. In other words, we approximate $\mathbf{R}_\mathrm{cm}$ as \begin{equation} \mathbf{R}_\mathrm{cm}\approx\mathrm{proj}_{SO(3)} \left(\mathbf{S}_\mathrm{cm}\right) \end{equation} \begin{align} \text{with} \ \ \mathbf{S}_\mathrm{cm}&=\argmin_{\textstyle\mathbf{S}\in\mathbb{R}^{3\times3}}\sum_{i=1}^N\lVert\mathbf{R}_i-\mathbf{S}\rVert_F\\ &=\text{vec}^{-1}_{3\times3}\left(\argmin_{\textstyle\mathbf{s}\in\mathbb{R}^9}\sum_{i=1}^N\lVert\text{vec}\left(\mathbf{R}_i\right)-\mathbf{s}\rVert\right). \end{align} Since the optimization is performed using the Weiszfeld algorithm, we can also incorporate the initialization and outlier rejection schemes of the previous sections. Algorithm \ref{al:chordal} summarizes our method. 
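The embedding-based approximation can be sketched as follows. This is a simplified illustration without the outlier rejection; the row-major flattening used here differs from a column-stacking $\mathrm{vec}(\cdot)$, but the norms, and hence the Weiszfeld iteration, are unaffected:

```python
import numpy as np

def proj_so3(M):
    # Closest rotation in the Frobenius norm (SVD projection).
    U, _, Vt = np.linalg.svd(M)
    W = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ W @ Vt

def approx_chordal_median(Rs, iters=20, tol=1e-6):
    X = np.asarray(Rs).reshape(len(Rs), 9)  # embed the rotations in R^9
    s = np.median(X, axis=0)                # elementwise-median initialization
    for _ in range(iters):
        d = np.linalg.norm(X - s, axis=1) + 1e-12
        s_new = (X / d[:, None]).sum(axis=0) / (1.0 / d).sum()  # Weiszfeld step
        converged = np.linalg.norm(s_new - s) < tol
        s = s_new
        if converged:
            break
    return proj_so3(s.reshape(3, 3))        # project the median back onto SO(3)

# Demo: four rotations about the z-axis clustered near 0.1 rad plus one outlier.
def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

Rs = np.array([rot_z(a) for a in (0.10, 0.12, 0.08, 0.11, 2.5)])
R_est = approx_chordal_median(Rs)
angle_est = np.arctan2(R_est[1, 0], R_est[0, 0])  # lands near the inlier cluster
```

Because the iteration lives entirely in $\mathbb{R}^9$, it inherits the global convergence of the standard Weiszfeld algorithm; only the final projection step is rotation-specific.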
We point out two things in the implementation: First, since we do not optimize on $SO(3)$, the initial estimate does not have to come from a rotation, and we omit \eqref{eq:init_proj}. Second, the threshold $d_\text{max}$ must be scaled appropriately when comparing Algorithm \ref{al:geodesic} and \ref{al:chordal}. Assuming that $\mathbf{s}\in\mathbb{R}^9$ at each iteration does not vastly differ from an embedding of a rotation in $\mathbb{R}^9$, we convert $d_\text{max}$ from geodesic to chordal using \eqref{eq:chordal}, and vice versa. This is done in line \ref{al:chordal:thr} of Algorithm \ref{al:chordal}. \begin{algorithm}[t] \label{al:geodesic} \caption{Geodesic median in $SO(3)$ \cite{hartley2011L1} with outlier rejection} \DontPrintSemicolon \KwInput{List of rotation matrices $\{\mathbf{R}_i\}_{i=1}^N$} \KwOutput{$\mathbf{R}_\mathrm{gm}$} \tcc{Initialize (Section \ref{subsec:init}).} $\mathbf{S}_0 \gets \mathbf{0}_{3\times3}$; $\left(\mathbf{S}_0\right)_{jk} \gets \text{median}\left(\{\left(\mathbf{R}_i\right)_{jk}\}_{i=1}^N\right) \ \ \forall j,k = 1,2,3$; $\mathbf{R}_0 \gets \mathrm{proj}_{SO(3)}\left(\mathbf{S}_0\right)$; \label{al:geodesic:init} \tcc{Run the Weiszfeld algorithm on SO(3) \cite{hartley2011L1} with outlier rejection (Section \ref{subsec:outlier}).\hspace{-3em}} $\mathbf{R}_\mathrm{gm} \gets \mathbf{R}_0$; \For{$\mathrm{it}=1,2, \cdots,10$} { \While{$\mathbf{R}_\mathrm{gm}\in\{{R}_i\}_{i=1}^N$} { $\mathbf{R}_\mathrm{gm}\gets\mathbf{R}_\text{perturb}\mathbf{R}_\mathrm{gm}$;\hspace{-1em} \tcp*{Perturb slightly} } $\mathbf{v}_i \gets \mathrm{Log}\left(\mathbf{R}_i\mathbf{R}_\mathrm{gm}^\top\right) \ \forall i=1,\cdots,N$; $d_i \gets \lVert \mathbf{v}_i \rVert \ \ \forall i=1,\cdots,N$; $d_\text{Q1} \gets Q_1\left(\{d_1, \cdots, d_N\}\right)$; \hspace{-1em}\tcp*{First quartile} $d_\text{max} \gets \begin{cases}1 & \text{if} \ N \leq 50 \\ 0.5 & \text{otherwise}\end{cases}$ $d_\text{thr} \gets \max\left(d_\text{Q1}, d_\text{max}\right)$; $w_i 
\gets \begin{cases}1 & \text{if} \ d_i \leq d_\text{thr} \\ 0 & \text{otherwise}\end{cases} \ \ \forall i=1,\cdots,N$; $\displaystyle \Delta\mathbf{v} \gets \frac{\sum_{i=1}^Nw_i\mathbf{v}_i/d_i}{\sum_{i=1}^Nw_i/d_i}$; $\mathbf{R}_\mathrm{gm}\gets\mathrm{Exp}(\Delta\mathbf{v})\mathbf{R}_\mathrm{gm}$; \If{$\lVert\Delta\mathbf{v}\rVert<0.001$} { break; } } \Return{$\mathbf{R}_\mathrm{gm}$} \end{algorithm} \begin{algorithm}[t] \label{al:chordal} \caption{Approximate chordal median in $SO(3)$ with outlier rejection} \DontPrintSemicolon \KwInput{List of rotation matrices $\{\mathbf{R}_i\}_{i=1}^N$} \KwOutput{$\mathbf{R}_\mathrm{cm}$} \tcc{Initialize (Section \ref{subsec:init}).} $\mathbf{S}_0 \gets \mathbf{0}_{3\times3}$; $\left(\mathbf{S}_0\right)_{jk} \gets \text{median}\left(\{\left(\mathbf{R}_i\right)_{jk}\}_{i=1}^N\right) \ \ \forall j,k =1,2,3$; \tcc{Run the Weiszfeld algorithm in 9D space with outlier rejection (Section \ref{subsec:outlier}).\hspace{-3em}} $\mathbf{s}_\mathrm{cm} \gets \mathrm{vec}(\mathbf{S}_0)$; \For{$\mathrm{it}=1,2, \cdots, 10$} { \While{$\mathbf{s}_\mathrm{cm}\in\{\mathrm{vec}(\mathbf{R}_i)\}_{i=1}^N$} { $\mathbf{s}_\mathrm{cm}\gets\mathbf{s}_\mathrm{cm}+\mathcal{U}(0, 0.001)$; \tcp*{Perturb} } $\mathbf{v}_i \gets \text{vec}\left(\mathbf{R}_i\right)-\mathbf{s}_\mathrm{cm} \ \forall i=1,\cdots,N$; $d_i \gets \lVert \mathbf{v}_i \rVert \ \ \forall i=1,\cdots,N$; $d_\text{Q1} \gets Q_1\left(\{d_1, \cdots, d_N\}\right)$; \hspace{-1em}\tcp*{First quartile} $d_\text{max} \gets \begin{cases}2\sqrt{2}\sin(1/2)\approx1.356 & \text{if} \ N \leq 50 \\ 2\sqrt{2}\sin(0.5/2)\approx 0.700 & \text{otherwise}\end{cases}$ \label{al:chordal:thr} $d_\text{thr} \gets \max\left(d_\text{Q1}, d_\text{max}\right)$; $w_i \gets \begin{cases}1 & \text{if} \ d_{i} \leq d_\text{thr} \\ 0 & \text{otherwise}\end{cases} \ \ \forall i=1,\cdots,N$; $\mathbf{s}_\mathrm{cm, prev}\gets \mathbf{s}_\mathrm{cm}$; $\displaystyle \mathbf{s}_\mathrm{cm} \gets \mathbf{s}_\mathrm{cm} + \frac{\sum_{i=1}^N w_i\mathbf{v}_i/d_i}{\sum_{i=1}^N w_i/d_i}$; \If{$\lVert\mathbf{s}_\mathrm{cm}-\mathbf{s}_\mathrm{cm, prev}\rVert<0.001$} { break; } } $\mathbf{R}_\mathrm{cm}=\text{proj}_{SO(3)}(\mathrm{vec}^{-1}_{3\times3}\left(\mathbf{s}_\mathrm{cm}\right))$; \Return{$\mathbf{R}_\mathrm{cm}$} \end{algorithm} \section{Results} \label{sec:result} \subsection{Initialization} \label{subsec:result_init} For evaluation, we generated a synthetic dataset where the inlier rotations follow a Gaussian distribution with $\sigma=5^\circ$, and the outliers have uniformly distributed angles $\in[0,\pi]$ at random directions. Fig. \ref{fig:init_accuracy} compares the average accuracy of the proposed initial solution (Section \ref{subsec:init}) and the chordal $L_2$-mean \cite{hartley2013rotation} over 1000 runs. It can be seen that our solution is significantly better than the chordal $L_2$-mean unless the outlier ratio is extremely high (i.e., above $90\%$). On average, the chordal $L_2$-mean takes 0.37 $\mu$s and ours 0.83 $\mu$s per rotation. This time difference is insignificant compared to the optimization that follows (see Tab. \ref{tab:timings}). \subsection{Comparison against \cite{hartley2011L1}} \label{subsec:result_comparison} Using the same setup as in the previous section, we compare Algorithms \ref{al:geodesic} and \ref{al:chordal}, with and without the proposed outlier rejection scheme (Section \ref{subsec:outlier}). This time, we consider two different inlier noise levels, $\sigma=5^\circ$ and $15^\circ$. The average accuracy of the evaluated methods\footnote{We did not include the chordal $L_2$-mean here, since it produced much larger errors than the rest and was already reported in Fig. \ref{fig:init_accuracy}.} is compared in Fig. \ref{fig:final_avg}. With the outlier rejection, the geodesic $L_1$-mean and our approximate chordal $L_1$-mean are almost equally accurate.
Without the outlier rejection, the geodesic $L_1$-mean is more accurate than our approximate chordal $L_1$-mean, but only for very high outlier ratios (i.e., $>50\%$). Otherwise, there is no significant difference between the two. The computation times are reported in Tab. \ref{tab:timings}. Our method is always faster than \cite{hartley2011L1}, and is 2--4 times faster with the outlier rejection. That said, the speed is not a major advantage, since all methods can process several hundred rotations in less than a millisecond. In most cases, averaging rotations will take much less time than other operations, such as the computation of input rotations. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{init.pdf} \caption{Average rotation errors of different initialization methods: Chordal $L_2$-mean \cite{hartley2011L1} versus ours (Section \ref{subsec:init}). } \label{fig:init_accuracy} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{final_avg.pdf} \caption{Average rotation errors of geodesic $L_1$-mean \cite{hartley2011L1} versus ours (Section \ref{subsec:chordal}), with and without the outlier rejection (Section \ref{subsec:outlier}). } \label{fig:final_avg} \end{figure*} \section{Conclusions} In this work, we proposed a novel alternative to the work of Hartley et al. \cite{hartley2011L1} for robust single rotation averaging. While both our method and \cite{hartley2011L1} use the Weiszfeld algorithm, there are three key differences: \vspace{-0.3em} \begin{enumerate}\itemsep-0.1em \item We initialize the Weiszfeld algorithm using the elementwise median of the input rotation matrices. \item We implicitly disregard the outliers at each iteration of the Weiszfeld algorithm. \item We approximate the chordal median on $SO(3)$ instead of the geodesic median as in \cite{hartley2011L1}. \end{enumerate} \vspace{-0.3em} As a result, our method achieves better performance in terms of speed and robustness to outliers.
We also found that incorporating the proposed outlier rejection in the original implementation of \cite{hartley2011L1} leads to similar performance, but at 2--4 times slower speed than ours. {\renewcommand{\arraystretch}{1.2}% \vspace{-1em} \begin{table}[ht] \small \begin{center} \begin{tabular}{c|cc|cc|} \cline{2-5} & \multicolumn{2}{c|}{w/o outlier rejection} & \multicolumn{2}{c|}{w/ outlier rejection} \\ & \cite{hartley2011L1}& Ours & \cite{hartley2011L1} & Ours \\ \hline \multicolumn{1}{|l|}{($5^\circ$, $0\%$)}& 8.69& \textbf{4.38} (2.0$\times$) &9.37& \textbf{4.43} (2.1$\times$) \\ \multicolumn{1}{|l|}{($5^\circ$, $25\%$)} & 10.5 & \textbf{4.47} (2.3$\times$) &11.7 & \textbf{5.68} (2.1$\times$)\\ \multicolumn{1}{|l|}{($5^\circ$, $50\%$)} & 15.2 & \textbf{6.21} (2.4$\times$)& 17.2 & \textbf{6.78} (2.5$\times$) \\ \multicolumn{1}{|l|}{($5^\circ$, $75\%$)}& 24.9 & \textbf{15.2} (1.6$\times$) & 27.2 & \textbf{7.71} (3.5$\times$) \\ \multicolumn{1}{|l|}{($5^\circ$, $95\%$)} & 32.1& \textbf{10.3} (3.1$\times$) & 31.6& \textbf{8.99} (3.5$\times$) \\ \multicolumn{1}{|l|}{($15^\circ$, $0\%$)} & 10.7 & \textbf{6.00} (1.8$\times$)& 17.0 & \textbf{6.02} (2.8$\times$) \\ \multicolumn{1}{|l|}{($15^\circ$, $25\%$)} & 15.0 & \textbf{5.98} (2.5$\times$)& 17.0 & \textbf{7.1} (2.4$\times$) \\ \multicolumn{1}{|l|}{($15^\circ$, $50\%$)} & 19.6 & \textbf{7.66} (2.6$\times$)& 22.3 & \textbf{8.06} (2.8$\times$) \\ \multicolumn{1}{|l|}{($15^\circ$, $75\%$)} & 24.9& \textbf{11.1} (2.2$\times$)&28.1 & \textbf{8.77} (3.2$\times$) \\ \multicolumn{1}{|l|}{($15^\circ$, $95\%$)} & 29.1 & \textbf{10.2} (2.9$\times$)&31.7 & \textbf{8.50} (3.7$\times$) \\ \hline \end{tabular} \end{center} \caption{Median computation time ($\mu s$/rotation) under different inlier noise levels and outlier ratios. The speedup compared to \cite{hartley2011L1} is given in parentheses. 
All algorithms were implemented in MATLAB and run on a laptop CPU (Intel i7-4810MQ, 2.8 GHz).} \label{tab:timings} \vspace{-5em} \end{table} } \cleardoublepage {\small \balance \bibliographystyle{ieee_fullname}
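For readers who prefer code, the approximate chordal median with quartile-based outlier rejection can be sketched in a few lines of NumPy. This is our illustrative port, not the authors' MATLAB implementation: the projection onto $SO(3)$ uses the standard SVD-based nearest-rotation construction, and coincidences between the iterate and a data point are handled by clamping the distances instead of the random perturbation used in the pseudocode.

```python
import numpy as np

def proj_so3(M):
    # Project a 3x3 matrix onto SO(3) via SVD (nearest rotation in Frobenius norm).
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

def chordal_median(Rs, max_iter=10, tol=1e-3):
    """Approximate chordal L1-median of rotations with outlier rejection."""
    Rs = np.asarray(Rs)                     # (N, 3, 3)
    N = len(Rs)
    X = Rs.reshape(N, 9)                    # vec embedding in R^9
    s = np.median(X, axis=0)                # elementwise-median initialization
    # Geodesic threshold (1 rad or 0.5 rad) converted to chordal distance.
    d_max = 2.0 * np.sqrt(2.0) * np.sin((1.0 if N <= 50 else 0.5) / 2.0)
    for _ in range(max_iter):
        v = X - s                           # residual vectors v_i
        d = np.linalg.norm(v, axis=1)
        d = np.maximum(d, 1e-12)            # clamp instead of random perturbation
        d_thr = max(np.percentile(d, 25), d_max)
        w = (d <= d_thr).astype(float)      # implicit outlier rejection
        step = (w / d) @ v / np.sum(w / d)  # weighted Weiszfeld update
        s = s + step
        if np.linalg.norm(step) < tol:
            break
    return proj_so3(s.reshape(3, 3))
```

With a majority of identical inlier rotations and a few gross outliers, the median recovers the inlier rotation exactly, since the elementwise median already lands on it and the outliers fall outside `d_thr`.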
\section{Introduction} Unsupervised learning extracts features from unlabeled data to describe hidden structure, which is arguably more attractive, compelling and challenging than supervised learning. One unsupervised application which has gained momentum in recent years is the task of generating images. The most common image generation models fall into two main approaches. The first one is based on probabilistic generative models, which include autoencoders\cite{rumelhart1985learning} and powerful variants\cite{vincent2008extracting,bengio2007greedy,vincent2010stacked}. The second class, which is the focus of this paper, is called Generative Adversarial Networks (GANs)\cite{goodfellow2014generative}. These networks combine a generative network and a discriminative network. The advantage of these networks is that they can be trained with backpropagation. In addition, since the discriminator network is a convolutional network, these networks optimize an objective which reflects human perception of images (something which is not true when minimizing a Euclidean reconstruction error). Since their introduction, GANs have been applied to a wide range of applications and several improvements have been proposed. Radford et al.~\cite{radford2015unsupervised} propose and evaluate several constraints on the network architecture, thereby significantly improving stability during training. They call this class of GANs Deep Convolutional GANs (DCGAN), and we will use these GANs in our experiments. Denton et al.~\cite{denton2015deep} propose a Laplacian pyramid framework based on a cascade of convolutional networks to synthesize images at multiple resolutions. Further improvements on stability and synthesis quality have been proposed in~\cite{chen2016infogan,donahue2016adversarial,im2016generating,salimans2016improved}.
Several works have shown that, for discriminatively trained CNNs, applying an ensemble of networks is a straightforward way to improve results~\cite{krizhevsky2012imagenet,wang2015unsupervised,zeiler2014visualizing}. The ensemble is formed by training several instances of a network from different initializations on the same dataset, and combining them, e.g., by simple probability averaging. Krizhevsky et al.~\cite{krizhevsky2012imagenet} applied seven networks to improve results for image classification on ImageNet. Similarly, Wang and Gupta~\cite{wang2015unsupervised} showed a significant increase in performance using an ensemble of three CNNs for object detection. These works show that ensembles are a relatively easy (albeit computationally expensive) way to improve results. To the best of our knowledge, the usage of ensembles has not yet been evaluated for GANs. Here we investigate whether ensembles of GANs generate model distributions which more closely resemble the data distribution. We investigate several strategies to train an ensemble of GANs. Similarly to~\cite{krizhevsky2012imagenet}, we train several GANs from scratch from the data and combine them to generate the data (we will refer to these as standard ensembles). When training GANs, the minimax game prevents the networks from converging; instead, the networks $G$ and $D$ keep changing. Comparing images generated in successive epochs shows that even after many epochs these networks generate significantly different images. Based on this observation we propose self-ensembles, which are generated by several models $G$ which only differ in the number of training iterations but originate from the same network initialization. This has the advantage that they can be trained much faster than standard ensembles.
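To make the sampling procedure concrete: once an ensemble is trained, each generated sample is drawn from one member chosen uniformly at random (for a standard ensemble the members are independently trained generators; for a self-ensemble they are snapshots of one training run). A minimal sketch, where hypothetical one-dimensional affine maps stand in for trained generator networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for trained generators: each maps noise z to a sample.
# In an eGANs these would be independently trained networks; in a seGANs,
# snapshots of a single network at different training iterations.
generators = [
    lambda z: 1.0 * z + 0.0,
    lambda z: 1.1 * z + 0.1,
    lambda z: 0.9 * z - 0.1,
]

def sample_ensemble(generators, n, rng):
    """Draw n samples, picking a generator uniformly at random for each one."""
    choices = rng.integers(len(generators), size=n)
    z = rng.standard_normal(n)
    return np.array([generators[k](zi) for k, zi in zip(choices, z)])

samples = sample_ensemble(generators, 10000, rng)
```

The resulting model distribution is the uniform mixture of the members' distributions, which is what the ensembles evaluated below produce.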
In a recent study on the difficulty of evaluating generative models, Theis et al.~\cite{theis2015note} pointed out the danger that GANs could be quite accurately modeling part of the data distribution while completely failing to model other parts of the data. This problem would not easily show up in an inspection of the visual quality of the generated examples. The fact that the score of the discriminative network $D$ for these not-modelled regions is expected to be high (images in these regions would be easy to recognize as coming from the true data because there are no similar images generated by the generative network) is the basis of the third ensemble method we evaluate. We call this method a cascade ensemble of GANs. We redirect part of the data which is badly modelled by the generative network $G$ to a second GAN, which can then concentrate on generating images according to this distribution. We evaluate results of ensembles of GANs on the CIFAR10 dataset, and show that, when evaluated for image retrieval, ensembles of GANs have a lower average distance to query images from the test set, indicating that they better model the data distribution. \section{Generative Adversarial Network} A GAN is a framework consisting of a deep generative model $G$ and a discriminative model $D$, both of which play a minimax game. The aim of the generator is to generate a distribution $p_g$ that is similar to the real data distribution $p_{data}$ such that the discriminative network cannot distinguish between the images from the real distribution and the ones which are generated (the model distribution).
\begin{figure}[tb] \begin{subfigure}{0.5\textwidth} \includegraphics[width=\linewidth]{72.eps} \end{subfigure} \hspace*{\fill} \begin{subfigure}{0.5\textwidth} \includegraphics[width=\linewidth]{73.eps} \end{subfigure} \caption{Two hundred images generated from the same random noise with DCGAN on the CIFAR dataset after 72 (left) and 73 (right) epochs of training from the same network initialization. In the minimax game the generator and discriminator keep on changing. The resulting distributions $p_g$ are clearly different (for example, very few saturated greens in the images on the right). } \label{fig:GANepochs} \end{figure} Let $x$ be a real image drawn from the real data distribution $p_{data}$ and $z$ be random noise. The noise variable $z$ is transformed into a sample $G(z)$ by a generator network $G$ which synthesizes samples from the distribution $p_g$. The discriminative model $D(x)$ computes the probability that input data $x$ is from $p_{data}$ rather than from the generated model distribution $p_g$. Ideally, $D(x) = 0$ if $x\sim p_g$ and $D(x) = 1$ if $x\sim p_{data}$. More formally, the generative model and discriminative model are trained by solving: \begin{equation} \underset{G}{\text{min}}\ \underset{D}{\text{max}}\ V\left ( D, G\right) = E_{x\sim p_{data}}\left [ \log D\left ( x \right ) \right ] + E_{z\sim noise}\left [ \log\left( 1-D\left( G(z) \right) \right) \right ] \end{equation} In our implementation of GAN we use the DCGAN\cite{radford2015unsupervised}, which improved the quality of GANs by the usage of strided convolutions and fractional-strided convolutions instead of pooling layers in both generator and discriminator, as well as the ReLU and leaky ReLU activation functions. We briefly describe two observations which are particular to GANs and which are the motivations for the ensemble models we discuss in the next section.
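For intuition (a standard result from \cite{goodfellow2014generative}, added here for completeness): for a fixed generator, the inner maximization of the objective above is attained by $D^*(x) = p_{data}(x)/(p_{data}(x) + p_g(x))$, and substituting $D^*$ back gives $V(D^*, G) = 2\,\mathrm{JSD}(p_{data}\,\|\,p_g) - 2\log 2$. This identity underlies the resemblance to the Jensen--Shannon divergence discussed in observation 2 below, and can be checked numerically for discrete distributions:

```python
import numpy as np

def kl(p, q):
    # Kullback-Leibler divergence (in nats) between discrete distributions.
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def jsd(p, q):
    # Jensen-Shannon divergence: average KL to the midpoint distribution.
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Two discrete distributions standing in for p_data and p_g.
p_data = np.array([0.5, 0.3, 0.15, 0.05])
p_g    = np.array([0.1, 0.2, 0.3,  0.4])

# Optimal discriminator for a fixed generator.
d_star = p_data / (p_data + p_g)

# Value of the minimax objective at D*.
v_star = np.sum(p_data * np.log(d_star)) + np.sum(p_g * np.log(1 - d_star))

# V(D*, G) = 2 JSD(p_data || p_g) - 2 log 2
assert np.isclose(v_star, 2 * jsd(p_data, p_g) - 2 * np.log(2))
```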
\\ \minisection{Observation 1:} In Fig.~\ref{fig:GANepochs} we show the images which are generated by a DCGAN for two successive epochs. It can be observed that the generated images change significantly in overall appearance from one epoch to the other (from visual inspection, the quality of the images does not increase after epoch 30, but the overall appearance still varies considerably). The change is caused by the fact that in the minimax game the generator and the discriminator constantly vary and do not converge in the sense that discriminatively trained networks do. Rather than a single generator and discriminator, one could consider the GAN training process to generate a set of generative-discriminative network pairs. Given this observation, it seems sub-optimal to choose a single generator network from this set to generate images from, and ways to combine them should be explored. \minisection{Observation 2:} A drawback of GANs, as pointed out by Theis et al.~\cite{theis2015note}, is that they potentially do not describe the whole data distribution $p_{data}$. The reasoning is based on the observation that the objective function of GANs has some resemblance to the Jensen-Shannon divergence (JSD). They show that for a very simple bi-modal distribution, minimizing the JSD yields a good fit to the principal mode but ignores other parts of the data. For the application of generating images with GANs this would mean that for part of the data distribution the model does not generate any resembling images. \begin{figure}[tb] \centering \includegraphics[width=.6\textwidth]{Untitled_drawing_11_.eps} \caption{\label{fig:cGANs} The proposed cGANs framework consists of multiple GANs. We start with all train data (left side) and train the first GAN until no further improvements are obtained.
We then use the gate-function to select part of the train data to be modeled by the second GAN, etc.} \end{figure} \section{Ensembles of Generative Adversarial Networks} As explained in the introduction, we investigate the usage of ensembles of GANs and evaluate their performance gain. Based on the observations above, we propose three different approaches to construct an ensemble of GANs. The aim of the proposed schemes is to obtain a better estimation of the real data distribution $p_{data}$. \minisection{Standard Ensemble of GANs (eGANs):} We first consider a straightforward extension of the usage of ensembles to GANs. This is similar to ensembles used for discriminative CNNs, which have been shown to result in significant performance gains~\cite{krizhevsky2012imagenet,wang2015unsupervised,zeiler2014visualizing}. Instead of training a single GAN model on the data, one trains a set of GAN models from scratch from a random initialization of the parameters. When generating data, one randomly chooses one of the GAN models and then generates the data according to that model. \minisection{Self-ensemble of GANs (seGANs):} Unlike discriminative networks, which minimize an objective function, in a GAN the minimax game results in a continual shifting of the generative and discriminative networks (see also observation 1 above). An seGAN exploits this fact by combining models which are based on the same initialization of the parameters but only differ in the number of training iterations. This has the advantage over eGANs that it is not necessary to train each GAN in the ensemble from scratch. As a consequence, it is much faster to train seGANs than eGANs. \minisection{Cascade of GANs (cGANs):} The cGANs is designed to address the problem described in observation 2: part of the data distribution might be ignored by the GAN.
The cGANs framework, as illustrated in Figure \ref{fig:cGANs}, is designed to effectively push the generator to capture the whole distribution of the data instead of focusing on the main mode of the density distribution. It consists of multiple GANs and gates. Each GAN trains a generator to capture the part of the current input data distribution which was badly modeled by the previous GANs. To select the data which is re-directed to the next GAN, we use the fact that for badly modeled data $x$, the discriminator value $D(x)$ is expected to be high. When $D(x)$ is high, this means that the discriminator is confident this is real data, which most probably is caused by the fact that there are few generated examples $G(z)$ nearby. We use the gate-function $Q$ to re-direct the data to the next GAN according to: \begin{equation} Q(x)=\left\{\begin{matrix} 1 & \text{if}\ D(x) > t_r\\ 0& \text{else} \end{matrix}\right.\label{Eq:gate} \end{equation} where $Q(x)=1$ means that $x$ will be used to train the next adversarial network. In practice we train a GAN until satisfactory results are obtained, then evaluate Eq.~\ref{Eq:gate} and train the following GAN with the selected data, and so on. We set a ratio $r$ of images which are re-directed to the next GAN, and select the threshold $t_r$ accordingly. In the experiments we show results for several ratio settings. \begin{figure}[tb] \centering \captionsetup{justification=centering} \includegraphics[width=0.4\textwidth]{retrival.eps} \caption{\label{fig:retrieval} Retrieval example. The leftmost column, annotated by a red rectangle, includes five query images from the test set. To the right, the five nearest neighbors in the training set are given.} \end{figure} \section{Experiments} \subsection{Experimental setup} The evaluation of generative methods is known to be problematic\cite{theis2015note}.
Since we are evaluating GANs which are based on a similar network architecture (we use the standard settings of DCGAN\cite{radford2015unsupervised}), the quality of the generated images is similar and therefore uninformative as an evaluation measure. Instead, we are especially interested in measuring whether the ensembles of GANs better model the data distribution. To measure this we propose an image retrieval experiment. We represent all images, both from a held-out test set and those generated by the GANs, with an image descriptor based on discriminatively trained CNNs. For all images in the test dataset we look at their nearest neighbor in the generated image dataset. Comparing ensemble methods based on these nearest neighbor distances allows us to assess the quality of these methods. We are especially interested in whether some images in the dataset are badly modeled by the network, which would lead to high nearest neighbor distances. At the end of this section we discuss several evaluation criteria based on these nearest neighbor distances. For the representation of the images we finetune an AlexNet model (pre-trained on ImageNet) on the CIFAR10 dataset. It has been shown that the layers of AlexNet describe images at varying levels of semantic abstraction \cite{zeiler2014visualizing}; the lower layers of the neural network mainly capture low-level information, such as colors, edges and corners, whereas the upper layers contain more semantic features like heads, wheels, etc. Therefore, we combine the output of the first convolutional layer, the first fully connected layer and the final results after the softmax layer into one image representation. The conv1 layer is grouped into a $3\times3$ spatial grid, resulting in a $3\times3\times96=864$ dimensional vector. For the nearest neighbor we use the Euclidean distance\footnote{The average distance between images in the dataset is normalized to be one for each of the three parts conv1, fc7, and prob}.
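A sketch of this retrieval distance computation (random arrays stand in for the actual conv1/fc7/prob activations; the normalization scales each feature block so that its average pairwise distance over a reference set is one, as in the footnote above):

```python
import numpy as np

def normalize_part(feats, ref):
    """Scale one feature block so the mean pairwise distance within `ref` is one."""
    d = np.linalg.norm(ref[:, None, :] - ref[None, :, :], axis=-1)
    scale = d[np.triu_indices(len(ref), k=1)].mean()
    return feats / scale

def combined_descriptor(parts, ref_parts):
    # Concatenate the independently normalized blocks (e.g., conv1 grid, fc7, prob).
    return np.hstack([normalize_part(f, r) for f, r in zip(parts, ref_parts)])

def nearest_neighbor_dists(queries, database):
    """For each query descriptor, the Euclidean distance to its nearest neighbor."""
    d = np.linalg.norm(queries[:, None, :] - database[None, :, :], axis=-1)
    return d.min(axis=1)
```

In the actual experiment the queries are test-set descriptors and the database holds descriptors of the generated images; the per-block normalization keeps the 864-dimensional conv1 part from dominating the lower-dimensional fc7 and prob parts.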
Example retrieval results with this system are provided in Fig.~\ref{fig:retrieval}, where we show the five nearest neighbors for several images. It shows that the image representation captures both the color and texture of the image, as well as its semantic content. In the experiments we will use the retrieval system to compare various ensembles of GANs. \minisection{Evaluation criteria:} To evaluate the quality of the retrieval results, we consider two measures. As mentioned, they are based on evaluating the nearest neighbor distances of the generated images to images in the CIFAR test set. Consider $d_{i,j}^k$ to be the distance of the $j^{th}$ nearest image generated by method $k$ to test (query) image $i$, and ${\bf{d}}^k_{j} = \left\{ {d_{1,j}^k ...d_{n,j}^k } \right\}$ the set of $j^{th}$-nearest distances to all $n$ test images. Then the Wilcoxon signed-rank test (which we will only apply for the nearest neighbor $j=1$) is used to test the hypothesis that the median of the difference between two nearest distance distributions of generators is zero, in which case they are equally good (i.e., the median of the distribution ${\bf{d}}^k_{1}-{\bf{d}}^m_{1}$ when considering generators $k$ and $m$). If they are not equal, the test can be used to assess which method is statistically better. This test is, for example, popular for comparing illuminant estimation methods~\cite{hordley2006reevaluation}. For the second evaluation criterion, consider ${\bf{d}}^t_{j}$ to be the distribution of the $j^{th}$ nearest distance of the train images to the test dataset. Since we consider that the train and test set are drawn from the same distribution, the distribution ${\bf{d}}^t_{j}$ can be considered the optimal distribution which a generator could attain (considering it generates an equal amount of images as present in the trainset).
To model the difference with this ideal distribution we will consider the relative increase in mean nearest neighbor distance given by: \begin{equation} \hat{d}_{j}^{k} = \frac{ \bar{d}_{j}^{k}- \bar{d}_{j}^{t}}{ \bar{d}_{j}^{t}} \end{equation} where \begin{equation} \bar{d}_{j}^{k} = \frac{1}{N}\sum_{i = 1}^{N}d_{i,j}^{k},\;\;\;\;\;\;\;\;\;\;\;\;\bar{d}_{j}^{t} = \frac{1}{N}\sum_{i = 1}^{N}d_{i,j}^{t} \end{equation} and where $N$ is the size of the test dataset. E.g., $\hat{d}_{1}^{GAN}=0.1$ means that for method GAN the average distance to the nearest neighbor of a query image is 10\% higher than for data drawn from the ideal distribution. \begin{table}[tb] \centering \captionsetup{justification=centering} \setlength{\arrayrulewidth}{1.8\arrayrulewidth} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & \rotatebox{90}{0. $p_{data}$} &\rotatebox{90}{1. GAN} &\rotatebox{90}{2. cGANs(0.5)\;} &\rotatebox{90}{3. cGANs(0.6)\;} &\rotatebox{90}{4. cGANs(0.7)} &\rotatebox{90}{5. cGANs(0.8)} &\rotatebox{90}{6. cGANs(0.9)\;} \\ \hline 0. & \cellcolor{yellow!50}{0} &\cellcolor{green!50}{1} &\cellcolor{green!50}{1} & \cellcolor{green!50}{1} &\cellcolor{green!50}{1} &\cellcolor{green!50}{1} &\cellcolor{green!50}{1} \\ \hline 1. & \cellcolor{red!50}{-1} &\cellcolor{yellow!50}{0} & \cellcolor{red!50}{-1} & \cellcolor{red!50}{-1} &\cellcolor{red!50}{-1} &\cellcolor{red!50}{-1} &\cellcolor{red!50}{-1} \\ \hline 2. & \cellcolor{red!50}{-1} &\cellcolor{green!50}{1} &\cellcolor{yellow!50}{0} & \cellcolor{red!50}{-1} &\cellcolor{red!50}{-1} &\cellcolor{red!50}{-1} &\cellcolor{red!50}{-1} \\ \hline 3. & \cellcolor{red!50}{-1} &\cellcolor{green!50}{1} &\cellcolor{green!50}{1} & \cellcolor{yellow!50}{0} &\cellcolor{red!50}{-1} &\cellcolor{red!50}{-1} &\cellcolor{red!50}{-1} \\ \hline 4. & \cellcolor{red!50}{-1} &\cellcolor{green!50}{1} &\cellcolor{green!50}{1} &\cellcolor{green!50}{1} &\cellcolor{yellow!50}{0} &\cellcolor{yellow!50}{0} &\cellcolor{green!50}{1} \\ \hline 5.
& \cellcolor{red!50}{-1} &\cellcolor{green!50}{1} &\cellcolor{green!50}{1} &\cellcolor{green!50}{1} &\cellcolor{yellow!50}{0} &\cellcolor{yellow!50}{0} &\cellcolor{green!50}{1} \\ \hline 6. & \cellcolor{red!50}{-1} &\cellcolor{green!50}{1} &\cellcolor{green!50}{1} &\cellcolor{green!50}{1} &\cellcolor{red!50}{-1} &\cellcolor{red!50}{-1} &\cellcolor{yellow!50}{0} \\ \hline \end{tabular} \caption{Wilcoxon signed-rank test for cGANs approach. The number between brackets refers to the ratio $r$ which is varied. Best results are obtained with $r$ equal to 0.7 and 0.8.} \label{Wilcoxon_cGANs} \end{table} \begin{table}[tb] \centering \captionsetup{justification=centering} \setlength{\arrayrulewidth}{1.8\arrayrulewidth} \begin{subtable}{.5\textwidth} \centering \begin{tabular}{|c|c|c|c|c|} \hline &\rotatebox{90}{1. cGANs\;} &\rotatebox{90}{2. eGANs} &\rotatebox{90}{3. seGANs} \\ \hline 1. &\cellcolor{yellow!50}{0/10/0}&\cellcolor{red!50}{1/0/9} & \cellcolor{red!50}{0/1/9} \\ \hline 2. &\cellcolor{green!50}{9/0/1} &\cellcolor{yellow!50}{0/10/0} & \cellcolor{yellow!50}{4/1/5} \\ \hline 3. & \cellcolor{green!50}{9/1/0} &\cellcolor{yellow!50}{5/1/4} &\cellcolor{yellow!50}{0/10/0} \\ \hline \end{tabular} \caption{} \end{subtable}% \begin{subtable}{.5\textwidth} \centering \begin{tabular}{|c|c|c|c|c|} \hline & \rotatebox{90}{1. GAN\;} &\rotatebox{90}{2. seGANs(2)} &\rotatebox{90}{3. seGANs(4)}&\rotatebox{90}{4. seGANs(8)\;} \\ \hline 1. &\cellcolor{yellow!50}{0/10/0} &\cellcolor{red!50}{0/0/10} & \cellcolor{red!50}{0/0/10} &\cellcolor{red!50}{ 0/0/10} \\ \hline 2. &\cellcolor{green!50}{10/0/0} &\cellcolor{yellow!50}{0/10/0} & \cellcolor{red!50}{0/0/10}& \cellcolor{red!50}{0/0/10 } \\ \hline 3. & \cellcolor{green!50}{10/0/0} &\cellcolor{green!50}{10/0/0} &\cellcolor{yellow!50}{ 0/10/0} &\cellcolor{red!50}{0/2/8} \\ \hline 4.
& \cellcolor{green!50}{10/0/0} &\cellcolor{green!50}{10/0/0} &\cellcolor{green!50}{8/2/0} &\cellcolor{yellow!50}{0/10/0} \\ \hline \end{tabular} \caption{} \end{subtable}% \caption{Wilcoxon signed-rank test evaluation. Results are shown as (A)/(B)/(C), where A, B and C are the number of times 1, 0 and -1 appear, respectively, over 10 experiments. The more often 1 appears, the better the method. In between brackets we show the number of GAN networks in the ensemble. (a) shows that eGANs and seGANs outperform cGANs; (b) shows that the more models are used in the seGANs ensemble, the better it performs.} \label{Wilcoxon_different_GANs} \end{table} \subsection{Results} To evaluate the different configurations for ensembles of GANs we perform several experiments on the CIFAR10 dataset. This dataset has 10 different classes, 50000 train images and 10000 test images of size $32\times32$. In our experiments we compare various generative models. With each of them we generate 10000 images and perform the evaluations discussed in the previous section. The cGANs has one parameter, namely the ratio $r$ of images which will be diverted to the second GAN, which we evaluate in the first experiment. The results of the signed-rank test for several different settings of $r$ are provided in Table~\ref{Wilcoxon_cGANs}. In this table, a zero refers to no statistical difference between the distributions. A one (or minus one) refers to a non-zero median of the difference of the distributions, indicating that the method is better (or worse) than the method to which it is compared. In the table we have also included the training dataset and a single GAN. For a fair comparison we only consider 10,000 randomly selected images from the training dataset (similar to the number of images which are generated by the generative models). As expected, the distribution of minimal distances to the test images of the training dataset is superior to any of the generative models.
We see this as an indication that the retrieval system is a valid method to evaluate generative models. Next, in Table~\ref{Wilcoxon_cGANs}, we can see that, independent of $r$, cGANs always obtains superior results to using a single standard GAN. Finally, the results show that the best results are obtained when diverting images to the second GAN with a ratio of 0.7 or 0.8. In the rest of the experiments we fix $r=0.8$. In the next experiment we compare the different approaches to ensembles of GANs. We start by combining only two GANs in each of the ensembles. We have repeated the experiments 10 times and show the results in Table~\ref{Wilcoxon_different_GANs}a. We found that the results of the GAN did not further improve after 30 epochs of training. We therefore use 30 epochs for all our trained models. For seGANs we randomly pick models between 30 and 40 epochs of training. The cGANs obtains significantly worse results than eGANs and seGANs. Interestingly, the seGANs obtains results similar to eGANs. Whereas an eGANs is obtained by re-training a GAN model from scratch, the seGANs is formed by models starting from the same network initialization and is therefore much faster to compute than eGANs. The results for the ensembles of GANs are also evaluated with the average increase in nearest neighbor distance in Fig.~\ref{fig:GANepochs2}(left). In this plot we consider not only the closest nearest neighbor distance, but also the k-nearest neighbors (horizontal axis). All ensemble methods improve over using just a single GAN. This evaluation measure again shows that seGANs obtains results similar to eGANs.
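The relative-increase measure $\hat{d}_{j}^{k}$ used in these plots can be computed directly from sorted nearest-neighbor distances; a minimal sketch with random arrays standing in for the image descriptors:

```python
import numpy as np

def knn_dists(test, gen, k):
    """d_{i,j}: distance from each test image i to its j-th nearest generated image."""
    d = np.linalg.norm(test[:, None, :] - gen[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, :k]                # (n_test, k)

def relative_increase(test, gen, train, k=10):
    """hat{d}_j^k: relative increase in mean j-th NN distance over the train set."""
    d_gen = knn_dists(test, gen, k).mean(axis=0)    # bar{d}_j^k
    d_opt = knn_dists(test, train, k).mean(axis=0)  # bar{d}_j^t (ideal reference)
    return (d_gen - d_opt) / d_opt
```

By construction the measure is zero when the generated set coincides with the reference train set, and positive when the generated images lie farther from the test images than real data does.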
\begin{figure}[tb] \begin{subfigure}{0.5\textwidth} \includegraphics[width=\linewidth]{eGANs_seGANs_cGANs_GANs_epoch_26_another.eps} \end{subfigure} \hspace*{\fill} \begin{subfigure}{0.5\textwidth} \includegraphics[width=\linewidth]{seGANs_different_epoch.eps} \end{subfigure} \caption{(left) Showing $\hat{d}_{j}^{k}$ for $k \in \left\{ {cGANs,eGANs,seGANs}\right\}$ and $j \in \left\{ {1, \ldots ,10} \right\}$ along the horizontal axes. (right) Similar, but for seGANs with a varying number of models.} \label{fig:GANepochs2} \end{figure} Finally, we have run seGANs combining more than two GANs; we have considered ensembles of 2, 4, 6, and 8 GANs (see Table~\ref{Wilcoxon_different_GANs}b). The experiments are again repeated 10 times. Results shown in Fig.~\ref{fig:GANepochs2}(right) show that the average increase in k-nearest distance decreases when increasing the number of networks in the ensemble, but levels off from 4 to 8. We stress that when combining 2, 4 or 8 GANs, we keep the number of generated images at 10,000 as in all experiments. So in the case of seGANs(8), 1250 images are generated from each GAN. The average increase in nearest distance drops from 0.11 for a single GAN to 0.06 for a seGANs combining 8 GANs, which is a drop of 40\%. In Fig.~\ref{fig:generated_image} we show several examples where a single GAN does not generate any similar image whereas the ensemble GAN does. \begin{figure}[tb] \centering \captionsetup{justification=centering} \includegraphics[width=1\textwidth]{CIFAR10.eps} \caption{Visualization of samples from a single GAN and seGANs(4). The leftmost column is the query image from the test dataset; the red box shows the 5 nearest examples from the GAN; and the green box shows the 5 nearest examples from seGANs(4).} \label{fig:generated_image} \end{figure} \section{Conclusion} We have evaluated several approaches to construct ensembles of GANs.
The most promising results were obtained by seGANs, which were shown to obtain results similar to eGANs, but have the advantage that they can be trained with little additional computational cost. Experiments on an image retrieval task show that the average distance to the nearest image in the dataset can drop by 40\% when using seGANs, which is a significant improvement (arguably more important than the usage of ensembles for discriminative networks). These results should be verified on multiple datasets. For future work we are especially interested in combining the ideas behind the cGANs with seGANs to further improve results. We are also interested in further improving the image retrieval system, which can be used as a tool to evaluate the quality of image generation methods. It would be beneficial for research in GANs if a toolbox of experiments existed which allowed us to quantitatively compare generative methods. \minisection{Acknowledgments} This work is funded by the Projects TIN2013-41751-P of the Spanish Ministry of Science and the CHISTERA M2CR project PCIN-2015-226 and the CERCA Programme / Generalitat de Catalunya. We also thank Nvidia for the generous GPU donation. {\small
\section{Algorithmic considerations\label{sec:algo}} The general \abbrevformat{QBKIX}\xspace algorithm for locating expansions near the boundary of the domain is given in \pr{alg:overview}. We expand that algorithm in \pr{alg:main} and add noteworthy comments on each step. \LinesNotNumbered \begin{algorithm}[t!] \algcaption{alg:main}{Setup and evaluation of \abbrevformat{QBKIX}\xspace}{} % \DontPrintSemicolon \SetKwBlock{SubalgBlock}{\nl}{} \SetKw{Subalg}{\nl} \SetKw{cmmt}{--} % \SubalgBlock(Adaptively refine the boundary to resolve the boundary data.\nllabel{aln:adap}){ \cmmt The adaptive refinement is based on the boundary curve and the boundary data.\; % \cmmt We discretize the boundary using a $q$-node Gauss--Legendre\xspace rule, where $q$ is typically 16. Therefore, the boundary curve and the boundary data are represented by \ordinal{q}-order Legendre polynomials.\; % \cmmt We start by sampling the boundary using a user-defined number of panels (typically one).\; % \cmmt Then, we recursively split the panels until the interpolation error on the fine grid using the coarse-grid coefficients is within the prescribed absolute tolerance $\ensuremath{\acc_a}$ (we choose $\ensuremath{\acc_a}=\sci{-11}$, unless mentioned otherwise).\; % } % \SubalgBlock(Balance the boundary panels such that the ratio of the arclengths of neighboring panels is less than two.\nllabel{aln:balance}){ % \cmmt The objective of this stage is to avoid constructing expansions in the near zone of neighboring panels, as remarked in \pr{ssc:err-float} and \pr{rmk:twotoone-balance}.
} % \SubalgBlock(Locate expansions based on the geometry of the panel.\nllabel{aln:locate}){ % \cmmt For each collocation point on the boundary panels, an expansion is placed with its center at distance $\delta$ from the boundary\; % \cmmt The distance of each expansion $\ensuremath{\vector{c}}$ to its associated panel is chosen to be $\delta = 0.25 l$, where $l$ is the panel arc length (as discussed in \pr{ssc:err-float} and depicted in \pr{fig:center-dist})\; % \cmmt Having chosen the distance $\delta$, the other dimensions of the expansion (such as $\ensuremath{{r_c}}$ and $\ensuremath{R}$) are determined by preset ratios\emdash/for instance, $\ensuremath{{r_c}}=\ensuremath{{r}}/3$, $\ensuremath{R}=8\ensuremath{{r_c}}$, and $\ensuremath{{r}}=\delta$\emdash/as discussed in \pr{ssc:err-float,ssc:error-bounds}. However, the error is not very sensitive to these parameters. % \cmmt Since the boundary is adaptively refined to resolve the boundary data, the arclength of a panel is directly proportional to the distance $\tilde\rho$ of the singularity to the panel (although we have not tested this rigorously), and consequently $\ensuremath{R}/\rho$ is almost always less than one. % } % \SubalgBlock(Refine the boundary by a factor of four to ensure the check points are in the accurate region of the panels.\nllabel{aln:refine}){ \cmmt We denote the original boundary panels by $\pi$ and the refined panels by $\pi_{f}$.
Each $\pi$ is refined to four $\pi_f$.\; \cmmt See \pr{ssc:err-float} and \pr{rmk:twotoone-balance} for more discussion.\; } % \SubalgBlock(Evaluate the boundary integral on the check points using the refined boundary.\nllabel{aln:eval-check}){ \cmmt For moderate and large problems, we use a python implementation of the kernel-independent\xspace FMM \cite{kifmm} for evaluation of the potential on the check points.\; } % \SubalgBlock(Compute the coefficients for each expansion using the check values.\nllabel{aln:compute}){ \cmmt Given the potential on the check points, the density on the proxy points is computed by solving \pr{eqn:prx-to-chk} using the pseudo-inverse of $\linopd{Q}$, denoted by $\linopd{Q}^\dagger$. Here we use the SVD to compute $\linopd{Q}^\dagger$ with the prescribed relative tolerance, typically $\acc_\mathrm{pinv}=\sci{-9}$. \; % \cmmt The decay in the spectrum of $\linopd{Q}$ and its effect on the \abbrevformat{QBKIX}\xspace error is discussed in \pr{ssc:err-float}.\; % \cmmt For many of the kernels considered here, the matrix $\linop{Q}$ is scale-invariant and we need to compute $\linopd{Q}^\dagger$ only once.\; } % \SubalgBlock(Evaluate each expansion at its boundary collocation point and/or at target points near the boundary.\nllabel{aln:eval}){ \cmmt Each target point, including the collocation points on the boundary panels $\pi$ (see \lref{alg:main}{aln:refine} for definition), is associated with an expansion. Once the expansion coefficients are known, the potential at the target can be easily evaluated using \pr{eqn:localexp}.\; % \cmmt When the expansion is used as a singular quadrature (i.e. to compute the integral in \pr{eqn:generic-den}), the targets are collocation points on the boundary panels $\pi$.
The map from the check potential to the target potential can be computed as a matrix and stored for fast evaluation.\; % } \end{algorithm} \begin{SCfigure}[1.4][!bt] \centering \setlength\figureheight{2in} \includegraphics[height=\figureheight]{param-reval} % \mcaption{fig:param-reval}{Error at different evaluation radii}{ The error for evaluation of a single expansion with various $\ensuremath{R}$ and $\ensuremath{{r}}$ but fixed $\ensuremath{{r_c}}$ and $\rho$. The expansion interpolates a harmonic function using the Laplace \dl kernel. The singularity of the interpolated function is at $\rho=4$, and $\ensuremath{{r_c}}=.025\rho$. The $y$-axis shows the error for centers as the evaluation points move from $\ensuremath{{r}}=0$ all the way to the proxy circle. The dotted lines are $\ensuremath{{r}} = k \ensuremath{{r_c}}$ for $k=1,2,\text{ and } 3$. In practice we have no direct control on $\frac{\ensuremath{R}}{\rho}$ and it is implied by the panel size \note[AR]{as mentioned above, in the experiments it is typically less than $.5$ }. Here we chose $\ensuremath{{n_p}}=128$, and $\ensuremath{{n_c}}=2\ensuremath{{n_p}}$; the trends are the same for lower $\ensuremath{{n_p}}$ and $\ensuremath{{n_c}}$. } \end{SCfigure} \begin{figure}[!bt] \centering \small \setlength\figureheight{2.2in} \setlength\figurewidth{2.2in} \includepgf{center-dist} % \mcaption{fig:center-dist}{Error vs. center and singularity distances}{The induced error for singularities and centers at various distances from the boundary. The boundary data is generated by putting a Laplace singularity at distance $\tilde\rho$ from the boundary\emdash/the singularity distance to the center of expansion is $\rho > \tilde \rho$. The density is solved directly and \abbrevformat{QBKIX}\xspace is used only for evaluation. The error \note[AR]{infinity norm} is computed using the known solution corresponding to the boundary data.
The left plot shows the errors for a fixed number of panels on the boundary (40 panels). The right plot shows the errors for adaptive refinement of the boundary. In the latter case, the boundary and the boundary data are resolved to $\ensuremath{\acc_a}=\sci{-11}$. We use $\ensuremath{{n_p}}=64$, $\ensuremath{{n_c}}=2\ensuremath{{n_p}}$, $\ensuremath{R}=8\ensuremath{{r_c}}$, and $\ensuremath{{r}}=3\ensuremath{{r_c}}$. In both cases, the center of expansion is located based on the panel size. } \end{figure} As discussed in \Lref{alg:main}{aln:locate}, the distance of the center of the expansion changes based on panel size. This implies that other dimensions of the expansion also change for each panel. Nonetheless, we use a fixed relative geometry for centers\emdash/i.e. $\ensuremath{{r}}/\ensuremath{{r_c}}=3$ and $\ensuremath{R}/\ensuremath{{r_c}}=8$. This has the beneficial side effect that the map from the potential on check points to expansion densities can be reused for many of the kernels (see \Lref{alg:main}{aln:compute}). Based on \pr{eqn:err-bound}, the error of each \abbrevformat{QBKIX}\xspace is determined by $\ensuremath{{r_c}}/\rho, \ensuremath{{r_c}}/\ensuremath{R}, \ensuremath{{r}}/\ensuremath{{r_c}}$, and $\ensuremath{e_c}$. The value of $\ensuremath{{r_c}}/\rho$ is deduced from the panel arc length and we would like $\ensuremath{{r}}/\ensuremath{{r_c}}$ to be as large as possible \note[AR]{to drive down error we need smaller $\ensuremath{e_c}$ and boundary refinement}. Based on our experimentation, we use $\ensuremath{{r}}/\ensuremath{{r_c}}=3$. To investigate the effect of $\ensuremath{{r_c}}/\ensuremath{R}$, we construct expansions with various $\ensuremath{R}$. The error at different distances $\ensuremath{{r}}$ is plotted in \pr{fig:param-reval}.
From the plot we see that the highest accuracy at the edge of the evaluation radius $\ensuremath{{r}}$ is achieved for $\ensuremath{R}/\rho \approx 0.2$, which equals $8 \ensuremath{{r_c}}$ in this case. This trend persists among other expansions we tested. In contrast to QBX, increasing $\ensuremath{{n_c}}$ and/or $\ensuremath{{n_p}}$ does not pollute the result due to the filtering effect of the check-to-proxy operator (discussed in \pr{ssc:err-float}). The number of proxy points $\ensuremath{{n_p}}$ only affects the first term in \pr{eqn:err-bound}. In our experiments $\ensuremath{{n_p}}$ is chosen to be $16$ or $32$. We have not discussed the effect of the number of check points (in \pr{ssc:err-exact}, $\ensuremath{{n_c}}$ is assumed to be large). Having more check points reduces the aliasing error and improves the accuracy of the pseudoinverse; we opt for $\ensuremath{{n_c}}=2\ensuremath{{n_p}}$. As we mentioned above, the choice of $\acc_\mathrm{pinv}$ for the pseudoinverse has a filtering effect and keeps the extrapolation under control; we chose $\acc_\mathrm{pinv}=\sci{-9}$. The choice of the distance of the center of the expansion to the panel (relative to the panel size) balances the error due to evaluation of the smooth quadrature at the check points against the expansion's interpolation error\emdash/for larger $\ensuremath{{r_c}}$ the singularity is relatively closer. This is shown in \pr{fig:center-dist}. In all cases, as the expansions are placed farther from the boundary, the error initially decreases because the check points move to more accurate regions of the domain. But as the expansions move farther, they become larger and the singularity gets closer to them (relative to the check and proxy radii) and the expansion's error dominates. In both cases, the lowest error is achieved for $\delta/l \approx 0.25$.
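The fixed relative geometry of the centers can be summarized in a small helper; the function name and dictionary layout below are hypothetical, but the ratios are the ones quoted in the text ($\delta = 0.25\,l$, $\ensuremath{{r}}=\delta$, $\ensuremath{{r_c}}=\ensuremath{{r}}/3$, $\ensuremath{R}=8\ensuremath{{r_c}}$):

```python
def expansion_geometry(panel_arclength):
    """Hypothetical helper: derive the expansion dimensions from a panel's
    arc length using the fixed ratios quoted in the text:
    delta = 0.25*l, r = delta, r_c = r/3, R = 8*r_c."""
    delta = 0.25 * panel_arclength   # center-to-boundary distance
    r = delta                        # evaluation radius
    r_c = r / 3.0                    # check-circle radius
    R = 8.0 * r_c                    # proxy-circle radius
    return {"delta": delta, "r": r, "r_c": r_c, "R": R}

g = expansion_geometry(1.2)
# the ratios r/r_c = 3 and R/r_c = 8 hold regardless of the panel size
assert abs(g["r"] / g["r_c"] - 3.0) < 1e-12
assert abs(g["R"] / g["r_c"] - 8.0) < 1e-12
```

Because only ratios are fixed, all dimensions scale linearly with the panel arclength, which is what makes the check-to-proxy map reusable across panels.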
When the boundary is adaptively refined (the right graph in \pr{fig:center-dist}), the panel size changes depending on the distance of the singularity $\tilde\rho$. Consequently, the error is consistent across different singularity distances. \begin{remark} \label{rmk:amortize} \note[AHB]{move to later near end of paper; too detailed here.} \note[AR]{I think now we have enough info before this} We associate with each collocation point on the boundary a single center with its own set of check points. One could use fewer centers by associating more target points with each center, or use fewer check points by reusing the check points of a center for neighboring centers. We did not pursue such enhancements in this paper because they do not change the asymptotic complexity of the evaluation steps. \end{remark} \begin{remark} If they are needed, derivatives of the solution can be evaluated after the construction of the expansion by differentiating the expansion basis. \end{remark} \para{Computational complexity of~\abbrevformat{QBKIX}\xspace} Consider a domain with $M$ panels and a $q$-node Gauss--Legendre\xspace discretization of the panels. There are $N=Mq$ collocation points on the boundary. Using the FMM, the evaluation of the boundary integral from the refined boundary to the check points costs \O{4N \ensuremath{{n_c}}}. The pseudo-inverse for computing the density can be precomputed and applied as a matrix multiply with a total cost of \O{k(\ensuremath{{n_c}}+\ensuremath{{n_p}})N}. Each expansion is evaluated at its associated boundary collocation point with a total cost of \O{N\ensuremath{{n_p}}}. Therefore, the cost of one evaluation (e.g. in the context of an iterative solver) is \O{(4\ensuremath{{n_c}} + k\ensuremath{{n_c}} + k\ensuremath{{n_p}} + \ensuremath{{n_p}})N}, and with the choice of $\ensuremath{{n_c}}=2\ensuremath{{n_p}}$, \O{(9 + 3k)\ensuremath{{n_p}} N}.
This cost can be amortized if more targets are associated with each expansion (see \pr{rmk:amortize}). \note[AR]{It seems to me that this is dimension independent, of course $\ensuremath{{n_p}}$ is greater for 3D} \note[AHB]{Have we decided CPU time is irrelevant for this paper? Can some comment be made about it being manageable? or a single timing quoted to reassure the worried reader who wants to implement this? } \section{Conclusions\label{sec:conclusion}} In this paper we introduced a new quadrature scheme for the high-order accurate evaluation of layer potentials associated with general elliptic {\abbrevformat{PDE}\xspace}s on the domain boundary and close to it. The scheme\emdash/which builds local solution approximations using a refined evaluation and the solution of small linear systems\emdash/relies solely on the evaluation of the underlying kernel, so it is essentially \abbrevformat{PDE}\xspace-independent. It is highly flexible, being agnostic as to the boundary condition type, the layer representation, and, crucially, the dimension of the problem. We have analyzed the error behavior of the scheme for the Laplace and Helmholtz cases. It also fits naturally in the framework of existing fast kernel-independent algorithms for potential evaluation such as the \abbrevformat{KIFMM}\xspace, as it uses similar local approximations. We have tested its accuracy for three scalar- and two vector-valued \twod Dirichlet boundary-value problems that are common in engineering applications. We have not attempted to optimize performance, and leave that for future work. There are several obvious extensions that have motivated this initial study that we plan to pursue: \begin{enumerate}[(1)] \item Generalization to \threed. High-order singular quadratures for surfaces are complicated, application dependent, and scarce.
Since it requires only pointwise kernel evaluations, \abbrevformat{QBKIX}\xspace is by design very easy to implement in \threed using proxy and check surfaces, and would handle a wide class of {\abbrevformat{PDE}\xspace}s. The constants will be larger, but the linear systems (anticipated to be of size around $10^3$) would still be very practical. \item Generalization to other boundary conditions. \abbrevformat{QBX}\xspace, and thus also \abbrevformat{QBKIX}\xspace, can be applied without modification to, for instance, the normal derivative of the double-layer operator, which is hypersingular. \item Integration with \abbrevformat{KIFMM}\xspace. In this work, we only used the kernel-independent\xspace \abbrevformat{FMM} for fast evaluation of the potential on the check points. However, we expect performance gains by reusing the local expansion of \abbrevformat{KIFMM}\xspace as a \abbrevformat{QBKIX}\xspace expansion. \item Local \abbrevformat{QBKIX}\xspace. The construction of local schemes which automatically handle general domains with thin features (i.e., with geodesically distant parts of the boundary in close proximity in space) without the excessive refinement needed for the panel size to be on the order of the feature size is important for making the method practical. \cite{ce} proposed a \emph{local} version of \abbrevformat{QBX}\xspace, in which only the contribution of the nearby panels to a target is evaluated using expansions, while the contributions of more distant panels are evaluated using standard quadrature. Implementing this idea is nontrivial, however, as the end-points of the group of neighboring panels produce new singularities that can affect the convergence rate. \item Generalization of the analysis to all kernels. As Remark~\ref{r:anal} discusses, this is a nontrivial missing piece in the theoretical foundations.
\end{enumerate} \begin{acknowledgements} We extend our thanks to Manas Rachh, Andreas Kl{\"o}ckner, Michael O'Neil, and Leslie Greengard for stimulating conversations about various aspects of this work. A.R. and D.Z. acknowledge the support of the US National Science Foundation (NSF) through grant DMS-1320621; A.B. acknowledges the support of the NSF through grant DMS-1216656. \end{acknowledgements} \section{Error analysis and parameter choices\label{sec:error}} In this section, we present theoretical results, focusing on the cases of scalar $\u$ governed by the Laplace equation $\Delta \u = 0$\emdash/or by the Helmholtz equation $(\Delta + \omega^2)\u = 0$ for real $\omega$. We expect similar results for other elliptic {\abbrevformat{PDE}\xspace}s in \cref{eqn:pde-type}. We split \abbrevformat{QBKIX}\xspace into two stages: \begin{inparaenum}[(i)] \item evaluation of $\u$ on the check points using a refined native quadrature, with the associated error $\ensuremath{e_c}$; \item solution of a small linear system to determine the equivalent density values $\bm{\alpha}$ at the proxy points that best represent $\u$ at the check points. This is followed by evaluating the approximation of $u$ at target points using these density values. \end{inparaenum} At the first stage, the error $\ensuremath{e_c}$ is effectively the smooth quadrature error of the refined panels. The primary focus of our analysis is on the second stage. We analyze the error behavior in the idealized situation of exact arithmetic and infinitely many check points, obtaining the dependence of the second-stage error $\ensuremath{e}$ on $\delta$, $\ensuremath{R}$, $\rho$, and $n_p$. We then describe a heuristic model for the effects of finite-precision computations, which adds an extra term to $\ensuremath{e}$, depending on $\ensuremath{e_c}$, $\delta$, $r_c$, and $k$. 
We use the overall error model, along with experiments, to provide a choice of the various parameters in the scheme resulting in on- and near-surface evaluation errors of the same magnitude as the far-field integration errors. \subsection{Error at check points\label{ssc:err-check}} Recall that evaluation of $\u$ on the check points is done by approximating the exact integral \cref{eqn:generic-bi-sol} by \cref{upsampled} using $q$-node Gauss--Legendre\xspace quadrature on panels (subdivided by factor $\beta$). For a flat panel, the error $\ensuremath{e_c}$ in this evaluation is bounded by standard quadrature estimates giving a term of the form $C_q (L/(4 \beta d))^{2q} \|\phi\|_{C^{2q}}$ where $d = \ensuremath{\delta}-\ensuremath{{r_c}}$ is the closest distance of check points to the panel, and $\phi$ denotes the density for which we evaluate the integrals. Our adaptive refinement procedure ensures that the formula still holds, as the radius of curvature of the panel is larger than its length, and hence larger than $\ensuremath{\delta}$. This estimate has the form of the second term in \cite[Theorem~1]{QBX}, and for convergence as the panel length $L$ goes to zero, it requires $L/d$ to converge to zero as well. Instead of following this route, we fix the ratio $L/d$ to a constant by choosing $\ensuremath{\delta}$ and $\ensuremath{{r_c}}$ as fractions of $L$. If $L/(4 \beta d)$ is sufficiently small, a high-order quadrature for sufficiently large $q$ allows us to compute the integrals with any desired precision. For instance, when $q=16$, it is sufficient to use $L/(4\beta d) = 1/2$ to obtain an error on the order of $\sci{-10}$ at distance $d$ from the panel.
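The behavior of this estimate is easy to reproduce numerically. The following sketch (illustrative parameters only, not the paper's code) applies composite Gauss--Legendre\xspace quadrature to a Laplace-type panel integrand evaluated at a point at distance $d$ above a flat panel, for several subdivision factors $\beta$:

```python
import numpy as np

def composite_gl(f, a, b, q, n_panels):
    """Composite q-node Gauss-Legendre quadrature of f over [a, b]."""
    x, w = np.polynomial.legendre.leggauss(q)
    edges = np.linspace(a, b, n_panels + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mid, h = 0.5 * (lo + hi), 0.5 * (hi - lo)
        total += h * np.sum(w * f(mid + h * x))
    return total

# flat panel of length L, unit density, target at height d above the
# panel midpoint, so that L/(4*beta*d) = 2/beta (made-up parameters)
L, q = 1.0, 16
d = L / 8
f = lambda t: np.log(np.hypot(t, d))          # Laplace-type integrand
ref = composite_gl(f, -L / 2, L / 2, q, 64)   # well-resolved reference
errs = [abs(composite_gl(f, -L / 2, L / 2, q, beta) - ref)
        for beta in (1, 2, 4)]
# the error decays rapidly with the subdivision factor beta
assert errs[2] < errs[0] and errs[2] < 1e-9
```

With $\beta=4$ the ratio $L/(4\beta d)$ is $1/2$, and the observed error is indeed at or below the $\sci{-10}$ level quoted above.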
\subsection{Error of the proxy point representation in exact arithmetic\label{ssc:err-exact}} Next, we analyze the dependence of the error (computed in exact arithmetic) of the second stage of \abbrevformat{QBKIX}\xspace on the number of proxy points $n_p$, the proxy circle radius $\ensuremath{R}$, and the distance $r$ from the center $\ensuremath{\vector{c}}$ to the evaluation point. The distance $\ensuremath{{r}}$ could be smaller than $\ensuremath{\delta}$ if targets are away from the surface, equal to $\ensuremath{\delta}$ if $\evald$ touches the surface at a single point, or larger than $\ensuremath{\delta}$ if there are several on-surface targets in $\evald$; we focus our attention on the case $\ensuremath{{r}} \le \ensuremath{\delta}$. Let $\uq$ be given by the proxy representation, \cref{eqn:localexp}, with equivalent density values $\alpha_j$ at proxy points $\vy_j$, $j = 1,\ldots, \ensuremath{{n_p}}$. We consider evaluation of the approximation $\uq$ in $\disc{B}{\ensuremath{{r}}}{\ensuremath{\vector{c}}}$, the disc of radius $\ensuremath{{r}}$ centered at $\ensuremath{\vector{c}}$, given correct values for $\u$ at a very large number of check points $\ensuremath{{n_c}}$, so that we can replace the discrete least-squares problem we solve with a continuous one. Let the equivalent densities $\scalard{\alpha}_j$ be chosen to minimize the $L^2$ error on the check circle, i.e., \begin{alignln} \vectord{\alpha} = \arg \min_{\vectord{\alpha}\in\mathbb{C}^{\ensuremath{{n_p}}}} \|\uq - \u\|_{L_2(\checkc)}~. \label{npinf} \end{alignln} By convergence of the periodic trapezoidal quadrature on the check points, this corresponds to the $\ensuremath{{n_c}}\to\infty$ limit of the \abbrevformat{QBKIX}\xspace scheme.
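The discrete counterpart of this least-squares problem is straightforward to mimic for the Laplace single-layer\xspace proxy basis. The following sketch (with made-up geometry parameters) fits proxy densities to check-circle samples of a harmonic field and evaluates the resulting expansion inside the check circle:

```python
import numpy as np

# made-up geometry: proxy radius R, check radius r_c = R/8, and a
# harmonic field with a singularity at distance rho from the center
n_p, R, rho = 32, 2.0, 8.0
n_c, r_c = 2 * n_p, R / 8
x0 = rho * np.exp(0.3j)                       # singularity location
u = lambda z: np.log(np.abs(z - x0))          # harmonic away from x0

proxy = R * np.exp(2j * np.pi * np.arange(n_p) / n_p)
check = r_c * np.exp(2j * np.pi * np.arange(n_c) / n_c)

# proxy-to-check matrix with the Laplace single-layer kernel log|x - y|
Q = np.log(np.abs(check[:, None] - proxy[None, :]))
alpha, *_ = np.linalg.lstsq(Q, u(check), rcond=1e-9)

# evaluate the expansion at a target inside the check circle (r < r_c)
z = 0.5 * r_c
uq = np.log(np.abs(z - proxy)) @ alpha
assert abs(uq - u(z)) < 1e-6
```

Because the target lies inside the check circle, no extrapolation is involved and the fit recovers $\u$ to high accuracy, consistent with the bounds below.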
Let \begin{equation} \ensuremath{e}(\ensuremath{{r}}) := \sup_{\vx\in\overline{\Omega}\,\cap\,\disc{B}{\ensuremath{{r}}}{\ensuremath{\vector{c}}}} | \uq(\vx)-\u(\vx)|~, \label{sup} \end{equation} be the upper bound on the pointwise error in the part of the disc lying inside the closure of the domain. We have the following bounds on $\ensuremath{e}$ when $\u$ is sufficiently regular, meaning that any singularities in the continuation of $\u$ are farther than some distance $\rho>\ensuremath{\delta}$ from the center of the expansion $\ensuremath{\vector{c}}$. \begin{theorem} \label{thm:expconv} Let $\u$ be continuable as a regular solution to the Laplace or Helmholtz equation in the closed disc of radius $\rho$ centered at $\ensuremath{\vector{c}}$. Let $\ensuremath{R} > \ensuremath{\delta}$ in the Laplace case. Let the \abbrevformat{QBKIX}\xspace equivalent density values at proxy points be solved in exact arithmetic in the least-squares sense on the check circle as in \cref{npinf}, and let $\ensuremath{e}$ be defined by \cref{sup} where $\uq$ is the expansion in \cref{eqn:localexp}. Then, in a disc of radius $\ensuremath{{r}}$, \begin{alignln} \ensuremath{e}(\ensuremath{{r}}) \; \le \left\{\begin{array}{ll} C \bigl( \frac{\ensuremath{{r}}}{\rho} \bigr)^{\ensuremath{{n_p}}/2}, & \rho \ensuremath{{r}} < \ensuremath{R}^2~, \\ C \sqrt{\ensuremath{{n_p}}} \bigl( \frac{\ensuremath{{r}}}{\ensuremath{R}} \bigr)^{\ensuremath{{n_p}}}, & \rho \ensuremath{{r}} = \ensuremath{R}^2~, \\ C \bigl( \frac{\ensuremath{{r}}}{\ensuremath{R}} \bigr)^{\ensuremath{{n_p}}}, & \rho \ensuremath{{r}} > \ensuremath{R}^2~, \end{array}\right. \label{eqn:expconv} \end{alignln} where in each case, $C$ indicates a constant that may depend on $\u$ (and $\omega$ in the Helmholtz case), $\ensuremath{\vector{c}}$, $\ensuremath{{r}}$, and $\ensuremath{R}$ but not on $\ensuremath{{n_p}}$.
\end{theorem} \begin{proof} Following the technique of Barnett and Betcke~\cite[Theorem~3]{mfs}, we only need to show that there exists some choice of density values $\vectord{\alpha}$ for which the estimate holds; the least-squares solution cannot be worse than this. We choose density values $\vectord{\alpha}$ to cancel the Fourier coefficients with frequency $|n|<\ensuremath{{n_p}}/2$ of the pointwise error $\uq - \u$ on the check circle. By uniqueness of the local expansion for the regular \abbrevformat{PDE}\xspace solution (in polar coordinates, $\sum_{n\ge 0} a_n r^n e^{in\theta}$ for Laplace or $\sum_{n\in\mathbb{Z}} a_n J_n(\omega r) e^{in\theta}$ for Helmholtz) this choice of density values also cancels the same Fourier coefficients on any circle centered at $\ensuremath{\vector{c}}$ with radius less than $\ensuremath{R}$. Applying \cite[Theorem~3]{mfs} for the Helmholtz case, the $L^2$-norm of the error on the circle of radius $\ensuremath{{r}}$ obeys a bound of the form \cref{eqn:expconv}. Barnett and Betcke~\cite[Section~2.1]{mfs} produce the Laplace case as a limit of the Helmholtz case; however, one also needs the result that the constant single-layer density generates the constant potential $\log \ensuremath{R}/\ensuremath{{r_c}}$, which excludes $\ensuremath{R}=\ensuremath{{r_c}}$ because it can only produce zero-mean data on the circle. Finally, we need to show that the sup norm of the error on the circle of radius $\ensuremath{{r}}$ is bounded by the $L^2$-norm; this holds since the error $\uq - \u$ is a regular \abbrevformat{PDE}\xspace solution in a disc with radius strictly larger than $r$, namely $\disc{B}{\min(\ensuremath{R},\rho)}{\ensuremath{\vector{c}}}$. Thus, its Fourier coefficients on the $r$-circle decay exponentially in $|n|$, and are thus summable with a bound controlled by the $L^2$ norm. 
In the case where $\disc{B}{\ensuremath{{r}}}{\ensuremath{\vector{c}}}$ lies partially outside $\Omega$, one may continue $\u$ as a regular \abbrevformat{PDE}\xspace solution in the disc and apply the above. \qed \end{proof} \begin{remark} The above derivation relies on analysis from the literature on the method of fundamental solutions (\abbrevformat{MFS}). The original result for the Laplace equation is due to Katsurada~\cite[Theorem~2.2]{Ka89}, which considers the case $n_c=n_p$ and is restricted to $\ensuremath{{r}}=\ensuremath{{r_c}}$. We extend this result to include {\em extrapolation} from the check radius $\ensuremath{{r_c}}$ out to larger radii $\ensuremath{{r}}$. Remarkably, $\ensuremath{{r_c}}$ does not appear in \cref{eqn:expconv}, because in exact arithmetic it does not matter at what radius the Fourier coefficients are matched. In the next section we will see that in practice rounding error strongly affects the choice of $\ensuremath{{r_c}}$ since the extrapolation is ill-conditioned. \label{r:rcindep} \end{remark} A surprising aspect of \cref{thm:expconv} is that $\u$ may have singularities {\em closer} to the center than the proxy radius $\ensuremath{R}$ and yet exponential convergence still holds; this is closely related to the Runge approximation theorem. \begin{remark} The two regimes in \cref{eqn:expconv} may be interpreted as follows: \begin{itemize}[\quad$\bullet$] \item $\ensuremath{{r}} < \frac{\ensuremath{R}^2}{\rho}$: the solution $\u$ is relatively rough (has a nearby singularity), and the error is controlled by the decay of the local expansion coefficients $a_n$ of $\u$ for orders beyond $n_p/2$. \item $\ensuremath{{r}} > \frac{\ensuremath{R}^2}{\rho}$: the solution $\u$ is smooth, and the error is controlled instead by aliasing (in Fourier coefficient space) due to the discreteness of the proxy point representation on the proxy circle.
\end{itemize} We observe in numerical experiments that when the boundary is adaptively refined based on the boundary data as in \cref{sec:formulation}, $L\approx\rho$ and the expansion centers that dominate the error in a domain are typically those that are near a singularity of the solution. Such centers are typically in the rough regime. \end{remark} Note that the boundary $\Gamma$ may intersect the closed disc, and still $\u$ may be continued as a \abbrevformat{PDE}\xspace solution into the closed disc. This requires the boundary data $f$ or density to be analytic\emdash/see \cite{ce} for related analysis of \abbrevformat{QBX}\xspace in this case. \begin{remark}[Extension of analysis to other kernels] It is clearly of interest to have a kernel-independent\xspace extension of \cref{thm:expconv} that would apply also to vector {\abbrevformat{PDE}\xspace}s such as Stokes. Initial attempts suggest this requires significantly more complicated analysis, since to use the method of the above proof one needs to be able to write down a proxy coefficient vector $\bm\alpha$ that produces a single Fourier mode on the check circle plus exponentially decaying amounts of aliased modes, which is challenging even in the Stokes case. We leave this for future work. \label{r:anal} \end{remark} \subsection{Modeling the effect of finite-precision arithmetic\label{ssc:err-float}} Independence from $\ensuremath{{r_c}}$ in \cref{thm:expconv} relies on exact arithmetic, since the extrapolation from $\ensuremath{{r_c}}$ to a larger $\ensuremath{{r}}$ is ill-conditioned. Moreover, due to finite precision, there are possibly fewer than $\ensuremath{{n_p}}$ functions available to cancel the Fourier coefficients. As a result, we need to study the effect of rounding error on $\uq - \u$. Rather than attempting a rigorous analysis, we present a heuristic model and demonstrate that it agrees well with numerical observations.
We first show that the \ordinal{n} singular value of the matrix $Q$ in \cref{Q} decays as $\frac{1}{n}(\ensuremath{{r_c}}/\ensuremath{R})^{n/2}$, i.e., marginally faster than exponentially. In the continuous limit ($\ensuremath{{n_p}},\ensuremath{{n_c}}\to\infty$), this corresponds to the decay of the eigenvalues of the single-layer operator with kernel $\fundsol$, whose eigenfunctions are the Fourier modes, since the operator is convolutional. For the Laplace equation, the potential defined in polar coordinates centered at $\ensuremath{\vector{c}}$ as \begin{equation*} v(r,\theta) = \begin{cases}(\ensuremath{R}/2n) (r/\ensuremath{R})^n e^{in\theta}~, & r\le \ensuremath{R}~, \\ (\ensuremath{R}/2n) (r/\ensuremath{R})^{-n}e^{in\theta}~, & \text{otherwise}~, \end{cases} \end{equation*} solves the \abbrevformat{PDE}\xspace everywhere except at $r=\ensuremath{R}$, where the jump in radial derivative is $e^{in\theta}$. We conclude that $v$ is the single-layer potential due to the $\ordinal{n}$ Fourier mode density. Substituting $r=\ensuremath{{r_c}}$, and recalling that the $\ordinal{n}$ singular value is the eigenvalue for frequency $n/2$, as the frequencies are in the range $-n/2$ to $n/2$, we conclude that $ \sigma_n = \frac{1}{n}(\ensuremath{{r_c}}/\ensuremath{R})^{n/2}$. The above argument also applies to the Stokes case, except that, due to the two vector components, the \ordinal{n} singular value of the matrix $Q$ corresponds to the eigenvalue for frequency $n/4$. The Helmholtz case\emdash/although there are \O{\omega} eigenvalues that do not decay\emdash/is asymptotically identical to Laplace \cite[Equation~(14)]{mfs}. To verify this asymptotic behavior, in \cref{fig:sing-vals} we show the decay of singular values for several kernels.
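The claimed decay is also easy to check numerically. This sketch (illustrative parameters, Laplace single-layer\xspace kernel) assembles the proxy-to-check matrix and verifies that the singular values shrink at the predicted geometric rate:

```python
import numpy as np

# proxy-to-check matrix Q for the Laplace single-layer kernel
# (illustrative parameters with r_c/R = 1/4)
n_p, n_c, r_c, R = 64, 128, 1.0, 4.0
proxy = R * np.exp(2j * np.pi * np.arange(n_p) / n_p)
check = r_c * np.exp(2j * np.pi * np.arange(n_c) / n_c)
Q = np.log(np.abs(check[:, None] - proxy[None, :]))

sigma = np.linalg.svd(Q, compute_uv=False)
# prediction: sigma_n ~ (r_c/R)^{n/2} / n, so moving two indices down
# the spectrum shrinks the singular value by roughly r_c/R = 1/4
n = np.arange(8, 30)
assert np.all(sigma[n + 2] / sigma[n] < 0.5)
```

The singular values come in near-degenerate pairs (frequencies $\pm m$), so the geometric factor is observed per pair of indices, matching the $(\ensuremath{{r_c}}/\ensuremath{R})^{n/2}$ rate.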
\begin{figure}[!b] \centering \small \setlength\figureheight{1.6in} \setlength\figurewidth{2.2in} \includepgf{sing-values} % \mcaption{fig:sing-vals}{Singular values of proxy to check matrix}{ The solid lines are the singular values of $Q$ for different $\ensuremath{R}$ and different single-layer\xspace kernels, and the dashed lines labeled $(T)$ are the theoretical decay: $\frac{1}{n}(\ensuremath{{r_c}}/\ensuremath{R})^{n/2}$ for Laplace or Helmholtz, and $\frac{1}{n}(\ensuremath{{r_c}}/\ensuremath{R})^{n/4}$ for Stokes, where $n$ denotes the index of the singular value. Other parameters are $\ensuremath{{r_c}}=1$, $\ensuremath{{n_p}}=128$, $\ensuremath{{n_c}}=256$. For the Helmholtz problem, the dashed lines show the asymptotic bound for the singular values and are not accurate for small indices; the interested reader is referred to \cite[Eq.~(14)]{mfs}. } \end{figure} When the pseudoinverse of $Q$ is computed based on \cref{eq:pinv-sing-val}, only $k$ singular values lying above $\acc_\mathrm{pinv} \sigma_1$ are retained. The corresponding singular vectors approximate the lowest Fourier modes up to frequency $|n|<k/2$ (in the scalar \abbrevformat{PDE}\xspace cases). Thus, equating up to constants the $\ordinal{k}$ singular value above to $\acc_\mathrm{pinv}$, the ranks of the matrices in the pseudoinverse are \begin{equation} k \;\approx\; \min \left(k_m, \, \ensuremath{{n_p}}\right),\qquad k_m = \,2 \frac{\log(1/\acc_\mathrm{pinv})}{\log(\ensuremath{R}/\ensuremath{{r_c}})} ~, \label{k} \end{equation} and the highest (Nyquist) frequency they can represent is $k/2$. The values of $\un$ at the check points have error bounded by $\ensuremath{e_c}$, so in this model we expect the errors to be amplified (by considering the local expansion as above) to become $\ensuremath{e_c} (\ensuremath{{r}}/\ensuremath{{r_c}})^{k/2}$ at the evaluation radius $\ensuremath{{r}}$. 
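The rank formula \cref{k} and the resulting amplification of check-point errors are simple to evaluate; the following sketch (the function name is hypothetical) uses the parameters quoted earlier, $\acc_\mathrm{pinv}=\sci{-9}$ and $\ensuremath{R}/\ensuremath{{r_c}}=8$:

```python
import math

def qbkix_rank(eps_pinv, R_over_rc, n_p):
    """Numerical rank from eq. (k): k ~ min(2*log(1/eps)/log(R/r_c), n_p)."""
    k_m = 2.0 * math.log(1.0 / eps_pinv) / math.log(R_over_rc)
    return min(k_m, n_p)

# parameters used in the paper: eps_pinv = 1e-9, R/r_c = 8
k = qbkix_rank(1e-9, 8.0, 64)
# check-point errors e_c are amplified by (r/r_c)^{k/2}; with r/r_c = 3
amplification = 3.0 ** (k / 2.0)
assert 19.0 < k < 21.0            # about 20 singular values retained
assert 1e4 < amplification < 1e5  # roughly 6e4-fold amplification of e_c
```

This makes the trade-off concrete: tightening $\acc_\mathrm{pinv}$ or widening $\ensuremath{R}/\ensuremath{{r_c}}$ raises $k$, which improves the exact-arithmetic terms but amplifies $\ensuremath{e_c}$ more strongly at the evaluation radius.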
\subsection{Error bounds and optimal parameter choices\label{ssc:error-bounds}} Combining the results from \cref{ssc:err-exact,ssc:err-float} for a kernel-independent\xspace expansion, using $\ensuremath{{n_p}}$ proxy points, the error is bounded by \begin{alignln} \label{eqn:err-bound} \ensuremath{e}(\ensuremath{{r}}) \; \le \left\{\begin{array}{ll} C \left( \dfrac{\ensuremath{{r}}}{\rho} \right)^{k/2}+ C \ensuremath{e_c} \left(\dfrac{\ensuremath{{r}}}{\ensuremath{{r_c}}}\right)^{k/2}, & \rho \ensuremath{{r}} < \ensuremath{R}^2~, \\[10pt] C \left( \dfrac{\ensuremath{{r}}}{\ensuremath{R}} \right)^{\ensuremath{{n_p}}}+ C \ensuremath{e_c} \left(\dfrac{\ensuremath{{r}}}{\ensuremath{{r_c}}}\right)^{k/2},& \rho \ensuremath{{r}} > \ensuremath{R}^2~, \end{array}\right. \end{alignln} where $C$ represents possibly different constants in each case (omitting the case $\rho r = \ensuremath{R}^2$). \begin{figure}[!b] \centering \small \setlength\figureheight{2in} \setlength\figurewidth{2.2in} \hspace{-3em}\includepgf{error-bounds} \mcaption{fig:err-bound}{Error bounds for Laplace \abbrevformat{QBKIX}\xspace with known singularity}{Errors $\ensuremath{e}$ observed (solid lines) and predicted by \cref{eqn:err-bound} (dashed lines) for a single expansion with different singularity distances $\rho=2\ensuremath{R}, \ensuremath{R},\text{~and~}0.8\ensuremath{R}$, and different numbers of proxy points $\ensuremath{{n_p}}$. The expansion is centered at $\vector{c} = [0,0]$ and the solution $\u(\vx) = -\log|\vx-\vx_0|$, $\vx_0 = \rho e^{1\ii/19}$ is a harmonic function with a singularity at distance $\rho$. Laplace single-layer\xspace kernel is used for the expansion. The error is the maximum error over the $\disc{B}{r}{\vector{c}}$ as defined in \cref{sup}. 
The proxy to check radius ratio is $\ensuremath{R}/\ensuremath{{r_c}}=8$, the number of checks is set to $\ensuremath{{n_c}}=2\ensuremath{{n_p}}$, $\ensuremath{e_c}=\sci{-14}$, and $k_m \approx 27$ (given by \cref{k} with $\acc_\mathrm{pinv}=\sci{-14}$). The constants $C$ in \cref{eqn:err-bound} were chosen to qualitatively match the trend lines (all set to $0.1$). } \end{figure} In \cref{fig:err-bound}, we show how this formula models the error growth for a single kernel-independent\xspace expansion interpolating a Laplace solution in free space with a known nearest singularity at various distances $\rho$, for a typical choice of ratio $\ensuremath{R}/\ensuremath{{r_c}} = 8$. The key observation is that, despite its simplicity, our model \cref{eqn:err-bound} explains well the observed error behavior. Other salient features of the plots include: \begin{itemize} \item As $r$ increases beyond $\ensuremath{{r_c}}$, errors grow rapidly, dominated by the second term in the error estimate. \item The error is mostly controlled by $k$: increasing $\ensuremath{{n_p}}$ beyond $k_m\approx 27$ (defined in \cref{k}) has no tangible effect unless $\rho\ensuremath{{r}}>\ensuremath{R}^2$ (i.e., right half of left plot). \end{itemize} \Cref{fig:param-reval} instead continuously varies $\ensuremath{R}/\rho$ (the inverse scaled singularity distance), showing the same effect: a relatively distant singularity allows a high-accuracy expansion out to larger $\ensuremath{{r}}/\ensuremath{R}$. \begin{figure}[!bt] \centering \setlength\figureheight{2in} \includegraphics[height=\figureheight]{param-reval} % \mcaption{fig:param-reval}{Error at different evaluation radii}{ The error for evaluation of a single expansion with various $\ensuremath{R}$ and $\ensuremath{{r}}$, but fixed $\ensuremath{{r_c}} = \rho/40$ and $\rho$. The expansion interpolates a harmonic function (similar to the one used in \cref{fig:err-bound}) with singularity at distance $\rho=4$, using the Laplace \dl kernel.
The dotted lines are $\ensuremath{{r}} = m \ensuremath{{r_c}}$ for $m=1,2,\text{ and } 3$. In practice, we have no direct control over $\frac{\ensuremath{R}}{\rho}$, and it is implied by the panel size. Here we chose $\ensuremath{{n_p}}=64$, and $\ensuremath{{n_c}}=2\ensuremath{{n_p}}$; the trends are the same for lower $\ensuremath{{n_p}}$ and $\ensuremath{{n_c}}$. } \end{figure} \para{Choice of parameters} Using the model \cref{eqn:err-bound}, one can make choices for $\ensuremath{R}$, $\ensuremath{{r_c}}$, $\ensuremath{\delta}$, and $\ensuremath{{n_p}}$ to achieve a desired accuracy $\ensuremath{\varepsilon}$. An unknown in applying this in a practical setting is the singularity distance $\rho$. However, in any high-accuracy choice of boundary quadrature, such as the adaptive panel quadrature of \cref{sec:formulation}, panels are refined such that the data $f$, and hence the density $\phi$ and the solution $\u$, are smooth on the local panel scale $L$; thus we expect singularities to be at least of order $L$ distant from the center. Indeed, we experimentally observe (in tests where we know the location of the singularity, e.g., \cref{fig:center-dist} or \cref{s:bvp}) that when the panels are adaptively refined, $L<\rho$, and consequently the convergence behavior is most like the left-hand plot of \cref{fig:err-bound}. Given the target accuracy $\ensuremath{\varepsilon}$ for the solution and the selected native quadrature order $q$, the adaptive refinement of the boundary sets the panel length $L$. We use the following steps to determine the values of the other parameters. Since the constants in the error estimates are problem dependent and unknown, we set them to unity. As a concrete example, we pick $\ensuremath{\varepsilon}=\sci{-10}$ and $q=16$. \begin{enumerate}[\quad(1)] \item \lhb{Setting $\ensuremath{\delta}$:} By construction, points farther than $2\ensuremath{\delta}$ from the boundary are evaluated using the native quadrature.
To meet the desired error $\ensuremath{\varepsilon}$ at these points, $\frac{L}{\ensuremath{\delta}} \approx 8 \ensuremath{\varepsilon}^{1/2q}$, which implies $\ensuremath{\delta}\approx L/4$ for $\ensuremath{\varepsilon}=\sci{-10}, q=16$. \item \lhb{Setting $k_m$, $\ensuremath{R}/\ensuremath{{r_c}}$, and $\ensuremath{{n_p}}$:} Requiring that the two terms in the error estimate (i.e., proxy point representation and extrapolation errors) have similar contributions at the on-surface point ($\ensuremath{{r}}=\ensuremath{\delta}$) and assuming that $L \approx \rho$, we can estimate the minimum required $k$ based on the proxy representation error in the rough regime: \begin{align} \left(\dfrac{\ensuremath{\delta}}{\rho}\right)^{k/2} \approx \ensuremath{\varepsilon} \quad \text{or} \quad k \approx \dfrac{2\log\ensuremath{\varepsilon}}{\log{(\ensuremath{\delta}/L)}}~, \end{align} implying $k\approx 32$ for $L/\delta=4, \ensuremath{\varepsilon}=\sci{-10}$. Since $k$ is bounded by $\min(k_m,\ensuremath{{n_p}})$, knowing the minimum $k$ implies lower bounds for $k_m$ and $\ensuremath{{n_p}}$. Therefore, reorganizing \cref{k}, we have ${\ensuremath{R}}/{\ensuremath{{r_c}}} = \acc_\mathrm{pinv}^{-2/k} \approx 7$, for $\acc_\mathrm{pinv}=\sci{-14}$. \item \lhb{Setting $\ensuremath{{r_c}}/\ensuremath{\delta}$ and $\beta$:} Inspecting the extrapolation error at an on-surface point, we have \begin{align} e_e(\ensuremath{\delta}) \approx \ensuremath{e_c} \left(\dfrac{\ensuremath{\delta}}{\ensuremath{{r_c}}}\right)^{k/2} \approx \left(\dfrac{L}{4 \beta (\ensuremath{\delta} - \ensuremath{{r_c}})}\right)^{2q} \left(\dfrac{\ensuremath{\delta}}{\ensuremath{{r_c}}}\right)^{k/2} \approx \left(\dfrac{L}{4 \beta \ensuremath{\delta}}\right)^{2q} \dfrac{1}{(1-\theta)^{2q}\theta^{k/2}}~, \end{align} where $\theta = \ensuremath{{r_c}}/\ensuremath{\delta}$. This expression attains its minimum at $\theta = \frac{k}{4q+k}$. For $q=16$ and $k=32$, we have $\theta = 1/3$.
As we require that the two terms in the error estimate have similar contributions, we use $e_e(\ensuremath{\delta})$ and estimate $\beta$: \begin{align} \beta \approx \frac{L/4\ensuremath{\delta}}{(1-\theta)\theta^{k/4q} \ensuremath{\varepsilon}^{1/2q}}~, \end{align} implying $\beta = 5$ for the parameter choices listed above. \end{enumerate} Note that we have not analyzed the effect of finite $\ensuremath{{n_c}}$, but find that the choice $\ensuremath{{n_c}}=2\ensuremath{{n_p}}$ behaves indistinguishably from the limit $\ensuremath{{n_c}}\to\infty$; we attribute this to the rapid convergence of the periodic trapezoid rule on the check points. \begin{figure}[!bt] \centering \small \setlength\figureheight{2.2in} \setlength\figurewidth{2.2in} \includepgf{center-dist} % \mcaption{fig:center-dist}{Error vs. center and singularity distances}{The induced error for singularities and centers at various distances from the boundary for the Laplace Dirichlet interior \abbrev{BVP}, in the domain shown in \cref{fig:inf-error}. The boundary data is generated by putting a Laplace singularity at distance $\tilde\rho$ from the boundary\emdash/the singularity distance to the center of expansion is $\rho \ge \tilde \rho + \ensuremath{\delta}$. The density is solved directly and \abbrevformat{QBKIX}\xspace is used only for evaluation. The error is computed using the known solution corresponding to the boundary data. The left plot shows the errors for the case with a fixed number of panels on the boundary ($M=40$ panels). In this plot, because $L$ is fixed, $L/\tilde\rho$ decreases with increasing $\tilde\rho$. The right plot shows the errors for adaptive refinement of the boundary with $\ensuremath{\acc_a}=\sci{-11}$. Here, since $L$ is chosen adaptively based on the boundary data, it increases as the solution becomes smoother. Because $L$ is chosen proportional to $\tilde\rho$, the error curves almost collapse onto a single curve.
We use $\ensuremath{{n_p}}=64$, $\ensuremath{{n_c}}=2\ensuremath{{n_p}}$, $\ensuremath{{r_c}}=\ensuremath{\delta}/3$ and $\ensuremath{R}=8\ensuremath{{r_c}}$. In both cases, the center of expansion is placed at distance $\ensuremath{\delta}$ from the boundary, based on the panel size. } \end{figure} \section{Algorithms\label{sec:formulation}} Given a closed curve $\Gamma\subset \mathbb{R}^2$ with interior $\Omega$, and Dirichlet data $f$ on $\Gamma$, our goal is to numerically solve the integral equation~\eqref{eqn:generic-den} for the density and evaluate the solution of the underlying \abbrevformat{PDE}\xspace at an arbitrary target point $\xx\in\overline{\Omega}$. We assume that $\Gamma$ is parametrized by a $2\pi$-periodic piecewise-smooth function $\vX(t)$, so that the arc length element is $\d s = |\vX'(t)|\d t$, $|\vX'(t)|$ is bounded from below, and that $\vX(t)$ and the data function $f(t)$ may be evaluated at any $t\in [0,2\pi)$. The boundary is subdivided into \emph{panels}, which can be of different lengths, on which the native quadrature rule is defined (we use Gauss--Legendre\xspace quadrature), at $q$ nodes $\xx_j$ per panel. We assume that the density is available as a vector of samples $\phi(\xx_j)$ at the quadrature nodes. \subsection{Single-point evaluation} We describe our method in the simplest form for computing the solution accurately at a given point $\vx$. We assume that there is a single point on $\Gamma$ closest to $\vx$, on a panel of length $L$. We assume that at a distance $2\delta$ along the normal to the panel at any point, the native quadrature meets the target accuracy of evaluation; thus we may assume that the distance from $\vx$ to the surface is less than $2\delta$. We discuss how $\delta$ is chosen and how to ensure that this condition holds after the algorithm formulation. The local geometric configuration of the various types of points used in our algorithm is shown in Figure~\ref{fig:schematic}.
The setup shown in the figure is for computing the potential accurately for any point $\vx$ inside a disk $\evald$ of radius $\delta$ centered at $\ensuremath{\vector{c}}$, touching the surface at a point $\vx_0$ on a panel of length $L$. The points we use in the algorithm are placed on two concentric circles with the same center as the evaluation disk $\evald$. The \emph{proxy points} on a circle $\proxyc$ of radius $R>\delta$, where we compute \emph{equivalent density} values, are used to approximate the solution inside $\evald$. The \emph{check points} $\vz_i$ are on a circle $\checkc$ of radius $r_c < \delta$. At these points, we evaluate the solution accurately by using a smooth quadrature on panels refined by a factor $\beta$. The check points are used to compute the equivalent density values at the proxy points as described below. \begin{figure}[!bt] \centering \setlength\figureheight{2.5in} \setlength\figurewidth{1.5\figureheight} \includepgf{schematic} % \mcaption{fig:schematic}{Schematic of a kernel-independent\xspace expansion}{Geometry of \abbrevformat{QBKIX}\xspace, with proxy and check circles centered at $\ensuremath{\vector{c}}$ near a panel of length $L$ of the boundary $\Gamma$ discretized with $q$ Gauss--Legendre\xspace sample points. The evaluation domain $\evald$ is a disc centered at $\ensuremath{\vector{c}}$ of radius $\ensuremath{\delta}$ (dashed circle abutting the boundary at $\vx_0$). The points $\vz_i$ are the check points on the circle $\checkc$ of radius $r_c$, and $\vy_j$ are the proxy points on the circle $\proxyc$ of radius $\ensuremath{R}$. For error analysis, the singularities of the exact solution are assumed to be at a distance farther than $\rho$ from $\ensuremath{\vector{c}}$.
Note that, for clarity, the relative sizes of circles and distances between samples are different from the ones actually used.} \end{figure} The algorithm depends on a number of parameters; these parameters need to be chosen appropriately to achieve an overall target accuracy. Specific choices are discussed in the next section. The key steps in the algorithm are \begin{enumerate}[(1)] \item \lhb{Set-up of proxy and check points.} We choose a center ${\vector{c}} \in \Omega$ at a distance $\delta$ from $\Gamma$, such that $\vx$ is no further from ${\vector{c}}$ than $\delta$. E.g., for $\vx\in\Gamma$, we set ${\vector{c}} = \vx - \delta \vn$, where $\vn$ is the outward normal. $\ensuremath{{n_p}}$ proxy points $\vy_j$ are equispaced on the circle of radius $R$ with center ${\vector{c}}$, where $R>\delta$ is of order $L$. Similarly, $\ensuremath{{n_c}}$ check points $\vz_i$ are equispaced on the concentric circle of radius $r_c<\delta$ (\cref{fig:schematic}). % \item \lhb{Upsampling the density.} Each panel is split into $\beta$ panels corresponding to equal ranges of $t$, to give a set of $\beta N$ fine-scale nodes $\tilde{\xx}_l$ with weights $\tilde{w}_l$. The global factor $\beta$ is chosen so that the solution can be evaluated accurately at the check points, i.e., at a distance $\ensuremath{\delta} -r_c$ from the surface. The density is interpolated from its original samples $\phi(\xx_j)$ on each panel, using \ordinal{q}-order Lagrange interpolation to the fine-scale nodes, to give the refined vector of samples $\tilde{\phi}_l$, $l=1,\ldots, \beta N$. % \item \lhb{Direct upsampled evaluation at check points.} The integral is evaluated at each check point $\vz_i$ using the fine-scale boundary native quadrature: \begin{equation} \label{upsampled} \un(\vz_i) = \sum_{l=1}^{\beta N} \frac{\partial \Phi(\vz_i,\tilde{\xx}_l)} {\partial \vn_{\tilde{\xx}_l}} \tilde{\phi}_l \tilde{w}_l~.
\end{equation} Denote by $\unv \defeq \{\un(\vz_i)\}_{i=1}^{\ensuremath{{n_c}}}$ the column vector of these values at the check points. % \item \lhb{Solving for the equivalent density values.} Next, we construct an $\ensuremath{{n_c}}\times\ensuremath{{n_p}}$ matrix $Q$ with elements \begin{equation} Q_{ij} = \Phi(\vz_i,\vy_j)~. \label{Q} \end{equation} Applying $Q$ to a vector of density values at the proxy points computes a periodic trapezoidal rule approximation to the single-layer potential corresponding to this density, evaluated at the check points. Then we solve a small, dense, and ill-conditioned linear system \begin{equation} Q \bm{\alpha} = \unv~, \label{linsys} \end{equation} in the least-squares sense, to get the set of proxy density values $\bm{\alpha} := \{\alpha_j\}_{j=1}^{\ensuremath{{n_p}}}$. The ill-conditioning arises from the exponential decay of singular values of the single-layer operator between concentric circles (see \cref{fig:sing-vals}). Despite this, if \cref{linsys} is solved in a backward-stable manner, a high-accuracy result is obtained (cf.~\cite{mfs}; we explain the details below for completeness). % \item \lhb{Evaluation of the proxy sources at the target.} Finally, the equivalent density is evaluated at the target $\vx$, \begin{equation} \label{eqn:localexp} \uq(\vx) = \sum_{j=1}^{\ensuremath{{n_p}}} \scalard{\alpha}_j \fundsol(\vx,\vy_j)~. \end{equation} We may view this as an approximation of the true solution $u$ in the basis of fundamental solutions centered at the proxy points, which holds to high accuracy in the disk $\evald$. \end{enumerate} \Cref{fig:stages} illustrates the stages of \abbrevformat{QBKIX}\xspace evaluation for a set of target points lying in a single disk $\evald$. The final evaluation of \cref{eqn:localexp} over the disc of target points has around 12 digits of accuracy.
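To make the five steps concrete, the following self-contained sketch runs the whole pipeline for the interior Laplace double-layer potential with density $\phi\equiv 1$, whose interior value is exactly $-1$. For brevity it uses a global periodic trapezoid rule on a unit-circle boundary in place of panels, and illustrative values of $\delta$, $r_c$, $R$, $\ensuremath{{n_p}}$, and $\ensuremath{{n_c}}$ chosen in the spirit of the text; it is not the paper's implementation.

```python
import numpy as np

def laplace_dlp(targets, src, normals, weights, density):
    """Laplace double-layer potential (1/2pi) int d/dn_y log(1/|x-y|) phi ds,
    with points in R^2 stored as complex numbers."""
    r = targets[:, None] - src[None, :]
    kern = np.real(r * np.conj(normals[None, :])) / np.abs(r) ** 2
    return kern @ (density * weights) / (2 * np.pi)

# boundary: unit circle, N-point periodic trapezoid rule (stands in for panels)
N = 512
t = 2 * np.pi * np.arange(N) / N
y_b, n_b, w_b = np.exp(1j * t), np.exp(1j * t), np.full(N, 2 * np.pi / N)
phi = np.ones(N)                      # density phi == 1  =>  u == -1 inside

# step 1: center at distance delta inside the boundary; check and proxy circles
delta = 0.1
c = 1.0 - delta
r_c, R, n_p, n_c = delta / 3, 8 * delta / 3, 64, 128
z = c + r_c * np.exp(2j * np.pi * np.arange(n_c) / n_c)   # check points
yp = c + R * np.exp(2j * np.pi * np.arange(n_p) / n_p)    # proxy points

# steps 2-3: evaluate the potential at the check points (this N-point rule is
# already accurate at the check radius, so no further upsampling is performed)
u_check = laplace_dlp(z, y_b, n_b, w_b, phi)

# step 4: solve Q alpha = u_check by truncated SVD, applied in two steps
Q = -np.log(np.abs(z[:, None] - yp[None, :])) / (2 * np.pi)
U, s, Vh = np.linalg.svd(Q, full_matrices=False)
keep = s > 1e-14 * s[0]
alpha = Vh[keep].T @ ((U[:, keep].T @ u_check) / s[keep])

# step 5: evaluate the expansion at a target very close to the boundary
x = np.array([0.999 + 0.0j])
u_qbkix = (-np.log(np.abs(x[:, None] - yp[None, :])) / (2 * np.pi)) @ alpha
err_qbkix = abs(u_qbkix[0] + 1.0)     # exact interior value is -1
err_naive = abs(laplace_dlp(x, y_b, n_b, w_b, phi)[0] + 1.0)
# err_qbkix retains many digits, while the naive smooth rule loses essentially
# all accuracy this close to the boundary
```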
\begin{figure}[!bt] \centering \small \setlength\figureheight{1.4in} \setlength\figurewidth{1.7in} \includepgf{error-stages} % \mcaption{fig:stages}{Stages of \abbrevformat{QBKIX}\xspace construction}{ The stages given in \cref{sec:formulation} are illustrated using plots of the $\log_{10}$ of the evaluation error near the boundary, for the double-layer density $\phi\equiv 1$ for Laplace's equation. The evaluation disc $\evald$ (dashed circle) and check circle $\checkc$ (solid circle) are shown; proxy points are not shown. } \end{figure} \para{Handling the ill-conditioned linear solves} The ill-conditioned system \cref{linsys} is solved by applying a regularized pseudo-inverse, as follows. Let $\acc_\mathrm{pinv}$ be the desired relative accuracy for inversion; typically we set $\acc_\mathrm{pinv}=\sci{-14}$. Then, taking the singular value decomposition (\abbrevformat{SVD}) \cite{nla} $Q = U \Sigma V^*$ with $\Sigma = \mbox{diag}\{\sigma_j\}$ being the diagonal matrix of singular values, we write $\Sigma^\dagger \defeq \mbox{diag}\{\sigma^\dagger_j\}$ where \begin{equation} \sigma^\dagger_j = \left\{ \begin{array}{ll} \sigma_j^{-1}, & \quad \sigma_j>\acc_\mathrm{pinv} \sigma_1,\\ 0, & \quad \mbox{otherwise}. \end{array} \right.\label{eq:pinv-sing-val} \end{equation} Then we use the solution \begin{equation} \bm{\alpha} := V (\Sigma^\dagger U^* \unv)~. \label{pinv} \end{equation} Note that the matrices $U^*$ and $V$ must be applied in two separate steps (as indicated by the parentheses) for backward stability \cite{nla}, since a matrix-vector multiply with the single pseudo-inverse matrix $Q^\dagger := V \Sigma^\dagger U^*$ is unstable due to round-off error caused by its large entries. If $k$ is the number of singular values greater than $\acc_\mathrm{pinv}\sigma_1$, i.e., the numerical $\acc_\mathrm{pinv}$-rank of the matrix $Q$, the factors $V$ and $U^*$ have sizes $\ensuremath{{n_p}} \times k$ and $k \times \ensuremath{{n_c}}$ respectively.
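A minimal sketch of this regularized solve (ours, not the paper's implementation) applies the truncated factors one at a time:

```python
import numpy as np

def apply_reg_pinv(Q, b, eps_pinv=1e-14):
    """Regularized pseudo-inverse solve of Q a = b.

    Singular values below eps_pinv * sigma_1 are discarded, and the factors
    U* and V are applied in two separate steps; the pseudo-inverse matrix
    V Sigma^+ U* itself is never formed, since its large entries would cause
    round-off problems."""
    U, s, Vh = np.linalg.svd(Q, full_matrices=False)
    keep = s > eps_pinv * s[0]
    y = (U[:, keep].T @ b) / s[keep]   # Sigma^+ (U* b)
    return Vh[keep].T @ y              # V (Sigma^+ U* b)

# sanity check on a well-conditioned random system: agrees with numpy's pinv
rng = np.random.default_rng(0)
Q = rng.standard_normal((8, 6))
b = rng.standard_normal(8)
assert np.allclose(apply_reg_pinv(Q, b), np.linalg.pinv(Q) @ b)
```

(Real matrices are assumed for brevity, so transposes stand in for conjugate transposes.)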
\para{Parameter summary} The algorithm described above uses a number of parameters, which we summarize here. The following parameters are defined globally: \begin{itemize} \item The quadrature order $q$, which determines the number of samples per panel, the far-field evaluation accuracy, and, together with $\beta$, the accuracy of evaluation at the check points. This parameter is chosen based on the desired overall accuracy. We use $q = 16$, which is sufficient for full double precision of integration in the far field. \item The panel refinement factor $\beta$, which needs to be chosen to maintain the desired accuracy for check-point evaluation. \item The numbers of proxy points $\ensuremath{{n_p}}$ and check points $\ensuremath{{n_c}}$; the former determines how accurate the approximation inside $\evald$ can be, and the latter is chosen to provide sufficient sampling. \end{itemize} Three additional parameters, the accurate evaluation distance $\delta$, the proxy point circle radius $\ensuremath{R}$, and the check point circle radius $r_c$, are panel-dependent, and are chosen with respect to the panel size $L$. A careful choice of all of these, as fractions of $L$, is needed to achieve a target error without requiring excessive refinement. We discuss the choice of these parameters in Section~\ref{sec:error}. \para{Defining panels} In our experiments, we consider two ways of defining panels. The first approach is primarily needed to understand the convergence of the method with respect to the number of panels, i.e., for a given number of panels, we determine the error. In this case, we simply partition the parametric domain of $\vX(t)$ into $M$ equal-sized intervals, with one panel corresponding to each interval. We assume the parametrization to be sufficiently close to an arclength parametrization, so that the panel length has little variation, and choose $M$ to be fine enough so that the geometric condition on the check points is satisfied.
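The first, uniform approach can be sketched as follows; the ellipse boundary here is a hypothetical example, not a geometry from the paper:

```python
import numpy as np

def uniform_panels(X, Xp, M, q=16):
    """Split the parameter domain [0, 2*pi) into M equal panels with q
    Gauss-Legendre nodes each; return nodes X(t_j) and weights |X'(t_j)| w'_j."""
    xg, wg = np.polynomial.legendre.leggauss(q)        # rule on [-1, 1]
    h = 2 * np.pi / M
    t = (h * np.arange(M)[:, None] + h * (xg + 1) / 2).ravel()
    w = np.tile(h / 2 * wg, M)                          # parametric weights w'_j
    return X(t), np.abs(Xp(t)) * w

# hypothetical boundary: a 2:1 ellipse, X(t) = 2 cos t + i sin t
X = lambda t: 2 * np.cos(t) + 1j * np.sin(t)
Xp = lambda t: -2 * np.sin(t) + 1j * np.cos(t)
nodes, weights = uniform_panels(X, Xp, M=20)
# the weights integrate arc length: sum(weights) ~ 9.6884 (ellipse perimeter)
```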
In a more practical scenario, when a target error is specified, we need to determine panel sizes adaptively. The key requirement that needs to be satisfied by panels is that the accuracy of check-point evaluation at stage 2 matches the target accuracy in the far field (i.e., points farther than $2\ensuremath{\delta}$ from the boundary). The adaptive refinement starts with one panel covering the entire boundary, and then recursively splits panels into two equal pieces in parameter $t$, until all panels are deemed \emph{admissible} or their length is less than a set tolerance $\ensuremath{\varepsilon}$. A panel is admissible if \begin{inparaenum}[(i)] \item the interpolation of $\vX(t)$ and $f(t)$ from a $q$-node panel at the collocation points of the two $q$-node Gauss--Legendre\xspace panels (obtained by splitting the coarse panel into two pieces) matches the direct evaluation of $\vX$ and $f$ on the finer nodes, to a maximum absolute tolerance $\ensuremath{\acc_a}$, which we choose as $\sci{-11}$ unless stated otherwise; \item its parameter length is no more than twice that of its neighbors; \item the length of the panel does not exceed a given fraction of the minimal radius of curvature at a point of the panel, \emph{or} is less than a minimal length proportional to the target error; and \item any check point corresponding to a point $\xx$ is not closer than $\ensuremath{\delta}-\ensuremath{{r_c}}$ to any point on the surface. \end{inparaenum} The second criterion ensures that the panels are the leaves of a \emph{balanced} binary tree, which is needed for accurate evaluation of integrals at the check points. For domains with sharp corners, the fourth and second conditions imply dyadic refinement of panel length, bounded below by the minimum panel length $\ensuremath{\varepsilon}_l$.
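The interpolation test (i) is the heart of the refinement loop; the sketch below (with a hypothetical smooth data function, and ignoring the balancing and geometric criteria (ii)--(iv)) bisects recursively until each panel's $q$-node interpolant resolves the data:

```python
import numpy as np

def gl_nodes(a, b, q=16):
    xg, _ = np.polynomial.legendre.leggauss(q)
    return a + (b - a) * (xg + 1) / 2

def interp_error(f, a, b, q=16):
    """Criterion (i): interpolate f from the q nodes of panel [a, b] to the
    nodes of its two halves; return max deviation from direct evaluation."""
    tc = gl_nodes(a, b, q)
    tf = np.concatenate([gl_nodes(a, (a + b) / 2, q),
                         gl_nodes((a + b) / 2, b, q)])
    scale = lambda t: 2 * (t - a) / (b - a) - 1
    # interpolate in a Legendre basis (well-conditioned at Gauss nodes)
    coeffs = np.linalg.solve(
        np.polynomial.legendre.legvander(scale(tc), q - 1), f(tc))
    return np.max(np.abs(np.polynomial.legendre.legval(scale(tf), coeffs)
                         - f(tf)))

def refine(f, a=0.0, b=2 * np.pi, tol=1e-11, min_len=1e-6):
    """Recursive bisection until each panel resolves f to tol."""
    if interp_error(f, a, b) < tol or b - a < min_len:
        return [(a, b)]
    m = (a + b) / 2
    return refine(f, a, m, tol, min_len) + refine(f, m, b, tol, min_len)

f = lambda t: np.exp(np.sin(3 * t))   # hypothetical smooth boundary data
panels = refine(f)
```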
In both cases, the result is a set of $N$ nodes $\xx_j = \vX(t_j)$, where $t_j$ are the parameter values of the nodes, with weights $w_j = |\vX'(t_j)| w'_j$, where $w'_j$ are the Gauss--Legendre\xspace weights scaled by the panel parametric lengths. This native quadrature approximates the boundary condition $f$ with target accuracy $\ensuremath{\acc_a}$. It follows from \cref{eqn:generic-den} that this also holds for the density $\phi$, as $\phi$ is no less smooth than $f$ and $\vX$. \subsection{On-surface evaluation for iterative solution of the linear system} \label{s:sided} As discussed in the introduction, one context where singular quadratures are needed is for applying $A$, the matrix discretization of the operator $(-\mbox{\small $\frac{1}{2}$} I +D)$, to the current density vector $\bm\phi$ during the iterative solution of \cref{A}. This matrix-vector multiplication is equivalent to evaluation of the interior limit of the double-layer potential at the nodes due to the smooth interpolant of the density vector. As with \abbrevformat{QBX}\xspace \cite[Sec.~3.5]{QBX}, one may exploit this in two different ways. \begin{itemize} \item One-sided \abbrevformat{QBKIX}\xspace: as stated above, we use the interior limit of the potential at the nodes for $A\bm\phi$. \item Two-sided \abbrevformat{QBKIX}\xspace: we average the interior and exterior limits of the potential at the nodes, which, by canceling the jump relation terms, applies a matrix approximation to the operator $D$. We then explicitly add $-\mbox{\small $\frac{1}{2}$} \bm\phi$ to the answer. \end{itemize} Although mathematically equivalent, these two variants smooth high-frequency components of the density differently: one-sided \abbrevformat{QBKIX}\xspace tends to dampen these components, leading to an accumulation of eigenvalues of $A$ around zero. This has a negative impact on convergence.
In contrast, for two-sided \abbrevformat{QBKIX}\xspace, since the approximation of $D$ tends to damp high-frequency components, the explicit inclusion of $-\mbox{\small $\frac{1}{2}$} I$ ensures that these components end up being multiplied by a number very close to $-\mbox{\small $\frac{1}{2}$}$, which leads to better clustering of the spectrum and improved convergence rates. We present a numerical comparison of these two alternatives in \cref{ssc:spectrum}. \subsection{Efficiency considerations and computational complexity\label{s:complex}} Given a set of evaluation points $\xx$, the brute-force approach is to run the algorithm described above, including construction of check and proxy points, for each evaluation point separately. This is highly inefficient, and the following obvious optimizations can be applied: \begin{itemize} \item The upsampled density on the fine-scale nodes need be computed only once, and each expansion center may be chosen to cover several targets; this requires increasing the evaluation disk radius $\delta$ and adjusting the other parameters accordingly. \item The \abbrevformat{SVD} of the matrices $Q$ may be precomputed. For translation- and scale-invariant kernels (i.e., all kernels we consider except Yukawa and Helmholtz), these matrices do not depend on the choice of the center and circle radii, as long as the ratio $\ensuremath{R}/\ensuremath{{r_c}}$ is fixed. \item One may use the kernel-independent \abbrevformat{FMM} for evaluation of the solution at the check points for all target points at once. \end{itemize} We consider the complexity of using \abbrevformat{QBKIX}\xspace for the task of on-surface evaluation at all boundary nodes $\vx\in\Gamma$. For a boundary with $M$ panels and $q$-node Gauss--Legendre\xspace quadrature on each, there are $N=Mq$ nodes in total. We use the conservative assumption that a distinct set of check and proxy points is used for each of the targets.
Then, using \abbrevformat{KIFMM}, the cost of evaluating the boundary integral from the $\beta$-refined boundary at the check points is \O{(\beta + \ensuremath{{n_c}})N}. We assume that the factorization of the pseudo-inverse for computing the equivalent densities $\bm{\alpha}$ is precomputed. The cost of applying the factors $V$ and $U^*$, of sizes $\ensuremath{{n_p}} \times k$ and $k \times \ensuremath{{n_c}}$, for all target points is \O{k(\ensuremath{{n_c}}+\ensuremath{{n_p}})N}. The cost of evaluating the approximation from the proxy density values at the target points is \O{N\ensuremath{{n_p}}}. We conclude that the overall cost is \O{(\beta + \ensuremath{{n_c}} + k\ensuremath{{n_c}} + k\ensuremath{{n_p}} + \ensuremath{{n_p}})N}, which for typical choices $\beta=4$ and $\ensuremath{{n_c}}=2\ensuremath{{n_p}}$ reduces to \O{k\ensuremath{{n_p}} N}. We see that the scheme is linear in $N$, but with a prefactor of order $k^2$ (since, as discussed in the next section, $\ensuremath{{n_p}}$ is of order $k$). The two-sided variant involves another overall factor of 2. If the same check and proxy points are used for a number of targets, an additional, potentially very large, constant-factor speedup can be obtained. The speedup factor is proportional to the average number of targets handled by each set of check and proxy points. \section{Introduction\label{sec:intro}} The boundary integral method is a powerful tool for solving linear partial differential equations ({\abbrevformat{PDE}\xspace}s) of classical physics with piecewise constant material coefficients, with applications including electromagnetic scattering, molecular electrostatics, viscous fluid flow, and acoustics.
It involves exploiting Green's theorems to express the solution in terms of an unknown ``density'' function defined on the domain boundaries or material interfaces, using the physical boundary condition to formulate an integral equation for this density, and finally obtaining a linear algebraic system via Galerkin, Nystr\"om\xspace, or other discretization. Compared to commonly used differential formulations, boundary integral methods have a number of advantages: decreasing the dimension of the problem that needs to be discretized, avoiding meshing the volume, and improving conditioning. For instance, the integral equation can often be chosen to be a Fredholm equation of the second kind, resulting in a well-conditioned linear system which can be solved by a Krylov subspace method in a few iterations. All these considerations are particularly important for problems with complicated and moving geometries \cite{helsing_close,rahimian2010b,corona2015,ojalastokes}. The main difficulty in using boundary integral methods is the need to evaluate singular and nearly-singular integrals: \begin{inparaenum}[(i)] \item Evaluating system matrix entries requires evaluation of the potential on the surface, which involves a \emph{singular} integral; \item Once the density is solved for, the desired solution must still be evaluated in the form of a potential. As an evaluation point approaches the boundary of the domain, the peak in the resulting integrand becomes taller and narrower, giving rise to what is referred to as a \emph{near-singular\xspace} integral. When a quadrature scheme designed for smooth integrands is used, the result is an arbitrarily large loss of accuracy if the distance from points to the surface is not bounded from below \cite[Section~7.2.1]{atkinson} and \cite{ce}.
\end{inparaenum} \begin{figure}[!tb] \subfloat[]{\includegraphics[width=3in,height=1.4in]{near-singular-gl}} \subfloat[]{\includegraphics[width=3in,height=1.4in]{near-singular-trapz}} % \mcaption{fig:near-sing}{}{Evaluation error plotted in the solution domain due to approximating the Laplace \dl potential \cref{lapDLP} using a quadrature designed for smooth functions. Logarithm of absolute error, $\log_{10} |\un(\vx)-\u(\vx)|$, where $\u$ is the true solution and $\un$ is the discrete approximation using smooth quadrature is plotted for the case of constant density $\phi\equiv 1$. (a)~shows composite quadrature with $M=7$ (left) or $M=15$ (right) panels each with $q=10$ Gauss--Legendre\xspace nodes. (b)~shows the global composite trapezoid rule with $N=64$ (left) or $N=128$ (right) nodes. } \end{figure} \Cref{fig:near-sing} illustrates the near-singular evaluation of the solution $u$ of the Dirichlet Laplace equation in a simple smooth domain, which is represented by the double-layer potential \begin{equation} u(\vx) = \frac{1}{2\pi} \int_\Gamma \frac{\partial}{\partial \vn_\vy} \log \frac{1}{\|\vx-\vy\|}\cdot\phi(\vy) \d s(\vy) ~, \label{lapDLP} \end{equation} where $\phi$ is the density defined on the boundary $\Gamma$. The growth in error as $\vx$ approaches $\Gamma$ is apparent in all four plots (showing panel-based and global quadratures with different numbers of nodes $N$). Although the width of the high-error layer near the boundary shrinks like $1/N$ \cite{ce}, the error always reaches \O{1} at the boundary. The goal of this paper is to present a flexible scheme that handles both tasks (singular and near-singular evaluation) to high-order accuracy in a kernel-independent (i.e., {\abbrevformat{PDE}\xspace}-independent) manner. \para{Related work} Designing quadrature schemes for singular and near-singular\xspace integrals has a long and rich history \cite{atkinson,LIE}. 
Until recently, quadrature methods were designed specifically for either on-surface evaluation or near-surface evaluation. Many of the on-surface quadrature schemes are specific to a certain type of kernel~(singularity), e.g., $\log |\vr|$ in \twod or $1/|\vr|$ in \threed \cite{kapur,alpert,helsing,kolm2001,kress95,sidi1988,strain1995,yarvin1998,bremer3d}; the former case is reviewed in \cite{hao}. A popular method for on-surface quadrature is product integration (in \twod, for the global trapezoid rule see \cite[Section~4.2]{atkinson} or \cite[Section~12.3]{LIE}, and for panel-based rules see \cite{helsing_close}). In this context, an analytic convolution of the kernel with each function in some basis set is found, reducing evaluation of the integral to projection of the boundary density onto that basis set. Another approach for on-surface evaluation is singularity subtraction, where the integrand is modified by subtracting an expression that eliminates its singularity \cite[Chapter~2]{davisrabin} and \cite{pozrikidis,jarvenpaa2003}. However, this leaves higher-order singularities in the kernel, which make the higher derivatives of the kernel unbounded, limiting the accuracy of the quadrature scheme. Alternatively, for weakly singular kernels, one can use transformations that cancel the singularity by the decay of the area element (e.g., in \threed using the Duffy transformation \cite{duffy1982} or polar coordinates) \cite{brunoFMM,graglia2008,khayat2005,schwab1992,ying06,farina2001,johnson1989,Veerapaneni2011,Ganesh2004,Graham2002}. To achieve a high convergence order, these methods need some form of partition of unity so that a high-order polar patch can be constructed around each point \cite{ying06}. One can also regularize the kernel and then exploit quadrature schemes for smooth functions \cite{lowengrub1993,Tornberg2004}.
However, to achieve higher accuracy, the effect of regularization needs to be corrected using analytic expressions (e.g.,~asymptotic analysis) for the integrand \cite{beale}. Finally, there exist special high-order quadrature schemes for domains with corners, either via reparametrization \cite{Kress91,LIE}, panel-wise geometric refinement \cite{helsingtut}, or custom generalized Gaussian quadratures \cite{bremer10,bremer2d}. We now turn to near-singular\xspace integrals (evaluation close to the surface), which have traditionally been handled as a distinct task \cite{helsing_close,lsc2d,beale,hackbusch1994,khayat2005,tlupova2013nearly,helsingtut}. Beale and coauthors~\cite{yingbeale,beale2015,tlupova2013nearly} use regularization methods to remove the singularity of the integral; to correct the error introduced by the regularization, they perform asymptotic analysis and find correction expressions. Some authors have used singularity cancellation (e.g., using local polar coordinates) in evaluating near-singular\xspace integrals \cite{hackbusch1994,khayat2005}. Interpolation along carefully-chosen lines connecting distant points (where a smooth quadrature is accurate) to an on-surface point has also been successful \cite{ying06,quaife}. Recently, unified approaches to on-surface and close evaluation have been proposed, the first being the \twod Laplace high-order global and panel-based quadratures of Helsing~and~Ojala~\cite{helsing_close}. This approach has been extended to near-singular\xspace Stokes single- and \dl kernels with global \cite{lsc2d} and panel-based \cite{ojalastokes} quadrature. The use of {\em local expansions}\emdash/analytic separation of variables for \abbrevformat{PDE}\xspace solutions, analogous to a Taylor series in the complex plane\emdash/for the evaluation of integrals near the boundary was introduced in \cite{ce}.
In this scheme, a refined smooth quadrature is needed to accurately evaluate the expansion coefficients via the addition theorem. It was observed that the expansion can also be used to evaluate at target points on the boundary of the domain, if certain conditions are satisfied \cite{QBX2}; this was used to construct a unified quadrature scheme\emdash/Quadrature by Expansion\xspace (\abbrevformat{QBX}\xspace)\emdash/for near and on-surface evaluation of integrals \cite{QBX}. Rachh~\cite{Rachh2016} recently showed how to efficiently combine \abbrevformat{QBX}\xspace evaluations with the fast multipole method. However, powerful as they are, \abbrevformat{QBX}\xspace schemes require both a local expansion and an addition theorem particular to each \abbrevformat{PDE}\xspace, which would be algebraically tedious especially for vector-valued {\abbrevformat{PDE}\xspace}s such as Stokes and elastostatics. This motivates the need for a scheme that can handle multiple {\abbrevformat{PDE}\xspace}s without code changes. The present work fills this gap. \para{Overview and model problems} As with \abbrevformat{QBX}\xspace, we construct an approximate representation for \abbrevformat{PDE}\xspace solutions in a small region abutting the boundary, then use it for near and on-surface evaluations. However, in contrast to \abbrevformat{QBX}\xspace, our representation is an {\em equivalent density} on a closed curve enclosing this region; when discretized, this gives a ring of ``proxy'' point sources (also known as the {\em method of fundamental solutions} \cite{Bo85}). Matching is done at a second smaller ring of ``check'' points where a refined smooth quadrature is accurate; thus the only dependence on the \abbrevformat{PDE}\xspace is via point-to-point kernel evaluations\emdash/the method is kernel-independent\xspace, and essentially \abbrevformat{PDE}\xspace-independent.
We focus on Dirichlet boundary-value problems \begin{align} \op{L} \u & = 0 \quad\text{in } \Omega~,\label{eqn:generic-pde}\\ \u &= \scalar{f} \quad\text{on } \Gamma~,\label{eqn:generic-bc} \end{align} where $\Omega$ is a simply-connected interior domain with smooth boundary $\Gamma$, for the following partial differential operators: \begin{equation} \op{L} \u = \begin{cases} \Delta \u & \text{Laplace}, \\ (\Delta-\lambda^2)\u & \text{Yukawa}, \\ ( \Delta+\omega^2)\u & \text{Helmholtz} \quad (\Im\omega\ge 0), \\ \Delta \u-\Grad \scalar{p} & \text{Stokes} \quad (\text{subject to } \Div \u =0), \\ \Delta \u+\frac{1}{1-2\nu}\Grad \Div \u & \text{Elastostatic}. \end{cases} \label{eqn:pde-type} \end{equation} To obtain well-conditioned formulations of the problem, we represent the solution of \cref{eqn:generic-pde,eqn:generic-bc,eqn:pde-type} for $\vx\in\Omega$ by the \dl potential \begin{equation} \label{eqn:generic-bi-sol} \u(\vx) = \conv{D}[\scalar{\phi}](\vx) \defeq \int_{\Gamma} \parderiv{\Phi(\vx,\vy)}{\vn_\vy} \scalar{\phi}(\vy)\, \d s(\vy)~, \end{equation} where $\Phi$ is the fundamental solution for the operator $\op{L}$, and $\phi$ is an unknown density. The fundamental solutions for the operators listed in \cref{eqn:pde-type} are given in \cref{apx:kernels}. A standard step (see, e.g., \cite{HW}) is now to substitute \cref{eqn:generic-bi-sol} into the boundary condition and use the jump relation for the potential to obtain the second-kind integral equation \begin{alignln} \label{eqn:generic-den} -\frac{1}{2} \scalar{\phi}(\vx) + (D\scalar{\phi})(\vx) = \scalar{f}(\vx), \quad\text{for } \vx \in \Gamma~, \end{alignln} where $D$ is the restriction of $\conv{D}$ to the curve. Here, the integral implicit in the integral operator $D$ must be taken in the principal value sense.
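A quick worked consistency check of the signs in \cref{eqn:generic-den}, which we include for clarity, is the Laplace case with constant density $\phi\equiv 1$: by the Gauss flux identities for the \dl kernel,
\begin{align*}
(D 1)(\vx) &= \frac{1}{2\pi}\,\mathrm{p.v.}\!\int_\Gamma
  \frac{\inner{(\vx-\vy)}{\vn_\vy}}{\|\vx-\vy\|^{2}} \,\d s(\vy) = -\frac{1}{2},
  & \vx &\in \Gamma, \\
\conv{D}[1](\vx) &= \frac{1}{2\pi}\int_\Gamma
  \frac{\inner{(\vx-\vy)}{\vn_\vy}}{\|\vx-\vy\|^{2}} \,\d s(\vy) = -1,
  & \vx &\in \Omega,
\end{align*}
so the left-hand side of \cref{eqn:generic-den} evaluates to $-\tfrac{1}{2}\cdot 1 + (D1)(\vx) = -1$, in agreement with the interior value of the potential \cref{lapDLP}.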
\para{Discretization and overall approach} In general, a smooth quadrature is a set of nodes $\xx_i \in \Gamma$ with associated weights $w_i$, such that \begin{alignln} \int_{\Gamma} \scalar{f} \d s \approx \sum_{i=1}^N w_i \scalar{f}(\xx_i)~, \label{eqn:smooth-quad} \end{alignln} holds to high accuracy for smooth functions on $\Gamma$\emdash/including the density $\phi$. In this work, we use a $q$-node Gauss--Legendre\xspace quadrature rule on panels, and for convergence tests, we increase the number of panels while holding $q$ fixed. Upon discretization, \cref{eqn:generic-den} is approximated by the linear system \begin{equation} \sum_{j=1}^N A_{ij} \phi_j \; = \; f(\vx_i), \qquad i=1,\ldots,N~, \label{A} \end{equation} whose solution $\vector{\phi} = \{\phi_j\}_{j=1}^N$ approximates the density values at the collocation points. In practice, for large problems, the matrix $A$ is not constructed explicitly; instead, the matrix-vector product $A\vector{\phi}$ is evaluated using the fast multipole method. We test the \abbrevformat{QBKIX}\xspace scheme both for applying the matrix $A$ (i.e., on-surface evaluation) and for evaluating the solution at arbitrary points, near-evaluation in particular. The system matrix elements are computed using the Nystr\"om\xspace method \cite[Ch.~12]{LIE}. If the kernel of $D$ is smooth on $\Gamma \times \Gamma$, we use a smooth Nystr\"om\xspace formula; e.g., for Laplace, \begin{equation} A_{ij} = \left\{\begin{array}{ll} \parderiv{\Phi(\xx_i,\xx_j)}{\vn_{\xx_j}} w_j, & i\neq j,\\ -\frac{1}{2} -\frac{\kappa(\xx_j)}{4\pi}w_j,& i=j,\end{array}\right. \label{Anyst} \end{equation} where $\kappa(\xx)$ is the curvature at $\xx\in\Gamma$. This discretization achieves super-algebraic convergence. However, for Yukawa and Helmholtz in \twod, and for all \threed elliptic kernels, singular quadrature is needed.
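To make the discretization concrete, the following sketch (our illustration, with assumptions not from the paper: a global trapezoid rule on the unit circle instead of Gauss--Legendre\xspace panels, so $w_j = 2\pi/N$ and $\kappa\equiv 1$, and a dense direct solve instead of \abbrevformat{GMRES}\xspace with a fast multipole matvec) assembles \cref{Anyst}, solves \cref{A}, and evaluates \cref{eqn:generic-bi-sol} at a well-separated interior target, where the smooth rule is accurate.

```python
import math

def gauss_solve(A, b):
    """Dense Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            m = M[r][k]/M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= m*M[k][c]
    x = [0.0]*n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c]*x[c] for c in range(k + 1, n)))/M[k][k]
    return x

def interior_dirichlet_error(N, x0=(2.0, 0.0), xt=(0.3, 0.2)):
    """Solve -phi/2 + D phi = f on the unit circle via the smooth Nystrom
    rule (trapezoid weights w = 2*pi/N, curvature kappa = 1), with data
    f(x) = log|x - x0| from a point source x0 outside the domain; then
    evaluate u = D[phi] at the interior target xt and return the error."""
    nodes = [(math.cos(2*math.pi*j/N), math.sin(2*math.pi*j/N)) for j in range(N)]
    w = 2*math.pi/N
    A = [[0.0]*N for _ in range(N)]
    for i, xi in enumerate(nodes):
        for j, yj in enumerate(nodes):
            if i == j:
                A[i][j] = -0.5 - w/(4*math.pi)   # diagonal: -1/2 - kappa*w/(4*pi)
            else:
                r0, r1 = xi[0] - yj[0], xi[1] - yj[1]
                A[i][j] = w/(2*math.pi)*(r0*yj[0] + r1*yj[1])/(r0*r0 + r1*r1)
    f = [0.5*math.log((x[0] - x0[0])**2 + (x[1] - x0[1])**2) for x in nodes]
    phi = gauss_solve(A, f)
    u = sum(w/(2*math.pi)*((xt[0] - y[0])*y[0] + (xt[1] - y[1])*y[1])
            / ((xt[0] - y[0])**2 + (xt[1] - y[1])**2)*p
            for y, p in zip(nodes, phi))
    exact = 0.5*math.log((xt[0] - x0[0])**2 + (xt[1] - x0[1])**2)
    return abs(u - exact)

print(interior_dirichlet_error(64))   # error decays super-algebraically in N
```

Because the target lies well inside the domain, the final evaluation needs no special quadrature; it is precisely the near-boundary targets that motivate the scheme of this paper.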
In contrast to established approaches using specialized singular quadratures, we follow the idea underlying the \abbrevformat{QBX}\xspace method: \emph{applying $A$ to a vector $\bm\phi$ is equivalent to evaluating the interior limit of the double-layer potential due to a smooth density interpolated from $\bm\phi$}. This observation leads to the \abbrevformat{QBKIX}\xspace idea: use a fast algorithm combined with the smooth quadrature scheme, \cref{eqn:smooth-quad}, for point evaluation \emph{away} from the surface\emdash/at points we refer to as \emph{check points}\emdash/and interpolate from these points to the on-surface point, to compute $A\vector{\phi}$ for the Krylov iteration. As this interpolation can be done using points on one or both sides of the surface, in \cref{ssc:spectrum} we compare ``one-sided'' and ``two-sided'' variants of \abbrevformat{QBKIX}\xspace with respect to their spectra and iterative convergence rates. Although we focus on interior Dirichlet tests and a Nystr\"om\xspace-style sampled representation of the density in this work, \abbrevformat{QBKIX}\xspace is applicable to Neumann or other boundary conditions, and to Galerkin and other discretization types. Moreover, while the approach presented in this paper is restricted to \twod, there is no fundamental obstacle to an extension to \threed. The rest of the paper is structured as follows. In \cref{sec:formulation}, we present the \abbrevformat{QBKIX}\xspace algorithm for integration. We present an error analysis in \cref{sec:error}. In \cref{sec:results}, we report the results of numerical experiments quantifying the accuracy of the method for a number of representative problems. \section{List of kernels\label{apx:kernels}} Here we list the kernels for the single- and double-layer potentials for the {\abbrevformat{PDE}\xspace}s considered in the text. In each case $\vx$ and $\vy$ are points in $\R^2$ and $\vr \defeq \vx-\vy$. The single-layer kernel is the fundamental solution.
In \dl kernels, $\vn$ is the unit vector denoting the dipole direction, which in the context of the boundary integral formulation is the outward-pointing normal to the surface; $\vector{I}$ denotes the identity tensor and $\vector{t}$ the unit tangent. \begin{itemize}[\quad$\bullet$] \item Laplace: \begin{alignln} \Delta \u & = 0, \\ S(\vx,\vy) & = -\frac{1}{2\pi}\log |\vr|, \\ D(\vx,\vy) & = \frac{1}{2\pi}\frac{\inner{\vr}{\vn}}{|\vr|^2}, \\ \lim_{\vy\to\vx} D(\vx,\vy) & = -\frac{\kappa}{4\pi}, \qquad \vx,\vy\in\Gamma, \quad\text{(where $\kappa$ is the signed curvature)}. \end{alignln} \item Yukawa: \begin{alignln} \Delta \u - \lambda^2 \u & = 0, \\ S(\vx,\vy) & = \frac{1}{2\pi}K_0(\lambda |\vr|), \\ D(\vx,\vy) & = \frac{\lambda}{2\pi}\frac{\inner{\vr}{\vn}}{|\vr|}K_1(\lambda|\vr|), \end{alignln} where $K_0, K_1$ are modified Bessel functions of the second kind of order zero and one, respectively. \item Helmholtz: \begin{alignln} \Delta \u + \omega^2 \u & = 0, \\ S(\vx,\vy) & = \frac{\ii}{4}H_0^1(\omega |\vr|), \\ D(\vx,\vy) & = \frac{\ii\omega}{4}\frac{\inner{\vr}{\vn}}{|\vr|} H_1^1(\omega|\vr|), \end{alignln} where $H^1_0, H^1_1$ are respectively Hankel functions of the first kind of order zero and one. \item Stokes: \begin{alignln} -\Delta \vu + \nabla \scalar{p} & = 0, \qquad \Div \vu = 0, \\ S(\vx,\vy) & = \frac{1}{4\pi}\left(-\log|\vr|\,\vector{I} + \frac{\vr \otimes \vr}{|\vr|^2}\right), \\ D(\vx,\vy) & = \frac{\inner{\vr}{\vn}}{\pi}\frac{\vr \otimes \vr}{|\vr|^4}, \\ \lim_{\vy\to\vx} D(\vx,\vy) & = -\frac{\kappa}{2\pi} \vector{t}\otimes\vector{t}, \\ P(\vx,\vy) & = -\frac{1}{\pi|\vr|^2} \left( \vector{I} - 2\frac{\vr \otimes \vr}{|\vr|^2} \right) \vn.
\end{alignln} \item Navier: Linear elasticity for an isotropic material with shear modulus $\mu$ and Poisson ratio $\nu$, \begin{alignln} \mu\Delta \vu + \frac{\mu}{1-2\nu} \Grad\Div \vu & = 0, \\ S(\vx,\vy) & = -\frac{3-4\nu}{8\pi(1-\nu)}\log|\vr|\,\vector{I} + \frac{1}{8\pi(1-\nu)} \frac{\vr \otimes \vr}{|\vr|^2}, \\ D(\vx,\vy) & = \frac{1-2\nu}{4\pi(1-\nu)}\left(\frac{\inner{\vr}{\vn}\,\vector{I} + \vn \otimes \vr - \vr \otimes \vn}{|\vr|^2} + \frac{2}{1-2\nu} \frac{\inner{\vr}{\vn}\, \vr \otimes \vr}{|\vr|^4}\right). \end{alignln} \end{itemize} \section{Overview of \abbrevformat{QBKIX}\xspace\label{sec:overview}} \begin{SCfigure}[1.4][!bt] \centering \small \setlength\figureheight{2.4in} \includepgf{schematic} \mcaption{fig:schematic}{Schematic of a kernel-independent\xspace expansion}{Geometry of \abbrevformat{QBKIX}\xspace, showing the expansion centered at $\ensuremath{\vector{c}}$ near a panel of the boundary $\Gamma$ discretized with a $q$-node Gauss--Legendre\xspace rule. The points at which the solution is evaluated are referred to as check points\emdash/the smooth quadrature (or a refined version of it) is used to evaluate the solution at the check points. The check points are organized on a circle of radius $\ensuremath{{r_c}}$ around the center of expansion $\ensuremath{\vector{c}}$. A set of sources is located at proxy points on a circle of radius $\ensuremath{R}$ around the center. Based on the desired accuracy, the expansion can be evaluated within a permissible radius $\ensuremath{{r}}$ of $\ensuremath{\vector{c}}$. For error analysis, the underlying solution is assumed to have its nearest singularity at distance greater than $\rho$ from the center of the expansion. Note that, to better demonstrate different parts of \abbrevformat{QBKIX}\xspace, the schematic is not drawn to scale. } \end{SCfigure} In \pr{fig:schematic}, we show the general geometry of a kernel-independent\xspace expansion.
We use the term \emph{check points} to denote the set of points at which the function can be evaluated accurately by the smooth quadrature (or a refinement of it; we clarify this in \pr{sec:refinement}). These points are organized on a circle of radius $\ensuremath{{r_c}}$. The fundamental solutions of the \abbrevformat{PDE}\xspace, located at the \emph{proxy points}, form the basis of the interpolation. The names check and proxy points are borrowed from fast direct solvers \cite{Mart05,Corona2015a} and the kernel-independent\xspace FMM \cite{kifmm}, due to the similarity of the kernel-independent\xspace expansion center here to the local expansion in the kernel-independent\xspace FMM. More formally, we let \begin{alignln} \label{eqn:localexp} \ui(\vx) = \sum_{j=1}^{\ensuremath{{n_p}}} \scalard{\alpha}_j \fundsol(\vx,\vy_j), \end{alignln} where $\vy_j~(\range[\ensuremath{{n_p}}]{j})$ are the proxy points, $\fundsol$ is the fundamental solution of $\op{L}$, and $\ensuremath{{n_p}}$ is the number of proxy points. The proxy points are organized on a circle of radius $\ensuremath{R}$. The values of the function at the check points are used to determine the expansion coefficients $\scalard{\alpha}_j~(\range[n_p]{j})$ by solving a least-squares problem: letting $\ud_i$ denote the known data at the check point $\vx_i$, we solve \begin{alignln} \vectord{\alpha} = \arg \min_{\vectord{\alpha}} \sum_{i=1}^{n_c} \left| \ui(\vx_i) - \ud_i \right|^2 = \arg \min_{\vectord{\alpha}} \sum_{i=1}^{n_c} \left| \sum_{j=1}^{n_p} \scalard{\alpha}_j \Phi(\vx_i,\vy_j) - \ud_i \right|^2. \end{alignln} We consider the boundary data $\scalar{f}$ and the curve $\Gamma$ to be analytic, so that the density $\scalar{\phi}$ is also analytic. In this case, $\u$ may be continued \emph{outside} $\Omega$ as a \abbrevformat{PDE}\xspace solution, at least to some open neighborhood of $\Omega$, but may have singularities in $\R^2 \backslash \overline{\Omega}$ some positive distance from $\Gamma$.
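A minimal numerical sketch of this fitting step follows (our illustration; the parameter values are hypothetical, and the backward-stable least-squares solve is replaced for brevity by normal equations with a tiny Tikhonov term). It fits Laplace proxy strengths $\scalard{\alpha}_j$ to check values of a harmonic field whose singularity lies at distance $\rho$ from the center, then extrapolates to a target outside the check circle.

```python
import math

def gauss_solve(A, b):
    # dense Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            m = M[r][k]/M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= m*M[k][c]
    x = [0.0]*n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c]*x[c] for c in range(k + 1, n)))/M[k][k]
    return x

def kernel(x, y):
    """Laplace fundamental solution Phi(x, y) = -log|x - y|/(2 pi)."""
    return -math.log(math.hypot(x[0] - y[0], x[1] - y[1]))/(2*math.pi)

def circle(radius, n):
    return [(radius*math.cos(2*math.pi*j/n), radius*math.sin(2*math.pi*j/n))
            for j in range(n)]

def fit_expansion(u, rc=0.5, R=1.2, n_c=48, n_p=24):
    """Fit proxy strengths alpha_j to values of u at the check points and
    return the evaluator x -> sum_j alpha_j Phi(x, y_j).  The system is
    ill-conditioned; a tiny Tikhonov term stands in here for a
    backward-stable truncated-SVD solve."""
    check, proxy = circle(rc, n_c), circle(R, n_p)
    M = [[kernel(c, p) for p in proxy] for c in check]
    b = [u(c) for c in check]
    G = [[sum(M[i][r]*M[i][s] for i in range(n_c)) + (1e-12 if r == s else 0.0)
          for s in range(n_p)] for r in range(n_p)]
    rhs = [sum(M[i][r]*b[i] for i in range(n_c)) for r in range(n_p)]
    alpha = gauss_solve(G, rhs)
    return lambda x: sum(a*kernel(x, p) for a, p in zip(alpha, proxy))

# harmonic test field with its singularity at distance rho = 3 from the center
u = lambda x: kernel(x, (3.0, 0.0))
ueval = fit_expansion(u)
target = (0.7, 0.0)   # outside the check circle (extrapolation), inside R
print(abs(ueval(target) - u(target)))
```

Since only point-to-point kernel evaluations enter, swapping `kernel` for another fundamental solution changes nothing else in the construction, which is the sense in which the expansion is kernel-independent\xspace.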
These singularities will control the exponential convergence rate of local quadrature schemes \cite{ATAP,austinrou,ce}, as well as that of the kernel-independent\xspace expansion. Therefore, for error analysis, we assume that the solution has its nearest singularity at a distance greater than $\rho$ from the center of expansion $\ensuremath{\vector{c}}$. In the case of merely smooth (rather than analytic) data, we expect comparable convergence behavior. To construct a high-accuracy evaluator for the low-accuracy region near the panel, as well as for the collocation points $\vy_k~(\range[q]{k})$ on the panel, we construct an expansion around a center $\ensuremath{\vector{c}}$ located near each collocation point, associate that expansion with the collocation point on the boundary, and use the expansion at $\ensuremath{\vector{c}}$ to evaluate the integral at $\vy_k$. Based on the geometry of the expansion (i.e., the ratio of the proxy radius to the check radius, $\frac{\ensuremath{R}}{\ensuremath{{r_c}}}$) and the desired evaluation accuracy $\ensuremath{\varepsilon}$, there is a maximum radius around $\ensuremath{\vector{c}}$, relative to $\ensuremath{{r_c}}$, within which the expansion has error less than $\ensuremath{\varepsilon}$; we call this the \emph{evaluation radius} $\ensuremath{{r}}$. The center is located such that $|\vy_k - \ensuremath{\vector{c}}|=\ensuremath{{r}}$, so that the expansion can be evaluated at the associated boundary node. Based on the value of $\ensuremath{{r}}$, some of the check points typically fall in the inaccurate region of the smooth quadrature of the panel or that of the neighboring panels. To find accurate values of the solution $\ud$ at the check points, the boundary is split (or refined) into smaller panels, each with the same $q$ nodes as the original panel (in all cases in this paper, into four panels).
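The refinement step just described, splitting a parent panel into four children that carry the same $q$ Gauss--Legendre\xspace nodes, can be sketched as follows (our illustration, with an exponential stand-in for the density; the names are not from the paper):

```python
import math

def leggauss_nodes(q):
    """Gauss-Legendre nodes on [-1, 1] by Newton iteration on P_q."""
    nodes = []
    for i in range(q):
        x = math.cos(math.pi*(i + 0.75)/(q + 0.5))  # standard initial guess
        for _ in range(100):
            p0, p1 = 1.0, x
            for k in range(2, q + 1):               # three-term recurrence
                p0, p1 = p1, ((2*k - 1)*x*p1 - (k - 1)*p0)/k
            dp = q*(x*p1 - p0)/(x*x - 1.0)          # derivative P_q'(x)
            dx = p1/dp
            x -= dx
            if abs(dx) < 1e-15:
                break
        nodes.append(x)
    return sorted(nodes)

def lagrange_interp(xs, fs, t):
    """Barycentric Lagrange interpolation at t from samples (xs, fs)."""
    ws = []
    for j, xj in enumerate(xs):
        prod = 1.0
        for k, xk in enumerate(xs):
            if k != j:
                prod *= xj - xk
        ws.append(1.0/prod)
    num = den = 0.0
    for xj, fj, wj in zip(xs, fs, ws):
        if t == xj:
            return fj
        num += wj*fj/(t - xj)
        den += wj/(t - xj)
    return num/den

q = 10
parent = leggauss_nodes(q)
phi = [math.exp(s) for s in parent]    # density samples on the parent panel

# split the parent panel [-1, 1] into four children, each with q nodes,
# and interpolate the density onto the child nodes
max_err = 0.0
for m in range(4):
    a, b = -1.0 + 0.5*m, -0.5 + 0.5*m
    for s in parent:
        t = 0.5*(a + b) + 0.25*s       # child node in parent coordinates
        max_err = max(max_err, abs(lagrange_interp(parent, phi, t) - math.exp(t)))
print(max_err)                          # high-order interpolation error
```

Because the child nodes lie inside the parent panel, this is interpolation rather than extrapolation, and the error inherits the high order of the $q$-node rule.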
This also requires interpolating a known density $\scalard{\phi}$ onto the new nodes, for which Lagrange interpolation on the original panel is used. The boundary could alternatively be upsampled, but for simplicity we keep the order of the panels fixed and split the panels. After the expansions are constructed, i.e., their geometry is decided and the expansion coefficients are computed, they are used to evaluate the potential at the targets that fall within their evaluation radius, including the associated boundary point. The main construction and evaluation steps of a \abbrevformat{QBKIX}\xspace center are depicted in \pr{fig:stages}. \begin{figure}[!bt] \centering \small \setlength\figureheight{1.4in} \setlength\figurewidth{1.7in} \includepgf{error-stages} \mcaption{fig:stages}{Stages of \abbrevformat{QBKIX}\xspace construction}{An expansion is placed near the boundary at a distance proportional to the panel size. In order to evaluate the solution at the check points accurately, the boundary is refined. Using the values at the check points, an expansion is computed at the center (\pr{eqn:localexp}). The expansion is used for evaluation at all the points that fall within the center's evaluation radius. Most importantly, the associated point on the boundary is within the evaluation disc of the center. In this example, the Laplace \dl integral with constant density is used. } \end{figure} \begin{remark} As hinted in the previous paragraph and shown in \pr{sec:error}, the kernel-independent\xspace expansion error at a distance from $\ensuremath{\vector{c}}$ depends on the relative geometry of the expansion ($\frac{\ensuremath{{r_c}}}{\ensuremath{R}}$, $\frac{\ensuremath{R}}{\rho}$, and $\frac{\ensuremath{{r}}}{\ensuremath{{r_c}}}$). Ideally, the check radius should be as small as possible compared to $\rho$ (which is not known in practice).
However, a small $\ensuremath{{r_c}}$ implies a small $\ensuremath{{r}}$ and causes excessive refinement of the boundary to obtain accurate data at the check points. We use the panel size as a surrogate for $\rho$, as the two correlate closely, and choose an expansion geometry that balances these two errors (i.e., the truncation error of the expansion versus the extrapolation error). \end{remark} \begin{remark} The structure and construction of the \abbrevformat{QBKIX}\xspace centers are very similar to the local expansions in the kernel-independent\xspace FMM context. When used in conjunction with the kernel-independent\xspace FMM, the local expansions can be used to speed up the construction of the \abbrevformat{QBKIX}\xspace centers. \end{remark} \begin{remark} \abbrevformat{QBKIX}\xspace centers can be categorized as global and local centers: \vspace{.5em} \begin{compactitem} \item \lhb{Global centers} are centers that interpolate the integral from all boundary panels \cite{QBX,Rachh2016}. \item \lhb{Local centers} are centers that interpolate the integral from a subset of boundary panels \cite{Rachh2016b}. \end{compactitem} \vspace{.5em} When the boundary is geometrically complicated, it is more efficient to use local centers. In this paper, we focus on the construction and analysis of global \abbrevformat{QBKIX}\xspace centers and defer the extension to local expansion centers to future work. \end{remark} \begin{algorithm} \algcaption{alg:overview}{Overview of \abbrevformat{QBKIX}\xspace}{} \DontPrintSemicolon Adaptively refine the boundary to resolve the boundary data.\; Balance the boundary panels such that the ratio of the arclengths of neighboring panels is less than two.\; Locate centers based on the geometry of each panel.\; Refine the boundary by a factor of four to ensure the check points are in the accurate region of the panels.\; Evaluate the boundary integral at the check points using the refined boundary.\; Compute each center's expansion coefficients using the check values.\; Evaluate each center's expansion back at its boundary collocation point.\; \end{algorithm}
\section{Numerical experiments\label{sec:results}} In this section, we present the results of numerical tests demonstrating the accuracy and versatility of the \abbrevformat{QBKIX}\xspace algorithm, both for the on-surface evaluation needed by the boundary integral equation solver and for evaluation of the solution close to the surface. In the following experiments, unless noted otherwise, we use \abbrevformat{QBKIX}\xspace for both tasks. \subsection{Convergence with respect to the number of panels} In \cref{tbl:m-conv}, we report the convergence of the solution evaluated at interior points using non-adaptive boundary quadrature with an increasing number of panels. The test solution is the potential due to a set of singularities at the source points shown outside the domain. These source points are used to generate the boundary data $f$ and the reference solution to check the error. For all problems, the \dl formulation is used, except for Helmholtz, for which a combined-field formulation $u = \conv{D}[\phi] +\ii \omega \conv{S}[\phi]$, where $\conv{S}$ is the single-layer potential \cite[Section~3.2]{coltonkress}, is used. This representation addresses problems associated with resonances of the complementary domain. The \dl (or combined-field) density $\phi$ is solved for using \abbrevformat{QBKIX}\xspace to evaluate the matrix-vector product in each iteration of \abbrevformat{GMRES}\xspace.
The error in the density is quantified by computing the solution from $\phi$, \cref{eqn:generic-bi-sol}, at a set of target points in the interior of the domain. For the first three kernels, which are smooth, we also report the convergence using the Nystr\"om\xspace (\emph{direct}) evaluation, \cref{Anyst}, which by comparison against one- or two-sided \abbrevformat{QBKIX}\xspace shows how much of the error is due to \abbrevformat{QBKIX}\xspace. In all cases, it can be seen that \abbrevformat{QBKIX}\xspace gives high-order convergence rate that is independent of the type of the kernel. We notice that the error performance of the two-sided variant is worse than one-sided at the same number of panels (however, as we discuss below, it is valuable since it improves the convergence rate of \abbrevformat{GMRES}\xspace). \begin{table}[!bt] \newcolumntype{C}{>{\centering\let\newline\\\arraybackslash\hspace{0pt}$}c<{$}} \newcolumntype{R}{>{\centering\let\newline\\\arraybackslash\hspace{0pt}$}r<{$}} \newcolumntype{L}{>{\centering\let\newline\\\arraybackslash\hspace{0pt}$}l<{$}} \newcolumntype{A}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}D{.}{.}{#1}} % \centering \scriptsize \setlength{\tabcolsep}{4pt} \begin{tabular}{c l l CCCC}\toprule Geometry & Kernel & Quadrature & \multicolumn{4}{c}{Absolute error (Number of panels)} \\\midrule \multirow{3}{*}{\includegraphics[height=.35in,width=.35in]{circle.png}} & \multirow{3}{*}{Laplace} & Direct & 2.90\e{-06}~(2) & 9.46\e{-10}~(4) & 6.42\e{-14}~(6) & 1.98\e{-14}~(8) \\ & & \abbrevformat{QBKIX}\xspace (one) & 3.39\e{-06}~(2) & 9.69\e{-10}~(4) & 4.46\e{-12}~(6) & 3.54\e{-12}~(8) \\ & & \abbrevformat{QBKIX}\xspace (two) & 2.25\e{-05}~(2) & 4.07\e{-07}~(4) & 2.24\e{-08}~(6) & 2.37\e{-09}~(8) \\\midrule \multirow{14}{*}{\includegraphics[height=1in,width=1in]{cos-exp.png}} & \multirow{3}{*}{Laplace} & Direct & 5.80\e{-07}~(6) & 8.52\e{-07}~(8) & 1.67\e{-09}~(10) & 5.65\e{-12}~(12) \\ & & \abbrevformat{QBKIX}\xspace (one) & 
2.49\e{+00}~(6)\ensuremath{^1} & 1.32\e{-04}~(8) & 4.62\e{-09}~(10) & 3.09\e{-09}~(12) \\ & & \abbrevformat{QBKIX}\xspace (two) & 4.29\e{-01}~(6) & 3.06\e{-04}~(8) & 4.25\e{-07}~(10) & 1.50\e{-07}~(12) \\\cmidrule(l){2-7} & \multirow{5}{*}{Stokes} & Direct & 1.48\e{-04}~(6) & 6.67\e{-05}~(8) & 6.51\e{-08}~(10) & 6.06\e{-10}~(12) \\ & & \abbrevformat{QBKIX}\xspace (one) & 2.89\e{-08}~(20) & 4.78\e{-09}~(24) & 1.73\e{-09}~(28) & 6.38\e{-10}~(32) \\ & & \abbrevformat{QBKIX}\xspace (two) & 6.95\e{-06}~(16) & 4.87\e{-08}~(32) & 3.31\e{-09}~(48) & 9.45\e{-10}~(64) \\\cmidrule(l){2-7} & \multirow{2}{*}{Helmholtz$^{2}$ ($\omega=2$)} & \abbrevformat{QBKIX}\xspace (one) & 2.12\e{-04}~(8) & 1.20\e{-09}~(12) & 4.22\e{-10}~(16) & 2.09\e{-11}~(20) \\ & & \abbrevformat{QBKIX}\xspace (two) & 3.97\e{-04}~(8) & 1.91\e{-07}~(12) & 3.42\e{-08}~(16) & 7.92\e{-09}~(20) \\\cmidrule(l){2-7} & \multirow{2}{*}{Yukawa ($\lambda=2$)} & \abbrevformat{QBKIX}\xspace (one) & 1.60\e{-04}~(8) & 6.42\e{-07}~(12) & 3.84\e{-09}~(16) & 1.48\e{-09}~(20) \\ & & \abbrevformat{QBKIX}\xspace (two) & 5.44\e{-04}~(8) & 1.27\e{-07}~(12) & 2.19\e{-08}~(16) & 4.79\e{-09}~(20) \\\cmidrule(l){2-7} & \multirow{2}{*}{Elastostatic ($\nu=0.1$)} & \abbrevformat{QBKIX}\xspace (one) & 2.07\e{-03}~(8) & 7.16\e{-06}~(12) & 4.35\e{-07}~(16) & 7.19\e{-07}~(20) \\ & & \abbrevformat{QBKIX}\xspace (two) & 3.17\e{-02}~(8) & 1.27\e{-05}~(12) & 2.26\e{-06}~(16) & 6.77\e{-07}~(20) \\ \bottomrule \multicolumn{7}{l}{\parbox{.95\linewidth}{ \vspace{.5em} \addtocounter{footnote}{1} \thefootnote~~When there are a few panels on the boundary, a check circle may be placed near other panels which adversely affects the error. \newline \addtocounter{footnote}{1} \thefootnote~~For Helmholtz equation, we use a combined field formulation. }} \end{tabular} \mcaption{tbl:m-conv}{Solution convergence vs. 
number of panels}{ Error in the solution to interior Dirichlet boundary value problems using non-adaptive $M$-panel quadrature and \abbrevformat{QBKIX}\xspace for the solution. The subplots show $\Gamma$ (solid) and the exterior sources used to generate the solution, and interior test points. There are 40 source points outside the domain and the error is measured at 40 points inside. The error is the maximum of the absolute error over these interior points. The numerical parameters are $\ensuremath{{n_p}}=32$, $\ensuremath{{n_c}}=2\ensuremath{{n_p}}$, $\ensuremath{R}=8\ensuremath{{r_c}}$, and $\ensuremath{\delta}=3\ensuremath{{r_c}}$. ``Direct'' indicates usage of the quadrature of \cref{Anyst} instead of \abbrevformat{QBKIX}\xspace for the linear solve. ``One'' and ``two'' indicate the one- or two-sided versions of on-surface \abbrevformat{QBKIX}\xspace discussed in \cref{s:sided}. } \end{table} \subsection{Operator spectrum and \abbrevformat{GMRES}\xspace convergence rate\label{ssc:spectrum}} We now perform numerical tests of the one-sided and two-sided variants of on-surface evaluation of \abbrevformat{QBKIX}\xspace discussed in \cref{s:sided} and compare them to the direct use of an accurate quadrature. To simplify comparisons, we use an operator with a smooth kernel (Laplace); the spectra and convergence behavior for singular kernels are similar. In \cref{fig:spectrum} we plot\emdash/for the domain shown in \cref{fig:inf-error} and the Laplace equation\emdash/ the eigenvalues of four different approximations to the operator $-\mbox{\small $\frac{1}{2}$} + D$: the one-sided (interior) \abbrevformat{QBKIX}\xspace, the one-sided (exterior) \abbrevformat{QBKIX}\xspace, the two-sided \abbrevformat{QBKIX}\xspace, and the quadrature given by \cref{Anyst}, to which we refer as \emph{direct}. The exterior version of \abbrevformat{QBKIX}\xspace is constructed similarly to the interior variant discussed in \cref{sec:formulation}.
The only modification is that for each collocation point $\vx_0$ on $\Gamma$, we place an expansion center at ${\vector{c}} = \vx_0 + \delta \vn$. We see that the one-sided variants have clusters of eigenvalues near zero, whereas the two-sided variant and the Nystr\"om\xspace matrix have a cleaner spectrum, with eigenvalues clustering only around $\frac{1}{2}$. A broader spread of the eigenvalues has a negative impact on \abbrevformat{GMRES}\xspace convergence \cite{nachtigal1992a}. \cref{fig:spectrum}, right, shows the \abbrevformat{GMRES}\xspace residual versus the iteration number for the interior, two-sided, and direct operators with two different right-hand sides (boundary data corresponding to a harmonic function and a random right-hand side). The convergence of one-sided interior \abbrevformat{QBKIX}\xspace is identical to the Nystr\"om\xspace method convergence up to residual magnitudes on the order of the numerical accuracy of \abbrevformat{QBKIX}\xspace, but it slows down once the residual decreases below this value (near \sci{-9}). The two-sided variant has convergence behavior identical to the direct method, and converges in a few iterations. We also show the residual for a random right-hand side to expose the effect of near-zero eigenvalues: we see that convergence is very slow for the one-sided scheme in this case, but for the two-sided scheme it is the same as for the true smooth data $f$. \begin{figure}[!bt] \centering \small \setlength\figureheight{1.3in} \setlength\figurewidth{2in} \includepgf{spectrum} % \mcaption{fig:spectrum}{The spectra of discretizations of the Laplace \dl operator}{This figure shows the eigenvalues, and the \abbrevformat{GMRES}\xspace convergence rate, for different discretizations of the Laplace \dl operator in the domain shown in \cref{fig:inf-error}.
The left plots show the real part and the magnitude of the eigenvalues corresponding to the one-sided interior \abbrevformat{QBKIX}\xspace, one-sided exterior \abbrevformat{QBKIX}\xspace, two-sided \abbrevformat{QBKIX}\xspace, and the plain Nystr\"om\xspace matrix. See \cref{s:sided,ssc:spectrum}. The right plot shows the residual versus the iteration number for the three interior variants with two different right-hand sides (boundary data corresponding to a harmonic function or random data). The residuals of the two-sided and Nystr\"om\xspace schemes are indistinguishable. } \end{figure} \subsection{Error for Dirichlet problems for five {\abbrevformat{PDE}\xspace}s\label{s:bvp}} For this set of tests, we use adaptive refinement as described in \cref{sec:formulation}. We use \abbrevformat{QBKIX}\xspace both as the on-surface quadrature scheme when solving for the desired density and as the evaluator for the near-singular integrals. As before, we use boundary data sampled from a sum of fundamental solutions centered at a set of points close to the boundary. \cref{fig:inf-error} plots the error across the domain for all of the~{\abbrevformat{PDE}\xspace}s listed in \cref{eqn:pde-type}, on the points lying on a $600\times 600$ grid and interior to the domain. When an evaluation point is within $2\delta$ distance from the boundary, it is evaluated using the nearest \abbrevformat{QBKIX}\xspace expansion. The remaining points are evaluated using \cref{eqn:smooth-quad} applied to \cref{eqn:generic-bi-sol}.
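The near/far dispatch used for the evaluation just described can be sketched as follows. The helper callables `eval_center` and `eval_smooth` are hypothetical placeholders standing in for the \abbrevformat{QBKIX}\xspace expansion evaluation and the smooth quadrature, respectively; the $2\delta$ threshold is the one from the text.

```python
import numpy as np

def evaluate_field(targets, boundary_pts, centers, eval_center, eval_smooth, delta):
    """Near/far dispatch for evaluating a layer potential, as described above.

    targets, boundary_pts, centers: (m,2), (n,2), (k,2) point arrays.
    eval_center(c, x): hypothetical evaluation of the c-th expansion at x.
    eval_smooth(x): hypothetical smooth-quadrature evaluation at x.
    """
    out = np.empty(len(targets))
    for i, x in enumerate(targets):
        dist = np.linalg.norm(boundary_pts - x, axis=1).min()
        if dist < 2 * delta:
            # near the boundary: use the nearest expansion center
            c = np.linalg.norm(centers - x, axis=1).argmin()
            out[i] = eval_center(c, x)
        else:
            # far from the boundary: plain smooth quadrature is accurate
            out[i] = eval_smooth(x)
    return out
```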
\begin{figure}[!tb] \centering \setlength\figureheight{1.8in} \subfloat[Laplace $(M=30)$\label{sfg:laplace}]{\includegraphics[height=\figureheight]{interior-LaplaceDL-qbkix}} \subfloat[Helmholtz $(M=30)$\label{sfg:helmholtz}]{\includegraphics[height=\figureheight]{interior-HelmholtzCF-qbkix}} \subfloat[Yukawa $(M=30)$\label{sfg:yukawa}]{\includegraphics[height=\figureheight]{interior-YukawaDL-qbkix} \includegraphics[height=.95\figureheight]{interior-cbar}}\\ \subfloat[Stokes velocity $(M=48)$\label{sfg:stokes}]{\includegraphics[height=\figureheight]{interior-StokesDL-qbkix}} \subfloat[Elastostatic $(M=44)$\label{sfg:elastostatic}]{\includegraphics[height=\figureheight]{interior-ElastostaticDL-qbkix} \includegraphics[height=.95\figureheight]{interior-cbar}}\\ \subfloat[Smooth Stokes velocity $(M=26)$\label{sfg:stokes-velocity}]{\includegraphics[height=\figureheight]{interior-velocity-StokesDL-qbkix}} \subfloat[Smooth Stokes pressure $(M=26)$\label{sfg:stokes-pressure}]{\includegraphics[height=\figureheight]{interior-pressure-LaplaceSL-qbkix} \includegraphics[height=.95\figureheight]{interior-cbar}} % \mcaption{fig:inf-error}{The $\log_{10}$ of pointwise error}{ The interior Dirichlet boundary value problem is solved with a known solution generated by source points distributed over an exterior circle, as shown in the lower figure in \cref{tbl:m-conv}, except in \subref{sfg:stokes-velocity} and \subref{sfg:stokes-pressure}, where we use the cubic flow with velocity $\vu=[y^3,x^3]$ and pressure $p=6xy$. The error is evaluated on the same fine grid used for visualization ($600\times 600$). We use $q=16$ node Gauss--Legendre\xspace panels and set $\ensuremath{\acc_a}=10^{-13}$ in the adaptive panel quadrature set-up. $M$ denotes the number of boundary panels. The expansion centers $\ensuremath{\vector{c}}$ are shown by black dots close to the boundary.
} \end{figure} We observe that parameter choices selected for the Laplace equation perform well for the other {\abbrevformat{PDE}\xspace}s. As expected, the highest error is due to expansions for panels adjacent to larger ones (e.g. \cref{sfg:laplace}). \subsection{Domain with a large number of corners\label{ssc:corner}} As a final example, we use \abbrevformat{QBKIX}\xspace in a domain with 256 corners, as shown in \cref{fig:corner}. A Laplace boundary value problem is solved using \abbrevformat{GMRES}\xspace with the tolerance for the relative residual set to $\ensuremath{\acc_r} = \sci{-6}$. The boundary condition is generated similarly to the examples in \cref{s:bvp}, by placing 32 source points on a circle with radius 0.75 centered at $[0.5,0.5]$ (the domain's bounding box is $[0,1]\times[0,1]$). The boundary of the domain is adaptively refined, with the minimum panel length set to $\ensuremath{\varepsilon}_l=\ensuremath{\acc_r}/10$. Large panels are also refined based on the adaptive criterion we outlined in \cref{sec:formulation}. The dyadic and adaptive refinements result in a total of 9560 panels. Due to the singularities on the boundary, the system matrix is ill-conditioned. The ill-conditioning is greatly reduced by using left and right preconditioners with the square roots of the smooth quadrature weights on their diagonals \cite{bremer2012}, solving for the density in the $L^2$ sense. Given this preconditioning, and since the last panel on each side of the corner has length smaller than $\ensuremath{\acc_r}/10$, we set the density on those panels to zero (effectively deleting the last two panels). \abbrevformat{GMRES}\xspace converges after 33 iterations; we use \abbrevformat{KIFMM}\xspace (with accuracy set to $\ensuremath{\acc_r}/10$) for fast evaluation.
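The dyadic refinement toward a corner described above can be sketched as follows; this is an illustrative sketch of the refinement strategy, not our implementation, with `eps_l` playing the role of the minimum panel length $\ensuremath{\acc_r}/10$.

```python
def dyadic_breakpoints(length=1.0, eps_l=1e-7):
    """Dyadic refinement of a boundary segment toward a corner at parameter 0.

    Panel sizes are halved toward the corner until the panel touching the
    corner is shorter than eps_l, producing a geometric sequence of panel
    lengths (illustrative sketch only).
    """
    breaks = [0.0, length]
    h = length
    while h / 2 >= eps_l:
        h /= 2
        breaks.append(h)
    breaks.append(h / 2)  # final halving: corner panel is shorter than eps_l
    return sorted(breaks)

panels = dyadic_breakpoints(1.0, 1e-7)
```

With $\ensuremath{\acc_r}=10^{-6}$ this yields roughly $\log_2(1/\varepsilon_l)\approx 24$ panels per corner side; setting the density to zero on the panel adjacent to the corner corresponds to dropping the first breakpoint interval.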
\begin{figure}[!tb] \centering \setlength\figureheight{2in} \setlength\figurewidth{2in} \subfloat[The $\log_{10}$ of pointwise error]{\includegraphics[width=\figurewidth]{corner-LaplaceDL-qbkix} \hspace{1em}\includegraphics[height=\figureheight]{interior-cbar}} \hspace{.15\figurewidth} \subfloat[The solution]{\includegraphics[width=\figurewidth]{corner-LaplaceDL-solution} \hspace{1em}\includegraphics[height=\figureheight]{corner-LaplaceDL-cbar}} \mcaption{fig:corner}{\abbrevformat{QBKIX}\xspace in a domain with 256 corners}{} \end{figure}
\section{Introduction} Inflationary cosmology is one of the two existing descriptions of the early Universe, in the context of which the theoretical inconsistencies of the Big Bang description of our Universe were successfully addressed \cite{inflation1,inflation2,inflation3,inflation4}, with the other alternative being bouncing cosmology \cite{bounce1,bounce2,bounce3,bounce4,bounce5,bounce6}. The latest observational data coming from Planck \cite{planck} posed stringent constraints on inflationary models and verified the validity of some models, while rendering other models non-viable. Recently, an interesting class of models was discovered in \cite{alpha1}, called the $\alpha$-attractors models, whose characteristic property is that the predicted spectral index of primordial curvature perturbations and the scalar-to-tensor ratio are identical for all the models in the large $N$ limit, where $N$ is the $e$-foldings number. These models were later studied in \cite{alpha2,alpha3,alpha4,alpha5,alpha6,alpha7,alpha8,alpha9,alpha10}; for an earlier study of some $\alpha$-attractor-like potentials, see Ref. \cite{slowrollsergei}. Well-known inflationary models are special limiting cases of some $\alpha$-attractors models, for example the Starobinsky model \cite{starob1,starob2} and the Higgs inflationary model \cite{higgs}. In our opinion, the most appealing property of the $\alpha$-attractors models is that they have a large plateau in their potential for large scalar field values in the small $\alpha$ limit, and the potential of these models is asymptotically quite similar to that of the hybrid inflation scenario \cite{hybrid}. The hybrid inflation model introduced flat potentials into the field of inflationary cosmology, under the crucial assumption that the initial state of the scalar field does not correspond to the extremum of the scalar potential.
Effectively, in the large field limit, all the $\alpha$-attractors models tend to some variant of the hybrid inflation scenario, hence the underlying idea of hybrid inflation remains successful in view of the observational data. The $\alpha$-attractors models originate from supergravity models, which also involve a controllable supersymmetry breaking at the minimum of the scalar potential \cite{susybr1}. In addition, in some cases, late-time acceleration can be accommodated in the context of $\alpha$-attractors models \cite{linderefs1,linder}. In this paper we aim to investigate whether the attractor property also occurs in the Jordan frame, in terms of the corresponding $F(R)$ gravity. In principle, since the curvature perturbation in the Jordan frame is invariant under a conformal transformation, and since the tensor perturbations are also invariant, one would expect that both the spectral index and the scalar-to-tensor ratio in the Jordan frame should be identical to the ones calculated in the Einstein frame. However, in the large field limit the conformal transformation diverges, so in principle the era during which the scalar field reaches the plateau makes the conformal transformation unbounded. This large field era corresponds to a pole in the scalar-field Jordan frame, and it is exactly the era where the $\alpha$-attractors models yield equivalent observational data. Motivated by this, the focus of this paper is to check explicitly whether the attractor property is shared by the $F(R)$ gravity equivalent of these theories. We shall use the large field limit of the $\alpha$-attractors models and, using well-known methods, we shall find the Jordan frame $F(R)$ gravity theory corresponding to the potential of the $\alpha$-attractors models. As we explicitly demonstrate, the general problem is not as trivial as it seems, since an analytic treatment is in general impossible to perform for general values of $\alpha$.
In effect, we investigate certain convenient examples, as well as various limits of the parameter $\alpha$, and as we show, the attractor property holds true for the $F(R)$ gravity equivalent theories; more importantly, the models yield observational data identical to those of the $R^2$ inflation model. This is explained by the fact that the cosmological evolution during the slow-roll era is a quasi-de Sitter evolution. But why is there a need to study the physics in different frames? This is a deep question, and we shall now try to answer it, since it is our main motivation for the subject of this paper. In general, for every theoretical proposal in modified gravity, it is compelling to compare the results with the observational data. In this research line, the $F(R)$ gravity Jordan frame and/or the Einstein frame may provide a viable description of the observable Universe; however, it is not guaranteed that a viable theory in the Jordan frame also yields a viable theory in the Einstein frame. In addition, the viability of a theoretical description does not go hand in hand with a physically convenient description. So the question is which of the two frames is the more physical one (at least, in some sense), or which of the two frames describes the cosmic history of our Universe in a more appealing way. To a great extent, the answer to this question depends on the compatibility of the resulting theory with the observational data. In addition, in principle there are quantities that should be the same in the Jordan and Einstein frames, and these are actually the quantities that are invariant under conformal transformations. For a quasi-de Sitter evolution, it is expected that the spectral index and the scalar-to-tensor ratio should be equivalent in the two frames, as was shown in \cite{kaizer,newsergei}. However, this should be explicitly checked, since when neutron stars are studied, different results occur in the two frames \cite{capp}.
In addition, a finite-time singularity of a certain type in one frame does not correspond to the same type of singularity in the other frame \cite{noo5,bahamonte}, since the conformal transformation becomes ill defined at the singular point. Also, the presence of matter can lead to further complications between the frames, since matter is minimally coupled in the Jordan frame but non-minimally coupled in the corresponding Einstein frame. It may also occur that the Universe is accelerating in one frame but decelerating in the other \cite{capp2}. Hence, these arguments essentially explain our motivation to study the attractor picture in the Jordan frame. This paper is organized as follows: In section II we describe in brief the essential features of the $\alpha$-attractors models, and we demonstrate how the attractor property occurs. In section III, we address the same issue in the $F(R)$ gravity equivalent theory and we demonstrate in detail how the attractor property occurs in this case, by studying analytically some characteristic examples and limiting cases. Finally, the conclusions follow at the end of the paper. In this paper we assume that the geometric background is a flat Friedmann-Robertson-Walker metric, with line element, \be \label{metricfrw} ds^2 = - dt^2 + a(t)^2 \sum_{i=1,2,3} \left(dx^i\right)^2\, , \ee with $a(t)$ denoting as usual the scale factor. Moreover, we assume that the connection is a symmetric, metric-compatible and torsion-less affine connection, the so-called Levi-Civita connection. For the metric with the line element of Eq. (\ref{metricfrw}), the Ricci scalar reads, \begin{equation} \label{ricciscal} R=6(2H^2+\dot{H})\, , \end{equation} with $H$ denoting the Hubble rate $H=\dot{a}/a$. Also, we use a system of units such that $\hbar=c=8\pi G=\kappa^2=1$.
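The expression for the Ricci scalar in Eq. (\ref{ricciscal}) can be cross-checked symbolically; the following SymPy sketch (a convenience check, not part of the analysis) computes $R$ for the metric (\ref{metricfrw}) from the Christoffel symbols.

```python
import sympy as sp

# Symbolic check of R = 6(2H^2 + Hdot) for the flat FRW metric.
t, x, y, z = sp.symbols('t x y z')
a = sp.Function('a', positive=True)(t)
coords = [t, x, y, z]
g = sp.diag(-1, a**2, a**2, a**2)      # ds^2 = -dt^2 + a(t)^2 sum (dx^i)^2
ginv = g.inv()

# Christoffel symbols Gamma^l_{mu nu}
Gamma = [[[sum(ginv[l, s] * (sp.diff(g[s, mu], coords[nu])
                             + sp.diff(g[s, nu], coords[mu])
                             - sp.diff(g[mu, nu], coords[s])) / 2
               for s in range(4))
           for nu in range(4)] for mu in range(4)] for l in range(4)]

def ricci(mu, nu):
    # R_{mu nu} = d_l Gamma^l_{mu nu} - d_nu Gamma^l_{mu l} + Gamma.Gamma terms
    return sum(sp.diff(Gamma[l][mu][nu], coords[l])
               - sp.diff(Gamma[l][mu][l], coords[nu])
               + sum(Gamma[l][l][s] * Gamma[s][mu][nu]
                     - Gamma[l][nu][s] * Gamma[s][mu][l] for s in range(4))
               for l in range(4))

R_scalar = sp.simplify(sum(ginv[m, n] * ricci(m, n)
                           for m in range(4) for n in range(4)))
H = sp.diff(a, t) / a                  # Hubble rate
residual = sp.simplify(R_scalar - 6 * (2 * H**2 + sp.diff(H, t)))
```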
\section{The Inflationary Attractors Essentials and the $F(R)$ Gravity Description} As we mentioned in the introduction, the terminology $\alpha$-attractor models refers to inflationary models with plateau potentials \cite{alpha1,alpha2,alpha3,alpha4,alpha5,alpha6,alpha7,alpha8,alpha9,slowrollsergei}. These models include the $R^2$ inflation model in the Einstein frame \cite{starob1,starob2} and the Higgs inflation model \cite{higgs}. An essential feature of these models is the existence of a pole in the kinetic term of the non-canonical scalar field description. Usually the description using a non-canonical scalar field is called the Jordan frame description, so in order to avoid confusion with the $F(R)$ description, we shall refer to the non-canonical scalar field Jordan frame as ``$\phi$-Jordan frame'' and to the $F(R)$ Jordan frame simply as ``Jordan frame''. In the $\phi$-Jordan frame, the $\alpha$-attractors models have the following gravitational action \cite{alpha1}, \begin{equation}\label{alphaact} \mathcal{S}=\int \mathrm{d}^4x\sqrt{-g}\left(\frac{R}{2}-\frac{\partial_{\mu}\phi \partial^{\mu}\phi}{2(1-\frac{\phi^2}{6\alpha})^2} -V(\phi)\right)\, , \end{equation} where $R$ is the Ricci scalar, and we use units where $8\pi G=1$. Notice that the action (\ref{alphaact}) contains a pole at $\phi=\sqrt{6\alpha}$, which is of fundamental importance in the $\alpha$-attractor theories, since the order of the pole crucially affects the spectral index of primordial curvature perturbations $n_s$, while the residue of the pole affects the scalar-to-tensor ratio $r$ \cite{alpha4}. By making the transformation, \begin{equation}\label{dftrans} \frac{\mathrm{d}\phi }{1-\frac{\phi^2}{6\alpha}}=\mathrm{d}\varphi\, , \end{equation} the non-canonical action of Eq.
(\ref{alphaact}) is transformed into the canonical scalar field action, \begin{equation}\label{canonocclact} \mathcal{S}=\int \mathrm{d}^4x\sqrt{-g}\left(\frac{R}{2}-\frac{1}{2}\partial_{\mu}\varphi \partial^{\mu}\varphi -V(\sqrt{6\alpha}\tanh (\frac{\varphi}{\sqrt{6\alpha}}))\right)\, , \end{equation} where the argument of the scalar potential follows easily by solving the transformation equation (\ref{dftrans}). One of the most interesting features of the $\alpha$-attractors models is that at small $\alpha$, or equivalently at large $\varphi$ values, the quite generic potentials $V(\sqrt{6\alpha}\tanh (\frac{\varphi}{\sqrt{6\alpha}}))$ approach an infinitely long de Sitter plateau, which corresponds to the value of the non-canonical potential $V(\phi)$ at the boundary, $V(\phi )\Big{|}_{\pm \sqrt{6\alpha}}$. The terminology ``attractors'' is justified by the fact that, regardless of the form of the potential, all the $\alpha$-attractor models lead to the same spectral index of primordial curvature perturbations $n_s$ and to the same scalar-to-tensor ratio $r$ in the small $\alpha$ limit, which have the following form, \begin{equation}\label{scaspectscalar} n_s\simeq 1-\frac{2}{N},\,\,\,r\simeq \frac{12\alpha}{N^2}\, , \end{equation} where $N$ is the $e$-foldings number. The purpose of this paper is to investigate whether this attractor picture persists when one considers the $F(R)$ gravity equivalent theory corresponding to the canonical scalar fields. The main reason behind the attractor picture in the Einstein frame is that the various generic potentials $V(\sqrt{6\alpha}\tanh (\frac{\varphi}{\sqrt{6\alpha}}))$ have a similar limiting behavior in the small $\alpha$ limit. We shall consider two classes of potentials, namely the T-models and the E-models, with the potential in the T-models case being of the following form, \begin{equation}\label{tmodels} V(\varphi)=\alpha \mu^2 \tanh^2 (\frac{\varphi}{\sqrt{6\alpha}})\, , \end{equation} where $\mu$ is a positive number, freely chosen.
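The solution of the transformation (\ref{dftrans}), $\phi=\sqrt{6\alpha}\tanh(\varphi/\sqrt{6\alpha})$, can be verified symbolically; the following SymPy snippet (a convenience check, not part of the derivation) confirms that it satisfies $\mathrm{d}\phi/\mathrm{d}\varphi=1-\phi^2/(6\alpha)$.

```python
import sympy as sp

# Check that phi = sqrt(6 alpha) tanh(varphi / sqrt(6 alpha)) solves the
# field redefinition d(phi) / (1 - phi^2/(6 alpha)) = d(varphi).
alpha = sp.symbols('alpha', positive=True)
varphi = sp.symbols('varphi', real=True)
phi = sp.sqrt(6 * alpha) * sp.tanh(varphi / sp.sqrt(6 * alpha))
lhs = sp.diff(phi, varphi)             # d(phi)/d(varphi)
rhs = 1 - phi**2 / (6 * alpha)         # the kinetic-pole factor
residual = sp.simplify(lhs - rhs)
```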
In the case of the E-models, the potential has the following form, \begin{equation}\label{potentialemodels} V(\varphi )=\alpha \mu^2 \left( 1-e^{-\sqrt{\frac{2}{3\alpha}}\varphi}\right)^{2 n}\, , \end{equation} with the parameter $n$ being a positive number, not necessarily an integer. Note that for $\alpha=n=1$, the potential (\ref{potentialemodels}) becomes, \begin{equation}\label{starobmodel} V(\varphi )=\alpha \mu^2 \left( 1-e^{-\sqrt{\frac{2}{3}}\varphi}\right)^{2}\, , \end{equation} which is the Starobinsky model \cite{starob1}, so essentially the Starobinsky model is a special case of the E-models. The T-model potential in the large $\varphi$ limit becomes approximately equal to, \begin{equation}\label{limtmodel} V(\varphi )\simeq \alpha \mu^2 \left(1- 4e^{-\sqrt{\frac{2}{3\alpha}}\varphi}\right)\, , \end{equation} while the E-model potential in the large $\varphi$ limit becomes approximately equal to, \begin{equation}\label{smallalphaemodelpot} V(\varphi )\simeq \alpha \mu^2 \left(1- 2 n e^{-\sqrt{\frac{2}{3\alpha}}\varphi}\right)\, . \end{equation} As can be seen, the potentials of Eqs. (\ref{limtmodel}) and (\ref{smallalphaemodelpot}) coincide when $n=2$, but as we already mentioned, the resulting spectral index $n_s$ and the scalar-to-tensor ratio coincide for general $n$, and the number $n$ does not appear in the resulting expressions for $n_s$ and $r$. Let us briefly demonstrate this issue, since it is of crucial importance when we compare the Einstein frame observational indices with the Jordan frame ones. As we will show, any difference should originate from the slow-roll conditions in the two frames. Let us consider the limiting case potential of Eq. (\ref{smallalphaemodelpot}); in the following we shall focus on this potential, since almost all the cases we will study result in this potential in the small $\alpha$ limit.
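The large-$\varphi$ limits (\ref{limtmodel}) and (\ref{smallalphaemodelpot}) amount to first-order expansions in $e^{-\sqrt{2/(3\alpha)}\varphi}$; the following SymPy check (a convenience, not part of the derivation) confirms the coefficients $4$ and $2n$.

```python
import sympy as sp

# First-order expansions of the T- and E-model potential profiles at large varphi.
n = sp.symbols('n', positive=True)
u = sp.symbols('u', positive=True)     # u = exp(-sqrt(2/(3 alpha)) varphi)
v = sp.symbols('v', positive=True)     # v = exp(-2 varphi / sqrt(6 alpha))

e_model = (1 - u)**(2 * n)             # E-model profile in terms of u
t_model = ((1 - v) / (1 + v))**2       # tanh^2 written in terms of v

# constant terms and first-order coefficients at u = 0 (resp. v = 0)
e0 = e_model.subs(u, 0)
e1 = sp.diff(e_model, u).subs(u, 0)    # expected -2n
t0 = t_model.subs(v, 0)
t1 = sp.diff(t_model, v).subs(v, 0)    # expected -4
```

Note that $2\varphi/\sqrt{6\alpha}=\sqrt{2/(3\alpha)}\,\varphi$, so both expansions are in the same small quantity.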
The first two slow-roll indices $\epsilon$ and $\eta$ in the slow-roll approximation for a canonical scalar field are defined as follows, \begin{equation}\label{slowrollscalar} \epsilon (\varphi)=\frac{1}{2}\left( \frac{V'(\varphi)}{V(\varphi)}\right)^2,\,\,\,\eta (\varphi)=\frac{V''(\varphi)}{V(\varphi)}\, , \end{equation} and the $e$-foldings number $N$ can also be expressed in terms of the potential when the slow-roll approximation is used; it explicitly reads, \begin{equation}\label{efoldings} N\simeq \int_{\varphi}^{\varphi_{i}}\frac{V(\varphi)}{V'(\varphi)}\mathrm{d}\varphi \, , \end{equation} where $\varphi_i$ is some initial value of the canonical scalar field. For the potential (\ref{smallalphaemodelpot}), the $e$-foldings number $N$ can be expressed in terms of the canonical scalar field, and in the small $\alpha$ limit the resulting expression is, \begin{equation}\label{slowrollind} N\simeq \frac{3\alpha e^{\sqrt{\frac{2}{3\alpha}}\varphi}}{4n}\, , \end{equation} so by calculating the slow-roll indices and substituting the $e$-foldings number from Eq. (\ref{slowrollind}), the slow-roll indices take the following form, \begin{equation}\label{slowrollapprox1new} \epsilon\simeq \frac{3\alpha}{4 N^2},\,\,\,\eta \simeq -\frac{1}{N}\, . \end{equation} The spectral index of primordial curvature perturbations $n_s$ and the scalar-to-tensor ratio $r$ calculated for a canonical scalar field are equal to, \begin{equation}\label{spectscalindex} n_s\simeq 1-6\epsilon+2\eta,\,\,\, r\simeq 16 \epsilon\, , \end{equation} so by substituting the slow-roll indices from the expressions (\ref{slowrollapprox1new}), the resulting observational indices are, \begin{equation}\label{observslowroli1} n_s\simeq 1-\frac{2}{N}-\frac{9\alpha}{2N^2},\,\,\,r\simeq \frac{12\alpha}{N^2}\, . \end{equation} At large $N$, the observational indices of Eq. (\ref{observslowroli1}) coincide with the results of Eq.
(\ref{scaspectscalar}), so at leading order the spectral index is independent of $\alpha$, and neither of the observational indices depends on the parameter $n$. This is exactly the attractor picture for the general class of potentials which have the limiting form (\ref{smallalphaemodelpot}). Below we quote the three crucial conditions that need to hold true in order for the attractor picture to occur in the Einstein frame: \begin{itemize} \item The small $\alpha$ limit of the potential should be taken. \item The large $N$ limit should be taken. \item The slow-roll approximation should hold true. \end{itemize} As we will show shortly, when these conditions hold true in the Jordan frame, the attractor picture occurs in the Jordan frame too, with the difference that the observational indices have no $\alpha$ dependence. Let us start our Jordan frame considerations by first finding the vacuum $F(R)$ gravity \cite{reviews1,reviews2,reviews3,reviews4} that can generate potentials as in Eq. (\ref{smallalphaemodelpot}). This limiting case covers both the E-models and the T-models in the small $\alpha$ limit. Before proceeding to the main focus of this article, we recall some essential features of the connection between the Einstein and Jordan frame equivalent theories \cite{reviews1,reviews2,slowrollsergei,sergeioikonomou1,sergeioikonomou2,sergeioikonomou4}. Consider the following $F(R)$ gravity action, \begin{equation}\label{pure} \mathcal{S}=\frac{1}{2}\int\mathrm{d}^4x \sqrt{-\hat{g}}F(R)\, , \end{equation} where $\hat{g}_{\mu \nu}$ is the metric tensor in the Jordan frame. Introducing the auxiliary field $A$ in the Jordan frame action (\ref{pure}), the latter can be written as follows, \begin{equation}\label{action1dse111} \mathcal{S}=\frac{1}{2}\int \mathrm{d}^4x\sqrt{-\hat{g}}\left ( F'(A)(R-A)+F(A) \right )\, . \end{equation} By varying the action of Eq.
(\ref{action1dse111}) with respect to the auxiliary field $A$, one obtains the solution $A=R$, which proves the mathematical equivalence of the actions (\ref{action1dse111}) and (\ref{pure}). A crucial step in finding the Einstein frame canonical scalar-tensor theory corresponding to the $F(R)$ gravity (\ref{pure}) is to perform a canonical transformation. It is important to note that the canonical transformation should not contain the parameter $\alpha$; see the discussion in the Appendix on this issue. The canonical transformation that connects the Einstein and Jordan frames is the following, \begin{equation}\label{can} \varphi =\sqrt{\frac{3}{2}}\ln (F'(A))\, , \end{equation} with $\varphi$ being the canonical scalar field in the Einstein frame. Upon conformally transforming the Jordan frame metric $\hat{g}_{\mu \nu }$ as follows, \begin{equation}\label{conftransmetr} g_{\mu \nu}=e^{\sqrt{2/3}\,\varphi }\hat{g}_{\mu \nu }\, , \end{equation} where $g_{\mu \nu}$ denotes the Einstein frame metric, we obtain the following Einstein frame canonical scalar field action, \begin{align}\label{einsteinframeaction} & \mathcal{\tilde{S}}=\int \mathrm{d}^4x\sqrt{-g}\left ( \frac{R}{2}-\frac{3}{4}\left (\frac{F''(A)}{F'(A)}\right )^2g^{\mu \nu }\partial_{\mu }A\partial_{\nu }A -\frac{1}{2}\left ( \frac{A}{F'(A)}-\frac{F(A)}{F'(A)^2}\right ) \right ) \\ \notag & = \int \mathrm{d}^4x\sqrt{-g}\left ( \frac{R}{2}-\frac{1}{2}g^{\mu \nu }\partial_{\mu }\varphi\partial_{\nu }\varphi -V(\varphi )\right )\, . \end{align} The Einstein frame potential $V(\varphi )$ of the canonical scalar field $\varphi $ is equal to, \begin{align}\label{potentialvsigma} V(\varphi )=\frac{1}{2}\left(\frac{A}{F'(A)}-\frac{F(A)}{F'(A)^2}\right)=\frac{1}{2}\left ( e^{-\sqrt{2/3}\varphi }R\left (e^{\sqrt{2/3}\varphi} \right )- e^{-2\sqrt{2/3}\varphi }F\left [ R\left (e^{\sqrt{2/3}\varphi} \right ) \right ]\right )\, . \end{align} The Ricci scalar as a function of the scalar field can be found by solving Eq.
(\ref{can}) with respect to $A$, having in mind of course the equivalence of $R$ and $A$. It is straightforward to obtain the $F(R)$ gravity that generates a specific potential, by simply combining Eqs. (\ref{potentialvsigma}) and (\ref{can}). Indeed, by taking the derivative of both sides of Eq. (\ref{potentialvsigma}) with respect to the Ricci scalar, and also due to the fact that $\frac{\mathrm{d}\varphi}{\mathrm{d}R}=\sqrt{\frac{3}{2}}\frac{F''(R)}{F'(R)}$, we obtain the following relation, which is crucial for the analysis that follows, \begin{equation}\label{solvequation} RF_R=2\sqrt{\frac{3}{2}}\frac{\mathrm{d}}{\mathrm{d}\varphi}\left(\frac{V(\varphi)}{e^{-2\left(\sqrt{2/3}\right)\varphi}}\right) \end{equation} with $F_R=\frac{\mathrm{d}F(R)}{\mathrm{d}R}$. The above differential equation (\ref{solvequation}), combined with the solution of Eq. (\ref{can}) with respect to $R$, will provide us with the $F(R)$ gravity that generates some of the $\alpha$-attractor potentials we presented earlier. For illustrative purposes, let us see how the $F(R)$ reconstruction method works, given the Einstein frame potential. Consider the Starobinsky potential (\ref{starobmodel}), so by substituting this in Eq. (\ref{solvequation}), and also using the fact that $F_R=e^{\sqrt{\frac{2}{3}}\varphi}$, we obtain the following algebraic equation, \begin{equation}\label{staroalgebreqn} F_R R-\left(4 F_R^2 \mu ^2-4 F_R \mu ^2\right)=0\, , \end{equation} which has the solution, \begin{equation}\label{starsol1} F_R=\frac{4 \mu ^2+R}{4 \mu ^2}\, , \end{equation} so by integrating with respect to $R$ we obtain the well-known $R^2$ model, namely $F(R)=R+\frac{R^2}{8 \mu^2}$. Note that the latter result implicitly gives the corresponding $F(R)$-gravity alpha-attractor.
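As a quick cross-check of this reconstruction, the solution (\ref{starsol1}) can be verified numerically: substituting $F_R=(4\mu^2+R)/(4\mu^2)$ into the algebraic equation (\ref{staroalgebreqn}) makes the residual vanish identically. The following sketch (with an arbitrary, purely illustrative value of $\mu$) does this in Python:

```python
def staro_residual(R, mu):
    """Residual of Eq. (staroalgebreqn) for F_R = (4*mu**2 + R)/(4*mu**2)."""
    F_R = (4*mu**2 + R) / (4*mu**2)
    return F_R*R - (4*F_R**2*mu**2 - 4*F_R*mu**2)

# the residual vanishes for any R, confirming the solution (starsol1)
for R in [0.1, 1.0, 10.0, 250.0]:
    assert abs(staro_residual(R, mu=1.3)) < 1e-9
```

This is just a numerical restatement of the algebra $F_R(R-4\mu^2 F_R+4\mu^2)=0$ with $4\mu^2 F_R=4\mu^2+R$.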
\section{$F(R)$ Gravity Description: Some Examples for Specific and Limiting Cases of $\alpha$} \subsection{The Case $\alpha=1/4$} By using the reconstruction method we presented, we will investigate which $F(R)$ gravities can generate the $\alpha$-attractor potentials we discussed earlier. We shall be interested in the large $\varphi$ values, which correspond to the inflationary de Sitter plateau in the Einstein frame, or near the pole at $\phi=\sqrt{6}$ in the $\phi$-Jordan frame. Suppose that $\alpha$ is not specified, so by substituting the potential (\ref{smallalphaemodelpot}) in Eq. (\ref{solvequation}), we obtain the following algebraic equation, \begin{equation}\label{algegeneralalpha} F_R R-4 \alpha \mu ^2 F_R^{-\left(\sqrt{\frac{1}{\alpha }}-2\right)} \left(F_R^{\sqrt{\frac{1}{\alpha }}}+\left(\sqrt{\frac{1}{\alpha }}-2\right) n\right)=0\, . \end{equation} However, for general $\alpha$ it is a rather formidable task to solve the algebraic equation (\ref{algegeneralalpha}), so we shall specify the value of $\alpha$ for various interesting cases. An interesting case, and one of the few that can be solved analytically, is $\alpha=1/4$, for which the parameter $\alpha$ is smaller than unity. For $\alpha=1/4$, the algebraic equation (\ref{algegeneralalpha}) is simplified as follows, \begin{equation}\label{casealpha1} F_R R-F_R^2 \mu ^2=0\, , \end{equation} and the non-trivial solution to (\ref{casealpha1}) is $F_R(R)=\frac{R}{\mu^2}$, therefore, the resulting $F(R)$ gravity is, \begin{equation}\label{casealpjha1solutin1} F(R)=\frac{R^2}{2\mu^2}+\Lambda\, . \end{equation} The integration constant $\Lambda$ can only be specified if we follow the inverse reconstruction procedure and identify the resulting potential with (\ref{smallalphaemodelpot}), for $\alpha=1/4$. Indeed, by using Eq.
(\ref{can}), we obtain that $R=\mu^2e^{\sqrt{\frac{2}{3}}\varphi}$, so by combining this with the resulting $F(R)$ gravity (\ref{casealpjha1solutin1}) and by substituting in the first equation in Eq. (\ref{potentialvsigma}), we obtain the following potential, \begin{equation}\label{respot1} V(\varphi)=\frac{\mu^2}{4}\left( 1-\frac{2\Lambda}{\mu^2}e^{-2\sqrt{\frac{2}{3}}\varphi}\right)\, . \end{equation} The potential (\ref{respot1}) has to be identical to the one in Eq. (\ref{smallalphaemodelpot}), so the parameter $\Lambda$ is $\Lambda=n \mu^2$. Having the $F(R)$ gravity equivalent theory of the $\alpha$-attractor potential (\ref{smallalphaemodelpot}), we can calculate the slow-roll indices and the corresponding observational indices in the Jordan frame, and see whether the attractor picture remains, as in the Einstein frame. Let us start by finding an approximate expression for the Hubble rate at early times, as a function of the cosmic time $t$. In order to do this, we will need the cosmological equations for the FRW metric (\ref{metricfrw}) in the case that a general $F(R)$ gravity is used. By varying the action (\ref{pure}) with respect to the metric, we obtain the following cosmological equations, \begin{align}\label{cosmoeqns} & 6F_RH^2=F_RR-F-6H\dot{F}_R ,\\ \notag & -2\dot{H}F_R=\ddot{F}_R-H\dot{F}_R\, , \end{align} so by using these and the slow-roll approximation, we will be able to obtain an approximate form for the Hubble rate during the slow-rolling phase of inflation. We shall use the first equation in Eq. (\ref{cosmoeqns}), so by substituting the expressions for $F_R$ and $F(R)$ from Eq. (\ref{casealpjha1solutin1}) and also the expression for the Ricci scalar (\ref{ricciscal}), we obtain the following differential equation, \begin{equation}\label{tenfrl} 36 H''(t)+\frac{\mu ^4 n-18 H'(t)^2}{H(t)}+108 H(t) H'(t)=0\, .
\end{equation} The only dominant term during the slow-roll phase is the last one, so by solving it we obtain $H(t)=H_0$, which describes a de Sitter solution. However, it can be checked that the exact de Sitter solution is not a solution to the following equation, \begin{equation}\label{dffd} 2F(R)-RF'(R)=0\, , \end{equation} when the $F(R)$ gravity is equal to the one appearing in Eq. (\ref{casealpjha1solutin1}), so this means that the approximate solution $H(t)\simeq H_0$ is a leading order result and more terms are needed in order to better describe the solution. The solution $H(t)=H_0$ during the slow-roll era where $F'(R)\gg 1$ can also be verified by using well-known results in the literature \cite{reviews1}, where for an $F(R)$ gravity of the form $F(R)=C+\alpha R^n$, $n>0$, the first slow-roll index during the slow-rolling phase is calculated to be, \begin{equation}\label{snresult} \epsilon_1=\frac{2-n}{(n-1)(2n-1)}\, , \end{equation} which is identical to the slow-roll index corresponding to an $F(R)=R+\alpha R^n$ gravity. Therefore for $n=2$ the first slow-roll index is zero which implies that $\dot{H}(t)=0$ and hence the Hubble rate $H(t)\simeq H_0$ describes the evolution during the slow-roll phase. However, as in the case we described earlier, the exact de Sitter solution is not a solution to the equation (\ref{dffd}) for both the $C+\alpha R^n$ and the $R+\alpha R^n$ model, even for $n=2$. Therefore we seek a leading order quasi-de Sitter evolution, exactly as in the case of the $R^2$ gravity model. So we differentiate the first equation in Eq. (\ref{cosmoeqns}), with respect to the cosmic time, and we get the following differential equation, \begin{align}\label{difeqn} & 6 H'(t) R'(t) F''(R(t))+6 H(t)^2 R'(t) F''(R(t))-R(t) R'(t) F''(R(t))+6 H(t) \left(R'(t)^2 F'''(R(t))+R''(t) F''(R(t))\right)\\ \notag &+12 H(t) F'(R(t)) H'(t)+F'(R(t)) R'(t)-F'(R(t)) R'(t)=0\, . 
\end{align} In effect, by substituting the $F(R)$ gravity and its higher derivatives with respect to the Ricci scalar, we obtain the following differential equation, \begin{equation}\label{approx2} \frac{36 H'''(t)}{H(t)}+108 H''(t)+\frac{216 H'(t)^2}{H(t)}=0\,. \end{equation} During the slow-roll phase, the first and last terms are subdominant, since $H(t)\gg H^{(3)}(t) $ and $H'(t)\ll H(t)$, and in addition the last term contains a higher power of $H'(t)$. So the only term that yields the leading order solution is the second one, and by solving the resulting differential equation we obtain the Hubble rate during the slow-roll phase, which is, \begin{equation}\label{quasidesitter} H(t)=H_0-H_i (t-t_k)\, , \end{equation} which is a quasi-de Sitter evolution, and $H_0$, $H_i$ are arbitrary integration constants. Note that $t_k$ is chosen to be the time at which the horizon crossing occurred, when the comoving wavenumber satisfied $k=a(t)H(t)$, with $a(t)$ being the scale factor. Also, the minus sign in the Hubble evolution (\ref{quasidesitter}) has been chosen so that the first slow-roll parameter has a positive sign at the end of inflation. Having the approximate expression for the Hubble rate during the slow-rolling phase will enable us to calculate the observational indices in the $F(R)$ gravity case.
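As a numerical aside, the quasi-de Sitter evolution (\ref{quasidesitter}) can be used to check the slow-roll algebra that follows: for $H(t)=H_0-H_i(t-t_k)$ the first slow-roll index is $\epsilon_1=H_i/H(t)^2$, the spectral index $n_s\simeq 1-2\dot{\epsilon}_1/(H\epsilon_1)$ evaluates to $1-4H_i/H^2$, and substituting $H_i=H_0^2/(2N)$ into the large-$N$ expressions reproduces the attractor values $1-2/N$ and $12/N^2$. A short Python sketch (all parameter values are arbitrary and purely illustrative):

```python
# Quasi-de Sitter evolution H(t) = H0 - Hi*(t - tk), as in Eq. (quasidesitter);
# all numerical values here are arbitrary, for illustration only.
H0, Hi, tk = 10.0, 0.05, 0.0

def H(t):
    return H0 - Hi*(t - tk)

def eps1(t):
    # eps1 = -Hdot/H**2, with Hdot = -Hi for this evolution
    return Hi / H(t)**2

# n_s = 1 - 2*eps1_dot/(H*eps1) evaluates to 1 - 4*Hi/H**2 (finite-difference check)
t, h = 3.0, 1e-6
eps1_dot = (eps1(t + h) - eps1(t - h)) / (2*h)
ns = 1.0 - 2.0*eps1_dot/(H(t)*eps1(t))
assert abs(ns - (1.0 - 4.0*Hi/H(t)**2)) < 1e-6

# with Hi = H0**2/(2*N), the large-N expressions reduce to the attractor values
N = 60.0
Hi_N = H0**2/(2.0*N)
assert abs((1.0 - H0**2/(Hi_N*N**2)) - (1.0 - 2.0/N)) < 1e-12
assert abs(3.0*H0**4/(Hi_N**2*N**4) - 12.0/N**2) < 1e-12
```

The last two assertions are exactly the substitution of $N=H_0^2/(2H_i)$ carried out analytically below.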
The general expressions of the slow-roll indices for an $F(R,\phi)$ gravity with gravitational action (setting $\kappa=1$), \begin{equation}\label{asx1} \mathcal{S}=\int \mathrm{d}^4x\sqrt{-g}\left (F(R,\phi)-\frac{\omega (\phi)}{2}g^{\mu \nu}\partial_{\mu}\phi\partial_{\nu}\phi-V(\phi) \right)\, , \end{equation} are equal to \cite{noh}, \begin{equation}\label{asx2} \epsilon_1=-\frac{\dot{H}}{H^2},\,\,\,\epsilon_2=\frac{\ddot{\phi}}{H\dot{\phi}},\,\,\,\epsilon_3=\frac{\dot{F}'(R,\phi)}{2 HF'(R,\phi)},\,\,\,\epsilon_4\simeq \frac{\dot{E}}{2HE}\, , \end{equation} where $E$ is equal to, \begin{equation}\label{epsilonspecif} E=F'(R,\phi)\omega(\phi)+\frac{3\dot{F}'(R,\phi)^2}{2\dot{\phi}^2}\, . \end{equation} Specializing now to a general $F(R)$ gravity, the slow-roll indices as functions of the Hubble rate are equal to \cite{noh}, \begin{equation}\label{slowrollparameters} \epsilon_1=-\frac{\dot{H}}{H^2},\,\,\,\epsilon_2=0,\,\,\,\epsilon_3\simeq \epsilon_1,\,\,\,\epsilon_4\simeq -3\epsilon_1+\frac{\dot{\epsilon}_1}{H(t)\epsilon_1}\, , \end{equation} and the spectral index of primordial curvature perturbations $n_s$ and the scalar-to-tensor ratio $r$ read, \begin{equation}\label{nsdf} n_s\simeq 1-6\epsilon_1-2\epsilon_4\simeq 1-\frac{2\dot{\epsilon}_1}{H(t)\epsilon_1},\,\,\,r=48\epsilon_1^2\, . \end{equation} The slow-roll approximation breaks down at first order when the first slow-roll parameter becomes of order one, that is $\epsilon_1\simeq \mathcal{O}(1)$, so at that point, say at $t=t_f$, assume that the Hubble rate is $H(t_f)=H_f$. From the expression of the first slow-roll parameter $\epsilon_1$, we get $1\simeq \frac{H_i}{H_f^2}$, so $H_f\simeq \sqrt{H_i}$. Then from Eq. (\ref{quasidesitter}) we obtain that, \begin{equation}\label{finalghf} H_f-H_0\simeq -H_i (t_f-t_k)\, , \end{equation} so by substituting $H_f$ we get, \begin{equation}\label{timerelation} t_f-t_k=\frac{H_0}{H_i}-\frac{\sqrt{H_i}}{H_i}\, .
\end{equation} Since $H_0$ and $H_i$ are expected to be of the same order during the slow-roll era, and are also expected to take quite large values, the second term in Eq. (\ref{timerelation}) is subdominant, and therefore we have, \begin{equation}\label{timerelation1} t_f-t_k\simeq \frac{H_0}{H_i}\, . \end{equation} In order to introduce the $e$-foldings number into the calculation, we use the relation that expresses the $e$-foldings number as a function of the Hubble rate, \begin{equation}\label{efold1} N=\int_{t_k}^{t_f}H(t)\mathrm{d}t\, , \end{equation} calculated from the horizon crossing time until the time inflation ends. Substituting Eq. (\ref{quasidesitter}) in Eq. (\ref{efold1}) we get, \begin{equation}\label{nefold1} N=H_0(t_f-t_k)-\frac{H_i(t_f-t_k)^2}{2}\, , \end{equation} so by using (\ref{timerelation1}) we finally obtain, \begin{equation}\label{nfinal1} N=\frac{H_0^2}{2H_i}\, . \end{equation} In effect, we have at leading order, \begin{equation}\label{leadingtf} t_f-t_k\simeq \frac{2N}{H_0}\, , \end{equation} so by calculating the slow-roll indices, we easily find that the spectral index and the scalar-to-tensor ratio are equal to, \begin{equation}\label{spectrscatotensor} n_s\simeq 1-\frac{4 H_i}{\left(H_0-\frac{2 H_i N}{H_0}\right)^2},\, \,\,r=\frac{48 H_i^2}{\left(H_0-\frac{2 H_i N}{H_0}\right)^4}\, . \end{equation} In the large $N$ limit the observational indices read, \begin{equation}\label{spectrscatotensor1} n_s\simeq 1-\frac{H_0^2}{H_i N^2},\, \,\,r=\frac{3 H_0^4}{H_i^2 N^4}\, , \end{equation} so by substituting Eq. (\ref{nfinal1}) in Eq. (\ref{spectrscatotensor1}) we get, \begin{equation}\label{jordanframeattract} n_s\simeq 1-\frac{2}{N},\,\,\,r\simeq \frac{12}{N^2}\, .
\end{equation} This result is identical to the one of $R^2$ inflation \cite{starob1} or Higgs inflation \cite{higgs}, due to the well-established equivalence of the spectral index and of the scalar-to-tensor ratio in the Einstein and $F(R)$ frames \cite{newsergei}, when the slow-roll approximation is assumed. Therefore, we demonstrated that even for a general value of the parameter ``$\alpha$'', in the Jordan frame, the general $\alpha$-$F(R)$ gravity models result in the same spectral index and scalar-to-tensor ratio, therefore the attractor picture remains in the Jordan frame too, at least at leading order. We need to note that a crucial assumption for the calculation was that a slow-roll era is realized in the model, and also that the large $N$ limit was taken in the end. Finally, also note that graceful exit in this theory is achieved in the same way as in $R^2$ inflation, or may be generated by growing curvature perturbations. The calculations we performed here could be carried out because the $\alpha=1/4$ case was easy to deal with analytically. However, for general values of $\alpha$ this is not possible. Take for example the $\alpha=1/9$ case, in which the algebraic equation (\ref{algegeneralalpha}) becomes, \begin{equation}\label{algbreeqn19} R F_R-\frac{4 \mu ^2 \left(F_R^3+n\right)}{9 F_R}=0\, , \end{equation} and the solution to this equation is, \begin{align}\label{sol1} & F_R= \frac{3 R}{4 \mu ^2}-\frac{9 R^2}{4 \mu ^2 \sqrt[3]{8 \sqrt{16 \mu ^{12} n^2-27 \mu ^6 n R^3}+32 \mu ^6 n-27 R^3}} -\frac{\sqrt[3]{8 \sqrt{16 \mu ^{12} n^2-27 \mu ^6 n R^3}+32 \mu ^6 n-27 R^3}}{4 \mu ^2}\,. \end{align} As is obvious, this equation cannot be solved analytically with respect to $R$ and hence we cannot find an analytic expression for the potential. Before closing, we need to examine whether the value of $\alpha$ we chose, namely $\alpha=1/4$, yields viable results in the Einstein frame. So we examine the Einstein frame observational indices of Eq.
(\ref{observslowroli1}), for $\alpha=1/4$. For the set of values $(N,\alpha)=(60,1/4)$, we obtain $n_s\simeq 0.966667$ and also $r\simeq 0.000833333$, which are compatible with the Planck data, which constrain $n_s$ and $r$ as follows \cite{planck}, \begin{equation} \label{planckdata} n_s=0.9644\pm 0.0049\, , \quad r<0.10\, . \end{equation} Also, for the set $(N,\alpha)=(50,1/4)$ we get $n_s\simeq 0.96$ and also $r\simeq 0.0012$, which are also in agreement with the Planck data of Eq. (\ref{planckdata}). Hence, for all the physically relevant values of the $e$-foldings number $N$, which lie in the interval $N=(50,60)$, the value $\alpha=1/4$ makes the Einstein frame observables compatible with the Planck data. \subsection{The Case $\alpha=4$} Another case that can be treated analytically is the case $\alpha=4$, in which the algebraic equation (\ref{algegeneralalpha}) becomes, \begin{equation}\label{casealpha1newsec} F_R R-16 F_R^{3/2} \mu ^2 \left(\sqrt{F_R}-\frac{3 n}{2}\right)=0\, , \end{equation} and there are two non-trivial solutions to Eq. (\ref{casealpha1newsec}), which are, \begin{equation}\label{newfrnontrivs1} F_R=\frac{18 \mu ^4 n^2+6 \sqrt{9 \mu ^8 n^4+\mu ^6 n^2 R}+\mu ^2 R}{16 \mu ^4}\, , \end{equation} \begin{equation}\label{newfrnontrivs2} F_R=\frac{18 \mu ^4 n^2-6 \sqrt{9 \mu ^8 n^4+\mu ^6 n^2 R}+\mu ^2 R}{16 \mu ^4}\, , \end{equation} but the only solution which can yield the potential (\ref{smallalphaemodelpot}) is that of Eq. (\ref{newfrnontrivs1}), as we now show. Indeed, the $F(R)$ gravity corresponding to Eq. (\ref{newfrnontrivs1}) is equal to, \begin{equation}\label{caseonefrnontriv} F(R)=\frac{9 n^2 \sqrt{\mu ^6 n^2 \left(9 \mu ^2 n^2+R\right)}}{4 \mu ^2}+\frac{R \sqrt{\mu ^6 n^2 \left(9 \mu ^2 n^2+R\right)}}{4 \mu ^4}+\frac{9 n^2 R}{8}+\frac{R^2}{32 \mu ^2}+\Lambda\, , \end{equation} where $\Lambda$ is a constant, the value of which will be determined shortly. Correspondingly, the $F(R)$ gravity corresponding to Eq.
(\ref{newfrnontrivs2}) is equal to, \begin{equation}\label{caseonefrnontriv1} F(R)=-\frac{9 n^2 \sqrt{\mu ^6 n^2 \left(9 \mu ^2 n^2+R\right)}}{4 \mu ^2}-\frac{R \sqrt{\mu ^6 n^2 \left(9 \mu ^2 n^2+R\right)}}{4 \mu ^4}+\frac{9 n^2 R}{8}+\frac{R^2}{32 \mu ^2}+\Lambda\, . \end{equation} The Einstein frame potential corresponding to the $F(R)$ gravities above can be easily calculated by using the canonical transformation relation (\ref{can}), which for both the $F(R)$ gravities of Eqs. (\ref{caseonefrnontriv}) and (\ref{caseonefrnontriv1}) yields the following two solutions, \begin{equation}\label{r1} R=8 \mu ^2 e^{\frac{\varphi }{\sqrt{6}}} \left(3 n+2 e^{\frac{\varphi }{\sqrt{6}}}\right)\, , \end{equation} \begin{equation}\label{r2} R=8 \mu ^2 e^{\frac{\varphi }{\sqrt{6}}} \left(3 n-2 e^{\frac{\varphi }{\sqrt{6}}}\right)\, . \end{equation} Consider first the case for which the $F(R)$ gravity is given by (\ref{caseonefrnontriv1}), so by combining Eqs. (\ref{r1}) and (\ref{potentialvsigma}), the resulting Einstein frame potential is, \begin{equation}\label{akyron1} V(\varphi)=-\frac{1}{2} \Lambda e^{-2 \sqrt{\frac{2}{3}} \varphi }+4 \mu ^2-\frac{27}{8} \mu ^2 n^4 e^{-2 \sqrt{\frac{2}{3}} \varphi }-27 \mu ^2 n^3 e^{-\sqrt{\frac{3}{2}} \varphi }-36 \mu ^2 n^2 e^{-\sqrt{\frac{2}{3}} \varphi }-\mu ^2 n e^{-\frac{\varphi }{\sqrt{6}}}\, , \end{equation} which cannot be equal to the potential of Eq. (\ref{smallalphaemodelpot}), regardless of the value of the parameter $\Lambda$. The same applies if we choose the solution (\ref{r2}) for the $F(R)$ gravity of Eq. (\ref{caseonefrnontriv1}). So the only $F(R)$ gravity with physical interest, which correctly reproduces the Einstein frame potential (\ref{smallalphaemodelpot}), is that of Eq.
(\ref{caseonefrnontriv}), which, for the solution (\ref{r2}), becomes, \begin{equation}\label{einsteinpotentunbroken} V(\varphi )=-\frac{1}{2} \Lambda e^{-2 \sqrt{\frac{2}{3}} \varphi }+4 \mu ^2+\frac{27}{8} \mu ^2 n^4 e^{-2 \sqrt{\frac{2}{3}} \varphi }-8 \mu ^2 n e^{-\frac{\varphi }{\sqrt{6}}}\, , \end{equation} so by choosing the constant $\Lambda$ to be equal to $\Lambda=\frac{27 \mu ^2 n^4}{4}$, the potential (\ref{einsteinpotentunbroken}) becomes identical to the potential of Eq. (\ref{smallalphaemodelpot}). Now let us find an approximate expression for the Hubble rate during the slow-roll era; in order to do this, we use the expressions for the $F(R)$ gravity and its derivative given in Eqs. (\ref{caseonefrnontriv}) and (\ref{newfrnontrivs1}), and plug these into the first equation of Eq. (\ref{cosmoeqns}), so after some algebra we obtain, \begin{align}\label{simslowroll1} & \frac{\Lambda}{H(t)^2}+\frac{27 H'(t)}{4 \mu ^2}+\frac{3 \sqrt{3} H(t) \sqrt{\mu ^6 n^2}}{2 \mu ^4}+\frac{9 \sqrt{3} n^2 \sqrt{\mu ^6 n^2}}{2 \mu ^2 H(t)}+\frac{27 n^2}{4}\\ \notag & +\frac{9 \sqrt{3} \sqrt{\mu ^6 n^2} H''(t)}{8 \mu ^4 H(t)^2}+\frac{9 H''(t)}{4 \mu ^2 H(t)}+\frac{3 \sqrt{3} \sqrt{\mu ^6 n^2} H'(t)}{\mu ^4 H(t)}+\frac{9 H'(t)^2}{8 \mu ^2 H(t)^2}=0\, , \end{align} and therefore in the slow-roll limit, this differential equation becomes, \begin{equation}\label{tenfrlnewsec} \frac{27 H'(t)}{4 \mu ^2}+\frac{3 \sqrt{3} H(t) \sqrt{\mu ^6 n^2}}{2 \mu ^4}+\frac{27 n^2}{4}=0\, . \end{equation} The above differential equation can be solved analytically, and the resulting solution is, \begin{equation}\label{mainalpha4solution} H(t)\simeq \mathcal{C}e^{-H_i (t-t_k)}-H_0\, , \end{equation} where the parameter $\mathcal{C}$ is an arbitrary integration constant, $t_k$ is the horizon crossing time, while $H_i$ and $H_0$ are equal to, \begin{equation}\label{ho} H_0=\frac{3}{2} \sqrt{3} \mu n,\,\,\,H_i=\frac{2 \mu n }{3 \sqrt{3}}\, .
\end{equation} Note that since the cosmic time in the solution (\ref{mainalpha4solution}) takes values of the order $\sim \mathcal{O}(10^{-20})$sec, the exponential is of the order $\sim \mathcal{O}(1)$, so practically the evolution is a nearly de Sitter evolution. Also, in order for the Hubble rate to have positive values, the parameter $\mathcal{C}$ must satisfy $\mathcal{C}\gg H_0$. However, the evolution is actually a quasi-de Sitter expansion, and this can be seen by expanding the exponential in (\ref{mainalpha4solution}) in powers of the cosmic time, so the evolution becomes, \begin{equation}\label{mainalpha4solution1} H(t)\simeq \mathcal{C}-H_0-\mathcal{C}H_i (t-t_k)\, . \end{equation} Hence, having the evolution (\ref{mainalpha4solution1}) at hand, we can easily calculate the observational indices. Following the steps of the $\alpha=1/4$ case, the spectral index of the primordial curvature perturbations in terms of the $e$-foldings number $N$ is, at leading order, \begin{equation}\label{spectrscatotensornewsec} n_s\simeq 1-\frac{4 C H_i}{\left(C \left(\frac{H_i N}{C-H_0}-1\right)+H_0\right)^2}, \end{equation} while the scalar-to-tensor ratio is, \begin{equation}\label{spectrscatotensornewsec1} r\simeq \frac{48 \mathcal{C}^2 H_i^2}{\left(-\frac{\mathcal{C} H_i N}{\mathcal{C}-H_0}+\mathcal{C}-H_0\right)^4}. \end{equation} Therefore, in the large $N$ limit, the observational indices read, \begin{equation}\label{spectrscatotensor1newsec} n_s\simeq 1-\frac{2}{N},\, \,\,r=\frac{12}{N^2}\, , \end{equation} where we have used the fact that during the slow-roll era, $N\simeq \frac{(\mathcal{C}-H_0)^2}{2 \mathcal{C} H_i}$. So the resulting observational indices are identical to the ones of Eq. (\ref{jordanframeattract}), which means that the $R^2$ model in the slow-roll approximation is the attractor of this $\alpha$ model too, in the large $N$ limit of course.
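The replacement of the exponential solution (\ref{mainalpha4solution}) by the quasi-de Sitter form (\ref{mainalpha4solution1}) can be checked numerically: for $H_i(t-t_k)\ll 1$ the two expressions agree up to terms of order $\mathcal{C}H_i^2(t-t_k)^2$. A short Python sketch with arbitrary illustrative parameter values (chosen so that $\mathcal{C}\gg H_0$):

```python
import math

# H(t) = C*exp(-Hi*(t - tk)) - H0 versus its first-order expansion
# C - H0 - C*Hi*(t - tk); arbitrary illustrative values with C >> H0
C, H0, Hi, tk = 5.0, 1.0, 0.01, 0.0

for t in [0.0, 0.5, 1.0]:
    x = Hi*(t - tk)
    exact = C*math.exp(-x) - H0
    approx = C - H0 - C*x
    # |exp(-x) - (1 - x)| <= x**2/2 for x >= 0, so the error is O(C*x**2)
    assert abs(exact - approx) <= 0.5*C*x**2 + 1e-12
```

The bound used in the comment is the standard Taylor remainder estimate for the exponential.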
As we did in the previous case, we need to examine whether the value $\alpha=4$ yields viable results also in the Einstein frame. So we examine the Einstein frame observational indices of Eq. (\ref{observslowroli1}), for $\alpha=4$ in this case. For the set of values $(N,\alpha)=(60,4)$, we obtain $n_s\simeq 0.966667$ and also $r\simeq 0.0133333$, which are compatible with the Planck data of Eq. (\ref{planckdata}). In addition, for the set of values $(N,\alpha)=(50,4)$ we obtain $n_s\simeq 0.96$, and the predicted scalar-to-tensor ratio is $r\simeq 0.0192$, so these are also compatible with the Planck constraints. Therefore, the case $\alpha=4$ yields physically viable observational indices, for all the physically relevant values of the $e$-foldings number $N$, which lie in the interval $N=(50,60)$. However, the case $\alpha>4$ is somewhat more involved, and certain constraints should be imposed on $\alpha$ in order for the Einstein frame observables to be compatible with the observational data. For example, if $N=60$, the parameter $\alpha$ should satisfy $\alpha<29$, and for $N=50$, the parameter $\alpha$ should satisfy $\alpha<20$. Nevertheless, we will not discuss these cases further, since it is difficult to obtain analytical solutions in the Jordan frame for these values of $\alpha$. \subsection{Limiting Cases of $\alpha$} Let us now briefly discuss the small-$\alpha$ limit of the algebraic equation (\ref{algegeneralalpha}), in order to see how the $F(R)$ gravity behaves. We shall present only the leading order behavior.
Let us start with the leading order result, in which case the algebraic equation (\ref{algegeneralalpha}) in the limit $\alpha \ll 1$ becomes, \begin{equation}\label{limalgrbea} R F_R^{\sqrt{\frac{1}{\alpha }}}-4 \alpha \mu ^2 \left(F_R^{\sqrt{\frac{1}{\alpha }}}+\left(\sqrt{\frac{1}{\alpha }}-2\right) n\right)=0\, , \end{equation} so the solution to this equation is \cite{slowrollsergei}, \begin{equation}\label{kkouanalyticleading} F_R(R)=\frac{R}{4\alpha\mu^2}+n\left(2-\frac{1}{\sqrt{\alpha}}\right)R^{1-\frac{1}{\sqrt{\alpha}}}\, , \end{equation} which is valid in the limit $\frac{1}{\sqrt{\alpha}}\gg 1$. By integrating, we find at leading order that the resulting $F(R)$ gravity is equal to, \begin{equation}\label{kkouanalyticleadingsmallalpha} F(R)=\frac{R^2}{8\alpha\mu^2}+n R^{2-\frac{1}{\sqrt{\alpha}}}\, . \end{equation} Since $R\gg \gamma$, the leading order term that controls the dynamical evolution in the small-$\alpha$ limit is the term $\sim R^2$, therefore it is $R^2$ inflation that drives the evolution. It is interesting that the inflationary $F(R)$ gravity of Eq. (\ref{kkouanalyticleadingsmallalpha}) is reminiscent of the unified inflation-dark energy $F(R)$ gravity sector studied in Refs. \cite{reviews1,eli1}. Then, by adding to the above approximate expression for the alpha-attractor $F(R)$ inflationary theory the exponential dark energy sector $F(R)=R-2\Lambda (1-e^{-R/R_0})$, we obtain a unification of alpha-attractor inflation with dark energy in $F(R)$ gravity. We can easily find the observational indices for the $F(R)$ gravity of Eq. (\ref{kkouanalyticleadingsmallalpha}), so by using the first equation in Eq.
(\ref{cosmoeqns}), upon differentiation with respect to the cosmic time and by keeping leading order terms in the slow-roll approximation, we get the following differential equation, \begin{equation}\label{oneofthefinals} \frac{9 \gamma ^2 H(t) H^{(3)}(t)}{\mu ^2}+\frac{27 \gamma ^2 H(t)^2 H''(t)}{\mu ^2}+\frac{54 \gamma ^2 H(t) H'(t)^2}{\mu ^2}=0\, , \end{equation} and by dividing by $H(t)^2$ we get, \begin{equation}\label{enyaex} \frac{9 \gamma ^2 H^{(3)}(t)}{\mu ^2 H(t)}+\frac{27 \gamma ^2 H''(t)}{\mu ^2}+\frac{54 \gamma ^2 H'(t)^2}{\mu ^2 H(t)}=0\, . \end{equation} The dominant term in the slow-roll approximation is the second one, so by solving the resulting differential equation we obtain the following Hubble rate, \begin{equation}\label{hubbnewslw} H(t)\simeq C_1-C_2 t\, , \end{equation} which is valid during the slow-roll era. Hence the resulting evolution is a quasi-de Sitter evolution, and by using the same line of reasoning as in the previous sections, the resulting observational indices are identical to the ones appearing in Eqs. (\ref{spectrscatotensor1newsec}) and (\ref{jordanframeattract}). So actually, the $R^2$ model is the attractor of all the $F(R)$ gravity equivalent theories of the Einstein frame $\alpha$-attractor models, always in the slow-roll approximation. It is easy to see that, due to the relation (\ref{can}), the Ricci scalar as a function of the canonical scalar field will be $R=\frac{1}{A}e^{-\sqrt{\frac{2}{3\alpha}}\varphi}$, so a leading order term in the potential is, \begin{equation}\label{vpotleading} V(\varphi)\sim \frac{\sqrt{\alpha}}{A}e^{-\sqrt{\frac{2}{3\alpha}}\varphi}\, , \end{equation} so indeed, the potential of Eq. (\ref{smallalphaemodelpot}) is partially reconstructed. As can be easily checked, the leading order term in the potential (\ref{vpotleading}) is generated by the $R^2$ term in the $F(R)$ gravity. Let us now discuss another limiting case, in which $\alpha$ is very large.
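Before turning to the large-$\alpha$ limit, we note that the small-$\alpha$ result (\ref{kkouanalyticleadingsmallalpha}) admits a quick consistency check: its derivative with respect to $R$ can be compared numerically against the corresponding expression for $F_R$, and the dominance of the $R^2$ term at large curvature can be verified. A Python sketch with arbitrary illustrative values ($\alpha=0.01$, so $1/\sqrt{\alpha}=10$):

```python
import math

# small-alpha reconstructed F(R) = R**2/(8*alpha*mu**2) + n*R**(2 - 1/sqrt(alpha));
# all parameter values here are arbitrary, for illustration only
alpha, mu, n = 0.01, 1.0, 0.5
p = 1.0/math.sqrt(alpha)  # p = 10 here

def F(R):
    return R**2/(8*alpha*mu**2) + n*R**(2 - p)

# dF/dR should equal R/(4*alpha*mu**2) + n*(2 - p)*R**(1 - p)
R, h = 5.0, 1e-6
dF = (F(R + h) - F(R - h)) / (2*h)  # central finite difference
F_R = R/(4*alpha*mu**2) + n*(2 - p)*R**(1 - p)
assert abs(dF - F_R)/abs(F_R) < 1e-6

# the R**2 piece dominates for large R, so the evolution is R**2-like
assert R**2/(8*alpha*mu**2) > 1e3*abs(n*R**(2 - p))
```

The second assertion is just the statement that the $\sim R^2$ term controls the dynamics in the small-$\alpha$ limit.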
In the large-$\alpha$ limit, the algebraic equation (\ref{algegeneralalpha}) becomes approximately, \begin{equation}\label{limalgrbealargealpha} F_R R-4 \alpha F_R^2 \mu ^2 (1-2 n)=0\, , \end{equation} so the resulting $F(R)$ gravity is, at leading order, \begin{equation}\label{forevone} F(R)\simeq \frac{R^2}{8 \mu ^2 (\alpha -2 \alpha n)}\, . \end{equation} Therefore in this case too, the $R^2$ model is the attractor of the $F(R)$ gravities, at least when the slow-roll approximation is used. The resulting behavior in all the $F(R)$ cases we studied indicates that the $F(R)$ gravity equivalent theories of the Einstein frame $\alpha$-attractor models have a unique attractor when the slow-roll limit is used, and this is the $R^2$ model. Therefore, regardless of how the $F(R)$ gravity looks at next-to-leading order, the observational indices are affected mainly by the leading order term in the $F(R)$ gravity, and this is the reason why the resulting observational indices are identical to the ones of the Starobinsky model. \section{Conclusions} In this paper we investigated the $F(R)$ gravity equivalent theory of some classes of Einstein frame $\alpha$-attractor models. The full analytic treatment of the problem is not possible, so we chose a convenient Einstein frame $\alpha$-attractor model, and we calculated in detail the slow-roll indices in the slow-roll limit of the $F(R)$ gravity theory. As we demonstrated, in the Jordan frame the attractor picture remains, since the resulting spectral index of primordial curvature perturbations and the scalar-to-tensor ratio are the same for the conveniently chosen $F(R)$ models. Interestingly enough, the resulting observational indices in the Jordan frame are identical to the indices of the Starobinsky model, and actually the $R^2$ model is the attractor in the Jordan frame, at least when the slow-roll approximation is used.
This result is not accidental, since in all the cases we studied, the $F(R)$ gravity in the limit $R\gg 1$ is approximately equal to $\sim R^2$, so the behavior is similar to that of the $R^2$ model. An important issue is that the scalar-to-tensor ratio in the Jordan frame does not depend on the ``$\alpha$'' parameter, and this is a difference between the Einstein and Jordan frame descriptions. The question then is, does this occur due to the fact that the slow-roll approximation is used? Do the slow-roll conditions in the two frames impose different conditions on the resulting evolution? The quick answer is no, the two frames are equivalent, but this can be shown explicitly only for the $R^2$ model. However, for the simple $\alpha$-model we used, we showed that this is not true, so the question is whether this holds true in general, or only for the specific models we studied. We defer this task to the future, since the lack of analyticity prevents us, for the moment, from giving a definite answer. \section*{Acknowledgments} This work is supported by MINECO (Spain), project FIS2013-44881 (S.D.O) and by Min. of Education and Science of Russia (S.D.O and V.K.O). \section*{Appendix: The Case of an $\alpha$-dependent Canonical Transformation} Consider the case in which the canonical transformation which connects the Einstein and the Jordan frame is $\alpha$-dependent. In this case, the transformation (\ref{can}) becomes, \begin{equation}\label{canalpha} \varphi =\sqrt{\frac{3\alpha}{2}}\ln (F'(A))\, . \end{equation} In this case, the metric in the Einstein and Jordan frames, namely $\hat{g}_{\mu \nu}$ and $g_{\mu \nu}$, are related as follows, \begin{equation}\label{conftransmetralpha} g_{\mu \nu}=e^{-\sqrt{\frac{2}{3\alpha}}\varphi }\hat{g}_{\mu \nu }\, , \end{equation} where $g_{\mu \nu}$ denotes the Einstein frame metric. In order to make the presentation more transparent, we will adopt a notation different from the one we used in the main text.
Suppose that we identify $\psi^2=e^{\sqrt{\frac{2}{3\alpha}}\varphi }$; then the Ricci scalar transforms as, \begin{equation}\label{Ricciconftrans} R=\psi^2\left( \tilde{R}+6\tilde{\square}\Psi-6\tilde{g}^{\mu \nu}\partial_{\mu}\Psi\partial_{\nu}\Psi\right)\, , \end{equation} where $\Psi=\ln \psi$. We can rewrite the Jordan frame $F(R)$ action (\ref{pure}) as follows, \begin{equation}\label{actionpuretranpro} \mathcal{S}=\int \mathrm{d}^4x\sqrt{-g}\left( \frac{1}{2}F'(R)R-V\right)\, , \end{equation} where the potential $V$ is equal to, \begin{equation}\label{potentialeqn} V=\frac{F'R-F}{2}\, . \end{equation} The determinant of the metric under the transformation (\ref{conftransmetralpha}) transforms as $\sqrt{-g}=\psi^{-4}\sqrt{-\tilde{g}}$, where in terms of $\psi$, the scalar field is written as $\varphi=2\sqrt{\frac{3\alpha}{2}}\ln \psi$. By combining the above, the resulting Einstein frame action reads, \begin{equation}\label{profinal} \mathcal{S}=\int \mathrm{d}^4x \sqrt{-\tilde{g}}\left( \tilde{R}-3\tilde{g}^{\mu \nu}\partial_{\mu}\Psi\partial_{\nu}\Psi-V \right)\,. \end{equation} It can be easily shown that $\Psi=\varphi/\sqrt{6\alpha}$, so the Einstein frame action can be written in terms of the scalar field $\varphi$, and we have, \begin{equation}\label{profinal1} \mathcal{S}=\int \mathrm{d}^4x \sqrt{-\tilde{g}}\left( \tilde{R}-\frac{1}{2\alpha}\tilde{g}^{\mu \nu}\partial_{\mu}\varphi\partial_{\nu}\varphi-V(\varphi) \right)\,, \end{equation} where in this case the potential $V(\varphi)$ is equal to, \begin{equation}\label{potentialfinalform} V(\varphi)= \frac{1}{2}\left ( e^{-\sqrt{2/(3\alpha )}\varphi }R\left (e^{\sqrt{2/(3\alpha )}\varphi} \right )- e^{-2\sqrt{2/(3\alpha )}\varphi }F\left [ R\left (e^{\sqrt{2/(3\alpha )}\varphi} \right ) \right ]\right )\, . \end{equation} Hence, by looking at the resulting scalar action (\ref{profinal1}), it can be seen that the $\alpha$-dependent conformal transformation leads to a non-canonical scalar-tensor theory.
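The identification $\Psi=\varphi/\sqrt{6\alpha}$ and the resulting non-canonical kinetic coefficient $1/(2\alpha)$ in (\ref{profinal1}) can be verified directly; the following Python sketch (with an arbitrary illustrative value of $\alpha$) does so:

```python
import math

# psi**2 = exp(sqrt(2/(3*alpha))*phi), Psi = ln(psi); alpha is arbitrary here
alpha = 0.25

for phi in [0.3, 1.7, 4.0]:
    psi = math.exp(0.5*math.sqrt(2.0/(3.0*alpha))*phi)
    Psi = math.log(psi)
    # Psi = phi/sqrt(6*alpha), as stated below Eq. (profinal)
    assert abs(Psi - phi/math.sqrt(6.0*alpha)) < 1e-12

# kinetic coefficient: 3*(dPsi/dphi)**2 = 3/(6*alpha) = 1/(2*alpha),
# which is non-canonical unless the field is further re-scaled
dPsi_dphi = 1.0/math.sqrt(6.0*alpha)
assert abs(3.0*dPsi_dphi**2 - 1.0/(2.0*alpha)) < 1e-12
```

This also makes the re-scaling $\varphi\to\phi\sqrt{\alpha}$ discussed next transparent: it removes the factor $1/\alpha$ from the kinetic term.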
Note that by further re-scaling the scalar field $\varphi\to \phi\sqrt{\alpha}$, one obtains a canonical scalar field theory; however, in this case the canonical transformation (\ref{canalpha}) becomes, \begin{equation}\label{canalphanewsqu} \phi =\sqrt{\frac{3}{2}}\ln (F'(A))\, , \end{equation} which does not depend on $\alpha$, and is therefore identical to the transformation we used in Eq. (\ref{can}), leading to the same results.
\section{Introduction} The study of the nature and evolution of the interstellar medium (ISM) is one of the indispensable steps for understanding the physical processes that govern star formation in the universe, because stars are formed in the ISM and return material to it. The molecular medium, a dense and cold phase of the ISM, is the very site and material of star formation. The molecular medium in the Milky Way and galaxies shows a wide range of physical scales, from sub-parsec scales to a few hundred parsecs; these structures are referred to as cores, clumps, giant molecular clouds (GMCs), and giant molecular associations (GMAs) (\cite{Scoville1987}; \cite{Rand1990}). In the Milky Way, 90\% of molecular gas is in the form of GMCs with a size of 20 pc or larger and a typical mass of $\sim4 \times 10^5 ~M_\odot$, and massive stars are formed as clusters in such GMCs (\cite{Scoville1987}; \cite{Scoville2013}; \cite{Lada2003}). The physical and chemical properties of GMCs are greatly influenced by massive stars, owing to their stellar winds and, in the final stage, supernovae; for example, the surrounding ISM is compressed by them and new stars are formed there. Hence, it is essential to study the physical and chemical properties of GMCs, because these key processes will be imprinted on them. Classically, it is well known that molecular clouds follow scaling relations such as the size-line width relation, a.k.a. Larson's law (\cite{Larson1981}; \cite{Solomon1987}; \cite{Rosolowsky2008}; \cite{Bolatto2008}), and star formation laws, such as the Schmidt-Kennicutt law (the S-K law hereafter; \cite{Schmidt1959}; \cite{Kennicutt1998}; \cite{Kennicutt2012}). These relations are known to be very universal, holding even in high-redshift starburst galaxies (\citet{Swinbank2015} for Larson's law, and \citet{Hatsukade2015} for the S-K law in the lensed starburst galaxy SDP.81 at $z=3.1$).
However, recent studies suggest that such universality breaks down at some spatial scales and/or in some environments. For instance, the S-K law is shown to be valid only at scales larger than that of GMCs, i.e., the breakdown of the S-K law occurs at $\lesssim$ 80 pc in the GMCs of M33 (\cite{Onodera2010}). The size-line width relations are found to be different in the inner few hundred parsecs of the Milky Way and of galaxies such as NGC 253 (e.g., \cite{Leroy2016} and references therein; see also \citet{Utomo2015} for the absence of a clear size-line width correlation for the GMCs in NGC 4526, an early-type galaxy). \citet{Hughes2010} reported that some GMC properties in the Large Magellanic Cloud (LMC), such as the CO surface brightness, can vary depending on the local environment, including the stellar mass surface density. Based on the PdBI Arcsecond Whirlpool Survey (PAWS), an extensive imaging survey of GMCs in M51 (\cite{Schinnerer2013}), an environmental dependence of the GMC mass spectrum has also been reported (\cite{Colombo2014}). All of this recent progress is motivating us to conduct even wider surveys of GMCs with sufficiently high angular resolution (e.g., \cite{Leroy2016}). These previous studies of GMCs in galaxies are mainly based on $^{12}$CO observations. The $^{12}$CO emission is a bright, well-established proxy of the whole molecular gas content, which is crucial for a statistical study of extragalactic GMCs. The line is, however, generally very optically thick, and molecular gas masses are derived by using a CO-to-H$_2$ conversion factor. It has been suggested that the CO-to-H$_2$ conversion factors in galaxies can vary by a factor of 10 or more depending on the environments of molecular clouds (see \cite{Bolatto2013} for a recent review), which can therefore introduce environment-dependent uncertainties.
The isotopologue of CO, $^{13}$CO, gives another empirical measure of molecular gas mass, and extensive studies have been made using single-dish telescopes (e.g., \cite{Aalto1995}; \cite{Tosaki2002}; \cite{Matsushita2010}; \cite{Tan2011}; \cite{Vila-Vilaro2015}). However, only limited measurements of $^{13}$CO have been made so far for high-resolution, interferometric studies of extragalactic GMCs/GMAs (e.g., \cite{Helfer1995}; \cite{Papadopoulos1996}; \cite{Aalto1997}; \cite{Kikumoto1998}; \cite{Matsushita1998}; \cite{Meier2004}; \cite{Aalto2010}; \cite{Schinnerer2010}; \cite{Pan2015}; \cite{Alatalo2016}; \cite{Watanabe2016}; \cite{Konig2016}). This is particularly true for ``statistical'' studies of the physical properties of GMCs in disk regions using $^{13}$CO; for example, 25 GMCs were identified in M64 based on $3''.5$ or 75 pc resolution $^{13}$CO($J$=1--0) observations using the BIMA array, which show a size-line width relationship different from Larson's law (\cite{Rosolowsky2005a}). \citet{Hirota2011} conducted $\sim$50 pc resolution $^{13}$CO($J$=1--0) imaging of the northern spiral arm segment of IC 342 with the Nobeyama Millimeter Array and, based on 26 $^{12}$CO-identified GMCs, revealed that dissipation of the excess kinetic energy of GMCs is a required condition for the onset of massive star formation. A significant development of such studies is highly expected in the ALMA era. GMC-scale chemical properties of extragalactic molecular clouds are also expected to bring new insights to the study of activities in galaxies. In Galactic star-forming regions, chemical evolution and spatial variation are found at core and clump scales (i.e., a few parsec or smaller), and it has been suggested that these chemical properties are intimately connected to star formation and associated processes such as outflows and the resultant shocks (e.g., \cite{Bachiller1997}; \cite{Sakai2007}; \cite{Ohashi2014}).
However, it is yet unexplored whether any chemical variation and evolution of the ISM exists at a GMC scale (approximately a few tens to hundreds of parsecs) in the disk regions of galaxies; \citet{Watanabe2016} studied the chemical properties of five GMAs traced by C$^{18}$O at $\sim$300 pc resolution using CARMA, but the sensitivity and angular resolution are both not yet sufficient to address the variation of chemical properties and its link to the environment. Most high-resolution interferometric multi-molecule observations of galaxies have been made in the nuclear regions of active galactic nuclei (AGNs), such as the circumnuclear disks (CNDs) of NGC 1068 (e.g., \cite{Krips2011}; \cite{GarciaBurillo2014}; \cite{Viti2014}; \cite{Takano2014}; \cite{Nakajima2015}) and NGC 1097 (\cite{Martin2015}), the nuclear regions of star-forming galaxies (\cite{Meier2005}; \cite{Meier2015}), and (ultra) luminous infrared galaxies (e.g., \cite{Martin2011}; \cite{Sakamoto2013,Sakamoto2014}; \cite{Saito2015}; \cite{Costagliola2015}; \cite{Tunnard2015}; \cite{Lindberg2016}). Here we present high-resolution ($1''.4$ or 98 pc at a distance of 14.4 Mpc, \cite{Tully1988}; \cite{Bland-Hawthorn1997}) and sensitive $^{13}$CO($J$=1--0), C$^{18}$O($J$=1--0), CS($J$=2--1) and CH$_3$OH($J_K$=$2_K$--$1_K$) imaging of the central 1 arcmin (4.2 kpc) diameter region of NGC 1068, one of the nearest Seyfert/starburst hybrid galaxies in the local universe. We selected this galaxy because we found evidence for striking spatial variation of molecular line distributions among molecular species, including $^{13}$CO, C$^{18}$O, CS, HC$_3$N, CH$_3$CN, and CH$_3$OH, across the starbursting spiral arms in our previous (cycle 0) ALMA observations (\cite{Takano2014}). Among these molecular lines, we focus on the $^{13}$CO, C$^{18}$O, CS, and CH$_3$OH lines, and new observations were made during the ALMA cycle 2 operation.
The angular resolution and sensitivity have been improved significantly compared with pre-ALMA $^{13}$CO and C$^{18}$O observations (e.g., \cite{Helfer1995}; \cite{Papadopoulos1996}; \cite{Krips2011}) and the ALMA cycle-0 measurement (\cite{Takano2014}). In particular, the noise level is improved by a factor of 6--10 (1$\sigma$ = 1.1 -- 1.7 mJy beam$^{-1}$ for a velocity resolution of $\sim$19 km s$^{-1}$ in \citet{Takano2014}, whereas the new observations reach a noise level of 0.6 mJy beam$^{-1}$ for a velocity resolution of 1.5 km s$^{-1}$; see section 2 for details). This significant improvement in sensitivity allows us to conduct the first extensive statistical study of the physical and chemical properties of GMCs, including methanol, in the disk regions of galaxies. Methanol is formed by hydrogenation of CO on the surface of dust grains under low-temperature conditions ($\sim$10--15 K; e.g., \cite{Watanabe2003}), and is released into the gas phase by relatively weak shocks (\cite{Flower2012}; \cite{Viti2011}; \cite{Bachiller1995}). Therefore, methanol can be regarded as a tracer of weak shocks in the molecular medium. Methanol emission has been detected using interferometers in the central regions of the star-forming galaxies IC 342 (\cite{Meier2005}) and Maffei 2 (\cite{Meier2012}), the interacting galaxies VV 114 (\cite{Saito2015}, \yearcite{Saito2016}) and NGC 4038 (\cite{Ueda2016}), and a spiral arm in M51 (\cite{Watanabe2016}); no $\leq$100 pc resolution observations in disk regions of galaxies have been reported so far. With these improved ALMA data sets, our goals are (1) to study the physical and chemical properties of giant molecular clouds (GMCs) based on a $^{13}$CO-selected cloud catalog, and (2) to test whether these GMC-scale properties are linked to the larger-scale galactic environment.
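The quoted improvement factor of 6--10 can be verified by scaling both noise levels to a common channel width, since thermal noise scales as $\Delta v^{-1/2}$; a minimal sketch (assuming pure radiometer scaling, with no other differences between the data sets):

```python
import math

# Cycle-0 noise: 1.1--1.7 mJy/beam in ~19 km/s channels (Takano et al. 2014).
# Cycle-2 noise: 0.6 mJy/beam in 1.5 km/s channels.
# Radiometer equation: noise scales as (channel width)^(-1/2),
# so rescale the cycle-2 noise to a 19 km/s channel for comparison.
sigma_new_at_19 = 0.6 * math.sqrt(1.5 / 19.0)  # mJy/beam

improvement_low = 1.1 / sigma_new_at_19
improvement_high = 1.7 / sigma_new_at_19
print(f"improvement factor: {improvement_low:.1f} -- {improvement_high:.1f}")
```

which roughly reproduces the factor of 6--10 quoted above.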
Note that we focus on the properties of GMCs in the spiral arms and inter-arm regions in this study; specifically, $5'' < r < 30''$ or 350 pc $< r <$ 2.1 kpc from the nucleus (see section 3.3). This is because our selection criteria of clumps using $^{13}$CO($J$=1--0) emission are not optimized for clouds in the CND. The impact of the AGN on the surrounding dense molecular medium, i.e., the physical and chemical properties of GMCs in the CND of NGC 1068, will be reported in a forthcoming paper. The paper is organized as follows. The ALMA observations and data reduction procedures are described in section 2. After the data presentation and a flux comparison with previous single-dish measurements, in section 3 we outline the methodology of cloud identification, and then the derived GMC catalog is presented along with the derivation of the physical quantities of the GMCs. In section 4, we discuss the physical and chemical properties of the GMCs and their possible dependence on environment. The outcomes of this study are summarized in section 5. The data products of this study, including the FITS file of the $^{13}$CO data cube, will be publicly available from a dedicated web page\footnote{http://www.juen.ac.jp/lab/tosaki/n1068/}. \section{Observations and Data Reduction} The central $\sim$1 arcmin diameter region of NGC 1068 was observed using the Band 3 receiver on ALMA with the 2SB dual-polarization setup, as a cycle 2 early science program (Project code = 2013.0.00060.S; PI name = T.~Tosaki). The central frequencies of the four spectral windows, i.e., spw0, spw1, spw2, and spw3, were tuned to 110.000 GHz, 109.174 GHz, 97.174 GHz, and 96.741 GHz, respectively. Each spectral window has a bandwidth of 1.875 GHz with 3840 channels, giving a total frequency coverage of 7.5 GHz and a spectral resolution of 0.488 MHz.
This configuration allows us to simultaneously observe the $J$=1--0 lines of $^{13}$CO (the adopted rest frequency $f_{\rm rest}$ = 110.201353 GHz, \cite{Lovas1992}) and C$^{18}$O (109.782160 GHz) in the upper sideband (USB), and the $J$=2--1 line of CS (97.980968 GHz) and the $J_K$=$2_K$--$1_K$ group lines of CH$_3$OH (96.74142 GHz)\footnote{Transitions contributing to this $2_K$--$1_K$ group are $J_{K_a,K_c}$=$2_{-1,2}$--$1_{-1,1}E$, $2_{0,2}$--$1_{0,1}A$+, $2_{0,2}$--$1_{0,1}E$, and $2_{1,1}$--$1_{1,0}E$. The indicated rest frequency in the text is for $2_{0,2}$--$1_{0,1}A$+.} in the lower sideband (LSB), along with other rarer species such as HNCO and HC$_3$N (not presented in this paper). The ALMA observations were taken during three separate periods with six execution blocks in total. Table \ref{table:1} summarizes the details of the ALMA observations. Bright quasars, including J0108+0135, J0224+0659, J0238+1636, J0241-0815, and J0423-0120, were observed to calibrate the pass-band characteristics, and Mars, Uranus, and Neptune were used as primary flux calibrators. The resultant accuracy of the absolute flux scale is reported to be better than 10\% according to the ALMA Technical Handbook\footnote{https://almascience.nao.ac.jp/documents-and-tools/cycle4/alma-technical-handbook}. Calibration of the raw visibility data was conducted using the Common Astronomy Software Applications package (CASA; \cite{McMullin2007}; \cite{Petry2012}). Version 4.3.1 of the program was used for the first two data sets, whereas the remaining four data sets were processed using version 4.2.2 with the pipeline. The image reconstruction from the calibrated visibilities and deconvolution were done with the task \verb|CLEAN| in CASA version 4.5.0. After the initial imaging with natural weighting, we determined the line-free channels, and then the continuum emission was subtracted from the visibilities using the task \verb|uvcontsub|.
In order to obtain a similar beam size for the target lines at different frequencies, i.e., $^{13}$CO($J$=1--0) and C$^{18}$O($J$=1--0) in the USB, and CH$_3$OH($J_K$=$2_K$--$1_K$) and CS($J$=2--1) in the LSB, we created images using Briggs weighting with different robust parameters: \verb|robust| = 0.4 for $^{13}$CO and C$^{18}$O, 0.2 for CS, and 0.1 for CH$_3$OH. These parameters give native beam sizes of $1''.39 \times 1''.09$, $1''.33 \times 1''.06$, $1''.38 \times 1''.12$, and $1''.39 \times 1''.12$ for $^{13}$CO, C$^{18}$O, CS, and CH$_3$OH, respectively. The CLEANed images were then convolved to a circular Gaussian with a FWHM of $1''.4$, giving a spatial resolution of 98 pc at the adopted distance of NGC 1068 (14.4 Mpc). Regarding the deconvolution, we employed the multi-scale CLEAN algorithm; the conventional CLEAN algorithm models an image as a sum of point sources (i.e., Gaussians with a single FWHM value), whereas the multi-scale CLEAN algorithm models an image as a collection of multiple Gaussians with different spatial scales. This method is known to recover a higher fraction of the flux of extended emission than the conventional CLEAN algorithm (\cite{Cornwell2008}; \cite{Rich2008}). We adopt \verb|multiscale = [0, 10, 25]| with \verb|cell = '0.25arcsec'| in the task \verb|CLEAN|, specifying spatial scales of the synthesized beam size (i.e., the scale used in the conventional CLEAN), $2''.5$, and $6''.25$. This setup roughly corresponds to the scales of a point source (i.e., the beam size), the beam size $\times$2, and the beam size $\times$5, which are recommended in the CASA guide\footnote{https://casa.nrao.edu/Release4.1.0/doc/UserMan/UserMansu270.html}. The resultant 1$\sigma$ noise levels in the channel maps, measured in line-free channels, are 0.64, 0.61, 0.59, and 0.70 mJy beam$^{-1}$ for $^{13}$CO, C$^{18}$O, CS, and CH$_3$OH, respectively, at a velocity resolution of 1.5 km s$^{-1}$.
\section{Results} \subsection{Global distributions of molecular emission lines} The total integrated intensity maps of the $^{13}$CO($J$=1--0), C$^{18}$O($J$=1--0), CS($J$=2--1) and CH$_3$OH($J_K$=$2_K$--$1_K$) emission lines are displayed in figure \ref{fig:1}. We computed the zeroth-moment images of these molecular lines using the \verb|CASA| task \verb|immoments|, over a velocity range of $V_{\rm LSR}$ = 955--1315 km s$^{-1}$ without any clipping. We detect all four emission lines in the CND and the starburst ring, or tightly winding two-armed spirals. In particular, the high dynamic range ($\sim$72 in the channel maps) $^{13}$CO image, which is produced with a wide range of baseline lengths (see table \ref{table:1}, corresponding to 5.5 k$\lambda$ -- 290 k$\lambda$), the excellent uv-coverage accomplished with multiple-configuration observations with 30--40 antennas, and the multi-scale CLEAN implemented in \verb|CASA|, successfully reveals detailed sub-structures of the spiral arms and faint clouds in the inter-arm regions, which are similar to recent fully-sampled sensitive $^{12}$CO observations of M51 (\cite{Koda2009}; \cite{Schinnerer2013}) and often reproduced in state-of-the-art numerical simulations (e.g., \cite{Baba2013}). The emission lines of $^{13}$CO and C$^{18}$O in the CND are relatively weak, whereas we find striking peaks in CS and CH$_3$OH, as pointed out by the previous ALMA cycle 0 observations (\cite{Takano2014}). The positions of the peak intensities in the $^{13}$CO, C$^{18}$O, and CS maps are similar to each other, whereas the distribution of prominent CH$_3$OH peaks is significantly different from these three lines. This point will be discussed more quantitatively in section \ref{sec:GMC_properties}.
\subsection{Comparison of flux with single-dish telescopes} The $^{13}$CO($J$=1--0) flux obtained from ALMA was compared with those from single-dish telescopes (NRO 45-m and IRAM 30-m) to estimate the recovery fraction of the total line flux. First, the $^{13}$CO($J$=1--0) data cube was convolved to $16''$, the beam size of the NRO 45-m telescope at 110 GHz. We then obtained a spectrum at the position of the nucleus, which was the pointing position of the 45-m telescope observation, and the velocity-integrated flux was computed. The resultant $^{13}$CO($J$=1--0) flux is 18.8 Jy km s$^{-1}$. On the other hand, a NRO 45-m telescope measurement gives a $^{13}$CO($J$=1--0) flux of $8.9\pm0.2$ K km s$^{-1}$ on the main-beam temperature scale (Takano et al., in preparation), corresponding to a flux density of $22.5\pm 0.51$ Jy km s$^{-1}$. If we consider the absolute flux calibration accuracy of the NRO 45-m telescope (typically $\pm 15$~\%, e.g., \cite{Yoshida2015}) and ALMA ($\pm 5$\%, the ALMA Technical Handbook\footnote{https://almascience.nao.ac.jp/documents-and-tools/cycle4/alma-technical-handbook}), the ratio of these two flux measurements is $(18.8 \pm 0.9)/(22.5 \pm 3.4) = 0.84 \pm 0.13$. Similarly, we compare the ALMA flux with that obtained using the IRAM 30-m telescope in the same manner; the ALMA $^{13}$CO cube was convolved to $21''$, the 30-m beam size at 110 GHz, and we obtained a spectrum at the center to compute the integrated intensity. It was found to be $37.2\pm1.86$ Jy km s$^{-1}$, whereas the IRAM 30-m measurement gave $13.10 \pm 0.24$ K km s$^{-1}$ (\cite{Aladro2015}) or $57.0 \pm 1.0$ Jy km s$^{-1}$. Again, considering a typical absolute calibration accuracy of the IRAM 30-m telescope ($\sim \pm10$\%, EMIR Users Guide\footnote{http://www.iram.es/IRAMES/mainWiki/EmirforAstronomers}), the fraction of the recovered flux is then $(37.2 \pm 1.9)/(57.0 \pm 5.7) = 0.65 \pm 0.07$.
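The two recovered-flux ratios above follow from standard first-order error propagation for a quotient of independent measurements; a short sketch (the percentage calibration uncertainties are those quoted in the text, which dominate over the statistical errors):

```python
import math

def ratio_with_error(f1, df1, f2, df2):
    """Ratio f1/f2 with first-order propagation of independent errors."""
    r = f1 / f2
    return r, r * math.sqrt((df1 / f1) ** 2 + (df2 / f2) ** 2)

# ALMA vs. NRO 45-m (Jy km/s): 5% and 15% calibration uncertainties.
r45, dr45 = ratio_with_error(18.8, 18.8 * 0.05, 22.5, 22.5 * 0.15)
# ALMA vs. IRAM 30-m (Jy km/s): 5% and 10% calibration uncertainties.
r30, dr30 = ratio_with_error(37.2, 37.2 * 0.05, 57.0, 57.0 * 0.10)
print(f"ALMA/45-m = {r45:.2f} +/- {dr45:.2f}")  # 0.84 +/- 0.13
print(f"ALMA/30-m = {r30:.2f} +/- {dr30:.2f}")  # 0.65 +/- 0.07
```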
These two measurements of the recovered flux ratios, i.e., ALMA/45-m = $84 \pm 13$\% and ALMA/30-m = $65 \pm 7$\%, suggest that the overall $^{13}$CO flux is largely recovered in these ALMA observations, despite the lack of Atacama Compact Array (ACA, a.k.a.~Morita Array) measurements. Note that almost all the single-dish flux is claimed to be recovered in the ALMA cycle 0 observations of CO($J$=3--2) in NGC 1068 without ACA measurements (\cite{GarciaBurillo2014}). Although the difference between the recovered flux ratios of ALMA/45-m and ALMA/30-m is not significant, such a difference can indeed occur, if we recall the spatial extent of the $^{13}$CO($J$=1--0) emission in the central region of NGC 1068 and the difference between the NRO 45-m and IRAM 30-m beam sizes; a significant fraction of the $^{13}$CO($J$=1--0) emission comes from the starburst ring with a diameter of $\sim 30''$, and the IRAM 30-m beam ($21''$) can couple with the emission from the starburst ring. Such extended emission tends to be resolved out by interferometric observations, giving a smaller recovered flux for the IRAM measurement. On the other hand, the beam size of the NRO 45-m observations ($16''$) is significantly smaller than the size of the starburst ring, and it mainly catches the $^{13}$CO($J$=1--0) emission in the CND, which is spatially compact ($<$ $4''$), resulting in a higher recovered flux ratio. By adding ACA (including total power array) measurements to our ALMA data in the near future, we will be able to obtain fully sampled $^{13}$CO (and other molecular line) data sets to confirm these estimates quantitatively. In any case, we suggest that the missing flux (if any) will have a rather minor impact on the following analysis, because we mainly focus on the clumpy structures, i.e., GMCs, in this region.
\subsection{Cloud identification using the $^{13}$CO data cube} We applied the \verb|CLUMPFIND| algorithm (\cite{Williams1994}; \cite{Rosolowsky2005a}) to the $^{13}$CO($J$=1--0) data cube in order to decompose the $^{13}$CO emission into discrete clouds and measure their physical properties. We adopt a peak threshold of 8$\sigma$ (5.12 mJy beam$^{-1}$) and a contour increment of 4$\sigma$. This resulted in the detection of 265 clumps. After excluding clumps with derived diameters smaller than $1''.4$ (i.e., the CLEAN beam), we constructed a catalog of clouds containing 187 individual clumps in the observed field of NGC 1068. In this study, we adopted a fairly high cut-off level (i.e., a threshold of 8$\sigma$ and a contour increment of 4$\sigma$) for the following reasons. Firstly, one of the goals of this study is to measure much weaker emission lines from rarer species such as C$^{18}$O and methanol, so the base GMC sample using $^{13}$CO should be detected at high signal-to-noise ratios (S/N). Secondly, another goal is to study the GMC mass spectrum in this region, and in order to compare with previous studies, the molecular gas mass range of our prime interest is $\sim 10^4$ $M_\odot$ or larger (see section \ref{section:GMC_Mass_Function}); the adopted threshold of 8$\sigma$ corresponds to a molecular gas mass of $\sim 1.7 \times 10^4~ M_\odot$ (for a cloud with a size comparable to the beam size), and therefore the high cut-off is sufficient for this purpose, too. Lastly, the \verb|CLUMPFIND| algorithm was originally implemented for high-S/N data, and some issues are known when applying it to data with a low dynamic range (\cite{Rosolowsky2005a}; \cite{Sheth2008}). By setting such a high threshold, we can construct a robust GMC catalog that is less affected by these software issues.
It should also be emphasized that a high velocity resolution (1.5 km s$^{-1}$) is essential for the study of molecular gas in the disk regions of galaxies, because the line widths of molecular clouds in galactic disks can be just a few km s$^{-1}$, as shown in the later sections. We also note that, because of the high S/N of the GMCs, the accuracy of the positions of the identified GMCs should be significantly better than the beam size $\theta_{\rm beam}$ (as it scales as $\propto \theta_{\rm beam}$/(S/N)). The choice of the analysis algorithm, e.g., CPROPS (\cite{RosolowskyLeroy2006}) or dendrogram (\cite{Rosolowsky2008}), can be another issue (e.g., \cite{DonovanMeyer2013}). Intensity-based approaches, rather than decomposing the emission into clumps, have also been proposed (\cite{Sawada2012}; \cite{Hughes2013}; \cite{Leroy2016}). Detailed analysis of clump identification and/or intensity-based characterization of $^{13}$CO-identified cloud properties down to the sensitivity limit (e.g., 5$\sigma$ or less) with multiple-algorithm and approach comparisons is beyond the scope of this paper, and such analysis will be presented elsewhere. \subsection{Physical properties and molecular line ratios of the identified clouds} \label{sec:M_13CO-Mvir} We estimate the molecular gas mass of each cloud from the velocity-integrated $^{13}$CO intensity, $M_{\rm ^{13}CO}$, using the following equation, which assumes local thermodynamic equilibrium (LTE) and optically thin emission (e.g., \cite{Meier2000}), \begin{eqnarray} N({\rm H_2})_{^{13}{\rm CO}}({\rm cm}^{-2}) = 2.41 \times 10^{14} \frac{[{\rm H}_2]}{[^{13}{\rm CO}]} \frac{e^{5.29/T_{\rm ex}(K)}}{e^{5.29/T_{\rm ex}(K)}-1} & \nonumber\\ \times I_{^{13}{\rm CO}} ({\rm K~ km~s}^{-1}), \end{eqnarray} where $N({\rm H_2})$ is the molecular hydrogen column density.
Adopting an abundance ratio of $[{\rm H}_2]/[^{13}{\rm CO}] = 5.0 \times 10^5$ (\cite{Dickman1978}) and an excitation temperature $T_{\rm ex}$ of 8.5 K (\cite{Nakajima2015}), we obtain the column density, which is then multiplied by the cloud area, i.e., the projected area of the clump on the sky obtained by \verb|CLUMPFIND|, to obtain the molecular gas mass $M_{^{\rm 13}{\rm CO}}$. We obtain molecular gas masses of GMCs $M_{^{\rm 13}{\rm CO}}$ in NGC 1068 ranging from $1.8 \times 10^4$ to $4.2 \times 10^7 M_\odot$. The averaged $M_{^{\rm 13}{\rm CO}}$ is 2.9 $\times 10^6 M_\odot$. Assuming a spherical cloud with a density profile $\rho \propto r^{-\beta}$, we calculate the virial mass of each GMC using the following formula (e.g., \cite{MacLaren1988}; see also \cite{Hirota2011}; \cite{DonovanMeyer2012}), \begin{eqnarray} M_{\rm vir} (M_\odot) = 126 \frac{5-2\beta}{3-\beta} \left(\frac{R}{{\rm pc}}\right) \left(\frac{\Delta v}{{\rm km~s}^{-1}}\right)^2, \end{eqnarray} where $R$ is the radius of the cloud (the effective circular radius defined by \citet{Williams1994}, deconvolved by the beam size) and $\Delta v$ is the velocity width of the $^{13}$CO profile of the cloud. Here we adopt $\beta=1$ for consistency with previous similar measurements. The derived virial masses range from $2.7\times 10^4$ to $1.7\times 10^7 M_\odot$, and the average value, $2.6 \times 10^6 M_\odot$, is similar to that of $M_{^{13}{\rm CO}}$. The derived physical parameters of the GMCs, i.e., the position in the 3D cube (dR.A., dDecl., and velocity), the peak flux of the $^{13}$CO emission, the cloud radius, and the velocity width (the full width at half maximum, FWHM, of the $^{13}$CO($J$=1--0) spectrum) of the 187 GMCs are listed in table \ref{table:2}, along with the molecular gas mass estimated from the $^{13}$CO emission and the virial mass. Note that the cloud position, dR.A. and dDecl., denotes the offset from the center of the field of view, (R.A.(J2000) = \timeform{2h42m40.7s}, Decl.(J2000) = \timeform{-0D0'47.9''}). Figure \ref{fig:2} shows histograms of the size, velocity width, molecular gas mass ($M_{^{13}{\rm CO}}$), and virial mass of the identified clouds. The averaged size and velocity width of the identified clouds are 102 pc and 14.9 km s$^{-1}$, respectively. These values are slightly larger than the typical values for GMCs in the Milky Way (\cite{Sanders1985}), and smaller than those of GMAs in other galaxies (e.g., \cite{Rand1990}; \cite{Muraoka2009}). We refer to the identified clouds as GMCs. We display the identified clouds on the $^{13}$CO integrated intensity map in figure \ref{fig:3}. The identified clouds are located in the $5'' < r < 30''$ or 350 pc $< r <$ 2.1 kpc region from the nucleus. Note that no GMCs were identified in the central region including the CND under our criteria, although we do detect significant ($>5\sigma$) $^{13}$CO emission in the CND. The physical properties of GMCs in the CND, which are expected to be impacted by activity from the central AGN, will be reported in a separate paper. Although a number of clouds overlap in this 2-dimensional view, figure \ref{fig:4} gives a clearer 3-dimensional view of the positions of the identified clouds. In figures \ref{fig:3} and \ref{fig:4}, the sizes of the symbols are proportional to the molecular gas mass $M_{^{13}{\rm CO}}$. Although most GMCs are located on the starburst ring, some GMCs are also seen outside the ring. We find a wide range of GMC masses on the starburst ring, whereas GMCs outside the ring are predominantly less massive. Table \ref{table:3} shows the intensities of $^{13}$CO($J$=1--0), C$^{18}$O($J$=1--0), CS($J$=2--1), and CH$_3$OH($J_K$=$2_K$--$1_K$) and the virial parameters for each GMC.
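The two mass estimates used above can be summarized in a short numerical sketch; the abundance ratio, excitation temperature, and density-profile index are the adopted values from the text, while the example radius and line width are hypothetical illustrative inputs:

```python
import math

def n_h2_column(i_13co, t_ex=8.5, h2_to_13co=5.0e5):
    """LTE, optically thin H2 column density [cm^-2] from the
    velocity-integrated 13CO(1-0) intensity [K km/s]."""
    boltz = math.exp(5.29 / t_ex) / (math.exp(5.29 / t_ex) - 1.0)
    return 2.41e14 * h2_to_13co * boltz * i_13co

def m_vir(radius_pc, dv_kms, beta=1.0):
    """Virial mass [Msun] for a spherical cloud with rho ~ r^-beta."""
    return 126.0 * (5.0 - 2.0 * beta) / (3.0 - beta) * radius_pc * dv_kms ** 2

# With the adopted values, 1 K km/s of 13CO corresponds to ~2.6e20 cm^-2 of H2.
print(f"N(H2) per K km/s: {n_h2_column(1.0):.2e} cm^-2")
# A hypothetical cloud near the catalog averages (R ~ 50 pc, dv ~ 15 km/s):
print(f"M_vir: {m_vir(50.0, 15.0):.2e} Msun")
```

For the hypothetical cloud, the virial mass comes out near the $2.6 \times 10^6~M_\odot$ catalog average, as expected.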
These molecular line intensities in table \ref{table:3} are derived from the integrated intensity over the pixels of each GMC. 187, 182, and 121 GMCs with peak intensities larger than $3\sigma$ are detected in C$^{18}$O, CS, and CH$_3$OH, respectively. Figure \ref{fig:5} shows histograms of the line intensity ratios for the GMCs. The histograms of the integrated intensity ratios show that C$^{18}$O/$^{13}$CO and CS/$^{13}$CO have single-peaked distributions. The CS/$^{13}$CO ratios show a trend similar to the C$^{18}$O/$^{13}$CO ratios, although their scatter is larger. On the other hand, CH$_3$OH/$^{13}$CO shows a wide-spread distribution ($\leq$ 0.01 to 0.1--0.2). Table \ref{table:4} shows the averaged values and ranges of the intensity ratios and virial parameters for these clouds, together with their physical properties. Here we mention the possible systematic uncertainties of these physical quantities. The molecular gas mass estimated from $^{13}$CO, $M_{\rm ^{13}CO}$, has two possible error sources: the $^{13}$CO fractional abundance, $[{\rm H}_2]/[^{13}{\rm CO}]$, and the adopted excitation temperature $T_{\rm ex}$. The adopted $^{13}$CO fractional abundance (\cite{Dickman1978}) is based on measurements of GMCs in Galactic disk regions, and hence it is not trivial to determine whether such a value is applicable to the starburst ring of NGC 1068. Future direct estimation of the $^{13}$CO fractional abundance will be necessary. The adopted excitation temperature of $^{13}$CO is thought to be reliable, because it is based on measurements of the $^{13}$CO $J=$3--2 and 1--0 lines in the starburst ring (\cite{Nakajima2015}). Nevertheless, some cloud-to-cloud variation of $T_{\rm ex}$ may exist, depending on the star-forming activity of each cloud.
For instance, some spatial variation of the CO($J$=3--2)/CO($J$=1--0) ratios has been reported across the starburst ring of NGC 1068 (\cite{GarciaBurillo2014}), implying a possible variation of physical parameters. Again, we may need additional observations such as $J$=2--1 and/or 3--2 of $^{13}$CO with a similar angular resolution, to make accurate measurements of $T_{\rm ex}$ of individual clouds in the future. \section{Discussion} \subsection{Comparison with GMCs in the Milky Way and local galaxies} The size - velocity width relation of identified GMCs is shown in figure \ref{fig:6}, which also shows the correlation of molecular clouds in the Milky Way (\cite{Sanders1985}, based-on $^{12}$CO($J$=1--0)), cores/clumps in the Galactic star-forming region W51 (\cite{Parsons2012}, $^{13}$CO($J$=3--2)), and GMCs/GMAs in local galaxies including LMC (\cite{Fukui2008}, $^{12}$CO($J$=1--0)), M33 (\cite{Onodera2012}, $^{12}$CO($J$=1--0)), M51 (\cite{Colombo2014}, $^{12}$CO($J$=1--0)) and M83 (\cite{Muraoka2009}, $^{12}$CO($J$=3--2)). \textcolor{black}{This plot demonstrates that the Larson's relation observed with $^{13}$CO in the central region of NGC 1068 exhibits overall agreement with those measured with $^{12}$CO in local spiral galaxies (e.g., \cite{Gratier2012}; \cite{DonovanMeyer2013}; \cite{Rebolledo2015}).} We found a trend for the GMCs with large velocity width in NGC 1068 similar to GMCs in the Milky Way, though the scatter is large. One of the possible cause of this scatter could be an effect of size overestimation for small GMCs due to the spatial resolution in the \verb|CLUMPFIND| algorithm. \textcolor{black}{We also note that $^{12}$CO emission tends to trace the outskirts of GMCs better than $^{13}$CO, so the extents of GMCs defined in $^{12}$CO and $^{13}$CO could be different. 
A high cut-off level of cloud identification in $^{13}$CO may also give a smaller cloud size than $^{12}$CO measurements, although these effects are not clearly evident in this figure.} Figure \ref{fig:7} shows the correlation between the LTE mass ($M_{\rm ^{13}CO}$) and the virial mass of the identified GMCs. The virial masses in NGC 1068 are proportional to $M_{\rm ^{13}CO}$, as are those of GMCs in the Milky Way \textcolor{black}{(W51)}, the LMC, M33, and M83, although $M_{\rm vir}$ is slightly larger than $M_{^{13}{\rm CO}}$. This suggests that most of the \textcolor{black}{observed} GMCs\textcolor{black}{, traced by $^{13}$CO,} are self-gravitating\textcolor{black}{, as found in recent high-resolution interferometric $^{12}$CO observations of GMCs in local galaxies (e.g., \cite{DonovanMeyer2012}, 2013; \cite{Colombo2014}; \cite{Rebolledo2012}, 2015; \cite{Utomo2015}; \cite{Faesi2016})}. \subsection{Mass function of GMCs in the disk of NGC 1068} \label{section:GMC_Mass_Function} The mass spectrum or mass distribution of GMCs is one of the fundamental characteristics of the molecular medium in \textcolor{black}{the Milky Way} and other galaxies (e.g., \cite{Rosolowsky2005b}). The GMC mass distribution is often modeled as a power law: \begin{equation} f(M) = \frac{dN}{dM} \propto M^{\gamma}, \end{equation} in differential form, where $M$ is the molecular gas mass and $N$ is the number of molecular clouds. In the case of GMCs in the inner disk of the Milky Way, $\gamma$ is known to be in the range from $-1.6$ (\cite{Williams1997}) to $-1.9$ (\cite{Heyer2001}).
However, recent statistical studies of GMCs in the outer disk of the Milky Way and in local galaxies such as the LMC, M33, M51\textcolor{black}{, and NGC 4526} suggest that the shape of the GMC mass function can vary depending on the environment, i.e., the regions where the GMCs reside (e.g., \cite{Rosolowsky2005b}; \cite{Wong2011}; \cite{Gratier2012}; \cite{Colombo2014} \textcolor{black}{; \cite{Utomo2015}}), and theoretical studies of such environmental dependence will bring insights into the formation processes of molecular clouds (e.g., \cite{Inutsuka2015}). Because all of these studies are based on $^{12}$CO measurements in quiescent and relatively moderate star-forming galaxies, it is of interest to test whether the GMC mass function derived from $^{13}$CO in more actively star-forming regions gives a similar slope. Figure \ref{fig:8} shows the derived LTE mass function in integral form for the 187 identified GMCs in the central $\sim$4 kpc region of NGC 1068. Note that the mass threshold of our GMC sample, $\sim1.7 \times 10^4$ $M_\odot$, corresponds to $>$8 $\sigma$ in the $^{13}$CO data cube. Therefore it is safe to discuss the slope of the mass function above $\sim 10^4$ $M_\odot$ here. We find that the overall shape of the $^{13}$CO-based GMC mass function in NGC 1068 is similar to those in the Milky Way and local galaxies based on $^{12}$CO; i.e., it can be expressed as a power law with a cut-off at the high-mass end ($M > 10^{7} M_\odot$). Here we adopt a truncated power law, \begin{equation} N(M'>M) = N_0 \left[\left(\frac{M}{M_0}\right)^{\gamma+1}-1\right], \end{equation} where $N_0$ is the number of clouds more massive than $2^{1/(\gamma+1)}M_0$, which is the mass where the distribution deviates from a single power law (e.g., \cite{Mckee1997}; \cite{Wong2011}; \cite{Colombo2014}).
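As an illustration of how such a truncated power law can be fit in practice, the sketch below generates a synthetic cumulative mass distribution and recovers the parameters with `scipy.optimize.curve_fit`. The parameter values and noise level are illustrative, not the NGC 1068 catalog; masses are rescaled to units of $10^7\,M_\odot$ for numerical conditioning.

```python
import numpy as np
from scipy.optimize import curve_fit

def n_cum(m, n0, m0, gamma):
    # Truncated power law: N(M' > M) = N0 [ (M/M0)^(gamma+1) - 1 ]
    return n0 * ((m / m0) ** (gamma + 1.0) - 1.0)

# Synthetic cumulative counts on a log-spaced mass grid, using
# illustrative parameters (NOT the NGC 1068 catalog), with 5% noise.
rng = np.random.default_rng(0)
m_grid = np.logspace(5.0, 7.6, 40) / 1e7     # masses in units of 1e7 M_sun
n_obs = n_cum(m_grid, 50.0, 6.0, -1.25) * rng.normal(1.0, 0.05, m_grid.size)

popt, _ = curve_fit(n_cum, m_grid, n_obs, p0=(30.0, 10.0, -1.5),
                    bounds=([1.0, 0.5, -3.0], [500.0, 100.0, -1.05]))
n0_fit, m0_fit, gamma_fit = popt
print(f"gamma = {gamma_fit:.2f}, M0 = {m0_fit * 1e7:.2e} M_sun")
```

The bounds keep $M_0$ positive and $\gamma + 1$ negative during the fit, which avoids fractional powers of negative numbers.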
The fit to the data (for $M>10^5 M_\odot$) gives a slope $\gamma=-1.25\pm0.07$, a maximum mass $M_0 = (5.92 \pm 0.63) \times 10^7$ $M_\odot$, and $N_0 = 54.4 \pm 28.2$. The derived slope of the mass function for GMCs ($M>10^5 M_\odot$) in the disk of NGC 1068, $\gamma = -1.25\pm0.07$, is significantly shallower than those of Galactic GMCs ($\gamma = -1.6$: \cite{Williams1997}, $\gamma = -1.9$: \cite{Heyer2001}) and of the disk GMCs in M51 ($\gamma = -1.8$ to $-2.5$, \cite{Colombo2014}) and NGC 300 ($\gamma = -1.8$, \cite{Faesi2016}), indicating that massive clouds make a larger contribution to the mass budget in NGC 1068 than in these disk regions. Furthermore, although we observe a cut-off of the mass function at the high-mass end, as in the case of GMCs in the Milky Way and other local galaxies such as M51, the measured maximum mass, $M_0 \sim 6 \times 10^7 M_\odot$, is one order of magnitude larger than that in the nuclear bar region of M51 (a shallow slope, $\gamma \sim -1.3$, similar to NGC 1068, and a sharp high-mass cut-off above $M_0 \sim 5.5 \times 10^6$ $M_\odot$ are reported; \cite{Colombo2014}). Presumably, the presence of GMCs more massive than those in \textcolor{black}{the Milky Way} and M51 could be a driver of the intense starburst in NGC 1068; the elevated star-formation activity in the starburst ring ($\Sigma_{\rm SFR} \sim 1 - 2$ $M_\odot$ yr$^{-1}$ kpc$^{-2}$, \cite{Tsai2012}), which is significantly larger than that in the spiral arms of M51 (mostly $\Sigma_{\rm SFR}$ $\sim$ a few $\times 10^{-1}$ $M_\odot$ yr$^{-1}$ kpc$^{-2}$ or less, e.g., \cite{Watanabe2016}), could be caused by such \textcolor{black}{a} difference in the GMC mass function. \subsection{Spatial distribution of virial parameters} Figure \ref{fig:VirialParameter-on-13COmap} displays the positions of the GMCs along with their virial parameters.
We find that the supercritical GMCs, i.e., those with $M_{\rm ^{13}CO}/M_{\rm vir} > 1$, are preferentially found on the starburst ring, while the subcritical GMCs are mainly located both inside and outside of the ring. This is consistent with the trend of GMAs in M83 based on $^{12}$CO($J$=3--2), which traces denser gas (\cite{Muraoka2009}). It is remarkable that these trends are also found with $^{13}$CO($J$=1--0), an optically thin molecular mass tracer, at GMC scales. Such an environmental difference \textcolor{black}{depending on the location of GMCs, i.e., whether {\it on-arm} or {\it inter-arm},} has been reported by \textcolor{black}{$^{12}$CO-based} statistical studies of GMCs in IC 342 (\cite{Hirota2011}), M51 (\cite{Colombo2014}), and \textcolor{black}{local spiral galaxies such as NGC 6946 and M101 (e.g., \cite{Rebolledo2015})}. \textcolor{black}{Our result} suggests that GMCs outside the starburst ring \textcolor{black}{in NGC 1068} are not gravitationally bound, giving an insight into the formation mechanism of GMCs, especially in the inter-arm regions. \subsection{GMC-scale variation of molecular line properties and their link to the larger scale environments in NGC 1068} \label{sec:GMC_properties} In section \ref{sec:M_13CO-Mvir}, we have shown that the histograms of the C$^{18}$O/$^{13}$CO and CS/$^{13}$CO \textcolor{black}{integrated intensity ratios (hereafter referred to as ratios)} have single-peaked distributions, while the histogram of the CH$_3$OH/$^{13}$CO ratios exhibits a widely scattered distribution (\textcolor{black}{figure} \ref{fig:5}). In order to test whether these molecular line ratios follow any trend with other physical quantities, they are plotted against the molecular gas masses and virial parameters of each GMC (figure \ref{fig:9}).
We find that the C$^{18}$O/$^{13}$CO ratios are mostly uniform regardless of the molecular gas masses and virial parameters (the \textcolor{black}{top panels} of figure \ref{fig:9}), whereas the CS/$^{13}$CO ratios show a positive trend with the gas mass and virial parameter, as shown in the \textcolor{black}{middle panels} of figure \ref{fig:9}. On the other hand, the CH$_3$OH/$^{13}$CO ratios exhibit different behavior from the C$^{18}$O/$^{13}$CO and CS/$^{13}$CO ratios; they show significantly larger scatter and no clear systematic tendency. We then display the spatial distributions of these molecular line ratios in figure \ref{fig:10}, in order to see whether any connection exists between the GMC-scale molecular line ratios and larger scale environments such as the spiral arms and circumnuclear starburst regions. In this figure, the size of each circle is proportional to the ratio of each molecule with respect to $^{13}$CO. The diameters of the circles representing the C$^{18}$O/$^{13}$CO and CS/$^{13}$CO ratios are almost uniform across the starburst ring, as expected from the narrow variation of these ratios (figure \ref{fig:5}). On the other hand, in the top panel of figure \ref{fig:10}, the circle diameters vary significantly from cloud to cloud in the CH$_3$OH/$^{13}$CO map. The CH$_3$OH/$^{13}$CO ratios are smallest around the bar-end and become larger along the spiral arms. That is, we find systematic spatial variations of the CH$_3$OH/$^{13}$CO ratios across the starburst ring of NGC 1068 at a GMC scale. Below we discuss the possible physical and chemical processes that govern the observed behavior of these molecular line ratios.
\subsubsection{C$^{18}$O/$^{13}$CO ratios: uniformity among GMCs} \label{sec:line_ratios_vs_gasmass} The C$^{18}$O/$^{13}$CO ratios (the \textcolor{black}{top panels} of figure \ref{fig:9}) are found to be mostly uniform regardless of the molecular gas mass and virial parameter, as expected from figure \ref{fig:5}; the C$^{18}$O/$^{13}$CO ratios vary within roughly a factor of 2. The average value of the C$^{18}$O/$^{13}$CO ratios is $0.27 \pm 0.05$, as summarized in table \ref{table:4}. This value is similar to other high-resolution (GMC- or GMA-scale) extragalactic C$^{18}$O/$^{13}$CO ratios, such as in NGC 6946 (\cite{Meier2004}), IC 342 (\cite{Meier2005}), and M 51 (\cite{Watanabe2016}); see also the top panel of figure \ref{fig:12}, summarizing these extragalactic, GMC/GMA-scale C$^{18}$O/$^{13}$CO ratio measurements. Note that 10-pc scale measurements of the C$^{18}$O/$^{13}$CO ratios of GMCs in the LMC reveal significantly lower values than in these galaxies ($<0.1$; \cite{Nishimura2016}). This could presumably be due to the selective far-UV dissociation of C$^{18}$O (\cite{vanDishoeck1988}), which is indeed observed in the Orion-A giant molecular cloud (\cite{Shimajiri2014}). However, the observed spatial uniformity of the C$^{18}$O/$^{13}$CO ratios indicates that such isotope-selective photo-dissociation, which is expected to be significant in GMCs on spiral arms (where intense starbursts occur) and less significant in inter-arm GMCs, plays only a negligible role in the measured GMC-scale C$^{18}$O/$^{13}$CO ratios. Because both the C$^{18}$O and $^{13}$CO $J$=1--0 lines are considered to be optically thin, the ratio can be regarded, to zeroth order, as the [C$^{18}$O]/[$^{13}$CO] fractional abundance ratio. In \textcolor{black}{the Milky Way}, the [C$^{18}$O]/[$^{13}$CO] abundance ratio is known to be $\sim$1/7 -- 1/8 (\cite{Henkel1993}).
Owing to the different production processes of C$^{18}$O (produced in short-lived massive stars during an early phase of a starburst event, \cite{Prantzos1996}) and $^{13}$CO (produced later in intermediate-mass stars, \cite{Wilson1992}; \cite{Meier2004} and references therein), it is suggested that GMCs preferentially forming massive stars can display an elevated C$^{18}$O abundance with respect to $^{13}$CO (\cite{Henkel1993}; \cite{Meier2004}; \cite{Konig2016}). The measured C$^{18}$O/$^{13}$CO ratio, or [C$^{18}$O]/[$^{13}$CO] abundance ratio, of 0.27 ($\sim$1/4) is larger than that in Galactic GMCs by a factor of 2, and may be a reflection of the on-going dusty starburst activity in the observed spiral arms of NGC 1068. Remarkably, an extremely elevated [C$^{18}$O]/[$^{13}$CO] abundance ratio, close to unity, has been reported in the gravitationally magnified submillimeter galaxy (i.e., a dusty extreme starburst galaxy) SMM J2135-0102 at $z=2.325$, indicating the presence of intense star formation preferentially biased toward high-mass stars (\cite{Danielson2013}). Extending the measurements of the C$^{18}$O and $^{13}$CO lines to various environments will be crucial to establish the [C$^{18}$O]/[$^{13}$CO] abundance ratio as a tool for studying the evolution of galaxies near and far in the ALMA era. \subsubsection{CS/$^{13}$CO ratios: tracing a variation of dense gas fraction among GMCs?} In the \textcolor{black}{middle panels} of figure \ref{fig:9}, we find a trend of the CS/$^{13}$CO ratios with respect to the molecular gas mass and virial parameter; massive and supercritical GMCs tend to show higher CS/$^{13}$CO ratios, whereas smaller CS/$^{13}$CO ratios are found among less massive or subcritical GMCs. The CS/$^{13}$CO ratio can be regarded as a proxy for the dense gas fraction, because CS traces significantly denser molecular gas than $^{13}$CO owing to its higher critical density ($n_{\rm crit} \sim10^5$ cm$^{-3}$ for CS($J$=2--1)).
Therefore, the relationship between these line ratios and GMC masses suggests that more massive GMCs form dense molecular gas more efficiently than less massive ones. This is in fact what we found in M33, i.e., more massive GMCs tend to show higher $^{12}$CO($J$=3--2)/$^{12}$CO($J$=1--0) line ratios, which can be interpreted as another indicator of the dense gas fraction for relatively quiescent disk regions (\cite{Onodera2012}). \subsubsection{Spatial variation of CH$_3$OH$/^{13}$CO ratios: caused by mild shocks along spiral arms?} In figure \ref{fig:10}, we find that the CH$_3$OH/$^{13}$CO ratios are smallest around the bar-end and become larger along the spiral arms, i.e., we find systematic spatial variations of the CH$_3$OH/$^{13}$CO ratios across the starburst ring of NGC 1068 at a GMC scale. In fact, similar tendencies have been reported in the central regions of IC 342 and Maffei 2, where the distribution of CH$_3$OH is spatially anti-correlated with those of $^{13}$CO and C$^{18}$O (\cite{Meier2005}; \cite{Meier2012}), consistent with our findings. Here, we discuss the origin of the observed systematic spatial variation of the CH$_3$OH/$^{13}$CO ratios across the starburst ring in NGC 1068. Considering the direction of galactic rotation in the disk of NGC 1068 (gas goes around counter-clockwise), the CH$_3$OH/$^{13}$CO ratios increase on the up-stream side and then decrease in the down-stream direction. Two possibilities are: (1) GMCs with different line ratios are produced locally, and (2) GMCs with high line ratios on the up-stream side of the spiral arms travel along the arms, and the line ratios then gradually decrease. The chemical timescale of CH$_3$OH formation and destruction is known to be $10^5$ years (\cite{Nomura2004}).
The distance of the starburst ring/spiral arms from the nucleus is approximately 20$^{\prime\prime}$ - 30$^{\prime\prime}$ (1.6 - 2.4 kpc), and therefore the dynamical timescale (i.e., the time necessary for one rotation) is 5 - 7 $\times 10^7$ years if we adopt a circular rotation velocity of 200 km s$^{-1}$ at this radius (\cite{Schinnerer2000}). Two distinct GMCs with high and low CH$_3$OH/$^{13}$CO line ratios are separated by $\sim$0.5 rotation, and it takes approximately $10^7$ years to travel between these two positions. This timescale is significantly longer than the chemical formation/destruction timescale of $10^5$ years. We therefore conclude that scenario (1) is more likely than (2), i.e., GMCs with different chemical properties are formed locally in different places. What is the cause of such local cloud-to-cloud variation in methanol intensity with respect to $^{13}$CO? One possible cause is a local difference in either shock strength or dust temperature. CH$_3$OH is suggested to be a tracer of mild shocks, because its gas-phase abundance is enhanced in moderately intense shocks whereas it is destroyed in strong-shock environments (\cite{Viti2011}). Shock strengths should be higher at the bar-end than in the spiral arms, because the bar-end is the orbit-crowding region of two types of gas motion, the $x_1$ and $x_2$ orbits (e.g., \cite{Athanassoula1999}). Although shocks are expected along the spiral arms (spiral shock waves, \citet{Fujimoto1968}, \citet{Roberts1969}; see also \citet{Baba2013} for a recent theoretical view on such spiral shocks), they may be milder than those at the bar-end. The shocks at the bar-end may be strong enough to destroy methanol molecules, although no significant SiO emission, a tracer of strong shocks, is reported around the bar-end of NGC 1068 (\cite{GarciaBurillo2010}). Can we see a possible link between the shock strengths and methanol abundance enhancement?
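The dynamical-timescale estimate above is simple to reproduce; a quick numerical check, adopting the ring radii (1.6--2.4 kpc) and rotation velocity (200 km s$^{-1}$) quoted in the text:

```python
import math

KM_PER_PC = 3.0857e13          # 1 pc in km
SEC_PER_YR = 3.156e7           # 1 year in seconds

def rotation_period_yr(r_kpc, v_kms):
    """Time for one circular orbit of radius r_kpc at speed v_kms, in years."""
    circumference_km = 2.0 * math.pi * r_kpc * 1e3 * KM_PER_PC
    return circumference_km / v_kms / SEC_PER_YR

# Starburst ring at r = 1.6 - 2.4 kpc, v = 200 km/s (Schinnerer et al. 2000):
t_inner = rotation_period_yr(1.6, 200.0)   # ~ 5e7 yr
t_outer = rotation_period_yr(2.4, 200.0)   # ~ 7e7 yr
# A fraction of a rotation thus corresponds to ~1e7 yr, far longer than
# the ~1e5 yr chemical formation/destruction timescale of CH3OH.
print(f"{t_inner:.1e} - {t_outer:.1e} yr")
```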
One hint can be drawn from a comparison of the CH$_3$OH/$^{13}$CO ratios with the velocity widths of GMCs, because the velocity widths are dominated by internal turbulent velocities (although systematic velocity gradients such as disk rotation should be appropriately modeled for an accurate analysis). In figure \ref{fig:12}, the CH$_3$OH/$^{13}$CO integrated intensity ratios of GMCs are plotted against the velocity widths of GMCs in the central $\sim$4 kpc region of NGC 1068, along with the C$^{18}$O/$^{13}$CO and CS/$^{13}$CO ratios. The CH$_3$OH/$^{13}$CO ratios in local galaxies including IC 342 (\cite{Meier2005}; \cite{Meier2012}, $\sim$100 pc resolution), M 51 (\cite{Watanabe2016}, $\sim$200 pc), and the LMC (\cite{Nishimura2016}, $\sim$10 pc, upper limits only) are also shown for comparison. We find no clear trend between the CH$_3$OH/$^{13}$CO ratios and the velocity widths of the clouds, implying that the correlation between shocks and methanol abundance is not obvious; presumably, a putative correlation may be weakened because strong shocks, which destroy methanol, do not enhance its abundance. In fact, no clear correlation between star-formation activity and methanol enhancement was found in our previous ALMA study of NGC 1068 (\cite{Takano2014}), implying that strong shocks caused by successive supernovae in active starburst regions are not responsible for the enhanced methanol emission. Another test of the link between methanol abundance and shocks is to compare the velocity widths of GMCs with and without methanol detection; figure \ref{fig:13} shows histograms of the line widths of GMCs with and without detection of methanol. The mean and standard deviation are 16.56 km s$^{-1}$ and 5.61 km s$^{-1}$ for methanol-detected GMCs, and 11.97 km s$^{-1}$ and 4.87 km s$^{-1}$ for methanol-lacking GMCs, respectively, suggesting that methanol-detected GMCs have larger velocity widths.
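A hedged numerical sketch of such a two-sample comparison, using synthetic Gaussian line-width samples that match the quoted means and standard deviations; the sample sizes (121 methanol-detected, 66 without) are assumed from the detection counts and are purely illustrative:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Synthetic line widths (km/s): Gaussians matching the quoted mean/std.
# Sample sizes are an assumption for illustration, not catalog values.
with_ch3oh = rng.normal(16.56, 5.61, size=121)
without_ch3oh = rng.normal(11.97, 4.87, size=66)

stat, p = ks_2samp(with_ch3oh, without_ch3oh)
print(f"KS statistic = {stat:.2f}, p = {p:.1e}")
```

With distributions this well separated, the Kolmogorov--Smirnov $p$ value comes out far below common significance thresholds, mirroring the result quoted for the real catalog.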
In fact, a two-sample Kolmogorov-Smirnov test gives a $p$ value of less than $10^{-5}$, i.e., the hypothesis that these two samples originate from the same distribution is rejected at a $>99.9$\% significance level. The comparison of these two histograms suggests that GMCs with methanol emission tend to have larger velocity line widths than those without. This implies that the methanol emission originates from shocks that are ubiquitous in spiral arms, regardless of the intensity of the star-formation activity there. We therefore suggest that spiral shocks associated with spiral arms are responsible for the presence of methanol emission in the spiral arms of NGC 1068 and other galaxies such as M 51. Next, we consider the role of different temperatures as a driver of methanol abundance variation among the GMCs. It is known that the yield of methanol peaks around a dust temperature of 15 K and decreases at higher dust temperatures ($\geq$ 20 K; \cite{Watanabe2003}). This means that only molecular (and dust) clouds staying at a low temperature ($\leq$ 20--30 K) for a chemical timescale can enhance the abundance of CH$_3$OH. Therefore, we can hypothesize that the temperature of gas and dust around the bar-end is significantly higher ($\geq$ 30 K), and that this is the cause of the low CH$_3$OH/$^{13}$CO ratios there. It is natural to expect that massive star formation heats the ISM; in fact, the measured star-formation rates in the bar-end region are $\sim$2 times higher than those in the spiral arms (\cite{Tsai2012}). With these pieces of evidence and implications, we can draw the following scenario. Methanol is released into the gas phase by mild shocks when molecular gas is accumulated and compressed along the spiral arms and the bar-end. At the bar-end, however, more active massive star formation occurs, and therefore the gas/dust temperature becomes higher than in the spiral arms.
The production of methanol is then suppressed at the bar-end. In this way, we observe low CH$_3$OH/$^{13}$CO ratios at the bar-end, whereas elevated CH$_3$OH/$^{13}$CO ratios are found along the spiral arms. In order to verify this scenario quantitatively, we need to determine the spatial distribution of gas and dust temperatures along the starbursting spiral arms, which is not yet available. Future ALMA observations will play a role in conducting such studies. \section{Summary} We present a statistical study of the physical and chemical properties of giant molecular clouds (GMCs) in the central $1'$ (4.2 kpc) region of NGC 1068, based on $1''.4$ (98 pc) resolution ALMA observations of $^{13}$CO($J$=1-0), C$^{18}$O($J$=1-0), CS($J$=2-1), and CH$_3$OH($J_K$=$2_K$-$1_K$). The observed region contains both an active galactic nucleus and extended active star-forming regions, which do not exist in Galactic disks; this allows us to address the question of whether these extreme activities have a significant impact on the physical and chemical properties of the ISM at a GMC scale. High-sensitivity ALMA observations allowed us to produce 3-dimensional data cubes of these molecular lines with a high spectral resolution (1.5 km s$^{-1}$), which is essential for a study of GMC-scale molecular gas in the disk regions of galaxies. The major outcomes of this study are summarized as follows. \begin{enumerate} \item A high dynamic range (72) $^{13}$CO(1-0) spectral data cube, produced with a wide range of baseline lengths (5.5 k$\lambda$ -- 290 k$\lambda$) and multi-scale CLEAN, reveals detailed sub-structures of the spiral arms and faint clouds in the inter-arm regions. \item We have identified 187 high-significance GMCs from the $^{13}$CO(1-0) cube by employing the \verb|CLUMPFIND| algorithm with a high threshold (8 $\sigma$) and increment (4 $\sigma$) in order to ensure reliable decomposition of the $^{13}$CO(1-0) emission into individual clouds.
This is one of the largest samples of GMCs in galaxies constructed based on $^{13}$CO(1-0) emission. \item We find that the cataloged GMCs tend to follow the known size--line width relation of the Galactic (i.e., non-starbursting) GMCs. The molecular gas masses of the GMCs, $M_{\rm 13CO}$, derived from the $^{13}$CO data, range from $1.8 \times 10^4 ~ M_\odot$ to $4.2 \times 10^7 ~M_\odot$, and their ratios to the virial masses $M_{\rm vir}$, i.e., the virial parameters $M_{\rm 13CO}/M_{\rm vir}$, show that the supercritical GMCs (i.e., $M_{\rm 13CO}/M_{\rm vir}>1$) are preferentially found on the starburst ring, while the subcritical GMCs are mainly located both inside and outside of the ring. \item A mass function of GMCs, $N(M'>M) = N_0 \left[\left(\frac{M}{M_0}\right)^{\gamma+1}-1\right]$, in the central $\sim$4 kpc region of NGC 1068 has been obtained for the first time at $\sim$ 100 pc resolution. We find a slope of the mass function $\gamma = -1.25 \pm 0.07$ for a mass range of $M \geq 10^5M_\odot$. This is shallower than for the Galactic GMCs ($\gamma = -1.6$ to $-1.9$) and the GMCs in the disk regions of M51 ($\gamma = -1.8$ to $-2.5$) and NGC 300 ($\gamma = -1.8$), presumably caused by the presence of more massive clouds at the high-mass end of the mass function in NGC 1068. The large maximum mass, $M_0 = (5.92 \pm 0.63) \times 10^7$ $M_\odot$, also supports this view. \item The observed C$^{18}$O($J$=1--0)/$^{13}$CO($J$=1--0) intensity ratios are found to be fairly uniform \textcolor{black}{(0.27 $\pm$ 0.05)} among the identified GMCs. In contrast, the CH$_3$OH/$^{13}$CO ratios exhibit a striking variation across the disk, with the smallest values around the bar-end ($<$ 0.03) and larger values along the spiral arms ($\sim$ 0.1--0.2). \item Conceivable causes which may govern the variation of the CH$_3$OH/$^{13}$CO ratios across the disk of NGC 1068 have been investigated.
We find that GMCs with methanol emission tend to have systematically larger velocity widths than those without methanol emission. We suggest that relatively weak shocks that are ubiquitous in disk regions, i.e., spiral shocks, are responsible for the enhancement of the CH$_3$OH/$^{13}$CO ratios of GMCs in the disk of NGC 1068. \end{enumerate} Finally, we stress that our study demonstrates the power of ALMA to relate GMC-scale physical and chemical properties to the larger-scale environments of galaxies. With its unprecedentedly high angular resolution and sensitivity, the chemical properties revealed by ALMA will be an indispensable tool for studying the physical processes of galaxy evolution near and far. \bigskip \begin{ack} \textcolor{black}{We thank the anonymous referee for helpful comments for improving the paper.} {Data analysis was in part carried out on the common-use data analysis computer system at the Astronomy Data Center (ADC) of the National Astronomical Observatory of Japan (NAOJ).} This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2012.1.00060.S. ALMA is a partnership of ESO (representing its member states), NSF (USA), and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ. \textcolor{black}{T.T. was supported by the ALMA Japan Research Grant of NAOJ Chile Observatory, NAOJ-ALMA-0156. K.K. acknowledges support from JSPS KAKENHI grant number 25247019.} \end{ack}
\section{introduction} In these notes, we consider a log-gas, also called a $\beta$-ensemble or one-component plasma, in dimension $1$ or $2$ at inverse temperature $\beta=2$. Let $\mathfrak{X} = \mathbb{R}$ or $\mathbb{C}$ equipped with the Lebesgue measure $d\mu = dx$ or the area measure $d\mu= d\mathrm{A} = \frac{r dr d\theta}{\pi}$ respectively, and let $V: \mathfrak{X} \to\mathbb{R}$ be a continuous function such that there exists $\nu >0$ so that \begin{equation} \label{potential} V(z) \ge (1+\nu) \log|z| \hspace{.5cm}\text{as}\hspace{.4cm} |z| \to\infty . \end{equation} We consider the probability measure on $\mathfrak{X}^N$ with density $G_N(x)=e^{-\beta \mathscr{H}^N_V(x)}/Z^N_V$ where the Hamiltonian is \begin{equation} \label{log_gas} \mathscr{H}^N_V(x)= \hspace{-.15cm} \sum_{1\le i<j \le N} \hspace{-.15cm}\log| x_i -x_j|^{-1} + N \sum_{j=1}^N V(x_j) . \end{equation} The condition $\beta=2$ implies that if the configuration $(\lambda_1,\dots, \lambda_N)$ is distributed according to $G_N$, then the point process $\Xi := \sum_{k=1}^N \delta_{\lambda_k}$ is determinantal with correlation kernel \begin{equation} \label{kernel} K^N_V(z,w) = \sum_{k=0}^{N-1} \varphi_k(z) \varphi_k(w) \end{equation} with respect to $\mu$. Moreover, for all $k\ge 0$, \begin{equation} \label{OP} \varphi_k(x) = P_k(x) e^{-N V(x)} , \end{equation} where $\{P_k \}_{k=0}^\infty$ is the sequence of orthonormal polynomials with respect to the weight $e^{- 2N V(x)}$ in $L^2(\mu)$. It turns out that when $\beta=2$, the Hamiltonian \eqref{log_gas} also describes the eigenvalues of the ensembles of Hermitian (or complex normal) matrices with weight $e^{-2NV(M)}$ on $\mathfrak{X}=\mathbb{R}$ (or $\mathfrak{X}=\mathbb{C}$). In particular, when $V(z)=|z|^2$, these correspond to the well-known Gaussian Unitary (GUE) and Ginibre ensembles, respectively.
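For the Ginibre case $V(z)=|z|^2$, the orthonormal polynomials with respect to $e^{-2N|z|^2}$ are monomials, and under the conventions above one obtains the closed form $K^N_V(z,z) = 2N e^{-s} \sum_{k<N} s^k/k!$ with $s = 2N|z|^2$ (derived here from the monomial moments; it is a sketch for illustration). The density should be $\approx 2N$ inside the droplet $|z| < 1/\sqrt{2}$ and $\approx 0$ outside, which is easy to check numerically:

```python
import math

def ginibre_density(z_abs, n):
    """One-point function K_N(z, z) for V(z) = |z|^2:
       K_N = 2N e^{-s} sum_{k<N} s^k / k!,  with s = 2N |z|^2."""
    s = 2.0 * n * z_abs**2
    term, total = math.exp(-s), 0.0
    for k in range(n):           # incomplete exponential sum
        total += term
        term *= s / (k + 1)
    return 2.0 * n * total

N = 50
inside = ginibre_density(0.3, N)    # well inside |z| < 1/sqrt(2)
outside = ginibre_density(0.9, N)   # outside the droplet
print(inside, outside)              # ~ 2N = 100 vs ~ 0
```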
\\ In the following, we consider the so-called {\it incomplete} or {\it thinned point process} which arises after performing a Bernoulli percolation on the configuration of the process $\Xi$. This process, denoted by $\hat\Xi$, is obtained by deleting each particle independently with probability $q_N$ or keeping it with probability $p_N=1-q_N$, and it is determinantal with correlation kernel $p_NK^N_V(z,w)$; see appendix~\ref{A:kernel} for a background overview. This procedure was introduced in the context of random matrix theory by Bohigas and Pato, \cite{BP04, BP06}, in order to investigate a crossover to Poisson statistics. These types of transitions are supposed to arise in various contexts in statistical physics, such as the Anderson metal-insulator transition, the crossover from chaotic to regular dynamics in the theory of quantum billiards, or in the spectrum of certain band random matrices; see \cite{Spencer11, EK14a, EK14b} and references therein. Although such transitions are believed to be non-universal, the model of Bohigas and Pato is arguably one of the most tractable for studying this phenomenon because it is determinantal. Note that, based on the theory of \cite{HKPV06}, an alternative correlation kernel for this process is given by \begin{equation} \label{kernel_2} \widehat{K}^N_V(z,w) = \sum_{k=0}^\infty J_k \varphi_k(z) \varphi_k(w) \end{equation} where $(J_k)_{k=0}^\infty$ is a sequence of i.i.d. Bernoulli random variables with expected values $\mathbb{E}[J_k]= p_N \mathbf{1}_{k<N} $. This shows that removing particles builds up randomness in the system and, when the disorder becomes sufficiently strong, it behaves like a Poisson process rather than according to random matrix theory.
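Operationally, the thinning is just an independent Bernoulli deletion of each configuration point; a minimal sketch (the point configuration here is synthetic, standing in for eigenvalues):

```python
import numpy as np

rng = np.random.default_rng(7)

def thin(points, p_keep, rng):
    """Keep each point independently with probability p_keep
    (i.e., delete it with probability q = 1 - p_keep)."""
    keep = rng.random(len(points)) < p_keep
    return points[keep]

# Synthetic configuration of N = 1000 points (stand-in for eigenvalues).
pts = rng.standard_normal(1000)
thinned = thin(pts, p_keep=0.3, rng=rng)
# The mean number of survivors is p_N * N, and the correlation kernel
# of the thinned process is p_N * K (see the text).
print(len(thinned))
```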
This heuristic applies to general determinantal processes as well, and it is straightforward to check that the method presented in section~\ref{sect:cumulant} also applies to the sine or the $\infty$-Ginibre processes, which describe the local limits of the log-gases in dimensions $1$ and $2$ respectively. In fact, this paper is motivated by an analogous result obtained recently by Berggren and Duits for smooth linear statistics of the incomplete sine and CUE processes, \cite{Berggren17}. Based on the fact that these processes come from integrable operators, they fully characterized the transition for a large class of mesoscopic linear statistics using the corresponding Riemann-Hilbert problem described in~\cite{Deift99}. In particular, they obtained an analogous crossover for both models and suggested that this should be universal for thinned point processes coming from random matrix theory. Prior to~\cite{Berggren17}, there had also been results for the gap probabilities of the critical thinned ensembles. In~\cite{BDIK15, BDIK16}, for the sine process, Deift et al.~computed very detailed asymptotics for the crossover from the {\it Wigner surmise} to the exponential distribution, making rigorous a prediction of Dyson, \cite{Dyson95}, and Charlier and Claeys obtained an analogous result for the CUE, \cite{CC17}. The contribution of this paper is to elaborate on universality for smooth linear statistics of $\beta$-ensembles in dimension $1$ or $2$ when $\beta=2$. Although our proof still relies on the determinantal structure of these models, instead of the connection with Riemann-Hilbert problems used in the previous works, we apply the cumulant method, which has a combinatorial flavor and appears to be robust for studying the asymptotic fluctuations of smooth linear statistics.
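In the discrete setting, the cumulant method rests on the fact that the Laplace transform of a linear statistic of a determinantal process is a Fredholm determinant, $\mathbb{E}[e^{\lambda\Xi(f)}] = \det(I + (e^{\lambda f}-1)K)$, whose log-expansion in $\lambda$ generates the cumulants. The toy check below (a finite ground set with a projection kernel, not the paper's continuum setup) verifies the first two cumulant formulas $\kappa_1 = \operatorname{tr}(FK)$ and $\kappa_2 = \operatorname{tr}(F^2K) - \operatorname{tr}(FKFK)$, with $F = \operatorname{diag}(f)$, by numerical differentiation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Rank-4 projection kernel K on a ground set of 8 points, and a test function f.
q, _ = np.linalg.qr(rng.standard_normal((8, 4)))
K = q @ q.T
f = rng.standard_normal(8)
F = np.diag(f)

def log_laplace(lam):
    """log E[exp(lam * sum_{x in X} f(x))] = log det(I + (e^{lam f} - 1) K)."""
    _, logdet = np.linalg.slogdet(np.eye(8) + np.diag(np.expm1(lam * f)) @ K)
    return logdet

h = 1e-4  # central finite differences at lambda = 0
k1_num = (log_laplace(h) - log_laplace(-h)) / (2 * h)
k2_num = (log_laplace(h) - 2 * log_laplace(0.0) + log_laplace(-h)) / h**2

k1 = np.trace(F @ K)                          # first cumulant (mean)
k2 = np.trace(F @ F @ K) - np.trace(F @ K @ F @ K)   # second cumulant (variance)
print(k1_num - k1, k2_num - k2)   # both ~ 0
```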
\\ To keep the proofs as simple as possible, we will restrict ourselves to real-analytic~$V$, although the results should be valid for more general potentials as well (especially in dimension~1, where the asymptotics of the correlation kernels have been studied in great generality). We keep track of the crossover by looking at linear statistics $\widehat{\Xi}(f)$ of smooth test functions. The random matrix regime is characterized by the property that the fluctuations of the random variable $\widehat{\Xi}(f)$ are of order 1 and described by a {\it universal Gaussian noise} as the number of particles $N\to\infty$. On the other hand, in the Poisson regime, the variance of any non-trivial statistic diverges and, once properly renormalized, the point process converges to a {\it white noise}. In the remainder of this introduction, we formulate our assumptions and limit theorems for the 2-dimensional log-gases. We refer to sections~\ref{sect:1} and~\ref{sect:1'} for the analogous results in dimension~1. The proofs consist in applying the cumulant method introduced by Soshnikov, \cite{Soshnikov00}, to describe the asymptotic laws of linear statistics of the incomplete process $\widehat\Xi$. The details of our strategy are explained in section~\ref{sect:cumulant} and, in particular, they do not depend on the dimension. \\ In what follows, we let $\mathcal{C}^k_0(\S)$ be the set of functions with $k$ continuous derivatives and compact support in $\S \subset \mathfrak{X}$ and we use the notation: $$ \partial = (\partial_x - i \partial_y)/2 \ , \hspace{.5cm} \bar{\partial} = (\partial_x + i \partial_y)/2 \hspace{.5cm}\text{and}\hspace{.4cm} \Delta = \partial\bar{\partial} .
$$ If the potential $V$ is real-analytic and satisfies the condition \eqref{potential}, then the log-gas is asymptotically distributed on a compact set $\mathscr{D}_V \subset \mathbb{C}$ which is called the {\it droplet} and its equilibrium density is given by $\varrho_V : = 2\Delta V \mathbf{1}_{\mathscr{D}_V}$ with respect to the area measure $d\mathrm{A}$. Note that this density may vanish inside the droplet and the {\it bulk} is defined by \begin{equation}\label{bulk} \S_V:= \mathscr{D}_V^\circ \cap \big\{ z\in\mathbb{C} : \Delta V(z) >0 \big\} . \end{equation} It is well known that the fluctuations of the eigenvalues of normal random matrices around the equilibrium configuration are described by a centered Gaussian process $\mathrm{X}$ with correlation structure: \begin{equation} \label{H1_noise} \mathbb{E}\big[\mathrm{X}(f) \mathrm{X}(g) \big] = \frac{1}{4} \int_\mathbb{C} \nabla f(z) \cdot \nabla g(z)\ d\mathrm{A}(z) = \int_\mathbb{C} \partial f(z) \bar{\partial} g(z) d\mathrm{A}(z) , \end{equation} for any (real-valued) smooth functions $f$ and $g$. Modulo constant functions, the RHS~of formula \eqref{H1_noise} defines a Hilbert space, denoted $ H^{1}(\mathbb{C})$, with norm: \begin{equation} \label{H1_norm} \|f \|_{H^1(\mathbb{C})}^2 = \int_\mathbb{C} \left| \partial f(z) \right|^2 d\mathrm{A}(z) . \end{equation} It was proved in \cite{AHM15} that for any smooth function $f$ with compact support, \begin{equation} \label{clt_1} \Xi(f) - \mathbb{E}\big[\Xi(f) \big] \Rightarrow \mathrm{X}(f^\dagger) \end{equation} as $N\to\infty$, where $f^\dagger$ is the (unique) continuous and bounded function on $\mathbb{C}$ such that $f^\dagger= f$ on the droplet $\mathscr{D}_V$ and $\Delta f^\dagger =0$ on $\mathbb{C}\backslash\mathscr{D}_V$. This CLT was first established for the Ginibre process by Rider and Vir\'{a}g in \cite{RV07a} for $\mathcal{C}^1$ test functions with at most exponential growth at infinity.
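The two expressions in \eqref{H1_noise} agree because, pointwise, $\partial f\, \bar{\partial} f = |\nabla f|^2/4$ for real $f$. A quick numerical sanity check of the $H^1$-norm \eqref{H1_norm} (the Gaussian bump, the grid and the use of Lebesgue measure for $d\mathrm{A}$ are illustrative choices of this sketch, not part of the paper):

```python
import numpy as np

# Compare (1/4) int |grad f|^2 dA with int |df|^2 dA, where d = (d_x - i d_y)/2,
# for the bump f(z) = exp(-|z|^2).  For this f one has df = -zbar * f, so the
# Wirtinger side can be evaluated without finite differences; both sides equal
# pi/4 (with dA = dx dy) by a direct Gaussian computation.
h = 0.01
t = np.arange(-4, 4, h)
X, Y = np.meshgrid(t, t, indexing="ij")
f = np.exp(-(X**2 + Y**2))

fx, fy = np.gradient(f, h, h)                 # numerical partial derivatives
lhs = 0.25 * np.sum(fx**2 + fy**2) * h**2     # (1/4) int |grad f|^2
rhs = np.sum((X**2 + Y**2) * f**2) * h**2     # int |df|^2, using df = -zbar f
print(lhs, rhs)                               # both approximate pi/4
```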
Moreover, if $\operatorname{supp}(f) \subset \S_V $, then $f^\dagger = f$ on $\mathbb{C}$ and the CLT was obtained previously in the paper \cite{AHM11}, which inspired part of the method below.\\ We now come to our results. In order to describe the crossover, let $\Lambda_\eta$ be a mean-zero Poisson process with intensity $\eta \in L^\infty(\mathfrak{X})$. This process is characterized by the fact that for any function $f\in \mathcal{C}_0(\mathfrak{X})$, we have \begin{equation}\label{Poisson} \log \mathbb{E}\big[ \exp \Lambda_{\eta}(f) \big] = \int_{\mathfrak{X}} \big( e^{f(z)} - 1 - f(z) \big) \eta(z) d\mu(z) . \end{equation} \begin{theorem} \label{thm:crossover_2} Let $\mathrm{X}$ be a $H^1$-Gaussian noise and $\Lambda_{\tau\varrho_V}$ be an independent Poisson process with intensity $\tau\varrho_V$, where $\tau>0$, defined on the same probability space. Let $f \in \mathcal{C}^3_0(\S_V)$, $p_N = 1-q_N$, and let $T_N = N q_N$. As $N\to\infty$ and $q_N \to 0$, we have \begin{align} \label{Normal_2} \widehat{\Xi}(f) - \mathbb{E}\big[\widehat{\Xi}(f) \big] &\Rightarrow \mathrm{X}(f) &\text{if }\ T_N \to 0 , \hspace{.2cm}\\ \label{Poisson_2} \frac{\widehat{\Xi}(f) - \mathbb{E}\big[\widehat{\Xi}(f) \big]}{\sqrt{T_N} } & \Rightarrow \mathcal{N}\left(0, \int f(z)^2 \varrho_V(z) d\mathrm{A}(z) \right) &\text{if }\ T_N \to \infty ,\\ \label{crossover_2} \widehat{\Xi}(f) - \mathbb{E}\big[\widehat{\Xi}(f) \big] & \Rightarrow \mathrm{X}(f) + \Lambda_{\tau\varrho_V}(-f) &\text{if }\ T_N \to \tau . \hspace{.25cm} \end{align} \end{theorem} The proof of theorem~\ref{thm:crossover_2} relies on the approximation of the correlation kernel $K^N_V$ used in \cite{AHM11}, see lemma~\ref{thm:Bergman_kernel} below, and it restricts us to work with test functions which are supported inside the bulk. However, the result should be true for general functions if we replace $f$ by $f^\dagger$ on the RHS~of \eqref{Normal_2} and \eqref{crossover_2}.
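The characterization \eqref{Poisson} determines all the cumulants of $\Lambda_\eta(f)$: the first one vanishes (the process is centered) and the $n$-th one equals $\int f^n \eta\, d\mu$ for $n\ge 2$. This can be made explicit with a one-atom surrogate, a deliberately minimal sketch and not the setting of the paper, in which $\Lambda_\eta(f)$ reduces to $a(P-\mu)$ for a Poisson random variable $P$ with mean $\mu$:

```python
import sympy as sp

# One-atom surrogate of (Poisson): f takes the single value a and the intensity
# has total mass mu, so the RHS of (Poisson) becomes mu*(exp(a*lam) - 1 - a*lam).
lam, a, mu = sp.symbols("lambda a mu", positive=True)
log_mgf = mu * (sp.exp(a * lam) - 1 - a * lam)

ser = sp.series(log_mgf, lam, 0, 7).removeO()
# n-th cumulant = n! * coefficient of lam^n in the log-moment generating function
cumulants = [sp.factorial(n) * ser.coeff(lam, n) for n in range(1, 7)]
print(cumulants)   # first cumulant 0, then mu*a**n for n = 2, 3, ...
```

Since dividing by $\sqrt{\tau}$ multiplies the $n$-th cumulant by $\tau^{-n/2}$, for an intensity proportional to $\tau$ only the second cumulant survives as $\tau\to\infty$, which is consistent with the Gaussian interpolation between \eqref{Normal_2} and \eqref{Poisson_2} discussed below.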
In fact, for the Ginibre ensemble, $V(z)=|z|^2$, or other nice rotationally invariant potentials, one can also apply the combinatorial method of \cite{RV07a} to prove the counterpart of theorem~\ref{thm:crossover_2} for any test function $f(z)$ which is a bi-variate polynomial in $z$ and $\bar{z}$. \\ In the regime $Nq_N \to 0$, virtually no particles are deleted and linear statistics behave according to random matrix theory. On the other hand, in the regime $Nq_N \to \infty$, the variance of any linear statistic diverges and the random variable $\hat{\Xi}(f)$ has to be normalized to get a classical CLT. Moreover, formula \eqref{Poisson_2} shows that the normalized process converges to a white noise supported on the droplet $\mathscr{D}_V$ whose intensity is the equilibrium measure~$\varrho_V$. In the critical regime, when the expected number of deleted particles converges to $\tau>0$, \eqref{crossover_2} shows that the limiting process is the superposition of a $H^1$-correlated Gaussian noise and an independent mean-zero Poisson process applied to $-f$. Finally, by using formula \eqref{Poisson}, it is not difficult to check that, as $\tau\to\infty$, $$ \Lambda_{\tau\varrho_V}(-f) / \sqrt{\tau} \Rightarrow \mathcal{N}\left(0, \int f(z)^2 \varrho_V(z) d\mathrm{A}(z) \right) , $$ so that the critical law interpolates between the regimes \eqref{Normal_2} and~\eqref{Poisson_2}. \\ The counterpart of theorem~\ref{thm:crossover_2} also holds in the mesoscopic regime. The density of the log-gas is of order $N$ and one can also investigate statistics by zooming in the bulk of the process. Namely, if $L_N \nearrow \infty$, $x_0 \in \S_V$, and $f\in \mathcal{C}_0(\mathbb{C})$, we consider the test function \begin{equation} \label{meso} f_N(z) = f\big(L_N(z-x_0) \big) .
\end{equation} The regime $L_N=N^{1/\d}$ is called microscopic and it was shown in~\cite[Proposition~7.5.1]{AHM11} that $$\Xi (f_N) \Rightarrow \Xi^\infty_{\varrho_V(x_0)}(f) , $$ where the process $ \Xi^\infty_{\rho}$ is called the {\bf $\infty$-Ginibre process} with density $\rho>0$. It is a determinantal process on $\mathbb{C}$ with correlation kernel \begin{equation}\label{Ginibre} K^\infty_\rho(z,w) = \rho e^{\rho(2z\bar{w} - |z|^2 -|w|^2 )/2} . \end{equation} Based on the same argument, it is not difficult to verify that the incomplete process has a local limit as well: \begin{equation}\label{local_limit_1} \widehat{\Xi} (f_N) \Rightarrow \widehat{\Xi}^{\infty, p}_{\varrho_V(x_0)}(f) \hspace{.6cm}\text{as}\hspace{.4cm}p_N\to p \hspace{.4cm}\text{and}\hspace{.4cm} N\to\infty. \end{equation} For any $0<p\le 1$, the process $ \hat{\Xi}^{\infty,p}_{\varrho}$ is determinantal with correlation kernel $p K^\infty_\varrho(z,w) $. This process is constructed by running an independent Bernoulli percolation with parameter $p$ on the point configuration of the $\infty$-Ginibre process with density $\rho>0$. When $p=1$, it coincides with the $\infty$-Ginibre process and \eqref{local_limit_1} shows that one needs to delete a non-vanishing fraction of the $N$ particles of the gas in order to get a local limit which is different from random matrix theory. It was proved in \cite{RV07b} that, as the density $\rho\to\infty$, the fluctuations of the $\infty$-Ginibre process are of order 1 and described by the $H^1$-Gaussian noise: $$ \Xi^\infty_\rho(f) - \rho \int_{\mathbb{C}} f(z) d\mathrm{A}(z)\ \Rightarrow \mathrm{X}(f) $$ for any $f \in H^1\cap L^1(\mathbb{C})$. Therefore it is expected that, in the mesoscopic regime, i.e.~$1 \prec L_N \prec \sqrt{N}$, the asymptotic fluctuations of the linear statistic $\Xi(f_N)$ are universal and described by $\mathrm{X}(f)$.
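Two elementary properties of the kernel \eqref{Ginibre} are used repeatedly below: its modulus only depends on the distance between the points, $|K^\infty_\rho(z,w)| = \rho e^{-\rho|z-w|^2/2}$, and it is a reproducing kernel. Both can be checked numerically; in the sketch below the area measure is normalized as $d\mathrm{A} = dxdy/\pi$, which is an assumption of this illustration (it is the normalization for which the Gaussian integral reproduces):

```python
import numpy as np

def K(z, w, rho):
    # the infinity-Ginibre kernel: rho * exp(rho(2 z wbar - |z|^2 - |w|^2)/2)
    return rho * np.exp(rho * (2 * z * np.conj(w) - abs(z)**2 - abs(w)**2) / 2)

z, w, rho = 0.3 + 0.2j, -0.1 + 0.5j, 2.0

# Gaussian off-diagonal decay: |K(z,w)| = rho * exp(-rho |z - w|^2 / 2).
print(abs(K(z, w, rho)), rho * np.exp(-rho * abs(z - w)**2 / 2))

# Reproducing property: int K(z,x) K(x,w) dA(x) = K(z,w), with dA = dxdy/pi
# (the normalization assumed in this sketch); Riemann sum over a large square.
h = 0.02
t = np.arange(-5, 5, h)
X, Y = np.meshgrid(t, t)
x = X + 1j * Y
integral = np.sum(K(z, x, rho) * K(x, w, rho)) * h**2 / np.pi
print(integral, K(z, w, rho))   # agree up to quadrature error
```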
However, to the best of our knowledge, a proof was missing from the literature and, in section~\ref{sect:2}, we show that this fact can be derived by combining the ideas of \cite{RV07b} and \cite{AHM11}. \begin{theorem} \label{meso:2} Let $x_0 \in \S_V$, $f \in \mathcal{C}_0^3(\mathbb{C})$, $\alpha \in (0,1/2)$, and let $f_N$ be given by formula \eqref{meso} with $L_N =N^\alpha$. Then, as $N \to\infty$, $$ \Xi(f_N) - \mathbb{E}\big[ \Xi(f_N)\big] \Rightarrow \mathrm{X}(f) . $$ \end{theorem} Using the same method, we can also analyze the fluctuations of smooth mesoscopic linear statistics of the incomplete process. \begin{theorem} \label{thm:crossover_3} Let $\mathrm{X}$ be a $H^1$-Gaussian noise and $\Lambda_{\tau}$ be an independent Poisson process with intensity $\tau>0$ on $\mathbb{C}$. Let $x_0 \in \S_V$, $f \in \mathcal{C}^3_0(\S_V)$, $\alpha \in (0,1/2)$, and let $f_N$ be the mesoscopic test function given by formula \eqref{meso} with $L_N =N^\alpha$. We also let $p_N = 1-q_N$ and $T_N= N q_N L_N^{-2} \varrho_V(x_0)$. We have \begin{align*} \widehat{\Xi}(f_N) - \mathbb{E}\big[\widehat{\Xi}(f_N) \big] &\Rightarrow \mathrm{X}(f) &\text{if }\ T_N \to 0 , \hspace{.2cm}\\ \frac{\widehat{\Xi}(f_N) - \mathbb{E}\big[\widehat{\Xi}(f_N) \big]}{\sqrt{T_N} } & \Rightarrow \mathcal{N}\left(0, \int f(z)^2 d\mathrm{A}(z) \right) &\text{if }\ T_N \to \infty ,\\ \widehat{\Xi}(f_N) - \mathbb{E}\big[\widehat{\Xi}(f_N) \big] & \Rightarrow \mathrm{X}(f) + \Lambda_{\tau}(-f) &\text{if }\ T_N \to \tau ,\hspace{.25cm} \end{align*} as $N\to\infty$ and $q_N\to0$. \end{theorem} This result shows that, at mesoscopic scales, the transition occurs when the {\it mesoscopic density of deleted particles} given by the parameter $T_N$ converges to a positive constant $\tau$. It is interesting that we obtain a crossover where the fluctuations are non-Gaussian and can be described in a simple way.
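The mechanism behind the transition is already visible at the level of the variance. For a determinantal process with kernel $K$, one has $\operatorname{Var} \Xi(f) = \int f^2 K(x,x)\, d\mu - \iint f(x)f(y)K(x,y)K(y,x)\, d\mu\, d\mu$, and replacing $K$ by $pK$ gives, exactly, $\operatorname{Var} \widehat{\Xi}(f) = p^2 \operatorname{Var} \Xi(f) + pq \int f^2 K(x,x)\, d\mu$: a random-matrix term plus a Bernoulli term which diverges in the Poisson regime and stays bounded in the random-matrix regime, and whose size is precisely what the parameter $T_N$ measures. A discrete sketch of this identity (the finite state space and the random projection kernel are assumptions of the illustration, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 12, 5
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
K = Q[:, :r] @ Q[:, :r].T          # a rank-r projection kernel: K = K^T = K^2
f = rng.standard_normal(d)

def var(kern, f):
    # Var Xi(f) = sum_x f(x)^2 kern(x,x) - sum_{x,y} f(x) f(y) kern(x,y) kern(y,x)
    return f**2 @ np.diag(kern) - f @ (kern * kern.T) @ f

p = 0.8
q = 1 - p
lhs = var(p * K, f)                                 # variance after thinning
rhs = p**2 * var(K, f) + p * q * (f**2 @ np.diag(K))  # RMT part + Bernoulli part
print(lhs, rhs)                                     # the two expressions coincide
```

The decomposition also follows probabilistically by conditioning on the unthinned configuration: the conditional variance of the Bernoulli deletions contributes the $pq$-term.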
This is to be compared to other analogous transitions for determinantal processes on~$\mathbb{R}$. For instance, one can obtain a crossover from Poisson to GUE eigenvalues by letting independent points evolve according to Dyson's Brownian motion. This leads to a determinantal process sometimes called the {\it deformed GUE} whose kernel depends on the diffusion time $\tau$, \cite{Johansson01}. The transition for mesoscopic linear statistics has been analyzed in \cite{DJ16} and it was proved that the critical fluctuations are Gaussian but the variance depends non-trivially on the test function. One can also consider non-intersecting Brownian motions on a cylinder. It turns out that this point process describes the positions of free fermions confined in a harmonic trap at a temperature $\tau>0$. It was established in \cite{Johansson07}, see also \cite{DDMS14}, that the corresponding grand canonical ensemble is determinantal with a correlation kernel of the form \eqref{kernel_2} where, for all $k\ge 0$, $$\mathbb{E}[J_k] = \frac{1}{1+ \exp(\frac{k-N}{\tau})} . $$ For sufficiently small temperature, this system behaves like its ground state, the GUE, while it behaves like a Poisson process at larger temperature. It was proved in \cite{JL15} that this leads to yet another crossover where non-Gaussian fluctuations are observed at the critical temperature. However, to the author's knowledge, in contrast to the incomplete ensembles considered in this note, the critical processes discovered in \cite{JL15} cannot be described in simple terms like~\eqref{crossover_2}. \\ The rest of the paper is organized as follows. In section~\ref{sect:cumulant}, we review Soshnikov's cumulant method and how to apply it to the incomplete ensemble $\hat{\Xi}$. In section~\ref{sect:2}, we prove theorems~\ref{thm:crossover_2}--\ref{thm:crossover_3} for the 2-dimensional log-gases.
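The behavior of these Fermi-factor weights is easy to visualize: as $\tau\to0$ they converge to the indicator $\mathbf{1}_{k<N}$ (the ground-state projection, i.e.~the GUE), while the expected number of particles $\sum_k \mathbb{E}[J_k]$ remains close to $N$ at every temperature, since the weights at $k=N-j$ and $k=N+j$ sum to $1$. A small numerical illustration (the stable evaluation of the logistic function is only an implementation detail of this sketch):

```python
import numpy as np

def fermi_weights(N, tau, kmax):
    # E[J_k] = 1/(1 + exp((k - N)/tau)), evaluated stably for large arguments
    x = (N - np.arange(kmax)) / tau
    out = np.empty_like(x)
    pos = x >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-x[pos]))
    out[~pos] = np.exp(x[~pos]) / (1.0 + np.exp(x[~pos]))
    return out

N = 50
w_cold = fermi_weights(N, tau=0.05, kmax=4 * N)   # near the ground state
w_warm = fermi_weights(N, tau=5.0, kmax=4 * N)    # smeared-out Fermi step
print(w_cold[N - 5], w_cold[N + 5])   # sharp step at k = N: ~1 and ~0
# expected particle number stays close to N at both temperatures:
print(w_cold.sum(), w_warm.sum())
```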
The proof relies on estimates for the correlation kernel \eqref{kernel} which come from the paper~\cite{AHM11} and are collected in the appendix~\ref{A:damping}. The counterparts of theorem~\ref{thm:crossover_2} and theorem~\ref{thm:crossover_3} in dimension $1$ are stated and proved in sections~\ref{sect:1} and~\ref{sect:1'} respectively. In the global regime, we give a combinatorial proof which is based on the asymptotics of the recurrence coefficients of the orthogonal polynomials \eqref{OP}. This argument can be generalized to other biorthogonal ensembles; see remark~\ref{rk:biorthogonal}. In the mesoscopic regime, the analysis is similar to the two-dimensional case and relies on the asymptotics of the correlation kernel $K^N_V$ and the techniques introduced in~\cite{Lambert_a}. Finally, $C>0$ denotes a numerical constant which changes from line to line and, for any $n\in\mathbb{N}$, we let for all $\mathrm{x} \in\mathfrak{X}^n$, $$ d\mu^n(\mathrm{x}) = d\mu(x_1) \cdots d\mu(x_n) . $$ \subsection*{Acknowledgements} I thank Tomas Berggren and Maurice Duits, from whom I learned about the model of Bohigas and Pato, for sharing their inspiring work with me and for the valuable discussions which followed. I also thank Mariusz Hynek for reading this draft and for many interesting discussions. \section{Outline of the proof} \label{sect:cumulant} If $(\mathrm{u}_N)$ and $(\mathrm{v}_N)$ are two sequences, we will use the notation: $$\mathrm{u}_N \simeq \mathrm{v}_N \hspace{.5cm}\text{if} \hspace{.5cm} \lim_{N\to\infty}( \mathrm{u}_N - \mathrm{v}_N) =0 $$ and $$ \mathrm{u}_N \prec \mathrm{v}_N \hspace{.5cm}\text{if} \hspace{.5cm} \lim_{N\to\infty} \mathrm{u}_N / \mathrm{v}_N =0 .
$$ Suppose that $\Xi$ is a determinantal point process on $\mathfrak{X}$ with a correlation kernel $K\in L^2(\mathfrak{X}\times \mathfrak{X}, \mu\times \mu)$ which is reproducing: \begin{equation} \label{projection} \int_\mathfrak{X} K(z, x) K(x, w) d\mu(x) = K(z,w) \end{equation} for all $z,w \in \mathfrak{X}$. Since this is the case for most ensembles, we shall also assume that the kernel $K(z,w)$ is continuous on $\mathfrak{X}\times \mathfrak{X}$. The cumulant method to analyze the asymptotic distribution of linear statistics of determinantal processes goes back to the work of Costin and Lebowitz, \cite{CL95}, for count statistics of the sine process, and the general theory was developed by Soshnikov, \cite{Soshnikov, Soshnikov00, Soshnikov01}, and subsequently applied to many different ensembles coming from random matrix theory, see for instance \cite{RV07a, RV07b, AHM11, BD16, BD17, JL15, Lambert_a, Lambert_b}. In this section, we show how to implement it to describe the asymptotic laws of linear statistics of the incomplete ensemble $\hat{\Xi}$ with correlation kernel $p K(z,w)$ when $0<p<1$. \\ Let $$ \mho= \bigcup_{l=1}^\infty \big\{ \k=(k_1,\dots, k_l) \in \mathbb{N}^l \big\} $$ and let $\ell(\k)= l$ denote the length of $\k$. Then, the set of compositions of the integer $n>0$ is $$ \big\{ \k \vdash n \big\} = \big\{ \k\in\mho : k_1+ \cdots + k_l = n \big\} . $$ We denote by $n\in \mho$ the trivial composition. For any map $\Upsilon : \mho \mapsto \mathbb{R} $, for any function $f: \mathfrak{X} \to \mathbb{R}$, and for any $n\in\mathbb{N}$, we define for all $x\in \mathfrak{X}^n$, \begin{equation}\label{Upsilon_f} \Upsilon^n[f](x) = \sum_{\k \vdash n} \Upsilon(\k)\hspace{-.2cm} \prod_{1 \le j\le \ell(\k)} \hspace{-.2cm}f(x_j)^{k_j} .
\end{equation} If $\k \vdash n$, we let $\displaystyle \mathrm{M}(\k) =\frac{n!}{k_1!\cdots k_l!}$ be the multinomial coefficient and for all integers $n\ge 1$ and $ m \in\{0, \dots , n\}$, we define the coefficients \begin{equation}\label{Gamma} \gamma^n_m = \sum_{ \k \vdash n}\frac{(-1)^{\ell(\k)}}{\ell(\k)} {\ell(\k) \choose m} \mathrm{M}(\k) . \end{equation} We will also use the notation: $\displaystyle\delta_{k}(n) = \begin{cases} 1 &\text{if}\ n=k \\ 0 &\text{else} \end{cases}$ for any $k\in\mathbb{Z}$. \begin{lemma} \label{thm:combinatorics} For all $n\in \mathbb{N}$, we have $\gamma^n_0 = -\delta_{1}(n)$ and $ \gamma^n_1 = (-1)^n$. \end{lemma} \proof The coefficients \eqref{Gamma} have the generating function: $$ \sum_{n=1}^\infty \sum_{m=0}^n \gamma^n_m \frac{x^n q^m}{n!} = - \log\big(1+(1+ q)(e^x-1)\big) . $$ In particular, setting $q=0$, we see that $ \gamma^n_0 = -\delta_{1}(n) $. Moreover, since $$ \sum_{n=1}^\infty \gamma^n_1 \frac{x^n}{n!} = -\left. \frac{d}{dq}\log\big(1+(1+ q)(e^x-1)\big) \right|_{q=0} = e^{-x}-1 , $$ we also see that $ \gamma^n_1 = (-1)^n$ .\qed\\ Given a test function $f: \mathfrak{X} \to \mathbb{R}$, the cumulant generating function of the random variable $\Xi(f)$ is $$ \log \mathbb{E}\big[ \exp\{ \lambda \Xi(f)\} \big] = \sum_{n=1}^\infty \frac{\lambda^n}{n!} \operatorname{C}^n_{K}[f] . $$ It is known that, under general assumptions, the cumulants $\operatorname{C}^n_{K}[f] $ characterize the law of the linear statistic $\Xi(f)$ and it was proved by Soshnikov that for any $n\in\mathbb{N}$, \begin{equation} \label{Soshnikov} \operatorname{C}^n_{K}[f] = - \sum_{l=1}^n \frac{(-1)^l}{l} \sum_{\begin{subarray}{c} \k \vdash n \\ \ell(\k) =l \end{subarray}} \mathrm{M}(\k) \underset{x_{0} =x_l}{\int_{\mathfrak{X}^l}} f(x_1)^{k_1} \cdots f(x_l)^{k_l} \prod_{1\le j\le l} K(x_j, x_{j-1}) d\mu^l(x) , \end{equation} whenever this formula makes sense.
Note that we use the convention that the variables $x_0$ and $x_l$ are identified in the previous integral. If the correlation kernel $K$ is reproducing, then we can rewrite this formula: \begin{align} \notag \operatorname{C}^n_{K}[f] &= - \sum_{\k \vdash n} \frac{(-1)^{\ell(\k)}}{\ell(\k)} \mathrm{M}(\k) \underset{x_{0} =x_n}{\int_{\mathfrak{X}^n}} f(x_1)^{k_1} \cdots f(x_l)^{k_l} \prod_{1\le j\le n} K(x_j, x_{j-1}) d\mu^n(x) \\ &\label{cumulant_0} = -\underset{x_{0} =x_n}{\int_{\mathfrak{X}^n}}\Upsilon^n_0[f](x) \prod_{1\le j\le n} K(x_j, x_{j-1}) d\mu^n(x) , \end{align} where for any $\k\in\mho$, \begin{equation}\label{Upsilon_0} \Upsilon_0(\k) = \frac{(-1)^{\ell(\k)}}{\ell(\k)} \mathrm{M}(\k) . \end{equation} A simple observation which turns out to be very important when it comes to the asymptotic analysis is that, by lemma~\ref{thm:combinatorics}, for all $n\ge 2$, \begin{equation} \label{normalization_0} \sum_{\k \vdash n} \Upsilon_0(\k) =\gamma_0^n =0 . \end{equation} For any $m\in\mathbb{N}$, we define \begin{equation} \label{Upsilon} \begin{cases} \displaystyle \Upsilon_m(\k) = \frac{(-1)^{\ell(\k)}}{\ell(\k)} {\ell(\k) \choose m} \mathrm{M}(\k) & \text{if } \ell(\k) \ge 2 \\ \displaystyle \Upsilon_m(\k) =- \delta_{1}(m)-\gamma^n_m & \text{if } \k=n \end{cases} . \end{equation} By convention $ {l \choose m} = 0 $ if $m>l$ and these functions are constructed so that we have for all $n,m\in\mathbb{N}$, \begin{equation} \label{normalization} \sum_{\k \vdash n} \Upsilon_m(\k) = 0 . \end{equation} Let $0<q<1$ and $p=1-q$. 
According to formula \eqref{Soshnikov}, the cumulants of the process $\hat{\Xi}$ with correlation kernel $\widehat{K} = p K$ are given by \begin{equation*} \operatorname{C}^n_{\widehat{K}}[f] = - \sum_{l=1}^n \frac{(-1)^l}{l} p^l \sum_{\begin{subarray}{c} \k \vdash n \\ \ell(\k) =l \end{subarray}} \mathrm{M}(\k) \underset{x_{0} =x_l}{\int_{\mathfrak{X}^l}} f(x_1)^{k_1} \cdots f(x_l)^{k_l} \prod_{1\le j\le l} K(x_j, x_{j-1}) d\mu^l(x) \end{equation*} and since the kernel $K$ is reproducing, using the binomial formula, we obtain \begin{align} \label{cumulant} \operatorname{C}^n_{\widehat{K}}[f] = \operatorname{C}^n_{K}[f] &- \sum_{m=1}^n (-q)^m \gamma^n_m \int_\mathfrak{X} f(x)^{n} K(x,x) d\mu(x) \\ &\notag- \sum_{m=1}^n (-q)^m \underset{x_{0} =x_n}{\int_{\mathfrak{X}^n}} \Upsilon^n_m[f](x) \prod_{1\le j\le n} K(x_j, x_{j-1}) d\mu^n(x) . \end{align} Now, let $(K^N)_{N\in\mathbb{N}}$ be a sequence of reproducing kernels and suppose that there exists a probability measure $\mu_{*}$ so that for any $n\in\mathbb{N}$, \begin{equation} \label{LLN_1} \int_\mathfrak{X} f(x)^n K^N(x,x) d\mu(x) \simeq N \int_\mathfrak{X} f(x)^n d\mu_{*}(x). \end{equation} We also assume that there exists $\kappa>0$ such that for all $n\ge 2$ and all $m\ge 1$, \begin{equation}\label{assumption_1} \left| \underset{x_{0} =x_n}{\int_{\mathfrak{X}^n}} \Upsilon^n_m[f](x) \prod_{1\le j\le n} K^N(x_j, x_{j-1}) d\mu^n(x) \right| \prec (\log N)^\kappa . \end{equation} In the case of the log-gases, formula \eqref{LLN_1} follows either from the asymptotics of the correlation kernel \eqref{kernel} or from potential theory and $\mu_{*}$ is called the equilibrium measure. Moreover, under the assumption \eqref{potential}, this measure is absolutely continuous and it has compact support. The main technical challenge of this paper is to verify that the estimate \eqref{assumption_1} holds for a large class of test functions. 
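Formula \eqref{cumulant} is an exact algebraic identity as soon as the kernel is reproducing, so it can be stress-tested on a discrete toy model: on a finite set equipped with the counting measure, any orthogonal projection matrix is a reproducing kernel, the cyclic integrals become traces, and the RHS~of \eqref{cumulant} can be compared with the cumulants of the thinned kernel computed directly from \eqref{Soshnikov}. The sketch below (the finite model and the random projection are illustrative assumptions, not the paper's setting) also confirms the combinatorial identities $\gamma^n_0=0$ for $n\ge2$ and $\gamma^n_1=(-1)^n$ used in the sequel:

```python
import itertools
from math import comb, factorial, prod
import numpy as np

def compositions(n):
    # ordered tuples of positive integers summing to n
    for cuts in itertools.chain.from_iterable(
            itertools.combinations(range(1, n), r) for r in range(n)):
        pts = (0,) + cuts + (n,)
        yield tuple(pts[i + 1] - pts[i] for i in range(len(pts) - 1))

def M(k):                                   # multinomial coefficient
    return factorial(sum(k)) // prod(factorial(j) for j in k)

def gamma(n, m):                            # the coefficients (Gamma)
    return sum((-1) ** len(k) / len(k) * comb(len(k), m) * M(k)
               for k in compositions(n))

def upsilon(k, m, n):                       # the maps (Upsilon), for m >= 1
    l = len(k)
    if l >= 2:
        return (-1) ** l / l * comb(l, m) * M(k)
    return -(1 if m == 1 else 0) - gamma(n, m)

def cyc(k, K, f, pad=0):
    # Tr(F^{k_1} K ... F^{k_l} K K^pad), F = diag(f): the cyclic integral
    # with l = len(k) marked variables and l + pad kernel factors in total
    A = np.eye(len(f))
    for kj in k:
        A = A @ np.diag(f ** kj) @ K
    return np.trace(A @ np.linalg.matrix_power(K, pad))

def cumulant(n, K, f, scale=1.0):
    # C^n_{scale*K}[f] computed directly from Soshnikov's formula
    return -sum((-1) ** len(k) / len(k) * M(k) * scale ** len(k)
                * cyc(k, K, f) for k in compositions(n))

rng = np.random.default_rng(1)
d, r = 9, 4
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
K = Q[:, :r] @ Q[:, :r].T                   # reproducing kernel: K^2 = K
f = rng.standard_normal(d)
q = 0.3

results = []
for n in range(2, 5):
    lhs = cumulant(n, K, f, scale=1 - q)    # cumulant of the thinned process
    rhs = (cumulant(n, K, f)
           - sum((-q) ** m * gamma(n, m) for m in range(1, n + 1))
             * (f ** n @ np.diag(K))
           - sum((-q) ** m * upsilon(k, m, n) * cyc(k, K, f, pad=n - len(k))
                 for m in range(1, n + 1) for k in compositions(n)))
    results.append(abs(lhs - rhs))
print(results)   # all differences at machine-precision level
```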
In fact, if $f$ is smooth, we expect these integrals to be uniformly bounded in $N$ for general determinantal processes but a logarithmic error is allowed for our applications. In the remainder of this section, we will show how these assumptions and the CLT for the classical log-gas imply theorem~\ref{thm:crossover_2}. The proofs of theorem~\ref{thm:crossover_3}, as well as theorems~\ref{thm:crossover_1} and~\ref{thm:crossover_4} in the 1-dimensional setting, all follow the same strategy and the details are given in sections~\ref{sect:2},~\ref{sect:1} and~\ref{sect:1'} respectively. \\ For a 2-dimensional log-gas, the equilibrium measure satisfies $d\mu_{*}= \varrho_V d\mathrm{A}$ where $\varrho_V = 2 \Delta V \mathbf{1}_{\mathscr{D}_V}$ and, by lemma~\ref{thm:Bergman_kernel} below, formula \eqref{LLN_1} holds for any test function $f\in \mathcal{C}_0(\S_V)$. Moreover, it was also proved in~\cite[Theorem~4.4]{AHM11} that if $f\in \mathcal{C}_0^3(\S_V)$, then \begin{equation} \label{CLT_0} \lim_{N\to\infty} \operatorname{C}^n_{K^N_V}[f] = \begin{cases} \|f \|_{H^1(\mathbb{C})}^2 &\text{if } n=2 \\ 0 &\text{if } n>2 \end{cases} . \end{equation} Let $0<q_N<1$ be a sequence which converges to 0 as $N\to\infty$, $p_N = 1- q_N$ and let $\widehat{K}^N_V = p_NK^N_V$. By formula \eqref{cumulant}, for any function $f\in \mathcal{C}_0^3(\S_V)$ which satisfies the condition~\eqref{assumption_1}, we obtain \begin{equation} \label{limit_1} \operatorname{C}^n_{\widehat{K}^N_V}[f]=\operatorname{C}^n_{K^N_V}[f] + T_N \int_\mathbb{C} f(z)^{n} \varrho_V(z) d\mathrm{A}(z) + \underset{N\to\infty}{o}(T_N) , \end{equation} where $T_N = Nq_N$. In the regime where $T_N \to 0$, formula \eqref{limit_1} implies that $\operatorname{C}^n_{\widehat{K}^N_V}[f] \simeq \operatorname{C}^n_{K^N_V}[f]$ for all $n\ge 2$. Hence, once centered, both statistics $\Xi(f)$ and $\hat{\Xi}(f)$ converge in distribution to a Gaussian random variable with variance $\|f \|_{H^1(\mathbb{C})}^2$.
\\ On the other hand, in the regime where $T_N \to \infty$, if we let $h_N := f/\sqrt{T_N} $, by rescaling formula \eqref{limit_1} we obtain \begin{equation*} \operatorname{C}^n_{\widehat{K}^N_V}[h_N] = T_N^{1-n/2}\int_\mathbb{C} f(z)^{n} \varrho_V(z) d\mathrm{A}(z)+ \O\big(T_N^{-n/2} \big) . \end{equation*} Just like for a Poisson point process with intensity function $T_N \varrho_V$, this implies that $\hat{\Xi}(h_N)- \mathbb{E}\big[\hat{\Xi}(h_N)\big]$ converges in distribution to a Gaussian random variable with variance $\displaystyle \int_\mathbb{C} f(z)^{2} \varrho_V(z) d\mathrm{A}(z)$.\\ Finally, in the critical case where $T_N \to \tau$, formulae \eqref{limit_1} and \eqref{CLT_0} imply that \begin{equation} \label{limit_2} \lim_{N\to\infty}\operatorname{C}^n_{\widehat{K}^N_V}[f] = \delta_2(n) \|f \|_{H^1(\mathbb{C})}^2 + \tau \gamma^n_1 \int_\mathbb{C} f(z)^{n} \varrho_V(z) d\mathrm{A}(z) . \end{equation} By formula \eqref{Poisson}, the cumulants of the random variable $\Lambda_{\eta}(f)$ are given by $$ \operatorname{C}^n\big( \Lambda_{\eta}(f)\big) = \int_{\mathfrak{X}} f(z)^{n} \eta(z) d\mu(z) \hspace{.5cm}\text{for all}\hspace{.3cm} n\ge 2 , $$ while the first cumulant vanishes since the process is centered, and by lemma~\ref{thm:combinatorics}, the coefficient $\gamma^n_1= (-1)^n$. Hence, formula \eqref{limit_2} implies that the random variable $ \hat{\Xi}(f) -\mathbb{E}\big[ \hat{\Xi}(f)\big] $ converges in distribution to the sum of a Gaussian random variable with variance $\|f \|_{H^1(\mathbb{C})}^2$ and an independent random variable $\Lambda_{\tau \varrho_V}(-f)$. \\ \section{Transition for the log-gases in dimension 2}\label{sect:2} We begin by reviewing the basics of the theory of eigenvalues of random normal matrices developed by Ameur, Hedenmalm and Makarov. In particular, we are interested in the properties of the correlation kernel \eqref{kernel} in the bulk of the gas.
Recall that the equilibrium measure is $\varrho_V = 2\Delta V \mathbf{1}_{\mathscr{D}_V}$, the droplet $\mathscr{D}_V$ is a compact set with a nice boundary, and the set $ \S_V$ is given by \eqref{bulk}. It was pointed out in \cite{AHM11} that, if the potential $V$ is real-analytic, in order to compute the asymptotics of the cumulants of a smooth linear statistic, instead of working with the correlation kernel $K^N_V$, one can use the so-called approximate Bergman kernel, \begin{equation}\label{B_kernel} B^N_V(z,w) = \big( N b_0(z, \bar{w}) + b_1(z, \bar{w}) \big) e^{N \{ 2 \Phi(z, \bar{w}) - V(z) - V(w) \} } . \end{equation} The functions $b_0(z,w)$, $b_1(z,w)$ and $\Phi(z,w)$ are the (unique) bi-holomorphic functions defined in a neighborhood in $\mathbb{C}^2$ of the set $\big\{ (z ,\bar{z}) : z\in \S_V \big\}$ such that $b_0(z,\bar{z}) = 2 \Delta V(z)$, $b_1(z,\bar{z}) = \frac{1}{2} \Delta \log( \Delta V)(z)$, and $\Phi(z,\bar{z}) = V(z)$. \begin{lemma}[Lemma~1.2 in \cite{AHM11}, see also \cite{Berman09, AHM10}] \label{thm:Bergman_kernel} For any $x_0 \in \S_V $, there exist $\epsilon_0>0$ and $C>0$ so that \begin{equation} \label{approximation_1} \left| K^N_V(z, w) - B^N_V(z,w) \right| \le C N^{-1} \end{equation} for all $z, w \in \mathbb{D}(x_0 , \epsilon_0)$. \end{lemma} Moreover, at sufficiently small mesoscopic scales, up to a gauge transform, the asymptotics of the approximate Bergman kernel $B^N_V$ are universal. \begin{lemma} \label{thm:Ginibre_approximation} Let $\kappa>0$ and $\epsilon_N= \kappa N^{-1/2}\log N $ for all $N\in \mathbb{N}$.
For any $x_0 \in \S_V$, there exist $\varepsilon_0>0$ and a function $\mathfrak{h} : \mathbb{D}(0,\varepsilon_0) \to \mathbb{R}$ such that if the parameter $N$ is sufficiently large, the function \begin{equation} \label{B} \widetilde{B}^N_{V, x_0}( u ,v) =\frac{B^N_V(x_0 + u , x_0 + v)e^{i N \mathfrak{h}(u)}}{e^{i N \mathfrak{h}(v)}} \end{equation} satisfies \begin{equation} \label{asymptotics_B} \widetilde{B}^N_{V, x_0}( u , v) = K^\infty_{N \varrho_V(x_0)}(u, v)\left\{ 1 + \underset{N\to\infty}{O}\big( (\log N)^{2}\epsilon_N \big) \right\} \end{equation} uniformly for all $u, v \in \mathbb{D}(0, \epsilon_N)$, where $K^\infty_{N \varrho_V(x_0)}$ is the $\infty$-Ginibre kernel with density $N \varrho_V(x_0)= 2N \Delta V(x_0)$. \end{lemma} A key ingredient in the paper \cite{AHM11}, as well as \cite{RV07b}, is to reduce the domain of integration in formula \eqref{cumulant_0}, using the exponential off-diagonal decay of the correlation kernels $K_V^N$, to a set where we can use the asymptotics \eqref{asymptotics_B} and approximate the function $\Upsilon^n_0[f]$ by a multivariate polynomial of degree 2 using Taylor's theorem. This reduction step is formulated in the following lemma. For completeness, the proofs of lemmas~\ref{thm:Ginibre_approximation} and~\ref{thm:localization} are given in the appendix~\ref{A:damping}. \begin{lemma} \label{thm:localization} Let $n\in\mathbb{N}$ and $\epsilon_N = \kappa N^{-1/2} \log N$ for some constant $\kappa>0$ which is sufficiently large compared to $n$. We let \begin{equation} \label{A} \mathscr{A}(z_0; \epsilon) = \big\{ \mathrm{z}\in\mathbb{C}^n : |z_j - z_{j-1}| \le \epsilon \text{ for all } j=1, \dots, n \big\} .
\end{equation} Let $\S$ be a compact subset of $\S_V$, $N_0\in\mathbb{N}$, and $F_N : \mathbb{C}^{n+1} \to \mathbb{R} $ be a sequence of continuous functions such that \begin{equation}\label{estimate_0} \sup\big\{ |F_N(z_0, \mathrm{z})| : \mathrm{z} \in \mathbb{C}^{n} , N \ge N_0 \big\} \le C \mathbf{1}_{z_0 \in\S} . \end{equation} We have \begin{align} &\label{localization} \underset{z_0 = z_{n+1}}{\int_{\mathbb{C}^{n+1}}}\hspace{-.2cm} F_N(z_0, \mathrm{z}) \prod_{j=0}^n K^N_V(z_{j}, z_{j+1}) d\mathrm{A}(z_0) d\mathrm{A}^{n}(z) \\ &\notag\hspace{1cm} = \int_\S d\mathrm{A}(z_{0}) \underset{z_{n+1} = z_{0}}{\int_{\mathscr{A}(z_{0}; \epsilon_N)}}\hspace{-.2cm} F_N(z_0,\mathrm{z}) \prod_{j=0}^n K^N_V(z_{j}, z_{j+1}) d\mathrm{A}^{n}(\mathrm{z})\ + \O(N^{-1}) . \end{align} \end{lemma} Using the fact that \begin{equation} \label{Ginibre_bound} \big| K^\infty_\rho(z,w)\big| = \rho e^{-\rho |z-w|^2/2} \end{equation} for all $z, w\in\mathbb{C}$ and $\rho>0$, it is easy to obtain a similar estimate for the $\infty$-Ginibre kernel, see lemma~\ref{thm:delocalization}. The next proposition is the technical step to prove that the 2-dimensional log-gases satisfy the assumption \eqref{assumption_1}. \begin{proposition} \label{thm:approximation} Let $\mathcal{K}\subset \mathbb{C}$ be a compact set, $f\in \mathcal{C}^3_0(\mathcal{K})$ and let $\Upsilon: \mho \to\mathbb{R}$ be any map such that $\sum_{\k \vdash n} \Upsilon(\k) = 0$ for all $n\ge2$. Let $L_N$ be an increasing sequence such that $1\prec L_N \prec \sqrt{N}/(\log N)^{3}$, $x_0 \in \S_V$ and $f_N=f(L_N(z- x_0))$. We also denote by $H^n(\lambda; \mathrm{w} )$ the second order Taylor polynomial at $0$ of the function $\mathrm{w}\in\mathbb{C}^n \mapsto \Upsilon^{n+1}[f](\lambda, \lambda+ \mathrm{w})$. 
Then, we have for any $n\ge 1$, \begin{align} \label{approximation} & \underset{z_0 = z_{n+1}}{\int_{\mathbb{C}^{n+1}}}\hspace{-.2cm} \Upsilon^{n+1}[f_N](z_0, \mathrm{z}) \prod_{j=0}^n K^N_V(z_{j}, z_{j+1}) d\mathrm{A}(z_0) d\mathrm{A}^{n}(\mathrm{z}) \\ &\notag = \int_{\mathcal{K}} d\mathrm{A}(\lambda) \hspace{-.3cm}\underset{w_0=w_{n+1}=0}{\int_{\mathbb{C}^n}}\hspace{-.4cm} H^n(\lambda; \mathrm{w}) \prod_{j=0}^{n} K^\infty_{\eta_N(\lambda)}(w_{j}, w_{j+1}) \ d\mathrm{A}^n(\mathrm{w}) +\O\left( \frac{(\log N)^4}{L_N} \right) , \end{align} where the density is given by $\eta_N(\lambda) = N L_N^{-2} \varrho_V(x_0+ \lambda/L_N)$. \end{proposition} \begin{remark} \label{rk:macro} In the macroscopic regime ($L_N=1$ and $x_0=0$), the proof below shows that the asymptotics \eqref{approximation} still holds as long as $\mathcal{K} \subset\S_V$. It is also worth noting that the error term in formula \eqref{approximation} is not optimal and it is expected to converge to 0 for any scale $1\le L_N \prec \sqrt{N}$. \end{remark} \proof We let $F_N = \Upsilon^{n+1}[f_N]$ and $$\mathscr{J}^{n}_N : = \underset{z_{n+1}=z_0}{\int_{\mathbb{C}^{n+1}}}\hspace{-.2cm} F_N(z_0, \mathrm{z}) \prod_{j=0}^n K^N_V(z_{j}, z_{j+1}) d\mathrm{A}(z_0) d\mathrm{A}^{n}(\mathrm{z}) . $$ Since $x_0\in\S_V$, there exists a compact set $\S\subset \S_V$ so that $\operatorname{supp}(f_N) \subseteq \S$ when the parameter $N$ is sufficiently large and, according to formula \eqref{Upsilon_f}, the function $F_N$ satisfies the assumption \eqref{estimate_0}. By lemma~\ref{thm:localization}, this implies that \begin{equation} \label{cumulant_1} \mathscr{J}^{n}_N \simeq \int_{\S} d\mathrm{A}(z_0) \underset{z_{n+1} = z_0}{\int_{\mathscr{A}(z_0; \epsilon_N)}}\hspace{-.3cm} F_N(z_0,\mathrm{z}) \prod_{j=0}^n K^N_V(z_{j}, z_{j+1}) d\mathrm{A}^{n}(\mathrm{z}) . 
\end{equation} By \eqref{A}, \begin{equation*} \mathscr{A}(z_0; \epsilon_N) \subset \big\{ \mathrm{z}\in\mathbb{C}^n : z_1, \dots, z_n \in \mathbb{D}(z_0,n \epsilon_N) \big\} , \end{equation*} and we can apply lemma~\ref{thm:Bergman_kernel} to replace the kernels $K^N_V(z_j,z_{j+1})$ in formula \eqref{cumulant_1}. Namely, if $z_{n+1} = z_0 $ and $\mathrm{z} \in \mathscr{A}(z_0; \epsilon_N)$, then \begin{equation*} \Bigg| \prod_{j=0}^n K^N_V(z_{j}, z_{j+1}) - \prod_{j=0}^{n} B^N_V(z_{j}, z_{j+1}) \Bigg| \le C \sum_{k=1}^{n+1} N^{-k} S_N^{n+1-k} , \end{equation*} where $S_N = \sup\big\{ |B^N_V(z,w)| : z, w \in \mathbb{D}(z_0, n \epsilon_N) , z_0 \in \S \big\} $. By lemma~\ref{thm:Ginibre_approximation}, for all $u,v \in \mathbb{D}(0, n\epsilon_N) $, $$ \big| B^N_V(z_0 + u , z_0 + v) \big| \le C \big| K^\infty_{N\varrho_V(z_0)}(u,v) \big| $$ and, by formula \eqref{Ginibre_bound}, this implies that $S_N \le C N$. If we combine these estimates with formula \eqref{cumulant_1}, since the functions $F_N$ are uniformly bounded, we obtain \begin{align*} \mathscr{J}^{n}_N = \int_{\S} d\mathrm{A}(z_0) \underset{z_{n+1} = z_0}{\int_{\mathscr{A}(z_0; \epsilon_N)}}\hspace{-.3cm} F_N(z_0,\mathrm{z}) \prod_{j=0}^n B^N_V(z_{j}, z_{j+1}) d\mathrm{A}^{n}(\mathrm{z})\\ \hspace{3cm}+\O\left( N^{n-1} \int_{\S} d\mathrm{A}(z_0) \big|\mathscr{A}(z_0; \epsilon_N)\big| \right) . \end{align*} By definition, $\epsilon_N = \kappa N^{-1/2} \log N$ so that $ \big| \mathscr{A}(z_0;\epsilon_N) \big| \le C N^{-n} (\log N)^{2n}$ for all $z_0\in \mathbb{C}$. Thus, the previous error term converges to 0 like $(\log N)^{2n}/ N$.
Hence, if we make the change of variables $\mathrm{z} = z_0 +\u$ and the appropriate {\it gauge transform} in the previous integral, according to formula \eqref{B}, we obtain \begin{equation} \label{cumulant_2} \mathscr{J}^{n}_N \simeq \int_{\S} d\mathrm{A}(z_0) \underset{u_{n+1} = u_0=0}{\int_{\mathscr{A}(0; \epsilon_N)}}\hspace{-.3cm} F_N(z_0 +\u) \prod_{j=0}^n \widetilde{B}^N_{V, z_0}(u_{j}, u_{j+1}) d\mathrm{A}^{n}(\u) . \end{equation} Note that in formula \eqref{cumulant_2}, the integral is over a small subset of the surface $\{ \u \in \mathbb{C}^{n+2} : u_0 = u_{n+1} =0\} $ and we denote $F_N(z_0 +\u) = F_N(z_0, z_0+u_1,\dots, z_0+u_n)$. Then, we can apply lemma~\ref{thm:Ginibre_approximation} to replace the kernel $ \widetilde{B}^N_{V, z_0}$ by its local limit $K^\infty_{N \varrho_V(z_0)}$ in formula \eqref{cumulant_2}; we obtain \begin{equation*} \mathscr{J}^{n}_N \simeq \int_{\S} d\mathrm{A}(z_0) \underset{u_{n+1} = u_0=0}{\int_{\mathscr{A}(0; \epsilon_N)}}\hspace{-.3cm} F_N(z_0 +\u)\chi_N(z_0,\u) \prod_{j=0}^n K^\infty_{N \varrho_V(z_0)}(u_{j}, u_{j+1}) d\mathrm{A}^{n}(\u) , \end{equation*} where $\displaystyle \chi_N(z_0,\u) = 1 + \underset{N\to\infty}{O}\big( (\log N)^{2}\epsilon_N \big)$ uniformly for all $z_0\in\S$ and all $\u\in \mathscr{A}(0; \epsilon_N)$. \\ Let $F=\Upsilon^{n+1}[f]$, $\delta_N = \epsilon_N L_N$ and $\eta_N(\lambda) =N L_N^{-2} \varrho_V(x_0 + \lambda/L_N) $. By definition, $F_N(z_0 +\u) = F\big(L_N(z_0- x_0 +\u )\big) $ and we can make the change of variables $\lambda = L_N(z_0-x_0)$ and $\mathrm{w}= L_N \u$ to get rid of the scale $L_N$ and the fixed point $x_0$ in the previous integral. 
Using the obvious scaling property of the $\infty$-Ginibre kernel, \eqref{Ginibre}, we obtain \begin{equation} \label{cumulant_3} \mathscr{J}^{n}_N \simeq \int_{\mathcal{K}} d\mathrm{A}(\lambda) \underset{w_{n+1} = w_0=0}{\int_{\mathscr{A}(0; \delta_N)}}\hspace{-.3cm} F(\lambda+\mathrm{w})\tilde{\chi}_N(\lambda,\mathrm{w}) \prod_{j=0}^n K^\infty_{\eta_N(\lambda)}(w_{j}, w_{j+1}) d\mathrm{A}^{n}(\mathrm{w}) , \end{equation} where $\displaystyle \tilde{\chi}_N(\lambda,\mathrm{w})= 1 + \underset{N\to\infty}{O}\big( (\log N)^{2}\epsilon_N \big)$ uniformly for all $\lambda\in\mathcal{K}$ and for all $\mathrm{w}\in \mathscr{A}(0; \delta_N)$. The condition $\sum_{\k \vdash n+1} \Upsilon(\k) = 0$ implies that $F(\lambda+0)=0$ for all $\lambda \in \mathbb{C}$ so that $$ \big| F(\lambda+\mathrm{w}) \big| \le C \delta_N$$ for all $\lambda\in \mathcal{K}$ and all $\mathrm{w}\in \mathscr{A}(0; \delta_N)$. Moreover, by formula~\eqref{Ginibre_bound}, for any $n \in \mathbb{N}$, \begin{equation} \label{estimate_5} \prod_{j=0}^{n} \big|K^\infty_{\rho}(w_{j}, w_{j+1})\big| \le \rho^{n+1} \prod_{j=1}^{n} e^{- \rho | v_j |^2 /2 } \end{equation} where $v_j = w_j - w_{j-1}$ for all $j=1,\dots, n$. Hence, we see that $$ \bigg| \underset{w_{n+1} = w_0=0}{\int_{\mathscr{A}(0; \delta_N)}}\hspace{-.3cm} F(\lambda+\mathrm{w}) \prod_{j=0}^n K^\infty_{\eta_N(\lambda)}(w_{j}, w_{j+1}) d\mathrm{A}^{n}(\mathrm{w}) \bigg| \le C \delta_N \eta_N(\lambda) $$ and, since $\eta_N(\lambda) \le C N L_N^{-2}$ for all $\lambda\in \mathcal{K}$, we deduce from formula \eqref{cumulant_3} that \begin{align} \label{cumulant_4} \mathscr{J}^{n}_N = \int_{\mathcal{K}} d\mathrm{A}(\lambda) \underset{w_{n+1} = w_0=0}{\int_{\mathscr{A}(0; \delta_N)}}\hspace{-.3cm} F(\lambda+\mathrm{w}) \prod_{j=0}^n K^\infty_{\eta_N(\lambda)}(w_{j}, w_{j+1}) d\mathrm{A}^{n}(\mathrm{w}) \\ + \notag\O\big( N L_N^{-2} \delta_N \epsilon_N (\log N)^2 \big) . 
\end{align} Recall that $\delta_N= L_N \epsilon_N$ and $\epsilon_N=\kappa N^{-1/2}\log N$, so that the error term in \eqref{cumulant_4} is of order $(\log N)^4 L_N^{-1}$. Moreover, if $L_N \prec \sqrt{N}/\log N$, a Taylor approximation shows that for any $\mathrm{w} \in \mathscr{A}(0; \delta_N) $, \begin{equation*} \label{Taylor} F(\lambda,\lambda+w_1,\dots, \lambda+w_n) =H^n( \lambda; \mathrm{w}) + \O(\delta_N^3) . \end{equation*} Using the estimate \eqref{estimate_5} once more, by formula \eqref{cumulant_4}, this implies that \begin{align} \label{cumulant_5} \mathscr{J}^{n}_N = \int_{\mathcal{K}} d\mathrm{A}(\lambda) \underset{w_{n+1} = w_0=0}{\int_{\mathscr{A}(0; \delta_N)}}\hspace{-.3cm} H^n( \lambda; \mathrm{w}) \prod_{j=0}^n K^\infty_{\eta_N(\lambda)}(w_{j}, w_{j+1}) d\mathrm{A}^{n}(\mathrm{w}) \\ + \notag\O\big( N L_N^{-2} \delta_N^3 \vee (\log N)^4 L_N^{-1} \big) . \end{align} By lemma~\ref{thm:delocalization}, the leading term in formula \eqref{cumulant_5} has the same limit (up to an arbitrarily small error term) as $$ \int_{\mathcal{K}} d\mathrm{A}(\lambda) \underset{w_{n+1} = w_0=0}{\int_{\mathbb{C}^n}}\hspace{-.3cm} H^n( \lambda; \mathrm{w}) \prod_{j=0}^n K^\infty_{\eta_N(\lambda)}(w_{j}, w_{j+1}) d\mathrm{A}^{n}(\mathrm{w}) $$ and, since $N L_N^{-2} \delta_N^3 \to0$ when $L_N \prec \sqrt{N}/(\log N)^3$, this completes the proof. \qed\\ Since the function $\mathrm{w}\mapsto H^n(\lambda; \mathrm{w} )$ is a multivariate polynomial of degree 2, the leading term in the asymptotics~\eqref{approximation} can be computed explicitly using the reproducing property of the $\infty$-Ginibre kernel; see for instance~\cite{RV07b}. For any $\rho>0$, the function $(z,w) \mapsto e^{\rho z \bar{w}}$ is the reproducing kernel for the Bergman space with weight $\rho e^{-\rho|z|^2}$ on $\mathbb{C}$. 
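As a quick numerical sanity check of this reproducing property (an illustration, not part of the argument), one can verify the semigroup identity $\int_\mathbb{C} K^\infty_\rho(w_1,w) K^\infty_\rho(w,w_3)\, d\mathrm{A}(w) = K^\infty_\rho(w_1,w_3)$ by quadrature. The explicit form $K^\infty_\rho(z,w)=\rho\, e^{\rho z\bar w - \rho(|z|^2+|w|^2)/2}$ and the normalization $d\mathrm{A}= \pi^{-1}\,d^2z$ are assumptions made for this sketch.

```python
import numpy as np

def K_inf(rho, z, w):
    # assumed form of the infinite-Ginibre kernel:
    # K^inf_rho(z, w) = rho * exp(rho*z*conj(w) - rho*(|z|^2 + |w|^2)/2)
    return rho * np.exp(rho * z * np.conj(w) - rho * (np.abs(z)**2 + np.abs(w)**2) / 2)

rho = 1.5
w1, w3 = 0.3 + 0.2j, -0.1 + 0.4j

# midpoint rule over a box large enough for the Gaussian decay; dA = d^2w / pi (assumed)
h = 0.02
x = np.arange(-6, 6, h) + h / 2
X, Y = np.meshgrid(x, x)
W = X + 1j * Y
integral = np.sum(K_inf(rho, w1, W) * K_inf(rho, W, w3)) * h**2 / np.pi

exact = K_inf(rho, w1, w3)
print(abs(integral - exact) / abs(exact))  # relative error, essentially zero
```

The underlying Gaussian integral can of course be done in closed form; the quadrature only confirms that the chosen normalization of $d\mathrm{A}$ is the one compatible with the reproducing identities.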
This implies that for any $w_1, w_3 \in \mathbb{C}$ and for all integers $k\ge 0$, \begin{equation*} \begin{cases} \displaystyle \int_\mathbb{C} K^\infty_{\rho}(w_1,w_2) w_2^k K^\infty_{\rho}(w_2,w_3) d\mathrm{A}(w_2) = w_1^kK^\infty_{\rho}(w_1,w_3) \vspace{.2cm} \\ \displaystyle \int_\mathbb{C} K^\infty_{\rho}(w_1,w_2) \bar{w_2}^k K^\infty_{\rho}(w_2,w_3) d\mathrm{A}(w_2) = \bar{w_3}^kK^\infty_{\rho}(w_1,w_3) \end{cases} . \end{equation*} As a basic application of these identities, we obtain the following lemma. \begin{lemma} \label{thm:reproducing} Let $n \ge 1$ and $\rho>0$. For any polynomial $H(\mathrm{w})$ of degree at most~2 in the variables $w_1,\dots, w_n, \bar{w_1}, \dots, \bar{w_n}$, we have \begin{equation} \label{reproducing} \underset{w_{n+1} = w_0=0}{\int_{\mathbb{C}^n}}\hspace{-.3cm} H( \mathrm{w}) \prod_{j=0}^nK^\infty_{\rho}(w_{j}, w_{j+1}) \ d\mathrm{A}^n(\mathrm{w}) = \sum_{1 \le r \le s\le n} \partial_s \bar{\partial}_r H |_{\mathrm{w}=0} . \end{equation} \end{lemma} According to formulae \eqref{normalization_0} and \eqref{normalization}, combining proposition~\ref{thm:approximation} and lemma~\ref{thm:reproducing}, we obtain for any test function $f\in \mathcal{C}^3_0(\mathcal{K})$ and for all $n\in \mathbb{N}$ and $m\ge 0$, \begin{align} \label{cumulant_6} & \underset{z_0 = z_{n+1}}{\int_{\mathbb{C}^{n+1}}}\hspace{-.2cm} \Upsilon^{n+1}_m[f_N](z_0, \mathrm{z}) \prod_{j=0}^n K^N_V(z_{j}, z_{j+1}) d\mathrm{A}(z_0) d\mathrm{A}^{n}(\mathrm{z}) \\ &\notag \hspace{1cm}= \sum_{2 \le r \le s \le n+1}\int_{\mathcal{K}} \partial_s \bar{\partial}_r \Upsilon^{n+1}_m[f] (\lambda,\dots, \lambda) d\mathrm{A}(\lambda) +\O\left( \frac{(\log N)^4}{L_N} \right) . 
\end{align} In the macroscopic regime ($L_N=1$ and $x_0=0$), this proves the estimate \eqref{assumption_1} as long as the support $\mathcal{K}$ of the test function satisfies $\mathcal{K}\subset \S_V$, so that theorem~\ref{thm:crossover_2} follows directly from the asymptotic expansion \eqref{limit_1} as explained in section~\ref{sect:cumulant}.\\ In the mesoscopic regime, we claim that the asymptotics \eqref{cumulant_6} with $m=0$ implies the central limit theorem~\ref{meso:2}. The fact that the cumulants of order $n\ge 3$ vanish comes from the following combinatorial lemma. \begin{lemma}[\cite{RV07b}, Lemma~9] \label{thm:RV} For any $n\ge 1$, let \begin{align*} \mathscr{Y}_n= -\sum_{\k\vdash n} \Upsilon_0(\k) \left\{ \sum_{2 \le r < s\le \ell(\k)} k_r k_s + \sum_{r=2}^{\ell(\k)} k_r (k_r-n) \right\}. \end{align*} We have $\displaystyle \mathscr{Y}_n = \begin{cases} 1 &\text{if } n=2 \\ 0 &\text{else} \end{cases} . $ \end{lemma} {\it Proof of theorem~\ref{meso:2}.} Let $\lambda\in\mathbb{C}$ and $\boldsymbol\lambda =(\lambda,\dots, \lambda)\in\mathbb{C}^{n+1}$. According to formula \eqref{Upsilon_f}, an elementary computation shows that for any $2 \le r < s\le n+1$, \begin{equation} \label{combinatorics_1} \partial_s \bar{\partial}_r \Upsilon_0^{n+1}[f](\boldsymbol\lambda) = \partial f(\lambda) \bar{\partial} f(\lambda) f(\lambda)^{n-1} \sum_{\k \vdash n+1} \Upsilon_0(\k) k_r k_s \mathbf{1}_{s \le \ell(\k)} \end{equation} and \begin{align} \notag \partial_r \bar{\partial}_r \Upsilon_0^{n+1}[f](\boldsymbol\lambda) &= \partial f(\lambda) \bar{\partial} f(\lambda) f(\lambda)^{n-1} \sum_{\k \vdash n+1} \Upsilon_0(\k) k_r(k_r-1) \mathbf{1}_{r \le \ell(\k)}\\ &\hspace{.2cm} \label{combinatorics_2} + \Delta f(\lambda) f(\lambda)^{n}\sum_{\k \vdash n+1} \Upsilon_0(\k) k_r \mathbf{1}_{r \le \ell(\k)} . 
\end{align} Since, by integration by parts, $$ \int_\mathbb{C} \Delta f(\lambda) f(\lambda)^{n} d\mathrm{A}(\lambda) = - n \int_\mathbb{C} \partial f(\lambda) \bar{\partial} f(\lambda) f(\lambda)^{n-1} d\mathrm{A}(\lambda) , $$ we deduce from formulae \eqref{combinatorics_1} and \eqref{combinatorics_2} that \begin{align*} \sum_{2 \le r \le s \le n+1}\int_{\mathbb{C}} \partial_s \bar{\partial}_r \Upsilon^{n+1}_0[f] (\boldsymbol\lambda) d\mathrm{A}(\lambda) = -\mathscr{Y}_{n+1} \int_{\mathbb{C}} \partial f(\lambda) \bar{\partial} f(\lambda) f(\lambda)^{n-1} d\mathrm{A}(\lambda) . \end{align*} When $ L_N =N^\alpha$ and $0<\alpha<1/2$, formulae \eqref{cumulant_0} and \eqref{cumulant_6} with $m=0$ imply that for any $n\ge 1$, \begin{equation}\label{cumulant_7} \lim_{N\to\infty}\operatorname{C}^{n+1}_{K^N_V}[f_N] = -\mathscr{Y}_{n+1} \int_{\mathbb{C}} \partial f(\lambda) \bar{\partial} f(\lambda) f(\lambda)^{n-1} d\mathrm{A}(\lambda) . \end{equation} By lemma~\ref{thm:RV}, this proves that for any test function $f\in\mathcal{C}^3_0(\mathbb{C})$, the centered mesoscopic linear statistic $\Xi(f_N) -\mathbb{E}\big[\Xi(f_N) \big] $ converges to a mean-zero Gaussian random variable $\mathrm{X}(f)$ with variance $$ \operatorname{Var}\big[ \mathrm{X}(f) \big] = \int_{\mathbb{C}} \partial f(\lambda) \bar{\partial} f(\lambda) d\mathrm{A}(\lambda) = \|f \|_{H^1(\mathbb{C})}^2 , $$ as $N\to\infty$. \qed\\ Using formula \eqref{cumulant} and the asymptotics~\eqref{cumulant_6}, we can describe in a similar fashion the mesoscopic fluctuations of the incomplete log-gas $\hat{\Xi}$. \\ {\it Proof of theorem~\ref{thm:crossover_3}.} By lemma~\ref{thm:Bergman_kernel}, for any bounded function $f$ with compact support, we have \begin{align*} \int_\mathbb{C} f_N(z)^n K^N_V(z,z) d\mathrm{A}(z) &= N \int_\mathbb{C} f_N(z)^n 2 \Delta V(z) d\mathrm{A}(z) + \O(1) \\ & = N L_N^{-2} \varrho_V(x_0) \int_\mathbb{C} f(z)^n d\mathrm{A}(z) + \O(N L_N^{-3} ) . 
\end{align*} Let $T_N= N q_N L_N^{-2} \varrho_V(x_0)$. By \eqref{cumulant} and \eqref{cumulant_6}, this shows that \begin{equation} \label{cumulant_8} \operatorname{C}^n_{\widehat{K}^N_V}[f] = \operatorname{C}^n_{K^N_V}[f] + T_N \gamma^n_m \int_\mathbb{C} f(z)^n d\mathrm{A}(z) + \underset{N\to\infty}{o}(T_N) \end{equation} as $N\to\infty$ and $q_N\to0$. As we already discussed in section~\ref{sect:cumulant}, the asymptotics \eqref{cumulant_7} and \eqref{cumulant_8} yield theorem~\ref{thm:crossover_3}. \qed\\ \section{Transition for the 1-dimensional log-gases - the global regime. }\label{sect:1} As we explained in the introduction, there is an analogous transition for other determinantal processes and, in particular, for linear statistics of the eigenvalues of Hermitian random matrices or 1-dimensional log-gases. A good reference for the results discussed in this section is given by chapters 11--14 of the book of Pastur and Shcherbina \cite{PS11}. Let $V: \mathbb{R} \to\mathbb{R}$ be a continuous function which satisfies the condition~\eqref{potential} and let $\Xi$ and $\widehat{\Xi}$ be the determinantal processes with correlation kernels $K^N_V$ and $\widehat{K}^N_V = p_N K^N_V$ respectively. It was established in \cite{Johansson98} that there exists an equilibrium measure $\mu_{*}$ such that the one-point function $u^N_V(x) : = N^{-1}K^N_V(x,x)$ viewed as a probability measure on $\mathbb{R}$ converges weakly to $\mu_{*}$ as $N\to\infty$. As in the 2-dimensional setting, the equilibrium measure has compact support and it is absolutely continuous with respect to the Lebesgue measure with a density $\varrho_V$. Let \begin{equation} \label{bulk'} \mathscr{I}_V = \{ x\in \mathbb{R} : \varrho_V(x)>0 \} . \end{equation} We can always make an affine transformation of the potential and assume that $\mathscr{I}_V \subset [-1,1] $. 
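To make the convergence of the one-point function to the equilibrium measure concrete in the simplest case (a numerical sketch, not taken from the text): for a Gaussian potential the equilibrium measure is the semicircle law, which after rescaling the spectrum to $[-1,1]$ has density $\frac{2}{\pi}\sqrt{1-x^2}$ and second moment $1/4$. The GUE normalization below is an assumption made for this illustration.

```python
import numpy as np

# Sample a GUE matrix: H = (A + A*)/2 with A having i.i.d. standard complex
# Gaussian entries, so that the spectrum fills [-2*sqrt(n), 2*sqrt(n)].
rng = np.random.default_rng(0)
n = 1000
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (A + A.conj().T) / 2
eigs = np.linalg.eigvalsh(H) / (2 * np.sqrt(n))  # rescale the edge to +-1

m2 = np.mean(eigs**2)
print(m2)  # close to the semicircle second moment 1/4
```

The empirical second moment matches $\int_{-1}^1 x^2\,\frac{2}{\pi}\sqrt{1-x^2}\,dx=\frac14$ up to fluctuations of smaller order, in agreement with the weak convergence $u^N_V \to \mu_*$.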
Moreover, if $V$ is a polynomial and $\mathscr{I}_V =(-1,1)$, then linear statistics of the process $\Xi$ satisfy a CLT: \begin{equation} \label{Johansson} \Xi(f) - N\int_\mathbb{R} f(x) \varrho_V(x) dx \Rightarrow \mathrm{Y}(f) \hspace{.5cm}\text{as }N\to\infty , \end{equation} for any $f \in \mathcal{C}^2(\mathbb{R})$ such that $f'(x)$ grows at most polynomially as $|x|\to\infty$. The process $\mathrm{Y}$ is a centered Gaussian noise defined on $[-1,1]$ with covariance structure: \begin{equation} \label{Chebychev_covariance} \mathbb{E}\big[ \mathrm{Y}(f)\mathrm{Y}(g) \big] = \frac{1}{4} \sum_{k=1}^\infty k \mathrm{c}_k(f) \mathrm{c}_k(g) . \end{equation} Let $(T_k)_{k=0}^\infty$ denote the Chebyshev polynomials of the first kind, i.e. $T_k(\cos \theta) = \cos(k\theta)$ for any $k\ge 0$. In formula \eqref{Chebychev_covariance}, $\mathrm{c}_k$ are the Fourier--Chebyshev coefficients: \begin{equation}\label{Fourier_T} \mathrm{c}_k(f)=\frac{2}{\pi} \int_{-1}^1 f(x) T_k(x) \frac{dx}{\sqrt{1-x^2}} , \end{equation} for any function $f\in L^\infty([-1,1])$. The CLT \eqref{Johansson} holds for more general potentials and for other orthogonal polynomial ensembles as well; see \cite[section~11.3]{PS11} or \cite{BG13a, BD16, Lambert_b}. It is known that the one-cut condition, i.e.~the assumption that the support of the equilibrium measure is connected, is necessary. Otherwise, the asymptotic fluctuations of a generic linear statistic $\Xi(f)$ are still of order~1 but they are not Gaussian, see \cite{Pastur06, Shcherbina13, BG13b}. In this section, we will establish the counterpart of the CLT \eqref{Johansson} for the incomplete ensembles and for polynomial test functions. In particular, the estimate \eqref{assumption_1} will be proved using the approach introduced in \cite{BD17} which consists in using the three-term recurrence relation of the orthogonal polynomials $\{P_k \}_{k=0}^\infty$ with respect to the measure $d\mu_N = e^{- 2N V(x)}dx$ to express the cumulants. 
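The Fourier--Chebyshev coefficients \eqref{Fourier_T} are convenient to compute with Gauss--Chebyshev quadrature, whose nodes are $x_i=\cos\big(\tfrac{(2i-1)\pi}{2n}\big)$. A small sketch (the helper `chebyshev_coeff` is ours, not from the text) checks the orthogonality $\mathrm{c}_k(T_j)=\delta_{jk}$ for $j,k\ge1$, which is what makes the covariance \eqref{Chebychev_covariance} diagonal in the Chebyshev basis.

```python
import numpy as np

def chebyshev_coeff(f, k, n=200):
    # c_k(f) = (2/pi) * int_{-1}^{1} f(x) T_k(x) dx / sqrt(1 - x^2),
    # via n-point Gauss-Chebyshev quadrature (exact for polynomial integrands
    # of degree <= 2n - 1); the weight sqrt(1 - x^2) is absorbed by the nodes.
    theta = (2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n)
    x = np.cos(theta)
    return (2 / n) * np.sum(f(x) * np.cos(k * theta))  # T_k(cos t) = cos(k t)

T3 = lambda x: 4 * x**3 - 3 * x  # Chebyshev polynomial T_3
print(chebyshev_coeff(T3, 3), chebyshev_coeff(T3, 1))  # ~1 and ~0
```

By \eqref{Chebychev_covariance}, one then reads off $\operatorname{Var}[\mathrm{Y}(T_k)]=k/4$ directly from these coefficients.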
For any $N\in \mathbb{N}$, there exist two sequences $a^N_k >0$ and $b^N_k \in \mathbb{R}$ such that the orthogonal polynomials $P_k$ satisfy \begin{equation}\label{recurrence} xP_k(x) = a^N_k P_{k+1}(x) + b^N_k P_k(x) + a^N_{k-1} P_{k-1}(x) . \end{equation} \begin{theorem} \label{thm:crossover_1} Let $\mathrm{Y}$ be a mean-zero Gaussian process with covariance \eqref{Chebychev_covariance} and let $\Lambda_{\tau\varrho_V}$ be an independent Poisson process with intensity $\tau\varrho_V$ where $\tau>0$. Let $p_N = 1-q_N$ and $T_N = N q_N$. Suppose that the recurrence coefficients of the orthogonal polynomials $\{P_k \}_{k=0}^\infty$ satisfy for all $j\in\mathbb{Z}$, \begin{equation}\label{recurrence_limit} \lim_{N\to\infty} a^N_{N+j} = 1/2 \hspace{.5cm}\text{and}\hspace{.5cm} \lim_{N\to\infty}b^N_{N+j} = 0 , \end{equation} and suppose that for all $k \in\mathbb{N}$, \begin{equation} \label{LLN_2} \lim_{N\to\infty} \int_\mathbb{R} x^k u^N_V(x) dx = \int_\mathbb{R} x^k \varrho_V(x) dx . \end{equation} Then, for any polynomial $Q\in\mathbb{R}\langle X\rangle$, we obtain as $N\to\infty$ and $q_N \to 0$, \begin{align*} \hat{\Xi}(Q) - \mathbb{E}\big[\hat{\Xi}(Q) \big] &\Rightarrow \mathrm{Y}(Q) &\text{if }\ T_N \to 0 , \hspace{.2cm}\\ \frac{\hat{\Xi}(Q) - \mathbb{E}\big[\hat{\Xi}(Q) \big]}{\sqrt{T_N} } & \Rightarrow \mathcal{N}\left(0, \int_{\mathbb{R}} Q(x)^2 \varrho_V(x) dx \right) &\text{if }\ T_N \to \infty ,\\ \hat{\Xi}(Q) - \mathbb{E}\big[\hat{\Xi}(Q) \big] & \Rightarrow \mathrm{Y}(Q) + \Lambda_{\tau\varrho_V}(-Q) &\text{if }\ T_N \to \tau . \hspace{.25cm} \end{align*} \end{theorem} The condition \eqref{LLN_2} can easily be deduced from the following estimate for the one-point function or density of the gas, $$ u^N_V(x) = N^{-1}K^N_V(x,x) \le e^{- 2N \{ V(x) - \log(1+|x|) - C\} } , $$ where $C$ is a constant which depends only on the potential $V$; c.f.~ \cite[Theorem~11.1.2]{PS11}. 
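A hedged illustration of condition \eqref{recurrence_limit} (a sketch, not part of the proof): the limiting coefficients $a_k=1/2$, $b_k=0$ are those of the Chebyshev recurrence, and the corresponding truncated tridiagonal matrix has explicit eigenvalues $\cos\big(\tfrac{j\pi}{n+1}\big)$, whose empirical distribution converges to the arcsine law on $[-1,1]$, consistent with $\mathscr{I}_V=(-1,1)$.

```python
import numpy as np

# Truncated Jacobi matrix built from the limiting recurrence coefficients
# a_k = 1/2 on the off-diagonals and b_k = 0 on the diagonal.
n = 500
J = np.diag(np.full(n - 1, 0.5), 1) + np.diag(np.full(n - 1, 0.5), -1)
eigs = np.sort(np.linalg.eigvalsh(J))

# closed form for a tridiagonal Toeplitz matrix: eigenvalues cos(j*pi/(n+1))
exact = np.sort(np.cos(np.arange(1, n + 1) * np.pi / (n + 1)))
print(np.max(np.abs(eigs - exact)))  # numerically zero
```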
In particular, the assumption \eqref{potential} implies that the function $u^N_V(x)$ decays faster than any polynomial outside a compact set and the limits \eqref{LLN_2} can be deduced from the weak convergence of $u^N_V(x)$ to the equilibrium measure $d\mu_{*} = \varrho_V dx$. It was proved in~\cite{Johansson98} that if $V$ is a convex polynomial and $\mathscr{I}_V =(-1,1)$, then the condition \eqref{recurrence_limit} is satisfied and theorem~\ref{thm:crossover_1} holds. In fact, this assumption should be generic and it is an interesting question whether it is equivalent to the condition $\mathscr{I}_V =(-1,1)$. \\ The completion $\mathscr{L}_N$ of the space of polynomials with respect to $L^2(\mathbb{R},\mu_N)$ is isomorphic to $L^2(\mathbb{N}_0)$ and formula \eqref{recurrence} implies that the multiplication by $x$ on $\mathscr{L}_N$ is unitarily equivalent to the matrix \begin{equation}\label{Jacobi} \mathbf{J} := \begin{bmatrix} b_0^N & a_0^N & 0 & 0 & 0 \\ a_0^N & b_1^N & a_1^N & 0 & 0 &\mathbf{0} \\ 0 & a_1^N & b_2^N & a_2^N & 0 \\ 0 & 0 & a_2^N & b_3^N & a_3^N \\ &\mathbf{0} & & \ddots &\ddots&\ddots \end{bmatrix} . \end{equation} This matrix is called the {\bf Jacobi matrix}. The connection with linear statistics of eigenvalues comes from the fact that for any polynomial $Q\in\mathbb{R}\langle X\rangle$ and for any composition $\k\vdash n$, one easily sees that \begin{align*} \notag &\underset{x_{0} =x_l}{\int_{\mathfrak{X}^l}} Q(x_1)^{k_1} \cdots Q(x_l)^{k_l} \prod_{1\le j\le l} K^N_V(x_j, x_{j-1}) dx_1 \cdots dx_l \\ & \hspace{2cm}= \sum_{m=0}^{N-1} \sum_{\pi \in \Gamma_{m}^n } \prod_{j=1}^l \mathbf{1}_{\pi(k_1+\cdots+ k_j) < N} \prod_{i=0}^{n-1} Q(\mathbf{J})_{\pi(i)\pi(i+1)} , \end{align*} where $\mathcal{G}$ denotes the adjacency graph of the matrix $Q(\mathbf{J})$ and $$ \Gamma_{m}^n = \big\{ \text{paths }\pi \text{ on the graph }\mathcal{G}\text{ of length }n \text{ such that }\pi(0)=\pi(n)=m \big\} . 
$$ For any composition $\k\vdash n$, we let \begin{equation} \Phi_\pi^N(\k) := \mathbf{1}_{\hspace{-.1cm}\displaystyle\max_{1\le j <\ell(\k)}\pi(k_1+\cdots+ k_j) \ge N } . \end{equation} Observe that $$ \prod_{1\le j <\ell(\k)} \mathbf{1}_{\pi(k_1+\cdots+ k_j) < N} = 1- \Phi_\pi^N(\k) , $$ so that by formula \eqref{cumulant_0}, the cumulants of a polynomial linear statistic are given by \begin{equation} \label{cumulant_15} \operatorname{C}^n_{K^N_V}[Q] = - \sum_{m =0}^{N-1} \sum_{\pi \in \Gamma_{m}^n } \prod_{i=0}^{n-1} Q(\mathbf{J})_{\pi(i)\pi(i+1)} \sum_{\k \vdash n}\Upsilon_0(\k)\big( 1-\Phi_\pi^N(\k)\big) . \end{equation} Note that there exists a constant $M>0$ which only depends on the degree of $Q$ and $n$ so that $\Phi_\pi^N(\k)=0 $ for any composition $\k\vdash n$ and any path $\pi \in \Gamma^n_m$ as long as $m < N - M$. Since $ \sum_{\k \vdash n}\Upsilon_0(\k) = 0$ for all $n \ge 2$, formula \eqref{cumulant_15} implies that \begin{equation*} \operatorname{C}^n_{K^N_V}[Q] = \sum_{m =N -M}^{N-1} \sum_{\pi \in \Gamma_{m}^n } \prod_{i=0}^{n-1} Q(\mathbf{J})_{\pi(i)\pi(i+1)} \sum_{\k \vdash n}\Upsilon_0(\k)\Phi_\pi^N(\k) . \end{equation*} In particular, if the matrix $\mathbf{J}$ has a right-limit, i.e.~there exists an (infinite) matrix $\L$ such that for all $i,j \in\mathbb{Z}$, \begin{equation}\label{right_limit} \lim_{N\to\infty} \mathbf{J}_{N+i, N+j} = \L_{i,j} , \end{equation} then \begin{equation} \lim_{N\to\infty}\operatorname{C}^n_{K^N_V}[Q] = \sum_{m =1}^{M} \sum_{\pi \in \widetilde{\Gamma}_{m}^n } \prod_{i=0}^{n-1} Q(\L)_{\pi(i)\pi(i+1)} \sum_{\k \vdash n}\Upsilon_0(\k)\Phi_\pi^0(\k) , \end{equation} where $\widetilde{\mathcal{G}}$ denotes the adjacency graph of the matrix $Q(\L)$ and $$ \widetilde{\Gamma}_{m}^n = \big\{ \text{paths }\pi \text{ on the graph }\widetilde{\mathcal{G}}\text{ of length }n \text{ such that }\pi(0)=\pi(n)=-m \big\} . 
$$ Since $\sum_{\k \vdash n} \Upsilon_m(\k) = 0$ for all $n, m\in\mathbb{N}$, the very same computation shows that \begin{align} \notag &\lim_{N\to\infty} \underset{x_{0} =x_n}{\int_{\mathfrak{X}^n}} \Upsilon^n_m[Q](x) \prod_{1\le j\le n} K^N_V(x_j, x_{j-1}) d^nx \\ &\label{cumulant_16}\hspace{2cm} = \sum_{m =1}^{M} \sum_{\pi \in \widetilde{\Gamma}_{m}^n } \prod_{i=0}^{n-1} Q(\L)_{\pi(i)\pi(i+1)} \sum_{\k \vdash n}\Upsilon_m(\k)\Phi_\pi^0(\k) . \end{align} Hence, the condition \eqref{assumption_1} obviously holds and, by formula \eqref{LLN_2}, this yields that for all $n \ge 2$, \begin{equation*} \operatorname{C}^n_{\widehat{K}^N_V}[Q]=\operatorname{C}^n_{K^N_V}[Q] + Nq_N \int_\mathbb{R} Q(x)^{n} \varrho_V(x) dx + \O(Nq_N^2) . \end{equation*} The condition \eqref{recurrence_limit} implies that the right-limit of the Jacobi matrix is a tridiagonal matrix $\L$ such that $\L_{jj}= 0$ and $\L_{j, j\pm 1} =1/2$ for all $j \in \mathbb{Z}$ and it was proved in \cite{Lambert_b}, see also~\cite{BD17}, that in this case: $$ \lim_{N\to\infty} \operatorname{C}^n_{K^N_V}[Q] = \begin{cases} \displaystyle \frac{1}{\pi^2} \sum_{k=1}^{\operatorname{deg} Q} k \bigg( \int_{-1}^1 Q(x) T_k(x) \frac{dx}{\sqrt{1-x^2}} \bigg)^2 &\text{if } n=2 \\ 0 &\text{else} \end{cases} . $$ As we discussed in section~\ref{sect:cumulant}, these asymptotics imply theorem~\ref{thm:crossover_1}. \begin{remark}[Generalizations of theorem~\ref{thm:crossover_1}] \label{rk:biorthogonal} Note that we have formulated theorem~\ref{thm:crossover_1} for a log-gas at inverse temperature $\beta=2$, but the previous proof can be generalized to other biorthogonal ensembles with a correlation kernel of the form \begin{equation*} K^N(z,w) = \sum_{k=0}^{N-1} \varphi_k^N(z) \varpi_k^N(w) . 
\end{equation*} The appropriate assumptions are that the equilibrium measure $\mu_{*}$ exists, formula \eqref{LLN_1} holds for all polynomials, the family $\{\varphi_k^N \}_{k=0}^\infty $ satisfies a $q$-term recurrence relation for all $N\in\mathbb{N}$ and the corresponding recurrence matrix $\mathbf{J}$ has a right-limit~$\L$ as $N\to\infty$. This applies to other orthogonal polynomial ensembles, such as the discrete point processes coming from domino tilings of hexagons, as well as some non-symmetric biorthogonal ensembles such as the Muttalib-Borodin ensembles, square singular values of products of complex Ginibre matrices or certain two-matrix models; see \cite{BD17, Lambert_b} for more details. Note also that formula \eqref{cumulant_16} still holds in the case where the right-limit $\L$ exists but is not Toeplitz (i.e.~constant along its main diagonals). Then, we obtain a crossover from a non-Gaussian process (described by $\L$ in the regime where $T_N\to 0$) to a Poisson process (when $T_N\to \infty$). For instance, such a transition arises when considering linear statistics of the log-gases in the multi-cut regime. \end{remark} \section{Transition for the 1-dimensional log-gases - the mesoscopic regime. }\label{sect:1'} The combinatorial method used in the previous section is well-suited to investigate the global properties of the processes $\Xi$ and $\hat{\Xi}$ but it is difficult to implement in the mesoscopic regime (in this case, polynomial linear statistics do not converge without normalization). Although there is a mesoscopic theory based on the asymptotics of the recurrence matrix \eqref{Jacobi} developed in \cite{BD16}, it is simpler to prove the mesoscopic counterpart of theorem~\ref{thm:crossover_1} using the sine-kernel asymptotics of the correlation kernel $K^N_V$ and the method of~\cite{Lambert_a}. This reduces the proof to a careful application of the classical asymptotics of \cite{Deift99+}. 
These results of Deift et al.~rely on the Riemann-Hilbert problem associated to the orthogonal polynomials $\{P_k \}_{k=0}^\infty$ and hold as long as the potential $V$ is real-analytic (including the multi-cut and non-regular cases). The steepest descent method of \cite{Deift99+} has also been established for many other invariant ensembles, cf.~\cite{Kuijlaars11}, for which it is also possible to extract the asymptotics \eqref{sine_1} for the correlation kernel and deduce the analogue of theorem~\ref{thm:crossover_4}. We let $x_0\in\mathscr{I}_V$, $f\in \mathcal{C}^2_0(\mathbb{R})$, $\alpha\in (0,1)$, and $f_N(x) = f\big(N^\alpha(x-x_0) \big)$. In the following, we denote by $$ \hat{f}(u) = \int_\mathbb{R} f(x) e^{-2\pi i x u} dx $$ the Fourier transform of the function $f$. \begin{theorem} \label{thm:crossover_4} Let $\mathrm{Z}$ be a mean-zero Gaussian process on $\mathbb{R}$ with correlation structure: \begin{equation} \mathbb{E}\big[ \mathrm{Z}(h)\mathrm{Z}(g) \big] = \int_{0}^\infty u \hat{h}(u) \hat{g}(-u) du \end{equation} for any functions $h, g \in H^{1/2}(\mathbb{R})$ and let $\Lambda_{\tau}$ be an independent Poisson process with intensity $\tau>0$ on $\mathbb{R}$. Let $p_N = 1-q_N$ and $T_N = q_NN^{1-\alpha} \varrho_V(x_0)$.\\ We obtain as $N\to\infty$ and $q_N\to0$, \begin{align} \label{Normal_4} \hat{\Xi}(f_N) - \mathbb{E}\big[\hat{\Xi}(f_N) \big] &\Rightarrow \mathrm{Z}(f) &\text{if }\ T_N \to 0 , \hspace{.2cm}\\ \label{f_Noisson_4} \frac{\hat{\Xi}(f_N) - \mathbb{E}\big[\hat{\Xi}(f_N) \big]}{\sqrt{T_N} } & \Rightarrow \mathcal{N}\left(0, \int_{\mathbb{R}} f(x)^2 dx \right) &\text{if }\ T_N \to \infty ,\\ \label{crossover_4} \hat{\Xi}(f_N) - \mathbb{E}\big[\hat{\Xi}(f_N) \big] & \Rightarrow \mathrm{Z}(f) + \Lambda_{\tau}(-f) &\text{if }\ T_N \to \tau . \hspace{.25cm} \end{align} \end{theorem} Since $\operatorname{Var}\big[ \mathrm{Z}(f)\big] = \| f\|_{H^{1/2}(\mathbb{R})}^2$, the process $\mathrm{Z}$ is usually called the $H^{1/2}$-Gaussian noise. 
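For a concrete value of this variance (an illustration under the Fourier convention $\hat f(u)=\int f(x)e^{-2\pi i xu}dx$ fixed above): the Gaussian $f(x)=e^{-\pi x^2}$ is self-dual, so $\operatorname{Var}[\mathrm{Z}(f)]=\int_0^\infty u\, e^{-2\pi u^2}du = \frac{1}{4\pi}$, which a direct quadrature confirms.

```python
import numpy as np

# Var[Z(f)] = int_0^infty u * |fhat(u)|^2 du for f(x) = exp(-pi*x^2);
# under the convention above, fhat(u) = exp(-pi*u^2).
du = 1e-4
u = (np.arange(100000) + 0.5) * du  # midpoints covering [0, 10]
var_num = np.sum(u * np.exp(-2 * np.pi * u**2)) * du

var_exact = 1 / (4 * np.pi)
print(var_num, var_exact)
```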
It describes the mesoscopic fluctuations of the eigenvalues of Hermitian random matrices, see \cite{BD16, Lambert_a, LS15, HK16}, as well as the mesoscopic fluctuations of the log-gases for general $\beta>0$, \cite{BL16}, and of random band matrices in the appropriate regime \cite{EK14a, EK14b}. \\ Our proof of theorem~\ref{thm:crossover_4} is based on the strategy explained in section~\ref{sect:cumulant} and is postponed to the end of this section after we review some of the ideas of~\cite{Lambert_a}. By making a change of variables in formula \eqref{cumulant}, the cumulants of the linear statistics of the incomplete ensembles are given by \begin{align} \label{cumulant_9} \operatorname{C}^n_{\widehat{K}^N_V}[f_N] = \operatorname{C}^n_{K^N_V}[f_N] &- \sum_{m=1}^n (-q_N)^m \gamma^n_m \int_\mathbb{R} f(x)^{n} \widetilde{K}^N_{V,x_0}(x,x) dx \\ &\notag- \sum_{m=1}^n (-q_N)^m \underset{x_{0} =x_n}{\int_{\mathbb{R}^n}} \Upsilon^n_m[f](x) \prod_{j\le n} \widetilde{K}^N_{V,x_0}(x_j, x_{j-1}) dx , \end{align} where $\widetilde{K}^N_{V,x_0}$ denotes the rescaled correlation kernel $$\widetilde{K}^N_{V,x_0}(x, y) = \frac{1}{N^\alpha} K_{V}^N\left(x_0+\frac{x}{N^{\alpha}} ,x_0 +\frac{y}{N^{\alpha}}\right) . $$ Recall that $\varrho_V$ is the density of the equilibrium measure and define {\it the integrated density of states}: \begin{equation} \label{IDS} F_V(x) = \int_0^x \varrho_V(s) ds . \end{equation} Using the recurrence relation \eqref{recurrence}, one can rewrite the correlation kernel as \begin{equation*} K^N_V(x,y) = a^N_{N-1} \frac{\varphi_N(x) \varphi_{N-1}(y) - \varphi_N(y) \varphi_{N-1}(x)}{x-y} . 
\end{equation*} Then, based on the results of \cite{Deift99+}, one can show that for any $\alpha \in (0,1]$, we have \begin{equation} \label{sine_1} \widetilde{K}^N_{V,x_0}(x, y) = \frac{\sin \left[ \pi N \big( F_V(x_0+ x N^{-\alpha})-F_V(x_0+ y N^{-\alpha})\big)\right]}{\pi(x-y)}+ \underset{N\to\infty}{O}\left(N^{-\alpha}\right) , \end{equation} uniformly for all $x_0$ in compact subsets of $\mathscr{I}_V$ and for all $x, y$ in compact subsets of $\mathbb{R}$; cf.~\cite[Proposition~3.5]{Lambert_a}. In particular, by continuity along the diagonal, for any $x_0\in \mathscr{I}_V$, $$\widetilde{K}^N_{V,x_0}(x, x) = N^{1-\alpha} \varrho_V(x_0) + \underset{N\to\infty}{O}\left(N^{-\alpha}\right) , $$ and this implies that for all $n\in\mathbb{N}$, \begin{equation} \label{LLN_4} \int_\mathbb{R} f(x)^{n} \widetilde{K}^N_{V,x_0}(x,x) dx \simeq N^{1-\alpha} \varrho_V(x_0) \int_\mathbb{R} f(x)^{n} dx . \end{equation} We define the sine kernel with density $\rho>0$ on $\mathbb{R}$ by \begin{equation} \label{sine} K^{\sin}_\rho(x,y) = \frac{\sin[\pi \rho(x-y)]}{\pi(x-y)} . \end{equation} We see by taking $\alpha=1$ in formula \eqref{sine_1} that the sine process with correlation kernel \eqref{sine} describes the local limit in the bulk of the 1-dimensional log-gases. In the mesoscopic regime, it was proved in \cite{Lambert_a} that, up to a change of variable, it is possible to replace the kernel $\widetilde{K}^N_{V,x_0}$ by an appropriate sine-kernel using the asymptotics \eqref{sine_1} in the cumulant formulae. Namely, for any $n\ge 2$, \begin{equation} \label{sine_2} \operatorname{C}^n_{K^N_V}[f_N] \simeq \operatorname{C}^n_{K^{\sin}_{\eta_N(x_0) }}[ f\circ\zeta_N] \end{equation} where \begin{equation} \label{zeta_N} \zeta_N(x) = N^\alpha \left\{ G_V\left( F_V(x_0) + \varrho_V(x_0) \frac{x}{N^\alpha} \right)-x_0\right\} . \end{equation} Here, the function $G_V$ denotes the inverse of the integrated density of states $F_V$, \eqref{IDS}. 
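As a sketch of the behaviour of the map \eqref{zeta_N} in an explicitly solvable case (the arcsine density is our choice for this illustration, not taken from the text): for $\varrho_V(x)=\frac{1}{\pi\sqrt{1-x^2}}$ on $(-1,1)$ one has $F_V(x)=\frac{\arcsin x}{\pi}$ and $G_V(y)=\sin(\pi y)$, so $\zeta_N$ is explicit and one can check numerically that $\zeta_N(x)=x+O(N^{-\alpha})$ uniformly on compacts.

```python
import numpy as np

# zeta_N for the arcsine equilibrium density: F_V(x) = arcsin(x)/pi, G_V(y) = sin(pi*y)
def zeta(x, x0, alpha, N):
    rho0 = 1 / (np.pi * np.sqrt(1 - x0**2))  # rho_V(x0)
    F0 = np.arcsin(x0) / np.pi               # F_V(x0)
    return N**alpha * (np.sin(np.pi * (F0 + rho0 * x / N**alpha)) - x0)

x = np.linspace(-1, 1, 201)
errs = [np.max(np.abs(zeta(x, 0.2, 0.5, N) - x)) for N in (1e2, 1e4, 1e6)]
print(errs)  # decreasing, of order N^{-alpha}
```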
By the inverse function theorem, it exists in a neighborhood of any point $F_V(x_0)$ when $x_0\in\mathscr{I}_V$ and the map $\zeta_N$ is well-defined on any compact subset of $\mathbb{R}$ as long as the parameter $N$ is sufficiently large. The combinatorics of the cumulants of linear statistics of a determinantal process are associated with the map $\Upsilon_0$, see formulae \eqref{cumulant_0}--\eqref{Upsilon_0}. However, we deduce from the proof of proposition~2.2 in \cite{Lambert_a} that a similar asymptotics holds for any function $\Upsilon:\mho\to \mathbb{R}$. In particular, we get for any $m \ge 1$, \begin{equation}\label{cumulant_10} \underset{x_{0} =x_n}{\int_{\mathbb{R}^n}} \Upsilon^n_m[f](x) \prod_{j\le n} \widetilde{K}^N_{V,x_0}(x_j, x_{j-1}) d^nx \ \simeq \underset{x_{0} =x_n}{\int_{\mathfrak{X}^n}} \hspace{-.2cm} \Upsilon^n_m[h_N](x) \prod_{1\le j\le n} K^{\sin}_{\eta_N}(x_j, x_{j-1}) d^nx \end{equation} where $h_N = f\circ \zeta_N$. \begin{proposition} \label{thm:g} Suppose that $\operatorname{supp}(f) \subset (-L, L)$. There exists $N_0 >0$ such that for all $N \ge N_0$, the functions $h_N= f\circ \zeta_N$ are well-defined on $\mathbb{R}$, $ h_N\in C^2_0([-L,L])$ and for all $u\in\mathbb{R}$, \begin{equation} \label{g_estimate} \big| \widehat{h_N}(u) \big| \le \| f \|_{\mathcal{C}^2(\mathbb{R})} \frac{C }{1+ |u|^2} . \end{equation} \end{proposition} \proof When the potential $V$ is analytic, the {\it bulk} $\mathscr{I}_V$ is composed of finitely many open intervals and the equilibrium density $\varrho_V$ is smooth on $\mathscr{I}_V$. Since $x_0\in\mathscr{I}_V$, by formula \eqref{zeta_N}, the function $\zeta_N$ is increasing and smooth on the interval $[-L,L]$, and we have $$ \zeta_N''(x) = \varrho_V(x_0)^2 G_V''\big(F_V(x_0) + \varrho_V(x_0) xN^{-\alpha}\big ) N^{-\alpha}. 
$$ Moreover, since $\zeta_N(0)=0$ and $\zeta'_N(0) =G_V'\big(F_V(x_0)\big) \varrho_V(x_0) =1$, this implies that $$ \zeta_N(x) = x + \O(N^{-\alpha}) $$ uniformly for all $x\in[-L,L]$. Since the interval $(-L, L)$ contains the support of the test function $f$, this estimate shows that when the parameter $N$ is large, we can define $h_N(x) = f\big( \zeta_N(x) \big)$ for all $x\in[-L, L]$ and extend it by 0 on $\mathbb{R} \backslash [-L,L]$. Then $h_N \in C^2_0(\mathbb{R})$ and \begin{equation} \label{h_N''} h_N''(x) = \zeta_N''(x) f'(\zeta_N(x)) + \zeta_N'(x)^2f''(\zeta_N(x)) \end{equation} for all $x\in[-L,L]$. Moreover, we can use the estimate \begin{equation} \label{Fourier_estimate} \big| \widehat{h_N}(u) \big| \le C \frac{\| h_N \|_\infty + \| h_N''\|_\infty}{1+ |u|^2} \end{equation} to get the upper-bound \eqref{g_estimate}. Plainly $\| h_N \|_\infty \le \| f\|_\infty$ and it is easy to deduce from formula \eqref{h_N''} that $$ \| h_N'' \|_\infty \le \| f' \|_\infty + C \|f''\|_\infty . $$ \qed\\ For any $u \in \mathbb{R}^n$ and for any composition $\k\vdash n$, define $$ \Psi_u(\k) = 2\max_{1\le j <\ell(\k)}\{0, u_1+\cdots + u_{k_1+\cdots +k_j} \} . $$ To compute the limit of the RHS of \eqref{cumulant_10}, we need the following asymptotics. \begin{lemma} \label{thm:sine} Let $n\ge 2$ and let $\eta_N>0$ such that $\eta_N\nearrow \infty$. Suppose that $h_N$ is a sequence of integrable functions such that \begin{equation}\label{assumption_2} \lim_{N\to\infty} \hspace{-.3cm} \underset{\begin{subarray}{c} u_1+\cdots +u_{n}=0 \\ |u_1|+\cdots +|u_n| > \eta_N \end{subarray}}{\int_{\mathbb{R}^{n-1}}} \hspace{-.4cm} \big| \widehat{h_N}(u_1)\cdots \widehat{h_N}(u_n) \big| |u_1| d^{n-1}\mathrm{u} =0 . 
\end{equation} Then, for any map $\Upsilon: \mho \to\mathbb{R}$ such that $\sum_{\k \vdash n} \Upsilon(\k) =0$, we have \begin{equation}\label{cumulant_11} \hspace{-.5cm} \underset{x_{0} =x_n}{\int_{\mathfrak{X}^n}} \hspace{-.2cm} \Upsilon^n[h_N](x) \prod_{1\le j\le n} K^{\sin}_{\eta_N}(x_j, x_{j-1}) d^nx\ \simeq - \hspace{-.3cm}\underset{u_1+\cdots +u_n=0}{\int_{\mathbb{R}^{n-1}}} \hspace{-.3cm} \Re\bigg\{ \prod_{j=1}^n \widehat{h_N}(u_j) \bigg\}\sum_{\k \vdash n}\Upsilon(\k)\Psi_u(\k) d^{n-1}u . \end{equation} \end{lemma} \proof The argument is the same as the proof of Lemma~1 in Soshnikov's paper \cite{Soshnikov00} on linear statistics of the CUE, see also \cite{Spohn87} or section~2.1 in \cite{JL15}. Based on the formula $$ K^{\sin}_{\eta_N}(x,y) = \int_\mathbb{R} \mathbf{1}_{\{ |u|< \eta_N/2\} } e^{2\pi i u (x-y)} du , $$ we obtain \begin{align*} \mathscr{T}_N&:=\underset{x_{0} =x_n}{\int_{\mathfrak{X}^n}} \hspace{-.2cm} \Upsilon^n[h_N](x) \prod_{1\le j\le n} K^{\sin}_{\eta_N}(x_j, x_{j-1}) d^nx \\ & = \hspace{-.2cm}\underset{u_1+\cdots +u_n=0 \hspace{.1cm}}{\int_{\mathbb{R}^{n-1}}} \prod_{j=1}^n \widehat{h_N}(u_j) \sum_{\k \vdash n}\Upsilon(\k) \max\big\{0, \eta_N - \Psi_u(\k)/2 - \Psi_{-u}(\k)/2 \big\} d^{n-1}u . \end{align*} Then, the condition $\sum_{\k \vdash n} \Upsilon(\k) =0$ implies that \begin{align*} &\bigg| \mathscr{T}_N + \hspace{-.2cm}\underset{u_1+\cdots +u_n=0 \hspace{.1cm}}{\int_{\mathbb{R}^{n-1}}} \prod_{j=1}^n \widehat{h_N}(u_j) \sum_{\k \vdash n}\Upsilon(\k) \frac{ \Psi_u(\k) + \Psi_{-u}(\k)}{2} d^{n-1}u \bigg| \\ & \le \hspace{-.2cm}\underset{u_1+\cdots +u_n=0 \hspace{.1cm}}{\int_{\mathbb{R}^{n-1}}} \bigg| \prod_{j=1}^n \widehat{h_N}(u_j) \bigg| \sum_{\k \vdash n} |\Upsilon(\k) | \Psi_u(\k) \mathbf{1}_{ \{\Psi_u(\k) + \Psi_{-u}(\k) \ge 2 \eta_N \} }d^{n-1}u \end{align*} Since $ \big| \Psi_u(\k)/2 \big| \le |u_1|+\cdots +|u_n|$ for any $\k\vdash n$, the condition \eqref{assumption_2} is sufficient to obtain formula \eqref{cumulant_11}. 
\qed\\ {\it Proof of theorem~\ref{thm:crossover_4}.} Using the estimate \eqref{g_estimate}, we see that when the parameter $N$ is sufficiently large, there exists a constant $C>0$ so that \begin{align} \notag &\Bigg| \underset{u_1+\cdots +u_n=0}{\int_{\mathbb{R}^{n-1}}} \hspace{-.3cm} \Re\bigg\{ \prod_{j=1}^n \widehat{h_N}(u_j) \bigg\}\sum_{\k \vdash n}\Upsilon_m(\k)\Psi_u(\k) d^{n-1}u \Bigg| \\ &\label{cumulant_12} \hspace{2cm} \le C \hspace{-.2cm} \underset{u_1+\cdots +u_n=0}{\int_{\mathbb{R}^{n-1}}} \frac{|u_1|+\cdots +|u_n|}{(1+ |u_1|^2) \cdots (1+|u_n|^2)} d^{n-1}u . \end{align} A similar upper-bound shows that the sequence $h_N = f\circ \zeta_N$ satisfies the condition \eqref{assumption_2} of lemma~\ref{thm:sine}. Hence, combining the asymptotics \eqref{cumulant_10} and \eqref{cumulant_11}, we obtain \begin{align}\label{cumulant_13} & \underset{x_{0} =x_n}{\int_{\mathbb{R}^n}} \Upsilon^n_m[f](x) \prod_{j\le n} \widetilde{K}^N_{V,x_0}(x_j, x_{j-1}) d^nx \\ &\notag \hspace{2cm}\simeq - \hspace{-.3cm}\underset{u_1+\cdots +u_n=0}{\int_{\mathbb{R}^{n-1}}} \hspace{-.3cm} \Re\bigg\{ \prod_{j=1}^n \widehat{h_N}(u_j) \bigg\}\sum_{\k \vdash n}\Upsilon_m(\k)\Psi_u(\k) d^{n-1}u . \end{align} Thus, by \eqref{cumulant_12}, the integral~\eqref{cumulant_13} is uniformly bounded in $N$ for any $m \ge 1$. This property combined with the asymptotics \eqref{LLN_4} and formula \eqref{cumulant_9} implies that for all $n \ge 2$, \begin{equation} \label{cumulant_14} \operatorname{C}^n_{\widehat{K}^N_V}[f_N] = \operatorname{C}^n_{K^N_V}[f_N] + \gamma_1^n q_NN^{1-\alpha} \varrho_V(x_0) \int_\mathbb{R} f(x)^{n} dx + \O(q_N^2N^{1-\alpha} ) . \end{equation} Then, using the asymptotics \eqref{sine_2} and Soshnikov's main combinatorial lemma, it was proved in \cite{Lambert_a} that $$ \lim_{N\to\infty} \operatorname{C}^n_{K^N_V}[f_N] = \begin{cases} \displaystyle \int_{0}^\infty u \big| \hat{f}(u) \big|^2 du &\text{if } n=2 \\ 0 &\text{if } n \ge 3 \end{cases}. 
$$ Hence, the expansion \eqref{cumulant_14} yields the weak convergence results \eqref{Normal_4}--\eqref{crossover_4} in the different regimes. \qed \begin{appendices}
\section{Introduction} Music production has traditionally relied on a wide range of expertise far exceeding the capabilities of most individuals. Recent advancements in music technology and the parallel development of electronic music have brought the joy of music-making to the masses. Alongside the countless amateurs who enjoy expressing their creativity through music-making, a newly leveled playing field has emerged for professional and semi-professional musicians, who are able to independently produce music with no more than a personal computer. Technological advances have greatly simplified music production through systems such as Logic Pro\footnote{http://www.apple.com/logic-pro/}, Live\footnote{https://www.ableton.com/en/live/}, or even the freely available Garage Band\footnote{http://www.apple.com/mac/garageband/}. In addition to placing a wide range of instruments at our fingertips, without requiring the corresponding training, these systems provide a variety of mastering tools that enable the creation of high quality music with relative ease. Yet, despite this paradigm shift, the writing of songs - comprising both melody and lyrics - remains an exclusive domain. A careful examination of electronic music reveals these limitations. The genre relies heavily on remixing existing music, and much attention is paid to sound design, which does not require original composition or its juxtaposition with lyrics. The creation of a new song continues to rely on diverse skills rarely possessed by one individual. The score alone calls for a combination of poetic ability to create the lyrics, musical composition for the melody, and expertise in combining lyrics and melody into a coherent, meaningful and aesthetically pleasing whole. Once the score is complete, vocal skill and production expertise are needed to turn it into a finalized musical piece. 
While some may be satisfied with remixing existing music, the creation of truly original songs is often desired, in part due to restrictions associated with performing and remaking existing works. Can technology assist in the creation of original songs, allowing users to express their creativity through the skills they possess while letting technology handle the rest? We introduce ALYSIA, a songwriting system specifically designed to help both musicians and amateurs write songs, with the goal of producing professional quality music. ALYSIA focuses on one of the most challenging aspects of the songwriting process, enabling the human creator to discover novel melodies to accompany lyrics. To the best of our knowledge, ALYSIA is the first songwriting system to utilize formal machine learning methodology. This lets us evaluate the success of our model based on its ability to correctly predict the pitch and rhythm of notes in the test set (which was not used to train the model). On a corpus of pop music, the system achieves an impressive accuracy of 86.79\% on rhythms and 72.28\% on scale-degrees. In the generation phase, ALYSIA is given short lyrical phrases, for which it can then provide numerous melody options in the style of the given corpus. As it stands today, ALYSIA is a co-creative system that allows the user to retain a sense of creative control while eliminating one of the most essential skills traditionally required for songwriting: the ability to effectively explore the space of potential accompanying melodies. As a co-creative songwriting partner, ALYSIA assists human creators in the song-making process by filling in gaps in the user's expertise. Helping people be creative, particularly in a manner that aids their social engagement both on and offline, can have a substantial positive impact on their sense of well-being and happiness~\cite{gauntlett2013making}. 
Technological advancement has led to the radical expansion of those who are able to engage in music making, and made possible today's large online communities for sharing musical works. Widening access to original songwriting stands not only to ease the path to professional musicianship, but also makes it possible for more people to enjoy the psychological benefits of engaging in this creative and socially rewarding activity. Without any changes to the system, ALYSIA could be used to create complete melodies without human interference by simply selecting her first suggestion for each line of lyrics. However, a little human interaction can result in better music and gives the user creative freedom that would be lacking from even the best autonomous system. We begin with an examination of previous work and how it relates to our contributions, followed by a technical description of our system. We then move on to the application of ALYSIA to songwriting, starting with a detailed discussion of our proposed co-creative process. Finally, we showcase three musical pieces created with ALYSIA, and conclude with a discussion of our vision for the future of algorithmic songwriting. \section{Previous work} Algorithmic composition has a rich history dating back to the 1950s. A wide range of techniques have been applied to music generation (with no lyrics), spanning from expert systems to neural networks. See a survey by Fern{\'a}ndez and Vico~\cite{fernandez2013ai} for an excellent overview. Our focus here is on the creation of songs, which introduces a lyrical component to algorithmic composition. Algorithmic songwriting is relatively new, and so far the state of the art addresses it using Markov chains. For example, M.U. Sucus-Apparatusf~\cite{toivanen2013automatical} by Toivanen et al.\ is a system that was used for the generation of Finnish art songs. Rhythm patterns are randomly chosen from among those typically found in art songs. 
Chord progressions are subsequently generated by using second order Markov chains. Lastly, pitches are generated using a joint probability distribution based on chords and the previous note. M.U. Sucus-Apparatusf integrates the entire songwriting process, from lyric generation to melody and accompaniment. Yet, as with most previous systems, the major weakness is a lack of clear phrase structure \cite{toivanen2013automatical}. Another full-cycle automated songwriting system comes from the work of Scirea et al.~\cite{scirea2015smug}. Their songwriting system SMUG was used to write songs from academic papers. The methodology relies on the evolution of Markov chains. It uses a corpus of mixed-genre popular songs, and integrates several rules, such as the high probability of rests at the ends of words. Monteith et al.~\cite{monteith2012automatic} study the generation of melodic accompaniments. While utilizing a corpus, the system works by generating hundreds of corpus-driven random options guided by a few rules, and then chooses among them with an evaluation criterion that incorporates some musical knowledge. This process is applied for both the rhythm and melody generation. For example, for generating rhythms, the system produces 100 possible downbeat assignments. The text of each line is distributed across four measures, so four syllables are randomly selected to carry a downbeat. In other related works, Nichols~\cite{nichols2009lyric} studies a sub-component of the problem we address here, namely lyric-based rhythm suggestion. He considers the set of all accompanying rhythms, and defines a fully rule-based function to evaluate them, via considerations such as rare word emphasis through strong beats. As future work, Nichols suggests solving the rhythm generation process through machine learning techniques, which is one of our contributions here. 
Lastly, in the work of Oliveira~\cite{oliveira2015tra}, the pipeline is inverted and lyrics are generated based on the rhythm of the provided melody. Our approach does not encode any musical rules and uses random forests instead of Markov chains. The lack of directly encoded musical knowledge gives our system true genre independence. Musical genres are incredibly diverse, and rules that make sense for one genre can make it impossible to generate songs in another. For example, penalizing melodies with many repeating consecutive pitches is often a sensible choice, but it is unsuited to pop music, where such sequences are common. A machine learning model that does not explicitly encode musical expertise not only applies to existing genres, but can also adapt to all future musical genres, which can differ from prior music in unpredictable ways. Random forests offer some advantages over using Markov chains. One drawback of Markov chains is that while they can model likelihoods of sequences, they do not encapsulate the ``why'' of one note being chosen over another after the same initial sequence. For example, if note A is the first note, why is the next note B most of the time? Perhaps it is only B most of the time if the key signature is C-major, and it is actually F\# most of the time under a different context. Such context can be discovered and used in systems like random forests given a suitable feature set. Additionally, we rely on machine learning methodology to formally evaluate our model, which has not been done in previous work on automated songwriting. Another difference is that we rely on a co-creative process, which aids with our vision to create high quality music. Finally, ALYSIA is the first songwriting system whose songs were recorded and produced. \begin{comment} By the spring of 2016, ALYSIA had been developed, three pop songs written using the system, and a draft of this paper was complete. 
The songs and a brief description of the system have been posted since August 2016 (link omitted for double-blind review). In late September 2016, SONY released a pair of pop songs written using their Flow Machine system\footnote{\begin{scriptsize}\url{http://www.digitalmusicnews.com/2016/09/28/sony-pop-songs-artificial-intelligence/}{}\end{scriptsize}}, claiming to be the first pop songs made using artificial intelligence. Our work has been complete and posted online before SONY released their songs, and the research was done independently. Furthermore, we were unable to locate the scientific work related to the creation of these songs\footnote{SONY has published how Flow Machines were used to generated music without lyrics (see, for example, ~\cite{papadopoulos2016assisted} and \cite{pachet2016joyful}). In previous published works, Flow Machine were used to write lead sheets through a combination of Markov sequences and stochastic temporal constraints~\cite{papadopoulos2016assisted}. This suggests that the methodology used for SONY's pop songs is likely entirely different from ours. }. As such, the current paper is, to our best knowledge, the first scientific paper about the creation of recorded pop songs using artificial intelligence. \end{comment} \section{The making of ALYSIA} ALYSIA is an entirely data-driven system that is based upon a prediction model. We chose to construct two models. The first predicts note duration, which we refer to as the \emph{rhythm model}. The second predicts the scale degree of a note, with possible accidentals, which we refer to as the \emph{melody model}. Both models rely on a similar set of features, though depending on which model is run first, one model will have access to extra information. In our case, the melody model benefits from the predictions made by the rhythm model. Note that combining the models would increase the number of output classes, requiring a larger corpus. 
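As a rough illustration of this two-model setup (the paper's implementation is in R; the scikit-learn calls, feature layout, and toy data below are stand-ins, not the authors' code), the melody model can consume the rhythm model's prediction as an extra feature:

```python
# Sketch of two chained classifiers: a rhythm model predicting note duration
# and a melody model predicting scale degree, where the melody model also
# sees the rhythm prediction. Feature names and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200
# toy per-syllable features: beat strength, offset in measure, syllable number, word rarity
X = rng.random((n, 4))
durations = rng.integers(0, 3, n)      # e.g. {8th, quarter, half}
scale_degrees = rng.integers(1, 8, n)  # scale degrees 1..7

rhythm_model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, durations)

# the melody model gets the rhythm model's prediction as an extra feature
X_mel = np.column_stack([X, rhythm_model.predict(X)])
melody_model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_mel, scale_degrees)

def predict_note(features):
    """Predict (duration, scale_degree) for one syllable's feature vector."""
    features = np.asarray(features).reshape(1, -1)
    dur = rhythm_model.predict(features)[0]
    deg = melody_model.predict(np.column_stack([features, [[dur]]]))[0]
    return int(dur), int(deg)
```

Chaining in this order mirrors the design choice described above: the rhythm model runs first, so only the melody model benefits from the extra information.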
The most important component of our model is the set of features. The aim is to construct features that will allow the model to learn the mechanics and technicalities behind the structure of songs and their composition. Since we provide melodies for text provided by the user, our training set consists of vocal lines of music, and not arbitrary melodies. The corpus consists of Music-XML (MXL) files, with a single instrument corresponding to the vocal line and the accompanying lyrics. As such, each note has a corresponding syllable. The system consists of five distinct components. \begin{enumerate} \item The corpus \item Feature extraction \item Model building and evaluation \item User lyrics to features \item Song generation \end{enumerate} We now discuss each of these in detail. \subsection{Feature Extraction} In order to build a model, we first extract a set of features from the Music-XML (MXL) files. For each note, we extract the following features: \begin{itemize} \item First Measure - A boolean variable indicating whether or not the current note belongs to the first measure of the piece \item Key Signature - The key signature that applies to the current note \item Time Signature - The time signature that applies to the current note \item Offset - The number of beats since the start of the music \item Offset within Measure - The number of beats since the start of the measure \item Duration - The length of the note \item Scale Degree - The scale degree of the note (1-7) \item Accidental - The accidental of the note (flat, sharp, or none) \item Beat Strength - The strength of beat as defined by music21. We include the continuous and categorical version (Beat Strength Factor) of this variable. 
\item Offbeat - A boolean variable specifying whether or not the note is offbeat \item Syllable Type\textbf{*} - Classifies the current syllable in one of the following four categories: Single (the word consists of a single syllable), Begin (the current syllable is the first one in its word), Middle (the current syllable occurs in the middle of its word), End (the current syllable is the last one in its word). \item Syllable Number\textbf{*} - The syllable number within its word \item Word Frequency\textbf{*} - The word frequency of the word which the current note/syllable is part of. This value is obtained through the dictionary frequency as calculated by Python's NLTK (\href{http://www.nltk.org/}{http://www.nltk.org/}) library using the Brown corpus~\footnote{Brown corpus: \href{http://www.hit.uib.no/icame/brown/bcm.html}{http://www.hit.uib.no/icame/brown/bcm.html}}. \item Word Rarity\textbf{*} - A function of word frequency, as defined by \cite{nichols2009lyric}. $WordRarity = 2(1-\frac{\log_{10}(WordFrequency)}{7}).$ \item Word Vowel Strength\textbf{*} - The word vowel strength (primary, secondary, or none) as indicated by the CMUDict v0.07.\footnote{CMUDict v0.07, Carnegie Mellon University.} \item Number of Vowels in the Word\textbf{*} - The number of vowels found in the word corresponding to the current note/syllable. \item Scale Degree, Accidental, and Duration of previous 5 notes \end{itemize} The features marked with an * are generated from lyrics found within the vocal line. These same features must be generated using the user's lyrics when ALYSIA is used in generation mode. The features marked with an * were used by \cite{nichols2009lyric} as part of rule-based evaluation criteria for rhythm suggestion. \subsection{Model Building and Evaluation} Using R, we train two models using random forests. Random forests were chosen for several reasons. They are well-suited for large numbers of categorical variables. 
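To make the word-frequency and word-rarity features above concrete, here is a small self-contained sketch; the stand-in count table replaces the NLTK Brown-corpus lookup, so the counts are hypothetical:

```python
# Sketch of the Word Frequency / Word Rarity features. The paper derives
# frequencies from the Brown corpus via NLTK; a tiny stand-in count table
# keeps this example self-contained. The division by 7 follows the rarity
# formula of Nichols (2009) quoted above.
from math import log10

# hypothetical word counts (stand-ins, not real corpus values)
word_counts = {"the": 69971, "sunshine": 22, "love": 1017}

def word_frequency(word):
    # unseen words are treated as count 1 so log10 stays defined
    return word_counts.get(word.lower(), 1)

def word_rarity(word):
    """WordRarity = 2 * (1 - log10(WordFrequency) / 7)."""
    return 2 * (1 - log10(word_frequency(word)) / 7)
```

With this scaling, a word never seen in the corpus scores the maximum rarity of 2, while very common words score close to 0.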
Additionally, they allow non-linearity in combining features, without an explosion in the required size of the training set, in order to avoid overfitting. Non-linearity makes random forests more powerful than linear models such as logistic regression. We randomly split the data using stratified sampling into a training set (75\% of the data) with the rest for testing. The accuracy of the rhythm model on test data was 86.79\%. Figure~\ref{fig:confRhythm} shows the confusion matrix. The row labels correspond to the durations found in the test set, and column labels are the predicted labels. The number after the underscore corresponds to the number of dots in the durations. So, for example, a dotted sixteenth note is 16th\_1. As you can see in the confusion matrix, the model tends to perform better on rhythms which occur more frequently in the data set. A larger data set will enable us to test whether the trend will continue. \begin{figure*} \advance\leftskip-2cm \includegraphics[width = 16cm]{rhythmModel.jpg} \caption{Confusion matrix for Rhythm Model. Cell ($i$,$j$) in the above table specifies how often a note with duration type $i$ was classified as having duration $j$. The last column shows the error rate for each duration type (NaN indicating that no notes of this duration were found in the corpus).} \label{fig:confRhythm} \end{figure*} The accuracy of the melody model was 72.28\%. Figure~\ref{fig:confMelody} shows the confusion matrix. The numbers correspond to the scale degrees, and the label after the underscore represents the accidentals. So, for example, 1\_none represents the first scale degree without accidentals. As with the rhythm model, we see that accuracy is higher when there are more examples. Lastly, our machine learning model allows us to evaluate the importance of each feature. 
Figure~\ref{fig:modelDecAccuracy} depicts the mean decrease Gini (which measures node impurity in tree-based classification) of each feature in the rhythm model (left) and the melody model (right). In both models, all 5 previous notes' scale degrees are some of the most important features. This suggests that including features for additional previous notes may be helpful. This also explains why Markov chains seem to do well for this task. However, we see that other features are also prominent. For example, the rhythm model is most heavily influenced by beat strength, revealing the potential of using a data-driven approach over Markov chains, which do not give access to such features. Finally, it is important to note that lyrical features such as word rarity and frequency also play an important role. \begin{figure*} \advance\leftskip-2cm \includegraphics[width = 16cm]{Confusion_melody} \caption{Confusion matrix for Melody Model. Cell ($i$,$j$) in the above table specifies how often scale degree $i$ was classified as scale degree $j$. The last column shows the error rate for each scale degree.} \label{fig:confMelody} \end{figure*} \begin{figure*} \advance\leftskip-1cm \includegraphics[angle=90, height = 22.5cm]{rhythmMelodyGini} \caption{Importance plot for the rhythm model (left) and the melody model (right), representing the significance of each feature in the respective models.} \label{fig:modelDecAccuracy} \end{figure*} \subsection{User Lyrics to Features} In order to generate music for a new set of lyrics using our models, we need to extract the same set of lyric features we were using for our models. This was done using a Python script, which outputs a feature matrix for importing into R. \subsection{Song Generation} Using R, we loop through the lyric feature set one sentence at a time. For each line, we read the feature set from the lyrics given by the user and generate the rhythm, followed by the melody. 
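This per-line generation loop can be sketched as follows; a minimal stand-alone version, where the `rhythm_model`/`melody_model` callables and the feature layout are hypothetical stand-ins, not ALYSIA's actual R code:

```python
# Sketch of the generation loop: walk a line's syllables, predict one note
# at a time, and maintain a rolling window of the five previous notes as
# extra model features.
from collections import deque

def generate_line(syllable_features, rhythm_model, melody_model, history_len=5):
    # pad the history with neutral (duration, degree) placeholders
    history = deque([(0, 1)] * history_len, maxlen=history_len)
    notes = []
    for feats in syllable_features:
        context = [v for note in history for v in note]  # flatten previous notes
        dur = rhythm_model(feats + context)              # rhythm first
        deg = melody_model(feats + context + [dur])      # melody sees the rhythm
        notes.append((dur, deg))
        history.append((dur, deg))
    return notes
```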
Note that we generate one note at a time for each syllable in the lyrical phrases. We also keep track of 5 previous notes, since these are very important features in the model. Finally, we generate a file detailing the generated melodies for each line. There may be many depending on how many melodies per line were requested. \subsection{Corpus} Our corpus consists of 12,358 observations on 59 features that we extract from MXL files of 24 pop songs. From each song, only the melody lines and lyrics were used. For model building and evaluation, the data is split using stratified sampling along the outcome variable, which is scale degree with accent for the melody model, and note duration for the rhythm model. Seventy-five percent of the data is used for training, while the remaining data is used for evaluation. Please note that unlike MP3 files, MXL files containing lyrics are much more difficult to acquire, limiting the size of our data. Previous work on automated songwriting at times does not use any training (it is instead rule-based), or trains separately on lyrics and melodies - which eliminates the possibility of learning how they interact. The size of the training set in previous work is often omitted\footnote{Due to the difficulties in attaining MXL files, we are currently creating a new larger corpus of songs, which will be analyzed in future work.}. \subsection{Parameters} When used in generation mode, ALYSIA allows the user to tune the following parameters: \subsubsection{Explore/Exploit} This parameter determines how heavily we lean on the model. Specifically, for each scale degree/note duration we generate, our model outputs a distribution over all possible outcomes. This parameter determines how many independent draws we make from this distribution. The final resulting note is the most common draw, with ties being broken by the original outcome distribution. 
That is, if scale degrees 1 and 2 are tied after four draws, we take the scale degree that was originally more likely in the distribution output by the model for this data point. Given this definition, a higher explore/exploit parameter value favors exploitation, since we will almost always output the scale degree or duration that has the highest probability of occurring. This parameter allows us to favor the scale degrees and durations that are most likely, versus potentially taking a more varied approach and generating music that could be considered more experimental. \subsubsection{Melody Count} We generate a set number of melodies per line, outputting them to a file for later translation into MIDI and MXL files. \subsubsection{Rhythm Restriction} We allow the user to restrict the possible rhythmic outcomes if they would like to omit certain note durations. For example, the user may want to disallow whole notes, or some faster notes such as a 32nd note. \newpage \section{The Co-Creative Process of Songwriting with ALYSIA} In this section, we describe the co-creative process of songwriting with ALYSIA. This is a new approach to writing songs that requires minimal to no musical training. ALYSIA makes it easy to explore melodic lines, reducing songwriting to the ability to select melodies based on one's musical taste. The user provides ALYSIA with the lyrics broken down into separate lines. Then, the system gives the specified number of melodies (notes/rhythm combinations) to which the given lyrics can be sung. The number of melodies per line is specified by the user. ALYSIA outputs the melodies in both MXL and MIDI format. The MIDI form is particularly helpful, since it allows the user to play the variations, select the desired one, and directly utilize the MIDI to produce the song. We now describe the workflow we used with ALYSIA. We typically ask for between 15 and 30 melodic variations per line of text. 
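The explore/exploit draw described in the Parameters section amounts to majority voting over repeated samples from the model's output distribution, with ties broken by the original probabilities; a minimal sketch (not ALYSIA's R implementation):

```python
# Sketch of the explore/exploit draw: sample the class distribution k times,
# output the majority class, and break ties in favour of the class that was
# more likely in the original distribution.
import random
from collections import Counter

def explore_exploit_draw(distribution, k, rng=None):
    """distribution: {outcome: probability}; k: the explore/exploit parameter."""
    rng = rng or random.Random()
    outcomes = list(distribution)
    weights = [distribution[o] for o in outcomes]
    draws = rng.choices(outcomes, weights=weights, k=k)
    counts = Counter(draws).most_common()
    best_count = counts[0][1]
    tied = [o for o, c in counts if c == best_count]
    # tie-break by the model's original probability
    return max(tied, key=lambda o: distribution[o])
```

With k = 1 this reduces to a single sample from the distribution (pure exploration); as k grows, the majority vote concentrates on the most likely outcome (exploitation).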
Among these, we select between three and ten melodic lines. It should be noted that nearly all of ALYSIA's suggestions are reasonable. For example, Figure~\ref{fig:variations} lists the first 12 melody options provided by ALYSIA for the phrase \emph{everywhere I look I find you looking back.} All of these melodies work fairly well with the underlying text and could be extended into full songs. One may ask why we choose to look at 15 to 30 options if all are reasonable. Having a variety of options can lead to better quality songs while also enabling artists to incorporate their own musical preferences, without possessing the composition and text/melody juxtaposition skills traditionally required to engage in this art form. When making our selections, we look for melodies that are independently interesting, and have intriguing relationships with the underlying text. We search for lines that match the emotional meaning of the text (happy or sad), as well as interesting word emphasis. For example, if the word ``sunshine'' appears in the text, we may select a melody that rises with this word (see Figure~\ref{fig:Rainbows}). ALYSIA often suggests melodic variations for which we find interesting explanations. For example, in the original song \emph{Why Do I Still Miss You} (see Figure~\ref{fig:WhyDoIScore}), ALYSIA suggested a melody where the phrase ``went wrong'' is on the lower end of the scale, giving these words an appropriately dark interpretation. The next step involves combining the melodic lines to form the complete song. This takes some trial and error, and several interesting variations can typically be attained. The most efficient approach we have found is to construct the song around a few melodic lines. Say, if there are particularly interesting melodies for lines 1 and 3, then lines 2 and 4 are chosen to fit with these. Having multiple options allows the artist to create several variations for verses and chorus repetitions, as often found in songs. 
\noindent% \begin{minipage}{\linewidth}% \makebox[\linewidth]{ \includegraphics[width=11cm]{firstline.jpg}} \vspace{-15mm} \captionof{figure}{The first 12 melodies generated by ALYSIA to accompany the first line of the song \emph{Everywhere} (the full song appears in the following section). All melodies are sensible and could be used towards the creation of a complete melody line. Having many reasonable options allows the user to incorporate their musical taste into songwriting with ALYSIA.} \label{fig:variations} \end{minipage} \newpage \section{Songs made with ALYSIA} Three songs were created using ALYSIA. The first two rely on lyrics written by the authors, and the last borrows its lyrics from a 1917 Vaudeville song. The songs were consequently recorded and produced. All recordings are posted on \href{http://bit.ly/2eQHado}{http://bit.ly/2eQHado}. The primary technical difference between these songs was the explore/exploit parameter. \emph{Why Do I Still Miss You} was made using an explore vs exploit parameter that leads to more exploration, while \emph{Everywhere} and \emph{I'm Always Chasing Rainbows} were made with a higher exploit setting. \subsection{\emph{Why Do I Still Miss You}} \emph{Why Do I Still Miss You}, the first song ever written with ALYSIA, was made with the goal of testing the viability of using computer-assisted composition for production-quality songs. The creation of the song started with the writing of original lyrics, written by the authors. The goal was to come up with lyrics that stylistically match the style of the corpus, that is, lyrics in today's pop genre (See Figure~\ref{fig:WhyDoIScore}). Next, each lyrical line was given to ALYSIA, for which she produced between fifteen and thirty variations. In most cases, we were satisfied with one of the options ALYSIA proposed within the first fifteen tries. We then combined the melodic lines. See Figure~\ref{fig:WhyDoIScore} for the resulting score. 
Subsequently, the song was recorded and produced by one of the authors, who is a professionally trained singer. A recording of this song can be heard at \href{http://bit.ly/2eQHado}{http://bit.ly/2eQHado}. \vspace{4mm} \noindent% \begin{minipage}{\linewidth}% \makebox[\linewidth]{ \includegraphics[width=10cm]{WhyDoIStillMissYou-short.jpg}} \vspace{-3mm} \captionof{figure}{An original song co-written with ALYSIA.} \label{fig:WhyDoIScore} \end{minipage} \vspace{2mm} This experiment, intentionally spanning the entire songwriting process from the lyrics to the final production, demonstrates the viability of our approach for making production-worthy works. It also shows that ALYSIA makes songwriting accessible to those with no songwriting ability, as the song was written by one of the authors, who does not possess this skill unassisted by our computer system. Informal assessment by the authors and many others consistently points to the surprising quality of this song, gauging it as having quality on par with that of fully human-made songs. Future user studies will help to formally assess this claim. For this song, we used an explore/exploit parameter of 1, meaning that the distribution of notes output by the model was sampled only once. With regard to improving the application of our system, the making of \emph{Why Do I Still Miss You} suggested that we should increase the exploit parameter. This was particularly important for the rhythm suggestion, which we found to be too varied. As such, \emph{Why Do I Still Miss You} fully utilized the melody model, while making some manual adjustment to the rhythm. We stress that even with this sub-optimal explore/exploit setting, ALYSIA made songwriting accessible to someone who previously could not engage in this art form. 
The two subsequent songs discussed below exploit our model more heavily, sampling four times for the rhythm model and two times for the melody model and selecting the majority, which corresponds to explore/exploit parameters of 4 and 2, respectively. Heavier reliance on the model allowed us to use the rhythm suggestions without alteration and also raised the proportion of high-quality scale degree sequences. \subsection{\emph{Everywhere}}\label{sec:Everywhere} \emph{Everywhere} is the first original song written with ALYSIA using a high setting of the exploit parameter. This made it even easier to discover new melodies. In fact, so many of ALYSIA's melodies were interesting that combining melodies became more time consuming (of course, one could simply consider a smaller set of melodies per line to overcome this challenge). Nevertheless, the writing of \emph{Everywhere} from start to finish took only about two hours. Figure~\ref{fig:everywhere} shows the score co-created with ALYSIA. A recording appears on \href{http://bit.ly/2eQHado}{http://bit.ly/2eQHado}. \subsection{\emph{I'm Always Chasing Rainbows}} The final song made with ALYSIA so far relies on lyrics from the song \emph{I'm Always Chasing Rainbows}. The original music is credited to Harry Carroll, although the melody was in fact adapted from Chopin's Fantaisie-Impromptu. The lyrics were written by Joseph McCarthy. The song's appearance in Broadway musicals and motion pictures contributed to its popularity. As it was published in 1917, \emph{I'm Always Chasing Rainbows} is in the public domain. Due to its Vaudeville style and classical roots, we thought it would be interesting to recreate \emph{I'm Always Chasing Rainbows} as a pop song. ALYSIA's version is given in Figure~\ref{fig:Rainbows}. A production of ALYSIA's pop version of this Vaudeville song has been posted on \href{http://bit.ly/2eQHado}{http://bit.ly/2eQHado}, where you can also find the original score.
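The explore/exploit sampling used for these songs can be sketched as follows. This is an illustrative reconstruction, not ALYSIA's released code; the toy distribution over scale degrees and the function name are ours.

```python
import random
from collections import Counter

def sample_with_exploit(distribution, exploit=1, rng=random):
    """Sample `exploit` times from a categorical distribution over
    candidate notes (or rhythms) and return the majority choice.
    exploit=1 reproduces pure sampling; larger values bias the
    result toward high-probability options."""
    labels, weights = zip(*distribution.items())
    draws = rng.choices(labels, weights=weights, k=exploit)
    # Majority vote; ties are broken by first occurrence.
    return Counter(draws).most_common(1)[0][0]

# Hypothetical distribution over scale degrees for one syllable.
dist = {"1": 0.5, "3": 0.3, "5": 0.2}
random.seed(0)
melody_choice = sample_with_exploit(dist, exploit=2)  # melody-model setting
rhythm_choice = sample_with_exploit(dist, exploit=4)  # rhythm-model setting
```

With `exploit=1` every draw is kept, maximizing exploration; raising the parameter makes unlikely notes progressively harder to select, which is why the rhythm suggestions became usable without alteration.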
\noindent% \begin{minipage}{\linewidth}% \makebox[\linewidth]{ \includegraphics[width=9cm]{Everywhere.jpg}} \vspace{-15mm} \captionof{figure}{An original song co-written with ALYSIA.} \label{fig:everywhere} \end{minipage} \section{Discussion: Co-Creative and Autonomous Songwriting} Songwriting is the art of combining melodies and lyrics; it is not enough to have beautiful melodies and poetic lyrics: the music and words must fit together into a coherent whole. This makes Algorithmic Songwriting a distinct sub-field of Algorithmic Composition. ALYSIA is the first machine-learning system that learns the relationship between melodies and lyrics and uses the resulting model to create new songs in the style of the corpus. The unique challenges of songwriting were observed during the creation of the first musical to rely on computational creativity systems, \emph{Beyond the Fence}, which played in London in the early months of 2016~\cite{jordanous2016has}. During the panel on \emph{Beyond the Fence} held at the Seventh International Conference on Computational Creativity, it was noted that the juxtaposition of music and text posed a substantial challenge in the making of this musical. This further illustrates the need for systems that focus on uncovering the complex relationships between music and lyrics. \vspace{-1mm} \noindent% \begin{minipage}{\linewidth}% \makebox[\linewidth]{ \includegraphics[width=9cm]{ChasingRainbows-ALYSIA.jpg}} \captionof{figure}{Melody and rhythm by ALYSIA, lyrics by Joseph McCarthy from the popular Vaudeville song \emph{I'm Always Chasing Rainbows}.} \label{fig:Rainbows} \end{minipage} \vspace{2mm} Algorithmic songwriting offers intriguing challenges as both an autonomous and a co-creative system. An autonomous songwriting system producing works on par with those of expert human songwriters would mark a significant achievement. Yet, we can go beyond the score.
What if, in addition to writing the song, an automated system could also perform and record its own compositions? A truly independent system would not only create a score, but incorporate the full spectrum of expertise required for the creation of a complete song, including the vocal performance, expressive rendition, and automated music production. As we aspire to create autonomous songwriters, artists and hobbyists alike are thirsty for our help. Even if we had access to a fully autonomous songwriter, it would not replace the need for a corresponding co-creative system. Whereas an autonomous songwriter could be used when complete works are desired, a co-creative variation would satisfy the human need for music making - much like the difference between the joys of eating and cooking. A co-creative algorithmic songwriter would expand our creative repertoire, making songwriting accessible to those who cannot otherwise enjoy this art-form, and would be of great use to semi-professional and amateur musicians (particularly singers) who do not have the luxury of hiring a songwriter. In the development of a co-creative songwriting system, it is desired that the users retain creative control, allowing them to claim ownership of the resulting works, or at least experience the process as an expression of creativity. The goal is to relieve the burden of having to master all of the diverse skills needed for the creation of a song, giving users the freedom to focus on the aspects of the creative process in which they either specialize or find most enjoyable. From a technical viewpoint, the goals of autonomous and co-creative algorithmic songwriters converge - the primary difference being that a co-creative system allows for more human interaction. ALYSIA is a long-term project in which we will simultaneously explore both of these paths as the system's technical foundation continues to expand.
Short-term goals for ALYSIA include the automation of chord progressions that would underlie the melody. User studies are being planned in which people at different levels of musical expertise will evaluate the songs made with ALYSIA. To further our machine learning methodology, we are putting together corpora of songs in MusicXML format. For several different genres, a large corpus will be used to train a model using a neural network, which requires large data sets to avoid over-fitting. This will also let us explore differences in melodies resulting from different machine learning models. We are also experimenting with our system's potential for songwriting in different genres and languages, ranging from the English pop songs discussed here to the creation of new arias in the style of the classical Italian composer Giacomo Puccini. \vspace{-2mm} \bibliographystyle{plain}
\section{Introduction} The novel coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), that emerged in late 2019 belongs to the coronaviridae family. SARS-CoV-2 has an unparalleled capacity for human-to-human transmission and caused the COVID-19 pandemic. Having witnessed two recent epidemics caused by coronaviridae, namely SARS-CoV in 2002 and MERS-CoV in 2012, there was immediate research interest in studying the zoonotic origin, transmissibility, mutations, and variants of SARS-CoV-2~\cite{kuzmin2020machine,Laporte2020TheSA}. SARS-CoV-2 has a positive-strand RNA genome of about 30Kb and encodes two categories of proteins: structural and non-structural (see Figure~\ref{fig_spike_seq_example}). The spike protein is one of the major structural proteins of the virus, having $1160$ to $1542$ amino acids. The spike protein's primary function is to serve as the mechanism by which the virus enters the human cell, by binding the ACE2 receptor. Detailed study of the structure of the spike glycoprotein unveils the molecular mechanism behind host cell recognition, attachment, and entry. Notably, the spike glycoprotein of SARS-CoV-2 has two subunits, $S_1$ and $S_2$, belonging to the N and C terminals, respectively~\cite{galloway2021emergence,kuzmin2020machine}. The receptor binding domain (RBD) ($\approx$ $220$ amino acids) of the $S_1$ subunit helps the virus attach to the host cell by binding the ACE2 receptor protein, and the $S_2$ subunit helps the virus insert into the cell. SARS-CoV-2 continues to mutate over time, resulting in changes in its amino acid sequences. Changes in the spike protein's amino acids, specifically in the RBD, make the virus more transmissible and more adaptable to the human immune system.
In the language of phylogenetics and virology, the virus is creating new variants and strains by accruing new amino acid changes in the spike protein and its genome \cite{galloway2021emergence,naveca2021phylogenetic,yadav2021neutralization,zhang2021emergence}. The state-of-the-art mRNA vaccines train the host immune system to create specific antibodies that bind to the spike protein, preventing the virus from entering the host cell. Therefore, changing amino acids in the spike protein generates new variants, which could potentially be more contagious and more resistant to vaccines~\cite{Krishnan2021PredictingVaccineHesitancy}. \begin{figure}[!ht] \centering \includegraphics[scale=0.4,page=1]{Figures/spike_sequence_example_figure.pdf} \caption{The SARS-CoV-2 genome is around 29--30kb, encoding structural and non-structural proteins. ORF1ab encodes the non-structural proteins, and the four structural proteins (spike, envelope, membrane, and nucleocapsid) are encoded by their respective genes. The spike protein has 1160 to 1542 amino acids.} \label{fig_spike_seq_example} \end{figure} In-depth studies of alterations in the spike protein, to classify and predict amino acid changes in SARS-CoV-2, are crucial in understanding the immune evasion and host-to-host transmission properties of SARS-CoV-2 and its variants. Knowledge of mutations and variants will help identify transmission patterns of each variant, which will help devise appropriate public health interventions to prevent rapid spread \cite{Ahmad2016AusDM,ahmad2017spectral,Tariq2017Scalable,AHMAD2020Combinatorial}. This will also help in vaccine design and efficacy. A massive amount of genomic sequence data for SARS-CoV-2 is available from different parts of the globe, along with various clinical, epidemiological, and pathological information, in the GISAID open-source database\footnote{\url{https://www.gisaid.org/}}.
In this study, we design machine learning models that leverage open-source genomic data and metadata to understand, classify, and predict the changes in amino acids in SARS-CoV-2, most notably in its spike protein~\cite{Krishnan2021PredictingVaccineHesitancy,Laporte2020TheSA,Lokman2020ExploringTG}. When sequences have the same length and are aligned, i.e., a one-to-one correspondence between positions or indices of the sequences is established, machine learning methods devised for vectors in metric spaces can be employed for sequence analysis. This approach treats sequences as vectors, considering each character (e.g., amino acid or nucleotide) as the value of the vector at a coordinate, e.g., using one-hot encoding~\cite{kuzmin2020machine}. In this case, however, the order of positions loses its significance. Since the order of indices in sequences is a defining factor, ignoring it may result in performance degradation of the underlying machine learning models. In representation-based data analysis, on the other hand, each data object is first mapped to a vector in a fixed-dimensional vector space, taking into account important structure in the data (such as order). Vector space machine learning algorithms are then used on the vector representations of sequences. This approach has been highly successful in the analysis of data from various domains such as graphs~\cite{hassan2020estimating,Hassan2021Computing}, nodes in graphs~\cite{ali2021predicting}, electricity consumption~\cite{ali2019short,Ali2020ShortTerm}, and images~\cite{Bo_ImageKernel}.
This approach yields significant success in sequence analysis, since the feature representation takes into account the sequential nature of the data, such as texts~\cite{Shakeel2020LanguageIndependent,Shakeel2020Multi,Shakeel2019MultiBilingual}, electroencephalography and electromyography sequences~\cite{atzori2014electromyography,ullah2020effect}, networks~\cite{Ali2019Detecting1}, and biological sequences~\cite{leslie2002mismatch,farhan2017efficient,Kuksa_SequenceKernel,ali2021effective}. For biological sequences (DNA and protein), a feature vector based on counts of all length-$k$ substrings (called $k$-mers) occurring exactly or inexactly up to $m$ mismatches (mimicking biological mutations) is proposed in~\cite{leslie2002mismatch}. The kernel value between two sequences --- the dot product between the corresponding feature vectors --- serves as a pairwise proximity measure and is the basis of kernel-based machine learning. We provide the technical definition of the feature maps and the computational aspects of kernel computation in Section~\ref{sec_proposed_approach}. In this paper, our contributions are the following: \begin{enumerate} \item We propose a method based on $k$-mers (for feature vector generation) and kernel approximation (to compute pairwise similarity between spike sequences) that classifies the variants with very high accuracy. \item We show that spike sequences alone can be used to efficiently classify different COVID-19 variants. \item We show that the proposed method outperforms the baseline in terms of accuracy, precision, recall, F1-measure, and ROC-AUC. \item We show that with only $1\%$ of the data for training, we can achieve high prediction accuracy. \end{enumerate} The rest of the paper is organised as follows: Section~\ref{related_work} contains previous work related to our problem. Section~\ref{sec_proposed_approach} describes the proposed approach in detail, along with a description of the baseline model.
Section~\ref{sec_data_set_detail} contains information related to the different datasets. We show our results in Section~\ref{sec_results_and_discussion}. Finally, we conclude the paper in Section~\ref{sec_conclusion}. \section{Related Work}\label{related_work} Phylogeny-based inference of disease transmission~\cite{Dhar2020TNet} and sequence homology (shared ancestry) detection between a pair of proteins are important tasks in bioinformatics and biotechnology. Sequence classification is a widely studied problem in both of these domains~\cite{Krishnan2021PredictingVaccineHesitancy}. Most sequence classification methods for viruses require the alignment of the input sequence to fixed-length predefined vectors, which enables the machine learning algorithm to compare homologous feature vectors~\cite{Dwivedi2012ClassificationOH}. Pairwise local and global alignment similarity scores between sequences were traditionally used for sequence analysis~\cite{Chowdhury2017MultipleSequences}. Alignment-based methods are computationally expensive, especially for long sequences; heuristic methods require a number of ad-hoc settings, such as penalties for gaps and substitutions; and alignment methods may not perform well on highly divergent regions of the genome. To address these limitations, various alignment-free classification methods have been proposed \cite{farhan2017efficient,Chang2014ANA}. The use of $k$-mer (substrings of length $k$) frequencies for phylogenetic applications started with~\cite{Blaisdell1986AMeasureOfSimilarity}, which reported success in constructing accurate phylogenetic trees from several coding and non-coding nDNA sequences. Typically, $k$-mer frequency vectors are paired with a distance function to measure the quantitative similarity score between any pair of sequences~\cite{Zielezinski2017AlignmentfreeSC}. However, the basic bottleneck for these techniques is the quadratic (in the lengths of the sequences) runtime of kernel evaluation.
Farhan et al.~\cite{farhan2017efficient} proposed an efficient kernel computation algorithm (see Theorem~\ref{thm_kernel}). They provide an approach to compute an approximate pairwise proximity matrix that can be used for various machine learning tasks, such as classification through kernel PCA~\cite{hoffmann2007kernel}. With the global outbreak of COVID-19 in 2020, different mutations of its variants were discovered by the genomics community. Massive testing and large-scale sequencing produced a huge amount of data, creating ample opportunity for the bioinformatics community. Researchers started exploring topics ranging from the evolution of SARS-CoV-2~\cite{Ewen2021TargetedSelf} to vaccine landscapes~\cite{su2021learning} and the long-term effects of COVID-19 on patients~\cite{tankisi2020critical}. In~\cite{Laporte2020TheSA}, the authors indicate how the coronavirus spike protein is fine-tuned towards the temperature and protease conditions of the airways to enhance virus transmission and pathology. After the global spread of the coronavirus, researchers started exploring ways to identify new variants and to measure vaccine effectiveness. In~\cite{Leila2021Genotype}, the authors study genome structures of SARS-CoV-2 and its previous versions, while~\cite{Lokman2020ExploringTG} explores the genomic and proteomic variants of the SARS-CoV-2 spike glycoprotein. It is also important to study the conditional dependencies between the attributes (amino acids) and the class label (if any such dependency exists). Studying these dependencies can help find any (hidden) relationships, which in turn can help in the analysis of the protein sequences. \section{Proposed Approach}\label{sec_proposed_approach} Given a set of spike sequences, our goal is to find the similarity score between each pair of sequences (the kernel matrix). The resultant kernel matrix is given as input to the kernel PCA method for dimensionality reduction.
The reduced-dimensional, principal-components-based feature vector representation is given as input to classical machine learning models. We discuss each step in detail below. \subsection{$k$-mers Computation} For mapping protein sequences to fixed-length vectors, it is important to preserve the order of the amino acids within a sequence. To preserve the order, we use substrings (called mers) of length $k$. For each spike sequence, the total number of $k$-mers is \begin{equation} \text{Total number of $k$-mers} = N - k + 1 \end{equation} where $N$ is the total number of amino acids in the spike sequence ($1274$), and $k$ is a user-defined parameter for the size of each mer. An example of $k$-mers (where $k = 4$) is given in Figure \ref{fig_k_mer_demo}. \begin{figure}[!ht] \centering \includegraphics[scale = 0.4] {Figures/k_mers_demo.png} \caption{Example of $k$-mers (where $k = 4$) in a spike sequence "MFVFVFVLPLV".} \label{fig_k_mer_demo} \end{figure} However, since we do not know the total number of unique $k$-mers across all spike sequences, we consider all possible $k$-mers to design a general-purpose feature vector representation for spike sequences in a given dataset. Given an alphabet $\Sigma$ (a finite set of symbols), a spike sequence $X$ is a string over $\Sigma$ (an ordered list of amino acids from $\Sigma$). We extract from $X$ its substrings of length $k$, which we call $k$-mers. Given $X$, $k$, and $\Sigma$, we design a frequency feature vector $\Phi_k (X)$ of length $\vert \Sigma \vert^k$, which contains the exact number of occurrences of each possible $k$-mer in $X$. The distance between two sequences $X$ and $Y$ is then simply the Hamming distance $d_H$ between their feature vectors (the number of mismatched values). After computing the feature vectors, a kernel function is defined that measures the pairwise similarity between pairs of feature vectors (usually using the dot product).
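The $k$-mer extraction and frequency-vector construction just described can be sketched as follows. This is an illustrative sketch (function names are ours), using the toy sequence from Figure~\ref{fig_k_mer_demo}.

```python
from collections import Counter
from itertools import product

def extract_kmers(sequence, k):
    """Return all overlapping substrings of length k; a sequence of
    length N yields exactly N - k + 1 of them."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

def frequency_vector(sequence, k, alphabet):
    """Map a sequence to a |alphabet|^k-dimensional count vector, one
    coordinate per possible k-mer (ordered lexicographically)."""
    counts = Counter(extract_kmers(sequence, k))
    return [counts["".join(p)] for p in product(sorted(alphabet), repeat=k)]

# Toy sequence of Figure 3: 11 amino acids, k = 4 gives 11 - 4 + 1 = 8 k-mers.
kmers = extract_kmers("MFVFVFVLPLV", 4)
assert len(kmers) == 11 - 4 + 1
```

Note that materializing $\Phi_k(X)$ explicitly is feasible only for small alphabets and small $k$; with $\vert \Sigma \vert = 21$ and $k = 9$ the vector would have $21^9 \approx 8 \times 10^{11}$ coordinates, which is why the kernel is evaluated directly instead.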
The remaining problem is the huge dimensionality $\vert \Sigma \vert^k$ of the feature vector, which can make kernel computation very expensive. Therefore, in the so-called {\em kernel trick}, kernel values are evaluated directly instead of explicitly comparing feature vector coordinates. The algorithm proposed in \cite{farhan2017efficient} takes the feature vectors (containing the count of each $k$-mer) as input and returns a real-valued similarity score between each pair of vectors. Given two feature vectors $A$ and $B$, the kernel value for these vectors is simply the dot product of $A$ and $B$. For example, given a $k$-mer, if the frequency of that $k$-mer in $A$ is $2$ and in $B$ is $3$, its contribution towards the kernel value of $A$ and $B$ is simply $2 \cdot 3$. The process of kernel value computation is repeated for each pair of sequences, and hence we get a (symmetric) kernel matrix containing a similarity score between each pair of sequences. \begin{theorem}\label{thm_kernel} The runtime of the kernel computation is bounded above by $O(k^2\, n \log n)$ \cite{farhan2017efficient}, where $k$ is the length of the $k$-mers and $n$ is the length of the sequences. \end{theorem} Note that $k$ is a user-defined parameter --- in our experiments we use $k = 9$. \subsection{Kernel PCA} Due to the high dimensionality of the kernel matrix, we use kernel PCA (K-PCA) \cite{hoffmann2007kernel} to select a subset of principal components. The extracted principal components corresponding to each spike sequence act as the feature vector representations of the spike sequences (we selected 50 principal components for our experiments). \subsection{Machine Learning Classifiers} The K-PCA output, consisting of $50$ components, is fed to various machine learning classifiers for prediction.
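For small toy inputs, the exact kernel matrix and the subsequent K-PCA projection can be computed directly; the following sketch is our own (a plain eigendecomposition of the double-centered kernel, not the approximation algorithm of~\cite{farhan2017efficient}) and is meant only to illustrate the two steps.

```python
import numpy as np

def kernel_matrix(vectors):
    """Exact kernel: pairwise dot products of k-mer frequency vectors."""
    X = np.asarray(vectors, dtype=float)
    return X @ X.T

def kernel_pca(K, n_components):
    """Project onto the leading principal components in feature space by
    eigendecomposing the double-centered kernel matrix."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                        # center the kernel
    vals, vecs = np.linalg.eigh(Kc)       # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    # Coordinates of each sequence along the leading components.
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))

# Toy example: 4 sequences represented by 3-dimensional frequency vectors.
K = kernel_matrix([[2, 0, 1], [1, 1, 0], [0, 2, 1], [1, 0, 2]])
emb = kernel_pca(K, n_components=2)  # analogous to the paper's 50 components
```

Note that `K[0, 1]` here is $2 \cdot 1 + 0 \cdot 1 + 1 \cdot 0 = 2$, matching the per-$k$-mer contribution rule described above.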
We use Support Vector Machine (SVM), Naive Bayes (NB), Multi-Layer Perceptron (MLP), K-Nearest Neighbour (KNN) (with $K = 5$), Random Forest (RF), Logistic Regression (LR), and Decision Tree (DT) classifiers. For the training of the classifiers, default parameters from the literature are used. All experiments are done on a Core i5 system with Windows 10 and 32 GB RAM. Our algorithm is implemented in Python. Our code and pre-processed datasets are available online\footnote{\url{https://github.com/sarwanpasha/covid_variant_classification}}. For evaluation of the results, we use the Weka software\footnote{\url{https://www.cs.waikato.ac.nz/ml/weka/}}. The evaluation metrics that we use to measure the goodness of the classifiers are average accuracy, precision, recall, F1 (weighted), F1 (macro), and the area under the receiver operating characteristic curve (ROC-AUC). \subsection{Baseline Model} We consider the approach of~\cite{kuzmin2020machine} as the baseline model. The authors of~\cite{kuzmin2020machine} convert spike sequences into one-hot encoding vectors that are used to classify the viral hosts. We have the 21 amino acids ``\textit{ACDEFGHIKLMNPQRSTVWXY}'' (the unique symbols forming $\Sigma$). The length of each spike sequence is $1273$ (with $*$ at the $1274^{th}$ location). After converting sequences into one-hot encoding vectors, we get a $26,733$-dimensional vector ($21 \times 1273 = 26,733$). Principal Component Analysis (PCA) is then applied to these vectors to reduce the dimensionality for the underlying classifiers. For reference, we use the name ``One-Hot'' for this baseline approach in the rest of the paper. For PCA, we select $100$ principal components (see Figure~\ref{fir_pca_components}).
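The One-Hot baseline encoding can be sketched as follows; helper names are ours, and the truncation of the trailing $*$ symbol is an assumption on our part.

```python
import numpy as np

ALPHABET = "ACDEFGHIKLMNPQRSTVWXY"           # the 21 amino-acid symbols
INDEX = {aa: i for i, aa in enumerate(ALPHABET)}

def one_hot(sequence, seq_len=1273):
    """Encode a spike sequence as a flat 21 * seq_len binary vector, as in
    the One-Hot baseline (21 * 1273 = 26,733 dimensions); the trailing
    '*' at position 1274 is dropped."""
    vec = np.zeros((seq_len, len(ALPHABET)))
    for pos, aa in enumerate(sequence[:seq_len]):
        vec[pos, INDEX[aa]] = 1.0
    return vec.ravel()

v = one_hot("M" * 1273)   # toy input: a length-1273 dummy sequence
assert v.size == 26_733
```

PCA to $100$ components would then be applied to these vectors before classification.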
\begin{figure}[!ht] \centering \includegraphics{results/pca.tikz} \caption{Explained variance of principal components for the GISAID 1 dataset.} \label{fir_pca_components} \end{figure} \section{Dataset Description and Preprocessing} \label{sec_data_set_detail} We sampled three subsets of spike sequences from the largest known database of COVID-19 genomes in humans, GISAID\footnote{\url{https://www.gisaid.org/}}. We refer to these $3$ subsets as GISAID 1, GISAID 2, and GISAID 3, having $7000$, $7000$, and $3029$ (aligned) spike sequences, respectively, each of length $1274$, from $5$ variants. For the GISAID 1 and GISAID 2 datasets, we preserve the proportion of each variant as given in the original GISAID database. For the GISAID 3 dataset, we use a different proportion of variants to analyse the behavior of our algorithm. See Table~\ref{tbl_variant_information} for more information. \begin{table}[H] \centering \begin{tabular}{lllc | l} \hline \begin{mybox} Pango\\Lineage\end{mybox} & Region & Labels & \begin{mybox} Num mutations\\S-gene/Genome \end{mybox} & \begin{mybox}\hskip.3in Num sequences in\\ GISAID 1 \hskip.01in GISAID 2 \hskip.01in GISAID 3 \end{mybox} \\ \hline \hline B.1.1.7 & UK~\cite{galloway2021emergence} & Alpha & 8/17 & \hskip.1in 5979 \hskip.3in 5979\hskip.4in 2055\\ B.1.351 & South Africa~\cite{galloway2021emergence} & Beta & 9/21& \hskip.1in 124 \hskip.4in 124\hskip.4in 133\\ B.1.617.2 & India~\cite{yadav2021neutralization} & Delta & 8/17 & \hskip.1in 596 \hskip.4in 596\hskip.4in 44\\ P.1 & Brazil~\cite{naveca2021phylogenetic} & Gamma & 10/21 & \hskip.1in 202 \hskip.4in 202\hskip.4in 625\\ B.1.427 & California~\cite{zhang2021emergence} & Epsilon & 3/5 & \hskip.1in 99 \hskip.5in 99\hskip.4in 182\\ \hline \end{tabular} \caption{Variants information and distribution in the three datasets. The S/Gen.
column gives the number of mutations on the S gene / entire genome.} \label{tbl_variant_information} \end{table} \vskip-.3in \noindent To visualize the local structure of the spike sequences, we use t-distributed stochastic neighbor embedding (t-SNE) \cite{van2008visualizing}, which maps input sequences to 2D real vectors. t-SNE helps to visualize (hidden) clusters in the data. The visualization results are shown in Figure~\ref{fig_tsn_embedding}, revealing that the variants are not well separated, which makes variant classification a challenging task. It is clear from Figure~\ref{fig_tsn_embedding} that the dominant Alpha variant does not form a single cluster and the smaller variants are scattered around (e.g., the least frequent variant, B.1.427, appears in most clusters). \begin{figure}[!ht] \centering \includegraphics[scale = 0.55,page = 1] {Figures/t_sne/tsne_fig.pdf} \caption{t-SNE embeddings of spike sequences.} \label{fig_tsn_embedding} \end{figure} \section{Experimental Evaluation} \label{sec_results_and_discussion} In this section, we first report the performance of the different classifiers using multiple performance metrics. We then analyze the importance of the position of each amino acid in the spike sequence using information gain. Results for the GISAID 1, 2, and 3 datasets are given in Tables~\ref{tbl_avg_classification_results_second_dataset}--\ref{tbl_avg_classification_results_third_dataset}. We present results for each classifier separately for the baseline method and compare it with our proposed method. We observe that for most of the classifiers, our proposed method is better than the baseline. For example, in the case of the SVM classifier, the One-Hot method obtains an F1 (macro) score of $0.962$ on the GISAID 1 dataset, while our proposed model obtains $0.973$ --- a notable improvement considering that all values are already on the higher side. Similar behavior is observed for the other classifiers as well.
For all of these results, we use $1\%$ of the data for training and $99\%$ for testing. Since we obtain such high accuracies, we conclude that with a minimal amount of available data, we can train a classifier that distinguishes the different variants very effectively. We also observe that the SVM classifier consistently performs best on all datasets. Note that the results in Tables~\ref{tbl_avg_classification_results_second_dataset}--\ref{tbl_avg_classification_results_third_dataset} are averaged over all variants. \begin{table}[!ht] \centering \begin{tabular}{cp{0.8cm}cccccc} \hline Approach & ML Algo. & Acc. & Prec. & Recall & F1 (weighted) & F1 (Macro) & ROC-AUC \\ \hline \hline \multirow{7}{*}{One-Hot \cite{kuzmin2020machine}} & SVM & 0.990 & 0.990 & 0.990 & 0.990 & 0.962 & 0.973 \\ & NB & 0.957 & 0.964 & 0.951 & 0.952 & 0.803 & 0.881 \\ & MLP & 0.972 & 0.971 & 0.975 & 0.974 & 0.881 & 0.923 \\ & KNN & 0.978 & 0.964 & 0.977 & 0.965 & 0.881 & 0.900 \\ & RF & 0.964 & 0.962 & 0.961 & 0.963 & 0.867 & 0.878 \\ & LR & 0.985 & 0.981 & 0.983 & 0.984 & 0.935 & 0.950 \\ & DT & 0.941 & 0.945 & 0.947 & 0.944 & 0.793 & 0.886\\ \hline \multirow{7}{*}{Kernel Approx.} & SVM & \textbf{0.994} & \textbf{0.994} & \textbf{0.995} & \textbf{0.995} & \textbf{0.973} & \textbf{0.988} \\ & NB & 0.987 & 0.985 & 0.985 & 0.986 & 0.901 & 0.912 \\ & MLP & 0.975 & 0.977 & 0.976 & 0.978 & 0.921 & 0.935 \\ & KNN & 0.979 & 0.967 & 0.979 & 0.967 & 0.887 & 0.904 \\ & RF & 0.981 & 0.987 & 0.988 & 0.980 & 0.944 & 0.945 \\ & LR & 0.992 & 0.990 & 0.993 & 0.992 & 0.991 & 0.990 \\ & DT & 0.985 & 0.981 & 0.985 & 0.987 & 0.898 & 0.944\\ \hline \end{tabular} \caption{Variants classification results ($1\%$ training set and $99\%$ testing set) for the GISAID 1 dataset. Best values are shown in bold.} \label{tbl_avg_classification_results_second_dataset} \end{table} \begin{table}[!ht] \centering \begin{tabular}{cp{0.8cm}cccccc} \hline Approach & ML Algo. & Acc. & Prec.
& Recall & F1 (weighted) & F1 (Macro) & ROC-AUC \\ \hline \hline \multirow{7}{*}{One-Hot \cite{kuzmin2020machine}} & SVM & 0.994 & 0.994 & 0.993 & 0.992 & 0.975 & 0.983 \\ & NB & 0.912 & 0.936 & 0.912 & 0.920 & 0.794 & 0.913 \\ & MLP & 0.970 & 0.970 & 0.970 & 0.969 & 0.880 & 0.921 \\ & KNN & 0.960 & 0.960 & 0.960 & 0.958 & 0.841 & 0.863 \\ & RF & 0.966 & 0.967 & 0.966 & 0.964 & 0.888 & 0.885 \\ & LR & 0.993 & 0.993 & 0.993 & 0.993 & 0.968 & 0.973 \\ & DT & 0.956 & 0.957 & 0.956 & 0.956 & 0.848 & 0.913 \\ \hline \multirow{7}{*}{Kernel Approx.} & SVM & \textbf{0.998} & \textbf{0.997} & \textbf{0.997} & \textbf{0.998} & \textbf{0.998} & \textbf{0.997} \\ & NB & 0.985 & 0.988 & 0.985 & 0.984 & 0.946 & 0.967 \\ & MLP & 0.973 & 0.971 & 0.972 & 0.970 & 0.889 & 0.925 \\ & KNN & 0.965 & 0.962 & 0.963 & 0.967 & 0.845 & 0.867 \\ & RF & 0.990 & 0.992 & 0.991 & 0.996 & 0.978 & 0.977 \\ & LR & 0.997 & 0.994 & 0.996 & 0.997 & 0.991 & 0.993 \\ & DT & 0.991 & 0.990 & 0.994 & 0.996 & 0.952 & 0.963 \\ \hline \end{tabular} \caption{Variants Classification Results (1\% training set and 99\% testing set) for GISAID 2 Dataset. Best values are shown in bold} \label{tbl_avg_classification_results_first_dataset} \end{table} \begin{table}[!ht] \centering \begin{tabular}{cp{0.8cm}cccccc} \hline Approach & ML Algo. & Acc. & Prec. 
& Recall & F1 (weighted) & F1 (Macro) & ROC-AUC \\ \hline \hline \multirow{7}{*}{One-Hot \cite{kuzmin2020machine}} & SVM & 0.988 & 0.986 & 0.987 & 0.982 & 0.924 & 0.961 \\ & NB & 0.764 & 0.782 & 0.761 & 0.754 & 0.583 & 0.747 \\ & MLP & 0.947 & 0.941 & 0.944 & 0.942 & 0.813 & 0.898 \\ & KNN & 0.920 & 0.901 & 0.924 & 0.901 & 0.632 & 0.773 \\ & RF & 0.928 & 0.935 & 0.922 & 0.913 & 0.741 & 0.804 \\ & LR & 0.982 & 0.981 & 0.983 & 0.984 & 0.862 & 0.921 \\ & DT & 0.891 & 0.891 & 0.890 & 0.895 & 0.679 & 0.807\\ \hline \multirow{7}{*}{Kernel Approx.} & SVM & \textbf{0.991} & \textbf{0.993} & \textbf{0.995} & \textbf{0.991} & \textbf{0.989} & \textbf{0.997} \\ & NB & 0.864 & 0.922 & 0.861 & 0.884 & 0.783 & 0.887 \\ & MLP & 0.926 & 0.922 & 0.921 & 0.923 & 0.805 & 0.909 \\ & KNN & 0.947 & 0.921 & 0.942 & 0.934 & 0.701 & 0.826 \\ & RF & 0.975 & 0.971 & 0.971 & 0.972 & 0.904 & 0.918 \\ & LR & 0.991 & 0.990 & 0.994 & 0.990 & 0.983 & 0.992 \\ & DT & 0.960 & 0.969 & 0.964 & 0.967 & 0.812 & 0.891\\ \hline \end{tabular} \caption{Variants Classification Results (1\% training set and 99\% testing set) for GISAID 3 Dataset. Best values are shown in bold} \label{tbl_avg_classification_results_third_dataset} \end{table} We also show the variant-wise performance of our best classifier (SVM). Table~\ref{tbl_svm_heatmap} contains the resulting confusion matrices using the kernel based and One-Hot approaches for GISAID 1. Clearly, the kernel-based approach performs better than the One-Hot approach for most of the variants. \begin{table}[h!] 
\footnotesize
\subfloat{
\begin{tabular}{@{}l|cccccccc@{}}
\toprule
Variant & {\bf Alpha} & {\bf Beta} & {\bf Delta} & {\bf Gamma} & {\bf Epsi.} \\ \midrule
{\bf Alpha} & 5373 & 3 & 7 & 0 & 5 \\
{\bf Beta} & 6 & 110 & 0 & 0 & 0 \\
{\bf Delta} & 6 & 0 & 523 & 0 & 0 \\
{\bf Gamma} & 0 & 0 & 0 & 176 & 0 \\
{\bf Epsilon} & 2 & 0 & 0 & 0 & 89 \\ \bottomrule
\end{tabular}
}
\subfloat{
\footnotesize
\begin{tabular}{@{}|cccccccc@{}}
\toprule
{\bf Alpha} & {\bf Beta} & {\bf Delta} & {\bf Gamma} & {\bf Epsi.} \\ \midrule
5371 & 9 & 5 & 0 & 3 \\
13 & 103 & 0 & 0 & 0 \\
8 & 0 & 521 & 0 & 0 \\
0 & 0 & 0 & 176 & 0 \\
7 & 0 & 3 & 0 & 81 \\ \bottomrule
\end{tabular}
}
\caption{Confusion matrices for SVM classifiers using the kernel approximation approach (left) and the One-Hot approach (right) for the GISAID 1 dataset.}
\label{tbl_svm_heatmap}
\end{table}
\vskip-.2in
\subsection{Importance of Amino Acid Positions}
To evaluate the importance of positions in spike sequences, we find the subset of positions contributing the most towards predicting a variant. We use correlation-based feature selection (CFS), which evaluates a subset of positions by considering the individual predictive ability of each feature along with the degree of redundancy between them. Using step-wise forward greedy search (SFGS), we select a subset of features that are highly correlated with the class (variants) while having low inter-dependency. SFGS may start with no amino acids, all amino acids, or an arbitrary point in the space, and it stops when the addition/deletion of any remaining amino acid results in a decrease of the evaluation. SFGS can also produce a ranked list of amino acids by traversing the space from one side to the other and recording the order in which amino acids are selected. The subset of features selected for each dataset is given in Table~\ref{tbl_important_subset_of_attributes}.
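The CFS-based forward greedy search can be sketched as follows. This is a minimal illustration, not the exact implementation used here: the merit function is the standard CFS formulation, and all variable names and the synthetic data are hypothetical.

```python
import numpy as np

def cfs_merit(Xs, y):
    """Standard CFS merit: k * r_cf / sqrt(k + k(k-1) * r_ff), which rewards
    features correlated with the class but not with each other."""
    k = Xs.shape[1]
    r_cf = np.mean([abs(np.corrcoef(Xs[:, j], y)[0, 1]) for j in range(k)])
    r_ff = 0.0 if k == 1 else np.mean(
        [abs(np.corrcoef(Xs[:, i], Xs[:, j])[0, 1])
         for i in range(k) for j in range(i + 1, k)])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def sfgs(X, y, score=cfs_merit, max_features=10):
    """Step-wise forward greedy search: repeatedly add the position that
    most improves the subset score; stop when no addition helps.
    The selection order yields a ranked list of positions."""
    selected, best = [], -np.inf
    candidates = list(range(X.shape[1]))
    while candidates and len(selected) < max_features:
        gain, pos = max((score(X[:, selected + [c]], y), c) for c in candidates)
        if gain <= best:
            break  # every remaining position decreases the evaluation
        best = gain
        selected.append(pos)
        candidates.remove(pos)
    return selected
```

For example, on synthetic data where only column 0 tracks the class label, the search selects position 0 first.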
\begin{table}[!ht]
\centering
\begin{tabular}{ccc}
\hline
Dataset & Total Amino Acids & Selected Amino Acid Positions \\ \hline \hline
GISAID 1 & 10 & 19, 152, 417, 452, 570, 681, 950, 982, 1118, 1176 \\
GISAID 2 & 10 & 19, 152, 417, 452, 570, 681, 950, 982, 1118, 1176 \\
GISAID 3 & 10 & 13, 258, 417, 452, 570, 681, 701, 1027, 1118, 1176 \\ \hline
\end{tabular}
\caption{Subset of positions that contributed the most to variant prediction}
\label{tbl_important_subset_of_attributes}
\end{table}
To evaluate individual positions, we measure the Information Gain (IG) of each position with respect to the variant, defined as $IG(Class,position) = H(Class) - H(Class | position)$, where $H = \sum_{i \in Class} -p_i \log p_i$ is the entropy and $p_i$ is the probability of class $i$. Figure~\ref{fig_IG_dataset_1_2_3} depicts how informative a position is for determining the variant (higher is better). We observe that, as in Table~\ref{tbl_important_subset_of_attributes}, positions such as 452, 570, and 681 are more informative across all datasets. The US CDC also lists mutations at these positions as characteristic of specific variants, which validates our feature selection algorithm. For instance, R452L is present in the B.1.427 (Epsilon) and B.1.617 (Kappa, Delta) lineages and sub-lineages. The combination of the K417N, E484K, and N501Y substitutions is present in B.1.351 (Beta). Similarly, the K417T, E484K, and N501Y substitutions are present in P.1 (Gamma)\footnote{\url{https://www.cdc.gov/coronavirus/2019-ncov/variants/variant-info.html}} (these positions show high IG in Figure~\ref{fig_IG_dataset_1_2_3}).
\begin{figure}[t]
\centering
\includegraphics{Tikz_Figures/information_gain_gisaid_1.tikz}
\caption{Information gain of each amino acid position with respect to variants.
$x$-axis corresponds to amino acid positions in the spike sequences.} \label{fig_IG_dataset_1_2_3} \end{figure}
\section{Conclusion and Future Directions} \label{sec_conclusion}
We propose an approach to efficiently classify COVID-19 variants using spike sequences. Results show that the $k$-mer-based sequence representation outperforms the typical one-hot encoding approach, since it preserves the actual order of the amino acids. We showed the importance of specific amino acid positions and demonstrated that they agree with the CDC characterization of variants. In the future, we will work towards detecting new (unknown) variants based on whole genome sequences. Another exciting direction is to consider other attributes, such as countries, cities, and dates, to design richer feature vector representations for spike sequences.
\bibliographystyle{splncs04}
\section{Introduction}
\par The Transient Current Technique (TCT) is a well-known method to study irradiated silicon sensors. In one flavour of this technique, so-called Top-TCT, the Device Under Test (DUT) is illuminated by $\alpha$-particles \cite{kramberger2002effective,kramberger2002determination,brodbeck2000new} or a red-light laser \cite{kraner1993use,beattie1999carrier,fink2006characterization}, and the induced current transient is recorded. One can determine the collected charge by integrating this transient. The result of this measurement is a quantity called ``Charge Collection Efficiency'' or $CCE$, defined as the ratio of the charge collected by the electrodes to the charge produced inside the sensor.
\par Due to the short absorption depth of $\alpha$-particles and red light in silicon, all electron-hole pairs are generated close to the surface of the diode. Therefore, only one of the charge carrier types (electrons or holes) drifts through the bulk region, while the other is collected immediately at the surface implant. The induced signal in the electrodes of the diode is dominated by the carrier type which drifts through the entire bulk; the other type has only a minor contribution to the signal. As a result, one can use this technique to study the trapping times of electrons and holes in irradiated sensors separately.
\par Top-TCT is limited to studying pad diodes irradiated with fluences up to the order of $10^{15} \; \text{cm}^{-2}$. Two important limitations of TCT measurements are:
\begin{itemize}
\item Since charge carriers are generated close to the surface, it is difficult to obtain information on charge collection as a function of depth using this technique.
\item The estimated $CCE$ values from TCT measurements are very sensitive to possible inactive layers in the sensor implants.
There is a field-free region in the diode implant where the generated electron-hole pairs can be trapped before they diffuse to the active region of the diode (the region with a non-zero electric field) \cite{scharf2015measurement, scharf2018radiation}. Simulation of a non-irradiated diode shows that this region remains field-free independent of the applied voltage. As a result, the $CCE$ values measured with the Top-TCT method underestimate the actual $CCE$ in the active region of irradiated diodes and are affected by the change of the thickness of the non-depleted layer, which depends on the irradiation fluence.
\end{itemize}
\par In another flavour of this technique, called ``edge-TCT'' or E-TCT, the sensor is illuminated by focussed Infra-Red (IR) laser light from the edge \cite{Kramberger:2010tem}. By scanning the light along the edge, one can obtain information on the charge collection as a function of depth. This method has been used to study strip sensors to extract their charge and electric field profiles \cite{mandic2015edge, mandic2013tct, klanner2020determination}. An issue with this technique is that the waist radius of the light spot changes as it travels through the sensor and the beam becomes defocused. As a result, it can only be used for segmented sensors (strips and pixels), where the charge produced below a narrow strip is collected and read out. This technique cannot be used for measuring pad diodes, where there is only one readout implant. Another issue is that one cannot normalize E-TCT-measured charges to an absolute value, as the exact number of electron-hole pairs generated by the light in the sensor is not known. In addition, it has been shown that the absorption length of infrared light changes after irradiation \cite{scharf2020influence}. Therefore, a direct comparison between irradiated and non-irradiated strips is not straightforward.
\par This work uses a high-energy electron beam (4.2 GeV) for edge-on measurements on pad diodes.
The method was introduced in \cite{gorivsek2016edge}. In that work, pions with an energy of 120 GeV were used to study 300 $\upmu \text{m}$ thick pad diodes. The advantages of this technique are:
\begin{itemize}
\item The radius and direction of the beam do not change as it traverses the sensor. Therefore, it can be used to study pad diodes.
\item An absolute normalization is possible, as the energy loss of the beam in the sensor, i.e. $dE/dx$, is well known.
\item The ionization of the electron beam does not change after irradiation. Therefore, one can directly compare the results of non-irradiated and irradiated diodes.
\end{itemize}
\par The main aim of this study is to obtain the position-dependent charge collection. The charge collection length, i.e. the mean distance over which charge carriers drift inside the sensor before being trapped, can be estimated from the results of these measurements. The disadvantages of this technique are:
\begin{itemize}
\item Due to the large capacitance of pad diodes, the rise times of the induced signals in the electrodes are of the order of a few nanoseconds. As a result, it is not straightforward to extract information about the charge carrier velocity and the electric field from the transients.
\item The alignment procedure proposed in this paper is of limited use for highly irradiated sensors, where the charge collection profile is not uniform.
\end{itemize}
In this paper, after introducing the method, the in-situ alignment procedure is explained. In the results section, the charge profiles of three pad diodes, one non-irradiated and two irradiated, are presented.
\section{Experiment Setup} \label{Experiment}
\par The measurements were performed at the Deutsches Elektronen-Synchrotron (DESY) test beam facility. At this facility, electron/positron beams with energies in the range of 1--6 GeV are available \cite{diener2019desy}. For this work, an electron beam with an energy of 4.2 GeV was chosen.
The reason for this choice came from considerations of the beam rate (at lower beam energies the rate is higher, with a maximum around 3 GeV) and the telescope resolution (at higher beam energies the spatial resolution is better). \cref{setup} shows a schematic view of the setup used for these measurements, and \cref{TB_setup} shows a photo of the measurement setup.
\begin{figure}[ht] \centering \begin{subfigure}[b]{0.7\linewidth} \includegraphics[width=0.7\linewidth]{Setup.jpg} \caption{Schematic view of the beam test setup used for edge-on measurements.} \label{setup} \end{subfigure} \begin{subfigure}[b]{0.8\linewidth} \includegraphics[width=0.8\linewidth]{TB_Setup1.jpg} \caption{A photo of the measurement set-up including telescope planes and the cold box of the DUT. In this study, only the upstream planes were used for the track reconstruction.} \label{TB_setup} \end{subfigure} \caption{Setup for test beam measurements.} \label{TB} \end{figure}
\par The tracks were reconstructed using three planes of an EUDET-type telescope. Each plane of the telescope consists of a MIMOSA 26 sensor \cite{jansen2016performance}. This is a monolithic pixel sensor produced in 350 nm CMOS technology. The pixel size is $18.4 \times 18.4 \; \upmu \text{m}^2$ and the thickness of the sensor is $54.5 \pm 3.6 \; \upmu \text{m}$. Each sensor contains 1152 columns and 576 rows, which cover an area of $21.2 \times 10.6 \; \text{mm}^2$. The estimated intrinsic hit resolution of a single plane is $3.24 \pm 0.9 \; \upmu \text{m}$ \cite{hu2010first}. The threshold of the telescope sensors was set to $4 \; \text{mV}$, corresponding to $\sim 70 \; \text{e}^{-}$.
\par The integration time of the MIMOSA 26 planes is relatively long ($115 \; \upmu\text{s}$) and several tracks can pass through the setup during one readout cycle. Therefore, a time reference was needed to select the subset of tracks that generated the trigger signal. The time reference was provided by a CMS phase 1 pixel module.
This module contains a silicon sensor with a pixel size of $100 \times 150 \; \upmu\text{m}^2$ and a PSI46dig readout chip. The internal clock of the readout chip was 40 MHz (corresponding to a 25 ns readout cycle).
\par The trigger signal for the readout of the telescope, the timing reference module, and the DUT was provided by two orthogonally mounted scintillators coupled to Photo Multiplier Tubes (PMTs). The scintillators were placed in front of the telescope planes. The coincidence between them provided a triggering area of $8 \times 3 \; \text{mm}^2$. The outputs of the PMTs were fed to a Trigger Logic Unit (TLU), which provided the trigger signal through a logic AND between the signals of the two scintillators.
\par The diodes in this work were $p$-type ($n^+pp^+$ configuration) with an active thickness of 150 $\upmu$m. The area of the diodes was $\approx \:$25 mm\textsuperscript{2}. \cref{diode_top} and \cref{diode_xs} show the top view and the cross section of the studied diodes, respectively. Two diodes were irradiated with 23 MeV protons at 1 MeV neutron equivalent fluences $\Phi_{eq}$ of $2\times 10^{15} \; \text{cm}^{-2}$ and $4\times 10^{15} \; \text{cm}^{-2}$. For the calculation of $\Phi_{eq}$, a hardness factor $\kappa = 2.2$ was used \cite{allport2019experimental}. The depletion voltage of the non-irradiated diode was 75 V and the doping density of the bulk region was $4.5 \times 10^{12} \; \text{cm}^{-3}$. The guard ring of the diodes was floating during the measurements.
\begin{figure}[ht] \centering \begin{subfigure}[b]{0.7\linewidth} \includegraphics[width=1\linewidth]{top_view1.jpg} \caption{Top view of the diode (dimensions in $\upmu \text{m}$)} \label{diode_top} \end{subfigure} \begin{subfigure}[b]{0.7\linewidth} \includegraphics[width=1\linewidth]{diode_xs.jpg} \caption{Cross section of the diode, different colors corresponding to: 1. $n^+$ implant, 2. $\text{SiO}_2$ layer, 3. passivation layer, 4.
metallisation layer.} \label{diode_xs} \end{subfigure} \caption{Schematic of the diode} \end{figure}
\par A Rohde \& Schwarz oscilloscope with an analog bandwidth of 4 GHz and a sampling rate of 20 GS/s recorded the transients. A Femto HSA-X-40 amplifier with a bandwidth of 2.0 GHz and a nominal gain of 100 amplified the signal from the diodes with AC coupling \cite{Femto}. The trigger was provided externally through the TLU. The external triggering made it possible to record all transients without zero suppression.
\par To calculate the collected charge, the transients are integrated in a time window (gate). \cref{transient} shows an average transient for the non-irradiated diode biased at 100 V. The transient is an average over events with amplitudes higher than 20 mV. To correct for the offset of the baseline, the average of the transient in the pre-pulse region (shown in \cref{transient}) is subtracted before integration. The charge is calculated as:
\begin{equation}
Q = \int_{t_0}^{t_1} \frac{U(t)}{G \cdot R_{L}} \, dt
\label{E1}
\end{equation}
where $U(t)$ is the average transient after the baseline correction, $R_L$ is the input impedance of the oscilloscope ($50 \; \ohm$), and $G$ is the nominal gain of the amplifier (100). For this study, a gate width of 30 ns was chosen, as shown in \cref{transient}.
\begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth]{pulse1_37982.jpg} \caption{Average transient of the non-irradiated diode for all events with an amplitude higher than 20 mV. The bias voltage for this measurement was 100 V.} \label{transient} \end{figure}
\par To measure the transients, the diode was mounted on a PCB which provided the electrical connections. The parallel beam can produce showers inside the PCB. To cancel the effects of these showers on the response of the diode, two spacers were inserted between the diode and the PCB, as shown in \cref{spacer}.
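For illustration, the baseline subtraction and gated integration described above can be sketched as follows. The pulse shape and index values are made up for the example; the actual analysis uses the recorded 20 GS/s transients.

```python
import numpy as np

def gated_charge(u, dt, gate, pre_pulse, gain=100.0, r_load=50.0):
    """Integrate a transient U(t) over a gate after subtracting the
    baseline estimated from the pre-pulse region.
    u: sampled voltages [V]; dt: sampling interval [s];
    gate, pre_pulse: (start, stop) sample-index pairs.
    Returns the charge in femtocoulombs."""
    baseline = u[pre_pulse[0]:pre_pulse[1]].mean()
    corrected = u[gate[0]:gate[1]] - baseline
    # Q = integral of U(t) / (G * R_L) dt, converted to fC
    return corrected.sum() * dt / (gain * r_load) * 1e15

# Synthetic check: a 10 mV flat pulse lasting 30 ns on a 1 mV baseline offset
dt = 50e-12                      # 20 GS/s sampling interval
u = np.full(2000, 1e-3)          # 1 mV baseline offset
u[500:1100] += 10e-3             # 600 samples = 30 ns gate
q = gated_charge(u, dt, gate=(500, 1100), pre_pulse=(0, 400))  # about 60 fC
```

With the amplifier gain of 100 and the 50 ohm load, this synthetic 10 mV, 30 ns pulse corresponds to roughly 60 fC, comparable in magnitude to the measured peak.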
These spacers were metal bars with a width of $w = 1 \; \text{mm}$ and a thickness of $d = 0.5 \; \text{mm}$.
\begin{figure}[ht] \centering \includegraphics[width=0.6\linewidth]{spacer.jpg} \caption{Spacers inserted between the diode and the PCB. The sketch is not to scale.} \label{spacer} \end{figure}
\section{Online alignment} \label{alignment}
\par The method proposed in this paper only works if the tracks are parallel to the diode surface. To ensure this condition, an ``in-situ alignment'' procedure was used before data taking.
\par In this procedure, the average collected charge was measured as a function of angle in fine steps ($0.25^\circ$ or $\approx 4 \; \text{mrad}$). The minimum step size of the rotation stage is $0.01^\circ$. The minimum or ``zero'' angle was defined as the angle where the measured charge is maximal. If the tracks and the diode surface are not parallel, a part of the tracks leaves the diode through the top or bottom plane and therefore deposits less charge than tracks that cross the whole diode (5 mm). As a result, the average charge is maximal when the angle between the tracks and the diode is minimal. The important assumption in this procedure is that the collected charge is uniform as a function of depth.
\par The precision of an angular scan can be estimated using the following equation:
\begin{equation}
\sigma_{\theta} = \frac{\sigma_x \cdot \sqrt{12}}{5 \; \text{mm}}
\label{E_theta}
\end{equation}
where $\sigma_{\theta}$ and $\sigma_x$ are the RMS spreads of angle and position, respectively. $\sigma_x$ is related to the extrapolated telescope resolution at the DUT surface. In \cite{jansen2016performance}, the telescope resolution was calculated as a function of the DUT-to-telescope distance (i.e. $dz_{DUT}$ in \cref{setup}). Assuming a resolution of around 10 $\upmu \text{m}$, the value of $\sigma_{\theta}$ is estimated to be 6.9 mrad or $0.4^\circ$. Therefore, the chosen step size, i.e. $0.25^\circ$, is small enough.
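As a numerical cross-check of this estimate, the numbers from the text can be plugged in directly (a trivial sketch; the function name is illustrative):

```python
import numpy as np

def angular_precision(sigma_x, length=5e-3):
    """RMS angular spread from the extrapolated track resolution:
    sigma_theta = sigma_x * sqrt(12) / L, with L = 5 mm the diode length."""
    return sigma_x * np.sqrt(12.0) / length

sigma_theta = angular_precision(10e-6)   # 10 um extrapolated resolution
# about 6.9 mrad, i.e. about 0.4 deg, so the 0.25 deg step is fine enough
sigma_theta_deg = np.degrees(sigma_theta)
```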
\par \cref{charge_mean} shows the average collected charge of the non-irradiated diode as a function of angle. The measurement was done at a bias voltage of 100 V and at room temperature. For each angle, 20,000 triggers were collected. To obtain this plot, an offline threshold of 20 mV was applied to the recorded transients. \cref{charge_dist} shows the charge distributions for three angles of incidence. One can notice that, for non-zero angles, the charge distributions show a tail towards lower charges below the peak at around 75 fC. This tail corresponds to tracks which only partially crossed the diode, and it is smallest when the angle of incidence ($\theta$) is minimal. A low-charge cut-off around 0 fC can be seen, which is due to the offline threshold (20 mV). Based on \cref{charge_dist}, one can define the ratio of the number of events in the tail of the charge distribution, i.e. $Q<60 \; \text{fC}$, to the number of events in the peak, i.e. $Q>60 \; \text{fC}$. This ratio is expected to be minimal at the minimum angle. \cref{ratio} shows this ratio as a function of angle. One can see that the results are compatible with \cref{charge_dist}. The advantage of using this quantity instead of the mean charge is its higher sensitivity to changes in the angle.
\par The procedure exploits a simple geometrical relation between the track length inside the DUT and the angle of incidence. The advantage of using this procedure was that no track reconstruction was required. This procedure had to be repeated whenever the diode was changed or the cooling box was dismounted.
\begin{figure}[ht] \centering \begin{subfigure}[b]{0.8\linewidth} \includegraphics[width=0.8\linewidth]{qvsturn_Dec.jpg} \caption{Mean collected charge as a function of the angle of incidence for the non-irradiated diode.
The plot is obtained for transients with amplitudes higher than 20 mV.} \label{charge_mean} \end{subfigure} \begin{subfigure}[b]{0.8\linewidth} \includegraphics[width=0.8\linewidth]{Chargrdists.jpg} \caption{Collected charge distribution for three angles of incidence for the non-irradiated diode.} \label{charge_dist} \end{subfigure} \caption{Results of the angle scan} \label{anglescan} \end{figure}
\begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{Ratiovsturn.jpg} \caption{The ratio of the number of events in the tail of the charge distribution ($\text{Q}<60 \; \text{fC}$ in \cref{charge_dist}) to the number of events in the peak ($\text{Q}>60 \; \text{fC}$)} \label{ratio} \end{figure}
\section{Results for the non-irradiated diode} \label{Non-irrad}
\par After finding the minimum angle, the beam tracks are reconstructed and projected onto the surface of the diode facing the beam. To gain maximum acceptance, the centering of the diode with respect to the trigger area was checked. For this purpose, the position distribution of events with a pulse amplitude larger than 20 mV is obtained, as shown in \cref{Rectangular}. A rectangular shape with a size of $5 \times 0.15 \; \text{mm}^2$ can be recognized. This plot indicates that the entire diode is within the trigger acceptance.
\begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{Rectangular3.jpg} \caption{Position distribution of the number of tracks with the amplitude of the diode transients larger than 20 mV.} \label{Rectangular} \end{figure}
\par \cref{Nonirradvsx} shows the charge profile as a function of depth for the non-irradiated diode at a bias voltage of 100 V and at room temperature. Each point of this profile is the average value of the charge distribution, and the error bar is estimated as the RMS value divided by the square root of the number of entries. For clarity, the assumed positions of the $n^+$ and $p^+$ implants (at $x = \pm 75 \; \upmu \text{m}$) are shown in the figure.
A series of geometrical and timing cuts were applied to the data to obtain this profile. These cuts are explained in Appendix A. \cref{Nonirrad_dist} shows the charge distribution for the central bin with $-10 \; \upmu \text{m}<x<+10 \; \upmu \text{m}$.
\par From \cref{Nonirradvsx}, one can see that the charge is uniform as a function of depth in the central region of the diode. One notes that the charge collection is not zero outside the active region of the diode. This is due to the finite track resolution of the beam telescope. In the future, one can improve the resolution by decreasing the diode-to-telescope distance (i.e. ${dz}_{DUT}$ in \cref{setup}) or by increasing the energy of the electron beam, which reduces the multiple scattering.
\begin{figure}[h!] \centering \begin{subfigure}[b]{1\linewidth} \includegraphics[width=1\linewidth]{ChargeVsX1_37982.jpg} \caption{Charge profile of the non-irradiated diode at a bias voltage of 100 V} \label{Nonirradvsx} \end{subfigure} \begin{subfigure}[b]{1\linewidth} \includegraphics[width=1\linewidth]{qdist_twocentralbin_37982.jpg} \caption{The charge distribution for the central bin with $-10 \; \upmu \text{m}<x<10 \; \upmu \text{m}$} \label{Nonirrad_dist} \end{subfigure} \caption{Charge profile and distribution of the non-irradiated diode at a bias voltage of 100 V. The measurement was done at room temperature.} \label{Nonirrad} \end{figure}
\par The measurements were repeated for bias voltages in the range of $20-300 \; \text{V}$. \cref{Nonirradbiasscan} shows the charge profiles obtained from these measurements. One can see that for bias voltages above 100 V, the profile remains unchanged. At $V_{bias} = 20 \; \text{V}$, the charge collection is zero in the region near the $p^{+}$ implant. This can be explained by the fact that the electric field grows from the $n^{+}$ to the $p^{+}$ implant.
At bias voltages below the full depletion voltage, $75 \; \text{V}$, the electric field is zero near the $p^{+}$ region, which results in a region with zero charge collection.
\begin{figure}[h!] \centering \includegraphics[width=1\linewidth]{ChargeVsX1_nonirrad.jpg} \caption{Charge profiles of the non-irradiated diode for several bias voltages} \label{Nonirradbiasscan} \end{figure}
\section{Results for irradiated diodes} \label{Irr}
\par The measurements explained in \cref{Non-irrad,alignment} were repeated for the two irradiated diodes. In order to keep the leakage current of the diodes low, the diodes were mounted in a cold box (as shown in \cref{TB_setup}). The box was cooled using a water chiller (operating at $-20~^\circ$C) and two Peltier elements. The exact temperature inside the box was not measured. \cref{irradvsx_2e15,irradvsx_4e15} show the charge profiles for the diodes irradiated to $\Phi_{{eq}} = 2 \times 10^{15} \; \text{cm}^{-2}$ and $4 \times 10^{15} \; \text{cm}^{-2}$, respectively. From these figures, one can see that the charge profile is not uniform as a function of depth, especially at low bias voltages. For both diodes, the collected charge near the $n^+$ implant, which is dominated by the drift of holes, is higher than at the $p^+$ implant. \begin{figure}[h!]
\centering \begin{subfigure}[b]{1\linewidth} \includegraphics[width=1\linewidth]{qvsx_2E15.jpg} \caption{$\Phi_{eq} = 2 \times 10^{15} \; \text{cm}^{-2}$} \label{irradvsx_2e15} \end{subfigure} \begin{subfigure}[b]{1\linewidth} \includegraphics[width=1\linewidth]{qvsx_4E15.jpg} \caption{$\Phi_{eq} = 4 \times 10^{15} \; \text{cm}^{-2}$} \label{irradvsx_4e15} \end{subfigure} \caption{Charge profiles of the two irradiated diodes for various bias voltages} \label{irrad} \end{figure}
Using the plots shown in \cref{irrad}, the $CCE$ of the irradiated diodes is calculated with the following relationship:
\begin{equation}
CCE(V_{bias}) = \frac{\sum_{x=-150 \, \upmu \text{m}}^{+150 \, \upmu \text{m}} Q_{x,\Phi}(V_{bias})}{\sum_{x=-150 \, \upmu \text{m}}^{+150 \, \upmu \text{m}} Q_{x,0}(V_{bias} = 100 \; \text{V})}
\label{E2}
\end{equation}
where $Q_{x,0}$ and $Q_{x,\Phi}$ are the charge profiles of the non-irradiated and of the irradiated diodes, respectively. The statistical uncertainty of the $CCE$ is estimated by propagating the error of the mean charge for each bin.
\par \cref{CCE} shows the $CCE$ values as a function of bias voltage for the two irradiated diodes. The statistical error bars are small compared to the markers and not visible in the plot. Other systematic uncertainties, such as uncertainties on the integration gate and the angle of incidence, are not included in the plot. As expected, the $CCE$ increases with bias voltage for both diodes. The $CCE$ values of the diode with the lower irradiation fluence, i.e. $2 \times 10^{15} \; \text{cm}^{-2}$, are larger than those of the diode with the higher fluence at all bias voltages.
\begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{CCEvsBias.jpg} \caption{CCE as a function of bias voltage for the two irradiated diodes} \label{CCE} \end{figure}
\section{Conclusion}
In this paper, we demonstrate the feasibility of using an electron beam entering the side of a silicon pad diode to obtain the charge collection profile.
\par It is shown that the charge collection profile of a non-irradiated pad diode is uniform as a function of depth for bias voltages above full depletion. The profile of the irradiated diodes, however, is not uniform, especially at lower bias voltages. For the highest applied bias voltage, 800 V, the $CCE$ is less than 1 for the investigated irradiated diodes.
\par Qualitatively, the data suggest that the charge collection length of holes is larger than that of electrons for the irradiated diodes. This effect becomes more pronounced at lower bias voltages and higher irradiation fluences.
\par This method can be further optimized. The resolution can be improved by minimizing the distance between the telescope and the DUT (${dz}_{DUT}$) and by increasing the beam energy. The efficiency of the data selection can be improved by optimizing the geometrical match between the DUT and the trigger scintillators.
\section{Acknowledgements}
\par The authors acknowledge support by the BMBF, the German Federal Ministry of Education and Research. This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- EXC 2121 ``Quantum Universe'' -- 390833306.
\par The measurements leading to these results have been performed at the Test Beam Facility at DESY Hamburg (Germany), a member of the Helmholtz Association (HGF).
\section{Appendix A: Data selection cuts}
To obtain the charge profiles of the diodes, a series of geometrical and timing cuts are applied in the offline analysis. In this section, these cuts are explained.
\subsection{Selection of in-time tracks}
As mentioned in \cref{Experiment}, the readout cycle of the telescope is slow ($115 \; \upmu\text{s}$). To deal with this issue, a timing reference module was used in front of the setup. This module has a fast readout (25 ns). In the offline analysis, the crossing point between the tracks and the reference module is reconstructed.
Then, the residuals between the track intersection and the cluster position in the module, $\Delta x_{REF}$ and $\Delta y_{REF}$, are calculated. A track is considered in time and accepted if it meets the following conditions:
\begin{align*} \lvert \Delta x_{REF} \rvert & < 150 \; \upmu \text{m}, \\ \lvert \Delta y_{REF} \rvert & < 150 \; \upmu \text{m}. \end{align*}
\cref{timecut} shows the distribution of the number of tracks in the $x$ and $y$ directions before and after applying this cut. From this plot, one can estimate that the number of events is reduced by a factor of 16 after the selection of in-time tracks.
\begin{figure}[h!] \centering \begin{subfigure}[b]{0.8\linewidth} \includegraphics[width=0.8\linewidth]{trix1vstrix2.jpg} \caption{The distribution of tracks in the $x$-direction before and after the selection of in-time tracks.} \label{trix12} \end{subfigure} \begin{subfigure}[b]{0.8\linewidth} \includegraphics[width=0.8\linewidth]{triy1vstriy2_37982.jpg} \caption{The distribution of tracks in the $y$-direction before and after the selection of in-time tracks.} \label{triy12} \end{subfigure} \caption{Selection of in-time tracks} \label{timecut} \end{figure}
\subsection{Geometrical cuts}
Two geometrical cuts are applied to the events in the offline analysis:
\begin{itemize}
\item Cut on $y_{DUT}$: in order to reduce the background, a cut is applied on the $y$-coordinate of the reconstructed tracks at the diode surface, i.e. $y_{DUT}$, as follows (see \cref{ydut}):
\begin{align*} -2.5 \; \text{mm} < y_{DUT} < 2.5 \; \text{mm} \end{align*}
\cref{ydut} shows the selection range in the $y$-direction. This cut reduces the number of events by a factor of 1.56.
\item Cut on the beam divergence $\theta_x$: the track slopes can be reconstructed using the telescope planes. \cref{angular} shows the distribution of the track slope in the $x$-direction. To ensure a parallel beam, a cut was applied on the slope as follows:
\begin{align*} \lvert \theta_{x} \rvert < 1.0 \; \text{mrad}.
\end{align*} \cref{angular} shows the applied cuts on the angular distribution of tracks. This cut further reduces the number of tracks by a factor of 1.3. \end{itemize} \begin{figure}[h!] \centering \begin{subfigure}[b]{0.8\linewidth} \includegraphics[width=0.8\linewidth]{triy1_37982.jpg} \caption{The distribution of reconstructed tracks in $y$-coordinate at the DUT surface } \label{ydut} \end{subfigure} \begin{subfigure}[b]{0.8\linewidth} \includegraphics[width=0.8\linewidth]{tritx_37982.jpg} \caption{The angular distribution of reconstructed tracks} \label{angular} \end{subfigure} \caption{Geometrical cuts in the off-line analysis} \label{Nonirrad1} \end{figure}
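The full cut chain of this appendix can be summarized as a single boolean mask over per-track quantities. A minimal sketch with hypothetical array names (units: metres and radians; cut values from the text):

```python
import numpy as np

def select_tracks(dx_ref, dy_ref, y_dut, theta_x):
    """Apply the timing and geometrical cuts of Appendix A:
    |dx_ref|, |dy_ref| < 150 um (in-time), |y_dut| < 2.5 mm,
    |theta_x| < 1 mrad (parallel beam).  Returns a boolean mask."""
    in_time = (np.abs(dx_ref) < 150e-6) & (np.abs(dy_ref) < 150e-6)
    in_diode = np.abs(y_dut) < 2.5e-3
    parallel = np.abs(theta_x) < 1.0e-3
    return in_time & in_diode & parallel

# Example: three tracks, of which only the first passes all cuts
dx = np.array([100e-6, 200e-6, 100e-6])
dy = np.array([50e-6, 50e-6, 50e-6])
yd = np.array([0.0, 0.0, 3.0e-3])
th = np.array([0.5e-3, 0.5e-3, 0.5e-3])
mask = select_tracks(dx, dy, yd, th)   # [True, False, False]
```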
\section{Introduction} Shaping materials into reduced-dimensional structures offers additional flexibility to manipulate their physical properties, especially electronic transport properties. In quasi-one-dimensional (quasi-1D) structures with uniform electronic states, the high aspect ratio, large surface-to-volume ratio, or quantum confinement leads to a spectrum of intriguing effects, \emph{e}.\emph{g}., tunable bandgaps in semiconductor nanowires \cite{1} and quantum interference in topological insulator nanoribbons \cite{2}. In particular, strongly anisotropic magnetic and magneto-transport behaviors \cite{3,4,5,6}, as well as manipulation of individual skyrmions \cite{7}, have been demonstrated in quasi-1D magnetic structures. In contrast to the usually observed homogeneous electronic states, electronic phase separation (EPS) at the mesoscopic scale can be stabilized in so-called electronically soft matter \cite{8}. A prototypical example is doped perovskite manganites, where the antiferromagnetic charge-ordered insulating (COI), ferromagnetic metallic (FMM), and paramagnetic insulating (PI) phases may coexist in a certain temperature range, due to the delicate coupling between the spin, charge, lattice, and orbital degrees of freedom \cite{9,10,11}. Dimensionality reduction has been adopted to further manipulate their physical properties. For example, spatial confinement engineering of phase-separated manganites, \emph{e}.\emph{g}., (La,Pr,Ca)MnO$_3$, effectively suppresses the number of transport paths and develops interesting electronic transport phenomena, including giant resistance jumps \cite{12,13,14,15}, reentrant metal-to-insulator transitions \cite{16}, and intrinsic tunneling magnetoresistance \cite{17}, where the evolution of a limited number of electronic domains plays a critical role. Previous studies of manganites usually adopted top-down lithography techniques to achieve spatial confinement \cite{12,13,14,15,16,17,18,19,20}.
The very compound nature of manganites introduces complexities at the etched edges, which may affect their properties substantially \cite{19,20,21}. Here we prepare edge-free La$_{0.33}$Pr$_{0.34}$Ca$_{0.33}$MnO$_3$ (LPCMO)/MgO core-shell nanowires with superior structural quality by a bottom-up method, which offer an ideal platform to investigate the intrinsic transport properties under quasi-1D confinement. We reveal that transport in the electronically phase-separated manganites under quasi-1D confinement is very sensitive to order parameter fluctuations, and that the confinement even manipulates the phase stability, eventually leading to emergent physical effects. \section{Results and discussion} The single-crystalline LPCMO/MgO core-shell nanowires were grown by a two-step method following our previous work.\cite{22} The MgO nanowires were grown on MgO(001) substrates using chemical vapor deposition. The LPCMO shell layers were then deposited on the as-prepared MgO nanowires at $750\,^{\circ}\mathrm{C}$ using pulsed laser deposition. A repetition rate of 3 Hz was used, and the O$_2$ pressure was kept at 40 Pa during the deposition. After growth, the samples were annealed at the growth temperature under 1000 Pa O$_2$ pressure for 1 hour. The morphology of the as-grown nanowires is shown in the scanning electron microscopy (SEM) image (Figure S1 in the Supporting Information). The length of the nanowires ranges between 1 and 10 $\mu$m. The core-shell structure, as well as the high single-crystalline quality with the absence of grain boundaries, is clearly evidenced by the transmission electron microscopy (TEM) images in Figure 1a. The diameter of the MgO cores is about 20 nm, and the thickness of the LPCMO shells is around 20-30 nm.
The slight thickness difference between the left and right sides of the LPCMO nanowire arises because the two sides face opposite directions during the growth.\cite{22,23} The high-resolution TEM image in Figure 1a shows that the LPCMO nanowires grow along the [001] direction, with a lattice constant of about 3.9 \AA, quite close to the bulk value of about 3.84 \AA. The LPCMO/MgO nanowires will be referred to as LPCMO nanowires hereafter for simplicity. For LPCMO, the FMM phase is known to dominate at low temperature \cite{9,24}. The aspect ratio of the LPCMO nanowires can be as high as 100, which should lead to giant magnetic shape anisotropy at low temperature. To avoid the large paramagnetic signal from the MgO substrates, the nanowires were transferred onto a non-magnetic polydimethylsiloxane (PDMS) substrate. Figure 1b displays typical zero-field-cooled magnetic hysteresis loops for the LPCMO nanowires measured at 10 K. The apparent saturation moment is larger when the magnetic field is applied in-plane than out-of-plane. As discussed in the Supporting Information, this difference actually reflects the different degrees of magnetization when the magnetic field is applied along the two directions, suggesting magnetic anisotropy with the easy axis along the nanowires. It is noted that the nanowires are oriented randomly on the substrate, so the magnetic anisotropy of a single nanowire should be far more significant than that observed for an ensemble of nanowires (more details concerning the magnetic anisotropy are given in the Supporting Information). The nanowires were then transferred onto SiO$_2$/Si substrates and patterned with Cr/Au electrodes for transport measurements, with a typical device shown in Figure 1c. The magnetic anisotropy is also reflected in the magnetoresistance (MR, defined as [\emph{R}(\emph{H})-\emph{R}(\emph{H}=0)]/\emph{R}(\emph{H}=0)) as a function of the angle between the applied magnetic field and the nanowire long axis, as shown in Figure 1d.
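The MR definition above is a simple ratio; as a minimal numerical sketch (the resistance values here are hypothetical, chosen only so that the ratio reproduces a percentage of the magnitude discussed for these nanowires):

```python
# Sketch of the MR definition MR = [R(H) - R(H=0)] / R(H=0).
# The resistance values below are hypothetical illustrations, not measured data.
def magnetoresistance(r_h: float, r_zero: float) -> float:
    """Fractional magnetoresistance for resistance r_h under field H."""
    return (r_h - r_zero) / r_zero

# Hypothetical example: R(H=0) = 1000 ohm, R(H) = 338 ohm
mr = magnetoresistance(338.0, 1000.0)
print(f"MR = {mr:.1%}")  # -66.2%
```

A negative MR of this magnitude corresponds to the field reducing the resistance to roughly a third of its zero-field value.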
At 65 K, the MR reaches its maximum magnitude (-66.2\%) when the magnetic field is parallel to the nanowire and its minimum magnitude (-62.4\%) when the field is perpendicular to the nanowire, which further validates the quasi-1D nature of the nanowires. Such quasi-1D confinement may further tune the physical properties substantially. Next we focus on the temperature-dependent transport behaviors. Figures 2a and 2b show the resistance (\emph{R}) as a function of temperature (\emph{T}) at different magnetic fields, with the field parallel and perpendicular to the LPCMO nanowire, respectively. These \emph{R}-\emph{T} curves reveal exotic new features, which are absent for LPCMO bulk \cite{9}, thin films \cite{24}, and strip structures \cite{12,13,14,15,16,17}. First, a resistance kink is observed at \emph{T}* $\sim$ 200 K, and the rapid increase of \emph{R} with decreasing \emph{T} is substantially suppressed below \emph{T}*, which strongly suggests the development of more conducting domains. The development of insulating states (for example, the COI state and the cluster glass state \cite{25}) can therefore be excluded unambiguously. On the other hand, \emph{T}* is well above the Curie temperature (\emph{T}$_{\rm{c}}$ $\sim$ 150 K, determined from the \emph{T}-dependent magnetic moment (\emph{m}-\emph{T}) curve shown in Figure S4a), so the resistance kink cannot be attributed to the onset of long-range ferromagnetic order. In correlated electron systems, precursor phases with order parameter fluctuations may appear before long-range-ordered phases develop. For example, the pseudogap phase could be the precursor of the global superconducting phase \cite{26}. For the manganites, nanoscale droplets with short-range ferromagnetic interactions were proposed to emerge above \emph{T}$_{\rm{c}}$ as the precursor of the FMM phase \cite{9,27,28}. The observed resistance kink at \emph{T}* can therefore be mainly attributed to the onset of such a magnetic nano-droplet state.
When \emph{T} decreases, the magnetic nano-droplets grow in size and eventually stabilize the long-range-ordered FMM phase below \emph{T}$_{\rm{c}}$. The application of a magnetic field favors the growth of the magnetic nano-droplets and increases \emph{T}$_{\rm{c}}$ accordingly, as illustrated in Figure S4 and Figure 3a. On the other hand, \emph{T}* is almost independent of the magnetic field (Figure 2 and Figure 3a), suggesting that the onset of the magnetic nano-droplet state is insensitive to the applied magnetic field. Such a resistance kink is absent for the LPCMO bulk, thin films, or strips, which can be ascribed to the fact that the magnetic nano-droplets are much smaller than the lateral size of these samples, so their contribution to the electronic transport is negligible. In sharp contrast, for the LPCMO nanowires with extreme aspect ratio, the size of the magnetic nano-droplets is comparable to the lateral size, and their appearance essentially modifies the transport behavior, manifested as a kink in the \emph{R}-\emph{T} curves. Below \emph{T}$_{\rm{c}}$, the FMM phase emerges and coexists with the insulating phases, developing an EPS state in which the fraction of the FMM phase grows with further decreasing \emph{T}. For the LPCMO nanowires, the wire width (below 100 nm) is much smaller than the size of the electronic domains in the EPS state (submicrometer) \cite{9,29,30}, and each metallic or insulating domain thus spans the width of the nanowire, connecting the domains in series. As shown in Figure S5, the \emph{R}-\emph{T} curve below \emph{T}* can indeed be well fitted by a series two-resistor model (more details are described in the Supporting Information), validating that only a single transport path, instead of a transport network, exists along the LPCMO nanowire. It is noted that the fitting curve substantially deviates from the experimental one below a critical temperature \emph{T}$_{\rm{qp}}$ ($\sim$65 K).
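The series two-resistor picture can be sketched numerically. The sketch below assumes, as is common for manganites, a metallic term with residual plus $T^2$ scattering contributions and a thermally activated insulating term; the functional forms and every parameter value are illustrative assumptions, not the actual fit of the Supporting Information:

```python
import numpy as np

# Hedged sketch of a series two-resistor model for a quasi-1D nanowire:
# metallic and insulating domains add in series along the single transport path.
# All functional forms and parameter values are illustrative assumptions.
K_B = 8.617e-5  # Boltzmann constant in eV/K

def r_metal(T, r0=50.0, a=2e-3):
    """Metallic contribution (ohm): residual resistance plus a T^2 term."""
    return r0 + a * T**2

def r_insulator(T, b=1.0, e_a=0.1):
    """Thermally activated insulating contribution (ohm), gap e_a in eV."""
    return b * np.exp(e_a / (K_B * T))

def r_series(T, f=0.5):
    """Series combination weighted by the metallic length fraction f."""
    return f * r_metal(T) + (1.0 - f) * r_insulator(T)

T = np.linspace(60.0, 200.0, 8)
print(r_series(T))  # resistance falls with T: the activated term dominates here
```

With these assumed parameters the insulating term dominates at low temperature, reproducing the insulating-like rise of the total resistance on cooling above the tunneling regime.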
For zero or low magnetic fields, an apparent insulator-metal transition is featured at \emph{T}$_{\rm{qp}}$, while the residual resistance remains extremely high at low temperature. When the magnetic field is higher than 6 T, the apparent insulator-metal transition disappears, and a high-resistance plateau develops in the low temperature range below \emph{T}$_{\rm{qp}}$. These behaviors differ essentially from those of LPCMO thin films, where only a typical insulator-metal transition shows up, with a low residual resistance at low temperature (Figure S6a). It is also noted that \emph{T}$_{\rm{qp}}$ decreases with increasing magnetic field for the LPCMO nanowires, while the insulator-metal transition temperature increases with increasing magnetic field for the LPCMO thin films (Figure S6a), further suggesting that they have different origins. The observed high-resistance plateau is also different from the behavior of insulators, whose resistance increases exponentially with decreasing \emph{T}. For LPCMO thin films, as \emph{T} decreases, the insulating phases become more and more energetically unfavorable, and the fraction of the FMM phase increases continuously at the expense of the insulating phases. When the FMM fraction exceeds the percolation threshold (about 25\% and 50\% for the three-dimensional simple cubic lattice and the two-dimensional square lattice, respectively \cite{31}), percolative conducting paths form through the connected FMM domains, manifested as an insulator-metal transition, leading to a low residual resistance \cite{24,29}. For an LPCMO nanowire with extreme aspect ratio, however, only a single transport path exists as discussed earlier, and thus each domain contributes to the transport substantially. In the classic 1D percolation picture, percolative conduction develops only when the insulating regions disappear completely; namely, the percolation threshold for the FMM phase is 100\% for such quasi-1D transport.
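The classic 1D percolation argument above can be illustrated with a short Monte Carlo sketch: a chain of domains in series conducts only if every domain is metallic, so the conduction probability is $p^N$ and vanishes for any metallic fraction $p<1$ as the chain grows. The chain lengths and metallic fraction below are arbitrary illustrative choices:

```python
import random

# Sketch of classic 1D percolation: a chain of n domains in series conducts
# only if every domain is metallic, so P(conduct) = p**n and the threshold
# for the metallic phase is p = 1 (i.e., 100%).
def conducts(p: float, n: int, rng: random.Random) -> bool:
    """True if all n domains along the chain are metallic (probability p each)."""
    return all(rng.random() < p for _ in range(n))

def conduction_probability(p: float, n: int, trials: int = 20000, seed: int = 0) -> float:
    rng = random.Random(seed)
    return sum(conducts(p, n, rng) for _ in range(trials)) / trials

for n in (5, 20, 80):
    # Even at a 95% metallic fraction, long chains almost never percolate.
    print(n, round(conduction_probability(0.95, n), 3))
```

This is why, classically, the quasi-1D nanowire should remain highly resistive as long as any insulating segment survives along the path.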
Therefore, at low temperature the carriers still have to traverse the much reduced insulating domains, although the FMM phase dominates the nanowires. Strikingly, when an insulating domain shrinks to be thin enough, it may act as an intrinsic tunneling barrier allowing electron tunneling between its two neighboring FMM domains. It has been shown that among the three Rowell criteria commonly adopted to identify tunneling junctions, only the weak insulating-like temperature dependence of the resistance remains a solid criterion \cite{32}. As seen in Figure 2, all the observed high-resistance plateaus under magnetic fields higher than 6 T indeed exhibit weak insulating behavior. The \emph{R}-\emph{T} curves below \emph{T}$_{\rm{qp}}$ measured at 6 T, 9 T, and 14 T were then fitted using the well-established tunneling equation $R = \frac{W}{1 + (T/T_0)^2}$ \cite{33}, where \emph{W} and \emph{T}$_0$ are the fitting parameters. As plotted in Figure 2c, all these experimental curves can be well fitted in the low temperature range, further verifying that well-defined tunneling junctions are formed under high field below \emph{T}$_{\rm{qp}}$. The \emph{T}-dependent transport behaviors under different magnetic fields can be understood as follows. In all cases, \emph{T}$_{\rm{qp}}$ signifies the onset of quantum tunneling, instead of an insulator-metal transition. For zero or low magnetic fields, only a very small fraction of the insulating domains are thin enough to act as tunneling barriers at \emph{T}$_{\rm{qp}}$ (see Figure 3b), which is further verified by the weak tunneling MR effect obtained in the low temperature regime (see Figure S7 and detailed discussions in the Supporting Information). The remaining insulating regions further convert into the FMM phase with decreasing \emph{T}, leading to a decreasing resistance below \emph{T}$_{\rm{qp}}$.
For magnetic fields higher than 6 T, in contrast, the insulating phases are effectively suppressed by the high field and survive only as a new type of domain wall between FMM domains, acting as well-defined tunneling barriers at \emph{T}$_{\rm{qp}}$ (also see Figure 3b) and resulting in the observed high-resistance plateaus below \emph{T}$_{\rm{qp}}$. The survival of the insulating phases below \emph{T}$_{\rm{qp}}$, even under high magnetic fields, is also confirmed by the magnetic measurements. Figures S4a-f show the \emph{m}-\emph{T} curves under different magnetic fields. The zero-field-cooling (ZFC) and field-cooling (FC) curves deviate below a blocking temperature \emph{T}$_{\rm{b}}$. Such deviation identifies a blocked state with EPS \cite{34}. As summarized in Figure 4a, \emph{T}$_{\rm{b}}$ decays slowly in the high-field region and is still about 20 K even under a field of up to 7 T. This observation suggests that the insulating phases are robust against the applied magnetic field in the LPCMO nanowires and can serve as tunneling barriers when shrunk thin enough. In contrast, \emph{T}$_{\rm{b}}$ almost goes to zero under high magnetic field for the LPCMO thin films (see Figure 4a and Figures S6b-g), which implies the complete disappearance of the insulating phases under high magnetic field in the thin films. The distribution of the magnetic domains was further directly imaged using home-made magnetic force microscopy (MFM) \cite{35,36}. Figure 4b shows the topography image of an LPCMO nanowire with a length of about 1 $\mu$m, where the width is broadened by the tip geometric effect. The corresponding MFM images under different applied magnetic fields, scanned at 50 K, are displayed in Figures 4c,d, with more in Figures S8b-j. It is clear that in response to the increasing magnetic field, the FMM domains grow in size at the expense of the insulating phases.
Nevertheless, the insulating phases still survive even under a magnetic field as high as 3 T, as evidenced by the contrast variation in Figure 4d and the corresponding line profile shown in Figure 4g. It is noted that there is no direct correlation between the MFM images and the topography image. Moreover, the locations of the insulating domains and their separations change with magnetic field. When the field is altered from 0.03 T to 3 T, for example, the separation between the two insulating domains indicated by the arrows in Figures 4f and 4g changes by about 100 nm. These observations strongly suggest that the robustness of the insulating phases under high magnetic field in the low temperature range does not originate from defect pinning, and collectively point to an intrinsic origin in the quasi-1D confinement. An apparent insulator-metal transition with high residual resistance at low temperature has also been observed for LPCMO submicrometer-wide bridges and was likewise attributed to tunneling across insulating barriers \cite{17}. However, the insulating barriers in the bridges are metastable and disappear upon application of a relatively weak magnetic field (below 1 T) \cite{17}. In sharp contrast, the insulating barriers in the LPCMO nanowires are so robust that they survive even under magnetic fields up to 14 T. It is noted that no giant resistance jump is observed in the \emph{R}-\emph{T} curves (Figure 2). Correspondingly, the magnetic-field-dependent MFM images show that the FMM domains grow and the insulating domains shrink only gradually. In contrast, giant resistance jumps were generally observed for the micrometer-wide strips or submicrometer-wide bridges, attributed to a sudden phase conversion of a single domain from insulating to FMM \cite{13,14}. Our observations further support the robustness of the insulating phases in the LPCMO nanowires with extreme aspect ratio.
We also performed \emph{R}-\emph{T} measurements for several other LPCMO nanowires, and the results shown in Figure S9 reveal the same features as in Figure 2, suggesting that the properties discussed above are universal for the LPCMO nanowires. On the other hand, strain between the LPCMO overlayer and the MgO template might also play a role in determining the observed properties. From the TEM characterization (Figure 1a), the lattice spacing of the LPCMO part is about 3.9 \AA, close to the bulk lattice spacing of about 3.84 \AA, while significantly different from the MgO lattice spacing of about 4.2 \AA. Moreover, a Moir\'{e} pattern is also observed, further confirming the lattice difference between the LPCMO overlayer and the MgO template. These results suggest that the stress induced by the lattice mismatch is quickly released in the initial few LPCMO layers and cannot account for the observed novel phenomena in the LPCMO nanowires. Therefore the dimensionality should be mainly responsible for the observed effects, especially the robust quantum tunneling across the intrinsic insulating barriers. It has been proposed theoretically that in an EPS system such as a manganite, two ferromagnetic domains can be separated by a new type of domain wall, i.e., a stripe of antiferromagnetic insulating phase \cite{37,38,39}. Based on a standard double exchange model, it was predicted that an insulating domain wall can be energetically lower than a conventional Bloch domain wall, provided that the magnetic anisotropy is sufficiently strong \cite{38}. Unlike the conventional domain wall, the insulating domain wall may survive even when the spins of the ferromagnetic domains are aligned.
For the present LPCMO nanowires with extreme aspect ratio, significant magnetic anisotropy is revealed by the magnetic and transport measurements, and can thus stabilize the insulating domain walls between FMM domains under magnetic fields up to 14 T, leading to the observed tunneling transport. As illustrated above, this picture is also verified by the MFM characterization and the magnetic measurements. In contrast, the conventional Bloch domain walls disappear at relatively small fields, as can be inferred from the closing of the hysteresis loops at about 0.5 T shown in Figure 1b. We stress again that the robustness of the insulating domain walls is not caused by pinning at defects, as discussed earlier. Instead, this robustness should be attributed to the strong magnetic anisotropy of the quasi-1D nanowires, consistent with the theoretical prediction.\cite{38} Previous studies have revealed that the quantum nature of electrons modifies the classic percolation picture for three-dimensional and two-dimensional electron systems, in that quantum interference plays an important role \cite{40}. Analogously, the quantum behavior of electrons may also alter the classic 1D percolation picture, in which percolative conduction can only be reached when the system is completely in the metallic phase. Here we demonstrate that this classic percolation picture is modified essentially, and a novel quantum percolation state is realized by quantum tunneling across the intrinsic insulating barriers along the quasi-1D transport path. Such a quantum percolation state is robust in the low temperature range even under magnetic fields up to 14 T, and the revised phase diagram for the LPCMO nanowires is depicted in Figure 3.
In summary, by using single-crystalline LPCMO/MgO core-shell nanowires as a model system, we demonstrate that the quasi-1D confinement on the electronically phase-separated manganites substantially enhances the sensitivity of transport to even probe the magnetic fluctuations, \emph{e}.\emph{g}., the magnetic nano-droplets in the insulating matrix, which is the precursor to the FMM phase. More interestingly, the quasi-1D confinement can even modify the phase competition to stabilize thin insulating domains at low temperatures, which serve as tunneling barriers to form intrinsic tunneling junctions. Such tunneling effect survives even under magnetic field up to 14 T, and essentially modifies the classic 1D percolation picture to stabilize a novel quantum percolation state. A new phase diagram for this model manganite system under quasi-1D confinement is thus established for the first time, which differs substantially from that for the bulk or thin films. Our findings inspire new insight into understanding and manipulation of the EPS and corresponding magneto-transport properties in electronically soft matters via dimensionality control, and thus hold great promises towards electronic device applications. \section{Introduction} Shaping materials into reduced-dimensional structures offers additional flexibility to manipulate their physical properties, especially electronic transport properties. As for quasi-one-dimensional (quasi-1D) structures in uniform electronic states, high aspect ratio, large surface-to-volume ratio, or quantum confinement lead to a spectrum of intriguing effects, \emph{e}.\emph{g}., tunable bandgaps in semiconductor nanowires \cite{1}, and quantum interference in topological insulator nanoribbons \cite{2}. In particular, strong anisotropic magnetic and magneto-transport behaviors \cite{3,4,5,6}, as well as manipulation of individual skyrmions \cite{7} have been demonstrated in quasi-1D magnetic structures. 
In contrast to the usually observed homogeneous electronic states, electronic phase separation (EPS) at mesoscopic scale can be stabilized in so-called electronically soft matters \cite{8}. A prototypical example is doped perovskite manganites, where antiferromagnetic charge-ordered insulating (COI) phase, ferromagnetic metallic (FMM) phase, and paramagnetic insulating (PI) phase may coexist in a certain range of temperature, due to the delicate coupling between the spin, charge, lattice, and orbital degrees of freedom \cite{9,10,11}. Dimensionality reduction has been adopted to further manipulate their physical properties. For example, spatial confinement engineering of phase-separated manganites, \emph{e}.\emph{g}., (La,Pr,Ca)MnO$_3$, effectively suppresses the number of transport paths, and develops interesting electronic transport phenomena, including giant resistance jumps \cite{12,13,14,15}, reentrant metal to insulator transitions \cite{16}, and intrinsic tunneling magnetoresistance \cite{17}, where the evolution of limited numbers of electronic domains plays critical roles. Previous studies of manganites usually adopted top-down lithography technique to achieve spatial confinement \cite{12,13,14,15,16,17,18,19,20}. The very compound nature of manganites introduces complexities at the etched edges, which may affect their properties substantially \cite{19,20,21}. Here we prepare edge-free La$_{0.33}$Pr$_{0.34}$Ca$_{0.33}$MnO$_3$ (LPCMO)/MgO core-shell nanowires with superior structural quality by bottom-up method, which offer an ideal platform to investigate the intrinsic transport properties under quasi-1D confinement. We reveal that the quasi-1D confinement on the electronically phase-separated manganites is very sensitive to the order parameter fluctuations and even manipulates the phase stability, eventually leading to emergent physical effects. 
\section{Results and discussion} The single-crystalline LPCMO/MgO core-shell nanowires were grown by a two-step method following our previous work.\cite{22} The MgO nanowires were grown on MgO(001) substrates using chemical vapor deposition. The LPCMO shell layers were then deposited on the as-prepared MgO nanowires at $750\,^{\circ}\mathrm{C}$ using pulsed laser deposition. A repetition rate of 3 Hz was used, and the O$_2$ pressure was kept at 40 Pa during the deposition. After growth, the samples were annealed at the growth temperature with 1000 Pa O$_2$ pressure for 1 hour. The morphology of the as-grown nanowires is shown in the scanning electron microscopy (SEM) image (Figure S1 in the Supporting Information). The length of the nanowires ranges between 1 and 10 $\mu$m. The core-shell structure, as well as the high single-crystalline quality with the absence of grain boundary, are clearly evidenced from the transmission electron microscopy (TEM) images in Figure 1a. The diameter of the MgO cores is about 20 nm, and the thickness of the LPCMO shells is around 20-30 nm. The slight thickness difference for the left and right sides of the LPCMO nanowire is due to that the two sides face oppositely during the growth.\cite{22,23} High-resolution TEM image in Figure 1a shows the LPCMO nanowires grow along the [001] direction, with a lattice constant of about 3.9 \AA, which is quite close to the bulk value of about 3.84 \AA. The LPCMO/MgO nanowires will be referred to as LPCMO nanowires hereafter for simplicity. For LPCMO, FMM phase is revealed to dominate at low temperature \cite{9,24}. The aspect ratio of the LPCMO nanowires can be as high as 100, and thus should lead to giant magnetic shape anisotropy at low temperature. To avoid large paramagnetic signal from the MgO substrates, the nanowires were transferred onto a non-magnetic polydimethylsiloxane (PDMS) substrate. 
Figure 1b displays the typical zero field-cooled magnetic hysteresis loops for the LPCMO nanowires measured at 10 K. The apparent saturation moment is larger when the magnetic field is applied in-plane than out-of-plane. As discussed in the Supporting Information, such difference actually reflects the different degrees of magnetization when applying magnetic field along the two directions, suggesting magnetic anisotropy with easy axis along the nanowires. It is noted that the nanowires are oriented randomly on the substrate, so the magnetic anisotropy of a single nanowire should be far more significant than that observed for abundant nanowires (more details concerning the magnetic anisotropy are described in the Supporting Information). The nanowires were then transferred onto SiO$_2$/Si substrates and patterned with the Cr/Au electrodes for transport measurements, with a typical device shown in Figure 1c. The magnetic anisotropy is also reflected by the magnetoresistance (MR, defined as [\emph{R}(\emph{H})-\emph{R}(\emph{H}=0)]/\emph{R}(\emph{H}=0)) as a function of the angle between the applied magnetic field and the nanowire long axis, as shown in Figure 1d. At 65 K, the MR value reaches maximum (-66.2\%) when the magnetic field is parallel to the nanowire and reaches minimum (-62.4\%) when vertical to the nanowire, which further validates the quasi-1D nature of the nanowires. Such quasi-1D confinement may further tune the physical properties substantially. Next we focus on the temperature-dependent transport behaviors. Figures 2a and 2b show the resistance (\emph{R}) dependent on the temperature (\emph{T}) at different magnetic fields with the field parallel and perpendicular to the LPCMO nanowire, respectively. These \emph{R}-\emph{T} curves reveal exotic new features, which are absent for LPCMO bulk \cite{9}, thin films \cite{24} and strip structures \cite{12,13,14,15,16,17}. 
First, a resistance kink is observed at \emph{T}* $\sim$ 200 K, and the rapid increasing trend of \emph{R} with decreasing \emph{T} is substantially suppressed below \emph{T}*, which strongly suggests the development of more conducting domains. Therefore the development of insulating states (for example, COI state and cluster glass state \cite{25}) can be excluded unambiguously. On the other hand, \emph{T}* is well above the Curie temperature (\emph{T}$_{\rm{c}}$ $\sim$ 150 K, which is determined from the \emph{T}-dependent magnetic moment (\emph{m}-\emph{T}) curve shown in Figure S4a), the resistance kink thus cannot be attributed to the onset of long-range ferromagnetic order. For correlated electron systems, precursor phases with order parameter fluctuations may appear before the development of long-range-order phases. For example, the pseudogap phase could be the precursor of the global superconducting phase \cite{26}. As for the manganites, nanoscale droplet with short-range ferromagnetic interaction was proposed to emerge above \emph{T}$_{\rm{c}}$, which is the precursor of the FMM phase \cite{9,27,28}. Here the observed resistance kink at \emph{T}* therefore can be mainly attributed to the onset of such magnetic nano-droplet state. When \emph{T} decreases, the magnetic nano-droplets grow in size and eventually stabilize the FMM phase with long-range order below \emph{T}$_{\rm{c}}$. The application of magnetic field favors the growth of the magnetic nano-droplets and increases \emph{T}$_{\rm{c}}$ accordingly, as illustrated in Figure S4 and Figure 3a. On the other hand, \emph{T}* is almost independent on the magnetic field (Figure 2 and Figure 3a), suggesting that the onset of the magnetic nano-droplet state is insensitive to the applied magnetic field. 
Such resistance kink is absent for the LPCMO bulk, thin films or strips, and could be ascribed to the fact that the size of the magnetic nano-droplets is much smaller than the lateral size of these samples, and their contribution to the electronic transport is therefore negligible. In sharp contrast, for the LPCMO nanowires with extreme aspect ratio, the size of the magnetic nano-droplets is comparable to the lateral size, and their appearance essentially modifies the transport behavior, manifested as a kink in the \emph{R}-\emph{T} curves. Below \emph{T}$_{\rm{c}}$, FMM phase emerges, coexisting with the insulating phases, developing an EPS state, in which the FMM phase grows in proportion with further decreasing \emph{T}. For the LPCMO nanowires, the wire width (below 100 nm) is much smaller than the size of the electronic domains in the EPS state (submicrometer) \cite{9,29,30}, and each metallic or insulating domain thus spans the width of the nanowires to connect each other in series. As shown in Figure S5, the \emph{R}-\emph{T} curve below \emph{T}* can indeed be well fitted by a series two-resistor model (more details are described in the Supporting Information), validating that only a single transport path instead of transport network exists along the LPCMO nanowire. It is noted that the fitting curve substantially deviates from the experimental one below a critical temperature \emph{T}$_{\rm{qp}}$ ($\sim$65 K). An apparent insulator-metal transition at \emph{T}$_{\rm{qp}}$ is featured for zero or low magnetic fields, while the residual resistance remains extreme high at low temperature. When the magnetic field is higher than 6 T, the apparent insulator-metal transition disappears, and a high-resistance plateau develops in the low temperature range below \emph{T}$_{\rm{qp}}$. These behaviors differ essentially from that of LPCMO thin films, where only a typical insulator-metal transition shows up with low residual resistance at low temperature (Figure S6a). 
It is also noted that \emph{T}$_{\rm{qp}}$ decreases with increasing magnetic field for the LPCMO nanowires, while the insulator-metal transition temperature increases with increasing magnetic field for the LPCMO thin films (Figure S6a), further suggesting that they have different origins. The observed high-resistance plateau is also different from the behavior of insulators, which shows exponentially increasing resistance with decreasing \emph{T}. For LPCMO thin films, as \emph{T} decreases, the insulating phases is getting more and more energetically unfavorable, and the fraction of the FMM phase increases continuously at the expense of the insulating phases. When it is above the percolation threshold (about 25\% and 50\% for three-dimensional simple cubic lattice and two-dimensional square lattice, respectively \cite{31}), percolative conducting paths form through the connected FMM domains, manifested as an insulator-metal transition, leading to a low residual resistance \cite{24,29}. For a LPCMO nanowire with extreme aspect ratio, however, only a single transport path exists as discussed earlier, and thus each domain should contribute to the transport substantially. In the classic 1D percolation picture, percolative conduction develops only when the insulating regions disappear completely, namely, the percolation threshold for the FMM phase is 100\% for such quasi-1D transport. Therefore at low temperature, the carriers still have to go through the much reduced insulating domains, although the FMM phase dominates the nanowires. Strikingly, when an insulating domain is shrunk to be thin enough, it may act as an intrinsic tunneling barrier to allow electron tunneling between its two neighboring FMM domains. It has been shown that among the three Rowell criteria commonly adopted to identify tunneling junctions, only the weak insulating-like temperature dependence of the resistance remains a solid criterion \cite{32}. 
As seen in Figure 2, all the observed high-resistance plateaus under magnetic fields higher than 6 T indeed exhibit weak insulating behavior. The \emph{R}-\emph{T} curves below \emph{T}$_{\rm{qp}}$ measured at 6 T, 9 T and 14 T were then fitted using the well-established tunneling equation $R = \frac{W}{1 + \left( \frac{T}{T_0} \right)^2}$ \cite{33}, where \emph{W} and \emph{T}$_0$ are the fitting parameters. As plotted in Figure 2c, all these experimental curves can be well fitted in the low temperature range, further verifying that well-defined tunneling junctions are formed under high field below \emph{T}$_{\rm{qp}}$. The \emph{T}-dependent transport behaviors under different magnetic fields can be understood as follows. In all cases, \emph{T}$_{\rm{qp}}$ signifies the onset of quantum tunneling, instead of an insulator-metal transition. For zero or low magnetic fields, only a very small fraction of the insulating domains are thin enough to act as tunneling barriers at \emph{T}$_{\rm{qp}}$ (see Figure 3b), which is further verified by the weak tunneling MR effect obtained in the low temperature regime (see Figure S7 and detailed discussions in the Supporting Information). The remaining insulating regions further convert into the FMM phase with decreasing \emph{T}, leading to a decreasing resistance below \emph{T}$_{\rm{qp}}$. For magnetic fields higher than 6 T, by contrast, the insulating phases are effectively suppressed by the high field, and only survive as a new type of domain wall between FMM domains, acting as well-defined tunneling barriers at \emph{T}$_{\rm{qp}}$ (also see Figure 3b) and resulting in the observed high-resistance plateaus below \emph{T}$_{\rm{qp}}$. The survival of the insulating phases below \emph{T}$_{\rm{qp}}$ even under high magnetic fields is also confirmed by the magnetic measurements. Figures S4a-f show the \emph{m}-\emph{T} curves under different magnetic fields. 
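The fitting step can be sketched as follows: since $1/R = 1/W + T^2/(W T_0^2)$, the two parameters follow from an ordinary linear fit of $1/R$ against $T^2$. The numbers below are synthetic placeholders, not the paper's measured values.

```python
# Hedged sketch of fitting R(T) = W / (1 + (T/T0)^2): the form linearizes
# as 1/R = 1/W + T^2/(W*T0^2), so W and T0 follow from a linear fit of
# 1/R against T^2. Data below are synthetic, not the paper's.
def fit_tunneling(temps, resistances):
    xs = [t * t for t in temps]
    ys = [1.0 / r for r in resistances]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    w = 1.0 / intercept                  # plateau resistance W
    t0 = (intercept / slope) ** 0.5      # characteristic temperature T0
    return w, t0

w_true, t0_true = 5.0e6, 40.0            # illustrative values (ohm, K)
temps = [5.0, 10.0, 20.0, 30.0, 50.0, 65.0]
res = [w_true / (1.0 + (t / t0_true) ** 2) for t in temps]
w_fit, t0_fit = fit_tunneling(temps, res)
```

On noise-free synthetic data the fit recovers $W$ and $T_0$ exactly; on experimental curves one would restrict the fit to the range below \emph{T}$_{\rm{qp}}$, as done in Figure 2c.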
The zero-field-cooling (ZFC) curve and the field-cooling (FC) curve deviate below a blocking temperature \emph{T}$_{\rm{b}}$. Such deviation identifies a blocked state with EPS \cite{34}. As summarized in Figure 4a, \emph{T}$_{\rm{b}}$ decays slowly in the high field region, and is about 20 K even under a field up to 7 T. This observation suggests that the insulating phases are robust against the applied magnetic field in the LPCMO nanowires, and can serve as tunneling barriers when they are shrunk to be thin enough. In contrast, \emph{T}$_{\rm{b}}$ almost goes to zero under high magnetic field for the LPCMO thin films (see Figure 4a and Figures S6b-g), which implies the complete disappearance of the insulating phases under high magnetic field in the thin films. The distribution of the magnetic domains was further directly imaged using a home-made magnetic force microscope (MFM) \cite{35,36}. Figure 4b shows the topography image of a LPCMO nanowire with a length of about 1 $\mu$m, where the width is broadened by the tip geometric effect. The corresponding MFM images under different applied magnetic fields scanned at 50 K are displayed in Figures 4c,d, with more in Figures S8b-j. It is clear that in response to the increasing magnetic field, the FMM domains grow in size at the expense of the insulating phases. Nevertheless, the insulating phases still survive even under a magnetic field as high as 3 T, as evidenced by the contrast variation in Figure 4d and the corresponding line profile shown in Figure 4g. It is noted that there is no direct correlation between the MFM images and the topography image. Moreover, the locations of the insulating domains and their distances change with magnetic field. When the field is altered from 0.03 T to 3 T, for example, the distance variation between the two insulating domains indicated by the arrows in Figures 4f and 4g is about 100 nm. 
These observations strongly suggest that the robustness of the insulating phases under high magnetic field in the low temperature range does not originate from defect pinning, and collectively point to an intrinsic origin of quasi-1D confinement. An apparent insulator-metal transition with high residual resistance at low temperature has also been observed for LPCMO submicrometer-wide bridges, and was likewise attributed to tunneling across the insulating barriers \cite{17}. However, the insulating barriers are metastable in the bridges, and disappear upon application of a relatively weak magnetic field (below 1 T) \cite{17}. In sharp contrast, the insulating barriers in the LPCMO nanowires are so robust that they survive even under magnetic field up to 14 T. It is noted that no giant resistance jump is observed in the \emph{R}-\emph{T} curves (Figure 2). Correspondingly, the magnetic-field-dependent MFM images show that the growth of the FMM domains and the shrinkage of the insulating domains evolve only gradually. In contrast, giant resistance jumps were generally observed for micrometer-wide strips or submicrometer-wide bridges, which were attributed to a sudden phase conversion of a single domain from insulating to FMM \cite{13,14}. Our observations further support the robustness of the insulating phases in the LPCMO nanowires with extreme aspect ratio. We also performed \emph{R}-\emph{T} measurements for several other LPCMO nanowires, and the results shown in Figure S9 reveal the same features as in Figure 2, suggesting that the properties discussed above are universal for the LPCMO nanowires. On the other hand, strain between the LPCMO overlayer and the MgO template may also play a role in determining the observed properties. From the TEM characterization (Figure 1a), the lattice spacing of the LPCMO part is about 3.9 \AA, close to the bulk lattice spacing of about 3.84 \AA, while significantly different from the MgO lattice spacing of about 4.2 \AA. 
Moreover, a Moir\'{e} pattern is also observed, further confirming the lattice difference between the LPCMO overlayer and the MgO template. These results suggest that the stress induced by the lattice mismatch is quickly released within the initial few LPCMO layers, and cannot account for the novel phenomena observed in the LPCMO nanowires. Therefore the dimensionality should be mainly responsible for the observed effects, especially the robust quantum tunneling across the intrinsic insulating barriers. It has been proposed theoretically that in an EPS system such as a manganite, two ferromagnetic domains can be separated by a new type of domain wall, i.e., a stripe of antiferromagnetic insulating phase \cite{37,38,39}. Based on a standard double exchange model, it was predicted that an insulating domain wall can be energetically lower than a conventional Bloch domain wall, provided that the magnetic anisotropy is sufficiently strong \cite{38}. Unlike the conventional domain wall, the insulating domain wall may still survive even when the spins of the ferromagnetic domains are aligned. Here, for the present LPCMO nanowires with extreme aspect ratio, significant magnetic anisotropy is revealed by the magnetic and transport measurements, and can thus stabilize the insulating domain walls between FMM domains under magnetic field up to 14 T, leading to the observed tunneling transport. As illustrated in the previous sections, this picture is also verified by the MFM characterization and the magnetic measurements. In contrast, the conventional Bloch domain walls disappear at relatively small fields, as can be inferred from the closing of the hysteresis loops at about 0.5 T shown in Figure 1b. We stress here again that the robustness of the insulating domain walls is not caused by the pinning effect of defects, as discussed earlier. 
Instead, this robustness should be attributed to the strong magnetic anisotropy of the quasi-1D nanowires, consistent with the theoretical prediction \cite{38}. Previous studies have revealed that the quantum nature of electrons modifies the classic percolation picture for three-dimensional and two-dimensional electron systems, in that quantum interference plays an important role \cite{40}. Analogously, the quantum behavior of electrons may also alter the classic 1D percolation picture, in which percolative conduction can only be reached when the system is completely in the metallic phase. Here we demonstrate that this classic percolation picture is modified essentially, and a novel quantum percolation state is realized by quantum tunneling across the intrinsic insulating barriers along the quasi-1D transport path. Such a quantum percolation state is robust in the low temperature range even under magnetic field up to 14 T, and the revised phase diagram for the LPCMO nanowires is depicted in Figure 3. In summary, by using single-crystalline LPCMO/MgO core-shell nanowires as a model system, we demonstrate that the quasi-1D confinement of electronically phase-separated manganites substantially enhances the sensitivity of transport, enabling it to probe even magnetic fluctuations, \emph{e.g.}, the magnetic nano-droplets in the insulating matrix, which are the precursor to the FMM phase. More interestingly, the quasi-1D confinement can even modify the phase competition to stabilize thin insulating domains at low temperatures, which serve as tunneling barriers to form intrinsic tunneling junctions. Such a tunneling effect survives even under magnetic field up to 14 T, and essentially modifies the classic 1D percolation picture to stabilize a novel quantum percolation state. A new phase diagram for this model manganite system under quasi-1D confinement is thus established for the first time, which differs substantially from that for the bulk or thin films. 
Our findings provide new insight into the understanding and manipulation of the EPS and the corresponding magneto-transport properties in electronically soft matter via dimensionality control, and thus hold great promise for electronic device applications.
\section{Introduction}\label{sec1:level1} The theoretical development and experimental realizations of small but nontrivial quantum systems have already had a significant impact, especially in quantum computing \cite{Montanaro_2016,Wendin_2017,Preskill_2018,Arute:2019zxq}, quantum chemistry \cite{Kandala_2017}, and secure communication systems \cite{Yin2020}. Nevertheless, further progress is required to validate, benchmark, and fully exploit such quantum systems. To this end, a topic of particular importance is the measurement of an ensemble of copies of a quantum state, associated with a quantum system of interest, and the subsequent reconstruction of the state's density matrix from the recorded measurements, a process known as quantum state tomography (QST) \cite{MauroDAriano2003}. The relation of QST to various classical data processing problems is relatively simple to describe: in both cases, quantum and classical, the task is to reconstruct an unknown initial state, given data from the output state. QST can be performed by many classical techniques such as maximum likelihood estimation, compressed sensing, and Bayesian state tomography, among others. Although these techniques are sophisticated, two issues remain. The first is a lack of versatility, since these techniques are often restricted to a subclass of quantum states suited to a particular method. The second is that these techniques often require heavily preprocessed data as their input. To be more precise, the output measurements are often considered at a very abstract level: empirical estimates of certain expectation values. The preprocessing referred to above corresponds to a nontrivial measurement chain involved in the readout of the quantum devices. 
For example, superconducting qubits tend to utilise the so-called dispersive readout \cite{wendin2017quantum, PhysRevApplied.7.054020}, which we explain in further detail in the following, and which produces a complex-valued signal in response to a pulse \cite{McClure2016}. This signal is processed into the so-called IQ-plane data, which are complex-valued. The empirical estimates of the expectation values of the projective measurement operators are then obtained from the IQ-plane data. In this paper, we show how to perform QST starting from the IQ-plane data, prior to the preprocessing, which is often heuristic and which often introduces non-Gaussian artifacts. The precision of QST depends on the particular reconstruction method as well as the number and type of measurements performed. Our approach can be seen as a joint problem, while processing IQ-plane data independently of QST can be seen as a decomposition of the joint problem. While there are information-theoretic arguments \cite{Haah2016} showing that certain algorithms are optimal with respect to the number of samples needed for the latter step of the decomposition, assuming that the former step is performed without any error, in practice there will be errors. The joint problem, which utilizes the IQ-plane data or the signal of the dispersive readout directly, makes it possible to obtain better estimates for a given number of samples than the decomposition approach. The sample complexity, in turn, translates to the computational efficiency of the reconstruction method; it is not uncommon to have weeks of postprocessing time for QST of nontrivial devices. Improving the precision of QST may have a considerable impact: to implement quantum gates, one needs to characterize the operation of a quantum device by running a series of known inputs and reconstructing the corresponding outputs using QST. 
Additionally, QST provides the current industry standard for the verification of quantum devices, such as the fidelity estimation of 2-photon CNOT gates \cite{cnot}, and many benchmarking procedures. The role of QST in quantum technologies is hence of fundamental importance. \subsection{\label{sec1:level2} Notation} Throughout this article we make use of the following notation. By $\mathbb{N}, \mathbb{Z}, \mathbb{R}, \mathbb{C}, \mathbb{K}^n, \mathbb{K}^{n\times m}$ we denote the semi-ring of naturals, the ring of integers, the field of reals, the field of complex numbers, the linear space of vectors with values in $\mathbb{K}$, and the linear space of $n\times m$ matrices with values in $\mathbb{K}$, respectively. By $SU(n)$ we denote the group of $n\times n$ special unitary matrices. We further denote by $\mathbb{S}_+^n, \mathbb{S}_{++}^n$ the cone of $n\times n$ positive semidefinite matrices and the space of $n\times n$ positive definite matrices, respectively. The space of symmetric $n\times n$ matrices is denoted by $\mathbb{S}^n$. The space of $n\times n$ Hermitian matrices is denoted by $\mathbb{H}^n$, while the subset of these matrices that have trace one is denoted by $\mathbb{H}_1^n$. Similarly, $\widehat{\mathbb{H}}$ denotes the space of estimated Hermitian matrices. For any matrix $\bs{A}$, we denote its transpose by $\bs{A}^\top$, while for any complex matrix $\bs{B}$ we denote its complex conjugate by $\bs{B}^*$ and its conjugate transpose by $\bs{B}^\dagger$. The $n\times n$ identity matrix is denoted by $\bs{1}_{n}$, or simply by $\bs{1}$ if no confusion arises. Similarly, we denote by $\bs{0}_n$ the $n\times n$ matrix of only zeros. Furthermore, we denote $\im = \sqrt{-1}$, and we define $[n]=\{1,\ldots,n\}$. 
For a vector $v \in \mathbb{R}^n$ the Euclidean vector norm is $\norm{v} = \sqrt{\sum_{i=1}^n v_i^2}$ and the $p$-norm is $\|{v}\|_p = \big(\sum_i|v_i|^p \big)^{1/p}$, while for a square matrix $\bs{A}$ the Frobenius norm is $\norm{\bs{A}}_F = {\mathrm{Tr}(\bs{A}^\dagger \bs{A})}^{1/2} = \braket{\bs{A},\bs{A}}^{1/2}$. The unknown quantum states to be estimated are denoted by $\bs{\rho}$, and their estimates by $\widehat{\bs \rho}$. We commonly use the vectorization operator $\mathrm{vec}: \mathbb{K}^{d\times d}\to \mathbb{K}^{d^2}$, for $\mathbb{K} = \mathbb{R}, \mathbb{C}$. For example, for a $2\times 2$ matrix $\bs{A}=(a_{ij})$, $i,j=1,2$, we have $\mathrm{vec}({\bs{A}})= (a_{11},a_{12},a_{21},a_{22})^\top$. Finally, the expectation value of a Hermitian operator $\bs O$ is denoted by $\braket{\bs O}$. We abbreviate \emph{semidefinite programming} by SDP, \emph{positive semidefinite} by PSD, \emph{commutative polynomial optimization} by POP, \emph{non-commutative polynomial optimization} by NCPOP and \emph{Navascu\'es-Pironio-Ac\'in} by NCA. \section{Background} \subsection{Dispersive Readout of a Qubit} QST requires the acquisition of data from the quantum mechanical device (QMD) under investigation. The QMD must be well isolated from sources of noise or dissipation in its coupled environment. Popular QMDs that satisfy the above criteria are superconducting qubits such as the transmon qubit \cite{Koch_2007}, a device widely used by IBM and Rigetti Computing, as well as the xmon qubit used by Google \cite{GoogleReadout}. A common practice is to couple the qubit(s) to a dispersive oscillator. This oscillator has a resonant frequency that depends on the qubit state. Probing it with a pulse \cite{McClure2016} of frequency $\omega_{\rm probe}$ and analyzing the response can reveal nontrivial information about the state. 
Figure \ref{qubit_cavity_control} shows a schematic of the three main devices that interact in the process. (For simplicity, we have not included several other devices that are needed, such as microwave pulse amplifiers.) \begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{transmon.png} \caption{Simplified schematic of the transmon qubit device realized by a Josephson junction coupled to an LC resonator via a coupling capacitor. Similarly, the cavity, which is kept at $T<10$ mK (it can be lower than 3 mK), is coupled to the control line via another weak coupling capacitor. The input pulse from the classical control line is converted to an analog signal, which interacts with the qubit. The output pulse passes through an AD converter to the FPGA controller interface where the I-Q readout is recorded. See \cite{vandijk2019electronic} for a detailed analysis of the structure of the interface controller. } \label{qubit_cavity_control} \end{figure} The notion of the dispersive readout of the qubit refers to the process of determining whether the qubit was measured in the $\ket{0}$ or the $\ket{1}$ eigenstate with respect to the measurement operator $\bs M$. The readout chain is composed of three levels of output data of increasing complexity and higher noise at each level: \begin{enumerate}\addtocounter{enumi}{-1} \item \emph{raw data} correspond to a discrete-time signal of the output pulse with frequency $\omega_{\rm r.o.}$. \item \emph{I-Q plane data} correspond to removing the frequency component of the readout signal (see Equation \eqref{IQeq}) and obtaining the complex-valued in-phase and quadrature component (I-Q) data. \item \emph{discriminated data} correspond to the so-called $b$-vector we introduce in Equation \eqref{bi}, and are obtained by applying a discriminator to the I-Q plane data. 
\end{enumerate} \begin{figure}[t] \centering \hspace*{-0.4cm} \includegraphics[width=1.0\columnwidth]{omegas.png} \caption{Resonance frequency-amplitude diagram for the qubit. The input probe pulse $\omega_{\rm probe}$ excites the readout to either the green Lorentzian peak corresponding to $\ket{0}$ or the blue Lorentzian peak corresponding to $\ket{1}$. The sign of the phase difference of $\omega_{\rm probe}$ relative to $\omega_{\rm r.o.}$, the readout frequency, reveals the pointer state of the qubit (with respect to one of the Pauli observables). The difference between the two resonance peaks is approximately $2\chi$, of the order of a few MHz.} \label{energy} \end{figure} At level 0, by analyzing the integrated return pulse $\omega_{\rm r.o.}$ one can deduce whether the QMD is in the ground or in the excited state in the pointer basis. The pointer basis $\{\ket{e},\ket{g}\}$ corresponds to eigenstates of the observable of the measurement apparatus that represent the possible positions of the display pointer of the equipment. The reason that the output measured is not simply some representation of $\ket{0}$ or $\ket{1}$ is that the measurement apparatus is entangled with the quantum system being measured \cite{von1955mathematical}. Nevertheless, for all practical purposes we can safely approximate $\ket{g}\approx \ket{0}$ and $\ket{e} \approx \ket{1}$, as discussed in \cite{Zurek:1981xq}. The output response pulse (at level 0) can be mapped to a complex number (at level 1) that can be decomposed into the amplitude response $I$ and the phase response $Q$, whose precise meaning is explained below. Repetition of the measurement $n$ times yields a mixture of two distributions, as shown in Figure \ref{IQ}. As briefly mentioned in the previous paragraph, the measurement process in a QMD such as the transmon qubit consists of probing the resonator with a pulse of frequency $\omega_{\text{probe}}$. 
The maximum fidelity is achieved when $\omega_{\text{probe}} = (\omega_C + \omega_{C}^{\chi})/2$. A short readout pulse $s_{\text{r.o.}}(t)$ is then directed towards the resonator to interact with it, and thus interact with the qubit and be transmitted back to the control line. Assuming a linear pulse, its readout waveform reads: \begin{align} s_{\rm r.o.}(t) = A_{\rm r.o.} \cos \big(\omega_{\rm r.o.}t + \vartheta_{\rm r.o.}\big), \end{align} where $A_{\rm r.o.}$ is the amplitude of the probe pulse and $\vartheta_{\rm r.o.}$ is the phase, both of which depend on the state of the qubit. We can rewrite the waveform as \begin{align} s_{\rm r.o.}(t) &= {\rm Re}\left(A_{} e^{\im(\omega_{\rm r.o.}t + \vartheta_{})} \right) \\ &= {\rm Re} \left(A_{} e^{\im \vartheta_{}} e^{\im \omega_{\rm r.o.}t} \right), \end{align} where we skipped the labels on the amplitude, frequency and phase of the readout pulse. The quantity $s|_{\omega_{\rm r.o.}} = A_{} e^{\im \vartheta_{}}$ is called a phasor, and for a fixed frequency it completely specifies the pulse. The qubit resonance readout is performed by recording the \emph{in-phase} component $I$ and the \emph{quadrature} component $Q$ of the phasor: \begin{align}\label{IQeq} s|_{\omega_{\rm r.o.}} &= A_{}\Big( \cos(\vartheta_{}) + \im \sin(\vartheta_{})\Big) \\ &= I + \im Q. \ \end{align} The I-Q plane $\simeq \mathbb{C}$ can be thought of as the phase space of the resonator-qubit coupled system. Once the signal has been transmitted back from the resonator to the control line, the readout pulse is compared with the original pulse. Using the phase shift between the known input pulse and the measured output, the qubit state can be mapped onto the complex I-Q plane. \begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{gradients.png} \caption{The I-Q plane data correspond to the phase space measurements of the response signal. 
While this figure is idealized, in reality the scatter plot contains a lot of noise, and robust techniques are required. Note that we can essentially identify the pointer basis $\{ \ket{g}, \ket{e}\}$ with the standard qubit state basis $\{\ket{0}, \ket{1} \}$. The measurement described above has been performed for a fixed projective observable, say $\bs{\sigma}_z$. The process needs to be repeated for the other two observables, $\bs{\sigma}_x, \bs{\sigma}_y$, in order to get a full description of the state.} \label{IQ} \end{figure} Thus, a single pulse sent to the qubit maps to a point in the complex I-Q plane. In total, $n$ repetitions of the measurement provide a distribution that allows one to create a histogram of the recorded events and assign the probabilities of the experiment outcomes. Repetition of the same procedure, with further copies of the state measured with different measurement operators of the $N$-dimensional measurement basis (albeit possibly with different pulse frequencies), allows one to build an $N$-dimensional histogram from which one can proceed to the estimation of the measured state. \section{QST as bilevel SDP} As discussed in the previous section, to perform QST one prepares an ensemble of $n_I$ identical qubits in the state $\bs \rho$ and performs a measurement of the observable $\bs{\sigma}_I$ on each of these copies. Here the Pauli observables $\bs{\sigma}_I$ are the standard Pauli matrices: \begin{align*} \bs{\sigma}_x &= \begin{pmatrix} 0 & 1\\1&0\end{pmatrix},\,\, \bs{\sigma}_y = \begin{pmatrix} 0 & -\im\\ \im&0 \end{pmatrix},\,\, \bs{\sigma}_z = \begin{pmatrix} 1 & 0\\0&-1 \end{pmatrix}. \end{align*} This procedure allows one to find $n_I^{\ket{0}}$ measurements corresponding to the $\ket{0}$ state and $n_I^{\ket{1}}$ measurements corresponding to the $\ket{1}$ state, where $n_I = n_I^{\ket{0}} + n_I^{\ket{1}}$. The $n_I$ measurements are read off from the I-Q plane data and are classified in a binary way. 
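The phasor decomposition of Equation \eqref{IQeq} can be sketched numerically; the amplitude and phase values below are illustrative, not calibration data.

```python
import cmath
import math

# Sketch of the phasor picture of Eq. (IQeq): for a fixed readout
# frequency the pulse is specified by s = A*exp(i*theta), whose real and
# imaginary parts are the in-phase (I) and quadrature (Q) components.
def to_iq(amplitude, phase):
    s = amplitude * cmath.exp(1j * phase)
    return s.real, s.imag                      # (I, Q)

def from_iq(i_comp, q_comp):
    s = complex(i_comp, q_comp)
    return abs(s), math.atan2(q_comp, i_comp)  # (A, theta)

I, Q = to_iq(1.5, 0.3)     # illustrative amplitude and phase
A, theta = from_iq(I, Q)   # recovers the original (A, theta)
```

The round trip is lossless for a single fixed-frequency tone, which is exactly the sense in which the phasor "completely specifies the pulse".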
To be precise, the I-Q plane data yield a binary vector $\boldsymbol{\beta}_I \in \{0,1\}^{n_I} $ defined as: \begin{align}\label{eq:bvector} \boldsymbol{\beta}_I := \braket{\boldsymbol{\sigma}_I} + \boldsymbol{\varepsilon}_I, \end{align} where $\braket{\boldsymbol{\sigma}_I} = [\braket{{\bs{\sigma}}_I}_{1}\ldots \braket{{\bs{\sigma}}_I}_{n_I}]^\top \in [-1,1]^{n_I}$ is the \emph{hidden} vector of expectation values of the Pauli observables. The second term corresponds to statistical and system noise, $\boldsymbol{\varepsilon}_I = [(\varepsilon_I)_1 \ldots (\varepsilon_I)_{n_I}]^\top \in M_r{\bs 0}_{n_I} \subset \mathbb{R}^{n_I}$, where $M_r{\bs 0}_{n_I}$ is the radius $0 \leq r \ll 1$ neighborhood of $\bs{0}_{n_I}$. From these data, $n_I^{\ket{0}}$ of the entries of $\boldsymbol{\beta}_I$ correspond to the qubit being in the $\ket{0}$ eigenstate of $\bs{\sigma}_I$ and $n_I^{\ket{1}}$ of the entries correspond to the qubit being in the $\ket{1}$ eigenstate of $\bs{\sigma}_I$. The three measurement observables of the Pauli basis provide us with three such vectors ($\boldsymbol{\beta}_x,\boldsymbol{\beta}_y, \boldsymbol{\beta}_z$). Out of each of these vectors one can write down the empirical estimate $b_I$ of the expectation value of $\bs{\sigma}_I$: \begin{align}\label{bi} b_I = \frac{1}{n_I}\left (\sum_{j\in n_I^{\ket{0}}} (\boldsymbol{\beta}_I)_j -\sum_{j\in n_I^{\ket{1}}} (\boldsymbol{\beta}_I)_j \right), \end{align} with the error being \begin{equation}\label{Deltabi} {\Delta b_I} = 2 \left(\frac{ n_I^{|0\rangle} n_I^{|1\rangle} }{n_{I}^{3}}\right)^{\frac{1}{2}}. \end{equation} For a qubit and for $n = n_x + n_y + n_z$ measurements in the Pauli basis, we obtain the following empirical estimates for the expectation values of the Pauli observables: \begin{align}\label{bvector} b = (b_x \,\,\, b_y \,\,\, b_z)^\top \in [-1,1]^3. 
\end{align} Generically, for a level-$n$ system and for $M$ measurement basis operators, the $b$ vector generalizes accordingly to an $M$-dimensional vector. QST utilizes the $b$ vector to estimate the density matrix from which these expectation values arise. QST can be interpreted as a function \begin{equation}\label{eq:QST0} \mathbb{QST} : [0,1]^M \to \widehat{\mathbb{H}}_1^n. \end{equation} The precise definition of this function will be given below in Equation (\ref{SDPdensity}). Essentially, convex optimization methods provide a \emph{constrained least-squares} fit. The least-squares problem for a qubit and measurement operators in the Pauli basis consists of the following data: \begin{itemize} \item The measurement operators, vectorized and stacked into a matrix $\bs A$ that reads: \begin{align*} \bs A = \begin{bmatrix} \mathrm{vec}(\bs{\sigma}_x) \, \mathrm{vec}(\bs{\sigma}_y) \, \mathrm{vec}(\bs{\sigma}_z) \end{bmatrix}^\top \in \mathbb{C}^{3\times 4}, \end{align*} \item The vector of empirical estimates $b$ of Equation (\ref{bvector}). \end{itemize} The objective of the least-squares problem is to find a Hermitian matrix $\widehat{\bs\rho} \in \widehat{\mathbb{H}}_1^2$ such that the $\ell_2$-norm \begin{align}\label{IBMopt} \norm{\bs A\mathrm{vec}(\widehat {\bs \rho}) - b}_{2}^2, \end{align} is minimized under two necessary conditions, unit trace and positive semidefiniteness, that we will shortly describe precisely. Note that the objective function (\ref{IBMopt}) is strongly convex on the feasible set, which can be proven by realizing that the eigenvalues of the Hessian of (\ref{IBMopt}) are strictly positive there, since $\bs A$ is injective on the subspace of vectorized trace-one Hermitian matrices. Furthermore, the infimum of Equation (\ref{IBMopt}) is guaranteed to be unique due to this injectivity of $\bs A$. 
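The shot-count arithmetic of Equations \eqref{bi} and \eqref{Deltabi} can be sketched as follows, in the idealized case where each discriminated shot contributes exactly $\pm 1$; the shot counts are illustrative.

```python
import math

# Sketch of Eqs. (bi) and (Deltabi): from n0 shots discriminated as |0>
# and n1 shots discriminated as |1>, form the empirical Pauli expectation
# value and its binomial standard error. The counts are illustrative, and
# ideal discrimination (each shot exactly +1 or -1) is assumed.
def pauli_estimate(n0, n1):
    n = n0 + n1
    b = (n0 - n1) / n                             # empirical <sigma_I>
    delta_b = 2.0 * math.sqrt(n0 * n1 / n**3)     # Eq. (Deltabi)
    return b, delta_b

b_z, db_z = pauli_estimate(600, 400)  # 1000 illustrative shots on sigma_z
```

Repeating this for $\bs{\sigma}_x$ and $\bs{\sigma}_y$ yields the three components of the $b$ vector of Equation \eqref{bvector}.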
The convexity of the SDP problem (\ref{IBMopt}) becomes less trivial when one moves away from a least-squares objective function towards a linear objective function, a crucial step in order to formulate the problem in standard SDP form, which is required for technical reasons to be explained below. \subsection{The standard SDP formulation of QST} The \emph{error function} (\ref{IBMopt}) is usually minimized by linear inversion or other maximization-minimization-type algorithms. Nevertheless, in this article we will treat the problem as an SDP. Let us now make precise the optimization problem at hand, together with the two physical constraints mentioned above. Performing QST amounts to finding the optimum of the SDP problem: \begin{equation}\label{SDPdensity} \begin{aligned} \mathbb{QST}_{\bs A}(b) := \arg \min_{\quad {\widehat{\bs\rho}}\, \succeq 0} &\quad \norm{\bs A \mathrm{vec}(\widehat{\bs\rho}) - b}_{2}^2 \\ {\text{subject to}} & \quad \mathrm{Tr}{(\widehat{\bs\rho})} = 1, \\ & \quad \widehat{\bs\rho} \in \mathbb{S}_+^2. \ \end{aligned} \end{equation} It should be evident now how the problem above can be seen as the function $\mathbb{QST}_{\bs A}(b)$ that was first mentioned in Equation (\ref{eq:QST0}), a function from the space of recorded measurements $b$ to the space of estimates $\widehat{\bs \rho}$. As far as convexity is concerned, besides the strongly convex objective, recall that the subset of all density matrices embedded in the space of all Hermitian operators that act on a Hilbert space $\mathcal{H}$ is convex. Furthermore, problem (\ref{SDPdensity}) is not qubit specific, as it easily generalizes to (i) other measurement basis choices as well as to (ii) higher level systems by the corresponding generalization of $\bs A$ and of the $b$ vector. 
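For a single qubit, the unconstrained minimizer of the objective in (\ref{SDPdensity}) coincides with the linear-inversion estimate $\widehat{\bs\rho} = \tfrac{1}{2}(\bs 1 + b_x \bs{\sigma}_x + b_y \bs{\sigma}_y + b_z \bs{\sigma}_z)$. A minimal sketch of this, which deliberately omits the PSD constraint that the SDP enforces:

```python
# Hedged sketch, not the paper's solver: for one qubit with Pauli
# measurements, the unconstrained least-squares solution is the
# linear-inversion estimate rho = (1/2)(I + bx*X + by*Y + bz*Z).
# The PSD constraint of the SDP formulation is NOT enforced here, so a
# noisy b vector with |b| > 1 can yield negative eigenvalues.
def linear_inversion(b):
    bx, by, bz = b
    return [[(1.0 + bz) / 2.0, (bx - 1j * by) / 2.0],
            [(bx + 1j * by) / 2.0, (1.0 - bz) / 2.0]]

rho = linear_inversion((0.0, 0.0, 1.0))  # ideal statistics for |0><0|
```

Comparing this closed-form estimate against the SDP output on noisy $b$ vectors makes the value of the positivity constraint concrete: the two agree exactly whenever $\|b\|\leq 1$.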
The optimum $p^*$ of Problem (\ref{SDPdensity}) can be used to extract a unique optimal solution $\mathrm{vec}(\widehat {\bs\rho})^*$, for example by employing the GNS construction \cite{Klep2018}, which itself defines the residual $r = \bs A\mathrm{vec}(\widehat {\bs\rho})^* - b$; we have a perfect fit iff $\norm{r}_2^2=0$. \subsubsection*{Evaluation map} In Equation (\ref{SDPdensity}) we know how to construct the matrix $\bs A$ for any measurement basis, but we have to assume that $b$ has been preprocessed and is given. Generically, the $b$ vector can be seen as a function from the space of the possible QMD measurements $\Omega_{\bs M}$ for the observable $\bs M$ to the interval of empirical estimates $B:=[0,1]$. Let us denote by $\Omega_{\bs M}^B$ the set of all maps $f_j : \Omega_{\bs M}\to B$, parametrized by $j\in \mathbb{Z}$. Then, $b_I := {\rm ev}(f_j,\{s_1,\ldots, s_{n_I}\})$, where $\{s_1,\ldots, s_{n_I}\} \subset \Omega_{\bs M}$ and ${\rm ev}$ is the evaluation map. In the following subsection, we will pick an element of $\Omega_{\bs M}^B$, for $\bs M \in \{\bs{\sigma}_x,\bs{\sigma}_y,\bs{\sigma}_z \}$, that allows us to re-formulate Problem (\ref{SDPdensity}) as a bilevel problem; an optimization problem that involves at least one constraint that is itself a well-defined optimization problem. \subsection{Bilevel SDP formulation of QST} \label{sec:bilevelSDP} In the preceding analysis, and also in Problem (\ref{SDPdensity}), we assumed that the $b$ vector is given, obtained with some underlying clustering algorithm for the I-Q plane data, for example algorithms as simple as SVM clustering or as advanced as neural networks \cite{bolduc2016projected, Torlai_2018,Xin_2019}. 
Interestingly, we can perform QST directly using the I-Q plane data by re-formulating Problem (\ref{SDPdensity}) as a bilevel SDP, which adds an extra constraint, as follows: \begin{equation}\label{SDPdensity2} \begin{aligned} \mathbb{QST}_{\bs A}(b) := \arg \min_{b, {\widehat {\bs \rho}}\, \succeq 0} &\quad \norm{\bs A \mathrm{vec}(\widehat {\bs\rho}) - b}_{2}^2 \\ {\text{subject to}} & \quad b_I \in [\mathbb{IQP}_{\mathcal{D}_{\ket{0},\ket{1}}}(x)]_I \\ & \quad \mathrm{Tr}{(\widehat {\bs\rho})} = 1 \\ & \quad \widehat{\bs \rho} \in \mathbb{S}_+^2. \end{aligned} \end{equation} The new constraint turns (\ref{SDPdensity2}) into a bilevel SDP, since $b_I$ is defined to belong to the feasible set of the so-called lower-level optimization problem that we define below. Going back to the I-Q plane data, fix a measurement observable $\bs M$ and let $\Omega_{\bs M}= \Omega_{\bs M}^{\ket{0}}\cup \Omega_{\bs M}^{\ket{1}}$ denote the space of possible outcomes of the measurement of the QMD with respect to $\bs M$; $\Omega_{\bs M}$ is the union of the outcomes that will be tagged as belonging to the $\ket{0}$ cluster and those that will be tagged as belonging to the $\ket{1}$ cluster. Assuming that the recorded samples belong to a mixture of two Gaussian probability distributions $\mathcal{D}_{\ket{0}} \sim \mathcal{N}(\mu_0,\bs{\Sigma}_0)$ and $\mathcal{D}_{\ket{1}}\sim \mathcal{N}(\mu_1,\bs{\Sigma}_1)$, where $\mu_j$ is the mean and $\bs{\Sigma}_j$ the covariance matrix of the $j$-th distribution, the $b$ vector can be interpreted as the evaluation of a function ``\text{I-Q plane data} $\mapsto b_I$'': \begin{align}\label{eq:IQP} \mathbb{IQP}_I:& (\Omega_{\bs \sigma_I}; \mathcal{D}_{\ket{0}}, \mathcal{D}_{\ket{1}}) \to [0,1], \end{align} the evaluation of which produces the $I$-th component of the $b$ vector, the component corresponding to $\bs \sigma_I$.
Here, we index $\mathbb{IQP}$ to stress the fact that for full QST one needs to perform this operation for all measurement operators of the measurement basis. With the assumptions on the distributions as above, several formulations of the robust parameter estimation problem of Gaussian mixture models exist, for example \cite{bakshi2021robustly}. Here, we propose a \emph{natural formulation} of the problem at hand, which is an instance of the typical statistical problem of the minimization of the Mahalanobis distance of two distributions with parameters $\theta_0 = \{\alpha_0, \mu_0, {\bs \Sigma}_0\}$ corresponding to the $\ket{0}$ state, parameters $\theta_1 = \{\alpha_1, \mu_1, {\bs\Sigma}_1\}$ corresponding to the $\ket{1}$ state, and parameter $\alpha_2$ for an unknown arbitrary distribution $g(x_s)$ corresponding to adversarial noise that essentially captures the vector ${\bs \varepsilon}_i$ of Equation (\ref{eq:bvector}). The Gaussian mixture model that allows us to define the function $\mathbb{IQP}$ of Equation (\ref{eq:IQP}) corresponds to a Huber contamination model \cite{huber2004robust}, which reads as follows: \begin{equation} \begin{aligned} \label{dis_adv} f_{\rm mix}(x_s) & = \frac{1}{\sqrt{2\pi}}\Bigg\{ \frac{\alpha_0}{|\bs{\Sigma}_0|^\frac{1}{2}} \exp \Big[-\tfrac{1}{2}(x_s - {\mu}_0)^\top \bs{\Sigma}_0^{-1}(x_s-{\mu}_0)\Big] \\ & \quad \hspace{2.1em}+\frac{\alpha_1}{|\bs{\Sigma}_1|^{\frac{1}{2}}} \exp\Big[-\tfrac{1}{2}(x_s - {\mu}_1)^\top \bs{\Sigma}_1^{-1}(x_s -{\mu}_1) \Big] \\ & \qquad \hspace{13.5em} +\alpha_2 g(x_s) \Bigg\} \\ & = \frac{1}{\sqrt{2\pi}}\left\{ \frac{\alpha_0}{|\bs \Sigma_0|^\frac{1}{2}} \mathcal{D}_{\ket{0}} + \frac{\alpha_1}{|\bs \Sigma_1|^{\frac{1}{2}}}\mathcal{D}_{\ket{1}}+ \alpha_2 g(x_s)\right\}, \end{aligned} \end{equation} with the weights of the mixture model obeying $\alpha_0+\alpha_1+\alpha_2=1$, $\alpha_i \in [0,1], \, \forall i \in \{0,1,2\}$, and $x_s$ the random variable corresponding to the sample $s$.
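For concreteness, the contamination model (\ref{dis_adv}) is easy to simulate; the sketch below (our own illustration, with a uniform box as a stand-in choice for the arbitrary noise density $g$) draws I-Q-like samples with mixture weights $\alpha_0,\alpha_1,\alpha_2$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_contaminated_mixture(n, alpha, mu0, mu1, cov0, cov1, noise_box=6.0):
    """Draw n I-Q samples from the Huber contamination model:
    alpha = (a0, a1, a2) weights for |0>, |1>, and the noise density g
    (here taken uniform on a box, purely for illustration)."""
    assert abs(sum(alpha) - 1.0) < 1e-12
    comp = rng.choice(3, size=n, p=alpha)       # latent component labels
    out = np.empty((n, 2))
    out[comp == 0] = rng.multivariate_normal(mu0, cov0, size=(comp == 0).sum())
    out[comp == 1] = rng.multivariate_normal(mu1, cov1, size=(comp == 1).sum())
    out[comp == 2] = rng.uniform(-noise_box, noise_box, size=((comp == 2).sum(), 2))
    return out, comp

mu0, mu1 = np.array([2.5, 2.0]), np.array([-2.5, 2.0])
samples, labels = sample_contaminated_mixture(
    10000, (0.48, 0.48, 0.04), mu0, mu1, np.eye(2), np.eye(2))
```

The means above are the ones used later in the numerical illustration; the weights are an assumed example with 4\% contamination.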
We can now define $\mathbb{IQP}$ from Equation (\ref{eq:IQP}) precisely as follows. We define: \begin{equation*} f_{\mathbb{IQP}} := \sum_{s=1}^{|S_I|} \left[\sum_{i \in\{0,1 \}} f_s^{\ket{i}} \log \mathcal{D}_{\ket{i}}(x_s) + f_s^{\rm noise}g(x_s) \right] \end{equation*} Let $|S_I| \equiv n_{I}$ denote the number of recorded samples for the $I$-th measurement operator. Then, for the feasibility set $U = \{ f_s^{\ket{0}}, f_s^{\ket{1}},f_s^{\rm noise} \in \{ 0, 1 \}, \mu_0, \mu_1,\bs{\Sigma}_0, \bs{\Sigma}_1, \alpha_0, \alpha_1, \alpha_2\}$, $\mathbb{IQP}$ corresponds to the following nonconvex problem: \begin{equation} \begin{aligned}\label{GMM} \mathbb{IQP}_{\mathcal{D}_{{\ket{0}},{\ket{1}}}}(x_s) :=\arg \min_{U} & \quad f_{\mathbb{IQP}} \\ \noindent \textrm{ s.t. } & \sum_{s=1}^{|S_I|} f_s^{\ket{0}} = \alpha_0 |S_I| \\ & \sum_{s=1}^{|S_I|} f_s^{\ket{1}} = \alpha_1 |S_I| \\ & \sum_{s=1}^{|S_I|} f_s^{\rm noise} = \alpha_2 |S_I| \\ & 1 = \sum_{i=0}^2\alpha_{i}. \end{aligned} \end{equation} Notice that there is no need to perform the inversion for $\bs{\Sigma}_i^{-1}$: one can optimize directly over a matrix variable representing the inverse of the covariance. Refining the interpretation of (\ref{GMM}) as a map from the I-Q plane data corresponding to the $\bs \sigma_I$ measurement observable to the space of $\boldsymbol{b}_I$, we have: \begin{equation}\label{boldb} \begin{aligned} \mathbb{IQP}_I :& (\Omega_{\bs \sigma_I}; \mathcal{D}_{\ket{0}}, \mathcal{D}_{\ket{1}}) \to [0,1], \\ & x_s \mapsto (\boldsymbol{b}_I)_s. \end{aligned} \end{equation} The lower-level problem (\ref{GMM}) corresponding to the upper-level problem (\ref{SDPdensity2}) is a very nontrivial problem to attack. Nevertheless, on some occasions one can make significant progress. Let us discuss to what extent and how easily one can utilize (\ref{GMM}).
\subsubsection{Known means and known covariance matrices with no noise} Assume the ideal scenario, where the mean vectors and the covariance matrices of Equation (\ref{dis_adv}) are known and the noise is zero or its samples are known. (In the commonly considered spherical case, one makes the assumption that the covariance matrices are identity matrices.) When one considers the noise-less case, Problem (\ref{GMM}) corresponds to the well-known problem of the degree of membership (posterior information) of the samples, which in the simplest case is: \begin{equation} \begin{aligned} \label{beta_arg_max} \bs \beta_{\ket{i}}(x_s) & = \arg \max_{i\in \{0,1\}} N \Big\{\bs X_i(x_s) \Big\}, \ \end{aligned} \end{equation} with $\bs \beta_{\ket{i}}$ relating to the $b$ vector as prescribed in Equation \eqref{bi}, although the role of the index is different in this context, and where we fix $\bs \beta_{\ket{i}} = (\bs \beta_I)_{\ket{i}}$, for some $I$. Furthermore, $N$ is a normalization factor, which we set equal to one in the following, \begin{align}\label{X} \bs X_i(x_s) := -(x_s-\mu_i)^\top { \bs{\Sigma}}_i^{-1}(x_s-\mu_i) \end{align} and the decision rule is to assign each sample $x_s$ to the cluster whose mean minimizes the Mahalanobis distance. Problem \eqref{beta_arg_max} has a solution computable in linear time. Note that by expanding the terms in \eqref{X} we can define a symmetric matrix \begin{align} \bs F^{(x)}_{\ket{i}} = \begin{pmatrix} -\bs{\Sigma}_i^{-1} & \bs{\Sigma}_i^{-1}\mu_i \\ \mu_i^\top\bs{\Sigma}_i^{-1}& -\mu_i^{\top}\bs{\Sigma}_i^{-1} \mu_i \ \end{pmatrix}. \end{align} Then \eqref{beta_arg_max} can be written as: \begin{align} \bs \beta_{\ket{i}}(x_s) &= \arg \max_{i\in\{0,1\}} \,\,\, \tilde x_s^\top \bs F^{(x)}_{\ket{i}} \tilde x_s, \end{align} where $\tilde x_s := (x_s^\top \,\,\, 1)^\top$ denotes the sample augmented by a unit entry.
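As a sanity check of this algebra, one can verify numerically that the augmented quadratic form reproduces the negative squared Mahalanobis distance and the resulting assignment rule; a small numpy sketch (our own illustration, using the cluster means that appear later in the numerical illustration):

```python
import numpy as np

def F_matrix(mu, cov_inv):
    """Augmented matrix F: for x~ = (x, 1),
    x~^T F x~ = -(x - mu)^T cov_inv (x - mu)."""
    v = (cov_inv @ mu)[:, None]
    return np.block([
        [-cov_inv,  v],
        [v.T,      -np.array([[mu @ cov_inv @ mu]])],
    ])

def assign(x, mus, cov_invs):
    """Decision rule beta: pick the cluster maximizing x~^T F_i x~,
    i.e. minimizing the squared Mahalanobis distance."""
    xt = np.append(x, 1.0)
    scores = [xt @ F_matrix(m, S) @ xt for m, S in zip(mus, cov_invs)]
    return int(np.argmax(scores))

mus = [np.array([2.5, 2.0]), np.array([-2.5, 2.0])]   # cluster means (illustrative)
cov_invs = [np.eye(2), np.eye(2)]                     # spherical case: Sigma^{-1} = I
```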
This can be approximated by: \begin{align} \bs \gamma_{\ket{i}}(x_s) &= \softmax_i \,\,\, \tilde x_s^\top \bs F^{(x)}_{\ket{i}} \tilde x_s, \end{align} where $\softmax$ is defined in \eqref{softmax_def}, such that \begin{align} \bs \gamma_{\ket{0}}^{(x_s)}+ \bs \gamma_{\ket{1}}^{(x_s)} & = 1, \end{align} so that by construction each $\bs \gamma_{\ket{i}}(x_s)$ lies in the interval $[0,1]$ and the two memberships sum up to one. When the $\bs F_{\ket{i}}$ are non-degenerate, this yields a strongly convex, unconstrained lower-level problem, which in turn can be substituted by its first-order optimality conditions. Then, it is possible to formulate the SDP \eqref{SDPdensity2} with the additional constraints, taking into account all measurement basis operators. Explicitly, we have: \begin{equation}\label{SDPdensity3} \begin{aligned} \mathbb{QST}_{\bs A} := \arg \min_{\quad {b, \widehat{\bs\rho}}, \bs \gamma} &\quad \norm{\bs A \mathrm{vec}(\widehat{\bs\rho}) - b}_{2}^2 \\ {\text{subject to}} & \quad \mathrm{Tr}{(\widehat{\bs\rho})} = 1 \\ & \quad \widehat{\bs\rho} \in \mathbb{S}_+^2 \\ & \quad b = (b_x \,\, b_y \,\, b_z)^\top \\ & \quad b_x = \gamma_{\ket{0}}^{(x)} - \gamma_{\ket{1}}^{(x)} \\ & \quad b_y = \gamma_{\ket{0}}^{(y)} - \gamma_{\ket{1}}^{(y)} \\ & \quad b_z = \gamma_{\ket{0}}^{(z)} - \gamma_{\ket{1}}^{(z)} \\ & \quad \gamma_{\ket{i}}^{(x)} =\frac{1}{n_x}\sum_{x_s}\softmax_i \tilde x_s^\top \bs F^{(x)}_{\ket{i}}\tilde x_s\\ & \quad \gamma_{\ket{i}}^{(y)} = \frac{1}{n_y}\sum_{y_s}\softmax_i \tilde y_s^\top \bs F^{(y)}_{\ket{i}}\tilde y_s \\ & \quad \gamma_{\ket{i}}^{(z)}= \frac{1}{n_z}\sum_{z_s}\softmax_i \tilde z_s^\top \bs F^{(z)}_{\ket{i}}\tilde z_s, \\ \end{aligned} \end{equation} where $n_I$ is the number of samples recorded for $\bs\sigma_I$, so that each $b_I$ lies in $[-1,1]$. This bi-level SDP \eqref{SDPdensity3} can then be extended, for instance towards unknown parameters of the Gaussian mixture and adversarial noise.
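The softmax relaxation is easy to prototype; the sketch below (our own illustration, assuming the $1/n_I$ normalization so that each $b$ component lands in $[-1,1]$) turns a batch of samples into a soft $b$ component:

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)              # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def soft_b_component(samples, mus, cov_invs):
    """Average gamma_|0> - gamma_|1> over the samples, where gamma is the
    softmax of the scores -(x - mu_i)^T Sigma_i^{-1} (x - mu_i)."""
    total = 0.0
    for x in samples:
        scores = np.array([-(x - m) @ S @ (x - m)
                           for m, S in zip(mus, cov_invs)])
        g = softmax(scores)
        total += g[0] - g[1]
    return total / len(samples)

mus = [np.array([2.5, 2.0]), np.array([-2.5, 2.0])]
cov_invs = [np.eye(2), np.eye(2)]
```

For well-separated clusters the soft memberships are nearly hard, so the soft $b$ component is close to the counting estimate.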
\subsubsection{Known means, known covariance matrices and positive adversarial noise} When the mixture model is contaminated by noise, i.e., $\alpha_2 > 0$, we need to consider a constrained optimization problem. This, in general, does not have a unique solution, and we need to consider the inclusion $b_I \in [\mathbb{IQP}_{\mathcal{D}_{\ket{0},\ket{1}}}(x)]_I.$ While, in theory, one could consider the non-convex NCPOP \eqref{GMM} and the KKT conditions of its SDP relaxations \cite{Navascu_s_2008}, a natural approach is to consider a continuous relaxation in the variables $U = \{f_s^{\ket{0}}, f_s^{\ket{1}},f_s^{\rm noise} \}\in [ 0, 1 ]$ over the polyhedron defined by \eqref{GMM}. This convex optimization problem can be replaced by its KKT conditions under mild assumptions. \subsubsection{Unknown means and known covariance matrices} Assume that the mean vectors $\mu_i$ of Equation (\ref{dis_adv}) are unknown but the covariance matrices ${\bs{\Sigma}}_i$ are known. In this case, one has to estimate the mean vectors $\hat{\mu}_i$ as well as the mixture weights $\hat{\alpha}_i$, and the lower-level problem (\ref{GMM}) corresponds to a nonconvex, but commutative, POP. While this renders the problem nontrivial, it has been studied \cite{jeyakumar2016convergent}. Furthermore, the parameter space is reduced by making the reasonable assumption $\mu_0^1 \approx - \mu_1^1, \mu_0^2 \approx \mu_1^2$, for $\mu_i = (\mu_i^1\,\,\, \mu_i^2)^\top$.
In particular, \cite[Theorem 4.7]{jeyakumar2016convergent} shows that, assuming the Mangasarian-Fromovitz constraint qualification (or, less strictly, that there exists a representation of the feasible set of the lower-level problem as a finite union of closed convex sets with non-empty interior), there exists an $\epsilon_0$ such that for all $\epsilon \in [0, \epsilon_0)$ one can obtain an $\epsilon$-approximation of the problem by a convexification, which turns out to be an SDP that could be utilized considering the recent study \cite{doi:10.1137/16M1099303} of bi-level optimization with an SDP at the lower level. The dimension of this SDP will grow quickly with $\epsilon$, but this is justified by the well-known issues \cite{NIPS2016_3875115b} in estimating the parameters of a Gaussian mixture model using the EM-algorithm, which would be a simple-minded alternative. As a practically relevant alternative, one may consider the first available SDP, which resembles Shor's \cite{shor1987quadratic,shor1990dual} SDP relaxation and its KKT conditions. \subsubsection{Unknown means and unknown covariance matrices} Finally, assume that both the mean vectors and the covariance matrices of Equation (\ref{dis_adv}) are unknown and the noise is arbitrary. The lower-level problem (\ref{GMM}) then corresponds to a nonconvex NCPOP. One can solve such a bilevel polynomial optimization problem with a nonconvex lower-level problem, and converge towards a global minimizer, using semidefinite programming hierarchies. Such hierarchies for NCPOP problems are the NPA relaxation hierarchies \cite{Navascu_s_2008}, which essentially convert the original NCPOP into a series of SDP problems labeled by $k$, such that for some $k$ they converge to a solution. For the convexification of the problem in the form of an SDP, one can employ the KKT conditions in a manner similar to \cite{jeyakumar2016convergent}.
\section{Extensions} One could apply a similar bilevel view to a number of related problems. \subsection{Quantum Hamiltonian Identification} The state of a quantum system, such as the superconducting qubit we are interested in, evolves in time from the input Hilbert space $\mathcal{H}_{\rm in}$ to the output Hilbert space $\mathcal{H}_{\rm out}$ according to a quantum Hamiltonian operator $H: \mathcal{H}_{\rm in} \to \mathcal{H}_{\rm out}$, which satisfies the Liouville evolution equation: \begin{equation}\label{Lou} \frac{d\bs \rho(t)}{dt} = - \frac{\im}{\hbar} [H, \bs \rho(t)]. \end{equation} When considering the discrete time evolution of $\bs \rho_{i}$ at time $t=i$ to $ \bs \rho_{i+1}$ at time $t=i+1$, the discrete analog of Equation \eqref{Lou} can be written as follows using the Kraus map (Kraus operator-sum representation): \begin{align} \bs \rho_{i+1} &= \mathcal{E}(\bs \rho_i) \\ &= \sum_{k=1}^{d^2-1} \bs E_k^{} \bs\rho_{i} \bs E_k^{\dagger}, \end{align} where $\sum_{k=1}^{d^2-1}\bs E_k^\dagger \bs E_k = \bs 1$ and $\mathcal{E}$ denotes the unknown quantum operation responsible for the density matrix evolution. To perform quantum Hamiltonian identification (QHI), we sample the unknown process $N$ times, resulting in output state trajectories indexed by the upper index $j$ (see below). We introduce the following notation for the output and estimated density-matrix trajectories: \begin{equation} \begin{aligned} {\bs\rho}_{\rm out}^{(j)}(t=i) &\equiv {\bs\rho}_i^{(j)} \\ {\bs\rho}_{\rm est}^{(j)}(t=i) &\equiv \widehat {\bs\rho}_i^{(j)} \ \end{aligned} \end{equation} where the trajectory index is $j \in \{1,\ldots, N\}$. The index $i$ here denotes the sample ordinal number (discrete time). We model the evolution of the states as a linear dynamical system: \begin{equation} \begin{aligned}\label{LDS} {\bs\rho}^{(j)}_{i} &= \bs G {\bs\rho}^{(j)}_{i-1}, \\ \widehat {\bs\rho}^{(j)}_{i} &= \bs J {\bs\rho}^{(j)}_{i}.
\ \end{aligned} \end{equation} Here $\bs G,\bs J$ are the system matrices that we are interested in recovering. Effectively, $\bs G$ corresponds to the evolution matrix (Hamiltonian) while $\bs J$ is a matrix that transforms the hidden state to the observed state measured by the apparatus. To this end, we define the loss function \begin{equation}\label{loss} f_{\rm loss} = \sum_{i,j} \norm{{\bs\rho}_i^{(j)} - \widehat {\bs\rho}_i^{(j)}}^2_F. \end{equation} Using the Kraus operator-sum representation with a fixed basis $\{ \bs E_k\}_{k=1}^{d^2-1}$ in the space of Hermitian matrices $\mathbb{H}_1^{N}$, a first physical formulation of the state estimation problem, in terms of an SDP, takes the following form: \begin{equation} \label{Denys_sdp} \begin{aligned} \min_{U} & \quad f_{\rm loss}, \\ {\rm s.t.} & \quad {\bs\rho}_{i+1} = \sum_{k=1}^{d^2-1}\bs E_k{\bs\rho}_i \bs E_k^\dagger\\ & \quad \sum_{k=1}^{d^2-1}\bs E_k^\dagger \bs E_k=\bs 1,\ \end{aligned} \end{equation} where $U = \{ \bs{\rho}_i^{(j)},\bs E_{k} \}$. By algebraic manipulations one can show that $\sum_{k=1}^{d^2-1} \bs E_k^*\bs E_k =\bs G$ and, as a result, Problem (\ref{Denys_sdp}) is reformulated as: \begin{equation} \label{primal_QHI} \begin{aligned} \min_{S} & \quad f_{\rm loss}, \\ {\rm s.t.} & \quad {\bs\rho}^{(j)}_{i} = \bs G {\bs\rho}^{(j)}_{i-1}, \\ & \quad b^{(j)}_{i} = \bs J {\bs\rho}^{(j)}_{i}, \\ & \quad \widehat {\bs\rho}^{(j)}_{i} = \mathbb{QST}(b^{(j)}_{i}), \\ & \quad {\bs\rho}_i^{(j)}, \widehat {\bs\rho}_i^{(j)} \in \mathbb{S}_+^{2} \text{ for each } i,j \\ & \quad \mathrm{Tr}{ \bs\rho^{(j)}_{i}} = \mathrm{Tr}{\widehat {\bs\rho}^{(j)}_{i}} = 1, \ \end{aligned} \end{equation} where $S = \{ \bs G, \bs J, \bs{\rho}_i^{(j)}, b_i^{(j)} \}$. Note that the index $i$ in $b_i$ corresponds to the discrete time and is not to be confused with $I \in \{x,y,z\}$. We therefore show that QHI as well can be expressed as a bilevel SDP where the lower-level problem is Problem (\ref{SDPdensity2}), the main object of study of this article.
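To make the linear-dynamical-system view concrete, the sketch below (our own numpy illustration, with assumed parameters $H=\tfrac{\pi}{5}\sigma_x$ and time step $0.02$) simulates $\mathrm{vec}(\bs\rho_i)$ under the vectorized unitary map $\bs G = \bar U\otimes U$ and recovers an evolution matrix by least squares. With a single trajectory only the action of $\bs G$ on the reachable subspace is identified; full recovery of $\bs G$ (and $\bs J$) requires richer input states, which is part of what the bilevel formulation addresses.

```python
import numpy as np

# exp(-i H dt) for H = (pi/5) sigma_x: U = cos(theta) I - i sin(theta) sigma_x
theta = (np.pi / 5) * 0.02            # assumed time step dt = 0.02
SX = np.array([[0, 1], [1, 0]], dtype=complex)
U = np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * SX

# vec(U rho U^dag) = (conj(U) kron U) vec(rho), with column-major vec
G_true = np.kron(U.conj(), U)

# simulate a single trajectory starting from rho_0 = |0><0|
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
traj = [rho0.flatten(order="F")]
for _ in range(60):
    traj.append(G_true @ traj[-1])

X = np.column_stack(traj[:-1])        # vec(rho_0) ... vec(rho_{T-1})
Y = np.column_stack(traj[1:])         # vec(rho_1) ... vec(rho_T)

# least-squares recovery of the evolution matrix: Y = G X
G_hat = np.linalg.lstsq(X.T, Y.T, rcond=None)[0].T
```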
For simplicity, we can assume here that the $b$ vectors are given to us, so as not to worry about a \emph{matryoshka}-type optimization problem. In our follow-up work, we show that the lower-level problem (\ref{SDPdensity2}) can be substituted by its KKT conditions, which we also derive. The result is an algorithm that runs in polynomial time, as opposed to formulation (\ref{primal_QHI}) as it stands. \subsection{Calibration} In superconducting qubit circuits, calibration of the qubit \cite{Weber_2014,wittler2021integrated} is an essential step, wherein one wishes to learn a mapping from the signals coming from the readout resonator to the states of a qubit. This can be experimentally realized by explicitly preparing the qubit in a known state and probing with pulses that should not excite the qubit. For an ensemble of many copies of the state, one can see the problem as clustering or parameter estimation in a Gaussian mixture model with two components, contaminated by non-Gaussian noise \cite[cf. Methods in the Supplementary material]{Weber_2014}. The optimization problem $\mathbb{IQP}$ of Equation (\ref{GMM}) uses the Huber contamination model, which is well suited for calibration purposes too. Seen as robust estimation of the parameters, one can also employ the approach of \cite{kothari2018robust}, which uses semidefinite programming, or even simpler techniques \cite{8635825}. This then underlies the decomposition of the computationally demanding bilevel problem (\ref{SDPdensity2}) into the lower-level problem and the upper-level problem. \section{A Numerical Illustration} Consider a two-level system that we initialize in the state $\psi_0 = \ket{0}$ and let evolve with Hamiltonian $H = \tfrac{\pi}{5} \sigma_x$.
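Before turning to the numbers, note that each component of the empirical $b$ vector is simply a normalized difference of counts, $b_I = (n_I^{\ket 0}-n_I^{\ket 1})/n$; a one-line sketch using the measurement counts reported in Equation (\ref{fake_data}) below:

```python
import numpy as np

counts = {          # (n^|0>, n^|1>) per Pauli operator, from Eq. (fake_data)
    "x": (4996, 5004),
    "y": (2663, 7337),
    "z": (540, 9460),
}

def b_from_counts(n0, n1):
    """Empirical Pauli expectation value b_I = (n0 - n1) / (n0 + n1)."""
    return (n0 - n1) / (n0 + n1)

b = np.array([b_from_counts(*counts[I]) for I in ("x", "y", "z")])
```

These raw-count estimates sit close to the \texttt{Qutip} values quoted in Equation (\ref{Qutipb}).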
We sample the evolution of the state 100 times within a time window of 2\,s and choose a random sample that reads: \begin{align} \psi_{22} &= 0.2357611\ket{0}- 0.97181103 \im \ket{1}, \end{align} equivalently, \begin{equation}\label{truestate} \begin{aligned} \bs \rho_{22} & = |\psi_{22} \rangle \langle\psi_{22} | \\ &= \begin{bmatrix} 0.056 & \im\, 0.229 \\ -\im\, 0.229 & 0.944 \ \end{bmatrix}. \end{aligned} \end{equation} We perform $n=10,000$ measurements of the state with respect to each of the elements of the Pauli basis to obtain: \begin{align}\label{fake_data} \nonumber n_x^{\ket{0}} &= 4996, \qquad n_x^{\ket{1}} = 5004 \\ n_y^{\ket{0}} &= 2663, \qquad n_y^{\ket{1}} = 7337 \\ \nonumber n_z^{\ket{0}} &= 540,\,\,\, \qquad n_z^{\ket{1}} = 9460 \ \end{align} Given a Gaussian mixture model for each $I\in \{x,y,z\}$ with mean vectors $\mu_0, \mu_1$ and covariance matrices $\bs \Sigma_0, \bs \Sigma_1$, we can sample data that resemble the I-Q plane data. In particular, we consider two spherical Gaussians with means $\mu_0 = \begin{pmatrix} 2.5 & 2.0 \end{pmatrix}^\top$,\,\, $\mu_1 = \begin{pmatrix} -2.5 & 2.0 \end{pmatrix}^\top$. We sample the first Gaussian with frequency $n_I^{\ket 0}/n_I$ and the second Gaussian with frequency $n_I^{\ket{1}}/n_I$, $I \in \{x,y,z\}$, in total 10,000 times. Traditionally, having the I-Q plane data, one would use the EM-algorithm on half of the dataset for calibration purposes, and then use the calibration results to decide the membership of the rest of the measured points, cf. \eqref{dis_adv}. Using Equation \eqref{bi}, one obtains the $b$ vector: \begin{align} b = \begin{bmatrix} -0.0044 \\ -0.4659 \\ -0.8979 \end{bmatrix}, \end{align} which approximates the $b$ vector one obtains from \texttt{Qutip}: \begin{align} \label{Qutipb} b = \begin{bmatrix} -0.0006 \\ -0.4674 \\ -0.892 \end{bmatrix}.
\end{align} Subsequently, the optimizer of the standard SDP formulation of QST \eqref{SDPdensity} is: \begin{align} \bs X_{(12)} = \begin{bmatrix} 0.0597 & -0.0008 + 0.2274\im \\ -0.0008 - 0.2274\im & 0.9403 \ \end{bmatrix}, \end{align} which, given the small number of samples supplied to the algorithm, is a good estimate of the true state \eqref{truestate}. For comparison, using \eqref{Qutipb} we obtain a state estimate: \begin{align} \bs{X}_{\texttt{Qutip}} = \begin{bmatrix} 0.0571 & -0.0003 + 0.2321\im \\ -0.0003 - 0.2321\im & 0.9429 \ \end{bmatrix}. \end{align} Using our approach \eqref{SDPdensity3}, which utilizes the I-Q plane data directly, we explicitly count states using \eqref{dis_adv} to estimate the true state $\bs X$ \eqref{truestate} as: \begin{align} \bs{X}_{(24)} = \begin{bmatrix} 0.0544 & -0.0002 + 0.2240\im \\ -0.0002 - 0.2240\im & 0.9456 \ \end{bmatrix}. \end{align} The Frobenius norms of the differences, as computed by MATLAB\textsuperscript{\textregistered}, are $ \|\bs{X}_{\texttt{Qutip}}- \bs{X}_{(12)}\|_F = 0.6455$ and $\|\bs{X}_{\texttt{Qutip}}-\bs{X}_{(24)}\|_F = 0.6406$, respectively, suggesting a modest improvement in the estimate of the quantum state. This numerical illustration provides only anecdotal evidence of the improvement that can be obtained by considering the bilevel problem. We envision that further work could corroborate the observation on real data. \section{Conclusions} We have considered, for the first time, a joint problem of quantum state tomography and discriminating between the states of a quantum system using the signal actually obtained in the dispersive readout, or similar mechanisms. This allows for a lower sample complexity of the quantum state tomography, compared to traditional approaches, which discriminate first and perform state estimation second, while achieving the same error in the estimate of the state. Considering robust statistics \cite{huber2004robust} in this context allows for many important extensions.
\begin{acknowledgments} We wish to acknowledge Denys Bondar, Zakhar Popovych as well as Christos Aravanis for helpful discussions. Our work has been supported by OP VVV project CZ.02.1.01/0.0/0.0/16\_019/0000765 ``Research Center for Informatics''. \end{acknowledgments}
\section{Introduction} Graph drawing is an important tool in the analysis of networks, which are used in application areas as diverse as the social sciences and bioinformatics. Simple force-directed layout algorithms can be applied successfully to undirected graphs consisting of a few hundred vertices. For larger graphs, however, the standard methods are hampered by very long running times and often converge to local minima. {\it Multilevel} approaches for layouts of large graphs have been used successfully in the past to overcome these limitations. For a recent review, see e.g. \cite{bartel}. A multilevel algorithm usually consists of three steps. In the first step, the {\em coarsening phase}, the graph $G$ is coarsened into a sequence $G_1,G_2,\ldots G_L$ of graphs with a decreasing number of vertices. This is done by combining several vertices of the graph $G_i$ into a single vertex of $G_{i+1}$. Then a standard force-directed algorithm is used to lay out the smallest graph $G_L$. Finally, in the {\em placement phase}, the positions of the vertices of $G_L$ are used as a starting point for the layout of $G_{L-1}$. In that step, the vertices of $G_{L-1}$ are placed close to their coarsened description in $G_L$. The standard force-directed algorithm is then used again to lay out $G_{L-1}$. This procedure is repeated until the original graph $G_1$ is finally laid out. \par {\it Communities} in networks are clusters of vertices with a higher than expected fraction of internal edges. Force-directed network layouts that replace edges by pairwise attractive forces between the adjacent vertices can therefore be expected to place vertices of the same community close to each other. Recently, Blondel et al.~\cite{blondel} proposed a fast calculation of the community structure of networks which also uses a multilevel approach. In this paper we show that the approach of~\cite{blondel} can be used in the coarsening phase of a multilevel layout algorithm.
In Section \ref{sec:method} we give details of our approach, and in Section \ref{sec:results} we compare it with other methods. \par In some cases, such as the protein-protein interaction (PPI) network, a community structure is expected~\cite{Lewis}, but the layout does not reflect this due to the complexity and size of the network. Our multilevel approach can be generalized naturally by stiffening the springs inside a community. In this way, the final layout accentuates the decomposition of the network into communities and thus simplifies the interpretation of the result. This will also be shown in Section~\ref{sec:results}. \section{Methods}\label{sec:method} \subsection{Force model and algorithm} Our layout algorithm uses a standard force-directed method~\cite{Fruchterman91graphdrawing}. It assumes the following three forces: a repelling Coulomb-like force ${\bf F}_C$ between all vertices, a spring force ${\bf F}_S$ between connected vertices, and a drag force ${\bf F}_D$. The Coulomb-like force between two vertices with charges $Q_i$, $Q_j$ at positions ${\bf x}_{i}, {\bf x}_{j}$ is given by \begin{equation} {\bf F}_C = \kappa Q_i Q_j \frac{ {\bf x}_{i}-{\bf x}_{j}}{|{\bf x}_{i}-{\bf x}_{j}|^3} , \end{equation} where we set $\kappa =1$ and $Q_i=Q_j=3$. The spring forces are given by \begin{equation} {\bf F}_S = - k S_{ij} \left(|{\bf x}_{i}-{\bf x}_{j}|-r_0\right) \frac{{\bf x}_{i}-{\bf x}_{j}}{|{\bf x}_{i}-{\bf x}_{j}|}, \label{eq:spring} \end{equation} where $k=10^{-4}$ is the spring constant, $r_0=50$ is the rest length, and $S_{ij}$ is a factor that can be modified to change the relative strength of the spring force between and within communities. We use $S_{ij}=1$ unless otherwise stated. To avoid oscillations, a drag force acts on a vertex moving with velocity $\dot {\bf x}$ according to \begin{equation} {\bf F}_D = - c \dot{{\bf x}}, \end{equation} with $c=0.01$.
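The three forces are straightforward to implement; a minimal numpy sketch with the constants quoted above ($\kappa=1$, $Q=3$, $k=10^{-4}$, $r_0=50$, $c=0.01$):

```python
import numpy as np

KAPPA, Q = 1.0, 3.0          # Coulomb constant and vertex charge
K_SPRING, R0 = 1e-4, 50.0    # spring constant and rest length
C_DRAG = 0.01                # drag coefficient

def coulomb_force(xi, xj):
    """Repelling Coulomb-like force on vertex i due to vertex j."""
    d = xi - xj
    r = np.linalg.norm(d)
    return KAPPA * Q * Q * d / r**3

def spring_force(xi, xj, S_ij=1.0):
    """Spring force on vertex i from the edge (i, j); zero at rest length."""
    d = xi - xj
    r = np.linalg.norm(d)
    return -K_SPRING * S_ij * (r - R0) * d / r

def drag_force(v):
    """Velocity-proportional drag, damping oscillations."""
    return -C_DRAG * v
```

Setting $S_{ij}>1$ inside a community stiffens the corresponding springs, as used for the community-accentuated layouts.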
The many-particle force ${\bf F}_C$ is efficiently handled by a Barnes-Hut approximation~\cite{BH}, and the equations of motion are solved via Runge-Kutta integration. \subsection{Multilevel Approach} The detection of the community structure can be achieved by maximizing the {\it modularity} $Q$, which is defined as \begin{equation} Q=\frac{1}{2m}\sum_{i,j}\left(A_{ij}-\gamma\frac{k_ik_j}{2m}\right)\delta(C_i,C_j), \end{equation} where the sum runs over all pairs of vertices. The adjacency matrix element $A_{ij}$ represents the weight of the edge between the vertices $i$ and $j$, $k_i=\sum_j A_{ij}$ is the sum of the weights of the edges attached to vertex $i$, $m=1/2\sum_{ij}A_{ij}$, and $C_i$ denotes the community of vertex $i$. The so-called resolution parameter $\gamma$ was introduced in~\cite{reichardt}, where it was shown that maximizing the modularity is equivalent to finding the ground state of a spin glass model. It has been shown that maximizing the modularity is NP-complete~\cite{brandes2008}. A heuristic which is able to efficiently discover the community structure in graphs consisting of several millions of vertices has been proposed recently~\cite{blondel}. This algorithm generates successive coarsenings of the original graph until it reaches a final level where the modularity cannot be optimized any further. The algorithm starts with the original graph, where each vertex forms a community of its own and all edges have weights equal to one. Then neighbors are placed in the same community if that increases the modularity. This is repeated until no gain in the modularity can be observed. Now all vertices which belong to the same community are joined into a {\it meta-vertex}. While the vertices of the original graph have weight zero, nonzero weights are assigned to the meta-vertices in the subsequent levels. 
The weight of these meta-vertices is given by the sum of the weights of the previous graph's vertices in the community plus twice the sum of the edge weights inside the community. The edge weight between two (connected) meta-vertices is simply the sum of the weights of the edges connecting them. The same approach as for the original vertices is now applied to the meta-vertices. This procedure is repeated iteratively until no gain in the modularity can be observed anymore. The final meta-vertices are the communities. In the following we number the levels of aggregation with an index $i$, where $i=1$ denotes the first level, corresponding to the original graph, and $i=L$ denotes the final level. We exploit the multilevel structure of the community detection for the graph layout by starting at the coarsest level $i=L$, in which the graph has the same number of vertices as there are communities. The spring constant at an arbitrary level is given by the weight of the meta-edge times the original spring constant, and the charge of a meta-vertex is its weight times the standard charge. To lay out the graph $G_L$, we start with a random configuration and perform $n_L$ Runge-Kutta steps. There are several possibilities to determine the starting positions of the vertices in the $(i-1)$th level, given the final configuration in the $i$th level; see~\cite{bartel} for some examples of this placement step. Here we determine the initial configuration of the $(i-1)$th level by randomly placing all vertices inside a circle around the corresponding vertex from level $i$. The radius of this circle is taken to be half of the minimum distance to all other vertices. The above steps are repeated until the final level $i=1$ is reached.
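One coarsening step of this weight bookkeeping can be written compactly with an indicator matrix; a small numpy sketch (our own illustration of the rules just described):

```python
import numpy as np

def coarsen(A, w, communities):
    """One coarsening step:
    A  - weighted adjacency matrix of the current level,
    w  - vertex weights of the current level,
    communities - community index for every vertex.
    Meta-vertex weight = sum of member weights + 2 * (internal edge weight);
    meta-edge weight   = sum of edge weights between the two communities."""
    labels = sorted(set(communities))
    P = np.zeros((len(w), len(labels)))      # vertex-to-community indicator
    for v, c in enumerate(communities):
        P[v, labels.index(c)] = 1.0
    A_meta = P.T @ A @ P                     # community-to-community sums
    internal = np.diag(A_meta) / 2.0         # each internal edge counted twice
    w_meta = P.T @ w + 2.0 * internal
    np.fill_diagonal(A_meta, 0.0)            # drop self-loops of meta-vertices
    return A_meta, w_meta
```

On a 4-vertex path with communities $\{0,1\}$ and $\{2,3\}$ and zero initial vertex weights, each meta-vertex gets weight $2$ (one internal unit-weight edge) and the single meta-edge gets weight $1$.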
\begin{figure}[h] \begin{center}$ \begin{array}{cccc} \includegraphics[width=.22\textwidth]{A1.pdf} & \includegraphics[width=.22\textwidth]{A2.pdf} & \includegraphics[width=.22\textwidth]{A3.pdf} & \includegraphics[width=.22\textwidth]{A4.pdf} \\ \includegraphics[width=.22\textwidth]{B1.pdf} & \includegraphics[width=.22\textwidth]{B2.pdf} & \includegraphics[width=.22\textwidth]{B3.pdf} & \includegraphics[width=.22\textwidth]{B4.pdf} \\ \includegraphics[width=.22\textwidth]{B5.pdf} & \includegraphics[width=.22\textwidth]{B6.pdf} & \includegraphics[width=.22\textwidth]{B7.pdf} & \includegraphics[width=.22\textwidth]{B8.pdf} \\ \includegraphics[width=.22\textwidth]{B9.pdf} & \includegraphics[width=.22\textwidth]{B10.pdf} & \includegraphics[width=.22\textwidth]{B11.pdf} & \includegraphics[width=.22\textwidth]{B12.pdf} \\ \end{array}$ \end{center} \caption{The layout for the UK graph using a standard force-directed method (upper row, tiles A1 to A4) and our new multi-scale algorithm (lower rows, tiles B1 to B12). The graph is colored according to the 39 communities detected. In the graphs generated by the standard method, the number of Runge-Kutta steps are $n=100$, $n=1006$, $n=9970$ and $n=393677$ for A1, A2, A3 and A4 respectively, corresponding to CPU times of 2.9, 31, 320, and 12203 seconds. In the multi-level approach, tiles B1$-$B12, the number of steps for each level is determined via Equation (\ref{eq:ml}) with $n=100$, and the total CPU time was 3.1 seconds. Videos showing the progress of the layout procedure are available at~\cite{DuerrForce} and~\cite{DuerrML}.} \label{fig:uk} \end{figure} To determine the number of Runge-Kutta integration steps $n_i$ in the $i$th level we require that the same amount of computation time is spent on each level. The performance of the algorithm is limited by the calculation of the many-particle forces. 
The Barnes-Hut treatment of the vertices in the $i$th level scales like $|V_i| \ln (|V_i|)$~\cite{BH}, where $|V_i|$ denotes the number of (meta-)vertices in level $i$. Hence, we set \begin{equation} n_i = \frac{n}{L} \; \frac{|V_1| \ln(|V_1|)}{|V_i| \ln(|V_i|)}, \label{eq:ml} \end{equation} where $n$ sets the total computational budget: by construction, each level costs roughly as much as $n/L$ integration steps on the full graph. This also enables an easy comparison of the multiscale approach with non-multiscale ($L=1$) algorithms using $n$ integration steps. \section{Results\label{sec:results}} In Figure \ref{fig:uk} we show results of applying our approach to the so-called UK dataset with $|V|=4824$ vertices and $|E|=6827$ edges taken from~\cite{walshaw}. Videos of the unfolding of the graph with increasing number of integration steps are also available at~\cite{DuerrForce} for the standard force-directed layout and at~\cite{DuerrML} for the multi-level approach. \subsection{Comparison with a standard force-directed approach} In the upper row of Figure \ref{fig:uk} (labeled A1 to A4), the graph layout obtained with the standard force-directed method is displayed after $n=100$, $n=1006$, $n=9970$ and $n=393677$ integration steps. In the lower rows (labeled B1 to B12) the various steps of our multiscale approach are illustrated. The community detection algorithm determines $L=6$ levels and finds 39 communities in the data set. The starting point of the multiscale algorithm for the graph layout is therefore a random configuration of 39 meta-vertices, each one corresponding to a community. This is shown in tile B1. After 4772 integration steps, the configuration shown in tile B2 is reached. Note that the global structure is already visible in this early phase of the algorithm. In tile B3, some of the meta-vertices are replaced by the corresponding meta-vertices at level $i=5$. This configuration ($|V_5|=71$, $|E_5|=141$) is further evolved with $n_5=2253$ steps (see tile B4).
The remaining tiles B5$-$B10 correspond to the levels $i=4,3,2$, with $n_i = 461$, $121$, and $38$ integration steps and $|V_i| = 265$, $836$, and $2281$ vertices, respectively. Tile B11 is the starting configuration for the final $n_1=16$ integration steps. Note that the total computational time needed for the final layout of the multiscale approach corresponds to the time needed to get the layout displayed in tile A1 with the standard force-directed method, where no global structure is visible. The total computation time was 3.1 seconds, while the time for detecting the communities was 72 milliseconds and can thus be neglected. \par To compare the algorithms quantitatively, we computed the total energy of the configuration of the UK graph for various total numbers of integration steps $n$ (see Figure \ref{fig:ukenergy}) for the standard force-directed layout and our multiscale approach, where $n$ is the same for both methods and is distributed between the different levels of the multiscale approach according to Equation (\ref{eq:ml}). For $n=10$, the multiscale approach generates a configuration with an energy that is reached with the standard method only after about 200 steps. The reason is that the communities and corresponding meta-vertices determine a close-to-minimal energy configuration of the UK graph already at the coarsest level, so that no major, long-range layout changes are needed. This in turn is due to the fact that communities are clusters of vertices with many edges (springs), and therefore keeping vertices of the same community close together is favorable for energy minimization. 
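Such an energy comparison can be reproduced with any force-directed energy function. The sketch below is a hedged illustration only: the paper's exact spring law (Equation (\ref{eq:spring})) and repulsion term are not reproduced in this excerpt, so `layout_energy`, the quadratic spring potential, the `rest_length`, and the $1/r$ repulsion are our own generic choices.

```python
import math

def layout_energy(pos, edges, stiffness=None, rest_length=1.0, repulsion=1.0):
    """Total energy of a 2D layout: quadratic springs on the edges plus a
    pairwise 1/r repulsion between all vertices (a generic force-directed
    energy, not the paper's exact functional form)."""
    if stiffness is None:
        stiffness = {e: 1.0 for e in edges}
    e_spring = 0.0
    for (i, j) in edges:
        d = math.dist(pos[i], pos[j])
        e_spring += 0.5 * stiffness[(i, j)] * (d - rest_length) ** 2
    e_rep = 0.0
    nodes = list(pos)
    for a in range(len(nodes)):
        for b in range(a + 1, len(nodes)):
            d = math.dist(pos[nodes[a]], pos[nodes[b]])
            e_rep += repulsion / max(d, 1e-9)
    return e_spring + e_rep
```

Passing a larger stiffness for selected edges (e.g., intra-community edges) is the same hook the community-aware layout of the protein network uses via $S_{ij}$.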
\begin{figure}[h] \centering \includegraphics[width=.95\textwidth]{energy.pdf} \caption{Comparison of the energies (in arbitrary units) of the UK graph layout between a standard force-directed layout (circles) and the multiscale approach (triangles) as a function of the total number of integration steps $n$.} \label{fig:ukenergy}% \end{figure} \subsection{Comparison with other multi-level approaches} There are several layout algorithms using a multiscale ansatz~\cite{bartel}. While a thorough comparison is beyond the scope of this work, we compare the running time of our method with two recent implementations of the multiscale FM3 algorithm. In Table \ref{tab:comp}, the running times of our Java implementation on a standard quad core desktop PC are compared (where available) for some data sets against a standard CPU-based implementation~\cite{Hochul} and a highly sophisticated implementation using graphics processing units (GPUs)~\cite{Godiyal}. The authors of the GPU-based approach claim that their approach is about $20-60$ times faster than existing CPU implementations~\cite{Godiyal}. \begin{table} \begin{tabular}{| l r|| r | r |r|} \hline Data Set &($|V|$; $|E|$) & FM3 \cite{Hochul} & FM3 (GPU) \cite{Godiyal} & Our method \\ \hline add32 &(4960; 9462)& 12.1 & 1.4 & 0.7 \\ \hline uk &(4824; 6837) & - & - & 3.1 \\ \hline tree\_06\_05 &(9331; 9330) & 17.7 & - & 14.1 \\ \hline tree\_06\_06 &(55987; 55986) & 121.3 & 24.6 & 93 \\ \hline \end{tabular} \caption{Comparison of running times in seconds for several data sets.} \label{tab:comp} \end{table} For the add32 dataset we choose $n=20$, while for all other datasets we set $n=100$. The final configuration for the UK data set is shown in tile B12 of Figure \ref{fig:uk}; the layouts for the other data sets are displayed in Figure \ref{fig:termalization}. 
\begin{figure}[h] \centering \includegraphics[width=.95\textwidth]{Thermalization_Cropped.png} \caption{Final layouts of data sets used for the comparison in Table \ref{tab:comp} and generated with our method. From left to right: add32, tree\_06\_05, and tree\_06\_06.} \label{fig:termalization}% \end{figure} \subsection{Community-aware layout} Sometimes networks are so entangled that the community structure is no longer evident in the layout. As an example we consider the human protein interaction network downloaded in June 2011 from the ConsensusPathDB~\cite{cpdb}. The network has $|V| \approx 13000$ vertices and $|E| \approx 240000$ edges. This network is shown on the right-hand side of Figure \ref{fig:ppi}. One would expect to see a community structure in the network, but due to its complexity and size the layout does not even partially reflect the community structure. Further, it is known that functionally related proteins preferentially cluster in the same community~\cite{Lewis}. Hence, for the visual inspection of the network it is also beneficial if communities are laid out in distinct regions of the graph. To achieve this, we set the constant $S_{ij}$ in the spring force Equation (\ref{eq:spring}) to $S_{ij} = 100$ for edges belonging to the same community, and to $S_{ij}=1$ for edges connecting different communities. The resulting network is shown on the left-hand side of Figure \ref{fig:ppi}. The community structure is clearly exhibited. \begin{figure}[h] \centering \includegraphics[scale=0.55]{ppi.png} \caption{Left: Protein interaction network with layout computed using stiffened springs within communities, $S_{ij}=100$. Right: Same network, but using the standard spring strength for all edges, $S_{ij}=1$.} \label{fig:ppi}% \end{figure} \section{Conclusions} Generating adequate and useful layouts for complex networks is a challenging task. 
Utilizing recent progress in the detection of communities in networks, we constructed a new, multi-level layout algorithm that generates configurations of close-to-minimal energy very fast. For the examples studied in this paper, our method outperforms the (standard implementation of the) FM3 algorithm. We expect that our method works particularly well whenever a clear community structure is present in a network. Knowledge of the communities can also be used to create layouts which accentuate these structures, thus making it easier to understand complex relationships within large networks. \section*{Acknowledgements} We would like to thank J. Hoefkens for carefully reading the manuscript, C. Walshaw for information about the UK graph and U. Brandes for pointing out relevant literature. \clearpage
\section{ Introduction } The two coordinates of an electron in the plane orthogonal to a uniform magnetic field $B\,\bs{e}$ pointing along the direction $\bs{e}$ do not commute. There follows the $U(1)$ algebra \begin{equation} [t(\bs{a}),t(\bs{b})]= \mathrm{i}\, \sin \left(\frac{(\bs{a}\wedge\bs{b})\cdot\bs{e}}{2}\right)\, t(\bs{a}+\bs{b}) \label{eq: def magnetic algebra} \end{equation} obeyed by the triplet of generators $t(\bs{a})$, $t(\bs{b})$, and $t(\bs{a}+\bs{b})$ for any pair $\bs{a}$ and $\bs{b}$ of vectors orthogonal to $\bs{e}$, which is called the magnetic translation algebra in this context~\cite{Zak64}. The magnetic translation algebra can be used to derive the transverse conductivity of the integer quantum Hall effect (IQHE). It has also been used by Girvin, MacDonald, and Platzman in Ref.~\onlinecite{Girvin85} to place a variational estimate on the excitation gap for the fractional quantum Hall effect (FQHE), following closely the approach of Feynman and Bijl in their study of excitations in $^4$He~\cite{Feynman72}. Hamiltonians defined on two-dimensional lattices with topologically nontrivial bands can also display quantum Hall physics. The IQHE can occur in band insulators when the Bloch bands have a nonvanishing Chern number, as shown by Haldane~\cite{Haldane88}. The FQHE requires strong electronic correlations. This is possible if the Chern bands are sufficiently narrow (or even flat)~\cite{Neupert11a,Tang11,Sun11}. Whether flat Chern bands can sustain a FQHE or not is a matter of energetics. 
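Where the sine in Eq.~(\ref{eq: def magnetic algebra}) comes from can be seen in two lines. As a sketch (in units where the magnetic length is one, and with a sign and overall normalization convention for the generators that may differ from the one adopted here), write $t(\bs{a})=e^{\mathrm{i}\,\bs{a}\cdot\bs{K}}$ with guiding-center momenta obeying $[K^{\,}_{1},K^{\,}_{2}]=\mathrm{i}$:

```latex
% [a.K, b.K] = i (a ^ b).e is a c-number, so Baker-Campbell-Hausdorff truncates:
\begin{equation*}
t(\bs{a})\,t(\bs{b})
= e^{\mathrm{i}\,\bs{a}\cdot\bs{K}}\, e^{\mathrm{i}\,\bs{b}\cdot\bs{K}}
= e^{\mathrm{i}\,(\bs{a}+\bs{b})\cdot\bs{K}}\,
  e^{-\frac{\mathrm{i}}{2}\,(\bs{a}\wedge\bs{b})\cdot\bs{e}}
\quad\Longrightarrow\quad
[t(\bs{a}),t(\bs{b})]
= -2\,\mathrm{i}\,
  \sin\left(\frac{(\bs{a}\wedge\bs{b})\cdot\bs{e}}{2}\right)\,
  t(\bs{a}+\bs{b}).
\end{equation*}
```

A rescaling of the generators absorbs the prefactor into the convention of Eq.~(\ref{eq: def magnetic algebra}); the essential point is that the central commutator exponentiates into the sine.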
Exact diagonalization studies of fractionally filled Chern bands with added short-range interactions are consistent with a correlated liquid ground state supporting a FQHE for certain filling fractions~\cite{Neupert11a,Sheng11,Wang11a,Regnault11a,Neupert11b,Bernevig11,Venderbos12,Wang12a,Bernevig12a,Bernevig12b,Wang12b,Liu12,Grushin12,Lauchli12,Sterdyniak12}. Such topological correlated states on the lattice are now known as fractional Chern insulators (FCI). In an effort to draw a bridge between the case in which the FQHE is realized in the continuum and the case in which it is realized in a FCI, Parameswaran, Roy, and Sondhi in Ref.~\onlinecite{Parameswaran12} have pioneered an algebraic approach to FCIs by deriving the algebra obeyed by the density operators projected to the partially filled band~\cite{Bernevig11,Goerbig,Neupert12b,Estienne12}. They found that the algebra~(\ref{eq: def magnetic algebra}) emerges to leading order in a gradient expansion. Remarkably, Murthy and Shankar have (i) constructed in Ref.~\onlinecite{Murthy12} a coherent superposition of the projected density operator that closes the $U(1)$ algebra~(\ref{eq: def magnetic algebra}) on the square lattice and (ii) represented any Hamiltonian that commutes with the number operator and describes the competition between the electronic hopping and the electronic interaction in terms of these generators~\cite{Murthy11}. In this paper, we are going to generalize the results by Murthy and Shankar as follows. We shall represent the $U(1)$ algebra~(\ref{eq: def magnetic algebra}) in terms of coherent superpositions of electron-hole pairs in arbitrary dimensions both in the continuum and for Bravais lattices. We shall then show that these generators provide a complete basis for the linear space of operators spanned by charge-neutral fermion bilinears provided the Bravais lattice, or its embedding space in the continuum limit, is even dimensional. 
For odd dimensions, the generators of the $U(1)$ algebra~(\ref{eq: def magnetic algebra}) form an incomplete basis of the space of operators spanned by charge-neutral fermion bilinears. We first treat the case of Hamiltonians acting on wave functions supported in the continuum for pedagogical reasons in Sec.~\ref{sec: continuum}. After this warm-up, we turn our attention to Hamiltonians acting on wave functions supported on Bravais lattices in Sec.~\ref{sec: lattice}. Sections~\ref{sec: continuum} and \ref{sec: lattice} constitute the main results of this paper. As a sanity check, we verify that the $f$-sum rule is obeyed if one represents the electronic density operator in terms of the particle-hole generators of the algebra~(\ref{eq: def magnetic algebra}) in Sec.~\ref{subsec: f sum rule}. This exercise also suggests caution when performing uncontrolled approximations using the magnetic algebra, for such uncontrolled approximations could predict effects associated with a spurious breaking of time-reversal symmetry. In Sec.~\ref{subsec: Normal ordering and the bare band width}, we explain how, when represented in terms of these generators of the algebra~(\ref{eq: def magnetic algebra}), interactions induce one-body terms that can significantly change the bare bandwidth of lattice Hamiltonians. The same effect in the FQHE requires the addition of a strong one-body perturbation to a Landau band, one that is of the order of the FQHE gap. Thus, whereas the FQHE is a strong-coupling problem, the FCI in a flat band is more like a problem at intermediate coupling. This result explains why, in Ref.~\onlinecite{Grushin12}, a FCI with a Chern number of 2 was more stable with a nonflat bare dispersion than with a flat one, for the bare and induced one-body terms can act to neutralize each other. 
\section{ The case of the continuum } \label{sec: continuum} We define the fermionic Fock space $\mathfrak{F}$ with the help of the algebra \begin{equation} \begin{split} & \left\{ \hat{c} (\bs{k} ), \hat{c}^{\dag}(\bs{k}') \right\}= \delta(\bs{k}-\bs{k}'), \\ & \left\{ \hat{c}(\bs{k} ), \hat{c}(\bs{k}') \right\}= \left\{ \hat{c}^{\dag}(\bs{k} ), \hat{c}^{\dag}(\bs{k}') \right\}=0 \end{split} \label{eq: def fermion algebra continuum} \end{equation} for any pair of momenta $\bs{k},\bs{k}'\in\mathbb{R}^{d}$. Without loss of generality, we ignore any internal degrees of freedom such as the spin quantum numbers since we are after the $U(1)$ algebra~(\ref{eq: def magnetic algebra}). The linear space of fermionic bilinears that we study is spanned by the basis \begin{subequations} \begin{equation} \hat{T}(\bs{q}^{\ }_{1},\bs{q}^{\ }_{2}):= \hat{c}^{\dag}(\bs{q}^{\ }_{1})\, \hat{c}(\bs{q}^{\ }_{2}), \label{eq: def T's continuum} \end{equation} which obeys the algebra \begin{equation} \begin{split} \left[ \hat{T}(\bs{q}^{\,}_{1},\bs{q}^{\,}_{2}), \hat{T}(\bs{q}^{\prime}_{1},\bs{q}^{\prime}_{2}) \right]=&\, \delta(\bs{q}^{\,}_{2}-\bs{q}^{\prime}_{1})\, \hat{T}(\bs{q}^{\,}_{1},\bs{q}^{\prime}_{2}) \\ &\, - \delta(\bs{q}^{\,}_{1}-\bs{q}^{\prime}_{2})\, \hat{T}(\bs{q}^{\prime}_{1},\bs{q}^{\,}_{2}) \end{split} \label{eq: algebra T's continuum} \end{equation} \end{subequations} for any quadruple $\bs{q}^{\,}_{1}$, $\bs{q}^{\,}_{2}$, $\bs{q}^{\prime}_{1}$, and $\bs{q}^{\prime}_{2}$ from $\mathbb{R}^{d}$. For any momentum $\bs{q}\in\mathbb{R}^{d}$ and for any function $ f:\mathbb{R}^{d}\times\mathbb{R}^{d} \longrightarrow\mathbb{C} $, define the coherent superposition \begin{subequations} \begin{equation} \hat{\varrho}^{f}(\bs{q}):= \int\limits_{\bs{p}} f(\bs{q},\bs{p})\, \hat{c}^{\dag}(\bs{q}+\bs{p})\, \hat{c}(\bs{p}). 
\label{eq: def rho f continuum} \end{equation} There follows the algebra \begin{widetext} \begin{equation} \begin{split} \left[ \hat{\varrho}^{f }(\bs{q} ), \hat{\varrho}^{f'}(\bs{q}') \right]=&\, \int\limits_{\bs{p}} \Big[ f(\bs{q},\bs{q}'+\bs{p})\, f^{\prime}(\bs{q}',\bs{p}) - ( \bs{q}\leftrightarrow\bs{q}' \hbox{ and } f\leftrightarrow f' ) \Big]\, \hat{c}^{\dag}(\bs{q}+\bs{q}'+\bs{p})\, \hat{c}^{\, }(\bs{p}) \label{eq: commutator two rho f's continuum} \end{split} \end{equation} \end{widetext} \end{subequations} for any pair of momenta $\bs{q}$ and $\bs{q}'$ and for any pair of functions $f$ and $f'$. The choice $f(\bs{q},\bs{p})=1$ for any pair of momenta $\bs{q}$ and $\bs{p}$ from $\mathbb{R}^{d}$ defines the momentum representation of the local density operator \begin{subequations} \begin{equation} \hat{\rho}(\bs{q}):= \int\limits_{\bs{p}} \hat{c}^{\dag}(\bs{q}+\bs{p})\, \hat{c}(\bs{p}). \label{eq: def rho continuum} \end{equation} Any pair thereof commutes as \begin{equation} \left[ \hat{\rho}(\bs{q} ), \hat{\rho}(\bs{q}') \right]=0. 
\label{eq: algebra densities continuum} \end{equation} \end{subequations} Another choice of the function $f$ is made with the family \begin{subequations} \label{eq: def rgo G q continuum} \begin{equation} \hat{\varrho}(\bs{q};\bs{G}):= \int\limits_{\bs{p}}\, e^{ +\mathrm{i}\,\Phi(\bs{q},\bs{p};\bs{G}) }\, \hat{c}^{\dag}(\bs{q}+\bs{p})\, \hat{c}^{\,}(\bs{p}), \label{eq: def rgo G q continuum a} \end{equation} for any pair $\bs{q}$ and $\bs{G}$ from $\mathbb{R}^{d}$, where \begin{equation} \Phi(\bs{q},\bs{p};\bs{G}):= (\bs{q}+\bs{G})\,\bs{\ast}\,\bs{p} - \frac{1}{2} \bs{q}\,\bs{\ast}\,\bs{G} \label{eq: def rgo G q continuum b} \end{equation} and the $\bs{\ast}$ product \begin{equation} \bs{a}\,\bs{\ast}\,\bs{b}= - \bs{b}\,\bs{\ast}\,\bs{a}\equiv \sum_{i,j=1}^{d} a^{\,}_{i}\, M^{(\bs{\ast})}_{ij} b^{\,}_{j} \label{eq: def rgo G q continuum c} \end{equation} \end{subequations} defines a real antisymmetric bilinear form specified by the real-valued $d\times d$ antisymmetric matrix $M^{(\bs{\ast})}$. When $d$ is even, we assume that $M^{(\bs{\ast})}$ is invertible. When $d$ is odd, $M^{(\bs{\ast})}$ has at least one vanishing eigenvalue and is thus not invertible. Observe that \begin{equation} \hat{\rho}(\bs{q})= \hat{\varrho}(\bs{q};-\bs{q}). \label{eq: relating rho and varrho continuum} \end{equation} We are going to prove that (1) the family $\hat{\varrho}(\bs{q};\bs{G})$ labeled by the pair $\bs{q}$ and $\bs{G}$ from $\mathbb{R}^{d}$ obeys the $U(1)$ algebra~(\ref{eq: def magnetic algebra}), and (2) in even-dimensional space, the family $\hat{\varrho}(\bs{q};\bs{G})$ labeled by the pair $\bs{q}$ and $\bs{G}$ from $\mathbb{R}^{d}$ is complete. 
\begin{widetext} \textit{Proof of closure.} We define \begin{subequations} \label{eq: algebra rho g p,q continuum} \begin{equation} \Gamma(\bs{q},\bs{q}',\bs{p};\bs{G},\bs{G}'):= \Phi(\bs{q},\bs{q}'+\bs{p};\bs{G}) + \Phi(\bs{q}',\bs{p};\bs{G}') - \Phi(\bs{q}+\bs{q}',\bs{p};\bs{G}+\bs{G}') \label{eq: algebra rho g p,q continuum a} \end{equation} in terms of which Eq.~(\ref{eq: commutator two rho f's continuum}) can be rewritten as \begin{equation} \begin{split} \left[ \hat{\varrho}(\bs{q} ;\bs{G} ), \hat{\varrho}(\bs{q}';\bs{G}') \right]=&\, \int\limits_{\bs{p}} \left[ e^{ \mathrm{i}\, \Gamma(\bs{q},\bs{q}',\bs{p};\bs{G},\bs{G}') } - ( \bs{q}\leftrightarrow\bs{q}' \hbox{ and } \bs{G}\leftrightarrow\bs{G}' ) \right]\, e^{ +\mathrm{i}\,\Phi(\bs{q}+\bs{q}',\bs{p};\bs{G}+\bs{G}') }\, \hat{c}^{\dag}(\bs{q}+\bs{q}'+\bs{p})\, \hat{c}(\bs{p}). \end{split} \label{eq: algebra rho g p,q continuum b} \end{equation} \end{subequations} Since \begin{subequations} \label{eq: evaluation Upsilon continuum} \begin{equation} \begin{split} \Gamma(\bs{q},\bs{q}',\bs{p};\bs{G},\bs{G}')=&\, \left(\bs{q}+\frac{1}{2}\bs{G}\right)\, \bs{\ast}\, \left(\bs{q}'+\frac{1}{2}\bs{G}'\right) - \frac{1}{4} \bs{G}\,\bs{\ast}\,\bs{G}' \equiv \Upsilon(\bs{q},\bs{q}';\bs{G},\bs{G}') \end{split} \end{equation} is independent of $\bs{p}$ and antisymmetric under $\bs{q}\leftrightarrow\bs{q}'$ and $\bs{G}\leftrightarrow\bs{G}'$, \begin{equation} \Upsilon(\bs{q},\bs{q}';\bs{G},\bs{G}')= - \Upsilon(\bs{q}',\bs{q};\bs{G}',\bs{G}), \end{equation} \end{subequations} the algebra~(\ref{eq: algebra rho g p,q continuum b}) closes to \begin{subequations} \label{eq: closure of algebra in continuum} \begin{equation} \left[ \hat{\varrho}(\bs{q} ;\bs{G} ), \hat{\varrho}(\bs{q}';\bs{G}') \right]= F(\bs{q},\bs{q}';\bs{G},\bs{G}')\, \hat{\varrho}(\bs{q}+\bs{q}';\bs{G}+\bs{G}') \end{equation} with the structure constant \begin{equation} \begin{split} F(\bs{q},\bs{q}';\bs{G},\bs{G}')=&\, e^{ \mathrm{i}\, 
\Upsilon(\bs{q},\bs{q}';\bs{G},\bs{G}') } - ( \bs{q}\leftrightarrow\bs{q}' \hbox{ and } \bs{G}\leftrightarrow\bs{G}' ) = 2\mathrm{i}\, \sin\,\Upsilon(\bs{q},\bs{q}';\bs{G},\bs{G}'). \end{split} \end{equation} \end{subequations} \textit{Proof of completeness.} Choose any function $f:\mathbb{R}^{d}\times\mathbb{R}^{d}\longrightarrow\mathbb{C}$ such that the Fourier transform \begin{equation} f(\bs{q},\bs{p})=: \bar{f}(\bs{q},\bs{p})\, e^{ \mathrm{i}\,\bs{q}\,\bs{\ast}\,\bs{p} } =: \left( \int\limits_{\bs{G}}\, e^{ +\mathrm{i}\, \bs{G}\,\bs{\ast}\,\bs{p} } \, \tilde{f}(\bs{q},\bs{G}) \right) e^{ \mathrm{i}\,\bs{q}\,\bs{\ast}\,\bs{p} } \end{equation} is well defined. For the second equality to be true for arbitrary functions $\bar{f}(\bs{q},\cdot):\mathbb{R}^{d}\to\mathbb{C}$ with the well-defined Fourier transform $\tilde{f}(\bs{q},\cdot):\mathbb{R}^{d}\to\mathbb{C}$, the square matrix $M^{(\bs{\ast})}$ that defines the $\bs{\ast}$ product must be invertible and thus have an even number $d$ of rows (columns). Indeed, the rank of an antisymmetric matrix $M^{(\bs{\ast})}$ is necessarily even. Hence, in odd-dimensional space, $M^{(\bs{\ast})}$ is never invertible as it has at least one vanishing eigenvalue. This means that the $\bs{\ast}$ Fourier transform $ \int\limits_{\bs{G}}\, e^{ +\mathrm{i}\, \bs{G}\,\bs{\ast}\,\bs{p} } \, \tilde{h}(\bs{G}) $ is at best a function of $d-1$ coordinates of $\bs{p}$ if $d$ is odd. For completeness to hold, it is thus necessary that $d$ be even, which we now assume. A sufficient condition for completeness to hold is that the linear space spanned by the operators~(\ref{eq: def T's continuum}) is limited to the coherent superpositions of the form~(\ref{eq: def rho f continuum}) such that the function $\bar{f}(\bs{q},\cdot):\mathbb{R}^{d}\to\mathbb{C}$ has a Fourier transform for any given momentum $\bs{q}$. 
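The two properties underlying the closure proof above, the independence of $\Gamma$ from $\bs{p}$ and the antisymmetry of $\Upsilon$, are easy to confirm numerically. The sketch below is our own check (the helper names `phi`, `gamma`, and `upsilon` mirror Eqs.~(\ref{eq: def rgo G q continuum b}), (\ref{eq: algebra rho g p,q continuum a}), and (\ref{eq: evaluation Upsilon continuum}) for a randomly chosen invertible antisymmetric form in $d=2$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Antisymmetric bilinear form a*b = a^T M b; invertible M exists only for even d.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
star = lambda a, b: a @ M @ b

def phi(q, p, G):
    """Phase of Eq. (eq: def rgo G q continuum b): (q+G)*p - q*G/2."""
    return star(q + G, p) - 0.5 * star(q, G)

def gamma(q, qp, p, G, Gp):
    """Gamma of Eq. (eq: algebra rho g p,q continuum a)."""
    return phi(q, qp + p, G) + phi(qp, p, Gp) - phi(q + qp, p, G + Gp)

def upsilon(q, qp, G, Gp):
    """Closed form Upsilon of Eq. (eq: evaluation Upsilon continuum)."""
    return star(q + 0.5 * G, qp + 0.5 * Gp) - 0.25 * star(G, Gp)

q, qp, G, Gp, p1, p2 = (rng.standard_normal(2) for _ in range(6))
assert abs(gamma(q, qp, p1, G, Gp) - gamma(q, qp, p2, G, Gp)) < 1e-12  # p-independent
assert abs(gamma(q, qp, p1, G, Gp) - upsilon(q, qp, G, Gp)) < 1e-12    # equals Upsilon
assert abs(upsilon(q, qp, G, Gp) + upsilon(qp, q, Gp, G)) < 1e-12      # antisymmetric
print("closure phase identities verified")
```
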
With the help of Eq.~(\ref{eq: def rgo G q continuum b}), we can then write \begin{equation} f(\bs{q},\bs{p})= \int\limits_{\bs{G}}\, \tilde{f}(\bs{q},\bs{G})\, e^{ \mathrm{i}\,\bs{q}\,\bs{\ast}\,\bs{G}/2 }\, e^{ \mathrm{i}\Phi(\bs{q},\bs{p};\bs{G}) }. \end{equation} In turn and with the help of Eq.~(\ref{eq: def rho f continuum}), we conclude with \begin{equation} \hat{\varrho}^{f}(\bs{q})= \int\limits_{\bs{p}}\, f(\bs{q},\bs{p})\, \hat{c}^{\dag}(\bs{q}+\bs{p})\, \hat{c}^{\,}(\bs{p}) = \int\limits_{\bs{G}}\, \tilde{f}(\bs{q},\bs{G}) e^{ \mathrm{i}\,\bs{q}\,\bs{\ast}\,\bs{G}/2 } \int\limits_{\bs{p}}\, e^{ \mathrm{i}\Phi(\bs{q},\bs{p};\bs{G}) }\, \hat{c}^{\dag}(\bs{q}+\bs{p})\, \hat{c}^{\,}(\bs{p}) = \int\limits_{\bs{G}}\, \tilde{f}(\bs{q},\bs{G})\, e^{ \mathrm{i}\,\bs{q}\,\bs{\ast}\,\bs{G}/2 }\, \hat{\varrho}(\bs{q};\bs{G}). \end{equation} \end{widetext} \section{ The case of the lattice } \label{sec: lattice} We begin with some notation. Let $\Lambda$ be a Bravais lattice and $\Lambda^{\star}$ be its dual. Sites in $\Lambda$ are denoted by $\bs{r}$, and sites in $\Lambda^{\star}$ are denoted by $\bs{G}$. The first Brillouin zone is denoted $\Omega^{\,}_{\mathrm{BZ}}$; it contains the origin of $\mathbb{R}^{d}$. We shall decompose $\mathbb{R}^{d}$ into a set of shifted Brillouin zones $\Omega^{\bs{G}}_{\mathrm{BZ}}$ obtained by translation of $\Omega^{\,}_{\mathrm{BZ}}$ by $\bs{G}\,\in\Lambda^{\star}$, \begin{equation} \mathbb{R}^{d}= \bigcup_{\bs{G}\,\in\Lambda^{\star}} \Omega^{\bs{G}}_{\mathrm{BZ}}. \label{eq: union all BZs} \end{equation} Sites in $\Omega^{\,}_{\mathrm{BZ}}$ are denoted $\bs{k}$, $\bs{q}$, and $\bs{p}$. If $\bs{q}$ and $\bs{p}$ belong to the Brillouin zone $\Omega^{\,}_{\mathrm{BZ}}$, this might not be the case for $\bs{q}+\bs{p}$. There is a unique $\bs{G}^{\,}_{\bs{q}+\bs{p}}\,\in\Lambda^{\star}$ such that $\bs{q}+\bs{p}\,\in\Omega^{\bs{G}^{\,}_{\bs{q}+\bs{p}}}_{\mathrm{BZ}}$. 
Correspondingly, $\bs{q}+\bs{p}-\bs{G}^{\,}_{\bs{q}+\bs{p}}\,\in\Omega^{\,}_{\mathrm{BZ}}$. We shall use the notation \begin{subequations} \begin{equation} [\bs{q}+\bs{p}]^{\,}_{\mathrm{BZ}}\equiv \bs{q}+\bs{p}-\bs{G}^{\,}_{\bs{q}+\bs{p}} \,\in\Omega^{\,}_{\mathrm{BZ}}. \label{eq: def noation [...]BZ} \end{equation} Two observations are pertinent to what follows. First, the bracket~(\ref{eq: def noation [...]BZ}) obeys the nesting rule \begin{equation} \left[\vphantom{\Big[} \bs{q}'+[\bs{q}+\bs{p}]^{\,}_{\mathrm{BZ}} \right]^{\,}_{\mathrm{BZ}}= [ \bs{q} + \bs{q}' + \bs{p} ]^{\,}_{\mathrm{BZ}} \label{eq: nested bracket} \end{equation} \end{subequations} for any triplet $\bs{q}$, $\bs{q}'$, and $\bs{p}$ from the first Brillouin zone. Second, if we hold $\bs{q}\,\in\Omega^{\ }_{\mathrm{BZ}}$ fixed and vary $\bs{p}$ across the Brillouin zone $\Omega^{\ }_{\mathrm{BZ}}$, the unique reciprocal wave vector $\bs{G}^{\,}_{\bs{q}+\bs{p}}\,\in\Lambda^{\star}$ such that $\bs{q}+\bs{p}-\bs{G}^{\,}_{\bs{q}+\bs{p}}\,\in\Omega^{\,}_{\mathrm{BZ}}$ defines an implicit function of $\bs{q}$ that is piecewise constant with discontinuous jumps each time $\bs{q}+\bs{p}$ crosses the boundary separating neighboring Brillouin zones. We define the fermionic Fock space $\mathfrak{F}$ with the help of the algebra \begin{equation} \begin{split} & \left\{ \hat{c}^{\, }_{\bs{k} +\bs{G} }, \hat{c}^{\dag}_{\bs{k}'+\bs{G}'} \right\}= \delta^{\ }_{\bs{k},\bs{k}'}, \\ & \left\{ \hat{c}^{\,}_{\bs{k} +\bs{G} }, \hat{c}^{\,}_{\bs{k}'+\bs{G}'} \right\}= \left\{ \hat{c}^{\dag}_{\bs{k} +\bs{G} }, \hat{c}^{\dag}_{\bs{k}'+\bs{G}'} \right\}=0 \end{split} \label{eq: def fermion algebra lattice} \end{equation} for any pair $\bs{k}$ and $\bs{k}'$ from the Brillouin zone $\Omega^{\,}_{\mathrm{BZ}}$ and any pair $\bs{G}$ and $\bs{G}'$ from the dual lattice $\Lambda^{\star}$. 
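The bracket $[\,\cdot\,]^{\,}_{\mathrm{BZ}}$ and the nesting rule~(\ref{eq: nested bracket}) can be made concrete with componentwise folding. The sketch below is our own illustration, assuming a simple square lattice with unit lattice constant so that the first Brillouin zone is $[0,2\pi)^{2}$ and the dual lattice is $2\pi\mathbb{Z}^{2}$; `fold` and `G_of` are hypothetical helper names:

```python
import numpy as np

TWO_PI = 2.0 * np.pi

def fold(k):
    """[k]_BZ: fold momenta componentwise into the first Brillouin zone
    [0, 2*pi)^d of a simple (hyper)cubic lattice with unit lattice constant."""
    return np.mod(k, TWO_PI)

def G_of(k):
    """The unique dual-lattice vector G_k with k - G_k in the first BZ."""
    return k - fold(k)

rng = np.random.default_rng(1)
for _ in range(100):
    q, qp, p = (rng.uniform(0, TWO_PI, size=2) for _ in range(3))
    # Nesting rule, Eq. (eq: nested bracket): [q' + [q+p]]_BZ == [q+q'+p]_BZ
    assert np.allclose(fold(qp + fold(q + p)), fold(q + qp + p))
    # G_{q+p} is a dual-lattice vector: an integer multiple of 2*pi per component
    g = G_of(q + p) / TWO_PI
    assert np.allclose(g, np.round(g))
print("nesting rule verified")
```
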
The linear space of fermionic bilinears that we study is spanned by the basis \begin{subequations} \begin{equation} \hat{T}^{\,}_{\bs{q}^{\,}_{1},\bs{q}^{\,}_{2}}:= \hat{c}^{\dag}_{\bs{q}^{\,}_{1}}\, \hat{c}^{\, }_{\bs{q}^{\,}_{2}}, \label{eq: ded T12 family} \end{equation} which obeys the algebra \begin{equation} \left[ \hat{T}^{\,}_{\bs{q}^{\,}_{1},\bs{q}^{\,}_{2}}, \hat{T}^{\,}_{\bs{q}^{\prime}_{1},\bs{q}^{\prime}_{2}} \right]= \delta^{\,}_{\bs{q}^{\,}_{2},\bs{q}^{\prime}_{1}}\, \hat{T}^{\,}_{\bs{q}^{\,}_{1},\bs{q}^{\prime}_{2}} - \delta^{\,}_{\bs{q}^{\,}_{1},\bs{q}^{\prime}_{2}}\, \hat{T}^{\,}_{\bs{q}^{\prime}_{1},\bs{q}^{\,}_{2}} \label{eq: algebra T's} \end{equation} \end{subequations} for any quadruple $\bs{q}^{\,}_{1}$, $\bs{q}^{\,}_{2}$, $\bs{q}^{\prime}_{1}$, and $\bs{q}^{\prime}_{2}$ from the Brillouin zone. For any $\bs{q}$ from the Brillouin zone $\Omega^{\,}_{\mathrm{BZ}}$ and for any function $ f:\Omega^{\,}_{\mathrm{BZ}}\times\Omega^{\,}_{\mathrm{BZ}} \longrightarrow\mathbb{C} $, define \begin{equation} \hat{\varrho}^{f}_{\bs{q}}:= \sum_{\bs{p}\,\in\Omega^{\,}_{\mathrm{BZ}}} f^{\,}_{\bs{q},\bs{p}}\, \hat{c}^{\dag}_{[\bs{q}+\bs{p}]^{\,}_{\mathrm{BZ}}}\, \hat{c}^{\, }_{\bs{p}}. \label{eq: def varrho lattice} \end{equation} There follows the algebra [with the help of Eq.~(\ref{eq: nested bracket})] \begin{equation} \begin{split} \left[ \hat{\varrho}^{f }_{\bs{q} }, \hat{\varrho}^{f'}_{\bs{q}'} \right]=&\, \sum_{\bs{p}\,\in\Omega^{\,}_{\mathrm{BZ}}} \left[ f^{\, }_{ \bs{q},[\bs{q}'+\bs{p}]^{\,}_{\mathrm{BZ}} }\, f^{\prime}_{ \bs{q}',\bs{p} }\, - ( \bs{q},f\leftrightarrow\bs{q}',f' ) \right] \\ &\, \times \hat{c}^{\dag}_{ [\bs{q}+\bs{q}'+\bs{p}]^{\,}_{\mathrm{BZ}} }\, \hat{c}^{\, }_{\bs{p} } \end{split} \label{eq: commutator two rhos} \end{equation} for any pair of momenta $\bs{q}$ and $\bs{q}'$ from the Brillouin zone and for any pair of functions $f$ and $f'$. 
The choice $f^{\,}_{\bs{q},\bs{p}}=1$ for any pair $\bs{q}$ and $\bs{p}$ from the Brillouin zone defines the momentum representation of the local density operator \begin{subequations} \begin{equation} \hat{\rho}^{\,}_{\bs{q}}:= \sum_{\bs{p}\in\Omega^{\,}_{\mathrm{BZ}}} \hat{c}^{\dag}_{[\bs{q}+\bs{p}]^{\,}_{\mathrm{BZ}}}\, \hat{c}^{\,}_{\bs{p}}. \label{eq: def rho lattice} \end{equation} Any pair thereof commutes as \begin{equation} \left[ \hat{\rho}^{\,}_{\bs{q} }, \hat{\rho}^{\,}_{\bs{q}'} \right]=0. \label{eq: algebra densities lattice} \end{equation} \end{subequations} Another choice of the function $f$ is made with the family \begin{subequations} \label{eq: def Phi G q p} \begin{equation} \hat{\varrho}^{\bs{G} }_{\bs{q}}:= \sum_{\bs{p}\,\in\Omega^{\,}_{\mathrm{BZ}}} e^{+\mathrm{i}\,\Phi^{\bs{G}}_{\bs{q},\bs{p}}}\, \hat{c}^{\dag}_{[\bs{q}+\bs{p}]^{\,}_{\mathrm{BZ}}}\, \hat{c}^{\, }_{\bs{p}} \label{eq: def varrho G q} \end{equation} for any $\bs{G}$ from the dual lattice and $\bs{q}$ from the Brillouin zone, where \begin{equation} \Phi^{\bs{G}}_{\bs{q},\bs{p}}:= \frac{1}{2\pi} \left[ (\bs{q}+\bs{G})\,\bs{\ast}\,\bs{p} - (\bs{q}+\bs{p}+\bs{G}) \,\bs{\ast}\, \bs{G}^{\,}_{\bs{q}+\bs{p}+\bs{G}} \right] \label{eq: def Phi G q p a} \end{equation} \end{subequations} and the $d\times d$ matrix $M^{(\bs{\ast})}_{\Lambda}$ that defines the $\bs{\ast}$ product is antisymmetric, as was the case in the continuum, but with the restriction that \begin{equation} \frac{1}{2\pi}\,\bs{G}\,\bs{\ast}\,\bs{G}'= 0 \hbox{ mod } 2\pi, \qquad \forall\bs{G},\bs{G}'\in\Lambda^{\star}, \label{eq: * product set by Bravais lattice} \end{equation} to accommodate the $d$-dimensional Bravais lattice $\Lambda$. When $d$ is even, $M^{(\bs{\ast})}_{\Lambda}$ has a nonvanishing determinant by assumption. 
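Condition~(\ref{eq: * product set by Bravais lattice}) is restrictive. For a square lattice, where dual vectors are $\bs{G}=2\pi\bs{m}$ with integer $\bs{m}$, it reduces to $\bs{m}^{\mathsf{T}}M^{(\bs{\ast})}_{\Lambda}\bs{m}'\in\mathbb{Z}$; an integer antisymmetric matrix satisfies this, while a half-integer one does not. A small numerical illustration (our own; `admissible` is a hypothetical helper):

```python
import numpy as np

# For a square Bravais lattice, G = 2*pi*m with integer m, so
# (1/2pi) G*G' = 2*pi * (m^T M m'), and the constraint
# (1/2pi) G*G' = 0 mod 2pi holds iff m^T M m' is an integer.

def admissible(M, m, mp):
    x = float(m @ M @ mp)
    return abs(x - round(x)) < 1e-9

J = np.array([[0, 1], [-1, 0]])  # integer antisymmetric matrix: admissible
H = 0.5 * J                      # half-integer entries: condition fails

rng = np.random.default_rng(3)
pairs = rng.integers(-5, 6, size=(50, 2, 2))
assert all(admissible(J, m, mp) for m, mp in pairs)
assert not admissible(H, np.array([1, 0]), np.array([0, 1]))  # m^T H m' = 1/2
print("lattice * product condition illustrated")
```
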
As announced, the $U(1)$ algebra \begin{equation} \left[ \hat{\varrho}^{\bs{G} }_{\bs{q} }, \hat{\varrho}^{\bs{G}'}_{\bs{q}'} \right]= 2\mathrm{i}\, \sin\, \left( \frac{ (\bs{q}+\bs{G})\,\bs{\ast}\,(\bs{q}'+\bs{G}') } { 2\pi } \right)\, \hat{\varrho}^{\bs{G} +\bs{G}'}_{\bs{q}+\bs{q}'} \label{eq: closed GMP algebra on the lattice} \end{equation} follows for any quadruple $\bs{q}$, $\bs{q}'$, $\bs{G}$, and $\bs{G}^{\prime}$. The proof of Eq.~(\ref{eq: closed GMP algebra on the lattice}) is technically more involved than that of Eq.~(\ref{eq: closure of algebra in continuum}) as one needs to account for the restriction on momenta to the first Brillouin zone. For this reason, we refer the reader to Appendix~\ref{appsec: Proof closure algebra} for the details of the proof. To prove completeness, we assume that the dimensionality $d$ is even for the same reasons as given below Eq.~(\ref{eq: def rgo G q continuum c}). One verifies that \begin{subequations} \label{eq: evaluation Phi q and p lattice} \begin{equation} \Phi^{\bs{G}}_{\bs{q},\bs{p}}= \Theta^{\,}_{\bs{q},\bs{p}} + \frac{ \bs{G}\,\bs{\ast}\,\bs{p} - \bs{p}\,\bs{\ast}\,\bs{G} } { 2\pi } + \Theta^{\bs{G}}_{\bs{q}} \hbox{ mod } 2\pi, \end{equation} where the function \begin{equation} 2\pi\, \Theta^{\,}_{\bs{q},\bs{p}}:= \bs{q}\,\bs{\ast}\,\bs{p} - (\bs{q}+\bs{p})\,\bs{\ast}\,\bs{G}^{\,}_{\bs{q}+\bs{p}} \label{eq: evaluation Phi q and p lattice a} \end{equation} is independent of $\bs{G}$, while the function \begin{equation} 2\pi\, \Theta^{\bs{G}}_{\bs{q}}:= - \bs{q}\,\bs{\ast}\,\bs{G} \label{eq: evaluation Phi q and p lattice b} \end{equation} \end{subequations} is independent of $\bs{p}$. We will use the fact that \begin{equation} \Theta^{\,}_{\bs{q}=\bs{0},\bs{p}}= \Theta^{\bs{G}}_{\bs{q}=\bs{0}}=0 \label{eq: Theta-q-p and Theta-q-G can vanish if q=0} \end{equation} in Sec.~\ref{subsec: Normal ordering and the bare band width} and Appendix~\ref{appendix: Equation MAIN RESULT}. 
We define the function $ \bar{f}:\Omega^{\,}_{\mathrm{BZ}}\times\Omega^{\,}_{\mathrm{BZ}} \longrightarrow\mathbb{C} $ by \begin{equation} f^{\,}_{\bs{q},\bs{p}}=: \bar{f}^{\,}_{\bs{q},\bs{p}}\, e^{+\mathrm{i}\,\Theta^{\,}_{\bs{q},\bs{p}}}. \label{eq: from f to tilde f lattice} \end{equation} We then use the Fourier expansion \begin{equation} \bar{f}^{\,}_{\bs{q},\bs{p}}=: \sum_{\bs{G}\,\in\Lambda^{\star}} \tilde{f}^{\bs{G}}_{\bs{q}}\, e^{ +\mathrm{i}\, \left( \bs{G}\,\bs{\ast}\,\bs{p} - \bs{p}\,\bs{\ast}\,\bs{G} \right) /(2\pi) } \label{eq: Fourier trsf lattice} \end{equation} to do the following manipulations: \begin{eqnarray} f^{\,}_{\bs{q},\bs{p}}&=& e^{+\mathrm{i}\,\Theta^{\,}_{\bs{q},\bs{p}}}\, \bar{f}^{\,}_{\bs{q},\bs{p}} \nonumber\\ &=& e^{+\mathrm{i}\,\Theta^{\,}_{\bs{q},\bs{p}}}\, \left[ \sum_{\bs{G}\,\in\Lambda^{\star}} \tilde{f}^{\bs{G}}_{\bs{q}} e^{ +\mathrm{i}\, \left( \bs{G}\,\bs{\ast}\,\bs{p} - \bs{p}\,\bs{\ast}\,\bs{G} \right) /(2\pi) } \right] \nonumber\\ &=& \sum_{\bs{G}\,\in\Lambda^{\star}} \tilde{f}^{\bs{G}}_{\bs{q}} e^{ + \mathrm{i}\, \Theta^{\,}_{\bs{q},\bs{p}} + \mathrm{i}\, \left( \bs{G}\,\bs{\ast}\,\bs{p} - \bs{p}\,\bs{\ast}\,\bs{G} \right) /(2\pi) + \mathrm{i}\, \Theta^{\bs{G}}_{\bs{q}} - \mathrm{i}\, \Theta^{\bs{G}}_{\bs{q}} } \nonumber\\ &=& \sum_{\bs{G}\,\in\Lambda^{\star}} \underbrace{ \tilde{f}^{\bs{G}}_{\bs{q}} e^{ - \mathrm{i}\, \Theta^{\bs{G}}_{\bs{q}} } }_{\hbox{independent of $\bs{p}$}} \times\, e^{ + \mathrm{i}\, \Phi^{\bs{G}}_{\bs{q},\bs{p}} }. 
\label{eq: correct fourier expansion f} \end{eqnarray} Inserting Eq.~(\ref{eq: correct fourier expansion f}) into Eq.~(\ref{eq: def varrho lattice}) gives \begin{eqnarray} \hat{\varrho}^{f}_{\bs{q}}&=& \sum_{\bs{G}\,\in\Lambda^{\star}} \tilde{f}^{\bs{G}}_{\bs{q}} e^{ - \mathrm{i}\, \Theta^{\bs{G}}_{\bs{q}} } \, \left( \sum_{\bs{p}\,\in\Omega^{\,}_{\mathrm{BZ}}} e^{ + \mathrm{i}\, \Phi^{\bs{G}}_{\bs{q},\bs{p}} } \, \hat{c}^{\dag}_{[\bs{q}+\bs{p}]^{\,}_{\mathrm{BZ}}}\, \hat{c}^{\, }_{\bs{p}} \right) \nonumber\\ &=& \sum_{\bs{G}\,\in\Lambda^{\star}} \tilde{f}^{\bs{G}}_{\bs{q}} e^{ - \mathrm{i}\, \Theta^{\bs{G}}_{\bs{q}} }\, \hat{\varrho}^{\bs{G}}_{\bs{q}}, \label{eq: eq stating completeness} \end{eqnarray} where we made use of definition~(\ref{eq: def varrho G q}) to reach the last equality. Completeness has thus been proved if the space of functions $f$ is restricted to those for which Fourier transform~(\ref{eq: Fourier trsf lattice}) exists. \section{ Discussion } \label{sec: Discussion} As pointed out by Murthy and Shankar, the magnetic translation algebra is not limited to situations in which time-reversal symmetry is broken. From the point of view of many-body physics, the generators of the magnetic translation algebra can also be thought of as special coherent superpositions of particle-hole excitations. As such they are always present in the many-body Fock space. If time-reversal symmetry is either explicitly or spontaneously broken, it is plausible that these excitations might be selected by the many-body interactions to play an important role at low energies and long distances. However, the breaking of time-reversal symmetry alone is no guarantee for the FQHE. The selection of a ground state supporting the FQHE is a subtle compromise between the kinetic energy and the interactions. 
If time-reversal symmetry is neither explicitly nor spontaneously broken, it is harder to imagine that these excitations are of relevance to the low-energy and long-distance properties of interacting electrons. With this motivation in mind, we are going to discuss the following two cases. \textit{(a) The $f$-sum rule.} We begin in Sec.~\ref{subsec: f sum rule} with the case of interacting electrons in the continuum limit without explicit breaking of time-reversal symmetry and for which spontaneous breaking of time-reversal symmetry is not anticipated. This situation is the one expected if electrons interact through sufficiently weak density-density interactions. We are going to show how to recover the $f$-sum rule when we choose to represent the many-body Hamiltonian in terms of the generators~(\ref{eq: def rgo G q continuum}) of the magnetic translation algebra for any even dimension $d$ of space. This exercise serves two purposes. First, it gives us confidence that we can solve an interacting problem devoid of any magnetic field using the magnetic translation algebra, i.e., using a technology that is geared to the presence of a magnetic field. We find this result remarkable. Second, it is a warning against blindly performing a mean-field approximation that delivers the FQHE once the Hamiltonian is represented in terms of the generators~(\ref{eq: def rgo G q continuum}). In other words, one should be cautious when using the magnetic translation algebra in an approximate fashion to predict a FQHE, for such treatments can predict a FQHE when none is known to occur. \textit{(b) FCIs at intermediate rather than strong couplings.} To illustrate the delicate competition between the kinetic energy and the interactions, we consider in Sec.~\ref{subsec: Normal ordering and the bare band width} a Hamiltonian describing a band insulator to which we add density-density interactions that preserve translation invariance.
We represent the projection of this Hamiltonian onto a single band in terms of the generators~(\ref{eq: def Phi G q p}) for any even dimension $d$ of the Bravais lattice. In doing so, we are going to show that normal ordering can change the bare bandwidth by a value comparable to the characteristic energy for the interactions. Hence, if the bare bandwidth is smaller than the characteristic energy for the interactions, as is usually believed to be necessary to stabilize a FCI, normal ordering can be an effect of order 1. As an application of this result, we consider any projected and normal-ordered Hamiltonian $\hat{H}$ describing itinerant fermions in a flat band carrying a nonvanishing Chern number and interacting through a density-density interaction that preserves translation invariance. We assume that $\hat{H}$ supports a FCI as the ground state at the partial filling $0<\nu<1$ of the flat band. A particle-hole transformation turns the normal-ordered $\hat{H}$ into $\widetilde{H}$, whereby $\widetilde{H}$ must support a FCI made of holes as the ground state at the partial filling $\widetilde{\nu}=1-\nu$. What is remarkable is that the projected Hamiltonian $\widetilde{H}$, when decomposed into a one-body term and a normal-ordered interaction, can be thought of as describing holes with a genuine dispersion and interacting through a normal-ordered density-density interaction sharing the same functional form as $\hat{H}$. The dispersion of the holes is genuine because its width is generically nonvanishing and of the order of the characteristic interaction strength times a numerical factor of geometrical origin. Indeed, this numerical factor arises because of the geometry induced by the overlaps between pairs of Bloch states from the original flat band~\cite{Page87}. When these overlaps are constant, as is the case in the FQHE, this numerical factor vanishes so that $\widetilde{H}$ can also be assigned a flat band.
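To see schematically where the induced hole dispersion comes from, write the particle-hole transformation as $\hat{c}^{\dag}_{j}=\hat{d}^{\,}_{j}$ with $j$ a generic single-band momentum label (this is a sketch of the operator algebra only; the momentum bookkeeping is suppressed). Reordering a normal-ordered quartic term into normal order for the holes yields

```latex
\hat{d}^{\,}_{1}\hat{d}^{\,}_{2}\hat{d}^{\dag}_{3}\hat{d}^{\dag}_{4}
=
\hat{d}^{\dag}_{3}\hat{d}^{\dag}_{4}\hat{d}^{\,}_{1}\hat{d}^{\,}_{2}
+\delta^{\,}_{2,4}\,\hat{d}^{\dag}_{3}\hat{d}^{\,}_{1}
+\delta^{\,}_{1,3}\,\hat{d}^{\dag}_{4}\hat{d}^{\,}_{2}
-\delta^{\,}_{2,3}\,\hat{d}^{\dag}_{4}\hat{d}^{\,}_{1}
-\delta^{\,}_{1,4}\,\hat{d}^{\dag}_{3}\hat{d}^{\,}_{2}
+\delta^{\,}_{2,3}\delta^{\,}_{1,4}
-\delta^{\,}_{1,3}\delta^{\,}_{2,4}.
```

The $\delta$-contracted quadratic terms are one-body terms for the holes; contracting them with the interaction kernel sums one momentum over the Bloch-state overlaps, which is the geometric origin of the generically nonvanishing hole bandwidth.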
When these overlaps are functions of both the relative and center-of-mass momenta of the pair of Bloch states, then this numerical factor can be nonvanishing. That this numerical factor can be of order unity, and thus matters in a crucial way in order to stabilize the FCI at the filling fraction $\widetilde{\nu}$, can be inferred from the following numerical results. In Ref.~\onlinecite{Neupert11a}, a band insulator with two flat bands supporting the Chern numbers $\pm1$ was shown to support a FCI phase at the filling fraction $1/3$ in the presence of a repulsive nearest-neighbor density-density interaction projected onto the lower flat band. In Ref.~\onlinecite{Regnault11a}, the \textit{same} band insulator was shown to support the \textit{same} FCI phase at the \textit{same} filling fraction $1/3$ in the presence of a \textit{different} interaction, namely, the repulsive nearest-neighbor density-density interaction projected onto the lower flat band and then \textit{normal ordered}. Hence, at the filling fraction $1/3$, the FCI phase is robust to whether the projected interaction is normal ordered or not. In Ref.~\onlinecite{Neupert11b}, the same model as in Ref.~\onlinecite{Neupert11a} was also shown to support a FCI phase at the filling fraction $2/3$. However, no evidence for a topological phase was found at the filling fraction $2/3$ using the \textit{normal ordered} projected interaction in Ref.~\onlinecite{Regnault11a}. Hence, at the filling fraction $2/3$, the FCI phase either is not selected as the ground state or is very close to a phase transition to a phase without topological order when the projected interaction is normal ordered, while the FCI phase is selected as the ground state when the projected interaction is not normal ordered. 
We conclude that the characteristic bandwidth of the one-body term that is generated by normal ordering the repulsive nearest-neighbor density-density interaction must be of the same order as the characteristic energy scale of the interaction. Both quantitative examples are consistent with the fact that interactions projected onto a single Chern band can induce one-body terms that can significantly alter the bandwidth of lattice Hamiltonians for itinerant fermions. \subsection{ $f$-sum rule } \label{subsec: f sum rule} The $f$-sum rule holds for electrons with mass $m$ and the quadratic dispersion \begin{subequations} \label{eq: f sum rule} \begin{equation} \varepsilon(\bs{p})\,:= \frac{\bs{p}^{2}}{2m} \label{eq: f sum rule a} \end{equation} subjected to any one-body potential $V$ and interacting with any translation-invariant density-density interaction $U$ in any dimension $d$. Pines and Nozi\`eres presented a derivation thereof in Ref.~\onlinecite{Pines66} that hinges on the fact that the operator identity \begin{equation} \Big[ \hat{\rho}(\bs{q}), [\hat{H},\hat{\rho}(\bs{-q})] \Big]= \frac{\bs{q}^{2}}{m}\, \hat{N} \label{eq: f sum rule b} \end{equation} holds for any momentum $\bs{q}\in\mathbb{R}^{d}$. Here, \begin{equation} \hat{N}:= \int\limits_{\bs{p}}\, \hat{c}^{\dag}(\bs{p})\, \hat{c}(\bs{p}) \label{eq: f sum rule c} \end{equation} is the conserved particle number operator, and the many-body Hamiltonian $ \hat{H}:= \hat{H}^{\,}_{0} + \hat{H}^{\,}_{V} + \hat{H}^{\,}_{U} $ is the sum of the dispersion \begin{equation} \hat{H}^{\,}_{0}:= \int\limits_{\bs{p}} \varepsilon(\bs{p})\, \hat{c}^{\dag}(\bs{p})\, \hat{c}(\bs{p}), \label{eq: f sum rule d} \end{equation} the one-body potential \begin{equation} \hat{H}^{\,}_{V}:= \int\limits_{\bs{q}} V(\bs{q})\, \hat{\rho}(-\bs{q}), \label{eq: f sum rule e} \end{equation} and the two-body potential \begin{equation} \hat{H}^{\,}_{U}:= \int\limits_{\bs{q}} U(\bs{q})\; \hat{\rho}(\bs{q})\, \hat{\rho}(\bs{-q}). 
\label{eq: f sum rule f} \end{equation} \end{subequations} The only nonvanishing contribution to the nested commutator in Eq.~(\ref{eq: f sum rule b}) arises from the quadratic dispersion $\hat{H}^{\,}_{0}$ in view of the Abelian algebra~(\ref{eq: algebra densities continuum}). Equation~(\ref{eq: f sum rule b}) follows from the algebra~(\ref{eq: def fermion algebra continuum}). As a sanity check, we are going to verify Eq.~(\ref{eq: f sum rule b}) for any even dimension $d$ with the help of the magnetic translation algebra \begin{equation} \begin{split} \left[ \hat{\varrho}(\bs{q} ;\bs{G} ), \hat{\varrho}(\bs{q}';\bs{G}') \right]=&\, 2\mathrm{i}\, \sin\Upsilon(\bs{q},\bs{q}';\bs{G},\bs{G}') \\ &\, \times\, \hat{\varrho}(\bs{q}+\bs{q}';\bs{G}+\bs{G}'), \end{split} \label{eq: magnetic algebra for f sum rule} \end{equation} where $\Upsilon(\bs{q},\bs{q}';\bs{G},\bs{G}')$ is defined in Eq.~(\ref{eq: evaluation Upsilon continuum}). We shall only evaluate the contribution from the quadratic dispersion~(\ref{eq: f sum rule d}). First, we recall that $\hat{\rho}(\bs{q})=\hat{\varrho}(\bs{q};\bs{-q})$ according to Eq.~(\ref{eq: relating rho and varrho continuum}). Second, we expand $\hat{H}^{\,}_{0}$ in terms of the magnetic translation densities $\hat{\varrho}(\bs{q};\bs{G})$, \begin{subequations} \label{eq: rep H0 varrho} \begin{eqnarray} \hat{H}^{\,}_{0}&=& \int\limits_{\bs{G}} \tilde{\varepsilon}(\bs{G})\; \hat{\varrho}(\bs{q}=0;\bs{G}), \label{eq: rep H0 varrho a} \end{eqnarray} where \begin{eqnarray} \tilde{\varepsilon}(\bs{G})&=& \int\limits_{\bs{p}} e^{ -\mathrm{i}\,\bs{G}\bs{\ast}\bs{p} }\, \varepsilon(\bs{p}). \label{eq: rep H0 varrho b} \end{eqnarray} \end{subequations} It is with Eq.~(\ref{eq: rep H0 varrho b}) that we made use of $d$ being even.
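As a quick consistency check of expansion~(\ref{eq: rep H0 varrho a}), inserting $\hat{\varrho}(\bs{q}=0;\bs{G})=\int_{\bs{p}}e^{+\mathrm{i}\,\bs{G}\bs{\ast}\bs{p}}\,\hat{c}^{\dag}(\bs{p})\,\hat{c}(\bs{p})$ and assuming the pairing $\bs{G}\bs{\ast}\bs{p}$ is bilinear, so that the $\bs{G}$-integral yields a delta function (with the $2\pi$ normalizations absorbed in the measures), reproduces the dispersion:

```latex
\int\limits_{\bs{G}}
\tilde{\varepsilon}(\bs{G})\,
\hat{\varrho}(\bs{q}=0;\bs{G})
=
\int\limits_{\bs{p}}
\int\limits_{\bs{p}'}
\left[
\int\limits_{\bs{G}}
e^{+\mathrm{i}\,\bs{G}\bs{\ast}(\bs{p}-\bs{p}')}
\right]
\varepsilon(\bs{p}')\,
\hat{c}^{\dag}(\bs{p})\,
\hat{c}(\bs{p})
=
\int\limits_{\bs{p}}
\varepsilon(\bs{p})\,
\hat{c}^{\dag}(\bs{p})\,
\hat{c}(\bs{p})
=
\hat{H}^{\,}_{0}.
```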
\begin{widetext} \noindent Third, we make a first use of Eq.~(\ref{eq: magnetic algebra for f sum rule}) to evaluate the internal commutator \begin{eqnarray} \Big[\hat{\rho}(\bs{q}),[\hat H,\hat{\rho}(\bs{-q})]\Big] &=& \int\limits_{\bs{G}} \tilde{\varepsilon}(\bs{G})\; \Big[ \hat{\varrho}(\bs{q};\bs{-q}), [ \hat{\varrho}({0};\bs{G}), \hat{\varrho}(\bs{-q};\bs{q}) ] \Big] \nonumber\\ &=& \int\limits_{\bs{G}} \tilde{\varepsilon}(\bs{G})\; 2\mathrm{i}\, \sin\,\Upsilon(0,\bs{-q};\bs{G},\bs{q})\; [ \hat{\varrho}(\bs{q};\bs{-q}), \hat{\varrho}(\bs{-q};\bs{G+q}) ]. \end{eqnarray} We make a second use of Eq.~(\ref{eq: magnetic algebra for f sum rule}) to evaluate the external commutator \begin{eqnarray} \Big[\hat{\rho}(\bs{q}),[\hat H,\hat{\rho}(\bs{-q})]\Big] &=& \int\limits_{\bs{G}} \tilde{\varepsilon}(\bs{G})\; 2\mathrm{i}\, \sin\,\Upsilon(0,\bs{-q};\bs{G},\bs{q})\; 2\mathrm{i}\, \sin\,\Upsilon(\bs{q},\bs{-q};\bs{-q},\bs{G+q})\; \hat{\varrho}(0;\bs{G}) \nonumber\\ &=& \int\limits_{\bs{G}} \tilde{\varepsilon}(\bs{G})\; \left[ 2\mathrm{i}\, \sin\, \left( \frac{\bs{q}\ast\bs{G}}{2} \right) \; \right]^2 \; \hat{\varrho}(0;\bs{G}). \end{eqnarray} The integral over $\bs{G}$ can now be performed, \begin{eqnarray} \Big[\hat{\rho}(\bs{q}),[\hat H,\hat{\rho}(\bs{-q})]\Big] &=& \int\limits_{\bs{G}} \tilde{\varepsilon}(\bs{G})\; \left( e^{+\mathrm{i}\,\bs{q}\ast\bs{G}}+e^{-\mathrm{i}\,\bs{q}\ast\bs{G}} -2 \right) \; \int\limits_{\bs{p}} \; e^{+\mathrm{i}\,\bs{G}\ast\bs{p}}\; \hat{c}^{\dag}(\bs{p})\, \hat{c}(\bs{p}) \nonumber\\ &=& \int\limits_{\bs{p}} \frac{1}{2m} \left( |\bs{p+q}|^2 + |\bs{p-q}|^2 - 2|\bs{p}|^2 \right) \; \hat{c}^{\dag}(\bs{p})\, \hat{c}(\bs{p}) \nonumber\\ &=& \frac{\bs{q}^{2}}{m} \int\limits_{\bs{p}}\, \hat{c}^{\dag}(\bs{p})\, \hat{c}(\bs{p}). \end{eqnarray} Equation~(\ref{eq: f sum rule b}) follows from the definition~(\ref{eq: f sum rule c}). 
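The double commutator~(\ref{eq: f sum rule b}) can also be checked numerically in the one-particle sector, where $\hat{N}$ acts as the identity. The following sketch uses a one-dimensional momentum grid purely for illustration (the derivation above works with even $d$; the grid, step, and mass below are arbitrary choices):

```python
import numpy as np

# One-particle-sector check of [rho(q), [H_0, rho(-q)]] = (q^2/m) * N_hat
# for eps(p) = p^2/(2m).  The shift operators truncate at the grid edges,
# so the identity is checked on interior momenta only.
m, dq = 1.0, 0.3
q = dq                                      # momentum transfer = one grid step
p = np.arange(-20, 21) * dq                 # one-particle momenta
n = p.size

H0 = np.diag(p**2 / (2.0 * m))              # kinetic term, diagonal in p
rho_plus = np.eye(n, k=-1)                  # rho(+q): |p> -> |p+q>
rho_minus = np.eye(n, k=+1)                 # rho(-q): |p> -> |p-q>

inner = H0 @ rho_minus - rho_minus @ H0     # [H_0, rho(-q)]
C = rho_plus @ inner - inner @ rho_plus     # [rho(q), [H_0, rho(-q)]]

print(np.allclose(np.diag(C)[1:-1], q**2 / m))   # True away from the edges
```

The interior diagonal entries equal $\varepsilon(p+q)+\varepsilon(p-q)-2\varepsilon(p)=q^{2}/m$, exactly as in the last line of the derivation above.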
\end{widetext} \subsection{ Projected Hamiltonians and the importance of induced one-body terms } \label{subsec: Normal ordering and the bare band width} We begin with the generic lattice Hamiltonian \begin{equation} \hat{H}:= \hat{H}^{\,}_{0} + \hat{H}^{\,}_{U}, \label{eq: generic lattice H} \end{equation} where the dimensionality $d$ of the lattice is assumed even. Our goal is to understand how normal ordering of the interaction $\hat{H}^{\,}_{U}$ changes the bandwidth of the kinetic Hamiltonian $\hat{H}^{\,}_{0}$. To this end, we need to choose the representation in which we define $\hat{H}^{\,}_{0}$ and $\hat{H}^{\,}_{U}$. We will see that the choice of the representation of $\hat{H}$ can change the effects on $\hat{H}^{\,}_{0}$ of normal ordering $\hat{H}^{\,}_{U}$. The kinetic Hamiltonian is defined by \begin{subequations} \begin{equation} \hat{H}^{\,}_{0}:= \frac{1}{2} \sum_{\bs{r},\bs{r}'\in\Lambda} \sum_{\alpha,\alpha'} \left( \hat{\psi}^{\dag}_{\bs{r},\alpha}\, t^{\alpha,\alpha'}_{\bs{r}-\bs{r}'}\, \hat{\psi}^{\,}_{\bs{r}',\alpha'} + \mathrm{H.c.} \right), \end{equation} where the hopping amplitudes \begin{equation} t^{\alpha,\alpha'}_{\bs{r}-\bs{r}'}= \left( t^{\alpha',\alpha}_{\bs{r}'-\bs{r}} \right)^{*} \end{equation} \end{subequations} decay exponentially fast with the separation between any pair of sites $\bs{r}$ and $\bs{r}'$ from the lattice $\Lambda$ and we have reinstated a finite number of internal degrees of freedom labeled by the \textit{orbital index} $\alpha$. If $N$ denotes the number of sites in $\Lambda$, we can perform the Fourier transformation to the band basis in two steps.
First, we do the Fourier transformation \begin{subequations} \begin{equation} \hat{\psi}^{\dag}_{\bs{r},\alpha}=: \sum_{\bs{p}\in\Omega^{\,}_{\mathrm{BZ}}} \frac{ e^{ -\mathrm{i}\,\bs{p}\cdot\bs{r} }\, } { \sqrt{N} } \hat{\psi}^{\dag}_{\bs{p},\alpha}, \quad \hat{\psi}^{\,}_{\bs{r},\alpha}=: \sum_{\bs{p}\in\Omega^{\,}_{\mathrm{BZ}}} \frac{ e^{ +\mathrm{i}\,\bs{p}\cdot\bs{r} }\, } { \sqrt{N} } \hat{\psi}^{\,}_{\bs{p},\alpha}, \end{equation} in terms of which \begin{equation} \begin{split} & \hat{H}^{\,}_{0}= \sum_{\bs{p}\in\Omega^{\,}_{\mathrm{BZ}}} \sum_{\alpha,\alpha'} \hat{\psi}^{\dag}_{\bs{p},\alpha}\, \mathcal{H}^{\alpha,\alpha'}_{\bs{p}}\, \hat{\psi}^{\,}_{\bs{p},\alpha'}, \\ & \mathcal{H}^{\alpha,\alpha'}_{\bs{p}}:= \sum_{\bs{r}\in\Lambda} e^{-\mathrm{i}\bs{p}\cdot\bs{r}}\, t^{\alpha,\alpha'}_{\bs{r}}. \end{split} \end{equation} \end{subequations} Second, for any given $\bs{p}$ from the Brillouin zone, we do the unitary transformation \begin{subequations} \begin{equation} \hat{\psi}^{\dag}_{\bs{p},\alpha}=: \sum_{a} \hat{c}^{\dag}_{\bs{p},a} u^{\alpha\,*}_{\bs{p},a}, \qquad \hat{\psi}^{\,}_{\bs{p},\alpha}=: \sum_{a} u^{\alpha}_{\bs{p},a}\, \hat{c}^{\,}_{\bs{p},a}, \end{equation} in terms of which \begin{equation} \hat{H}^{\,}_{0}= \sum_{\bs{p}\in\Omega^{\,}_{\mathrm{BZ}}} \sum_{a} \hat{c}^{\dag}_{\bs{p},a}\, \varepsilon^{\,}_{\bs{p},a}\, \hat{c}^{\,}_{\bs{p},a}. \end{equation} \end{subequations} The algebra~(\ref{eq: def fermion algebra lattice}) applies to the band operators labeled by the \textit{band index} $a$ if one multiplies the Kronecker symbol $\delta^{\,}_{\bs{p},\bs{p}'}$ in the Brillouin zone by the Kronecker symbol $\delta^{\,}_{a,a'}$ among the bands. The algebra~(\ref{eq: def fermion algebra lattice}) thus endows the orbital creation and annihilation operators with the canonical fermion algebra. 
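The two-step construction above can be sketched numerically. The following is an illustration only (a hypothetical two-orbital chain with $d=1$ chosen purely for brevity; the hoppings, sizes, and names below are not taken from the text):

```python
import numpy as np

# Two-step transformation to the band basis for a toy two-orbital chain.
rng = np.random.default_rng(0)
t0 = rng.normal(size=(2, 2))
t0 = t0 + t0.T                               # on-site block obeys t_0 = t_0^dag
t1 = rng.normal(size=(2, 2))                 # hopping by one unit cell to the right

N = 8                                        # number of unit cells
ps = 2.0 * np.pi * np.arange(N) / N          # discrete Brillouin zone

def bloch(p):
    # Step 1: Fourier transform of the hoppings, H(p) = sum_r e^{-i p r} t_r,
    # Hermitian because t_{-1} = t_1^dag.
    return t0 + np.exp(-1j * p) * t1 + np.exp(+1j * p) * t1.conj().T

# Step 2: at each p, a unitary u_p rotates orbitals alpha into bands a.
eps = np.empty((N, 2))
u = np.empty((N, 2, 2), dtype=complex)
for j, pj in enumerate(ps):
    eps[j], u[j] = np.linalg.eigh(bloch(pj))  # columns of u[j] are Bloch spinors

print(np.allclose(u[0].conj().T @ bloch(ps[0]) @ u[0], np.diag(eps[0])))  # True
```

The columns of `u[j]` play the role of the Bloch spinors $u^{\alpha}_{\bs{p},a}$ and `eps[j]` of the band energies $\varepsilon^{\,}_{\bs{p},a}$.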
The interacting Hamiltonian is defined by \begin{subequations} \label{eq: def lattice interacting H 0 and U} \begin{equation} \begin{split} \hat{H}^{\,}_{U}:=&\, \sum_{\bs{r},\bs{r}'\in\Lambda} \sum_{\alpha,\alpha'} \hat{\rho}^{\psi}_{\bs{r} ,\alpha }\, U^{\alpha,\alpha'}_{\bs{r}-\bs{r}'}\, \hat{\rho}^{\psi}_{\bs{r}',\alpha'} \\ =&\, \sum_{\bs{q}\in\Omega^{\,}_{\mathrm{BZ}}} \sum_{\alpha,\alpha'} \hat{\rho}^{\psi}_{+\bs{q} ,\alpha }\, \tilde{U}^{\alpha,\alpha'}_{\bs{q}}\, \hat{\rho}^{\psi}_{-\bs{q},\alpha'}, \end{split} \end{equation} with \begin{equation} \hat{\rho}^{\psi}_{\bs{r},\alpha}:= \hat\psi^{\dag}_{\bs{r},\alpha}\, \hat\psi^{\;}_{\bs{r},\alpha} \label{eq: density orbital basis} \end{equation} the local density at site $\bs{r}\in\Lambda$ and for the orbital $\alpha$. The corresponding Fourier transforms are \begin{equation} \begin{split} & \hat{\rho}^{\psi}_{\bs{q},\alpha}:= \sum_{\bs{p}\in\Omega^{\,}_{\mathrm{BZ}}} \hat\psi^{\dag}_{[\bs{q}+\bs{p}]^{\,}_{\mathrm{BZ}},\alpha}\, \hat\psi^{\;}_{\bs{p},\alpha}, \\ & \tilde{U}^{\alpha,\alpha'}_{\bs{q}}:= \frac{1}{N} \sum_{\bs{r}\in\Lambda} e^{-\mathrm{i}\,\bs{q}\cdot\bs{r}}\, U^{\alpha,\alpha'}_{\bs{r}}. \end{split} \end{equation} \end{subequations} For simplicity, we shall focus on orbital-independent (density-density) interactions, in which case \begin{equation} U^{\alpha,\alpha'}_{\bs{r}}= U^{\,}_{\bs{r}}, \quad \forall \alpha,\alpha^\prime \;. \end{equation} Normal ordering is the operation by which all creation operators are to be moved to the left of the annihilation operators. In the orbital basis, normal ordering results in \begin{subequations} \begin{equation} \hat{H}^{\,}_{U}= \hat{H}^{\prime\psi}_{U}\, +\, \hat{H}^{\prime\prime\psi}_{U}\;. 
\end{equation} The one-body Hamiltonian $\hat{H}^{\prime\psi}_{U}$, a consequence of the fermion algebra, is proportional to the conserved number operator, \begin{equation} \hat{H}^{\prime\psi}_{U}:= \sum_{\bs{r}\in\Lambda} \sum_{\alpha} U^{\;}_{\bs{0}}\, \hat{\psi}^{\dag}_{\bs{r},\alpha}\, \hat{\psi}^{\, }_{\bs{r},\alpha}\equiv U\,\hat{N}, \end{equation} where we defined $U\equiv U^{\,}_{\bs{r}=\bs{0}}$. The normal-ordered interaction $\hat{H}^{\prime\prime\psi}_{U}$ is \begin{equation} \hat{H}^{\prime\prime\psi}_{U}\equiv \sum_{\bs{r},\bs{r}'\in\Lambda} \sum_{\alpha,\alpha'} U^{\ }_{\bs{r}-\bs{r}'}\, \hat{\psi}^{\dag}_{\bs{r} ,\alpha}\, \hat{\psi}^{\dag}_{\bs{r}',\alpha'}\, \hat{\psi}^{\, }_{\bs{r}',\alpha'}\, \hat{\psi}^{\, }_{\bs{r} ,\alpha}. \end{equation} \end{subequations} \begin{widetext} The one-body term induced by normal ordering is, in the band basis, \begin{subequations} \begin{equation} \hat{H}^{\prime c}_{U}:= U \sum_{\bs{p}\in\Omega^{\,}_{\mathrm{BZ}}} \sum_{a} \hat{c}^{\dag}_{\bs{p},a}\, \hat{c}^{\, }_{\bs{p},a} \equiv U\, \hat{N}. \end{equation} The normal-ordered interaction is, in the band basis, \begin{equation} \begin{split} & \hat{H}^{\prime\prime c}_{U}:= \sum_{\bs{q} \in\Omega^{\,}_{\mathrm{BZ}}} \sum_{\bs{p} \in\Omega^{\,}_{\mathrm{BZ}}} \sum_{\bs{p}'\in\Omega^{\,}_{\mathrm{BZ}}} \sum_{a} \sum_{b} \sum_{a'} \sum_{b'} V^{a,b;a',b'}_{\bs{q},\bs{p},\bs{p}'}\, \hat{c}^{\dag}_{[+\bs{q}+\bs{p} ]^{\,}_{\mathrm{BZ}},a }\, \hat{c}^{\dag}_{[-\bs{q}+\bs{p}']^{\,}_{\mathrm{BZ}},a'}\, \hat{c}^{\, }_{\bs{p}',b'}\, \hat{c}^{\, }_{\bs{p} ,b }, \\ & V^{a,b;a',b'}_{\bs{q},\bs{p},\bs{p}'}:= \tilde{U}^{\,}_{\bs{q}} \sum_{\alpha,\alpha'} u^{\alpha\,*}_{[+\bs{q}+\bs{p} ]^{\,}_{\mathrm{BZ}},a }\, u^{\alpha }_{\bs{p} ,b }\, u^{\alpha'\,*}_{[-\bs{q}+\bs{p}']^{\,}_{\mathrm{BZ}},a'}\, u^{\alpha' }_{\bs{p}',b'}. 
\end{split} \end{equation} \end{subequations} \end{widetext} In any subspace of the Fock space with a fixed number of particles, normal ordering thus produces a rigid shift of all single-particle energy eigenvalues of $\hat{H}^{\,}_{0}$. For any band $a$, the width of the single-particle dispersion $\varepsilon^{\,}_{a}$ is not affected by the normal ordering, i.e., by adding to or subtracting from $\hat{H}^{\,}_{0}$ the operator $U\,\hat{N}$. We are going to show that this need not be true any longer if we \textit{first} project Hamiltonian~(\ref{eq: generic lattice H}) onto band $\bar{a}$ and \textit{then} express the resulting projected Hamiltonian in terms of the generators~(\ref{eq: def Phi G q p}). The projection of Hamiltonian~(\ref{eq: generic lattice H}) onto band $\bar{a}$ is \begin{subequations} \label{eq: generic lattice H projected to bar a} \begin{equation} \hat{H}^{\bar{a}}= \hat{H}^{\bar{a}}_{0} + \hat{H}^{\bar{a}}_{U}, \label{eq: generic lattice H projected to bar a A} \end{equation} where the projected kinetic Hamiltonian is \begin{equation} \hat{H}^{\bar{a}}_{0}= \sum_{\bs{p}\in\Omega^{\,}_{\mathrm{BZ}}} \hat{c}^{\dag}_{\bs{p},\bar{a}}\, \left( \varepsilon^{\,}_{\bs{p},\bar{a}} + U \right) \hat{c}^{\,}_{\bs{p},\bar{a}}, \label{eq: generic lattice H projected to bar a B} \end{equation} while the projected interacting Hamiltonian is \begin{widetext} \begin{equation} \begin{split} & \hat{H}^{\bar{a}}_{U}= \sum_{\bs{q} \in\Omega^{\,}_{\mathrm{BZ}}} \sum_{\bs{p} \in\Omega^{\,}_{\mathrm{BZ}}} \sum_{\bs{p}'\in\Omega^{\,}_{\mathrm{BZ}}} V^{\bar{a}}_{\bs{q},\bs{p},\bs{p}'}\, \hat{c}^{\dag}_{[+\bs{q}+\bs{p} ]^{\,}_{\mathrm{BZ}},\bar{a}}\, \hat{c}^{\dag}_{[-\bs{q}+\bs{p}']^{\,}_{\mathrm{BZ}},\bar{a}}\, \hat{c}^{\, }_{\bs{p}',\bar{a}}\, \hat{c}^{\, }_{\bs{p} ,\bar{a}}, \\ & V^{\bar{a}}_{\bs{q},\bs{p},\bs{p}'}= \tilde{U}^{\,}_{\bs{q}} \sum_{\alpha,\alpha'} u^{\alpha\,*}_{[+\bs{q}+\bs{p} ]^{\,}_{\mathrm{BZ}},\bar{a}}\, u^{\alpha }_{\bs{p} ,\bar{a}}\,
u^{\alpha'\,*}_{[-\bs{q}+\bs{p}']^{\,}_{\mathrm{BZ}},\bar{a}}\, u^{\alpha' }_{\bs{p}',\bar{a}}. \end{split} \label{eq: generic lattice H projected to bar a C} \end{equation} \end{widetext} \end{subequations} For the purpose of representing the projection of Hamiltonian~(\ref{eq: generic lattice H}) onto band $\bar{a}$ by the magnetic density operators~(\ref{eq: def Phi G q p}), it is necessary to undo the normal ordering in Eq.~(\ref{eq: generic lattice H projected to bar a C}). In doing so, a second one-body term is produced, \begin{subequations} \label{eq: generic lattice Hprime projected to bar a} \begin{equation} \hat{H}^{\bar{a}}= \hat{H}^{\prime\bar{a}}_{0} + \hat{H}^{\prime\bar{a}}_{U}, \label{eq: generic lattice H projected to bar a AA} \end{equation} where the projected kinetic Hamiltonian is \begin{widetext} \begin{equation} \begin{split} \hat{H}^{\prime\bar{a}}_{0}=&\, \sum_{\bs{p}\in\Omega^{\,}_{\mathrm{BZ}}} \left( \varepsilon^{\,}_{\bs{p},\bar{a}} + U \right)\, \hat{c}^{\dag}_{\bs{p},\bar{a}}\, \hat{c}^{\,}_{\bs{p},\bar{a}} - \sum_{\bs{p}\in\Omega^{\,}_{\mathrm{BZ}}} \left(\sum_{\bs{q}\in\Omega^{\,}_{\mathrm{BZ}}} V^{\bar{a}}_{\bs{q},[-\bs{q}+\bs{p}]^{\,}_{\mathrm{BZ}},\bs{p}} \right)\; \hat{c}^{\dag}_{\bs{p},\bar{a}}\, \hat{c}^{\,}_{\bs{p},\bar{a}}, \end{split} \label{eq: generic lattice H projected to bar a BB} \end{equation} while the projected interacting Hamiltonian is \begin{equation} \begin{split} & \hat{H}^{\prime\bar{a}}_{U}= \sum_{\bs{q} \in\Omega^{\,}_{\mathrm{BZ}}} \sum_{\bs{p} \in\Omega^{\,}_{\mathrm{BZ}}} \sum_{\bs{p}'\in\Omega^{\,}_{\mathrm{BZ}}} V^{\bar{a}}_{\bs{q},\bs{p},\bs{p}'}\, \hat{c}^{\dag}_{[+\bs{q}+\bs{p} ]^{\,}_{\mathrm{BZ}},\bar{a}}\, \hat{c}^{\, }_{\bs{p} ,\bar{a}}\, \hat{c}^{\dag}_{[-\bs{q}+\bs{p}']^{\,}_{\mathrm{BZ}},\bar{a}}\, \hat{c}^{\, }_{\bs{p}',\bar{a}}.
\end{split} \label{eq: generic lattice H projected to bar a CC} \end{equation} \end{widetext} \end{subequations} Observe that had we first represented Eq.~(\ref{eq: def lattice interacting H 0 and U}) in the band basis, followed by the projection consisting of restricting all the band indices to $\bar{a}$ prior to normal-ordering, then we would have obtained Eq.~(\ref{eq: generic lattice Hprime projected to bar a}) upon normal ordering without the second term on the right-hand side of Eq.~(\ref{eq: generic lattice H projected to bar a BB}). The correct implementation of projection is to normal order first and then to project, leading to Eq.~(\ref{eq: generic lattice H projected to bar a BB}). Indeed, the order in which normal ordering is followed by restricting all band indices to the projected ones corresponds to sandwiching the Hamiltonian by the projection operator onto a subset of bands. The reverse order, in which the density operators are projected onto a subset of bands followed by normal ordering, corresponds to sandwiching first all density operators by the projection operator onto a subset of bands and then assembling a Hamiltonian out of these projected density operators. As the projection operators do not commute with the density operators, the order in which the operations of normal ordering and projection are performed matters.
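The elementary fermion-algebra identity underlying this bookkeeping — undoing the normal ordering of a density-density interaction generates a one-body term proportional to the number operator — can be verified directly in a small Fock space. The following sketch uses a hypothetical three-site spinless chain with an arbitrary kernel, not the lattice models of this section:

```python
import numpy as np
from functools import reduce

# Fock-space check that
#   sum_{r,r'} U_{r-r'} n_r n_{r'} = :interaction: + U_0 N_hat .
L = 3
I2, Z = np.eye(2), np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])      # on-site annihilation operator

def c(j):
    # Jordan-Wigner string preserves the canonical fermion algebra
    return reduce(np.kron, [Z] * j + [a] + [I2] * (L - j - 1))

U = lambda d: 1.0 / (1.0 + abs(d))          # arbitrary kernel; U_0 = U(0) = 1
n = [c(j).T @ c(j) for j in range(L)]       # densities (all matrices are real)

H_U = sum(U(r - s) * n[r] @ n[s] for r in range(L) for s in range(L))
H_normal = sum(U(r - s) * c(r).T @ c(s).T @ c(s) @ c(r)
               for r in range(L) for s in range(L))
N_hat = sum(n)

print(np.allclose(H_U, H_normal + U(0) * N_hat))   # True
```

The only terms affected by normal ordering are the diagonal ones, $n^{\,}_{\bs{r}}n^{\,}_{\bs{r}}=n^{\,}_{\bs{r}}$, which is why the induced one-body term carries the on-site value $U^{\,}_{\bs{0}}$.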
\begin{widetext} We can now express the Hamiltonian in terms of the magnetic density operators (the details are provided in Appendix~\ref{appendix: Equation MAIN RESULT}) \begin{subequations} \label{eq: MAIN RESULT} \begin{equation} \begin{split} & \hat{H}^{\prime\bar{a}}_{0}= \sum_{\bs{G}\,\in\Lambda^{\star}} \left( \tilde{\varepsilon}^{\,}_{\bs{G}} + U\,\delta^{\;}_{\bs{G},0} - \sum_{\bs{q}}\;\tilde{U}^{\,}_{\bs{q}}\, \tilde{h}^{\bs{G}}_{-\bs{q}} \right) \hat{\varrho}^{\bs{G}}_{\bs{0}}, \\ & \hat{H}^{\prime\bar{a}}_{U}= \sum_{\bs{q}\in\Omega^{\,}_{\mathrm{BZ}}} \tilde{U}^{\,}_{\bs{q}}\; \sum_{\bs{G},\bs{G}'\,\in\Lambda^{\star}} \tilde{f}^{\bs{G} }_{\bs{q}}\, \tilde{f}^{-\bs{G}'}_{-\bs{q}}\, e^{ -\mathrm{i}\, ( \Theta^{\bs{G}}_{+\bs{q}} + \Theta^{-\bs{G}'}_{-\bs{q}} ) } \; \hat{\varrho}^{\bs{G} }_{\bs{q}}\, \hat{\varrho}^{-\bs{G}'}_{-\bs{q}}, \end{split} \end{equation} where \begin{eqnarray} \tilde{\varepsilon}^{\,}_{\bs{G}} &=& \frac{1}{N}\; \sum_{\bs{p}} \varepsilon^{\,}_{\bs{p}}\; e^{ -\mathrm{i}\, \left( \bs{G}\,\bs{\ast}\,\bs{p} - \bs{p}\,\bs{\ast}\,\bs{G} \right) /(2\pi) }, \\ \tilde{h}^{\bs{G}}_{\bs{q}} &=& \frac{1}{N}\; \sum_{\bs{p}} \left| \sum_\alpha u^{\alpha\,*}_{[\bs{q}+\bs{p}]^{\,}_{\mathrm{BZ}},\bar a}\, u^{\alpha }_{\bs{p},\bar a} \right|^2 \; e^{ -\mathrm{i}\, \left( \bs{G}\,\bs{\ast}\,\bs{p} - \bs{p}\,\bs{\ast}\,\bs{G} \right) /(2\pi) }, \\ \tilde{f}^{\bs{G}}_{\bs{q}} &=& \frac{1}{N}\; \sum_{\bs{p}} \left( \sum_\alpha u^{\alpha\,*}_{[\bs{q}+\bs{p}]^{\,}_{\mathrm{BZ}},\bar a}\, u^{\alpha }_{\bs{p},\bar a} \right) \; e^{-\mathrm{i}\,\Theta^{\,}_{\bs{q},\bs{p}}}\, e^{ -\mathrm{i}\, \left( \bs{G}\,\bs{\ast}\,\bs{p} - \bs{p}\,\bs{\ast}\,\bs{G} \right) /(2\pi) }, \end{eqnarray} \end{subequations} with $\Theta^{\,}_{\bs{q},\bs{p}}$ and $\Theta^{\bs{G}}_{\bs{q}}$ defined in Eqs.~(\ref{eq: evaluation Phi q and p lattice a}) and~(\ref{eq: evaluation Phi q and p lattice b}), respectively. 
\end{widetext} Equation~(\ref{eq: MAIN RESULT}) is the main result of Sec.~\ref{subsec: Normal ordering and the bare band width}. Applied to a Chern insulator to which density-density interactions have been added, Eq.~(\ref{eq: MAIN RESULT}) suggests that there will always be contributions to the Hamiltonian that are linear in $\hat{\varrho}^{\bs{G}}_{\bs{q}=\bs{0}}$ even if the bare band is flat to begin with, i.e., even if $\varepsilon^{\,}_{\bs{p},\bar{a}}=0$. Because of the topological attributes of the Bloch spinors as they wrap around the Brillouin zone, we expect a nonvanishing $\tilde{h}^{\bs{G}}_{\bs{q}}$. (An extreme case of a topologically trivial band insulator has Bloch spinors that are constant across the Brillouin zone, in which case only $\tilde{h}^{\bs{G}=\bs{0}}_{\bs{q}}\ne 0$ and the additional one-body contribution is just proportional to the total particle number. This would also be the case in the context of the quantum Hall effect.) This effect on the bare dispersion is controlled by the bare interaction $\tilde{U}^{\,}_{\bs{q}}$. Hence, it could be as large as the effects of the density-density interaction. It is far from evident that a FCI is selected by the competition between the one-body and two-body terms in Eq.~(\ref{eq: MAIN RESULT}) since they are both controlled by one characteristic energy scale in the limit of a flat bare bandwidth. On the other hand, if a ground state supporting a FCI is selected for some range of parameters, then the effective quantum field theory describing the low-energy and long-distance properties of this phase should belong to one of the universality classes associated with the FQHE. \section*{Acknowledgments} This work was supported in part by DOE Grant No.\ DEFG02-06ER46316.
\section{Introduction} Ferrimagnetism is a fundamental phenomenon in the field of magnetism. The simplest case of the conventional ferrimagnetism is the ground state of the mixed spin chain. For example, there is an ($s$, $S$)=(1/2, 1) mixed spin chain with a nearest-neighbor antiferromagnetic (AF) isotropic interaction. In this system, the so-called Lieb-Mattis (LM)-type ferrimagnetism\cite{Lieb, Marshall, Takano, Okamoto, Tonegawa, Sakai} is realized in the ground state because two different spins are arranged alternately in a line owing to the AF interaction. Since this type of ferrimagnetism has been studied extensively, the magnetic properties and the occurrence mechanism of the LM ferrimagnetism are well known. In particular, the ferrimagnetism in the quantum Heisenberg spin model on the bipartite lattice without frustration is well understood within the Marshall-Lieb-Mattis (MLM) theorem\cite{Lieb, Marshall}. In the LM ferrimagnetic ground state, the spontaneous magnetization occurs and the magnitude is fixed to a simple fraction of the saturated magnetization. In recent years, a new type of ferrimagnetism, which is clearly different from LM ferrimagnetism, has been found in the ground state of several one-dimensional frustrated Heisenberg spin systems\cite{PF1, PF2, PF3, PF4, PF5, Shimokawa, Shimokawa2}. The spontaneous magnetization in this new type of ferrimagnetism changes gradually with respect to the strength of frustration. The incommensurate modulation with long-distance periodicity in local magnetizations is also a characteristic behavior of the new type of ferrimagnetism. Hereafter, we call the new type of ferrimagnetism the non-Lieb-Mattis (NLM) type. The mechanism of the occurrence of the NLM ferrimagnetism has not yet been clarified. 
On the other hand, some candidates for the NLM ferrimagnetism among two-dimensional (2D) systems, for example, the mixed-spin $J_{1}$-$J_{2}$ Heisenberg model on the square lattice\cite{2D-candidate1} and the $S=1/2$ Heisenberg model on the Union Jack lattice\cite{2D-candidate2}, have been reported. These 2D frustrated systems have an intermediate ground state, namely, the ``canted state'', in which the spontaneous magnetization changes as the internal interactions of the system are varied. It has not, however, been determined whether the incommensurate modulation with long-distance periodicity exists in the local magnetization of the canted state owing to the difficulty of treating these 2D frustrated systems numerically and theoretically. Therefore, the relationships between the canted states of these 2D frustrated systems and the NLM ferrimagnetic state are still unclear. Under these circumstances, recently, another candidate for the NLM ferrimagnetism among the 2D systems was reported in ref.~\ref{kagome-2D}, in which the $S=1/2$ Heisenberg model on the spatially anisotropic kagome lattice depicted in Fig. \ref{fig1}(a) was studied. A region of the intermediate-magnetization states is observed between the LM ferrimagnetism that is rigorously proved by the MLM theorem\cite{Lieb, Marshall} and the nonmagnetic state of the spatially isotropic kagome-lattice antiferromagnet in the absence of a magnetic field\cite{Lecheminant, Waldtmann,Hida_kagome,Cabra,Honecker0,Honecker1,Cepas, Spin_gap_Sindzingre,HN_Sakai2010,Sakai_HN_PRBR,HN_Sakai2011}. The local magnetization in the intermediate state of the kagome lattice was investigated by the exact diagonalization method, and it was reported that the local magnetization greatly depends on the position of the sites, although it is difficult to determine clearly whether the incommensurate modulation with long-distance periodicity is present.
This result leads to the high expectation that the intermediate state of the spatially anisotropic kagome lattice is the NLM ferrimagnetic state. Additional research is desirable to conclude that the intermediate state of this 2D system is the NLM ferrimagnetism. \begin{figure*}[t] \begin{center} \includegraphics[width=15cm]{fig1.eps} \caption{ Structures of the lattices: the spatially anisotropic kagome lattice (a) and the quasi-one-dimensional kagome strip lattices (b) and (c) with different widths. An $S=1/2$ spin is located at each site denoted by a black circle. Antiferromagnetic bonds $J_{1}$ (bold straight line) and $J_{2}$ (dashed line), and ferromagnetic bond $J_{\rm F}$ (dotted line). The sublattices in a unit cell of lattice (b) are represented by A, ${\rm A}^{\prime}$, B, ${\rm B}^{\prime}$, C, ${\rm C}^{\prime}$, and D. The sublattices in a unit cell of lattice (c) are represented by A, ${\rm A}^{\prime}$, B, and C. } \label{fig1} \end{center} \end{figure*} In this paper, we study the ground-state properties of the $S=1/2$ Heisenberg models on the quasi-one-dimensional (Q1D) kagome strip lattices depicted in Figs.~\ref{fig1}(b) and \ref{fig1}(c) instead of the 2D lattice depicted in Fig.~\ref{fig1}(a). Note that the inner parts of the lattices in Figs.~\ref{fig1}(b) and \ref{fig1}(c) are common to a part of the 2D lattice in Fig.~\ref{fig1}(a). We also note that the lattice shapes of strips in the present study are different from some kagome strips (chains) studied in refs. \ref{kagome-strip1}-\ref{kagome-strip3}, where the nontrivial properties of kagome antiferromagnets were reported. According to the study in ref. \ref{kgm-stripe}, it was already known that the NLM ferrimagnetism is realized in the ground state of the kagome strip lattice in Fig.~\ref{fig1}(c). In the present study, we show that both the lattice in Fig.~\ref{fig1}(c) and the lattice in Fig.~\ref{fig1}(b) reveal the NLM ferrimagnetism in the ground state. 
Note also that the lattice shape at the edge under the open boundary condition depicted in Fig.~\ref{fig1}(c) is different from that in ref. \ref{kgm-stripe} [see Fig.~\ref{fig1}(b) in ref.~\ref{kgm-stripe}]. Thus, one can recognize that the results for the strip lattice with small width do not depend on the boundary conditions. We also clearly demonstrate the existence of the incommensurate modulation with long-distance periodicity in the local magnetizations of both models in Figs.~\ref{fig1}(b) and \ref{fig1}(c). Our numerical calculations suggest that the intermediate state of the 2D lattice in Fig. \ref{fig1}(a) is the NLM ferrimagnetism. This paper is organized as follows. In \S 2, we first present our numerical calculation methods. In \S 3, we show the ground-state properties of the lattice depicted in Fig. \ref{fig1}(c) in finite-size clusters. In \S 4, we show the ground-state properties of the lattice depicted in Fig. \ref{fig1}(b). Sections 5 and 6 are devoted to discussion and summary, respectively. \section{Numerical Methods} We employ two reliable numerical methods: the exact diagonalization (ED) method and the density matrix renormalization group (DMRG) method\cite{DMRG1, DMRG2}. The ED method is used to obtain precise physical quantities of finite-size clusters. This method imposes no limitation on the cluster shape. It is applicable even to systems with frustration, in contrast to the quantum Monte Carlo (QMC) method, which suffers from the so-called negative-sign problem for frustrated systems. The disadvantage of the ED method is that the available system sizes are very small. Thus, we should pay careful attention to finite-size effects in quantities obtained by this method. On the other hand, the DMRG method is very powerful when a system is (quasi-)one-dimensional under the open boundary condition. The method can treat much larger systems than the ED method.
Note that the DMRG method is applicable irrespective of whether or not the system includes frustration. In the present research, we use the ``finite-system'' DMRG method. \section{Kagome Strip Lattice with Small Width} In this section, we study the magnetic properties in the ground state of the $S=1/2$ Heisenberg model on the kagome strip lattice depicted in Fig. \ref{fig1}(c). The Hamiltonian of this model is given by \begin{widetext} \begin{eqnarray} \label{Hamiltonian1} \mathcal{H} &=& J_{1} \sum_{i} [{\bf S}_{i,{\rm B}}\cdot {\bf S}_{i,{\rm C}} + {\bf S}_{i,{\rm C}}\cdot {\bf S}_{i,{\rm A}^{\prime}} + {\bf S}_{i,{\rm C}}\cdot {\bf S}_{i+1,{\rm A}} + {\bf S}_{i,{\rm C}}\cdot {\bf S}_{i+1,{\rm B}}] \nonumber \\ &+&J_{2} \sum_{i} [{\bf S}_{i,{\rm A}}\cdot {\bf S}_{i,{\rm B}} +{\bf S}_{i,{\rm B}}\cdot {\bf S}_{i,{\rm A}^{\prime}}] + J_{\rm F} \sum_{i} [{\bf S}_{i,{\rm A}}\cdot {\bf S}_{i+1,{\rm A}} +{\bf S}_{i,{\rm A}^{\prime}}\cdot {\bf S}_{i+1, {\rm A}^{\prime}}], \end{eqnarray} \end{widetext} where ${\bf S}_{i, \xi}$ is the $S=1/2$ spin operator at the $\xi$-sublattice site in the $i$-th unit cell. The positions of the four sublattices in a unit cell are denoted by A, ${\rm A}^{\prime}$, B, and C in Fig. \ref{fig1}(c). Energies are measured in units of $J_{1}$; we fix $J_{1}=1$ hereafter. In what follows, we examine the region of $0<J_{2}/J_{1}< \infty$ in the case of $J_{\rm F}=-1$. Note that the total number of spin sites is denoted by $N$; thus, the number of unit cells is $N/4$. \begin{figure}[t] \begin{center} \includegraphics[width=6cm]{fig2.EPS} \caption{(Color)(a) Dependence of the lowest energy on $S_{\rm tot}^{z}$. The results of $J_{2}/J_{1}=0.5$ and $0.58$ for the system size of $N=96$ are presented. Arrows indicate the values of the spontaneous magnetization $M$ for each $J_{2}/J_{1}$. The position of an arrow is given by the highest $S_{\rm tot}^{z}$ value among those that give the common lowest energy.
(b) $J_{2}/J_{1}$ dependence of $M/M_{\rm s}$ obtained from ED calculations for $N=24$ (black square) under the periodic boundary condition and DMRG calculation for $N=48$ (red triangle) and $96$ (blue inverted triangle) under the open boundary condition. Note that for $N=96$ in (a) and (b), we use $MS=900$ and $SW=15$ and, that for $N=48$ in (b), we use $MS=400$ and $SW=15$. } \label{fig2} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=6cm]{fig3.eps} \caption{(Color) Local magnetization $\langle S_{i,\xi}^{z} \rangle$ at each sublattice $\xi$; A (black square), ${\rm A}^{\prime}$ (red triangle), B (blue square), and C (purple inverted triangle). Panels (a) and (b) show results for $J_{2}/J_{1}=0.3$ and $0.57$, respectively. These results are obtained from our DMRG calculations for $N=96$ ($i=1, 2, \cdots, 24$). } \label{fig3} \end{center} \end{figure} We examine the $J_{2}/J_{1}$ dependence of the ratio $M/M_{\rm s}$, where $M$ and $M_{\rm s}$ are the spontaneous and saturation magnetizations, respectively. Let us explain the method used to determine $M$ as a function of $N$ and $J_{2}/J_{1}$. First, we calculate the lowest energy $E(J_{2}/J_{1}$, $S_{\rm tot}^{z}$, $N)$, where $S_{\rm tot}^{z}$ value is the $z$-component of the total spin. For example, the energies for various $S_{\rm tot}^{z}$ in the two cases of $J_{2}/J_{1}$ are presented in Fig. \ref{fig2}(a); the results are obtained by our DMRG calculations of the system of $N=96$ with the maximum number of retained states ($MS$) of 600 and the number of sweeps ($SW$) of 15. The spontaneous magnetization $M(J_{2}/J_{1}$, $N$) is determined as the highest $S_{\rm tot}^{z}$ among those at the lowest common energy [see arrows in Fig. \ref{fig2}(a)]. Our results of the $J_{2}/J_{1}$ dependence of $M/M_{\rm s}$ are depicted in Fig. \ref{fig2}(b). 
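The determination of $M$ described above reduces to a simple rule: among all $S_{\rm tot}^{z}$ sectors whose lowest energies coincide with the global minimum, take the largest $S_{\rm tot}^{z}$. A minimal sketch of this bookkeeping step is given below; the sector energies here are hypothetical toy values, whereas in the actual calculation they come from the ED or DMRG runs.

```python
def spontaneous_magnetization(energies, tol=1e-8):
    """Given {Stot_z: lowest energy in that sector}, return the spontaneous
    magnetization M as the highest Stot_z among the sectors sharing the
    common lowest energy (within a numerical tolerance)."""
    e_min = min(energies.values())
    return max(sz for sz, e in energies.items() if e <= e_min + tol)

# Toy example: sectors Stot_z = 0..12 are degenerate ground states,
# while the energy rises for larger Stot_z, so M = 12 here.
toy = {sz: (0.0 if sz <= 12 else 0.1 * (sz - 12) ** 2) for sz in range(25)}
print(spontaneous_magnetization(toy))  # -> 12
```

The tolerance guards against the tiny energy splittings that a truncated DMRG calculation introduces between sectors that are degenerate in principle.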
We find the existence of the intermediate magnetic phase of $0<M/M_{\rm s}<1/2$ between the LM ferrimagnetic phase of $M/M_{\rm s}=1/2$ and the nonmagnetic phase. In order to determine the spin state of this intermediate phase, we calculate the local magnetization $\langle S_{i, \xi}^{z} \rangle$, where $\langle A \rangle$ denotes the expectation value of the physical quantity $A$ and $S_{i, \xi}^{z}$ is the $z$-component of ${\bf S}_{i, \xi}$. Figure \ref{fig3} depicts our results for a system size $N=96$ on the lattice depicted in Fig. \ref{fig1}(c) under the open boundary condition; Figs. \ref{fig3}(a) and \ref{fig3}(b) correspond to the case of the LM ferrimagnetic phase and that of the intermediate phase, respectively. The results clearly indicate the existence of the incommensurate modulation with long-distance periodicity in the intermediate phase. We also confirm that the periodicities of the local magnetizations in the NLM ferrimagnetism in the present model depend on the $J_{2}/J_{1}$ value but not on the length of the system, as reported in the case of ref. \ref{PF2}. It is worth emphasizing the point that the intermediate phase commonly exists irrespective of the shape at the edge of the strip lattice when one compares the result of the present lattice depicted in Fig. \ref{fig1}(c) and that in ref. \ref{kgm-stripe}. Therefore, we can conclude that the intermediate phase exists and that it is NLM ferrimagnetic. Although there exists an intermediate NLM ferrimagnetic phase in the case of the kagome strip lattice depicted in Fig. \ref{fig1}(c), we should note that there is a large discrepancy in dimensionality between the kagome strip lattice depicted in Fig. \ref{fig1}(c) and the 2D kagome lattice depicted in Fig. \ref{fig1}(a). In the next section, we treat the kagome strip lattice depicted in Fig. \ref{fig1}(b) whose width is larger than that of the kagome strip lattice depicted in Fig. \ref{fig1}(c). 
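The incommensurate modulation discussed above can, in principle, be quantified by Fourier-analyzing the local-magnetization profile $\langle S_{i,\xi}^{z}\rangle$ versus $i$. The following sketch is our own illustration on synthetic data (not the procedure used in the paper): it picks out the dominant spatial period of a one-dimensional profile.

```python
import math

def dominant_period(profile):
    """Estimate the dominant spatial period of a local-magnetization
    profile via a plain discrete Fourier transform of the demeaned data."""
    n = len(profile)
    mean = sum(profile) / n
    best_k, best_amp = 1, 0.0
    for k in range(1, n // 2 + 1):
        re = sum((profile[i] - mean) * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum((profile[i] - mean) * math.sin(2 * math.pi * k * i / n) for i in range(n))
        amp = math.hypot(re, im)
        if amp > best_amp:
            best_k, best_amp = k, amp
    return n / best_k

# Synthetic profile: a uniform moment plus a modulation of period 8 sites.
prof = [0.25 + 0.1 * math.cos(2 * math.pi * i / 8) for i in range(24)]
print(dominant_period(prof))  # -> 8.0
```

For a genuinely incommensurate modulation the dominant wave number would not sit at a simple rational fraction of the system length, and resolving it reliably requires the long open chains accessible to DMRG.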
\section{Kagome Strip Lattice with Large Width} \subsection{Hamiltonian} In this section, we study the ground-state properties of the $S=1/2$ Heisenberg model on the kagome strip lattice depicted in Fig. \ref{fig1}(b). The Hamiltonian of this model is given by \begin{widetext} \begin{eqnarray} \label{Hamiltonian2} \mathcal{H} &=& J_{1} \sum_{i} [ {\bf S}_{i,{\rm B}}\cdot {\bf S}_{i,{\rm C}} + {\bf S}_{i,{\rm C}}\cdot {\bf S}_{i,{\rm D}} + {\bf S}_{i,{\rm C}}\cdot {\bf S}_{i+1,{\rm A}} + {\bf S}_{i,{\rm C}}\cdot {\bf S}_{i+1,{\rm B}} \nonumber \\ &+& {\bf S}_{i,{\rm C}^{\prime}}\cdot {\bf S}_{i,{\rm B}^{\prime}} + {\bf S}_{i,{\rm C}^{\prime}}\cdot {\bf S}_{i,{\rm A}^{\prime}} + {\bf S}_{i,{\rm C}^{\prime}}\cdot {\bf S}_{i+1,{\rm D}} +{\bf S}_{i,{\rm C}^{\prime}}\cdot {\bf S}_{i+1,{\rm B}^{\prime}} ] \nonumber \\ &+&J_{2} \sum_{i} [{\bf S}_{i,{\rm A}}\cdot {\bf S}_{i,{\rm B}} +{\bf S}_{i,{\rm B}}\cdot {\bf S}_{i,{\rm D}} +{\bf S}_{i,{\rm D}}\cdot {\bf S}_{i,{\rm B}^{\prime}} +{\bf S}_{i,{\rm B}^{\prime}}\cdot {\bf S}_{i,{\rm A}^{\prime}}] \nonumber \\ &+& J_{\rm F} \sum_{i} [{\bf S}_{i,{\rm A}}\cdot {\bf S}_{i+1,{\rm A}} +{\bf S}_{i,{\rm A}^{\prime}}\cdot {\bf S}_{i+1, {\rm A}^{\prime}}] \nonumber \\ &-& h \sum_{i} [S^{z}_{i,{\rm A}}+S^{z}_{i,{\rm A}^{\prime}}+S^{z}_{i,{\rm B}}+S^{z}_{i,{\rm B}^{\prime}}+S^{z}_{i,{\rm C}}+S^{z}_{i,{\rm C}^{\prime}}+S^{z}_{i,{\rm D}}]. \end{eqnarray} \end{widetext} Here, the positions of seven sublattices are denoted by A, ${\rm A}^{\prime}$, B, ${\rm B}^{\prime}$, C, ${\rm C}^{\prime}$, and D in Fig. \ref{fig1}(b). Note that the last term of eq. \ref{Hamiltonian2} is the Zeeman term. The number of spin sites is denoted by $N$. The number of unit cells is $N/7$; we consider $N/14$ as an integer. We mainly use the DMRG method for investigating the magnetic properties in the ground state of this Q1D system under the open boundary condition. 
We also investigate the properties under the periodic boundary condition by the ED method, although the only size treated by this method is $N=28$. Hereafter, we set $J_{1}=1$ as the energy scale and investigate the region of $0<J_{2}/J_{1}<\infty$ in the case of $J_{\rm F}=-1$. \subsection{Phase diagram} First, let us examine the $J_{2}/J_{1}$ dependence of the ratio $M/M_{\rm s}$ in the absence of the external magnetic field $h$. The procedure for determining $M$ is the same as that mentioned in \S 3 [see also Fig. \ref{fig4}(a)]. We present our results of the spontaneous magnetization in Fig. \ref{fig4}(b). We successfully observe the intermediate-magnetization region irrespective of the boundary conditions. A careful observation of Fig. \ref{fig4}(b) reveals at least eight regions in the finite-size system. For convenience, hereafter, we call these regions ${\rm R}_{1}$, ${\rm R}_{2}$, $\cdots$, ${\rm R}_{7}$, and ${\rm R}_{8}$. In the case of $N=112$ under the open boundary condition, for example, Fig. \ref{fig5}(a) illustrates the regions ${\rm R}_{1}$ to ${\rm R}_{8}$: ${\rm R}_{1}$ is the region of $M/M_{\rm s}=3/7$, ${\rm R}_{2}$ is the region of $11/28\leq M/M_{\rm s} <3/7$, ${\rm R}_{3}$ is the region of $1/8 < M/M_{\rm s} <11/28$, ${\rm R}_{4}$ is the region of $M/M_{\rm s} =1/8$, ${\rm R}_{5}$ is the region of $0< M/M_{\rm s} <1/8$, ${\rm R}_{6}$ is the region of $M/M_{\rm s} =0$, ${\rm R}_{7}$ is the region of $0 < M/M_{\rm s} <1/7$, and ${\rm R}_{8}$ is the region of $M/M_{\rm s} =1/7$. Here, the dashed lines in Fig. \ref{fig5}(a) indicate the boundaries of these regions. \begin{figure}[h] \begin{center} \includegraphics[width=5.6cm]{fig4.eps} \caption{(Color)(a) Lowest energy in each subspace divided by $S_{\rm tot}^{z}$. The results of the DMRG calculations obtained when the system size is $N=56$ for $J_{2}/J_{1}=0.20$ and 0.57 are presented.
Arrows indicate the spontaneous magnetization $M$ for a given $J_{2}/J_{1}$. (b) $J_{2}/J_{1}$ dependence of $M/M_{\rm s}$ obtained from the ED calculations for $N=$28 (blue triangle) under the periodic boundary condition and the DMRG calculations for $N=$56 (red square) and 112 (black pentagon) under the open boundary condition. Note that for $N=$ 56 in (a) and (b), we use $MS=$ 600 and $SW=$15, and that for $N=112$ in (b), we use $MS\geq$ 900 and $SW=$15. Here, $MS$ denotes the maximum number of retained states and $SW$ the number of sweeps used in DMRG calculations. } \label{fig4} \end{center} \end{figure} It should be noted that the values of $M/M_{\rm s}$ in the ${\rm R}_{4}$ region and that at the lower edge of the ${\rm R}_{2}$ region change with increasing $N$, as shown in Fig. \ref{fig4}(b); the former value is $M/M_{\rm s}=(N-14)/7N$ and the latter value is $M/M_{\rm s}=(3N-28)/7N$. These changes due to the increase in system size come from the finite-size effect. We find that the value of $M/M_{\rm s}$ in the ${\rm R}_{4}$ region under the open boundary condition increases and approaches the value of $M/M_{\rm s}=1/7$ when $N$ increases. Furthermore, the magnetization value in the ${\rm R}_{4}$ region is $M/M_{\rm s}=1/7$ in the case of $N=28$ under the periodic boundary condition. Therefore, it is expected that the value of $M/M_{\rm s}$ in the ${\rm R}_{4}$ region is $1/7$ in the thermodynamic limit. We also confirm that the value of $M/M_{\rm s}$ at the lower edge of the ${\rm R}_{2}$ region gradually increases and approaches the value of $M/M_{\rm s}=3/7$ with increasing $N$. In addition, we cannot confirm the ${\rm R}_{2}$ region in the case of $N=28$ under the periodic boundary condition. These circumstances indicate a possibility that the ${\rm R}_{2}$ region merges with the ${\rm R}_{1}$ region of $M/M_{\rm s}=3/7$ in the thermodynamic limit. 
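The quoted finite-size expressions can be checked by direct arithmetic: $M/M_{\rm s}=(N-14)/7N$ gives exactly $1/8$ at $N=112$, matching the definition of the ${\rm R}_{4}$ region, and $(3N-28)/7N$ gives $11/28$ at $N=112$, matching the lower edge of ${\rm R}_{2}$; both increase monotonically toward $1/7$ and $3/7$, respectively, as $N\to\infty$. A short sketch of this check:

```python
from fractions import Fraction

def r4_ratio(N):
    """Finite-size M/Ms in the R4 region under the open boundary condition."""
    return Fraction(N - 14, 7 * N)

def r2_edge_ratio(N):
    """Finite-size M/Ms at the lower edge of the R2 region."""
    return Fraction(3 * N - 28, 7 * N)

print(r4_ratio(112), r2_edge_ratio(112))  # -> 1/8 11/28
# Both ratios grow monotonically with N toward 1/7 and 3/7, respectively.
for N in (42, 56, 84, 112):
    assert r4_ratio(N) < r4_ratio(2 * N) < Fraction(1, 7)
    assert r2_edge_ratio(N) < r2_edge_ratio(2 * N) < Fraction(3, 7)
```

Writing the ratios as $(N-14)/7N = 1/7 - 2/N$ and $(3N-28)/7N = 3/7 - 4/N$ makes the $1/N$ finite-size corrections explicit.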
Next, to determine whether each region survives in the thermodynamic limit, we study the size dependences of the boundaries between the regions under the open boundary condition. Figure \ref{fig5}(b) shows the results of $N=$42, 56, 84, and 112 from DMRG calculations. Note here that we define ${\rm R}_{2}$ as the region of $(3N-28)/7N<M/M_{\rm s}<3/7$ and ${\rm R}_{4}$ as the region of $M/M_{\rm s}=(N-14)/7N$ in the finite-size system. One can find immediately from Fig. \ref{fig5}(b) that all regions, except the ${\rm R}_{7}$ region, survive in the limit $N \rightarrow \infty$. To determine whether the ${\rm R}_{7}$ region survives in the thermodynamic limit, we investigate the size dependence of the width of the ${\rm R}_{7}$ region in Fig. \ref{fig6}. This plot shows that the width of the ${\rm R}_{7}$ region decreases with increasing $N$. It is difficult to determine whether the ${\rm R}_{7}$ region survives in the thermodynamic limit. A convex downward behavior is observed for large sizes, suggesting that the region might survive; however, this behavior may merely be a serious finite-size effect. The presence or absence of the ${\rm R}_{7}$ region should be clarified in future studies. \begin{figure}[h] \begin{center} \includegraphics[width=5.6cm]{fig5.eps} \caption{(a) Definitions of the ${\rm R}_{1}$-${\rm R}_{8}$ regions in the case of $N=112$ under the open boundary condition: ${\rm R}_{1}$ is the region of $M/M_{\rm s}=3/7$, ${\rm R}_{2}$ is the region of $11/28\leq M/M_{\rm s} <3/7$, ${\rm R}_{3}$ is the region of $1/8 < M/M_{\rm s} <11/28$, ${\rm R}_{4}$ is the region of $M/M_{\rm s} =1/8$, ${\rm R}_{5}$ is the region of $0< M/M_{\rm s} <1/8$, ${\rm R}_{6}$ is the region of $M/M_{\rm s} =0$, ${\rm R}_{7}$ is the region of $0 < M/M_{\rm s} <1/7$, and ${\rm R}_{8}$ is the region of $M/M_{\rm s} =1/7$. Dashed lines indicate the boundaries of these regions. (b) Size dependences of boundaries under the open boundary condition.
The results presented are those of $N=$42, 56, 84, and 112 from DMRG calculations. The curve labeled ${\rm R}_{l}$-${\rm R}_{l+1}$ indicates the boundary line between the ${\rm R}_{l}$ and ${\rm R}_{l+1}$ regions, where $l$ is an integer. } \label{fig5} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=6cm]{fig6.eps} \caption{Size dependence of the width of the ${\rm R}_{7}$ region. It is difficult to determine whether the ${\rm R}_{7}$ region survives in the thermodynamic limit. } \label{fig6} \end{center} \end{figure} \subsection{Magnetic properties in each region} In this subsection, we investigate the local magnetization $\langle S_{i,\xi }^{z} \rangle$ to study the magnetic structures in the various regions except the ${\rm R}_{2}$ and ${\rm R}_{6}$ regions. Note that we calculate $\langle S_{i,\xi }^{z} \rangle$ within the subspace of the highest $S_{\rm tot}^{z}$ corresponding to the spontaneous magnetization $M$ for a given $J_{2}/J_{1}$. Since the present lattice depicted in Fig.~\ref{fig1}(b) has seven sublattices, we use a different colored symbol for each sublattice $\xi$ when presenting our results of $\langle S_{i,\xi }^{z} \rangle$, as depicted in the inset of Fig. \ref{fig7}(a); we use a black square for $\xi=$A, a red triangle for $\xi={\rm A}^{\prime}$, a blue cross for $\xi={\rm B}$, a green pentagon for $\xi={\rm B}^{\prime}$, a purple inverted triangle for $\xi=$C, an aqua diamond for $\xi={\rm C}^{\prime}$, and a black circle for $\xi=$D. First, we examine the ${\rm R}_{1}$ and ${\rm R}_{8}$ regions. We present our DMRG results of $\langle S_{i,\xi }^{z} \rangle$ of the system of $N=112$ in Figs. \ref{fig7}(a) and \ref{fig7}(b) for $J_{2}/J_{1}=0.2$ and 1.9, respectively. In Fig.
\ref{fig7}(a), we observe the uniform behavior of upward-direction spins at the sublattice sites A, ${\rm A}^{\prime}$, B, ${\rm B}^{\prime}$, and D, and downward-direction spins at the sublattice sites C and ${\rm C}^{\prime}$. In Fig. \ref{fig7}(b), we also observe the uniform behavior of upward-direction spins at the sublattice sites B, ${\rm B}^{\prime}$, C, and ${\rm C}^{\prime}$, and downward-direction spins at the sublattice sites A, ${\rm A}^{\prime}$, and D. Therefore, we conclude that the LM ferrimagnetic states are realized in the regions of ${\rm R}_{1}$ and ${\rm R}_{8}$. \begin{figure}[h] \begin{center} \includegraphics[width=6cm]{fig7.eps} \caption{(Color) Local magnetization $\langle S_{i,\xi }^{z} \rangle$ at each sublattice $\xi$. Panels (a) and (b) show results for $J_{2}/J_{1}=$0.2 and 1.9, respectively. These results are obtained from our DMRG calculations for $N=112$ ($i=$1, 2, $\cdots$, 16). The inset in the panel (a) presents the correspondence relationship between each colored symbol and each sublattice $\xi$ used in Figs. \ref{fig7}, \ref{fig9}, and \ref{fig10}. } \label{fig7} \end{center} \end{figure} Our understanding of the origins of these LM ferrimagnetic phases is based on the Marshall-Lieb-Mattis (MLM) theorem. In the case of $J_{2}/J_{1}=0$, no frustration occurs; thus, the spin state depicted in Fig. \ref{fig8}(a) is realized. This state shows the LM ferrimagnetism with $M/M_{\rm s}=3/7$. The ${\rm R}_{1}$ region is directly connected to the LM ferrimagnetic state of $J_{2}/J_{1}=0$. Therefore, the ${\rm R}_{1}$ region of $M/M_{\rm s}=3/7$ is regarded as the LM ferrimagnetic phase. In the limit $J_{2} \rightarrow \infty$, on the other hand, the present model becomes equal to a model of an $S=1/2$ diamond chain depicted in Fig. \ref{fig8}(b). The value of magnetization takes $M/M_{\rm s}=1/7$ in the ground state of this diamond chain according to the MLM theorem. 
Therefore, the ${\rm R}_{8}$ region of $M/M_{\rm s}=1/7$ is regarded as the LM ferrimagnetic phase. \begin{figure}[h] \begin{center} \includegraphics[width=6cm]{fig8.eps} \caption{(a) Kagome strip lattice depicted in Fig. \ref{fig1}(b) in the limit of $J_{2}/J_{1}=0$. White arrowheads denote the classical spin configuration in the LM ferrimagnetic state of $M/M_{\rm s}=3/7$. (b) Kagome strip lattice depicted in Fig. \ref{fig1}(b) in the limit of $J_{2} \rightarrow \infty$. White circles represent the effective $S=1/2$ spins formed by five $S=1/2$ spins at the sublattices $\xi=$A, ${\rm A}^{\prime}$, B, ${\rm B}^{\prime}$, and D. The black bold and black dotted lines show the antiferromagnetic and ferromagnetic interactions, respectively. } \label{fig8} \end{center} \end{figure} Next, we investigate the ${\rm R}_{3}$, ${\rm R}_{5}$, and ${\rm R}_{7}$ regions. Our results obtained from the DMRG calculations of $N=168$ are depicted in Figs. \ref{fig9}(a)-\ref{fig9}(c) for $J_{2}/J_{1}=0.57$, 1.14, and 1.69, respectively. We clearly observe incommensurate modulations with long-distance periodicities in every case in Fig.~\ref{fig9}. In addition, we confirm from Fig. \ref{fig4}(b) that the ratio $M/M_{\rm s}$ changes gradually with the variation in $J_{2}/J_{1}$ in the R$_3$, R$_5$, and R$_7$ regions. Since the widths of the R$_3$ and R$_5$ regions survive in the thermodynamic limit as was clarified in the previous subsection, we conclude that the ${\rm R}_{3}$ and ${\rm R}_{5}$ regions are NLM ferrimagnetic phases. Although it is unclear whether the ${\rm R}_{7}$ region survives in the thermodynamic limit, this region is an NLM ferrimagnetic phase if it survives. \begin{figure}[h] \begin{center} \includegraphics[width=6.0cm]{fig9.eps} \caption{(Color) Local magnetization $\langle S_{i,\xi }^{z} \rangle$ at each sublattice $\xi$. The correspondence relationship between each colored symbol and each sublattice $\xi$ is described in the inset in Fig. \ref{fig7}(a). 
Panels (a), (b), and (c) show results for $J_{2}/J_{1}=$0.57, 1.14, and 1.69, respectively. These results are obtained from our DMRG calculations for $N=168$ ($i=$1, 2, $\cdots$, 24). } \label{fig9} \end{center} \end{figure} Finally, in this subsection, we examine the ${\rm R}_{4}$ region. Our result of $\langle S_{i,\xi }^{z} \rangle$ for $J_{2}/J_{1}=1$ in the system of $N=168$ is depicted in Fig. \ref{fig10}. We do not detect the incommensurate modulation in this ${\rm R}_{4}$ region. In addition, we confirm from Fig. \ref{fig4}(b) that the ratio $M/M_{\rm s}$ in the ${\rm R}_{4}$ region does not change with the variation in $J_{2}/J_{1}$, in contrast to the cases in the R$_3$ and R$_5$ regions. These circumstances suggest that the ${\rm R}_{4}$ region is the LM ferrimagnetic phase. \begin{figure}[h] \begin{center} \includegraphics[width=6cm]{fig10.eps} \caption{(Color) Local magnetization $\langle S_{i,\xi }^{z} \rangle$ at each sublattice $\xi$. This result is obtained from our DMRG calculations for $J_{2}/J_{1}=1$ in the system of $N=168$ ($i=$1, 2, $\cdots$, 24). The correspondence relationship between each colored symbol and each sublattice $\xi$ is described in the inset in Fig. \ref{fig7}(a). } \label{fig10} \end{center} \end{figure} However, one cannot infer the spin state from the result of $\langle S_{i,\xi }^{z} \rangle$ alone, because the up or down directions of the spins do not determine the state unambiguously here, in contrast to what was successfully observed in the ${\rm R}_{8}$ region. To determine whether the ${\rm R}_{4}$ region is the LM ferrimagnetic phase, we further investigate the magnetization curve in this region. The magnetization curves of $J_{2}/J_{1}=1$ for $N=56$ and $N=112$ calculated by the DMRG method are presented in Fig. \ref{fig11}(a)\cite{comment}. Figure \ref{fig11}(b) is obtained by zooming in on the region of $h$ near $h=0$.
One can observe the existence of the magnetization plateaus at the height of the spontaneous magnetization for both system sizes. We also confirm that the difference in the width of the plateau between $N=112$ and 56 is very small. These features indicate that the spin gap exists in the ${\rm R}_{4}$ region in the thermodynamic limit. If the ${\rm R}_{4}$ region is the NLM ferrimagnetic phase, no spin gap is present in this region. (It was reported in ref. \ref{Hida} that the NLM ferrimagnetism is gapless as a response to a uniform magnetic field.) Therefore, we conclude that the ${\rm R}_{4}$ region is the LM ferrimagnetic phase. \begin{figure}[h] \begin{center} \includegraphics[width=6cm]{fig11.eps} \caption{Magnetization curve in the ${\rm R}_{4}$ region. Panel (a) is obtained from the DMRG calculations of $J_{2}/J_{1}=1$ for $N=56$ (triangle) and $N=112$ (square). Panel (b) is obtained by zooming the region near $h=0$ in panel (a). } \label{fig11} \end{center} \end{figure} \section{Discussion} We discuss the relationships between the interval $0< J_{2}/J_{1}\le 1$ of the kagome strip lattice depicted in Fig. \ref{fig1}(b) and those of the spatially anisotropic kagome lattice studied in ref. \ref{kagome-2D}, while we consider the case of another new lattice, with a larger but finite width along the direction perpendicular to the bonds of interaction $J_{\rm F}$ in Fig. \ref{fig1}(b), namely, the strip width. The ratio $M/M_{\rm s}$ of the ${\rm R}_{1}$ and ${\rm R}_{2}$ regions is commonly $M/M_{\rm s}=3/7$ in the thermodynamic limit of the lattice depicted in Fig. \ref{fig1}(b). The difference of $M/M_{\rm s}=3/7$ from $M/M_{\rm s}=1/3$ in the case of the LM ferrimagnetic phase of the spatially anisotropic kagome lattice in ref. \ref{kagome-2D} is attributed to the finiteness of the strip width. 
Thus, the ratio $M/M_{\rm s}$ approaches $M/M_{\rm s}=1/3$ when the strip width increases; the ${\rm R}_{1}$ and ${\rm R}_{2}$ regions of the present model depicted in Fig. \ref{fig1}(b) correspond to the LM ferrimagnetic phase of the spatially anisotropic kagome lattice studied in ref. \ref{kagome-2D}. The ratio $M/M_{\rm s}=1/7$ at $J_{2}/J_{1}=1$ in the present model depicted in Fig. \ref{fig1}(b) may be related to the fact that the model includes seven sublattices, namely, that the strip width is finite. At least at $J_{2}/J_{1}=1$, on the other hand, it is widely believed that the spontaneous magnetization disappears in the limit of infinite width\cite{Lecheminant, Waldtmann,Hida_kagome,Cabra,Honecker0,Honecker1,Cepas, Spin_gap_Sindzingre,HN_Sakai2010,Sakai_HN_PRBR,HN_Sakai2011}. Although the relationship should be clarified by examining lattices with even larger strip widths, there are two possibilities for the behavior near $J_{2}/J_{1}=1$. One is the case in which the ratio $M/M_{\rm s}$ decreases with increasing strip width and finally vanishes, while the LM ferrimagnetic phase, such as ${\rm R}_4$, survives in systems with larger strip widths. In this case, the ${\rm R}_{3}$ region corresponds to the NLM phase of the two-dimensional model on the spatially anisotropic kagome lattice. In the other case, the LM ferrimagnetic phase, such as ${\rm R}_4$, becomes narrower with increasing strip width, while the ratio $M/M_{\rm s}$ does not vanish; finally, the ${\rm R}_3$ and ${\rm R}_5$ regions merge with each other. In this case, the value of $J_2/J_1$ at the boundary between the ${\rm R}_5$ and ${\rm R}_6$ regions decreases across $J_2/J_1=1$. In either case, it is important to note that our finding of an intermediate phase between the ferrimagnetic phase and the nonmagnetic phase in all three cases in Fig.~\ref{fig1} indicates that this phase exists irrespective of the strip width.
Finally, one should note that an intermediate phase between the Lieb-Mattis ferrimagnetic and nonmagnetic states has also been observed in other cases. However, this phase is not always ferrimagnetic. One such case is the three-leg ladder forming a strip lattice obtained by cutting it out of the spatially anisotropic triangular lattice in ref.~\ref{Abe}, in which the properties of the intermediate phase were unclear. Sakai's unpublished study suggests that this intermediate phase is nematic\cite{Sakai_private_com}. Careful examinations are required to investigate such an intermediate phase when it is found. \section{Summary} We have studied the ground-state properties of the $S=1/2$ Heisenberg models on the kagome strip lattices depicted in Figs. \ref{fig1}(b) and \ref{fig1}(c) by the ED and DMRG methods. As a common phenomenon in the ground state of both cases, we have confirmed the existence of the non-Lieb-Mattis ferrimagnetism between the Lieb-Mattis ferrimagnetic phase and the nonmagnetic phase. We have clearly found incommensurate modulations with long-distance periodicity in the non-Lieb-Mattis ferrimagnetic state. The occurrence of the non-Lieb-Mattis ferrimagnetism irrespective of the strip width strongly suggests that the intermediate state found in the case of the spatially anisotropic kagome lattice in two dimensions is the non-Lieb-Mattis ferrimagnetism. \section*{Acknowledgments} We thank Prof. Toru Sakai for letting us know his unpublished results. This work was partly supported by Grants-in-Aid (Nos. 20340096, 23340109, 23540388, and 24540348) from the Ministry of Education, Culture, Sports, Science and Technology of Japan. This work was partly supported by a Grant-in-Aid (No. 22014012) for Scientific Research and Priority Areas ``Novel States of Matter Induced by Frustration'' from the Ministry of Education, Culture, Sports, Science and Technology of Japan.
Diagonalization calculations in the present work were carried out using TITPACK Version 2 coded by H. Nishimori. DMRG calculations were carried out using the ALPS DMRG application\cite{ALPS}. Some computations were performed using the facilities of the Supercomputer Center, Institute for Solid State Physics, University of Tokyo.
\section{Introduction} The seesaw mechanism is one of the popular mechanisms\cite{seesawI,seesawII,seesawIII} beyond the standard model (SM) that can explain why neutrino masses are so much smaller than the mass scale $m_D$ of their charged lepton partners. In Type I and III seesaw models\cite{seesawI,seesawIII}, it requires the existence of heavy right-handed neutrinos with a Majorana mass scale $M$. The light neutrino mass is of order $m_D (m_D/M)$. The scale of the heavy right-handed neutrino mass $M$ is usually expected to be super heavy and can be as high as the grand unification scale. It would be good if the seesaw mechanism could be tested by high energy colliders. The LHC can test theoretical models beyond the SM at an energy scale as high as 8 TeV at present and will reach 14 TeV in the future. If the heavy seesaw scale is indeed of order the grand unification scale, it is impossible to test the seesaw mechanism directly at accessible collider energies. Theoretically, it is interesting to see whether the seesaw scale can be lowered to the TeV range, allowing a direct probe by the ATLAS and CMS experiments at the LHC. There are indeed special solutions which allow heavy right-handed neutrinos with masses of order a TeV, at the price of fine tuning of the parameters\cite{low-seesaw}. Although this is theoretically allowed, it loses the original motivation of naturally explaining the lightness of neutrinos through the seesaw mechanism. Radiative seesaw neutrino mass generation can ease the problem and at the same time provide the much desired candidate for dark matter when an additional symmetry exists to stabilize the dark matter candidate\cite{ma,he-ren}. The inverse seesaw mechanism\cite{inverse-seesaw} can also lower the seesaw scale. In the inverse seesaw model, the heavy Majorana neutrinos are replaced by heavy Dirac particles. The light neutrino masses are of order $\mu (m_D/M)^2$.
Here $M$ is the heavy Dirac particle scale and $\mu$ is a Majorana mass of the heavy Dirac particles, which is supposed to be small even compared with $m_D$. It is clear that the heavy Dirac mass scale $M$ can naturally be much lower than the Majorana mass scale in the usual seesaw models. If the inverse seesaw is also achieved by radiative correction, the heavy scale can be even lower\cite{ma1,sandy,bazzocchi}. There are also other mechanisms to further lower the scale by naturally having a small $\mu$ parameter, such as that discussed in Ref.\cite{fong} through an extra warped dimension. Here we will study radiative inverse seesaw models. Achieving radiative mass generation sometimes involves the introduction of new symmetries to forbid terms which may induce tree-level neutrino masses. If the symmetry introduced is unbroken, there may be a stable new particle in the theory. This new particle may play the role of the dark matter needed to explain about 23\% of the energy budget of our universe\cite{dark-matter}. In this paper we study several simple models using a leptonic heavy Dirac multiplet to facilitate radiative inverse seesaw neutrino mass generation and also to provide a dark matter candidate. \\ \section{ Tree Inverse Seesaw} The inverse seesaw neutrino mass matrix $M_\nu$ is the mass matrix resulting from the effective Lagrangian \begin{eqnarray} L_m = - \bar \nu_L m_D N_R - \bar N_L M N_R - {1\over 2} \bar N^c_R \mu_R N_R - {1\over 2} \bar N_L \mu_L N^c_L + h.c. \end{eqnarray} where $\nu_L$ is the light active neutrino and $N_{L,R}$ are heavy neutrinos.
In the basis $(\nu^c_L, N_R, N_L^c)^T$, $M_\nu$ is given by \begin{eqnarray} M_\nu = \left ( \begin{array}{ccc} 0&m_D&0\\ m_D^T&\mu_R&M^T\\ 0&M&\mu_L \end{array} \right )\;. \end{eqnarray} With the hierarchy $\mu_L\sim \mu_R \ll m_D \ll M$, the light neutrino mass matrix $m_\nu$, defined by $L_{mass} = - (1/2)\bar \nu_L m_\nu \nu^c_L$, is to order $(m_D/M)^2$ given by\cite{inverse-seesaw} \begin{eqnarray} m_\nu = m_DM^{-1}\mu_L (M^{-1})^Tm_D^T. \end{eqnarray} There are different ways to achieve the inverse seesaw mechanism depending on where $N_{L,R}$ comes from. We briefly outline two simple possibilities which may realize the inverse seesaw at tree level. One of the simplest ways is to introduce right-handed $N_R$ and left-handed $N_L$ singlet heavy neutrinos with a discrete $Z_2$ symmetry, $N_R \to N_R$, $N_L \to - N_L$, under which all other SM particles do not transform. The $m_D$ term is generated through the Yukawa coupling $\bar L_L Y_D \tilde H N_R$. Here $H = (h^+, (v_H+ h+ iI)/\sqrt{2})^T$ is the SM Higgs doublet which transforms under the SM electroweak gauge group $SU(2)_L\times U(1)_Y$ as $(2,1/2)$, and $\tilde H = i\sigma_2 H^*$. $v_H$ is the vacuum expectation value (vev) of $H$. $L_L = (\nu_L, e_L)^T: (2,-1/2)$ is the SM lepton doublet. The $\mu_{L,R}$ terms are from the bare Majorana mass terms $\bar N^c_R \mu_R N_R$ and $\bar N_L \mu_L N^c_L$. Because of the $Z_2$ symmetry, the bare Dirac mass term $\bar N_L M N_R$ is not allowed. In order to generate a non-zero $M$ term, one can introduce a singlet scalar $S$ transforming under $Z_2$ as $S\to - S$ with a non-zero vev $v_s/\sqrt{2}$. In this case, the Yukawa term $\bar N_L Y_s S N_R$ is allowed, which generates $M = Y_s v_s/\sqrt{2}$. One can also introduce a leptonic doublet $D_{L,R}: (2,-1/2)$ along with a singlet $S$ and a triplet $\Delta: (3, - 1)$ ($\Delta_{ij}$ with $\Delta_{11} = \Delta^0$, $\Delta_{12}=\Delta_{21} = \Delta^{-}/\sqrt{2}$ and $\Delta_{22} = \Delta^{--}$) to realize the inverse seesaw.
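The inverse seesaw approximation above can be checked numerically. The following is a minimal one-generation sketch (the parameter values are illustrative toy numbers, not taken from this paper): with $\mu_L \sim \mu_R \ll m_D \ll M$, the lightest eigenvalue of the full $3\times 3$ mass matrix should approach $m_D\, \mu_L\, m_D/M^2$.

```python
import numpy as np

# Illustrative toy values obeying the hierarchy mu << m_D << M (not from the paper).
m_D, M, mu_L, mu_R = 1.0, 100.0, 1e-6, 1e-6

# Full mass matrix in the basis (nu_L^c, N_R, N_L^c), one generation.
M_nu = np.array([[0.0, m_D, 0.0],
                 [m_D, mu_R, M],
                 [0.0, M, mu_L]])

eigs = np.linalg.eigvalsh(M_nu)
m_light = np.abs(eigs).min()          # lightest mass eigenvalue
m_seesaw = m_D * mu_L * m_D / M**2    # inverse seesaw prediction, mu_L (m_D/M)^2

print(m_light, m_seesaw)
```

The agreement is at the level of the neglected $O((m_D/M)^2)$ corrections.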
One can introduce a global $U(1)_D$ symmetry to distinguish $D_L$ from $L_L$. Under this symmetry $D_{L,R} \to exp[i\alpha_D]D_{L,R}$, $S \to exp[-i\alpha_D]S$, $\Delta \to exp[2i\alpha_D] \Delta$, and other fields do not transform. We have the following Lagrangian relevant to neutrino masses \begin{eqnarray} L_D = - \bar L_L Y_D D_R S - \bar D_L M D_R - {1\over 2} \bar D_L Y_L D^c_L \Delta - {1\over 2} \bar D_R^c Y_R D_R \Delta^\dagger + h.c. \end{eqnarray} If both $S$ and $\Delta$ develop non-zero vev's, the inverse seesaw mechanism is realized. This model, however, will have a Goldstone boson due to the breaking of the global $U(1)_D$ symmetry, which may be problematic. To avoid the existence of a Goldstone boson in the theory, an extension is needed. Also, the above two models contain no candidates for dark matter. In the following sections, we will extend the two models discussed in this section to radiatively generate inverse seesaw neutrino masses. We will also discuss the possibility of having dark matter candidates in these models. \\ \section{ Radiative Two Loop Inverse Seesaw} To avoid the appearance of a massless Goldstone boson in the theory, a possible approach is not to allow the global symmetry to break, so that no Goldstone boson emerges. Applying this idea to the model involving $D_{L,R}$, the scalars $S$ and $\Delta$ are then not allowed to have vev's. This, however, also forbids the light neutrinos from having non-zero masses at tree level. We have to extend the model. To this end, we introduce another singlet $\sigma$ which transforms under the $U(1)_D$ as $\sigma \to exp[2i\alpha_D]\sigma$. We refer to this model as the $U(1)_D$ model.
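The $U(1)_D$ charge assignments can be verified term by term. A small bookkeeping sketch (our own check, not part of the paper) confirms that each Yukawa term in $L_D$, as well as the potential terms $S^2\sigma$ and $H\Delta\sigma^\dagger H$ appearing below, is neutral under $U(1)_D$; conjugation (a bar, a dagger, or charge conjugation $^c$) flips the sign of the charge.

```python
# U(1)_D charges in units of alpha_D: D -> e^{i a} D, S -> e^{-i a} S,
# Delta -> e^{2i a} Delta, sigma -> e^{2i a} sigma; L_L and H are uncharged.
charge = {"L": 0, "D": 1, "S": -1, "Delta": 2, "sigma": 2, "H": 0}

def q(field, conj=False):
    """Charge of a field; conjugation flips the sign."""
    return -charge[field] if conj else charge[field]

terms = {
    # \bar L_L Y_D D_R S : bar(L) + D + S
    "LbarDS": q("L", True) + q("D") + q("S"),
    # \bar D_L M D_R
    "DbarD": q("D", True) + q("D"),
    # \bar D_L Y_L D_L^c Delta : bar(D) + conj(D) + Delta
    "DbarDcDelta": q("D", True) + q("D", True) + q("Delta"),
    # \bar D_R^c Y_R D_R Delta^dagger : bar(conj(D)) + D + conj(Delta)
    "DcbarDDeltaDag": q("D") + q("D") + q("Delta", True),
    # mu_{S sigma} S^2 sigma
    "SSsigma": q("S") + q("S") + q("sigma"),
    # lambda_{Delta sigma H} H Delta sigma^dagger H
    "HDeltaSigmaDagH": q("H") + q("Delta") + q("sigma", True) + q("H"),
}
print(terms)
```

All charges sum to zero, so every listed term is $U(1)_D$ invariant.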
The allowed renormalizable terms in the potential $V_D$ are given by \begin{eqnarray} V_D &=& -\mu^2_H H^\dagger H + \lambda_H (H^\dagger H)^2 + \mu^2_S S^\dagger S + \lambda_S (S^\dagger S)^2 + \mu^2_\sigma \sigma^\dagger \sigma + \lambda_\sigma (\sigma^\dagger \sigma)^2\nonumber\\ & + & \mu^2_\Delta \Delta^\dagger \Delta + \lambda^\alpha_\Delta (\Delta^\dagger \Delta \Delta^\dagger \Delta)_\alpha + \sum_{ij}\lambda_{ij} i^\dagger ij^\dagger j + (\mu_{S\sigma} S^2 \sigma + \lambda_{\Delta \sigma H} H \Delta \sigma^\dagger H + h.c.), \end{eqnarray} where the sum $\sum_{ij}$ is over all possible pairs $i$ and $j$, with $i$ and $j$ each being one of $H$, $S$, $\sigma$ and $\Delta$. The allowed terms are: \begin{eqnarray} \lambda^\beta_{H\Delta} (H^\dag H \Delta^\dag \Delta)_\beta + \lambda_{H\sigma} (H^\dag H \sigma^\dag \sigma)+ \lambda_{HS} (H^\dag H S^\dag S) + \lambda_{\Delta \sigma} (\Delta^\dag \Delta \sigma^\dag \sigma) + \lambda_{\sigma S} (\sigma^\dag \sigma S^\dag S)\;. \end{eqnarray} In the above, the indices $\alpha$ and $\beta$ indicate the different ways of forming a singlet. They are given by \begin{eqnarray} (\Delta^\dag \Delta \Delta^\dag \Delta)_1 = \Delta^*_{ij} \Delta_{ij} \Delta^*_{kl} \Delta_{kl}\;,\;\;\;\;(\Delta^\dag \Delta \Delta^\dag \Delta)_2 = \Delta^*_{ij} \Delta_{ik} \Delta^*_{kl} \Delta_{jl} \cr (\Delta^\dag \Delta H^\dag H)_1 = \Delta^*_{ij} \Delta_{ij} H^*_{k} H_{k}\;,\;\;\;\;(\Delta^\dag \Delta H^\dag H)_2 = \Delta^*_{ij} \Delta_{kj} H^*_{k} H_{i} \end{eqnarray} In the above, the $\mu^2_i$ are all larger than zero. The potential then only allows $H$ to have a non-zero vev $v_H$. The theory has an unbroken $U(1)_D$ global symmetry after spontaneous symmetry breaking from $SU(2)_L\times U(1)_Y$ to $U(1)_{em}$. At tree level, the light neutrinos are massless. Given the above terms in $L_D$ and $V_D$, it is not possible to define a conserved lepton number.
This is because, among the $\bar L_L D_R S$, $\bar D_{L,R} D^c_{L,R} \Delta$, $S^2\sigma$ and $H \Delta \sigma^\dagger H$ vertices, there is always one vertex where lepton number is violated. For example, assigning $L_L$ lepton number $+1$ and $D_{L,R}$ lepton number $X$, if one demands lepton number conservation at the $\bar L_L D_R S$, $\bar D_{L,R} D^c_{L,R} \Delta$ and $S^2\sigma$ vertices, then the vertex $H \Delta \sigma^\dagger H$ violates lepton number by 2 units. One can demand other vertices to conserve lepton number, but no matter what one chooses, the combination of terms proportional to $Y_DY_{L} \mu_{S\sigma}\lambda_{\Delta \sigma H}Y_D$ always violates lepton number by 2 units. The global $U(1)_D$ symmetry, however, is respected. Because lepton number is violated, Majorana neutrino masses may be generated at loop level. We find that non-zero Majorana neutrino masses are generated at the two loop level, as shown in Fig.~\ref{two-loop}. This two loop contribution violates lepton number by 2 units. The mass generation is similar to the two loop neutrino mass generation of the Babu-Zee model\cite{babu-zee}, but with the light charged leptons in the loop replaced by new heavy particles. The last two terms in the potential are crucial for light neutrino mass generation. \begin{figure} \includegraphics[width=6cm]{two-loop.eps} \caption{Two loop diagram for neutrino mass generation.}\label{two-loop} \end{figure} We will now describe how to calculate the two-loop induced light neutrino mass. After $H$ develops a vev, mixing between $\Delta^0$ and $\sigma$ is generated via the term $ H \Delta \sigma^\dagger H$.
The corresponding mass matrix for $(\Delta^0, \sigma)^T$ can be expressed as: \begin{eqnarray} \left( \begin{array}{cc} M^2_{11} & M^2_{12} \\ M^2_{21} & M^2_{22} \end{array} \right) \nonumber \end{eqnarray} with \begin{eqnarray} M^2_{11} &=& \mu^2_\Delta + {1\over 2}\lambda^1_{H\Delta} v^2_H\;,\;\; M^2_{22} = \mu^2_\sigma + {1\over 2}\lambda_{H\sigma} v^2_H\;,\;\;M^2_{12} = M^2_{21} = {1\over 2} \lambda_{\Delta \sigma H } v^2_H\;. \end{eqnarray} One can diagonalize the mass matrix via: \begin{equation} \left(\begin{array}{cc} \phi_1 \\ \phi_2 \end{array} \right) = \left( \begin{array}{cc} \cos \alpha & \sin \alpha \\ -\sin \alpha & \cos \alpha \end{array} \right) \left( \begin{array}{cc} \Delta^{0} \\ \sigma \end{array} \right) \nonumber \end{equation} with \begin{eqnarray} m^2_{\phi_{1,2}} &=& { M^2_{11} + M^2_{22} \pm \sqrt{ (M^2_{11} - M^2_{22} )^2 + 4 M^2_{12} M^2_{21} } \over 2 }\;, \cr \sin 2\alpha &=& { 2 M^2_{12} \over \sqrt{ (M^2_{11} - M^2_{22} )^2 + 4 M^2_{12} M^2_{21} } }\;. \nonumber \end{eqnarray} The light neutrino mass $m_\nu$ generated via two-loop diagram is given by: \begin{eqnarray} m_\nu &=& Y_D^2 M^2 Y_L \mu_{S\sigma} \cos\alpha \sin\alpha \int {d^4p\over (2\pi)^4} {d^4q \over (2\pi)^4}{ 1 \over p^2-m_S^2 } { 1 \over q^2-m_S^2 }{ 1 \over p^2-M^2 }{ 1 \over q^2-M^2 }\nonumber\\ &\times& \left ({ 1 \over (p-q)^2-m_{\phi_1}^2 } - { 1 \over (p-q)^2-m_{\phi_2}^2 }\right )\;. \end{eqnarray} The last factor in the above can be written as $(m^2_{\phi_1} - m^2_{\phi_2})/((p-q)^2-m^2_{\phi_1})((p-q)^2-m^2_{\phi_2})$. 
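The diagonalization formulas above, and the identity $\sin\alpha\cos\alpha\,(m^2_{\phi_1}-m^2_{\phi_2}) = M^2_{12}$ used in the next step, can be verified numerically. A minimal sketch with illustrative mass-squared entries (not fit values from the paper):

```python
import numpy as np

# Illustrative (Delta^0, sigma) mass-squared matrix entries in GeV^2 (toy values).
M11, M22, M12 = 9.0e4, 4.0e4, 1.0e4
M2 = np.array([[M11, M12],
               [M12, M22]])

# Closed-form eigenvalues and mixing angle from the text.
disc = np.sqrt((M11 - M22)**2 + 4 * M12**2)
m2_phi1 = (M11 + M22 + disc) / 2     # heavier state phi_1
m2_phi2 = (M11 + M22 - disc) / 2     # lighter state phi_2
sin2a = 2 * M12 / disc               # sin(2 alpha)

# Numerical eigenvalues for comparison (ascending order).
eigs = np.sort(np.linalg.eigvalsh(M2))
print(m2_phi2, m2_phi1, sin2a)
```

The check `sin2a / 2 * disc == M12` is exactly the identity used to eliminate the mixing angle from the two-loop result.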
Using $\sin\alpha \cos\alpha (m^{2}_{\phi_1}-m^{2}_{\phi_2}) = M^2_{12}={1\over 2}\lambda_{\Delta\sigma H}v_H^2$, and neglecting the mass splitting between $m_{\phi_1}$ and $m_{\phi_2}$ in the denominator, we obtain \begin{eqnarray} m_\nu=&&\frac{\lambda_{\Delta\sigma H} Y_L Y_D^2 \mu_{S\sigma} v_H^2 M^2}{2(M^2-m^2_{S})^2}\int\frac{d^{4}p}{(2\pi)^4}\frac{d^{4}q}{(2\pi)^4}\times\nonumber\\ &&\frac{1}{[(p-q)^2-m^2_{\phi_1}]^2}(\frac{1}{p^2-m^2_{S}}\frac{1}{q^2-m^2_{S}}-\frac{1}{p^2-m^2_{S}}\frac{1}{q^2-M^2}-\frac{1}{p^2-M^2} \frac{1}{q^2-m^2_{S}}\nonumber\\ &&+\frac{1}{p^2-M^2}\frac{1}{q^2-M^2})\;. \end{eqnarray} Our results for the two loop integral agree with those obtained in Ref.~\cite{bruce}. Carrying out the loop integrals, we finally obtain \begin{eqnarray} m_\nu=&&\frac{\lambda_{\Delta\sigma H} Y_L Y_D^2 \mu_{S\sigma} v_H^2 }{2(4\pi)^4 M^2(1-m^2_S/M^2)^2} [g(m_{\phi_1},m_S,m_S)-g(m_{\phi_1},M,m_S)\cr &&-g(m_{\phi_1},m_S,M)+g(m_{\phi_1},M,M)]\;, \end{eqnarray} where \begin{eqnarray} g(m_1,m_2,m_3)=\int^{1}_{0} dx [1+Sp(1-\mu^2)-\frac{\mu^2}{1-\mu^2}\log\mu^2]\nonumber \end{eqnarray} with $\mu^2=\frac{ax+b(1-x)}{x(1-x)}, a=\frac{m^2_2}{m^2_1}, b=\frac{m^2_3}{m^2_1}$.
$Sp(z)$ is the Spence function, or dilogarithm, defined as: \begin{eqnarray} Sp(z)=-\int^z_0 {\ln(1-t)\over t} dt \end{eqnarray} To compare with the inverse seesaw mass formula, we rewrite the above in matrix form in the basis where $M$ is diagonal \begin{eqnarray} m_\nu^{ij} =\frac{v_H Y_D^{ik}(\lambda_{\Delta\sigma H}\mu_{S\sigma} Y_L^{kl}) Y_D^{jl} v_H}{M^2_{kk}}\kappa_{kl}\;, \label{mass} \end{eqnarray} where $\kappa_{kl}$ is defined as: \begin{eqnarray} \kappa_{kl} &=&\delta_{kl}\frac{1}{2(4\pi)^4}{1\over (1- m^2_S/M^2_{kk})^2}[g(m_{\phi_1},m_S,m_S)-g(m_{\phi_1},M_{kk},m_S) \cr &&-g(m_{\phi_1},m_S,M_{kk})+g(m_{\phi_1},M_{kk},M_{kk})] \end{eqnarray} If one identifies, effectively, $m_D = Y_D v_H$, $M = diag(M_{ii})$ and $\mu_L = (\mu_L^{ij})$ with $\mu_L^{ij} = (\lambda_{\Delta\sigma H}\mu_{S\sigma}) Y_L^{ij}\kappa_{ij}$, the light neutrino mass matrix takes effectively the inverse seesaw form. We therefore refer to this as a radiative inverse seesaw mechanism. This model is different from the radiative inverse seesaw models discussed in Refs.~\cite{ma1,sandy}, where additional neutral heavy spin-half particles are introduced to generate radiative neutrino masses. The above formula can easily fit current data on neutrino mixing and masses\cite{neutrino-data}. As an example, let us consider a simple case with $Y_L$ diagonal and $Y_D =y_D U_{PMNS}$. For the normal hierarchy, choosing $Y_L = diag(1, ~1.05, ~ 2.01)\times 10^{-2}$, $y_D = 10^{-2}$, $\lambda_{\Delta\sigma H} =0.1$, $\mu_{S\sigma} = 100$ GeV, $m_{\phi_1} = 300$ GeV, $m_S = 150$ GeV and $M_{ii}=500$ GeV, we obtain the three neutrino masses $2.804\times 10^{-2}$ eV, $2.936\times 10^{-2}$ eV and $5.636\times 10^{-2}$ eV, respectively. These are consistent with data.
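For numerical work with the loop function $g$, the dilogarithm $Sp(z)$ is available in standard libraries, but conventions differ. A minimal sketch, assuming SciPy's shifted convention $\mathrm{spence}(z)=\int_1^z \ln t/(1-t)\,dt = Sp(1-z)$, so that $Sp(z)=\mathrm{spence}(1-z)$:

```python
import numpy as np
from scipy.special import spence

# The paper's Sp(z) is the standard dilogarithm Li_2(z). SciPy's `spence`
# uses a shifted argument, spence(z) = Li_2(1 - z), hence the 1 - z below.
def Sp(z):
    return spence(1.0 - z)

# Sanity checks against known dilogarithm values:
# Sp(1) = pi^2/6, Sp(-1) = -pi^2/12, Sp(0) = 0.
print(Sp(1.0), Sp(-1.0), Sp(0.0))
```

With this `Sp`, the integrand of $g(m_1,m_2,m_3)$ can be integrated over $x\in(0,1)$ with any standard quadrature routine.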
For the inverted hierarchy case, we just need to replace $Y_L$ by $Y_L = diag(1.297, ~1.317, ~0.100)\times 10^{-2}$; with all the other parameters unchanged, the neutrino masses will be $4.90\times 10^{-2}$ eV, $4.98\times 10^{-2}$ eV and $3.78\times 10^{-3}$ eV, respectively. Again, these numbers are consistent with data. Along the same lines, the case with singlet heavy neutrinos discussed earlier can also be modified to realize the inverse seesaw mechanism at two loops. To this end we impose on the theory a global $U(1)_S$ symmetry. The new particles beyond the SM are: $N_{L,R}: (1,0)$, $\eta: (2,-1/2)$, $\Delta: (3, -1)$ and $S:(1,0)$. Under the $U(1)_S$ these particles transform as: $N_{L,R} \to exp[i\alpha_S]N_{L,R}$, $\eta \to exp[-i\alpha_S] \eta$, $\Delta \to exp[-2i\alpha_S]\Delta$, $S \to exp[-2i\alpha_S] S$. The Lagrangian $L_S$ for the bare mass term and Yukawa couplings, and the potential $V_S$ relevant for two loop neutrino mass generation, are given by \begin{eqnarray} L_S &=& - \bar N_L M N_R - \bar L_L Y_D N_R \eta - {1\over 2} \bar N^c_R Y_R N_R S - {1\over 2} \bar N_L Y_L N^c_L S^\dagger + h.c.\nonumber\\ V_S&=& \mu_{\Delta \eta}\eta \Delta^\dagger \eta + \lambda_{\Delta S H} H \Delta^\dagger S H + h.c. + ..., \end{eqnarray} where ``...'' indicates other allowed terms. The light neutrino mass matrix can be obtained by replacing $\mu_{S\sigma}$ by $\mu_{\Delta\eta}$ and $\lambda_{\Delta \sigma H}$ by $\lambda_{\Delta S H}$ in Eq.~(\ref{mass}). In this model the terms proportional to $Y_D Y_L\mu_{\Delta\eta}\lambda_{\Delta SH} Y_D$ violate lepton number by 2 units, for the same reason as in the $U(1)_D$ model. As far as the radiative generation of inverse seesaw neutrino masses is concerned, the above two models are very similar. However, when considering dark matter physics, the two models have different features. We proceed to discuss them in the following.
\\ \section{ Dark Matter Candidate} Since in both the $U(1)_D$ and $U(1)_S$ models the global symmetries are not broken, there are stable particles which may play the role of dark matter. Which of the new particles is the lightest depends on the parameter space, and this determines which one plays the role of dark matter. In the $U(1)_D$ model, the heavy fermions have non-zero hypercharge and cannot play the role of dark matter. This is because, although the dark matter relic density can be produced by annihilation into gauge bosons with known interaction strength for a sufficiently large dark matter mass, the direct detection rate from $t$-channel $Z$ boson exchange would be too large. This possibility is therefore ruled out. The neutral components of the scalar fields in the models are other possibilities which may be identified as dark matter. The neutral component $\Delta^0$ has a problem playing the role of dark matter due to its non-zero hypercharge. If the real and imaginary parts of $\Delta^0$ have masses $m_r$ and $m_i$ with a splitting $\delta = m_r - m_i$, the non-zero hypercharge problem can be resolved by invoking the inelastic dark matter mechanism\cite{inelastic-dm}: the scattering of dark matter off a nucleon is kinematically forbidden if the mass splitting $\delta$ is larger than 100 keV or so. In the $U(1)_D$ model, however, we find that it is not possible to generate a non-zero $\delta$ between the real and imaginary parts of $\Delta^0$, so the inelastic dark matter mechanism is ineffective. The natural dark matter field is $S$. It does not have a non-zero hypercharge and does not mix with any particle having hypercharge.
As far as dark matter properties are concerned, this model is very similar to the real singlet (darkon) model\cite{real-singlet}, and therefore has similar dark matter properties\cite{darkon}; it is identical to the complex scalar singlet model\cite{complex-singlet} with degenerate masses for the real and imaginary parts of $S$. This is a typical Higgs portal model: dark matter annihilation and detection are all mediated by the Higgs boson. The important term is $S^\dagger S H^\dagger H$. Removing the would-be Goldstone bosons in $H$, we have \begin{eqnarray} \lambda_{SH} S^\dagger S H^\dagger H = {1\over 2}\lambda_{SH}(v^2_H + 2v_H h + h^2)SS^\dagger\;. \end{eqnarray} The first term shifts the mass of $S$ from $\mu^2_S$ to $M_D^2 = \mu^2_S + \lambda_{SH} v^2_H/2$. As far as dark matter annihilation and detection are concerned, the free parameters are: $M_D$, $\lambda_{SH}$ and the Higgs boson mass. In the model, the properties of the Higgs boson $h$, its mass and its couplings to SM particles (fermions and gauge bosons), are very close to those of the SM Higgs boson $h_{SM}$. The recent LHC data indicate that its mass is about 125 GeV\cite{LHC-higgs}. We will analyze the model using a Higgs mass of $m_h = 125$ GeV. The dark matter relic density and direct detection constraints on the coupling $\lambda_{SH}$ and $M_D$ are shown in Fig.~\ref{S-dark-matter}. Since we now have two degenerate components as dark matter, the constraint on $\lambda_{SH}$ from the relic density is $1/\sqrt{2}$ times smaller than in the darkon model\cite{real-singlet}. The recent data on direct dark matter searches from Xenon100\cite{newxenon100} put the most stringent constraint on the allowed range of dark matter masses. A dark matter mass of a few tens of GeV is in trouble. However, a dark matter mass of about half the Higgs mass, or larger than 130 GeV, is still allowed.
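The mass shift from the portal term is simple arithmetic. A hedged numerical illustration (the values of $\mu_S$ and $\lambda_{SH}$ below are sample inputs, not fit results from the paper):

```python
import math

# Higgs-portal mass shift: M_D^2 = mu_S^2 + lambda_SH * v_H^2 / 2.
v_H = 246.0     # GeV, SM Higgs vev
mu_S = 100.0    # GeV, bare S mass parameter (illustrative)
lam_SH = 0.1    # portal coupling (illustrative)

M_D = math.sqrt(mu_S**2 + lam_SH * v_H**2 / 2)
print(M_D)      # physical dark matter mass in GeV
```

For these inputs the portal term raises the physical mass well above the bare parameter, which is why $M_D$ rather than $\mu_S$ enters the relic density and direct detection analysis.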
\begin{figure} \includegraphics[width=6cm]{lamda.eps}\\ \includegraphics[width=6cm]{detection.eps} \caption{Constraints on the coupling and dark matter mass from dark matter relic density and direct detection\cite{dark-direct,newxenon100} for $S$ as the dark matter, with the Higgs mass set to 125 GeV. The projected Xenon1T sensitivity is also drawn.}\label{S-dark-matter} \end{figure} The $\sigma$ field is also a possibility for dark matter since it does not have a hypercharge either. It mixes with $\Delta^0$ after $H$ develops a vev through the term $\lambda_{\Delta \sigma H} H \Delta \sigma^\dagger H$. The lighter physical particle, which may play the role of dark matter, will then also have a non-zero $Z$ coupling. In this case, however, there is a mixing parameter which can be tuned to satisfy the constraint. We find that as long as the parameter $\sin \alpha$ is less than $10^{-3}$, the direct detection cross section can be kept acceptably small. We have also checked that $\sin\alpha$ of order $10^{-3}$ can be made compatible with the neutrino mass generation requirement. With $\sin\alpha < 10^{-3}$, the dark matter is dominated by the $\sigma$ component, and its dark matter properties are similar to those of $S$. We now briefly discuss dark matter properties in the $U(1)_S$ model. In this model, the neutral scalars in $\eta$, $\Delta$ and $S$, and the $N$, are possible candidates for dark matter. The neutral components of $\eta$ and $\Delta$ have hypercharges and no mass splitting between their real and imaginary parts, so their direct detection cross sections are too large once the relic density requirement is satisfied. Although the $S$ field does not have a hypercharge, it mixes with $\Delta^0$, so some fine tuning is needed to be compatible with direct dark matter detection data. The situation is similar to the case of $\sigma$ as the dark matter in the $U(1)_D$ model.
The $N$ field does not have hypercharge and may also play the role of dark matter. In this case, the dark matter relic density is achieved by $t$-channel $\eta$ exchange, which induces $N \bar N$ pair annihilation into lepton pairs, $l^+ l^-$ and $\nu_L\; \bar\nu_L$. The annihilation rate is governed by the Yukawa coupling $Y_D$, the mass $m_\eta$ of $\eta$ and the dark matter mass $M_D = M$. We have checked that there is parameter space where the correct relic density can be produced. In Fig.~\ref{N-dark-matter} we show some correlations among the parameters which can produce the correct relic density. At tree level, $N$ does not couple to quarks. However, at one loop level, with $L_L$ and $\eta$ in the loop, a $\bar N$-$N$-$Z$ coupling can be generated, which can lead to a sizeable dark matter direct detection cross section while at the same time satisfying the dark matter relic density constraint. The results are shown in Fig.~\ref{N-dark-matter}. We again see that the recent Xenon100\cite{newxenon100} data put a stringent constraint on the allowed range of dark matter masses, but $N$ can still play the role of dark matter with appropriate masses. \\ \begin{figure} \includegraphics[width=6cm]{Y.eps}\\ \includegraphics[width=6cm]{detectionN.eps} \caption{Constraints on the coupling and dark matter mass from dark matter relic density and direct detection for $N$ as the dark matter. The different curves are for $\eta$ masses of 200 GeV, 300 GeV and 500 GeV, respectively.}\label{N-dark-matter} \end{figure} \section{ Conclusions} We have proposed two models, the $U(1)_D$ and $U(1)_S$ models, in which neutrino masses are generated through the inverse seesaw mechanism at the two loop level. In these models, a global $U(1)$ symmetry remains unbroken, leading to a stable new particle beyond the SM in each model. These stable new particles are natural candidates for dark matter. We find that these models can satisfy current experimental constraints from neutrino masses and mixing, the dark matter relic density and direct detection.
Because in these models the neutrino masses are generated at the two loop level and are of the inverse seesaw type, the seesaw scale can be as low as a few hundred GeV. This can lead to observable signatures. Before closing, we would like to make a few comments about some phenomenological implications of the models. One of them is related to Higgs properties. Although the Higgs couplings to SM particles are not modified at tree level, there are noticeable corrections at one loop level in the above two models. An important example is the modification to $h \to \gamma\gamma$. Because of the existence of the two terms $(\Delta^\dagger \Delta H^\dagger H)_\alpha$ in both models and the ($\eta^\dagger_i \eta_i H^\dagger_j H_j\;,\eta^\dagger_i \eta_j H^\dagger_j H_i$) terms in the $U(1)_S$ model, terms like $\Delta^{++}\Delta^{--} h$ and $\Delta^{+}\Delta^{-} h$ ($\eta^+\eta^- h$) will be generated after $H$ develops a vev, and $h\to \gamma\gamma$ can be modified. At present the experimental value\cite{LHC-higgs} for this channel is $1.9\pm 0.5$ (ATLAS) ($1.56\pm 0.43$ (CMS)) times that predicted by the SM. The central value is higher than the SM prediction. With a large enough $\lambda_{H\Delta }^\alpha$, one may bring the value close to the data. Another implication is related to probing the new degrees of freedom of the models at the LHC. In both models there are new charged particles, in the $\Delta$, $\eta$ and $D$ fields. The particles in these multiplets can be pair produced via electromagnetic and weak interactions. However, because in both models there are unbroken $U(1)$ symmetries, the new particles cannot decay into purely SM final states, making detection difficult. A possible signature is a charged new particle decaying into an SM particle and a dark matter particle: the SM particle is detected, while the dark matter carries away large missing transverse momentum and energy.
For example, in the $U(1)_D$ model with $S$ being the dark matter, $D^\pm$ can decay into a charged lepton $l^\pm$ and the dark matter $S$. In the $U(1)_S$ model with $N$ being the dark matter, $\eta^\pm$ can decay into a charged lepton and the dark matter. Finally, there are potentially large FCNC effects in the leptonic sector in these models. This is because the Yukawa couplings $Y_D$ in both models can be of order $O(0.1)$; at loop level, the exchange of $D$ and $S$ ($N$ and $S$) in the $U(1)_D$ ($U(1)_S$) model can generate the flavor changing radiative decays of charged leptons, $l\to l' \gamma$, with branching ratios close to the current experimental bounds\cite{he-ren}. Large $\mu \to e$ conversion is also possible. Near future improved experiments can test these models\cite{he-ren}. A detailed analysis will be presented elsewhere. \acknowledgments \vspace*{-1ex} This work was supported in part by the NSC of ROC, and the NNSF (grant No. 11175115) and the Shanghai science and technology commission (grant No. 11DZ2260700) of PRC. XGH would like to thank the Center for Theoretical Underground Physics and Related Areas (CETUP* 2012) in South Dakota for its hospitality and for partial support during the completion of this work. \bigskip
\section{Introduction} The classical statistical pattern recognition setting involves $$(X,Y),(X_1,Y_1),\dotsc,(X_n,Y_n) \stackrel{i.i.d.}{\sim} F_{X,Y},$$ where the $X_i:\Omega\mapsto \mathbb{R}^d$ are observed feature vectors and the $Y_i:\Omega\mapsto\{0,1\}$ are observed class labels for some probability space $\Omega$. We define $\mathcal{D}=\{(X_i,Y_i)\}$ as the training set. The goal is to learn a classifier $h(\cdot;\mathcal{D}): \mathbb{R}^d \to \{0,1\}$ such that the probability of error $\Pr[h(X;\mathcal{D})\neq Y|\mathcal{D}]$ approaches Bayes optimal as $n\to \infty$ for all distributions $F_{X,Y}$ -- universal consistency \citep{devroye1996probabilistic}. Here we consider the case wherein the feature vectors $X,X_1,\dotsc,X_n$ are unobserved, and we observe instead a latent position graph $G(X,X_1,\dotsc,X_n)$ on $n+1$ vertices. We show that a universally consistent classification rule (specifically, $k$-nearest neighbors) remains universally consistent for this extension of the pattern recognition setup to latent position graph models. Latent space models for random graphs \citep{Hoff2002} offer a framework in which a graph structure can be parametrized by latent vectors associated with each vertex. The complexities of the graph structure can then be characterized using well-known techniques for vector spaces. One approach, which we adopt here, is that given a latent space model for a graph, we first estimate the latent positions and then use the estimated latent positions to perform subsequent analysis. When the latent vectors determine the distribution of the random graph, accurate estimates of the latent positions will often lead to accurate subsequent inference. In particular, this paper considers the random dot product graph model introduced in \cite{nickel2006random} and \cite{young2007random}. This model supposes that each vertex is associated with a latent vector in $\Re^d$.
The probability that two vertices are adjacent is then given by the dot product of their respective latent vectors. We investigate the use of an eigen-decomposition of the observed adjacency matrix to estimate the latent vectors. The motivation for this estimator is that, had we observed the expected adjacency matrix (the matrix of adjacency probabilities), then this eigen-decomposition would return the original latent vectors (up to an orthogonal transformation). Provided the latent vectors are i.i.d.\ from any distribution $F$ on a suitable space $\mathcal{X}$, we show that we can accurately recover the latent positions. Because the graph model is invariant to orthogonal transformations of the latent vectors, the distribution $F$ is identifiable only up to orthogonal transformations. Consequently, our results show only that we estimate latent positions which can then be orthogonally transformed to be close to the true latent vectors. As many subsequent inference tasks are invariant to orthogonal transformations, it is not necessary to achieve a rotationally accurate estimate of the original latent vectors. For this paper, we investigate the inference task of vertex classification. This supervised or semi-supervised problem supposes that we have observed class labels for some subset of vertices and that we wish to classify the remaining vertices. To do this, we train a $k$-nearest-neighbor classifier on estimated latent vectors with observed class labels, which we then use to classify vertices with unobserved class labels. Our result states that this classifier is universally consistent, meaning that regardless of the distribution for the latent vectors, the error for our classifier trained on the estimated vectors converges to Bayes optimal for that distribution. The theorems as stated can be generalized in various ways without much additional work.
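The adjacency eigen-decomposition estimator can be sketched in a few lines of NumPy. This is a minimal illustration under simplifying assumptions (a one-dimensional latent space, latent positions drawn uniformly on $[0.2, 0.8]$, and sign-flipping as the only orthogonal ambiguity), not the exact construction analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample latent positions X_i i.i.d. from F (illustrative choice of F),
# form P = X X^T, and draw a symmetric hollow adjacency matrix A ~ Bernoulli(P).
n, d = 400, 1
X = rng.uniform(0.2, 0.8, size=(n, d))
P = X @ X.T
A = (rng.uniform(size=(n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # symmetric, zero diagonal

# Adjacency spectral embedding: top-d eigenpairs give Xhat = U |Lambda|^{1/2}.
vals, vecs = np.linalg.eigh(A)
idx = np.argsort(np.abs(vals))[::-1][:d]
Xhat = vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

# Resolve the sign ambiguity (the d = 1 orthogonal transformation).
if Xhat[:, 0].sum() < 0:
    Xhat = -Xhat
err = np.mean(np.abs(Xhat[:, 0] - X[:, 0]))
print(err)                                    # small for moderate n
```

The mean absolute deviation between the embedded and true positions shrinks as $n$ grows, consistent with the recovery result described above.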
For ease of notation and presentation, we chose to provide an illustrative example for the kind of results that can be achieved for the specific random dot product model. In the discussion we point out various ways that this can be generalized. The remainder of the paper is structured as follows. Section~\ref{sec:relwork} discusses previous work related to the latent space approach and spectral properties of random graphs. In section~\ref{sec:frame}, we introduce the basic framework for random dot product graphs and our proposed latent position estimator. In section~\ref{sec:rdpg}, we argue that the estimator is consistent, and in section~\ref{sec:knn} we show that the $k$-nearest-neighbors algorithm yields consistent vertex classification. In section~\ref{sec:disc} we consider some immediate ways the results presented herein can be extended and discuss some possible implications. Finally, section~\ref{sec:emp} provides illustrative examples of applications of this work through simulations and a graph derived from Wikipedia articles and hyper-links. \section{Related Work} \label{sec:relwork} The latent space approach is introduced in \cite{Hoff2002}. Generally, one posits that the adjacency of two vertices is determined by a Bernoulli trial with parameter depending only on the latent positions associated with each vertex, and edges are independent conditioned on the latent positions of the vertices. If we suppose that the latent positions are i.i.d.\ from some distribution, then the latent space approach is closely related to the theory of exchangeable random graphs \citep{Bickel2009,kallenberg2005probabilistic,aldous1981representations}. For exchangeable graphs, we have a (measurable) link function $g:[0,1]^2\mapsto[0,1]$ and each vertex is associated with a latent i.i.d.\ uniform $[0,1]$ random variable denoted $X_i$. Conditioned on the $\{X_i\}$, the adjacency of vertices $i$ and $j$ is determined by a Bernoulli trial with parameter $g(X_i,X_j)$. 
For a treatment of exchangeable graphs and estimation using the method of moments, see \cite{bickel2011method}. The latent space approach replaces the latent uniform random variables with random variables in some $\mathcal{X}\subset\Re^d$, and the link function $g$ has domain $\mathcal{X}^2$. These random graphs still have exchangeable vertices and so could be represented in the i.i.d.\ uniform framework. On the other hand, $d$-dimensional latent vectors allow for additional structure and aid the interpretation of the latent positions. In fact, the following result provides a characterization of {\em finite-dimensional} exchangeable graphs as random dot product graphs. First, we say $g$ is rank $d<\infty$ and positive semi-definite if $g$ can be written as $g(x,y)=\sum_{i=1}^d \psi_i(x)\psi_i(y)$ for some linearly independent functions $\psi_j:[0,1]\mapsto [-1,1]$. Using this definition and the inverse probability transform, one can easily show the following. \begin{proposition} An exchangeable random graph has a rank $d<\infty$ and positive semi-definite link function if and only if the random graph is distributed according to a random dot product graph with i.i.d. latent vectors in $\Re^d$. \end{proposition} \noindent Put another way, random dot product graphs are exactly the finite-dimensional exchangeable random graphs, and hence, they represent a key area for exploration when studying exchangeable random graphs. An important example of a latent space model is the stochastic blockmodel \citep{Holland1983}, where each latent vector can take one of only $b$ distinct values. The latent positions can be taken to be $\mathcal{X}=[b]=\{1,\dotsc,b\}$ for some positive integer $b$, the number of blocks. Two vertices with the same latent position are said to be members of the same block, and the block membership of each vertex determines the probabilities of adjacency. Vertices in the same block are said to be stochastically equivalent.
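A stochastic blockmodel with a positive semi-definite block probability matrix can be written as a random dot product graph whose latent positions take only $b$ distinct values. A minimal sketch with $b=2$ and illustrative latent vectors (chosen by us, not taken from the paper):

```python
import numpy as np

# Each block is assigned one latent vector; the block probability matrix B
# is the Gram matrix of dot products, so B is positive semi-definite and
# every entry lies in [0, 1] as required for Bernoulli adjacency.
v1 = np.array([0.8, 0.1])    # latent position shared by block-1 vertices
v2 = np.array([0.3, 0.5])    # latent position shared by block-2 vertices

B = np.array([[v1 @ v1, v1 @ v2],
              [v2 @ v1, v2 @ v2]])
print(B)
```

Vertices sharing a latent vector are stochastically equivalent, since their adjacency probabilities to every other vertex coincide.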
This model has been studied extensively, with many efforts focused on unsupervised estimation of vertex block membership \citep{Snijders1997Estimation,Bickel2009,Choi2010}. Note that \cite{STFP-2011} discusses the relationship between stochastic blockmodels and random dot product graphs. The value of the stochastic blockmodel is its strong notions of communities and parsimonious structure; however, the assumption of stochastic equivalence may be too strong for many scenarios. Many latent space approaches seek to generalize the stochastic blockmodel to allow for variation within blocks. For example, the mixed membership model of \cite{Airoldi2008} posits that a vertex could have partial membership in multiple blocks. In \cite{Handcock2007Modelbased}, latent vectors are presumed to be drawn from a mixture of multivariate normal distributions with the link function depending on the distance between the latent vectors. They use Bayesian techniques to estimate the latent vectors. Our work relies on techniques developed in \cite{rohe2011spectral} and \cite{STFP-2011} to estimate latent vectors. In particular, \cite{rohe2011spectral} prove that the eigenvectors of the normalized Laplacian can be orthogonally transformed to closely approximate the eigenvectors of the population Laplacian. Their results do not use a specific model but rather rely on assumptions for the Laplacian. \cite{STFP-2011} shows that for the directed stochastic blockmodel, the eigenvectors/singular vectors of the adjacency matrix can be orthogonally transformed to approximate the eigenvectors/singular vectors of the population adjacency matrix. \cite{fishkind2012consistent} extends these results to the case when the number of blocks in the stochastic blockmodel is unknown. \cite{Marchette2011VN} also uses techniques closely related to those presented here to investigate the semi-supervised vertex nomination task. Finally, another line of work is exemplified by \cite{oliveira2009concentration}.
This work shows that, under the independent edge assumption, the adjacency matrix and the normalized Laplacian concentrate around the respective population matrices in the sense of the induced $L^2$ norm. This work uses techniques from random matrix theory. Other work, such as \cite{chung2004spectra}, investigates the spectra of the adjacency and Laplacian matrices for random graphs under a different type of random graph model. \section{Framework} \label{sec:frame} Let $\mathcal{M}_n(A)$ and $\mathcal{M}_{nm}(A)$ denote the set of $n\times n$ and $n\times m$ matrices with values in $A$ for some set $A$. Additionally, for $\mathbf{M}\in \mathcal{M}_n(\Re)$, let $\lambda_i(\mathbf{M})$ denote the eigenvalue of $\mathbf{M}$ with the $i^\text{th}$ largest magnitude. All vectors are column vectors. Let $\mathcal{X}$ be a subset of the unit ball $\mathcal{B}(0,1)\subset\mathbb{R}^{d}$ such that $\langle x_1, x_2 \rangle \in [0,1],$ for all $x_1, x_2 \in \mathcal{X}$ where $\langle \cdot,\cdot\rangle$ denotes the standard Euclidean inner product. Let $F$ be a probability measure on $\mathcal{X}$ and let $X, X_1,X_2,\dotsc,X_n \stackrel{\mathrm{i.i.d.}}{\sim} F$. Define $\mathbf{X}:=[X_1,X_2,\dotsc,X_n]^\top :\Omega\mapsto \mathcal{M}_{n,d}(\mathbb{R})$ and $\mathbf{P}:= \mathbf{X}\mathbf{X}^\top:\Omega \mapsto \mathcal{M}_{n}(\mathbb{R})$. We assume that the (second moment) matrix $\mathbb{E}[X_1X_1^\top]\in\mathcal{M}_d(\Re)$ is rank $d$ and has distinct eigenvalues $\{\lambda_i(\mathbb{E}[XX^\top])\}$. In particular, we suppose there exists $\delta>0$ such that \begin{equation}\label{eq:momEigGap} 2\delta < \min_{i\neq j} |\lambda_i(\mathbb{E}[XX^\top ])-\lambda_j(\mathbb{E}[XX^\top ])| \quad\text{ and }\quad 2\delta<\lambda_d(\mathbb{E}[XX^\top ]). \end{equation} \begin{remark} The distinct eigenvalue assumption is not critical to the results that follow but is assumed for ease of presentation. The theorems hold in the general case with minor changes. 
\end{remark} \noindent Additionally, we assume that the dimension $d$ of the latent positions is known. Let $\mathbf{A}$ be a random symmetric hollow matrix such that the entries $\{\mathbf{A}_{ij}\}_{i < j}$ are independent Bernoulli random variables with $\Pr[\mathbf{A}_{ij}=1]=\mathbf{P}_{ij}$ for all $i,j\in [n]$, $i < j$. We will refer to $\mathbf{A}$ as the adjacency matrix that corresponds to a graph with vertex set $\{1,\dotsc,n\}$. Let $\tilde{\mathbf{U}}_\mathbf{A} \tilde{\mathbf{S}}_\mathbf{A} \tilde{\mathbf{U}}_\mathbf{A}^\top $ be the eigen-decomposition of $|\mathbf{A}|$ where $|\mathbf{A}| = (\mathbf{A}\mathbf{A}^\top)^{1/2}$ with $\tilde{\mathbf{S}}_\mathbf{A}$ having positive decreasing diagonal entries. Let $\mathbf{U}_\mathbf{A}\in\mathcal{M}_{n,d}(\mathbb{R})$ be given by the first $d$ columns of $\tilde{\mathbf{U}}_\mathbf{A}\in \mathcal{M}_{n}(\mathbb{R})$ and let $\mathbf{S}_\mathbf{A}\in \mathcal{M}_{d}(\mathbb{R})$ be given by the first $d$ rows and columns of $\tilde{\mathbf{S}}_\mathbf{A}\in \mathcal{M}_{n}(\mathbb{R})$. Let $\mathbf{U}_\mathbf{P}$ and $\mathbf{S}_\mathbf{P}$ be defined similarly. \section{Estimation of Latent Positions}\label{sec:rdpg} The key result of this section is the following theorem which shows that, using the eigen-decomposition of $|\mathbf{A}|$, we can accurately estimate the true latent positions up to an orthogonal transformation. \begin{theorem}\label{thm:main} With probability greater than $1-\frac{2(d^2+1)}{n^2}$, there exists an orthogonal matrix $\mathbf{W} \in \mathcal{M}_{d}(\mathbb{R})$ such that \begin{equation} \| \mathbf{U}_\mathbf{A} \mathbf{S}_\mathbf{A}^{1/2} \mathbf{W} - \mathbf{X} \| \leq 2d\sqrt{\frac{3\log n}{\delta^3}}.\label{eq:XBnd} \end{equation} Let $\mathbf{W}$ be as above and define $\hat{\mathbf{X}}=\mathbf{U}_\mathbf{A}\mathbf{S}_\mathbf{A}^{1/2}\mathbf{W}$ with row $i$ denoted by $\hat{X}_i$.
Then, for each $i\in[n]$ and all $\gamma<1$, \begin{equation} \Pr[\|\hat{X}_i -X_i\|^2>n^{-\gamma}] = O(n^{\gamma-1} \log n). \label{eq:xiBnd} \end{equation} \end{theorem} We now proceed to prove this result. First, the following result, proved in \cite{STFP-2011}, provides a useful Frobenius bound for the difference between $\mathbf{A}^2$ and $\mathbf{P}^2$. \begin{proposition}[\cite{STFP-2011}] \label{prop:froBound} For $\mathbf{A}$ and $\mathbf{P}$ as above, it holds with probability greater than $1-\frac{2}{n^2}$ that \begin{equation}\label{eq:APfroBound} \| \mathbf{A}^2-\mathbf{P}^2\|_F \leq \sqrt{3n^3\log n}. \end{equation} \end{proposition} The proof of this proposition is omitted; it uses the same Hoeffding bound as is used to prove Eq.~\eqref{eq:eigBound1} below. \begin{proposition}\label{prop:PeigBound} For $i\leq d$, it holds with probability greater than $1-\frac{2d^2}{n^2}$ that \begin{equation} |\lambda_i(\mathbf{P})-n\lambda_i(\mathbb{E}[X X^\top ])| \leq 2d^2 \sqrt{n\log n}, \label{eq:eigboundProp} \end{equation} and for $i>d$, $\lambda_i(\mathbf{P})=0$. If Eq.~\eqref{eq:eigboundProp} holds, then for $i,j\leq d+1$, $i\neq j$ and $\delta$ satisfying Eq.~\eqref{eq:momEigGap} and $n$ sufficiently large, we have \begin{equation} \label{eq:PeigGap} |\lambda_i(\mathbf{P}) - \lambda_j(\mathbf{P}) | > \delta n. \end{equation} \end{proposition} \begin{proof} First, $\lambda_i(\mathbf{P})=\lambda_i(\mathbf{X}\mathbf{X}^\top )=\lambda_i(\mathbf{X}^\top \mathbf{X})$ for $i\leq d$. Note each entry of $\mathbf{X}^\top \mathbf{X}$ is the sum of $n$ independent random variables each in $[-1,1]$: $(\mathbf{X}^\top\mathbf{X})_{ij} = \sum_{l=1}^n X_{li}X_{lj}$. This means we can apply Hoeffding's inequality to each entry of $\mathbf{X}^\top \mathbf{X}-n\mathbb{E}[XX^\top ]$ to obtain \begin{equation} \Pr[|(\mathbf{X}^\top \mathbf{X}-n\mathbb{E}[XX^\top ])_{ij}| \geq 2\sqrt{n\log n}] \leq \frac{2}{n^2}.
\label{eq:eigBound1} \end{equation} Using a union bound we have that $\Pr[\|\mathbf{X}^\top\mathbf{X}-n\mathbb{E}[XX^\top ]\|_F\geq 2d^2\sqrt{n\log n}] \leq \frac{2d^2}{n^2}$. Using Weyl's inequality \citep{horn85:_matrix_analy}, we have the result. Eq.~\eqref{eq:PeigGap} follows from Eq.~\eqref{eq:eigboundProp} provided $2d^2\sqrt{n\log n}<n\delta$, which is the case for $n$ large enough. \end{proof} This next lemma shows that we can bound the difference between the eigenvectors of $\mathbf{A}$ and $\mathbf{P}$; our main results concern scaled versions of these eigenvectors. \begin{lemma}\label{lem:eigvecBnd} With probability greater than $1-\frac{2(d^2+1)}{n^2}$, there exists a choice for the signs of the columns of $\mathbf{U}_\mathbf{A}$ such that for each $i\leq d$, \begin{equation} \| (\mathbf{U}_\mathbf{A})_{\cdot i}- (\mathbf{U}_\mathbf{P})_{\cdot i} \|_F \leq \sqrt{\frac{3\log n}{\delta^2 n}}. \label{eq:eigVecBound} \end{equation} \end{lemma} \begin{proof} This is a result of applying the Davis-Kahan Theorem (\cite{davis70}; see also \cite{rohe2011spectral}) to $\mathbf{A}$ and $\mathbf{P}$. Propositions~\ref{prop:froBound} and~\ref{prop:PeigBound} give that the eigenvalue gap for $\mathbf{P}^2$ is greater than $\delta^2 n^2$ and that $\|\mathbf{A}^2-\mathbf{P}^2\|_F\leq \sqrt{3n^3 \log n }$ with probability greater than $1-\frac{2(d^2+1)}{n^2}$. Apply the Davis-Kahan theorem to each eigenvector of $\mathbf{A}$ and $\mathbf{P}$, which are the same as the eigenvectors of $\mathbf{A}^2$ and $\mathbf{P}^2$, respectively, to get \begin{equation} \min_{r_i\in\{-1,1\}} \| (\mathbf{U}_\mathbf{A})_{\cdot i}- (\mathbf{U}_\mathbf{P})_{\cdot i} r_i\|_F \leq \sqrt{\frac{3\log n}{\delta^2 n}} \label{eq:eigveciBound} \end{equation} for each $i\leq d$. The claim then follows by choosing $\mathbf{U}_\mathbf{A}$ so that $r_i=1$ minimizes Eq.~\eqref{eq:eigveciBound} for each $i\leq d$. \end{proof} We now have the ingredients to prove our main theorem.
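Before the proof, a numerical sketch may help fix ideas. The code below (ours, with an arbitrary Dirichlet-based latent distribution) computes the estimator $\mathbf{U}_\mathbf{A}\mathbf{S}_\mathbf{A}^{1/2}$ from the eigen-decomposition of $|\mathbf{A}|$ and aligns it to the true positions with the best orthogonal matrix, obtained here via an orthogonal Procrustes step:

```python
import numpy as np

def adjacency_spectral_embedding(A, d):
    """Return U_A S_A^{1/2}: top-d eigenvectors of A (equivalently of
    |A| = (A A^T)^{1/2}, which shares eigenvectors with A), scaled by
    the square roots of the eigenvalue magnitudes."""
    evals, evecs = np.linalg.eigh(A)           # A symmetric: real spectrum
    order = np.argsort(-np.abs(evals))[:d]     # d largest in magnitude
    return evecs[:, order] * np.sqrt(np.abs(evals[order]))

rng = np.random.default_rng(1)
X = rng.dirichlet([2, 2, 2], size=500)[:, :2]  # latent positions (illustrative)
P = X @ X.T
A = np.triu(rng.binomial(1, P), 1)
A = A + A.T                                    # symmetric, hollow adjacency
Xhat = adjacency_spectral_embedding(A, d=2)
# Align Xhat to X with the best orthogonal W (Procrustes), playing the
# role of the orthogonal matrix whose existence the theorem asserts.
U, _, Vt = np.linalg.svd(Xhat.T @ X)
W = U @ Vt
err = np.linalg.norm(Xhat @ W - X) ** 2
```

In runs of this sketch the squared alignment error `err` stays small relative to $n$, in line with the logarithmic bound above.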
\begin{proof}[Proof of Theorem~\ref{thm:main}] The following argument assumes that Eqs.~\eqref{eq:eigVecBound} and \eqref{eq:PeigGap} hold, which occurs with probability greater than $1-\frac{2(d^2+1)}{n^2}$. By the triangle inequality, we have \begin{equation} \begin{split} \|\mathbf{U}_\mathbf{A} \mathbf{S}_\mathbf{A}^{1/2}-\mathbf{U}_\mathbf{P} \mathbf{S}_\mathbf{P}^{1/2}\|_F &\leq \|\mathbf{U}_\mathbf{A} \mathbf{S}_\mathbf{A}^{1/2}-\mathbf{U}_\mathbf{A} \mathbf{S}_\mathbf{P}^{1/2}\|_F+\|\mathbf{U}_\mathbf{A} \mathbf{S}_\mathbf{P}^{1/2}-\mathbf{U}_\mathbf{P} \mathbf{S}_\mathbf{P}^{1/2}\|_F\\ & = \|\mathbf{U}_\mathbf{A}(\mathbf{S}_\mathbf{A}^{1/2}-\mathbf{S}_\mathbf{P}^{1/2})\|_F + \|(\mathbf{U}_\mathbf{A}-\mathbf{U}_\mathbf{P})\mathbf{S}_\mathbf{P}^{1/2}\|_F. \end{split} \label{eq:triIneqBound} \end{equation} Note that \begin{equation} \lambda_i^{1/2}(|\mathbf{A}|)-\lambda_i^{1/2}(\mathbf{P}) =\frac{\lambda_i^2(|\mathbf{A}|) - \lambda_i^2(\mathbf{P})}{ (\lambda_i(|\mathbf{A}|)+\lambda_i(\mathbf{P}))(\lambda_i(|\mathbf{A}|)^{1/2}+\lambda_i(\mathbf{P})^{1/2})} \label{eq:factoring} \end{equation} where the numerator of the right hand side is less than $\sqrt{3 n^3 \log n}$ by Proposition~\ref{prop:froBound} and the denominator is greater than $(\delta n)^{3/2}$ by Proposition~\ref{prop:PeigBound}. The first term in Eq.~\eqref{eq:triIneqBound} is thus bounded by $d\sqrt{3 \log n/\delta^3}$. For the second term, $(\mathbf{S}_\mathbf{P})_{ii}~\leq~n$ and $\|\mathbf{U}_\mathbf{A}-\mathbf{U}_\mathbf{P}\|_F\leq d\sqrt{\frac{3\log n}{\delta^2 n}}$. We have established that with probability greater than $1-\frac{2(d^2+1)}{n^2}$, \begin{equation} \|\mathbf{U}_\mathbf{A} \mathbf{S}_\mathbf{A}^{1/2}-\mathbf{U}_\mathbf{P} \mathbf{S}_\mathbf{P}^{1/2}\|_F \leq 2d\sqrt{\frac{3\log n}{\delta^3}}. \label{eq:mainBound} \end{equation} We now will show that an orthogonal transformation will give us the same bound in terms of $\mathbf{X}$.
Let $\mathbf{Y} = \mathbf{U}_\mathbf{P} \mathbf{S}_\mathbf{P}^{1/2}$. Then $\mathbf{Y} \mathbf{Y}^\top = \mathbf{P} = \mathbf{X} \mathbf{X}^\top$ and thus $ \mathbf{Y} \mathbf{Y}^\top \mathbf{X} = \mathbf{X} \mathbf{X}^\top \mathbf{X}$. Because $\mathrm{rank}(\mathbf{P}) = d = \mathrm{rank}(\mathbf{X})$, we have that $\mathbf{X}^\top \mathbf{X}$ is non-singular and hence $\mathbf{X} = \mathbf{Y} \mathbf{Y}^\top \mathbf{X} (\mathbf{X}^\top \mathbf{X})^{-1}$. Let $\mathbf{W} = \mathbf{Y}^{\top} \mathbf{X} (\mathbf{X}^\top \mathbf{X})^{-1}$. It is straightforward to verify that $\mathrm{rank}(\mathbf{W}) = d$ and that $\mathbf{W}^\top\mathbf{ W} = \mathbf{I}$. $\mathbf{W}$ is thus an orthogonal matrix, and $\mathbf{X} = \mathbf{Y} \mathbf{W} = \mathbf{U}_\mathbf{P} \mathbf{S}_\mathbf{P}^{1/2} \mathbf{W}$. Eq.~\eqref{eq:XBnd} is thus established. Now, we will prove Eq.~\eqref{eq:xiBnd}. Note that because the $\{X_i\}$ are i.i.d., the $\{\hat{X}_i\}$ are exchangeable and hence identically distributed. As a result, the random variables $\|\hat{X}_i -X_i\|$ are identically distributed. Note that for sufficiently large $n$, by conditioning on the event in Eq.~\eqref{eq:XBnd}, we have \begin{equation} \mathbb{E}[\|\mathbf{X}-\hat{\mathbf{X}}\|^2] \leq \left(1-\frac{2(d^2+1)}{n^2}\right)(2d)^2\frac{3\log n}{\delta^3}+ \frac{2(d^2+1)}{n^2} 2n= O\left(\frac{d^2\log n}{\delta^3}\right) \end{equation} because the worst case bound is $\|\mathbf{X}-\hat{\mathbf{X}}\|^2\leq 2n$ with probability 1. We also have that \begin{equation} \mathbb{E}\left[\sum_{i=1}^{n} \mathbb{I}\{\|\hat{X}_i -X_i\|^2>n^{-\gamma}\} n^{-\gamma}\right] \leq \mathbb{E}[\|\mathbf{X}-\hat{\mathbf{X}}\|^2], \end{equation} and because the $\|\hat{X}_i -X_i\|$ are identically distributed, the left hand side is simply $n^{1-\gamma}\Pr[\|\hat{X}_i -X_i\|^2>n^{-\gamma}]$. Combining the last two displays gives $\Pr[\|\hat{X}_i -X_i\|^2>n^{-\gamma}] \leq n^{\gamma-1}\,\mathbb{E}[\|\mathbf{X}-\hat{\mathbf{X}}\|^2] = O(n^{\gamma-1}\log n)$, which establishes Eq.~\eqref{eq:xiBnd}.
\end{proof} \section{Consistent Vertex Classification}\label{sec:knn} So far we have shown that using the eigen-decomposition of $|\mathbf{A}|$, we can consistently estimate all latent positions simultaneously (up to an orthogonal transformation). One could imagine that this will lead to accurate inference for various exploitation tasks of interest. For example, \cite{STFP-2011} explored the use of this embedding for unsupervised clustering of vertices in the simpler stochastic blockmodel setting. In this section, we will explore the implications of consistent latent position estimation in the supervised classification setting. In particular, we will prove that universally consistent classification using $k$-nearest-neighbors remains valid when we select the neighbors using the estimated vectors rather than the true but unknown latent positions. First, let us expand our framework. Let $\mathcal{X}\subset\Re^d$ be as in section~\ref{sec:frame} and let $F_{X,Y}$ be a distribution on $\mathcal{X}\times\{0,1\}$. Let $(X_1,Y_1), (X_2,Y_2),\dotsc,(X_n,Y_n),(X_{n+1},Y_{n+1})\stackrel{\mathrm{i.i.d.}}{\sim} F_{X,Y}$ and let $\mathbf{P}\in\mathcal{M}_{n+1}([0,1])$ and $\mathbf{A}\in\mathcal{M}_{n+1}(\{0,1\})$ be as in section~\ref{sec:frame}. Here the $Y_i$s are the class labels for the vertices in the graph corresponding to the adjacency matrix $\mathbf{A}$. We suppose that we observe only $\mathbf{A}$, the adjacency matrix, and $Y_1,\dotsc,Y_n$, the class labels for all but the last vertex. Our goal is to accurately classify this last vertex, so for notational convenience define $X:=X_{n+1}$ and $Y:=Y_{n+1}$. Let the rows of $\mathbf{U}_\mathbf{A}\mathbf{S}_\mathbf{A}^{1/2}$ be denoted by $\zeta_1^\top,\dotsc,\zeta_{n+1}^\top$. The $k$-nearest-neighbor rule for $k$ odd is defined as follows. For $1\leq i \leq n$, let $W_{ni}(X)=1/k$ if $\zeta_i$ is one of the $k$ nearest points to $\zeta_{n+1}$ from among $\{\zeta_i\}_{i=1}^n$; $W_{ni}(X)=0$ otherwise.
(We break ties by selecting the neighbor with the smallest index.) The $k$-nearest-neighbor rule is then given by $h_n(x)=\mathbb{I}\{\sum_{i=1}^n W_{ni}(x) Y_i > \frac{1}{2}\}$. It is a well-known theorem of \cite{stone1977consistent} that, had we observed the original $\{X_i\}$, the $k$-nearest-neighbor rule using the Euclidean distance from $\{X_i\}$ to $X$ is universally consistent provided $k\to\infty$ and $k/n\to 0$. This means that for any distribution $F_{X,Y}$, \begin{equation} \mathbb{E}[L_n] := \mathbb{E}[\Pr[\tilde{h}_n(X) \neq Y|(X_1,Y_1), (X_2,Y_2),\dotsc,(X_n,Y_n)]] \to \Pr[h^*(X)\neq Y]=:L^* \end{equation} as $n\to \infty$, where $\tilde{h}_n$ is the standard $k$-nearest-neighbor rule trained on the $\{(X_i, Y_i)\}$ and $h^*$ is the (optimal) Bayes rule. This theorem relies on the following very general result, also due to \cite{stone1977consistent}; see also \cite{devroye1996probabilistic}, Theorem~6.3. \begin{theorem}[\cite{stone1977consistent}]\label{thm:stone} Assume that for any distribution of $X$, the weights $W_{ni}$ satisfy the following three conditions: \begin{enumerate}[(i)] \item \label{stone1} There exists a constant $c$ such that for every nonnegative measurable function $f$ satisfying $\mathbb{E}[f(X)]<\infty$, \begin{equation}\label{eq:stone1} \mathbb{E}\left[ \sum_{i=1}^{n} W_{ni}(X) f(X_i)\right]\leq c \mathbb{E}[f(X)]. \end{equation} \item \label{stone2} For all $a>0$, \begin{equation}\label{eq:stone2} \lim_{n\to\infty} \mathbb{E}\left[ \sum_{i=1}^{n} W_{ni}(X)\mathbb{I}\{\|X_i-X\|>a\} \right]=0 \end{equation} \item \label{stone3} \begin{equation} \lim_{n\to\infty} \mathbb{E}\left[\max_{1\leq i\leq n} W_{ni}(X) \right]=0 \end{equation} \end{enumerate} Then $h_n(x) = \mathbb{I}\{\sum_{i=1}^n W_{ni}(x) Y_i > 1/2\}$ is universally consistent. \end{theorem} \begin{remark} Recall that the $\{\hat{X}_i\}$ are defined in Theorem~\ref{thm:main}.
Because the $\{\hat{X}_i\}$ are obtained via an orthogonal transformation of the $\{\zeta_i\}$, the nearest neighbors of $\hat{X}=\hat{X}_{n+1}$ are the same as those of $\zeta_{n+1}$. As a result of this and the relationship between $\mathbf{X}$ and $\hat{\mathbf{X}}$, we work using the $\{\hat{X}_i\}$, even though these cannot be known without some additional knowledge. \end{remark} To prove that the $k$-nearest-neighbor rule for the $\{\hat{X}_i\}$ is universally consistent, we must show that the corresponding $W_{ni}$ satisfy these conditions. The methods to do this are adapted from the proof presented in \cite{devroye1996probabilistic}. We will outline the steps of the proof, but the details follow {\em mutatis mutandis} from the standard proof. First, the following lemma is adapted from \cite{devroye1996probabilistic} by using a triangle inequality argument. \begin{lemma} Suppose $k/n\to0$. If $X\in \mathrm{supp}(F_X)$, then $\|\hat{X}_{(k)}(\hat{X})-\hat{X}\|\to 0$ almost surely, where $\hat{X}_{(k)}(\hat{X})$ is the $k$-th nearest neighbor of $\hat{X}$ among $\{\hat{X}_i\}_{i=1}^n$. \end{lemma} Condition~\eqref{stone3} follows immediately from the definition of the $W_{ni}$. The remainder of the proof follows with few changes after recognizing that the random variables $\{(X_i,\hat{X}_i)\}$ are exchangeable. Overall, we have the following universal consistency result. \begin{theorem} \label{thm:univCons} If $k\to\infty$ and $k/n\to 0$ as $n\to\infty$, then the $W_{ni}(X)$ satisfy the conditions of Theorem~\ref{thm:stone} and hence $\mathbb{E}[\Pr[h_n(\hat{X})\neq Y\,|\,\mathbf{A},\{Y_i\}_{i=1}^n]]=\mathbb{E}[L_n]\to L^*_X$. \end{theorem} \section{Extensions} \label{sec:disc} The results presented thus far are for the specific problem of determining one unobserved class label for a vertex in a random dot product graph. In fact, the techniques used can be extended to somewhat more general settings without significant additional work.
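As an end-to-end illustration of this pipeline (our sketch; the Dirichlet latent distribution and the constants are illustrative, not from the paper), one can embed a graph, hold out the last vertex's label, and classify it with a plain $k$-nearest-neighbor vote on the estimated positions:

```python
import numpy as np

def knn_classify(train_pts, train_labels, query, k):
    """Majority vote among the k nearest training points (k odd); a stable
    argsort breaks distance ties in favor of the smallest index, matching
    the tie-breaking convention above."""
    dist = np.linalg.norm(train_pts - query, axis=1)
    nearest = np.argsort(dist, kind="stable")[:k]
    return int(train_labels[nearest].mean() > 0.5)

rng = np.random.default_rng(2)
n = 500
X = rng.dirichlet([2, 2, 2], size=n + 1)[:, :2]
Y = (X[:, 0] < X[:, 1]).astype(int)            # labels with Bayes error L* = 0
A = np.triu(rng.binomial(1, X @ X.T), 1)
A = A + A.T                                    # symmetric, hollow adjacency
# Embed all n+1 vertices: top-2 scaled eigenvectors of the adjacency matrix.
evals, evecs = np.linalg.eigh(A)
order = np.argsort(-np.abs(evals))[:2]
Xhat = evecs[:, order] * np.sqrt(np.abs(evals[order]))
# Classify the held-out vertex from its estimated latent position.
k = 2 * int(np.sqrt(n) / 4) + 1                # odd k with k/n -> 0
yhat = knn_classify(Xhat[:n], Y[:n], Xhat[n], k)
```

Only the adjacency matrix and the first $n$ labels are used to produce `yhat`, mirroring the observation model above.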
\subsection{Classification} For example, the results in section~\ref{sec:knn} are stated in the case that we have observed the class labels for all but one vertex. However, the universal consistency of the $k$-nearest-neighbor classifier remains valid provided the number of vertices $m$ with observed vertex class labels goes to infinity and $k/m\to 0$ as the number of vertices $n\to\infty$. In other words, we may train the $k$-nearest-neighbor classifier on a smaller subset of the estimated latent vectors provided the size of that subset goes to $\infty$. On the other hand, if we fix the number of observed class labels $m$ and the classification rule $h_m$ and let the number of vertices tend to $\infty$, then we can show the probability of incorrectly classifying a vertex will converge to $L_m=\Pr[h_m(Z)\neq Y]$. Additionally, our results also hold when the class labels $Y$ can take more than two but still finitely many values. In fact, the results in section~\ref{sec:knn} and Eq.~\eqref{eq:xiBnd} from Theorem~\ref{thm:main} rely only on the fact that the $\{X_i\}$ are i.i.d.\ and bounded, the $\{(X_i,\hat{X}_i)\}$ are exchangeable, and $\|\mathbf{X}-\hat{\mathbf{X}}\|_F^2$ can be bounded with high probability by a $O(\log n)$ function. The random graph structure provided in our framework is of interest, but it is the total noise bound that is crucial for the universal consistency claim to hold. \subsection{Latent Position Estimation} In section~\ref{sec:rdpg}, we state our results for the random dot product graph model. We can generalize our results immediately by replacing the dot product with a bi-linear form, $g(x,y)=x^\top (\mathbf{I}_{d'} \oplus(-\mathbf{I}_{d''}))y$, where $\mathbf{I}_d$ is the $d\times d$ identity matrix. This model has the interpretation that similarities in the first $d'$ dimensions increase the probability of adjacency, while similarities in the last $d''$ dimensions reduce the probability of adjacency.
All the results remain valid under this model, and in fact, arguments in \cite{oliveira2009concentration} can be used to show that the signature of the bi-linear form can also be estimated consistently. We also recall that the assumption of distinct eigenvalues for $\mathbb{E}[XX^\top]$ can be removed with minor changes. In particular, Lemma~\ref{lem:eigvecBnd} applies to groups of eigenvalues, and subsequent results can be adapted without changing the order of the bounds. This work focuses on undirected graphs and this assumption is used explicitly throughout section~\ref{sec:rdpg}. We believe moderate modifications would lead to similar results for directed graphs, such as in \cite{STFP-2011}; however, at present we do not investigate this problem. We also note that we assume the graph has no loops so that $\mathbf{A}$ is hollow. This assumption can be dropped, and in fact, the impact of the diagonal is asymptotically negligible, provided each entry is bounded. \cite{Marchette2011VN} suggests that augmenting the diagonal may improve latent position estimation for finite samples. In \cite{rohe2011spectral}, the number of blocks in the stochastic blockmodel, which is related to $d$ in our setting \citep{STFP-2011}, is allowed to grow with $n$; our work can also be extended to this setting. In this case, it will be the interaction between the rate of growth of $d$ and the rate that $\delta$ vanishes that controls the bounds in Theorem~\ref{thm:main}. Additionally, the consistency of $k$-nearest-neighbors when the dimension grows is less well understood, and results such as Stone's theorem (Theorem~\ref{thm:stone}) do not apply. In addition to keeping $d$ fixed, we also assume that $d$ is known. \cite{fishkind2012consistent} and \cite{STFP-2011} suggest consistent methods to estimate the latent space dimension. The results in \cite{oliveira2009concentration} can also be used to derive thresholds for eigenvalues to estimate $d$.
Finally, \cite{fishkind2012consistent} and \cite{Marchette2011VN} also consider that the edges may be attributed; for example, if edges represent a communication, then the attributes could represent the topic of the communication. The attributed case can be thought of as a set of adjacency matrices, and we can embed each separately and concatenate the embeddings. \cite{fishkind2012consistent} argues that this method works under the attributed stochastic blockmodel, and similar arguments could likely be used to extend the current work. \subsection{Extension to the Laplacian} The eigen-decomposition of the graph Laplacian is also widely used for similar inference tasks. In this section, we argue informally that our results extend to the Laplacian. We will consider a slight modification of the standard normalized Laplacian as defined in \cite{rohe2011spectral}. This modification scales the Laplacian in \cite{rohe2011spectral} by $n-1$ so that the first $d$ eigenvalues of our matrix are $O(n)$ rather than $O(1)$ as for the standard normalized Laplacian. Let $\mathbf{L}:=\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}$ where $\mathbf{D}$ is diagonal with $\mathbf{D}_{ii} := \frac{1}{n-1}\sum_{j=1}^n \mathbf{A}_{ij}$. Additionally, let $\mathbf{Q}:=\bar{\mathbf{D}}^{-1/2}\mathbf{P}\bar{\mathbf{D}}^{-1/2}$ where $\bar{\mathbf{D}}$ is diagonal with \begin{equation} \bar{\mathbf{D}}_{ii} := \frac{1}{n-1}\mathbb{E}\left[\sum_{j=1}^n \mathbf{A}_{ij} | \ \mathbf{X}\right] = \frac{1}{n-1}\sum_{j\neq i} \mathbf{P}_{ij} = \frac{1}{n-1} \sum_{j\neq i} \langle X_i,X_j\rangle. \end{equation} Finally, define $q:\Re^d\times\Re^d\mapsto\Re^d$ as $q(x,y):=\frac{x}{\sqrt{\langle x,y\rangle}}$, $Z_i:= q(X_i,\frac{1}{n-1}\sum_{j\neq i} X_j)$ and $\tilde{Z}_i:=q(X_i,\mathbb{E}[X])$.
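The Laplacian embedding just described can be sketched as follows (our illustration; the square-root eigenvalue scaling mirrors the adjacency embedding of section~\ref{sec:frame}, and the latent distribution is an arbitrary choice):

```python
import numpy as np

def laplacian_spectral_embedding(A, d):
    """Embed via the scaled normalized Laplacian L = D^{-1/2} A D^{-1/2}
    with D_ii = (1/(n-1)) * sum_j A_ij, as defined above, returning the
    top-d eigenvectors scaled by square roots of eigenvalue magnitudes."""
    n = A.shape[0]
    deg = A.sum(axis=1) / (n - 1)            # assumes no isolated vertices
    Dinv = np.diag(1.0 / np.sqrt(deg))
    L = Dinv @ A @ Dinv
    evals, evecs = np.linalg.eigh(L)
    order = np.argsort(-np.abs(evals))[:d]
    return evecs[:, order] * np.sqrt(np.abs(evals[order]))

rng = np.random.default_rng(4)
X = rng.dirichlet([2, 2, 2], size=300)[:, :2]
A = np.triu(rng.binomial(1, X @ X.T), 1)
A = A + A.T                                  # symmetric, hollow adjacency
Zhat = laplacian_spectral_embedding(A, d=2)
```

The rows of `Zhat` play the role of the $\hat{Z}_i$: estimates of the degree-normalized positions $Z_i$ rather than of the $X_i$ themselves.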
Because the pairwise dot products of the rows of $\bar{\mathbf{D}}^{-1/2}\mathbf{X}$ are the same as the entries of $\mathbf{Q}$, the scaled eigenvectors of $\mathbf{Q}$ must be an orthogonal transformation of the $\{Z_i\}$. Further, note that for large $n$, $Z_i$ and $\tilde{Z}_i$ will be close with high probability because $\frac{1}{n-1}\sum_{j\neq i} X_j\stackrel{\mathrm{a.s.}}{\to}\mathbb{E}[X]$ and the function $q(X_i,\cdot)$ is smooth almost surely. Additionally, the $\{\tilde{Z}_i\}$ are i.i.d.\ and $q(\cdot,\mathbb{E}[X])$ is one-to-one so that the Bayes optimal error rate is the same for the $\{\tilde{Z}_i\}$ as for the $\{X_i\}$: $L^*_X = L^*_{\tilde{Z}}$. If the further assumption that the minimum expected degree among all vertices is greater than $\sqrt{2}n/\sqrt{\log n}$ holds, then the assumptions of Theorem~2.2 in \cite{rohe2011spectral} are satisfied. Let $\hat{Z}_i$ denote the $i^\mathrm{th}$ row of the matrix $\mathbf{U}_\mathbf{L}\mathbf{S}_\mathbf{L}^{1/2}$ defined analogously to section~\ref{sec:frame} and let $\tilde{\mathbf{Z}}$ be the matrix with row $i$ given by $\tilde{Z}_i^\top$. Using the results in \cite{rohe2011spectral} and similar tools to those we have used thus far, one can show that $\min_{\mathbf{W}}\|\mathbf{U}_\mathbf{L}\mathbf{S}_\mathbf{L}^{1/2}\mathbf{W}-\tilde{\mathbf{Z}}\|_F^2$ can be bounded with high probability by a function in $O(\log n)$. As discussed above, this is sufficient for $k$-nearest-neighbors trained on $\{(\hat{Z}_i,Y_i)\}$ to be universally consistent. In this paper we do not investigate the comparative value of the eigen-decomposition of the Laplacian versus that of the adjacency matrix, but one factor may be the properties of the map $q$ defined above as applied to different distributions on $\mathcal{X}$. \section{Experiments} \label{sec:emp} In this section we present empirical results for a graph derived from Wikipedia links as well as simulations for an example wherein the $\{X_i\}$ arise from a Dirichlet distribution.
\subsection{Simulations} \label{sec:sim} \begin{figure}[t] \begin{center} \includegraphics[width=.7\textwidth]{rdpgSim_scatterXhat_Lstar0.pdf} \end{center} \caption{An example of the estimated latent positions $\{\hat{X}_i\}$ for the distribution described in section~\ref{sec:sim}. Each point is colored according to the class labels $\{Y_i\}$. For the original latent positions $\{X_i\}$, the two classes would be perfectly separated by the line $y=x$. In this figure the two classes are nearly separated but have some overlap. Note also that some estimated positions are outside the support of the original distribution.} \label{fig:simScatter} \end{figure} To demonstrate our results, we considered a problem where perfect classification is possible. Each $X_i:\Omega \mapsto\Re^2$ is distributed according to a Dirichlet distribution with parameter $\alpha=[2,2,2]^\top$, where we keep just the first two coordinates. The class labels are determined by the $X_i$ with $Y_i=\mathbb{I}\{X_{i1}<X_{i2}\}$, so in particular $L^*=0$. For each $n\in\{100,200,\dots,2000\}$, we simulated 500 instances of the $\{X_i\}$ and sampled the associated random graphs. For each graph, we used our technique to embed each vertex in two dimensions. To facilitate comparisons, we used the matrix $\mathbf{X}$ to construct the matrix $\hat{\mathbf{X}}$ via transformation by the optimal orthogonal $\mathbf{W}$. Figure~\ref{fig:simScatter} illustrates our embedding for $n=2000$, with each point corresponding to a row of $\hat{\mathbf{X}}$ and colored according to the class labels $\{Y_i\}$. To demonstrate our results from section~\ref{sec:rdpg}, figure~\ref{fig:simFroErr} shows the average square error in the latent position estimation per vertex. \begin{figure} \begin{center} \includegraphics[width=.9\textwidth]{rdpgSim_errfro_Lstar0_errBar.pdf} \end{center} \caption{Mean square error versus number of vertices.
This figure shows the mean square error in latent position estimation per vertex, given by $\|\hat{\mathbf{X}}-\mathbf{X}\|_F^2/n$, for the simulation described in section~\ref{sec:sim}. The error bars are given by the standard deviation of the average square error over 500 Monte Carlo replicates for each $n$. On average, the estimated latent positions converge rapidly to the true latent positions as the number of vertices in the graph increases.} \label{fig:simFroErr} \end{figure} For each graph, we used leave-one-out cross validation to evaluate the error rate for $k$-nearest-neighbors with $k=2\lfloor\sqrt{n}/4\rfloor +1$. We suppose that we observe all but one class label, as in section~\ref{sec:knn}. Figure~\ref{fig:simLhatBoxplot} shows the classification error rates. The black line shows the classification error when classifying using $\hat{\mathbf{X}}$ while the red line shows the classification error when classifying using $\mathbf{X}$. Unsurprisingly, classifying using $\hat{\mathbf{X}}$ gives worse performance. However, we still see steady improvement as the number of vertices increases, as predicted by our universal consistency result. Indeed, this figure suggests that the rates of convergence may be similar for both $\mathbf{X}$ and $\hat{\mathbf{X}}$. \begin{figure} \begin{center} \includegraphics[width=.9\textwidth]{rdpgSim_errkrootn_Lstar0_errBar.pdf} \end{center} \caption{Leave-one-out cross validation classification error estimates using $k$-nearest neighbors for the simulations described in section~\ref{sec:sim}. The black line shows the classification error when classifying using $\hat{\mathbf{X}}$ while the red line shows the error rates when classifying using $\mathbf{X}$. Error bars show the standard deviation over the 500 Monte Carlo replicates. Chance classification error is $0.5$; $L^*=0$. This figure suggests the rates of convergence may be similar for both $\mathbf{X}$ and $\hat{\mathbf{X}}$.
} \label{fig:simLhatBoxplot} \end{figure} \subsection{Wikipedia Graph} For this data (\cite{ma2012fusion}, \url{http://www.cis.jhu.edu/~zma/zmisi09.html}), each vertex in the graph corresponds to a Wikipedia page and the edges correspond to the presence of a hyperlink between two pages (in either direction). We consider this as an undirected graph. Every article within two hyperlinks of the article ``Algebraic Geometry'' was included as a vertex in the graph. This resulted in $n=1382$ vertices. Additionally, each document, and hence each vertex, was manually labeled as one of the following: Category (119), Person (372), Location (270), Date (191) and Math (430). To investigate the implications of the results presented thus far, we performed a pair of illustrative investigations. First, we applied our technique to random induced subgraphs and used leave-one-out cross validation to estimate error rates for each subgraph. We used $k=9$ and $d=10$ and performed 100 Monte Carlo iterates of random induced subgraphs with $n\in\{100,200,\dotsc,1300\}$ vertices. Figure~\ref{fig:wiki_subgraph} shows the mean classification error estimates using leave-one-out cross validation on each randomly selected subgraph. Note that the chance error rate is $1-430/1382 \approx 0.689$. \begin{figure} \begin{center} \includegraphics[width=\textwidth]{Wiki_errorForRandomSubgraphs_errbar.pdf} \end{center} \caption{Error rate using leave-one-out cross validation for random induced subgraphs. Chance classification error, $\approx 0.689$, is shown in blue. This illustrates the improvement in vertex classification as the number of vertices and the number of observed class labels increase. } \label{fig:wiki_subgraph} \end{figure} We also investigated the performance of our procedure for different choices of $d$, the embedding dimension, and $k$, the number of nearest neighbors.
Because this data has 5 classes, we use the standard $k$-nearest-neighbor algorithm and break ties by choosing the first label as ordered above. Using leave-one-out cross validation, we calculated an estimated error rate for each $d\in \{1,\dotsc,50\}$ and $k\in\{1,5,9,13,17\}$. The results are shown in Figure~\ref{fig:wiki_kd}. This figure suggests that our technique will be robust to different choices of $k$ and $d$ within some range. \begin{figure}[t!] \begin{center} \includegraphics[width=.9\textwidth]{Wiki_knnByDimension.pdf} \end{center} \caption{Leave-one-out error rate plotted against the embedding dimension $d$ for different choices of $k$ (see legend). Each line corresponds to a different choice for the number of nearest neighbors $k$. All results are better than chance $\approx 0.689$. We see that the method is robust to changes of $k$ and $d$ near the optimal range.} \label{fig:wiki_kd} \end{figure} \section{Conclusion} Overall, we have shown that under the random dot product graph model, we can consistently estimate the latent positions provided they are independent and identically distributed. We have shown further that these estimated positions are also sufficient to consistently classify vertices. We have shown that this method works well in simulations and can be useful in practice for classifying documents based on their links to other documents. \section*{References} \bibliographystyle{plainnat}
\section{Introduction} \label{sec:intro} The Fast Fourier Transform (FFT) is arguably the most ubiquitous numerical algorithm in scientific computing. In addition to being named one of the ``Top Ten Algorithms'' of the past century~\cite{dongarra2000guest}, the FFT is a critical tool in myriad applications, ranging from signal processing to computational PDE and machine learning. At the time of its introduction, it represented a major leap forward in the size of problems that could be solved on available hardware, as it reduces the runtime complexity of computing the Discrete Fourier Transform (DFT) of a length-$N$ array from $\textup{O}(N^2)$ to $\textup{O}(N\log N)$. Any algorithm which computes all $N$ Fourier coefficients has a runtime complexity of $\Omega(N)$, since it takes that much time merely to report the output. However, in many applications it is known that the DFT of the signal of interest is highly sparse -- that is, only a small number of coefficients are non-zero. In this case it is possible to break the $\Omega(N)$ barrier by asking only for the largest $k$ terms in the signal's DFT. When $k\ll N$ existing algorithms can significantly outperform even highly optimized FFT implementations~\cite{iwen2007empirical,iwen2010combinatorial,hassanieh2012simple}. \subsection{Related Work} The first works to implicitly address the sparse approximate DFT problem appeared in the theoretical computer science literature in the early 1990s. In~\cite{linial1993constant}, a variant of the Fourier transform for Boolean functions was shown to have applications for learnability. A polynomial-time algorithm to find large coefficients in this basis was given in~\cite{kushilevitz1993learning}, while the interpolation of sparse polynomials over finite fields was considered in~\cite{mansour1995randomized}. It was later realized~\cite{gilbert2005improved} that this last algorithm could be considered as an approximate DFT for the special case when $N$ is a power of two. 
In the past ten or so years, a number of algorithms have appeared which directly address the problem of computing sparse approximate Fourier transforms. When comparing the results in the literature, care must be taken to identify the class of signals on which a specific algorithm is designed to perform, as well as to identify the error bounds of a given method. Different algorithms have been devised in different research communities, and so have varying assumptions on the underlying signals as well as different levels of acceptable error. The first result with sub-linear runtime and sampling requirements appeared in~\cite{gilbert2002near}. They give a $\textup{poly}(k,\log N, \log(1/\delta), 1/\varepsilon)$ time algorithm for finding, with probability $1-\delta$, an approximation $\hat{y}$ of the DFT $\hat{x}$ of the input that is nearly optimal, in the sense that $\|\hat{x}-\hat{y}\|_2^2 \le (1+\varepsilon)\|\hat{x}-\hat{x}_\textup{opt}\|_2^2$, where $\hat{x}_\textup{opt}$ is the best $k$-term approximation to $\hat{x}$. Here the exponent of $k$ in the runtime is two, so the algorithm is \emph{quadratic} in the sparsity. Moreover, the algorithm is non-adaptive in the sense that the samples used are independent of the input $x$. This algorithm was modified in~\cite{gilbert2005improved} to bring the dependence on $k$ down to linear.\footnote{See \cite{gilbert2008tutorial} for a ``user-friendly'' description of the improved algorithm.} This was accomplished mainly by replacing uniform random variables (used to sample the input) by random arithmetic progressions, which allowed the use of nonequispaced fast Fourier transforms to sample from intermediate representations and to estimate the coefficients in near-linear time. The increased overhead of this procedure, however, limited the range of $k$ for which the algorithm outperformed a standard FFT implementation~\cite{iwen2007empirical}.
Around the same time, a similar algorithm was developed in the context of list decoding for proving hard-core predicates for one-way functions~\cite{akavia2003proving}. This can be considered an extension of~\cite{kushilevitz1993learning}, and like~\cite{gilbert2002near,gilbert2005improved} is a randomized algorithm. Since the goal in this work was to give a polynomial-time algorithm for list decoding, no effort was made to optimize the dependence on $k$; it stands at $k^{11/2}$, considerably higher than~\cite{gilbert2002near,gilbert2005improved}. The randomness in this algorithm is used only to construct a sample set on which norms are estimated, and in~\cite{akavia2010deterministic} this set is replaced with a deterministic construction. This construction is based on the notion of $\varepsilon$-approximating the uniform distribution over arithmetic progressions, and relies on existing constructions of $\varepsilon$-biased sets of small size~\cite{katz1989estimate,ajtai1990construction}. Depending on the size of the $\varepsilon$-biased sets used, the sampling and runtime complexities are $\textup{O}(k^4\log^c N)$ and $\textup{O}(k^6\log^c N)$, respectively, for some $c>4$.\footnote{Specifically, the runtime is $\textup{O}(k^2 \cdot \log N \cdot |S|)$, where $S$ is the set of samples read by the algorithm. This set takes the form $S = \bigcup_{\ell=1}^{\lfloor \log N \rfloor} A - B_\ell$, where $A$ has $\varepsilon$-discrepancy on rank 2 Bohr sets, $B_\ell$ $\varepsilon$-approximates the uniform distribution on $[0,2^\ell-1]\cap\mathbb{Z}$, and $A-B_\ell$ is the difference set. 
Using constructions from~\cite{katz1989estimate} one has $|A|=\textup{O}(\varepsilon^{-1}\log^4 N), \; |B_\ell| = \textup{O}(\varepsilon^{-3}\log^4 N)$; setting $\varepsilon = \Theta(k^{-1})$ and noting that $\left| \bigcup A-B_\ell \right| = \textup{O}\left(\sum\left| A-B_\ell \right|\right)$ and $\left| A-B_\ell \right| = \textup{O}(|A||B_\ell|)$ (see, e.g., \cite{tao2006additive}) one obtains the stated sampling and runtime complexities.} In the series of works \cite{iwen2008deterministic,iwen2010combinatorial,iwen2011improved}, a different deterministic algorithm for sparse Fourier approximation was given that relies on the combinatorial properties of \emph{aliasing}, or collisions among frequencies in sub-sampled DFTs. By taking enough short DFTs of co-prime lengths, and employing the Chinese Remainder Theorem to reconstruct energetic frequencies from their residues modulo these sample lengths, the author is able to prove sampling and runtime bounds of $\textup{O}(k^2 \log^4 N)$. The error bound is of the form $\|\hat{x}-\hat{y}\|_2 \le \|\hat{x}-\hat{x}_\textup{opt}\|_2 + k^{-1/2}\|\hat{x}-\hat{x}_\textup{opt}\|_1$; it has been shown that the stronger ``$\ell_2$-$\ell_2$'' guarantee of \cite{gilbert2005improved} cannot hold for a sub-linear, deterministic algorithm \cite{cohen2009compressed}. Moreover, the range of $k$ for which this algorithm is faster than the FFT is smaller in practice than that of \cite{gilbert2005improved}. Most recently, the authors of \cite{hassanieh2012simple} presented a randomized algorithm that extends by an order of magnitude the range of sparsity for which it is faster than the FFT. This is accomplished by removing the iterative aspect from \cite{gilbert2005improved} by using more efficient filters, which are nearly flat within the passband and which decay exponentially outside. In contrast, the box-car filters used in \cite{gilbert2005improved} have a frequency response which oscillates and decays like $|\omega|^{-1}$. 
In addition, the identification of significant frequencies is done by direct estimation after hashing into a large number of bins rather than the binary search technique of \cite{gilbert2005improved}. These changes give a runtime bound of $\textup{O}(\log N \sqrt{Nk\log N})$ and a somewhat stronger error bound $\|\hat{x}-\hat{y}\|_\infty^2 \le \varepsilon k^{-1} \|\hat{x}-\hat{x}_\textup{opt}\|_2^2 + \delta \|\hat{x}\|_1^2$ with probability $1-1/N$, where $\varepsilon>0$ and $\delta = N^{-\textup{O}(1)}$ is a precision parameter. These existing algorithms generally take one of two approaches to the sparse Fourier transform problem. In~\cite{gilbert2002near,akavia2003proving,gilbert2005improved,hassanieh2012simple}, the spectrum of the input is randomly permuted and then run through a low-pass filter to isolate and identify frequencies which carry a large fraction of the signal's energy. This leads to randomized algorithms that fail on a non-negligible set of possible inputs. On the other hand,~\cite{iwen2010combinatorial} takes advantage of the combinatorial properties of \emph{aliasing} in order to identify the significant frequencies. This leads to a deterministic algorithm with higher runtime and sampling requirements than the randomized algorithms mentioned. Both of these randomized and deterministic approaches have drawbacks. Randomized algorithms are not suitable for failure-intolerant applications, while the process used to reconstruct significant frequencies in~\cite{iwen2010combinatorial} relies on the Chinese Remainder Theorem (CRT), which is highly unstable to errors in the residues. While there do exist algorithms for ``noisy Chinese Remaindering'' \cite{goldreich2000chinese,boneh2002finding,shparlinski2004noisy} these have thus far not found application to the sparse DFT problem, and we leave this as future work. 
As this paper was being prepared, the authors became aware of an independent work using very similar methods for frequency estimation in the noiseless case \cite{hassanieh2012nearly}. Both methods consider the phase difference between Fourier samples to extract frequency information, but are based on different techniques for binning significant frequencies. The authors of \cite{hassanieh2012nearly} use random dilations and efficient filters of \cite{hassanieh2012simple}, whereas we use different sample lengths in the spirit of \cite{iwen2010combinatorial}. We believe both contributions are of interest, and reinforce the notion that exploiting phase information is critical for developing fast, robust algorithms for the sparse Fourier transform problem. \subsection{New Results} In this paper we describe a simple, deterministic algorithm that avoids reconstruction with the CRT. We are thus able to avoid two pitfalls associated with existing algorithms. Our method relies on sampling the signal in the time domain at slightly shifted points, and thus it assumes access to an underlying continuous-time signal. The shifted time samples allow us to determine the value of significant frequencies in sub-sampled FFTs and also indicate when two or more frequencies have been aliased in such a sub-sampled FFT. These two key facts allow us to significantly reduce (by up to two orders of magnitude) the average-case sampling and runtime complexity of the sparse FFT over a certain class of random signals. Our worst-case bounds improve by a constant factor those of prior deterministic algorithms. We present both adaptive and non-adaptive versions of our algorithms. If the application allows samples to be acquired adaptively (that is, dependent on previous samples), we are able to improve further on our average-case bounds. The remainder of this paper is organized as follows. In section \ref{sec:prelim} we introduce notation and prove the technical lemmas underlying our algorithms. 
In section \ref{sec:algs} we introduce randomized and deterministic versions of our algorithm. In section \ref{sec:avgcase} we prove that our algorithm has average-case runtime and sampling complexities of $\Theta(k\log(k))$ and $\Theta(k)$, respectively. In section \ref{sec:empirical} we present the results of an empirical evaluation of our algorithm and compare its runtime and sampling requirements to competing algorithms. Finally, in section \ref{sec:conclusion} we provide some concluding remarks and discuss ongoing work to appear in the future. \section{Mathematical Background} \label{sec:prelim} \subsection{Preliminaries} Throughout this work we shall be concerned with frequency-sparse band-limited signals $S:[0,1)\to\mathbb{C}$ of the form \begin{equation} \label{eq:signal} S(t) = \sum_{j=1}^k a_j \textup{e}^{2\pi\i\omega_j t}, \end{equation} where $\omega_j \in [-N/2,N/2) \cap \mathbb{Z}, \; a_j \in \mathbb{C},$ and $k \ll N$. The Fourier series of $S$ is given by \begin{equation} \label{eq:fourierseries} \widehat{S}(\omega) = \int_0^1 S(t) \textup{e}^{-2\pi\i\omega t} \d t, \; \omega \in \mathbb{Z}, \end{equation} so that for signals of the form~\eqref{eq:signal} we have $\widehat{S}(\omega_j) = a_j$ and $\wh S(\omega) = 0$ for all other $\omega \in [-N/2,N/2) \cap \mathbb{Z}$. Given any finite sequence $\bm{S}=(s_0, s_1, \dots, s_{p-1})$ of length $p$ we define its Discrete Fourier Transform (DFT) by \begin{equation} \label{eq:dft} \widehat{\bm{S}}[h] ~=~ \sum_{j=0}^{p-1} s_j \textup{e}^{-\frac{2\pi\i jh}{p}} ~=~ \sum_{j=0}^{p-1} \bm{S}[j] W_p^{jh}, \end{equation} where $h = 0, 1, \ldots, p-1$, $\bm{S}[j]:=s_j$ and $W_p:=\textup{e}^{-\frac{2\pi\i}{p}}$ is the primitive $p$-th root of unity. The Fast Fourier Transform (FFT) allows the computation of $\wh\bm{S}$ in $\textup{O}(p\log p)$ steps. We apply the DFT to discrete samples of $S(t)$ to compute the Fourier coefficients $a_j$ of $S(t)$.
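As a concrete check of these conventions, the following sketch (plain Python with the standard library; the helper names are ours) evaluates a two-term signal of the form \eqref{eq:signal} and computes its length-$p$ DFT directly from \eqref{eq:dft}, with $W_p = \textup{e}^{-2\pi\i/p}$. The peaks of size $p\,a_j$ at $h \equiv \omega_j \wmod p$ anticipate the derivation that follows.

```python
import cmath

def S(t, terms):
    # signal of the form S(t) = sum_j a_j e^{2 pi i w_j t}
    return sum(a * cmath.exp(2j * cmath.pi * w * t) for w, a in terms)

def dft(s):
    # direct DFT per the definition: S_hat[h] = sum_j s[j] W_p^{jh},
    # with W_p = e^{-2 pi i / p}
    p = len(s)
    W = cmath.exp(-2j * cmath.pi / p)
    return [sum(s[j] * W ** (j * h) for j in range(p)) for h in range(p)]

# two-term signal with bandwidth N = 64; frequencies 5 and -20 are
# distinct mod p = 7 (-20 mod 7 = 1)
terms = [(5, 1.0), (-20, 2.0)]
p = 7
samples = [S(j / p, terms) for j in range(p)]
Shat = dft(samples)
# peaks of size p * a_j appear at h = 5 and h = 1; other bins vanish
```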
For an integer $p$ and real $\varepsilon > 0$ we form discrete arrays of samples of $S$ of length $p$ via $$ \bm{S}_p[j] = S\Bigl(\frac{j}{p}\Bigr), ~~ \bm{S}_{p,\varepsilon}[j] = S\Bigl(\frac{j}{p}+ \varepsilon\Bigr),~~ j = 0, 1, \ldots, p-1. $$ Now assume that all $\omega_j \wmod{p},~ 1\leq j \leq k$ are distinct. A simple derivation yields $$ \wh\bm{S}_{p}[h] = \left\{\begin{array}{cl} pa_j & ~~h \equiv\omega_j \wmod p \\ 0 & \mbox{~~otherwise}. \end{array}\right. $$ By examining the peaks of $\wh\bm{S}_p[h]$ we will be able to determine $\{\omega_j \wmod{p}: 1\leq j \leq k\}$. Previous approaches applied the Chinese Remainder Theorem to reconstruct $\{\omega_j\}$ by taking a suitable number of $p$'s, which must overcome the registration problem of matching up each $\omega_j$ whenever a new $p$ is used (see, e.g., \cite{iwen2010combinatorial,iwen2011improved}). Our algorithm takes a different approach using the shifted sub-samples. Note that $$ \wh\bm{S}_{p,\varepsilon}[h] = \left\{\begin{array}{cl} pa_je^{2\pi \i\varepsilon \omega_j} & ~~h \equiv\omega_j \wmod p \\ 0 & \mbox{~~otherwise}. \end{array}\right. $$ It follows that in this setting, for $h\equiv\omega_j \wmod p$ we have $\frac{\wh\bm{S}_{p,\varepsilon}[h]}{\wh\bm{S}_{p}[h]} = \textup{e}^{2\pi \i \varepsilon\omega_j}$. Hence \begin{equation} \label{eq:omega1} 2\pi\varepsilon\omega_j \equiv \Arg\left(\frac{\widehat{\bm{S}}_{p,\varepsilon}[h]}{\widehat{\bm{S}}_{p}[h]}\right) \wmod{2\pi}, \end{equation} where $\Arg(z)$ denotes the phase angle of the complex number $z$ in $[-\pi, \pi)$. Assume that we take $|\varepsilon| \leq \frac{1}{N}$. Then $\omega_j$ is completely determined by (\ref{eq:omega1}) as there will be no wrap-around aliasing, and \begin{equation} \label{eq:omega} \omega_j = \frac{1}{2\pi\varepsilon} \Arg\left(\frac{\widehat{\bm{S}}_{p,\varepsilon}[h]}{\widehat{\bm{S}}_{p}[h]}\right).
\end{equation} In fact, more generally, if we have an estimate of $\omega_j$, say $|\omega_j| < \frac{L}{2}$, then by taking $|\varepsilon| \leq \frac{1}{L}$ the same reconstruction formula (\ref{eq:omega}) holds. The observation that taking slightly shifted samples allows us to identify the frequencies in $S(t)$ underlies the algorithms which follow, and the bulk of this paper analyzes various aspects of the proposed algorithms, such as efficiency and robustness. One of the problems is that when $p<N$ it is possible that two or more distinct frequencies will have the same remainder modulo $p$. In this case we say the frequencies are \emph{aliased} or \emph{collide} $\wmod p$. In general, for $h \in \{0,\ldots,p-1\}$ and the given signal $S(t)$ let $I(S,h;p) :=\{j:~\omega_j \equiv h \wmod p\}$. Then we have \begin{equation} \label{eq:aliasing} \wh{\bm{S}}_p[h] = \sum_{\omega \equiv h \scriptsize{\wmod p}} \wh{S}(\omega) = p\sum_{j\in I(S,h;p)} a_j. \end{equation} When aliasing occurs, reconstruction via (\ref{eq:omega}) is no longer valid. The aliasing phenomenon presents a serious challenge for any method with sub-linear sampling complexity. In the next section we develop a simple test to determine whether or not aliasing has occurred in a $p$-length DFT, which then allows us to effectively overcome this challenge and develop provably correct sub-linear algorithms. \subsection{Technical Lemmas} To effectively apply the sub-sample idea in a Fourier algorithm one must first overcome the aliasing challenge. Using shifted sub-samples gives us a simple yet extremely effective criterion to determine whether or not aliasing has occurred at a given location in a $p$-length DFT without resorting to complicated combinatorial techniques. Observe that complementing (\ref{eq:aliasing}) we have \begin{equation} \label{eq:shiftaliasing} \wh{\bm{S}}_{p,\varepsilon}[h] = p\sum_{j\in I(S,h;p)} a_j \textup{e}^{2\pi\i\varepsilon\omega_j}.
\end{equation} It follows that \begin{align} \label{eq:diffaliasing} \left|\wh{\bm{S}}_{p,\varepsilon}[h]\right|^2 -\left|\wh{\bm{S}}_{p}[h]\right|^2 = p^2&\sum_{j,l\in I(S,h;p)} a_j\overline{a_l} \textup{e}^{2\pi\i\varepsilon(\omega_j-\omega_l)} \\ &-p^2\Bigl|\sum_{j\in I(S,h;p)} a_j\Bigr|^2. \notag \end{align} \begin{lemma} \label{lem:aliasing} Let $p>1$ and $h\in\{0, 1, \dots, p-1\}$. Assume that $q=|I(S,h;p)|>1$, i.e. $\omega_j \equiv h \wmod{p}$ for more than one $j$ in $S(t)$. Then we have the following: \begin{itemize} \item[\rm (A)]~ Let $\varepsilon>0$ and $E:=\{\omega_j-\omega_m: j,m\in I(S,h;p)\}$. Suppose that all elements of $\varepsilon E$ are distinct $\wmod 1$. Then $\big|\wh{\bm{S}}_{p,m\varepsilon}[h]\big| \neq \big|\wh{\bm{S}}_{p}[h]\big|$ for some $1 \leq m \leq q^2-q$. \item[\rm (B)]~ For almost all $\varepsilon>0$ we have $\big|\wh{\bm{S}}_{p,\varepsilon}[h]\big| \neq \big|\wh{\bm{S}}_{p}[h]\big|$. \end{itemize} \end{lemma} \begin{proof} The proof of part (B) is immediate from (\ref{eq:diffaliasing}). Observe that $f(\varepsilon):=\left|\wh{\bm{S}}_{p,\varepsilon}[h]\right|^2 -\left|\wh{\bm{S}}_{p}[h]\right|^2$ is a trigonometric polynomial in $\varepsilon$, and it is not identically $0$ given that $q=|I(S,h;p)|>1$. Thus it has at most finitely many zeros for $\varepsilon\in [0,1)$, and hence (B) is clearly true. We resort to the Vandermonde matrix to prove part (A). For simplicity we write $f(t) = \sum_{\alpha\in E} c_\alpha \textup{e}^{2\pi \i \alpha t}$. Set $r_\alpha := \textup{e}^{2\pi \i \alpha\varepsilon}$ where $\varepsilon$ satisfies the hypothesis of the lemma, which implies that all $r_\alpha$ are distinct. Assume the claim of part (A) is false. Then we have $f(m\varepsilon)=0$ for all $0 \leq m \leq q^2-q$. Here $f(0)=0$ is automatic because $\bm{S}_{p,0} = \bm{S}_p$. Thus we have \begin{equation} \label{vandermonde} \sum_{\alpha\in E} c_\alpha r_\alpha^m = 0, ~~~m=0, 1, \dots, q^2-q.
\end{equation} But the cardinality of $E$ is at most $q^2-q+1$, which means that there are at most $q^2-q+1$ terms in the sum in (\ref{vandermonde}). Because all $r_\alpha$ are distinct the matrix $[r_\alpha^m]$ is a nonsingular Vandermonde matrix, and for (\ref{vandermonde}) to hold all $c_\alpha$ must be zero. This is clearly not the case, a contradiction. \end{proof} \medskip \noindent {\bf Remark.}~ Any irrational $\varepsilon$ or $\varepsilon= \frac{a}{b}$ with $a, b$ coprime and $b\geq 2N$ will satisfy the hypothesis of part (A) of Lemma \ref{lem:aliasing}. It is also easy to show that in the special case where all coefficients $a_j$ are real and $|I(S,h;p)|=2$, we have $\big|\wh{\bm{S}}_{p,\varepsilon}[h]\big| \neq \big|\wh{\bm{S}}_{p}[h]\big|$ for any $\varepsilon = \frac{a}{b}$ with $a, b$ coprime and $b\geq N$. \medskip Lemma~\ref{lem:aliasing} allows us to determine whether aliasing has occurred by checking whether $\big|\wh{\bm{S}}_{p,\varepsilon}[h]\big|/\big|\wh{\bm{S}}_{p}[h]\big|=1$ for a few values of $\varepsilon$. It offers both a deterministic (part (A)) and a random (part (B)) procedure to identify aliasing in the sub-sampled DFTs. In practice we need to set a tolerance $\tau$ in order to accept or reject frequencies according to the criterion \begin{equation} \label{eq:aliasing_test} \left| \frac{\left| \wh{\bm{S}}_{p,\varepsilon}[h]\right|}{\left| \wh{\bm{S}}_p[h]\right|} -1\right| \le \tau. \end{equation} We typically choose $\varepsilon = 1/cN$ for some small constant $c\geq 2$, which would satisfy the hypothesis of part (A) of Lemma~\ref{lem:aliasing}. A tolerance on the order of $p/N$ works well in general, which is what we use in our experiments in section~\ref{sec:empirical} below. In our algorithms we will take a number of sub-sampled DFTs of an input signal $S(t)$ of the form~\eqref{eq:signal}, whose lengths we denote $p_\ell$.
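As a sanity check of the reconstruction formula \eqref{eq:omega} and the aliasing test \eqref{eq:aliasing_test}, the following sketch (plain Python with `cmath`; helper names and the specific frequencies are ours) recovers an unaliased frequency from the phase of $\wh{\bm{S}}_{p,\varepsilon}[h]/\wh{\bm{S}}_{p}[h]$, and shows the magnitude ratio exposing an aliased bin.

```python
import cmath

def dft_shifted(terms, p, eps):
    # length-p DFT of the samples S(j/p + eps), computed directly
    s = [sum(a * cmath.exp(2j * cmath.pi * w * (j / p + eps)) for w, a in terms)
         for j in range(p)]
    W = cmath.exp(-2j * cmath.pi / p)
    return [sum(s[j] * W ** (j * h) for j in range(p)) for h in range(p)]

N, p = 64, 7
eps = 1 / (2 * N)          # eps = 1/(cN) with c = 2

# unaliased case: 5 and -20 are distinct mod 7
terms = [(5, 1.0), (-20, 1.0)]
S0 = dft_shifted(terms, p, 0.0)
S1 = dft_shifted(terms, p, eps)
h = 5                      # bin of the frequency 5
omega = cmath.phase(S1[h] / S0[h]) / (2 * cmath.pi * eps)   # eq. (eq:omega)
# omega is (numerically) 5, and the magnitude ratio is ~1 for this bin

# aliased case: 5 and 26 collide mod 7, and the magnitude ratio
# deviates from 1 by more than the tolerance tau = p/N
terms_bad = [(5, 1.0), (26, 1.0)]
B0 = dft_shifted(terms_bad, p, 0.0)
B1 = dft_shifted(terms_bad, p, eps)
deviation = abs(abs(B1[h]) / abs(B0[h]) - 1)
```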
Lemma~\ref{lem:aliasing} allows us to determine whether or not two or more frequencies are aliased, so that we only add non-aliased terms to our representation. Since it is unlikely that two or more frequencies are aliased modulo two different sampling rates, using a different $p_\ell$ in a subsequent iteration lets us quickly discover all frequencies present in $S(t)$. Lemma~\ref{lem:numps} gives a worst-case bound on the number of $p_\ell$'s required by our deterministic algorithm to identify all $k$ frequencies in a given Fourier-sparse signal. It is similar to~\cite[Lemma 1]{iwen2010combinatorial}, but with a smaller constant. In its proof we use the CRT, which we quote here for completeness (see, e.g.,~\cite{niven1991introduction}). \begin{thm}[Chinese Remainder Theorem] Any integer $n$ is uniquely specified modulo $N$ by its remainders modulo $m$ pairwise relatively prime numbers $p_\ell$, provided $\prod_{\ell=1}^m p_\ell \ge N$. \end{thm} \begin{lemma} \label{lem:numps} Let $M>1$. It suffices to take $1+(k-1)\lfloor\log_{M}N\rfloor$ pairwise relatively prime $p_\ell$'s with $p_\ell \ge M$ to ensure that each frequency $\omega_j$ is isolated (i.e. not aliased) $\wmod{p_\ell}$ for at least one $\ell$. \end{lemma} \begin{proof} Assume otherwise, namely that given $p_\ell$ for $\ell =1, 2, \dots, L$ with $L \ge 1+(k-1)\lfloor\log_{M}N\rfloor$ there exists some $\omega_j$ that is aliased $\wmod{p_\ell}$ for every $\ell$. By the Pigeon Hole Principle there exists at least one $\omega_m \neq \omega_j$ such that $\omega_j-\omega_m \equiv 0 \wmod{p_\ell}$ for at least $q$ values of $\ell$, where $q> \lfloor\log_{M}N\rfloor$. Without loss of generality we assume that $\omega_j-\omega_m \equiv 0 \wmod{p_\ell}$ for $\ell =1, 2, \dots, q$. Now by the fact that $p_\ell \geq M$ we have $$ \prod_{\ell=1}^q p_\ell \ge M^q \ge N. $$ By the CRT we would then have $\omega_j \equiv \omega_m \wmod N$, a contradiction.
\end{proof} We remark that the algorithm in~\cite{iwen2010combinatorial} requires taking $1+2k\log_kN$ co-prime sample lengths, since that algorithm requires each $\omega$ to be isolated in at least half of the DFTs of length $p_\ell$. This requirement stems from the fact that that algorithm cannot distinguish between aliased and non-aliased frequencies in a given sub-sampled DFT. Our worst-case bound is approximately a factor of two better, though in practice our algorithms never use all those sample lengths on random input. The fact that we can tell which frequencies are ``good'' for a given $p_\ell$ allows us to construct our Fourier representation one term at a time, and quit when we have achieved a prescribed stopping criterion. \section{Algorithms} \label{sec:algs} Both of our algorithms proceed along a similar course; in fact they differ only in the choice of the sample lengths $p_\ell$. We assume that we are given access to the continuous-time signal $S(t)$ whose Fourier coefficients we would like to determine, and further that we can sample from $S$ at arbitrary points $t$ in unit time. This is an appropriate model for analog signals, but not for discrete ones. In the discrete case, one could interpolate between given samples to approximate the required $S$-values, though we have not implemented or analyzed this case. (The same assumptions hold for the algorithms in~\cite{iwen2010combinatorial}, while those in~\cite{gilbert2002near, gilbert2005improved, hassanieh2012simple} are formulated purely in the discrete realm.) In this paper we mainly limit ourselves to the noiseless case. Though this is a highly unrealistic assumption, it permits a simple description of the underlying algorithm. In section \ref{sec:alg-noise} we discuss some of the problems associated with noisy signals and give a minor modification of our algorithm for low-level noise.
A second manuscript in preparation addresses the issue of noise specifically, with more significant modifications to the algorithms described below. \subsection{Non-adaptive} Our algorithms start by choosing a sample length $p_1$ such that $p_1 \ge ck$ for some constant $c>1$. For a fixed $\varepsilon \le 1/N$, we then compute $\bm{\wh{S}}_p$ and $\bm{\wh{S}}_{p,\varepsilon}$, sort the results by magnitude, and compute frequencies $\omega$ via~\eqref{eq:omega} for the $k$ largest coefficients in absolute value. We then check whether or not each of those frequencies is aliased via~\eqref{eq:aliasing_test}, and if it is not, we add it to our list. The coefficient is given by the unshifted sample value $\wh{\bm{S}}_p[h]$ at that frequency. After this, we combine terms with the same frequency and prune small coefficients from the list. We then iterate until a stopping criterion is reached. In the empirical study described in section~\ref{sec:empirical}, we stopped when the number of distinct frequencies in our list equalled the desired sparsity. Our deterministic algorithm chooses $p_\ell$ to be the $\ell^\textup{th}$ prime greater than $ck$. This ensures that all sample lengths are co-prime, at the expense of taking slightly more samples than necessary. By Lemma~\ref{lem:numps}, $1+(k-1)\lfloor\log_{ck}N\rfloor$ such $p_\ell$'s suffice to isolate every $\omega$ at least once. This gives us worst-case sampling and runtime complexity on the same order as~\cite{iwen2010combinatorial}, though the results in section~\ref{sec:empirical} indicate that on average we significantly outperform those pessimistic bounds. Our Las Vegas algorithm chooses $p_\ell$ uniformly at random from the interval $[c_1k, c_2k]$ for constants $1<c_1<c_2$. In this case we cannot make a worst-case guarantee on the number of iterations needed by the algorithm to converge.
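The deterministic choice of sample lengths can be sketched as follows (standard-library Python; trial division is adequate at this scale, the floor of the logarithm is computed in integer arithmetic, and all names are ours). The Las Vegas variant would instead draw each $p_\ell$ uniformly from $[c_1k, c_2k]$.

```python
def is_prime(n):
    # trial division; fine for the small primes used here
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def deterministic_lengths(k, N, c=2):
    # p_ell = ell-th prime >= ck; by Lemma numps, 1 + (k-1)*floor(log_{ck} N)
    # of them isolate every frequency at least once
    M = c * k
    q = 0
    while M ** (q + 1) <= N:       # integer computation of floor(log_M N)
        q += 1
    count = 1 + (k - 1) * q
    ps, p = [], M
    while len(ps) < count:
        if is_prime(p):
            ps.append(p)
        p += 1
    return ps

# k = 8, N = 2**20: floor(log_16 N) = 5, so 1 + 7*5 = 36 prime lengths,
# starting at 17; distinct primes are automatically pairwise co-prime
lengths = deterministic_lengths(8, 2 ** 20)
```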
However, the results in section~\ref{sec:empirical} indicate that the Las Vegas version performs similarly to the deterministic version on the class of signals tested. \subsection{Adaptive} The algorithms can also be implemented in an adaptive fashion, by which we mean that the size of the current representation is taken into account in subsequent iterations. In particular, if $\bm{R}$ is our current representation, we let $k^* = k-|\bm{R}|$ and choose the next $p_\ell$ with respect to $k^*$ instead of $k$. Moreover, before taking DFTs, we subtract off the contribution from the current representation, so that effort is not expended re-identifying portions of the spectrum already discovered. This idea is similar to that in~\cite{gilbert2002near, gilbert2005improved}, though in our empirical studies the evaluation of the representation is done directly, rather than as an unequally-spaced FFT. This gives our algorithms asymptotically slower runtime, but the effect is negligible for the values of $k$ studied in section~\ref{sec:empirical}. A formal description appears below in algorithm~\ref{alg:alg}. 
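The adaptive bookkeeping just described, subtracting the current representation from the samples before each DFT, amounts to the following (a direct-evaluation sketch in plain Python, matching the direct implementation mentioned above; names are ours).

```python
import cmath

def residual_samples(S, R, p, eps):
    # samples of S(j/p + eps) minus the contribution of the current
    # representation R (a dict {omega: coeff}), so that already-recovered
    # terms are not re-identified in later iterations
    out = []
    for j in range(p):
        t = j / p + eps
        rep = sum(c * cmath.exp(2j * cmath.pi * w * t) for w, c in R.items())
        out.append(S(t) - rep)
    return out

# toy check: with one of two terms already in R, the residual is the
# remaining pure tone (magnitude 2 at every sample)
S = lambda t: cmath.exp(2j * cmath.pi * 5 * t) + 2 * cmath.exp(2j * cmath.pi * 3 * t)
res = residual_samples(S, {5: 1.0}, 7, 0.0)
```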
\begin{algorithm} \caption{\textsc{Phaseshift}} \label{alg:alg} \begin{algorithmic}[5] \STATE \textbf{Input:} function pointer $S$, integers $c_1,c_2,k,N$, real $\varepsilon$ \STATE \textbf{Output:} $\bm{R}$, a sparse representation for $\wh{S}$ \STATE $\bm{R} \gets \emptyset$, $\varepsilon_0 \gets 0, \varepsilon_1 \gets \varepsilon,$ $\ell \gets 1$ \WHILE{$\left|\bm{R}\right| < k$} \STATE $k^* \gets k-|\bm{R}|$ \hfill \COMMENT{or $k$ if non-adaptive} \STATE $p_\ell \gets$ first prime $\ge c_1k^*$ \\ \hfill \COMMENT{or \textsc{Uniform}$(c_1k^*, c_2k^*)$ if Las Vegas} \FOR{$m=0$ to 1} \FOR{$j=0$ to $p_\ell-1$} \STATE $\bm{S}_{\ell,m}[j] \gets S\left(\dfrac{j}{p_\ell}+\varepsilon_m\right)$ \STATE $\displaystyle{\bm{S}_\textrm{rep}[j] \gets \sum_{(\omega,c_\omega) \in \bm{R}} c_\omega \textup{e}^{2\pi\i\omega(j/p_\ell+\varepsilon_m)}}$ \\ \hfill \COMMENT{omit if non-adaptive} \ENDFOR \STATE $\wh{\bm{S}}_{\ell,m} \gets \textsc{FFT}(\bm{S}_{\ell,m}-\bm{S}_\textrm{rep})$ \STATE $\wh{\bm{S}}_{\ell,m}^\textrm{sort} \gets \textsc{Sort}(\wh{\bm{S}}_{\ell,m})$ \FOR{$j=1$ to $k^*$} \STATE $\omega_{j,\ell} \gets \dfrac{1}{2\pi\varepsilon} \Arg\left( \dfrac{\wh{\bm{S}}_{\ell,1}^\textrm{sort}[j]}{\wh{\bm{S}}_{\ell,0}^\textrm{sort}[j]}\right)$ \ENDFOR \ENDFOR \FOR{$j=1$ to $k^*$} \IF{ $\left|\dfrac{\left|\wh{\bm{S}}_{\ell,0}^\textrm{sort}[j]\right|}{\left|\wh{\bm{S}}_{\ell,1}^\textrm{sort}[j]\right|}-1\right| < \dfrac{p_\ell}{N}$} \STATE $\bm{R} \gets \bm{R} \cup \left\{ \left( \omega_{j,\ell}, \wh{\bm{S}}_{\ell,0}[\omega_{j,\ell}] \right)\right\}$ \ENDIF \ENDFOR \STATE collect terms in $\bm{R}$ with same $\omega$ \STATE prune small coefficients from $\bm{R}$ \STATE $\ell \gets \ell+1$ \ENDWHILE \end{algorithmic} \end{algorithm} \subsection{Modifications in the presence of noise} \label{sec:alg-noise} In the noiseless versions of the algorithms described in this paper, a test for aliasing is implemented by considering the ratio of magnitudes of shifted and unshifted peaks.
When the samples are corrupted by noise, there will be two challenges. The first challenge is that the reconstruction of frequencies from shifts will be corrupted by noise. The second challenge is that there will be variations among the magnitudes even for non-aliased terms, so a higher threshold that depends on the size of the noise must be set. When this threshold is too large it affects the ability to distinguish aliased terms as there will be an increased number of false negatives. On the other hand, lower thresholds that reduce false negatives will lead to an increased number of false positives. The first challenge can be addressed effectively through a combination of using larger $p_j$'s, multiple shifts and a multiscale unwrapping. The idea of using larger $p_j$'s is rather straightforward yet effective. For any given $p_j$ the DFT detects the location of the frequencies modulo $p_j$ rather accurately even with substantial noise. Furthermore, the reconstructed frequencies will still tend to cluster around the true value. Suppose that we sample the signal and compute DFTs of length $p_j$ on these samples. The locations of the peaks in these short DFTs tell us the accurate value of $\omega \bmod p_j$ for each unaliased frequency $\omega$ appearing in the signal. Writing $\omega = ap_j + b$ with $a,b\in\mathbb{Z}$, we now know $b$ and must determine $a$. With a small amount of noise the reconstructed frequencies $\wt{\omega}$ using \eqref{eq:omega} will be close to the true $\omega$. We can thus round $\wt{\omega}$ to the nearest integer of the form $ap_j + b$, which will recover the true frequency $\omega$ as long as $|\wt{\omega}-\omega| < p_j/2$. For high noise levels, it is possible that the $\wt{\omega}$ will deviate by more than $p_j/2$ from $\omega$, so that the value for $a$ given by rounding will be incorrect. 
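The rounding step just described is a one-liner (sketch; the helper name is ours): given the residue $b = \omega \bmod p_j$ read off from the peak location and a noisy phase estimate $\wt{\omega}$, snap to the nearest integer congruent to $b$.

```python
def snap_to_residue(omega_tilde, b, p):
    # nearest integer of the form a*p + b to the noisy estimate omega_tilde;
    # correct whenever |omega_tilde - omega| < p/2
    a = round((omega_tilde - b) / p)
    return a * p + b

# true omega = 5 + 3*17 = 56; an estimate off by less than p/2 recovers it,
# and negative frequencies are handled the same way
```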
By choosing larger $p_j$ (i.e., increasing the parameter $c_1$) one can alleviate the problem somewhat, provided that the noise level is not too high. When the noise level is so high that taking a large $p_j$ is no longer economical, a potential solution is to take multiple shifts and employ a multiscale unwrapping technique. We are still at the preliminary stage in our study of these new techniques, but early results are very encouraging. The second challenge poses a bigger problem, but again it can be addressed in several ways. The multiscale unwrapping method will repeatedly check for aliasing at each stage, which makes it highly unlikely that an aliased frequency will pass through all the tests. Even in the unlikely event that it does, our algorithm tolerates false positives. Since each mode is subtracted from the original signal in our algorithm, a false positive frequency will lead to an extra mode in the new signal. As the process continues, this extra mode will be extracted and will cancel the false frequency found earlier. \section{Average-case analysis} \label{sec:avgcase} In this section we prove that the average-case runtime and sampling complexity of our algorithm are $\Theta(k\log k)$ and $\Theta(k)$, respectively. This is shown over a class of random signals described in section \ref{sec:random_signal_model}. Before giving this result on the expected runtime and sampling complexity, we estimate in section \ref{sec:inner_loop} the costs of a single iteration of the while loop in algorithm \ref{alg:alg}, lines 5--25. We then describe in section \ref{sec:random_signal_model} the random signal model over which we prove our average-case bounds. In section \ref{sec:markov} we prove that the expected number of iterations of the while loop is constant, and in section \ref{sec:karp} we use this result to prove our average-case bounds.
\subsection{While loop runtime and sampling complexity} \label{sec:inner_loop} The computational cost of the while loop in algorithm \ref{alg:alg}, lines 5--25 is dominated by three operations. The first is the evaluation of the current representation $\bm{R}$ of $k-k^*$ terms at the $\textup{O}(k^*)$ points $j/p_\ell$ in line 10. In our implementation, we simply calculated this directly, looping over both the sample points and the terms in the representation. The complexity of this implementation is $\textup{O}(p_\ell (k-k^*)) = \textup{O}(k^*(k-k^*)) = \textup{O}(k^2)$, and while non-equispaced fast Fourier transforms \cite{dutt1993fast, anderson1996rapid} yield an asymptotically faster runtime of $\textup{O}(k \log(k))$, they also incur large overhead costs. For the values of $k$ considered in this paper, the direct evaluation seems to have little effect on the overall runtime. The other two dominant computational tasks in the inner loop are the FFTs of $\textup{O}(k)$ samples and the subsequent sorting of these DFT coefficients. It is well-known that both of these operations can be done in time $\Theta(k\log(k))$ \cite{cormen2001introduction}. Thus the inner loop has overall time complexity $\Theta(k \log(k))$, assuming the use of non-equispaced FFTs. \subsection{Random signal model} \label{sec:random_signal_model} For both the average-case analysis and for the empirical evaluation described in section \ref{sec:empirical} we considered test signals with frequencies drawn uniformly at random over the bandwidth and coefficients chosen uniformly from the complex unit circle. In other words, given $k$ and $N$, we choose $k$ frequencies $\omega_j$ uniformly at random (without replacement) from $[-N/2,N/2)\cap\mathbb{Z}$. The corresponding Fourier coefficients $a_j$ are of the form $\textup{e}^{2\pi\i\theta_j}$, where $\theta_j$ is drawn uniformly from $[0,1)$. The signal is then given by \begin{equation} \label{eq:test_signal} S(t) = \sum_{j=1}^k a_j \textup{e}^{2\pi\i\omega_j t}.
\end{equation} This is the standard signal model considered in previous empirical evaluations of sub-linear Fourier algorithms \cite{iwen2007empirical, iwen2010combinatorial, hassanieh2012simple}. \subsection{Markov analysis of collisions} \label{sec:markov} In order to analyze the expected runtime and sampling complexity of our algorithms, we must estimate the expected number of collisions among frequencies modulo the sample lengths used by the algorithms. Recall that in the noiseless case, our algorithms are able to detect when a collision between two or more frequencies has occurred, and for those that are not aliased we are able to calculate the value of the frequency. Thus we seek to estimate the expected fraction of frequencies that are aliased modulo a given sample length $p$, since this determines how many passes the algorithm makes. In this section we derive bounds on the expected value of this quantity and discuss how the stopping criteria used in the algorithm affect its average-case performance. In the random signal model described in section \ref{sec:random_signal_model}, the $k$ frequencies are uniformly distributed over the bandwidth $[-N/2,N/2)$, and so the residues $\omega \bmod p$ are also (approximately) uniformly distributed over $[0,p-1]$. Our problem then becomes a classical occupancy problem: the number of collisions among the frequencies is equivalent to the number of multiple-occupancy bins when $k$ balls are thrown uniformly at random into $p$ bins. Define $X_m$ to be the number of single-occupancy bins after $m$ balls are thrown, $Y_m$ to be the number of multiple-occupancy bins after $m$ balls are thrown, and $Z_m$ to be the number of zero-occupancy bins after $m$ balls are thrown. Since $p$ is constant, we have the trivial relationship $Z_m = p-X_m-Y_m$, so it suffices to consider only the pair $(X_m,Y_m)$.
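The occupancy process just defined is easy to simulate directly, which provides a useful sanity check on the analysis; a minimal sketch:

```python
import random

def occupancy_expectations(k, p, trials=2000, seed=1):
    # Empirical estimates of E[X_k] (single-occupancy bins) and E[Y_k]
    # (multiple-occupancy bins) after throwing k balls uniformly at
    # random into p bins.
    rng = random.Random(seed)
    tot_x = tot_y = 0
    for _ in range(trials):
        counts = [0] * p
        for _ in range(k):
            counts[rng.randrange(p)] += 1
        tot_x += sum(c == 1 for c in counts)
        tot_y += sum(c > 1 for c in counts)
    return tot_x / trials, tot_y / trials
```

Averaging over a few thousand trials reproduces the closed-form expectations derived in this section to within Monte Carlo error.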
When the $(m+1)^\textup{st}$ ball is thrown, we have the following possibilities: \begin{itemize} \item it lands in an unoccupied bucket, with probability $Z_m/p = 1-(X_m+Y_m)/p$; \item it lands in a single-occupancy bucket, with probability $X_m/p$; \item it lands in a multiple-occupancy bucket, with probability $Y_m/p$. \end{itemize} In the first case, we have $X_{m+1}=X_m+1, \; Y_{m+1} = Y_m$; in the second case, we have $X_{m+1} = X_m-1, \; Y_{m+1}=Y_m+1$; and in the third case, we have $X_{m+1}=X_m, \; Y_{m+1}=Y_m$. Conditioning on the values of $X_m,\,Y_m$ we have \begin{equation} \label{eq:EXY_cond} \mathbb{E}\left( \left[\begin{array}{c}X_{m+1} \\Y_{m+1}\end{array}\right]\left| \left[\begin{array}{c}X_{m} \\Y_{m}\end{array}\right]\right.\right) = \left[\begin{array}{cc}1-2/p & -1/p \\1/p & 1\end{array}\right]\left[\begin{array}{c}X_m \\Y_m \end{array}\right] + \left[\begin{array}{c}1 \\0\end{array}\right], \end{equation} so that the system forms a Markov chain. By recursively conditioning on the values of $X_{m-1},\,Y_{m-1}$, we can calculate the expected values of $X_k,\,Y_k$ for any $k>0$ using the initial condition $X_1=1,\,Y_1=0$. Denoting by $A$ the matrix in the right-hand side of equation~\eqref{eq:EXY_cond}, we have \begin{equation} \label{eq:EXYk_A} \mathbb{E}\left( \left[\begin{array}{c}X_k \\Y_k\end{array}\right]\right) = \sum_{m=0}^{k-1}\left(A^m \left[\begin{array}{c}1 \\0\end{array}\right] \right) = \left( \sum_{m=0}^{k-1}A^m\right) \left[\begin{array}{c}1 \\0\end{array}\right]. \end{equation} Since $\rho(A) = 1-1/p < 1$, where $\rho$ is the spectral radius, the geometric matrix series can be written \begin{equation} \label{eq:geomat} \sum_{m=0}^{k-1}A^m = (I-A)^{-1}(I-A^k). 
\end{equation} After some linear algebra, we obtain \begin{equation} \label{eq:EXYk_final} \mathbb{E}\left( \left[\begin{array}{c}X_k \\Y_k\end{array}\right]\right) = \left[\begin{array}{c}k(1-\frac{1}{p})^{k-1} \\p(1-(1-\frac{1}{p})^k)-k(1-\frac{1}{p})^{k-1}\end{array}\right]. \end{equation} Since $Z_k=p-X_k-Y_k$, we have $\mathbb{E}\left(Z_k\right) = p(1-1/p)^k$. In our algorithms we choose $p=ck$ for some small integer $c$. Using this and the approximation $(1+\frac{x}{n})^n \approx \textup{e}^x$, we have \begin{equation} \label{eq:EXYk_approx} \mathbb{E}\left( \left[\begin{array}{c}X_k \\Y_k\end{array}\right]\right) \approx \left[\begin{array}{c}k\textup{e}^{-1/c} \\ck(1-\textup{e}^{-1/c})-k\textup{e}^{-1/c}\end{array}\right]. \end{equation} This gives a nonlinear equation for the expected number of collisions among $k$ frequencies as a function of the parameter $c$. Newton's method can then be used to determine the value $c$ required to ensure a desired fraction of the frequencies are not aliased. For example, to ensure that 90\% of frequencies are isolated on average, it suffices to take $c=5$; this value for the parameter $c$ had already been found to give good performance in our empirical evaluation of the algorithms. \subsection{Average-case runtime and sampling complexity} \label{sec:karp} In this section we will use a probabilistic recurrence relation due to Karp \cite{karp1994probabilistic,dubhashi2009concentration} to give average-case performance bounds and concentration results for the case when the algorithm is halted after identifying $k$ or more terms. In particular, we use the following theorem for recurrences of the form \begin{equation} \label{eq:prob_recur} T(k) = a(k) + T(H(k)), \end{equation} where $T(k)$ denotes the time required to solve an instance of size $k$, $a(k)$ is the amount of work done on a problem of size $k$, and $0\le H(k) \le k$ is a random variable denoting the size of the subproblem generated by the algorithm. 
\begin{thm}{\cite[Theorem 1.2]{karp1994probabilistic}} \label{thm:karp} Suppose $a(k)$ is nondecreasing, continuous, and strictly increasing on $\{x:a(x)>0\}$, and that $\mathbb{E}[H(k)] \le m(k)$ for a nondecreasing continuous function $m(k)$ such that $m(k)/k$ is also nondecreasing. Denote by $u(k)$ the solution to the deterministic recurrence \begin{equation} \label{eq:det_recur} u(k) = a(k) + u(m(k)). \end{equation} Then for $k>0$ and $t\in\mathbb{N}$, \begin{equation} \label{eq:karp_thm} \P[T(k) > u(k) + ta(k)] \le \left( \frac{m(k)}{k}\right)^t. \end{equation} \end{thm} Our algorithm does work $a(k) = \Theta(k\log(k))$ on input of size $k$ and generates a subproblem whose average size is $m(k) = k/10$. (Recall from section \ref{sec:markov} that with the parameter $c=5$, on average over 90\% of the frequencies were not aliased modulo $p = \textup{O}(ck)$.) The associated deterministic recurrence is then \begin{equation} \label{eq:our_det_recur} u(k) = \Theta(k\log(k)) + u(k/10), \end{equation} whose solution is $u(k) = \Theta(k\log(k))$ (see, e.g., \cite{cormen2001introduction}). A straightforward application of Theorem \ref{thm:karp} yields \begin{equation} \label{eq:conc_bound} \P[T(k) > \Theta(k\log(k)) + t k\log(k)] \le 10^{-t}, \end{equation} so that the runtime is tightly concentrated about its mean $\Theta(k\log(k))$. The sampling complexity $S(k)$ can be handled in an analogous manner, since in this case $a(k) = \Theta(k)$ and $m(k) = k/10$ as before. The associated deterministic recurrence becomes \begin{equation} \label{eq:our_det_samp_recur} u(k) = \Theta(k) + u(k/10), \end{equation} whose solution is $u(k) = \Theta(k)$. Applying Theorem \ref{thm:karp} again we have \begin{equation} \P[S(k) > \Theta(k) + tk] \le 10^{-t}, \end{equation} so that we again have tight concentration of the number of samples around the mean $\Theta(k)$. 
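The solution of the deterministic recurrence \eqref{eq:our_det_recur} can be checked numerically; the following sketch takes the hidden constant in $\Theta(k\log k)$ to be 1 and uses the subproblem size $m(k)=k/10$ from above.

```python
import math

def u(k):
    # Deterministic recurrence u(k) = k*log(k) + u(k // 10), u(1) = 0:
    # one pass of work followed by a subproblem a tenth the size.
    total = 0.0
    while k > 1:
        total += k * math.log(k)
        k //= 10
    return total
```

For $k = 10^6$ the ratio $u(k)/(k\log k)$ is about $1.09$, consistent with $u(k) = \Theta(k\log k)$ with a small leading constant (the geometric series bounds it by $10/9$ for large $k$).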
\section{Empirical Evaluation} \label{sec:empirical} In this section we describe the results of an empirical evaluation of the \emph{adaptive} deterministic and Las Vegas variants of the Phaseshift algorithm described above. Both algorithms were implemented in C++ using FFTW 3.0 \cite{frigo2005design} for the FFTs, using \texttt{FFTW\_ESTIMATE} plans since the sample lengths are not known in advance for the Las Vegas variant. For comparison we also ran the same tests on the four variants of GFFT as well as on AAFFT and FFTW itself. The FFTW runs utilized the \texttt{FFTW\_PATIENT} plans with wisdom enabled, and so are highly optimized. The experiments were run on a single core of an Intel Xeon E5620 CPU with a clock speed of 2.4 GHz and 24 GB of RAM, running SUSE Linux with kernel 2.6.16.60-0.81.2-smp for x86\_64. All code was compiled with the Intel compiler using the \texttt{-fast} optimization flag. As in~\cite{iwen2011improved}, timing is reported in CPU ticks using the \texttt{cycle.h} file included with the source code for FFTW. In the following sections we refer to our algorithm as ``Phaseshift'', since by taking shifted time samples of the input signal we also shift the phase of the Fourier coefficients. To keep the plots readable, we only show data for the adaptive, deterministic variant of our algorithm; the other variants perform similarly. The algorithms of~\cite{iwen2011improved} are denoted GFFT-XY, where X $\in \{$D,R$\}$ and Y $\in\{$F,S$\}$. The D/R stands for deterministic or randomized, while the F/S stands for fast or slow. The fast variants use more samples but less runtime, while the slow variants use fewer samples but more runtime. In the plots below, we always show the GFFT variant with the most favorable sampling or runtime complexity. Finally, AAFFT denotes the algorithm of~\cite{gilbert2005improved}.
The implementations tested are summarized in table~\ref{tab:implementations} along with the average-case sampling and runtime complexities, and the associated references.
\begin{table}
\centering
\caption{Implementations used in the empirical evaluation.}
\begin{tabular}{c|c|c|c|c}
\hline
Algorithm & R/D & Samples & Runtime & Reference \\%& Source Code \\
\hline
PS-Det & D & $k$ & $k\log k$ & Section \ref{sec:avgcase} \\%& forthcoming \\
PS-LV & R & $k$ & $k\log k$ & Section \ref{sec:avgcase} \\%& forthcoming \\
GFFT-DF& D & $k^2\log^4 N$ & $k^2\log^4 N$ & \cite{iwen2011improved} \\%& \url{gopherfft.sourceforge.net} \\
GFFT-DS& D & $k^2 \log^2 N$ & $N k\log^2 N$ & \cite{iwen2011improved} \\%& \url{gopherfft.sourceforge.net}\\
GFFT-RF& R & $k\log^4 N$ & $k\log^5 N$ & \cite{iwen2011improved} \\%& \url{gopherfft.sourceforge.net}\\
GFFT-RS& R & $k\log^2 N$ & $N \log N$ & \cite{iwen2011improved} \\%&\url{gopherfft.sourceforge.net}\\
AAFFT & R & $k \log^c N$ & $k \log^c N$ & \cite{gilbert2005improved} \\%& \url{aafftannarborfa.sourceforge.net}\\
FFTW & D & $N$ & $N\log N$ & \cite{frigo2005design} \\%& \url{fftw.org}
\end{tabular}
\label{tab:implementations}
\end{table}
\subsection{Setup} Each data point in Fig.~\ref{fig:fixed_n}--\ref{fig:fixed_k} is the average of 100 independent trials of the associated algorithm for the given values of the bandwidth $N$ and the sparsity $k$. The lower and upper bars associated with each data point represent the minimum and maximum number of samples or runtime of the algorithm over the 100 test functions. The values of $k$ tested were $2, 4, 8, \ldots, 4096$, while the values of $N$ were $2^{17}, 2^{18}, \ldots, 2^{26}$. For larger values of $k$, the slow GFFT variants and AAFFT took too long to complete on our hardware, so we only present partial data for these algorithms. Nevertheless, the trend seen in the plots below continues for higher values of the sparsity.
The test signals were generated according to the signal model described in section \ref{sec:random_signal_model}. The Phaseshift and deterministic GFFT variants will always recover such signals exactly. The randomized GFFT variants are Monte Carlo algorithms, and so, when they succeed, will also recover the signal exactly. AAFFT, on the other hand, is an approximation algorithm which will fail on a non-negligible set of input signals. However, for the runs depicted in Fig.~\ref{fig:fixed_n}--\ref{fig:fixed_k}, AAFFT always produced an answer with $\ell_2$ error less than $10^{-4}$. The randomized GFFT variants failed a total of 7 times out of 2200 test signals, a small failure rate that can be reduced further by parameter tuning. For the Phaseshift variants, we chose the parameters $c_1 = 5, \; c_2 = 10$, and took the shift $\varepsilon$ to be $1/2N$. Finally, for the randomized GFFT variants, we chose the Monte Carlo parameter to be $1.2$. \subsection{Sampling Complexity} \label{sec:sampling} In Fig.~\ref{fig:fixed_n} (a), we compare the average number of samples of the input signal $S$ required by each algorithm when the bandwidth $N$ is fixed at $2^{22}$. The sparsity of the test signal is varied from 2 to 4096 by powers of two. We can see that the Phaseshift variants require over an order of magnitude fewer samples than GFFT-RS, the GFFT variant with the lowest sampling requirements. Both Phaseshift variants also require over an order of magnitude fewer samples than AAFFT. The comparison with the deterministic GFFT variants is even starker; Phaseshift-Det requires two orders of magnitude fewer samples than GFFT-DS (not shown), and four orders of magnitude fewer samples than GFFT-DF (not shown). In Fig.~\ref{fig:fixed_k} (a), we compare the average number of samples of the input signal $S$ required by each algorithm when the sparsity $k$ is fixed at 60. The bandwidth $N$ was varied from $2^{17}$ to $2^{26}$ by powers of two.
Using powers of two for the bandwidth allows the best performance for both FFTW and AAFFT, though this fact is more relevant for the runtime comparisons in the following section. We can see that the Phaseshift variants require many fewer samples than all four GFFT variants as well as AAFFT and FFTW, for all values of $N$ tested. The Phaseshift variants exhibit almost no dependence on the bandwidth for all values of $N$, a feature not shared by the other deterministic algorithms. We note here that in future work we plan to replace the $1/2N$ shift by two or more larger shifts with co-prime denominators to obtain an equivalent shift, as in~\cite{wang1998use}. This should lead to more robustness at high values of $N$. \subsection{Runtime Complexity} \label{sec:runtime} In Fig.~\ref{fig:fixed_n} (b), we compare the average runtime of each algorithm over 100 test signals when the bandwidth $N$ is fixed at $2^{22}$. The range of sparsity $k$ considered is the same as in section~\ref{sec:sampling}. For all values of $k$ the Phaseshift variants are faster than GFFT-RF (the fastest GFFT variant) and AAFFT by more than an order of magnitude. When compared to GFFT-RS (not shown), GFFT-DS (not shown), and FFTW, the difference in runtime is closer to three orders of magnitude. In Fig.~\ref{fig:fixed_k} (b), we compare the average runtime of each algorithm over 100 test signals when the sparsity $k$ is fixed at 60. The range of bandwidth considered is the same as in section~\ref{sec:sampling}. The Phaseshift variants are the only algorithms that outperform FFTW for all values of $N$ tested. The other implementations tested only become competitive with the standard FFT for $N \gtrsim 2^{20}$, while ours are faster even for modest $N$. 
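The co-prime-shift idea mentioned above amounts to a Chinese-remainder reconstruction of a frequency from its residues with respect to co-prime moduli. A sketch of the reconstruction step (our own illustration of the planned technique, not part of the evaluated implementation):

```python
def crt_combine(r1, m1, r2, m2):
    # Combine omega = r1 (mod m1) and omega = r2 (mod m2), with
    # gcd(m1, m2) == 1, into omega mod m1*m2.
    s = pow(m1, -1, m2)  # modular inverse of m1 mod m2 (Python 3.8+)
    return (r1 + (r2 - r1) * s % m2 * m1) % (m1 * m2)
```

For instance, the residues of $\omega = 387$ modulo the co-prime pair $(23, 25)$ determine $\omega$ uniquely, since $387 < 23 \cdot 25$.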
\begin{figure} \centering \subfloat[]{\includegraphics[width=0.8\textwidth]{samples_fixed_n}}\\ \subfloat[]{\includegraphics[width=0.8\textwidth]{runtime_fixed_n}} \caption{(a) Sampling complexity with fixed bandwidth $N=2^{22}$ for PS-Det (blue solid line), GFFT-RS (red solid line), AAFFT (black dashed line), and FFTW (magenta dashed line). (b) Runtime complexity with fixed bandwidth $N=2^{22}$ for PS-Det (blue solid line), GFFT-RF (red solid line), AAFFT (black dashed line), and FFTW (magenta dashed line).} \label{fig:fixed_n} \end{figure} \begin{figure} \centering \subfloat[]{\includegraphics[width=0.8\textwidth]{samples_fixed_k}}\\ \subfloat[]{\includegraphics[width=0.8\textwidth]{runtime_fixed_k}} \caption{(a) Sampling complexity with fixed sparsity $k=60$ for PS-Det (blue solid line), GFFT-RS (red solid line), AAFFT (black dashed line), and FFTW (magenta dashed line). (b) Runtime complexity with fixed sparsity $k=60$ for PS-Det (blue solid line), GFFT-RF (red solid line), AAFFT (black dashed line), and FFTW (magenta dashed line).} \label{fig:fixed_k} \end{figure} \subsection{Noisy Case} We report here on a preliminary study of the performance of the deterministic algorithm in the presence of noise. Our noisy signals were of the same form as in the previous section, but with complex white Gaussian noise of standard deviation $\sigma$ added to each measurement. As described in section \ref{sec:alg-noise}, the simplest way to deal with low-level noise is to simply round the reconstructed frequencies to the nearest integer of the form $ap_j + b$, where $b \equiv \omega \bmod p_j$ is the location of the peak in a length-$p_j$ DFT. This modification does not change the runtime or sampling complexity significantly, so in this section we focus on the error in the approximation as a function of the noise level $\sigma$ and the parameter $c_1$.
In the existing literature on the sparse Fourier transform, the $\ell_2$ norm is most often used to assess the quality of approximation. There are many reasons for this choice, with the two most convincing perhaps being the completeness of the complex exponentials with respect to the $\ell_2$ norm and Parseval's theorem. For certain applications, however, this choice of norm is inappropriate. For example, in wide-band spectral estimation and radar applications, one is interested in identifying a set of frequency intervals containing active Fourier modes. In this case, an estimate $\wt{\omega}$ of the true frequency $\omega$ with $|\wt{\omega}-\omega| \ll N$ is useful, but unless $\wt{\omega}=\omega$ the $\ell_2$ metric will report an $\textup{O}(1)$ error. Furthermore, when considering non-periodic signals (equivalently, non-integer $\omega$'s) the same precision problem appears when using the $\ell_2$ metric. For these reasons, we propose measuring the approximation error of sparse Fourier transform problems with the Earth Mover Distance (EMD) \cite{rubner2000earth}. Originally developed in the context of content-based image retrieval, EMD measures the minimum cost that must be paid (with a user-specified cost function) to transform one distribution of points into another. EMD can be calculated efficiently as the solution of a linear program corresponding to a certain flow minimization problem. In our situation, we consider the cost to move a set of estimated Fourier modes and coefficients $\left\{(\wt{\omega}_j,c_{\wt{\omega}_j})\right\}_{j=1}^{\wt{k}}$ to the true values $\left\{(\omega_j, c_{\omega_j})\right\}_{j=1}^k$ under the cost function \begin{equation} \label{eq:emd-cost} d_1\big( (\omega,c_\omega), (\wt{\omega},c_{\wt{\omega}}); N \big) \stackrel{\textup{def}}{=} \frac{|\omega-\wt{\omega}|}{N} + |c_\omega-c_{\wt{\omega}}|.
\end{equation} This choice of cost function strikes a balance between the fidelity of the frequency estimate (as a fraction of the bandwidth) and that of the coefficient estimate. We denote the EMD using $d_1$ for the cost function by EMD(1) below. In figure \ref{fig:error} we report the average EMD(1) error over 100 test signals as a function of the input noise level $\sigma$, for various choices of the parameter $c_1$. In this experiment, the sparsity and bandwidth are fixed at $k=64$ and $N=2^{22}$, respectively. As expected, the error decreases as $c_1$ increases, since the rounding procedure described in section \ref{sec:alg-noise} is more likely to result in the true frequency. Moreover, the error increases linearly with the noise level, indicating the procedure's robustness in the presence of noise. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{error} \caption{EMD(1) error as a function of the noise level $\sigma$ for various choices of the parameter $c_1$. The sparsity and bandwidth are fixed at $k=64$, $N=2^{22}$, respectively.} \label{fig:error} \end{figure} We remark that in the noiseless case the choice $c_1=5$ was found to be sufficient, while figure \ref{fig:error} indicates that the much larger value $c_1\approx 256$ is necessary for good approximation in the EMD(1) metric. The larger sample lengths imply an increase in both the runtime and sampling complexity, and indicate that the rounding procedure of section \ref{sec:alg-noise} should be complemented by other modifications. This is the purpose of a second manuscript under preparation, in which we combine the rounding procedure with the use of larger shifts $\varepsilon_j$ in a multiscale approach to frequency estimation. \section{Conclusion} \label{sec:conclusion} In this paper we have presented deterministic and Las Vegas algorithms for the sparse Fourier transform problem that empirically outperform existing algorithms in average-case sampling and runtime complexity.
While our worst-case bounds do not improve the asymptotic complexity, we are able to extend by an order of magnitude the range of sparsity for which our algorithm is faster than FFTW in the average case. The improved performance of our algorithm can be attributed to two major factors: adaptivity and the ability to detect aliasing. In particular, we are able to extract more information from a small number of function samples by considering the \emph{phase} of the DFT coefficients in addition to their magnitudes. This represents a significant improvement over the current state of the art for the sparse Fourier transform problem. We have developed a multiresolution approach to handle the noisy case, in which we learn the value of a frequency from most to least significant bit by increasing the size of the shift $\varepsilon$. Finally, we are exploring the extension of these methods to handle non-integer frequencies, which would represent the first such result in the sparse Fourier transform context. \section*{Acknowledgments} We would like to thank Mark Iwen and I. Ben Segal for making available the source code to the AAFFT and GFFT algorithms, Yossi Rubner for making available the source code for the Earth Mover Distance, and Piotr Indyk and Eric Price for sharing a preprint and source code for the sFFT algorithm. We also acknowledge helpful discussions with Anna Gilbert and Martin Strauss. \bibliographystyle{amsalpha}
\section{Introduction} \label{intro} Transformer-based models \citep{vaswani2017attention} have achieved much progress across many areas of NLP including text classification \cite{Minaee_2021}. However, such progress is often limited to short sequences because self-attention requires quadratic computational time and space with respect to the input sequence length. Widely-used models like BERT \citep{devlin-etal-2019-bert} or RoBERTa \citep{liu2019roberta} are typically pretrained to process up to 512 tokens. This is problematic because real-world data can be arbitrarily long. As such, different models and strategies have been proposed to process longer sequences. In particular, we can identify a few standard approaches for the task of long document classification. The simplest approach is to truncate long documents --- using BERT or RoBERTa on the first 512 tokens is often used as a baseline. More efficient Transformer models like Longformer \cite{beltagy2020longformer} and Big Bird \cite{zaheer2020big} use sparse self-attention instead of full self-attention to process longer documents (e.g. up to 4,096 tokens). Other approaches process long documents in their entirety by dividing them into smaller chunks \cite[e.g.][]{pappagari2019hierarchical}. An alternative idea proposed by recent work is to select sentences from the document that are salient to making the classification decision \cite{ding_cogltx_2020}. However, the relative efficacy of these models is not very clear due to a lack of consensus on benchmark datasets and baselines. \citet{tay2020long} propose a benchmark for comparing Transformers that can operate over long sequences, but this only includes a single, simulated\footnote{The benchmark considers the task of classifying IMDB reviews \cite{maas-etal-2011-learning} using byte-level information to simulate longer documents.} long document classification task. 
Novel variants of efficient Transformers are often compared to a BERT/RoBERTa baseline only, without much comparison to other Transformer models designed for the task \cite[e.g.][]{beltagy2020longformer, zaheer2020big}. Conversely, models designed for long document classification often focus exclusively on state-of-the-art models for particular datasets, and do not consider a BERT/RoBERTa baseline or any other Transformer models \cite[e.g.][]{ding_cogltx_2020,pappagari2019hierarchical}. This paper provides a much-needed comprehensive comparison among existing models for long document classification by evaluating them against unified datasets and baselines. We compare models that represent different approaches on various datasets and against Transformer baselines. Our datasets cover binary, multi-class, and multi-label classification. We also consider different ways in which information relevant to the classification decision is organized in texts (e.g. at the beginning or toward the end) and how this affects model performance. We also compare the models in terms of their training time, inference time, and GPU memory requirements to account for the additional complexity that some of the models have relative to a BERT baseline. This allows us to compare the practical efficacy of the models for real-world usage. Our results show that more sophisticated models are often outperformed by simpler models (often including a BERT baseline) and yield inconsistent performance across datasets. Based on these findings, we highlight the importance of considering diverse datasets while developing models, especially those that represent different ways key information is presented in long texts. Additionally, we recommend that future research always include simpler baseline models.
To summarize, our contributions are: \begin{itemize} \item We provide insights into the practical efficacy of existing models for long document classification by evaluating them across different datasets, and against several baselines. We compare the accuracy of these models as well as their runtime and memory requirements. \item We present a comprehensive suite of evaluation datasets for long document classification with various data settings for future studies. \item We propose simple models that often outperform complex models and can be challenging baselines for future models for this task. \end{itemize} \section{Methods} In this paper, we compare models representing different approaches to long document classification \cite{beltagy2020longformer, pappagari2019hierarchical, ding_cogltx_2020} on unified datasets and baselines. \subsection{Existing Models} As described in \S\ref{intro}, four distinct approaches have been proposed for long document classification: 1) document truncation, 2) efficient self-attention, 3) chunk representations, 4) key sentence selection. We evaluate a representative model from each category in this work. \paragraph{BERT (document truncation)} The simplest approach consists of finetuning BERT after truncating long documents to the first 512 tokens.\footnote{In practice, the first 510 tokens are used along with the [CLS] and [SEP] tokens. We use the token count including the two special tokens throughout the paper for simplicity.} As in \citet{devlin-etal-2019-bert}, we use a fully-connected layer on the [CLS] token for classification. This is an essential baseline as it establishes the limitations of a vanilla BERT model in classifying long documents yet is still competitive \cite[e.g.][]{beltagy2020longformer,chalkidis-etal-2019-large}. However, some prior work fails to consider this baseline \cite[e.g.][]{ding_cogltx_2020, pappagari2019hierarchical}. 
\paragraph{Longformer (efficient self-attention)} We select Longformer \cite{beltagy2020longformer} as a model designed to process longer input sequences based on efficient self-attention that scales linearly with the length of the input sequence \cite[see][for a detailed survey]{tay2020efficient}. Longformer also truncates the input, but it has the capacity to process up to 4,096 tokens rather than 512 tokens as in BERT. Following \citet{beltagy2020longformer}, we use a fully-connected layer on top of the first [CLS] token with global attention. Longformer outperformed a RoBERTa baseline significantly on a small binary classification dataset \cite{beltagy2020longformer}. However, it has not been evaluated against any other models for text classification or on larger datasets that contain long documents. \paragraph{ToBERT (chunk representations)} Transformer over BERT \cite[ToBERT,][]{pappagari2019hierarchical} takes a hierarchical approach that can process documents of any lengths in their entirety. The model divides long documents into smaller chunks of 200 tokens and uses a Transformer layer over BERT-based chunk representations. It is reported to outperform previous state-of-the-art models on datasets of spoken conversations. However, it has not been compared to other Transformer models. We re-implement this model based on the specifications reported in \citet{pappagari2019hierarchical} as the code is not publicly available. \paragraph{CogLTX (key sentence selection)} Cognize Long TeXts \cite[CogLTX,][]{ding_cogltx_2020} jointly trains two BERT (or RoBERTa) models to select key sentences from long documents for various tasks including text classification. The underlying idea that a few key sentences are sufficient for a given task has been explored for question answering \cite[e.g.][]{Min_2018}, but not much for text classification. It is reported to outperform ToBERT and some other neural models (e.g. 
CNN), but it has not been evaluated against other Transformer models. We use their multi-class classification code for any classification task with appropriate loss functions.\footnote{\url{https://github.com/Sleepychord/CogLTX}} Following \citet{beltagy2020longformer}, we use sigmoid and binary cross entropy loss on the logit output of the models for binary classification. The same setting is used for multi-label classification with softmax normalization and cross entropy loss. \subsection{Novel Baselines} In addition to the representative models above, we include two novel methods that serve as simple but strong baseline models. \paragraph{BERT+TextRank} While the BERT truncation baseline is often effective, key information required to classify documents is not always found within the first 512 tokens. To account for this, we augment the first 512 tokens with a second set of 512 tokens obtained via TextRank, an efficient unsupervised sentence ranking algorithm \cite{mihalcea-tarau-2004-textrank}. TextRank provides an efficient alternative to more complex models designed to select key sentences such as CogLTX. Specifically, we concatenate the BERT representation of the first 512 tokens with that of the top-ranked sentences from TextRank (up to another 512 tokens). As before, we use a fully-connected layer on top of the concatenated representation for classification. We use PyTextRank \cite{PyTextRank} as part of the spaCy pipeline \cite{spacy} for the implementation with the default settings. \paragraph{BERT+Random} As an alternative approach to the BERT+TextRank model, we select random sentences up to 512 tokens to augment the first 512 tokens. Like BERT+TextRank, this can be a simple baseline approach in case key information is missing in truncated documents.\footnote{For simplicity, sentences included in the first 512 tokens are not excluded in the random selection process.
Different settings are possible, but our preliminary results did not show much difference.} \subsection{Hyperparameters} We use reported hyperparameters for the existing models whenever available. However, given that we include different datasets that the original papers did not use, we additionally explore different hyperparameters for the models. Detailed information is available in Appendix~\ref{sec:appendix-hyper}. \subsection{Data} We select three classification datasets containing long documents to cover various kinds of classification tasks: Hyperpartisan \cite{kiesel-etal-2019-semeval} (binary classification), 20NewsGroups \cite{Lang95-20news} (multi-class classification) and EURLEX-57K \cite{chalkidis-etal-2019-large} (multi-label classification). We also re-purpose the CMU Book Summary Dataset \cite{bamman2013new} as an additional multi-label classification dataset. \begin{table}[t] \centering \begin{tabular}{lrr} \hline Dataset & \# BERT Tokens & \% Long\\ \hline \hline Hyperpartisan & 744.2 $\pm$ 677.9 & 53.5\\ 20NewsGroups & 368.8 $\pm$ 783.8 & 14.7\\ EURLEX-57K & 708.0 $\pm$ 538.7 & 51.3\\ Book Summary & 574.3 $\pm$ 659.6 & 38.8\\ \hspace{10pt}-- Paired & 1,148.6 $\pm$ 933.9 & 75.5\\ \hline \end{tabular} \caption{Statistics on the datasets. \# BERT Tokens refers to the average token count obtained via the tokenizer of the BERT base (uncased) model. \% Long refers to the percentage of documents with over 512 BERT tokens.} \label{tab:datasets} \end{table} We also modify the EURLEX and Book Summary datasets to represent different data settings and further test all models under these challenging variations. A document in the EURLEX dataset contains a legal text divided into several sections, and the first two sections (header, recitals) carry the most relevant information for classification \cite{chalkidis-etal-2019-large}. We invert the order of the sections so that this key information is located toward the end of each document (Inverted EURLEX).
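The inversion step can be sketched as follows (a minimal sketch; it assumes each document is stored as a list of section strings, and the helper name is ours):

```python
def invert_sections(sections):
    """Inverted EURLEX: reverse the section order so that the most
    informative sections (header, recitals) move toward the end."""
    return list(reversed(sections))

doc = ["header", "recitals", "main body", "attachments"]
print(invert_sections(doc))  # ['attachments', 'main body', 'recitals', 'header']
```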
This creates a dataset particularly challenging for models that focus only on the first 512 tokens. We also combine pairs of book summaries from the CMU Book Summary dataset to create a new dataset (Paired Book Summary) that contains longer documents with two distinctive information blocks. Again, this challenges models not to rely solely on the signals from the first 512 tokens. In addition, it further challenges models to detect two separate sets of signals for correct classification results. In all, these modified datasets represent different ways information may be presented in long texts and test how robust the existing models are to them. Table \ref{tab:datasets} summarizes characteristics of all our datasets, with more details in Appendix~\ref{sec:appendix-datasets}. \begin{table*}[ht!] \centering \begin{tabularx}{\textwidth}{l *{6}{Y} } \hline \multirow{2}{*}{Model} & Hyper- & 20News & \multirow{2}{*}{EURLEX} & Inverted & Book & Paired \\ & partisan & Groups && EURLEX & Summary & Summary\\ \hline \hline BERT & 92.00 & 84.79 & \underline{73.09} & 70.53 & 58.18 & 52.24 \\ BERT+TextRank & \cellcolor{Gray}91.15 & \underline{84.99} & \cellcolor{Gray}72.87 & \underline{71.30} & \underline{58.94} & 55.99\\ BERT+Random & \cellcolor{Gray}89.23 & \cellcolor{Gray}84.65 & \textbf{73.22} & \textbf{71.47} & \textbf{59.36} & 56.58 \\ Longformer & \textbf{95.69} & \cellcolor{Gray}83.39 & \cellcolor{Gray}54.53 & \cellcolor{Gray}56.47 & \cellcolor{Gray}56.53 & \textbf{57.76} \\ ToBERT & \cellcolor{Gray}89.54 & \textbf{85.52} & \cellcolor{Gray}67.57 & \cellcolor{Gray}67.31 & \cellcolor{Gray}58.16 & \underline{57.08} \\ CogLTX & \underline{94.77} & \cellcolor{Gray}84.63 & \cellcolor{Gray}70.13 & 70.80 & 58.27 & 55.91 \\ \hline \end{tabularx} \caption{Performance metrics on the test set for all datasets. The average accuracy (\%) over five runs is reported for Hyperpartisan and 20NewsGroups while the average micro-$F_{1}$ (\%) is used for the other datasets.
The highest value per column is in bold and the second highest value is underlined. Results below the BERT baseline are shaded. } \label{tab:results-on-all-docs} \end{table*} \subsection{Metrics} For the binary (Hyperpartisan) and multi-class (20NewsGroups) classification tasks, we report accuracy (\%) on the test set. For the remaining datasets, all multi-label classification tasks, we use micro-$F_{1}$ (\%), which is based on summing up the individual true positives, false positives, and false negatives for each class.\footnote{The choice of these metrics is based on previous literature. An exploration of other metrics (e.g. macro-$F_{1}$) may provide further insights. However, we did not see significant differences in preliminary results, and we believe the general trend of results would not differ.} \section{Results} Table \ref{tab:results-on-all-docs} summarizes the average performance of the models over five runs with different random seeds. Overall, the key takeaway is that more sophisticated models (Longformer, ToBERT, CogLTX) do not outperform the baseline models across the board. In fact, these models are significantly more accurate than the baselines only on two datasets. As reported in \citet{beltagy2020longformer}, Longformer recorded the strongest performance on Hyperpartisan, with CogLTX also performing well. Longformer and ToBERT performed the best for Paired Book Summary. Paired Book Summary seems to be most challenging for all models across the board and is the only dataset where the BERT baseline did the worst. However, it is worth noting that simple augmentations of the BERT baseline as in BERT+TextRank and BERT+Random were not far behind the best-performing model even for this challenging dataset. ToBERT's reported performance was the highest for 20NewsGroups, but we were unable to reproduce the results due to its memory constraints. For the other datasets, these more sophisticated models were outperformed by the baselines.
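For concreteness, the micro-$F_{1}$ metric described in the Metrics subsection can be sketched as follows (a minimal sketch assuming each example's gold and predicted labels are encoded as sets of label indices; the helper name is ours):

```python
def micro_f1(gold, pred):
    """Micro-F1: sum true positives, false positives, and false
    negatives over all classes before computing precision/recall."""
    tp = sum(len(g & p) for g, p in zip(gold, pred))
    fp = sum(len(p - g) for g, p in zip(gold, pred))
    fn = sum(len(g - p) for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = [{0, 1}, {2}]
pred = [{0}, {2, 3}]
print(round(micro_f1(gold, pred), 3))  # tp=2, fp=1, fn=1 -> 0.667
```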
In particular, the simplest BERT baseline that truncates documents to the first 512 tokens shows competitive performance overall, outperforming the majority of models for Hyperpartisan, 20NewsGroups and EURLEX. It is only on the Paired Book Summary dataset that the BERT baseline performed notably worse than the other models. In general, we observe little-to-no performance gains from more sophisticated models across the datasets as compared to simpler models. A similar trend was observed even when the models were evaluated only on long documents in the test set (Appendix~\ref{sec:long-doc-results}). These findings suggest that the existing models do not necessarily work better for long documents across the board when diverse datasets are considered. \begin{table}[t] \centering \begin{tabular}{l|rrc} \hline \multirow{2}{*}{Model} & Train & Inference & GPU\\ & Time & Time & Memory \\ \hline \hline BERT & 1.00 & 1.00 & $<$16 \\ +TextRank & 1.96 & 1.96 & 16\\ +Random & 1.98 & 2.00 & 16\\ Longformer & 12.05 & 11.92 & 32\\ ToBERT & 1.19 & 1.70 & 32\\ CogLTX & 104.52 & 12.53 & $<$16\\ \hline \end{tabular} \caption{Runtime and memory requirements of each model, relative to BERT, based on experiments on the Hyperpartisan dataset. Training and inference time were measured and compared in seconds per epoch. GPU memory requirement is in GB. Longformer and ToBERT were trained on a GPU with larger memory and compared to a comparable run on the same machine.} \label{tab:time-space-results} \end{table} The relatively inconsistent performance of these existing models is even more underwhelming considering the difference in runtime and memory requirements as summarized in Table \ref{tab:time-space-results}. Compared to BERT on the first 512 tokens, Longformer takes about 12x more time for training and inference while CogLTX takes even longer. ToBERT is faster than those two, but it requires much more GPU memory to process long documents in their entirety.
Taken together with the inconsistency in accuracy/F1 scores, this suggests that sophisticated models are not necessarily a good fit for real-world use cases where efficiency is critical. \section{Discussion and Recommendations} Our results show that complex models for long document classification do not consistently outperform simple baselines. The fact that the existing models were often outperformed by the simplest BERT baseline suggests that the datasets tend to have key information accessible in the first 512 tokens. This is somewhat expected as the first two sections of EURLEX are reported to carry the most information \cite{chalkidis-etal-2019-large} and 20NewsGroups contains mostly short documents. Including these datasets to evaluate models for long document classification is still reasonable given that a good model should work well across different settings. However, these datasets alone do not represent the various ways information is presented in long texts. Instead, future studies should evaluate their models across various datasets to create robust models. While it is often difficult to obtain datasets suited for long document classification, our modifications of existing datasets may provide templates for repurposing existing datasets in future studies. We invert the order of the sections of EURLEX to create the Inverted EURLEX dataset, where key information is likely to appear toward the end of each document. Our results in Table \ref{tab:results-on-all-docs} show that selective models (BERT+TextRank, BERT+Random, CogLTX) performed better than those that read longer consecutive sequences (Longformer, ToBERT) on this dataset. This suggests that this inverted dataset may contain parts of texts that should be ignored for better performance, thus providing a novel test bed for future studies. The Paired Book Summary dataset presents another challenging data setting with two distinctive information blocks.
While Longformer and ToBERT performed significantly better on this dataset than the other models, the overall model performance was quite underwhelming, leaving room for improvement for future models. Many of these findings were revealed only due to the choice of relevant baselines, and future work will benefit from including them as well. A BERT/RoBERTa baseline is essential to motivate the problem of long document classification using Transformers and reveal how much information is retrievable in the first 512 tokens. BERT+TextRank and BERT+Random are stronger baselines that often outperform more complex models that select key sentences. In fact, they outperformed CogLTX on five of the six datasets. \section{Conclusion} Several approaches have been proposed to use Transformers to classify long documents, yet their relative efficacy remains unknown. In this paper, we compare existing models and baselines on various datasets and in terms of their time and space requirements. Our results show that existing models, while requiring more time and/or space, do not perform consistently well across datasets, and are often outperformed by baseline models. Future studies should consider these baselines and datasets to establish robust performance. \section*{Acknowledgments} We would like to thank the reviewers and area chairs for their thoughtful comments and suggestions. We also thank the members of AWS AI Labs for many useful discussions and feedback that shaped this work.
\section{Introduction} We are living in exciting times in theoretical physics. Polchinski's famous question ``What is string theory?'' \cite{Polchinski} can perhaps be turned inside out to ask the question ``What is quantum field theory?'' (QFT). As often happens in physics, what was believed to be understood is not really so once we are able to ask deeper questions. In recent years, duality relationships between different (supersymmetric) gauge theories, and the AdS/CFT duality between some conformal quantum field theories (CFT) and some specific string theories, raise the question of whether string theory is not really a part of quantum field theory, in some unknown sense. \par It is still true that the main problem that prevents further progress is the inability to perform nontrivial computations in the strong coupling regime of QFT. Perturbative approaches are not enough. This is true even in the theory of strong interactions, QCD, where there is a plethora of experimental data in the strong coupling regime. The only theoretical grasp of this is through effective field theories and, of course, through the lattice approach. \par What we are able to do beyond perturbation theory is only a partial subset of all interesting computations, and this only in theories with a certain amount of supersymmetry. It could be that all those marvelous conjectures and/or facts mentioned above are just an artifact of supersymmetry, and disappear, perhaps in an uncontrollable way, as soon as supersymmetry is broken. \par On the other hand, it is known that there are some quantum field theories that do not admit a Lagrangian formulation, like little string theories \cite{Losev}, which do not even have any local operators. In QFT in non-commutative spaces \cite{Matusis} it is not possible to disentangle the infrared (IR) and the ultraviolet (UV) sectors of the theory; those appear linked in an intricate way.
In the same vein, there are also QFTs \cite{Seiberg} that cannot be latticized. This means that any framework that makes use of a Lagrangian, like the path integral, or the action principle, is bound not to be applicable to all QFTs. Similar observations can obviously be applied to the lattice approach. Any general treatment should be able to encompass all those non-Lagrangian theories as well. This is not possible at the moment, and we limit ourselves in these notes to the Lagrangian case, but we wanted to call attention to the fact that we are not able to treat all known QFTs in a unified way. \par In these notes we shall dwell on the most conservative approach to quantum effects in gravity, namely assuming that the geometric variables (metric tensor and/or the connection one-form) are the natural variables to be quantized. In spite of the fact that space-time is naturally endowed in General Relativity (GR) with a semi-Riemannian metric tensor (that is, one with Lorentzian signature), we shall perform our calculations in a Riemannian space. Even with this simplification, the Einstein-Hilbert Lagrangian fails to be positive definite, which implies that most of the usual arguments used in QFT to argue for an expansion around saddle points of the action are actually not compelling. \par Furthermore, given a Riemannian space, there is no unique way to define a semi-Riemannian one that is in some sense its {\em analytic continuation}. (The flat space trick of defining a complex time coordinate does not work in general.) Sometimes there is a complex manifold of which both the Riemannian and the semi-Riemannian manifolds are real sections \cite{Woodhouse}, but this is not true in general. \par It has been suggested by Hawking to make the analytic continuation at the end of the whole computation, namely on the transition amplitudes themselves, but we do not know how to do this in detail.
For example, what is the semi-Riemannian meaning of the transition amplitude from a certain Riemannian space $\Sigma$ to another one, $\Sigma^\prime$? We insist that, given a Riemannian manifold, there is in general no (much less a unique) semi-Riemannian one defined that is in some sense its analytic continuation. This fact casts some doubt on the whole {\em Euclidean quantum gravity program}. On the other hand, working directly with Lorentzian signature is much more difficult, and not many general results have been obtained. \par There is a {\em vanilla problem}, which is the study of quantum field theories defined in an external classical background gravitational field. Some beautiful results have been obtained here, and this is probably the only instance in which people have been able to tackle the problems inherent to the Lorentzian signature. Hawking radiation and the Unruh effect are probably the milestones here. Once nontrivial time dependence is introduced, lots of ambiguities appear, as in the (Poincar\'e patch of the) de Sitter background \cite{WittenS}. In spite of the interest of this topic, it is not clear \cite{HarlowJ} in what sense this is a consistent approximation to a full quantum gravity treatment. \par What we do want to present in some detail in this course is the simplest instance of all the above, namely a set of beautiful techniques useful for doing QFT computations (in particular of divergent pieces) in Riemannian spaces in a covariant way. The purpose is pedagogical; this is why most calculations are done in gory detail. A very good elementary general reference is Mukhanov and Winitzki's book \cite{Mukhanov}. Full of gems, but not an easy read, is the two-volume DeWitt book \cite{DeWitt}. Somewhat more mathematical books are \cite{Gilkey}\cite{Blumenhagen}\cite{Fursaev}\cite{Avramidi}\cite{Kirsten}\cite{Kiefer}\cite{Elizalde}\cite{Mottola}.
Quite good reviews are also available \cite{Alvarez-Gaume}\cite{Barvinsky}\cite{Gracia-Bondia}\cite{Vassilevich}\cite{Vilkovisky}. \newpage \section{Gravitation and quantum field theory: The Big Picture} There are many obvious issues when considering quantum gravity, by which we mean some unknown quantum theory that in the classical limit reduces to GR. For example, one of the cornerstones of quantum field theory \cite{Symanzik} is {\em microcausality}, the statement that field variables defined at spacelike-separated points should commute. Also, the canonical commutators are defined at {\em equal time}. It is plain that these concepts make sense in a fixed gravitational background at best; and even then with caveats when horizons are present. \par In a similar vein, any attempt to write a Schr\"odinger equation for the gravitational field must face the fact that there is no natural notion of time in GR, even classically. The Wheeler-DeWitt equation is obtained by interpreting the Hamiltonian constraint as an operator equation by substituting the canonical momenta by functional derivatives. It is similar to the Schr\"odinger equation, except precisely for the absence of time. It has been repeatedly conjectured (see \cite{Alvarez} for a review) that such a time can appear when a WKB type of semiclassical approximation is performed on the Wheeler-DeWitt equation, but this has not been properly substantiated. \par Some people try to apply the canonical approach to a clever set of variables introduced by Ashtekar \cite{Rovelli}. Those variables are related to the spacetime metric in a complicated way. It is unclear how this approach is related to the classical regime at all. This whole approach is dubbed {\em loop quantum gravity}, because a loop representation is useful to understand some aspects of the corresponding Hilbert space. \par We think it is fair to say that the results obtained so far from the canonical approach are quite modest.
\par A natural first guess would be to use the functional integral to define transition amplitudes from one three-dimensional metric on a given three-dimensional manifold $\Sigma_i$, say $h_i$, to another three-dimensional surface $\Sigma_f$ with its corresponding metric $h_f$, something like \begin{equation} K\left[h_i,h_f\right]\equiv\int {\cal D}g\, e^{i S_{EH}\left(g\right)} \end{equation} where the integration is performed over all metrics defined on a four-dimensional domain $D$ such that \begin{equation} \partial D=\Sigma_f-\Sigma_i \end{equation} \par Nevertheless, before we are even able to understand this conjecture, we have to face several ugly facts. First of all, the gravitational action is not positive definite, even with the Euclidean signature. As we have already suggested before, the loop expansion is then not justified by any sort of saddle point expansion. Even worse, the set of four-dimensional manifolds is a complicated one. It has been shown by Markov \cite{Markov} that the problem of classifying four-dimensional geometries is an undecidable one. Given any two four-dimensional manifolds, there is no set of topological invariants that can discriminate when those two manifolds are diffeomorphic. The problem lies mainly with the fundamental group, $\pi_1(M)$. There does not seem to be any justification for restricting the functional integral to any subset of manifolds (e.g., simply connected ones). \par Were the spacetime geometry to fluctuate, we would have to build anew all our ideas about QFT, which we understand only when defined in flat space, and even there, as mentioned above, we miss non-perturbative effects known to be important. \par Another issue is the following. Assuming that the symmetry group of the quantum theory is still diffeomorphism (Diff) invariance, what are the observables? For a {\em fixed} manifold, integrals of n-forms are Diff invariant objects, but there are not many of those.
\par The preceding difficulties did not deter physicists from thinking about quantum aspects of gravitation. Besides many long and inconclusive discussions of the basic foundational points, to be mentioned later, such as what the observables of the theory are, the main breakthrough was made by 't Hooft and Veltman employing techniques invented by DeWitt and Feynman. What is computed are the quantum fluctuations around an {\em arbitrary} background, $\bar{g}_{\alpha\beta}(x)$, which can be any solution of Einstein's equations of motion (EM). General Relativity is considered in this treatment as an ordinary gauge theory, forgetting about all questions of principle. Actually, the calculation is usually done with Euclidean signature, making an appropriate analytic continuation at the end of the procedure. Particularly easy is the computation of the divergences of the effective action, which must be eliminated in the renormalized theory. In this computation beautiful mathematical techniques can be employed. The propagator is, however, assumed to be well-defined for a generic background metric, which is a delicate assumption in the presence of horizons and/or singularities. It is doubtful whether we can assert any proposition about quantum \footnote{ In order to understand the sequel, the reader is assumed to have a working knowledge of quantum field theory at the graduate level, up to and including Feynman's path integral. } gravity with much confidence.
\par The tree-level estimate for the cross section for production of gravitons in particle-antiparticle annihilation is of the order of the inverse square of the mass scale associated with this problem, which just by sheer dimensional analysis is Planck's mass, given in terms of Newton's constant, $G$, by \begin{equation} m_p\equiv\sqrt{\hbar c\over 8\pi G}\sim 10^{19}GeV \end{equation} If we remember that $1\,GeV(=10^3\, MeV)$ is the rough scale of hadronic physics (the mass and inverse Compton wavelength of a proton, for example), this means that quantum gravity effects will only be apparent when we are able to explore concentrated energies $10^{19}$ times bigger (or a distance scale correspondingly smaller; these two statements are supposed to be equivalent owing to Heisenberg's principle). To set the scale, the Large Hadron Collider works roughly at the $TeV(=10^3\,GeV)$ scale, so there is a long way to go before reaching expected quantum gravity effects in accelerators. \par In terms of the cross section, this yields, up to numerical factors of order unity, \begin{equation} \sigma\sim l_p^2\sim 10^{-66}~cm^2\sim 10^{-40}~ fm^2 \end{equation} This is more or less 40 orders of magnitude smaller than typical nuclear reactions. \par There are however some interesting experimental facts, such as the ones studied in \cite{Colella}. Free fall of neutrons has been reported there. Interference effects of the Earth's classical gravitational field on a neutron's wave function are also analyzed there. The experimental apparatus is a neutron interferometer. The phase shift between the two different paths is given by \begin{equation} \Delta \phi\sim {2\pi m_n^2\, g\, l\, \Delta h\, \lambda \over h^2} \end{equation} where $m_n$ is the neutron mass, $\lambda$ the neutron de Broglie wavelength, $l$ the common horizontal span of the paths and $\Delta h$ the difference in height. There are some more contributions in the actual experiment and the precision is not very high. Nevertheless the effect seems clear.
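The orders of magnitude quoted above are easy to check numerically (a sketch using CODATA values in SI units; the variable names and the GeV conversion constant are ours):

```python
import math

hbar = 1.054571817e-34  # J s
c = 2.99792458e8        # m / s
G = 6.67430e-11         # m^3 / (kg s^2)
J_PER_GEV = 1.602176634e-10

# Reduced Planck mass m_p = sqrt(hbar c / (8 pi G)), converted to GeV.
m_p_kg = math.sqrt(hbar * c / (8 * math.pi * G))
m_p_gev = m_p_kg * c**2 / J_PER_GEV
print(f"{m_p_gev:.2e} GeV")  # a few times 10^18 GeV, i.e. of order 10^19

# Planck length squared, the rough graviton cross-section scale.
l_p_cm2 = (hbar * G / c**3) * 1e4  # m^2 -> cm^2
print(f"{l_p_cm2:.1e} cm^2")  # of order 10^-66 cm^2
```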
It is not clear, however, what its meaning is with respect to the relationship between gravitation and quantum mechanics. More recently \cite{Nesvizhevsky}, experimental evidence for gravitational quantum bound states of neutrons has been claimed. \par If we want to get direct experimental information on quantum gravitational effects, we have to turn our attention towards Cosmology, or perhaps look for some clever precision experiment in the laboratory. Lacking any experimental clue, the only thing we can do is to think and try to look for logical (in)consistencies. \par Many people believe that those should stem from Einstein's equations themselves. There is evidence that the right-hand side (the energy-momentum tensor) is quantized, and as such, is subject to quantum interference effects. It seems logical to assume that the same thing should happen with the left-hand side of Einstein's equations; that is, the geometry. \par It has been repeatedly argued by many particle physicists that the practical utility of the answer to this question will presumably not be great. How would we know for sure beforehand?
There has always been a recurrent dream, expounded vehemently by Salam \cite{Salam}, that the inclusion of the gravitational interaction would cure many of the diseases and divergences of quantum field theory, through the inclusion in the propagator of terms of the type \begin{equation} e^{-{1\over m_p r}} \end{equation} so that, for example, the sum of tree graphs that leads to the Schwarzschild solution as worked out by Duff \cite{Duff} \begin{equation} {1\over r}+ {2M\over m_p^2 r^2}+{4 M^2\over m_p^4 r^3}+\ldots={1\over r\left(1-{2M\over m_p^2 r}\right)} \end{equation} would get modified to \begin{equation} {1\over r}e^{-{1\over m_p r}}+ {2M\over m_p^2 r^2}e^{-{ 2 \over m_p r}}+\ldots\sim {1\over r}e^{-{1\over m_p r}}\frac{1}{\Big[ {1 -{2M\over m_p^2r}\,e^{-{1\over m_p r}}}\Big]} \end{equation} shifting the location of the horizon and eliminating the singularity at $r=0$. Nobody has been able to substantiate this dream so far. \par On the other hand, it has been speculated that quantum gravitational effects could tame the infinities that appear in QFT, eventually yielding a finite theory. Some arguments in favor of this (first proposed by Arnowitt, Deser and Misner \cite{ADM}, the inventors of the ADM formalism) are as follows. The self-energy of a body of radius $\epsilon$, mass $m$ and charge $e$ in Newtonian theory reads \begin{equation} m_\epsilon=m+{e^2\over 8\pi \epsilon}-{G m^2\over 2\epsilon} \end{equation} It diverges in the pointlike limit $\epsilon\rightarrow 0$.
The only modification borne out by General Relativity was shown by ADM to be the replacement, on the right-hand side, of $m$ by $m_\epsilon$ in the gravitational term (because all energy gravitates) \begin{equation} m_\epsilon=m+{e^2\over 8\pi \epsilon}-{G m_\epsilon^2\over 2\epsilon} \end{equation} Solving the quadratic equation yields \begin{equation} m_\epsilon={\epsilon\over G}\left[-1\pm \sqrt{1+{2 G\over \epsilon}~\left(m+{e^2\over 8\pi\epsilon}\right)}\right] \end{equation} which has a finite limit when $\epsilon\rightarrow 0$, namely \begin{equation} m_0=\sqrt{e^2\over 4\pi G} \end{equation} \par At any rate, quantum gravity is nevertheless a topic which has fascinated whole generations of physicists, just because it is a natural boundary of two of the most successful physical theories mankind has discovered. There seems to be a strong tension between the beautiful, geometrical world of General Relativity and the no less marvelous, less geometrical, somewhat mysterious, but experimentally very well tested, world of Quantum Mechanics. As with all matters of principle, we can hope to better understand both quantum mechanics and gravitation if we are able to clarify the issue. \par The most conservative approach is of course to start from what is already known with great precision about the standard model of elementary particles, associated with the names of Glashow, Weinberg and Salam. This can be called the {\em bottom-up approach} to the problem. In this way of thinking, Wilson taught us that there is a working low energy effective theory, and some quantum effects in gravity can be reliably computed for energies much smaller than Planck's mass. \par There are two caveats to this. First of all, we do not understand why the observed cosmological constant is so small: the natural value from the low energy effective Lagrangian point of view ought to be much bigger. The second point is that one has to rethink the lore of effective theories in the presence of horizons.
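The finite limit of the ADM-regularized self-energy can be checked numerically (a sketch in units where the constants take illustrative values; the helper name is ours):

```python
import math

def m_reg(eps, m, e, G):
    """ADM-regularized self-energy: the positive root of
    m_eps = m + e^2/(8 pi eps) - G m_eps^2/(2 eps)."""
    return (eps / G) * (-1 + math.sqrt(
        1 + (2 * G / eps) * (m + e**2 / (8 * math.pi * eps))))

m, e, G = 1.0, 1.0, 1.0
limit = math.sqrt(e**2 / (4 * math.pi * G))  # m_0 = sqrt(e^2 / (4 pi G))

for eps in (1e-2, 1e-4, 1e-6):
    print(eps, m_reg(eps, m, e, G))  # approaches the finite limit as eps -> 0
```

In the toy units above the regularized mass tends to $\sqrt{1/4\pi}\approx 0.282$, independently of the bare mass $m$.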
We shall comment on both issues in due time. \par There is no universal consensus even on the most promising avenues of research from the opposite, {\em top-down}, viewpoint. Many people think that strings are the best buy (we sort of agree with this); but it is true that after more than two decades of intense effort nothing substantial has come out of them on the particular problem of our interest here. Others \cite{Rovelli} try to quantize directly the Einstein-Hilbert Lagrangian, something that is at variance with our experience in effective field theories. But it is also true that, as we have already remarked, the smallish value of the observed cosmological constant is also at odds with the standard effective-theory lore. \par It is generally accepted that General Relativity, a generally covariant theory, is akin to a gauge theory, in the sense that the diffeomorphism group of the space-time manifold, $Diff(M)$, plays a role similar to the compact gauge group in the standard model of particle physics. There are some differences, though. To begin with, the group $Diff(M)$ is too large; it is not even a Lie group. Besides, its detailed structure depends on the manifold, which is a dynamical object not given a priori. Other distinguished subgroups (such as the volume-preserving diffeomorphisms, as in unimodular gravity, already discussed in an appendix of Pauli's wonderful book on Relativity \cite{Pauli}) can perhaps also be argued for. Those leave invariant a given measure, such as the Lebesgue measure. \par It also seems clear that when there is a boundary of space-time, the gauge group is restricted to the subgroup consisting of those diffeomorphisms that act trivially on the boundary. The subgroup that acts non-trivially is related to the set of conserved charges. In the asymptotically flat case this is precisely the Poincar\'e group, $ISO(1,3)$, which gives rise to the ADM mass.
\par In the asymptotically anti-de Sitter case, this is related to the conformal group $SO(4,2)$. \par It is nevertheless not clear what the physical meaning is of keeping the boundary of spacetime fixed (or keeping fixed some set of boundary conditions) in a functional integral of some sort. \par A related issue which we have already mentioned is that it is very difficult to define what could be {\em observables} in a diffeomorphism invariant theory, other than global ones defined as integrals of scalar composite operators $O(\phi_a(x))$ (where $\phi_a, a=1\ldots N$ parametrizes all physical fields) with the pseudo-Riemannian measure \begin{equation} {\cal O}\equiv \int \sqrt{|g|}d^4 x\, O(\phi_a(x)) \end{equation} Some people claim that there are no local observables whatsoever, but only {\em pseudolocal} ones; the fact is that we do not know. Again, the exception to this stems from keeping the boundary conditions fixed; in this case it is possible to define an $S$-matrix in the asymptotically flat case, and a conformal quantum field theory in the asymptotically anti-de Sitter case. Unfortunately, the most interesting case from the cosmological point of view, which is when the space-time is asymptotically de Sitter, is not well understood. \par It has already been mentioned that the equivalence problem in four-dimensional geometries is undecidable \cite{Markov}. In three dimensions Thurston's geometrization conjecture has recently been put on a firmer basis by Hamilton and Perelman, but it is still not clear whether it can be somehow implemented in a functional integral without some drastic restrictions. Those caveats should be kept in mind when reading the sequel. \par A radically different viewpoint has recently been advocated by Gerardus 't Hooft, who insists that causality be well-defined, so that the conformal class of the space-time metric should be determined by the physics, but not necessarily the precise point in a given conformal orbit.
If we write the spacetime metric in terms of a unimodular metric and a conformal factor \begin{equation} g_{\mu\nu}=\omega^2(x)\hat{g}_{\mu\nu} \end{equation} with \begin{equation} \det\,\hat{g}_{\mu\nu}=1 \end{equation} then the unimodular metric is in some sense intrinsic and determines causality, whereas the conformal factor depends on the observer in a way dictated by {\em black hole complementarity}. \par Finally, there is always the (in a sense, opposite) possibility that space-time (and thus diffeomorphism invariance) is not a fundamental physical entity, in such a way that the appropriate variables for studying short distances are non-geometrical. Some recent references are \cite{Verlinde}. Something like that could happen in string theory, but our understanding of it is still in its infancy. \par \newpage \section{Schwinger's action principle and\\Peierls' brackets.} {\em Schwinger's action principle} \cite{Schwinger}\cite{Symanzik}\cite{Toms} expresses the variation of a transition amplitude $\langle A| B\rangle$ between two states $\left|B\right\rangle$ and $\left|A\right\rangle$ in terms of the expectation value of the variation of the action, $\delta S$. Symbolically, \begin{equation} \delta \langle A(t_1)| B(t_2)\rangle= i \langle A(t_1) | \delta S_{12} | B(t_2)\rangle \end{equation} This principle is the starting point for all dynamical laws in QFT for Schwinger's school. \par Let us elaborate. In Heisenberg's representation \begin{equation} \dot{\hat{q}}=-i \left[\hat{q},\hat{H}\right] \end{equation} and in particular, \begin{equation} \dot{\hat{H}}=0 \end{equation} This means that the time dependence of the operators is given by \begin{equation} \hat{q}(t)=e^{i \hat{H}(t-t_0)}\hat{q}(t_0) e^{-i \hat{H}(t-t_0)} \end{equation} Let us denote by \begin{equation} |q\, t\rangle \end{equation} a state defined by measurements or preparations at time $t$ (i.e., eigenstates of $q(t)$).
Also \begin{equation} | q \,t^\prime\rangle \end{equation} means that we have to replace $t$ by $t^\prime$, but otherwise keep the same preparation as before. We have \begin{eqnarray} &&\hat{q}(t)|q \,t\rangle=q|q \,t\rangle\nonumber\\ &&\hat{q}(t^\prime)| q \,t^\prime\rangle=q| q\, t^\prime\rangle=e^{iH(t^\prime-t)}\,\hat{q}(t)\,e^{-i H(t^\prime-t)}|q\, t^\prime\rangle \end{eqnarray} It follows that \begin{equation} | q \,t^\prime\rangle=e^{i H(t^\prime-t)}|q\,t\rangle \end{equation} so that \begin{equation} \langle q\,t|q\, t^\prime\rangle=\langle q\,t|e^{i H(t^\prime-t)}|q\,t\rangle \end{equation} Assuming that the Hamiltonian $\hat{H}$ depends on some external parameter, say $\lambda$, \begin{equation} \delta_\lambda \langle q\,t|q\, t^\prime\rangle=i\left\langle q\,t\left|\int_{t^\prime}^t d\tau\delta_\lambda H(\tau)\,e^{i H(t^\prime-t)}\right|q\,t\right\rangle=i\left\langle q\,t\left|\int_{t^\prime}^t d\tau \delta_\lambda H(\tau)\right| q\, t^\prime\right\rangle \end{equation} Schwinger's principle can easily be generalized \cite{Symanzik} to the case when an operator insertion is included at some intermediate time $t_2\leq t\leq t_1$ \begin{eqnarray} &\delta_\lambda\left\langle q\,t_1\left|{\cal O} (q(t),\lambda) \right|q\,t_2\right\rangle=\left\langle q\,t_1\left|\delta_\lambda {\cal O}(q(t),\lambda) \right|q\,t_2\right\rangle\ +\nonumber\\ &+\left.i\left\langle q\,t_1\left|\int_t^{t_1} \delta_\lambda L(q(\tau),\lambda) d\tau\right|_{q(t)\,\text{fixed}}\,{\cal O}(q(t),\lambda) \right|q\,t_2\right\rangle+\nonumber\\ &+\left.i\left\langle q\,t_1\left|{\cal O}(q(t),\lambda)\int_{t_2}^t \delta L(q(\tau),\lambda) d\tau\right|_{q(\tau)\, \text{fixed}} \right|q\,t_2\right\rangle \end{eqnarray} When either the initial or the final time grows to infinity past or future, we write \begin{equation} \left.\left.\delta^{\text{ret}}{\cal O}(q(t))=\delta{\cal O}(q(t))\right|_{q\,\text{fixed}}+i \int_{-\infty}^t d\tau \bigg[ {\cal O}(q(t))\, ,\,\delta L(q(\tau))\right|_{q\,\text{fixed}}\bigg] \end{equation} \begin{equation} \left.\left.\delta^{\text{adv}}{\cal O}(q(t))=\delta{\cal O}(q(t))\right|_{q\,\text{fixed}}+i \int^t_{\infty} d\tau \bigg[ {\cal O}(q(t))\, ,\,\delta L(q(\tau))\right|_{q\,\text{fixed}}\bigg] \end{equation} It follows that \begin{equation} \left.\left(\delta^{\text{ret}}-\delta^{\text{adv}}\right)\,{\cal O}(q(t))=i \int_{-\infty}^{\infty} d\tau \bigg[ {\cal O}(q(t))\, ,\,\delta L(q(\tau))\right|_{q\,\text{fixed}}\bigg] \end{equation} This formula defines the {\em Peierls' bracket}, which is a generalization of the Poisson bracket, and helps to determine QFT commutators in an elegant way. Consider the simplest example, a quantum mechanical harmonic oscillator. The retarded Green function is given by \begin{equation} G_R(t-t^\prime)=\theta(t-t^\prime){\sin\,\omega (t-t^\prime)\over \omega} \end{equation} and the advanced one by \begin{equation} G_A(t-t^\prime)=-\theta(t^\prime-t){\sin\,\omega (t-t^\prime)\over \omega} \end{equation} Now take a perturbation \begin{equation} \delta L= j(t) q(t) \end{equation} The retarded and advanced changes in the momentum are then \begin{eqnarray} &&\delta^R p={d\over dt}\int_{-\infty}^t dt^\prime\,G_R(t-t^\prime)\,j(t^\prime)=\int_{-\infty}^t dt^\prime\,\cos\,\omega(t-t^\prime)\,j(t^\prime)\nonumber\\ &&\delta^A p={d\over dt}\int^{\infty}_t dt^\prime\,G_A(t-t^\prime)\,j(t^\prime)=-\int^{\infty}_t dt^\prime\,\cos\,\omega(t-t^\prime)\,j(t^\prime) \end{eqnarray} It follows that \begin{equation} \left(\delta^R-\delta^A\right)\,p=\int_{-\infty}^\infty dt^\prime\,\cos\,\omega(t-t^\prime)\,j(t^\prime) \end{equation} Comparing with the definition above for the perturbation \begin{equation} \delta L=j(\tau) q(\tau) \end{equation} it follows that \begin{equation} \left[p,q\right]=-i \hbar \end{equation} In the general case the {\em Peierls' bracket} is defined by computing the change of some function of all field variables, ${\cal O}_1$, under a perturbation of the Lagrangian by some other operator, $\delta L= j {\cal O}_2$ \begin{equation} \left\{{\cal O}_2,{\cal O}_1\right\}\equiv
\left(\delta^R-\delta^A\right)_{{\cal O}_2} {\cal O}_1 \end{equation} \par Schwinger's principle is in some sense a functional differential form of Feynman's path integral, where \begin{equation} \langle A| B\rangle=\int_B^A{\cal D}\phi\, e^{i S} \end{equation} The paths over which we are to integrate in the path integral are usually characterized by the initial and final points $(x,x^\prime)$, meaning the path that obeys the boundary conditions \begin{eqnarray} &x_c(t^\prime)=x^\prime\nonumber\\ &x_c(t)=x \end{eqnarray} but they could equally well be characterized by the initial position and momentum $(x^\prime,p^\prime)$. \par Hamilton's equations do tell us that \begin{equation} \dot{x}_c^i={\partial H_c\over \partial p_i} \end{equation} The Jacobian relating these two specifications is called the {\em van Vleck-Morette} determinant \begin{equation} {\partial(p,x)\over \partial(x,x^\prime)}\equiv \det\,D_{ij} \end{equation} \begin{equation} D_{ij}\left(x\, t|x^\prime\, t^\prime\right)={\partial p_j\over \partial x^\prime_i}=-{\partial^2\over \partial x_i \partial x^\prime_j} S\left(x\, t|x^\prime\, t^\prime\right) \end{equation} The Hamilton-Jacobi equation \begin{equation} {\partial S\over \partial t}+ H_c\left(x,{\partial S\over \partial x},t\right)=0 \end{equation} leads to an equation for the van Vleck-Morette determinant \begin{equation} {\partial D\over \partial t}+\partial_i\left( D \dot{x}_c^i\right)=0 \end{equation} This in turn leads to a representation of the path integral in the WKB (one-loop) approximation \begin{equation} K\left(x\, t|x^\prime\, t^\prime\right)= \tilde{N}(x,x^\prime) \,D^{1/2}\left(x\, t|x^\prime\, t^\prime\right)\, e^{i S\left(x\, t|x^\prime\, t^\prime\right)} \end{equation} Let us elaborate. As in many other instances, Pauli's field theory book \cite{Pauli} gives the simplest explanation of the necessity of the van Vleck determinant.
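As a quick independent check of the continuity equation for $D$, consider the one-dimensional free particle, where every ingredient is explicit (standard textbook expressions, assumed here as input): $S=m(x-x^\prime)^2/2(t-t^\prime)$, hence $D=-\partial^2 S/\partial x\partial x^\prime=m/(t-t^\prime)$ and $\dot{x}_c=(x-x^\prime)/(t-t^\prime)$. A small numerical sketch:

```python
# Check  dD/dt + d/dx (D * xdot_c) = 0  for the 1D free particle, using
#   D = m / (t - t')            van Vleck determinant
#   xdot_c = (x - x')/(t - t')  classical velocity
# (standard free-particle formulas, taken as input for this check).

m, xp, tp = 1.3, 0.0, 0.0            # mass and initial data (arbitrary)

def D(x, t):
    return m / (t - tp)

def flux(x, t):                      # D times the classical velocity
    return D(x, t) * (x - xp) / (t - tp)

def continuity(x, t, h=1e-6):        # dD/dt + d(flux)/dx via central differences
    dD_dt = (D(x, t + h) - D(x, t - h)) / (2 * h)
    dflux_dx = (flux(x + h, t) - flux(x - h, t)) / (2 * h)
    return dD_dt + dflux_dx

print(continuity(0.7, 2.0))          # ~ 0 up to finite-difference error
```

Here $\partial D/\partial t=-m/(t-t^\prime)^2$ cancels $\partial_x(D\dot{x}_c)=m/(t-t^\prime)^2$ exactly, as the continuity equation requires.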
To be specific, consider the simplest Hamiltonian \begin{equation} H=\sum_k {p_k^2\over 2 m_k}+V(q) \end{equation} then the solution of Schr\"odinger's equation \begin{equation} \left(i \hbar {\partial \over \partial t}-H\right)\psi=0 \end{equation} to $O(\hbar^2)$ is given by \begin{equation} K_c\equiv \left(2\pi i \hbar\right)^{-{n\over 2}}\,D^{1/2}\, e^{i{S\over \hbar}} \end{equation} The Hamilton-Jacobi equation reads \begin{equation} {\partial S\over \partial t}+\sum_k {1\over 2 m_k} S_k^2+V(q)=0 \end{equation} Differentiating it with respect to the initial coordinates (on which the potential does not depend), we obtain \begin{equation} \partial_\tau S_{i}+\sum_k {1\over m_k} S_k S_{k i}=0 \end{equation} and differentiating once more we get \begin{equation} \partial_\tau S_{ij}+\sum_k {1\over m_k}\left( S_{k j} S_{k i} +S_k S_{k i j}\right)=0 \end{equation} Remembering that $D_{ij}\equiv -S_{ij}$ and multiplying by $(D^{-1})^{ij}$, we get \begin{equation} D^{-1}\partial_\tau D+\sum_k{1\over m_k}\left( S_{kk}+ S_k D^{-1}\partial_k D\right)=0 \end{equation} Now it is a simple exercise to show that \begin{equation} i\hbar\partial_t K_c=\left(-\partial_t S+ {i\hbar\over 2} D^{-1}\partial_t D\right)\, K_c \end{equation} as well as \begin{equation} i\hbar\partial_k K_c=\left( -S_k+{i\hbar\over 2} D^{-1}D_k\right)\, K_c \end{equation} Differentiating again, \begin{eqnarray} -\hbar^2\partial_k\partial_k K_c&&=\bigg\{\left(-S_k+ {i\hbar\over 2} D^{-1}D_k\right)^2 -i\hbar S_{kk}+\nonumber\\ &&+{\hbar^2\over 2}D^{-2} D_k^2-{\hbar^2\over 2}D^{-1}D_{kk}\bigg\} K_c\nonumber\\ \end{eqnarray} Collecting all results, \begin{eqnarray} &&\partial_t S- {i\hbar\over 2} D^{-1}\partial_t D+\sum_{k}\frac{1}{2m_k}\bigg\{\left(-S_k+ {i\hbar\over 2} D^{-1}D_k\right)^2 -i\hbar S_{kk}+\nonumber\\ &&+{\hbar^2\over 2}D^{-2} D_k^2-{\hbar^2\over 2}D^{-1}D_{kk}\bigg\}+V=0 \end{eqnarray} \begin{itemize} \item To $O(\hbar^0)$ \begin{equation} {\partial S\over \partial t}+\sum {S_k^2\over 2 m_k}+V=0 \end{equation} is just the Hamilton-Jacobi equation.
\item To $O(\hbar)$ \begin{equation} D^{-1}\partial_t D+\sum_k {1\over m_k}\left(S_{kk}+S_kD^{-1} D_k\right)=0 \end{equation} which is the equation we got a while ago. \item To $O(\hbar^2)$ {\em Pauli's false terms} appear, and the ansatz needs to be corrected. \end{itemize} \newpage \section{Gravitation and Quantum Field Theory:\\Poor man's approach.} Following Wilson's effective Lagrangian approach, and to the extent that our previous experience with the other fundamental interactions is to be of any relevance here, there ought to be a regime, experimentally accessible in the not too distant future, in which gravitons propagating in flat spacetime can be isolated. This is more or less unavoidable, once gravitational waves have been observed \cite{LIGO}, and the road towards gravitons should not be too different from the road that led from the discovery of electromagnetic waves to the identification of photons as the quanta of the corresponding interaction, a road that led from Hertz to Planck. \par Any quantum gravity theory that avoids identifying gravitational radiation as consisting of large numbers of gravitons in a semiclassical state would be at variance with all we believe we know about quantum mechanics. \par What we expect instead to be confirmed by observations somewhere in the future is that the number of gravitons per unit volume with frequencies between $\omega$ and $\omega+d\omega$ is given by Planck's formula \begin{equation} n(\omega)d\omega={\omega^2 \over \pi^2}{1\over e^{\hbar \omega\over k T}-1} d\omega \end{equation} It is natural to keep an open mind for surprises here, because it can be argued that the gravitational interaction is not like any other fundamental interaction, in the sense that the whole structure of space-time ought presumably to be affected; but it cannot be denied that this is the most conservative approach, and as such it should be explored first, up to its very limits, which could hopefully indicate further avenues of research.
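The total number density implied by Planck's formula involves, after the substitution $x=\hbar\omega/kT$, the dimensionless integral $\int_0^\infty x^2\,dx/(e^x-1)=\Gamma(3)\zeta(3)=2\zeta(3)$. A quick numerical check of this value (a sketch, comparing a quadrature against a partial sum for $\zeta(3)$):

```python
import math

# integral_0^inf x**2/(exp(x)-1) dx = Gamma(3)*zeta(3) = 2*zeta(3) ~ 2.4041
def integrand(x):
    return x**2 / math.expm1(x)

def simpson(f, a, b, n=10000):       # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b) \
        + 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1)) \
        + 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

value = simpson(integrand, 1e-8, 50.0)           # tail beyond x = 50 is negligible
zeta3 = sum(1.0 / k**3 for k in range(1, 200))   # zeta(3) ~ 1.2020569
print(value, 2 * zeta3)
```

The quadrature reproduces $2\zeta(3)\simeq 2.404$, so the total graviton number density scales as $(kT/\hbar)^3$, just as for photons.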
From our experience then with the standard model of elementary particles, and assuming we have full knowledge of the fundamental symmetries of our problem, we know that we can parametrize our ignorance of the {\em fundamental} ultraviolet physics by writing down all local operators in the low energy fields $\phi_i(x)$ compatible with the basic symmetries we have assumed, \begin{equation} L=\sum_{n=-4}^\infty{\lambda_n(\Lambda)\over \Lambda^n}{\cal O}^{(n+4)}\left(\phi_i\right) \end{equation} Here $\Lambda$ is an ultraviolet cutoff, which restricts the contributions of large Euclidean momenta (or small Euclidean distances), and $\lambda_n(\Lambda)$ is an infinite set of dimensionless bare couplings. Two caveats. First, all this is done in a {\em flat} background. There are almost no experimental clues on what happens when the background is curved. Second, there is some contention about what exactly the symmetry group of General Relativity is. After all, {\em any} theory can be written in a covariant form. We shall be conservative in that respect. \par Standard Wilsonian arguments imply that {\em irrelevant operators}, those of dimension greater than four ($n > 0$), are less and less important as we are interested in deeper and deeper infrared ({\em low energy}) variables. The opposite occurs with {\em relevant operators}, of dimension less than four ($n<0$), like the masses, which become more and more important as we approach the IR. The intermediate role is played by the {\em marginal operators}, of dimension exactly four ($n=0$), whose relevance in the IR is not determined solely by dimensional analysis, but rather by quantum corrections. The range of validity of any finite number of terms in the expansion is roughly \begin{equation} {E\over\Lambda}\ll 1 \end{equation} where $E$ is a characteristic energy of the process under consideration. \par In the case of gravitation, we assume that general covariance (or diffeomorphism invariance) is the basic symmetry characterizing the interaction.
We can then write \begin{eqnarray} &&L_{eff}=\lambda_0 \Lambda^4 \sqrt{|g|}+\lambda_1 \Lambda^2 R \sqrt{|g|}+\lambda_2 R^2\sqrt{|g|}+ {1\over 2}g^{\alpha\beta}\nabla_\alpha\phi\nabla_\beta\phi\sqrt{|g|}+\nonumber\\ &&+\lambda_3 {1\over \Lambda^2}R^{\alpha\beta}\nabla_\alpha\phi\nabla_\beta\phi\sqrt{|g|}+\lambda_4 {1\over \Lambda^2}R^3\sqrt{|g|}+\lambda_5 \phi^4\sqrt{|g|}+\nonumber\\ &&+\bar{\psi}\left(e^\mu_a \gamma^a \left(\partial_\mu-\omega_\mu\right)-m\right)\psi+{\lambda_6\over \Lambda^2}\bar{\psi}e^\mu_a \gamma^a R\left(\partial_\mu-\omega_\mu\right)\psi+\ldots \end{eqnarray} where $e_a^\mu$ is the tetrad, such that \begin{equation} e_\alpha^\mu e_\beta^\nu \eta^{\alpha\beta}=g^{\mu\nu} \end{equation} $\eta^{\alpha\beta}$ being Minkowski's metric, and the quantities $\omega_\mu$ are the spin connection. \par The need to recover General Relativity in the classical IR limit means \begin{equation} \lambda_1\Lambda^2=-{c^3\over 16\pi G}\equiv -2 M_p^2 \end{equation} This, in turn, means that if \begin{equation} \lambda_0\Lambda^4 \end{equation} is to yield the observed value for the cosmological constant (which is of order $H_0^4$, Hubble's constant being a very tiny figure when expressed in particle physics units, $H_0\sim 10^{-33}\,eV$), then \begin{equation} \lambda_0\sim 10^{-244} \end{equation} This is one aspect of the cosmological constant problem; from the effective Lagrangian point of view it seems most unnatural that the cosmological constant is observationally so small. We do not have anything new to say on this. \par This expansion is fine as long as it is considered a low energy expansion. As Donoghue \cite{Donoghue} has emphasized, even if it is true that each time a renormalization is made there is a finite arbitrariness, there are physical predictions stemming from the non-local finite parts. \par The problem arises when energies are reached that are comparable to Planck's mass, \begin{equation} E\sim M_p.
\end{equation} Then all couplings in the effective Lagrangian become of order unity, and there is no {\em decoupling limit} in which gravitation can be considered by itself, in isolation from all other interactions. This seems the minimum price one has to pay for being interested in quantum gravity: all couplings in the derivative expansion become important simultaneously. No significant differences appear when supergravity is considered. \par In conclusion, it does not seem likely that much progress can be made by somehow quantizing the Einstein-Hilbert Lagrangian in isolation. To study quantum gravity means to study all other interactions as well. \par On the other hand, are there any reasons to go beyond the standard model (SM)? \par Yes, there are, both theoretical and experimental. On the latter, and most important, side, neither neutrino masses nor dark matter fit into the SM. And on the former, abelian sectors suffer from Landau poles and are not believed to be UV complete; likewise the self-interactions in the Higgs sector appear to define a trivial theory. Also, the experimental values of the particle masses in the SM are not natural from the effective Lagrangian point of view. \par The particle physics community has looked thoroughly for such extensions since the eighties: extra dimensions (Kaluza-Klein), supersymmetry and supergravity, technicolor, etc. From a certain point of view, the natural culmination of this road is string theory. \par A related issue is the understanding of the so-called {\em semiclassical gravity}, in which the second member of Einstein's equations is taken as the expectation value of some quantum energy-momentum operator. It can be proved that this is the dominant $1/N$ approximation in case there are $N$ identical matter fields (cf. \cite{Hartle}). In spite of tremendous effort, there is not yet a full understanding of the Hawking emission of a black hole from the effective theory point of view.
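Incidentally, the figure $\lambda_0\sim 10^{-244}$ quoted above is a one-line computation. A sketch with rough round-number scales (assumed here: $H_0\sim 10^{-33}$ eV and the cutoff at the Planck scale, $\Lambda\sim 10^{28}$ eV):

```python
import math

# Order-of-magnitude estimate of lambda_0 from  lambda_0 * Lambda**4 ~ H_0**4.
# Rough particle-physics values (assumptions, not precise data):
H0 = 1e-33           # Hubble constant, eV
Lambda = 1e28        # Planck-scale cutoff, eV

# Work with logarithms to avoid any underflow worries.
log10_lambda0 = 4 * (math.log10(H0) - math.log10(Lambda))
print(log10_lambda0)   # -244
```

Sixty-one orders of magnitude between the two scales, raised to the fourth power, give the $10^{-244}$ of the text.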
Another topic in which this approach has been extensively studied is Cosmology. Novel effects (or rather old ones on which no emphasis was put until recently) come from the lack of momentum conservation, and seem to point towards some sort of instability \cite{Polyakov}; again the low energy theory is not fully understood; this could perhaps have something to do with the presence of horizons. \par Coming back to our theme, and closing the loop, what are the prospects for making progress in quantum gravity? \par Insofar as effective Lagrangians are a good guide to the physics, there are only two doors open: either there is an attractive ultraviolet fixed point in coupling space, as in Weinberg's {\em asymptotic safety}, or else new degrees of freedom exist in the UV, as in string theory. Even if Weinberg's approach is vindicated, the fact that the putative fixed point most likely lies at strong coupling, combined with our present inability to perform analytically other than perturbative computations, means that our only way to get physical information on that regime would be through lattice simulations, assuming they are eventually able to cope with the integration over (a subclass of) geometries; no such physical predictions can be made with the techniques at hand at the present moment. \par It is to be remarked that sometimes theories harbor the seeds of their own destruction. Strings, for example, begin as theories in flat spacetime, but there are indications that space itself should be a derived, not a fundamental, concept. It is hoped that a simpler formulation of string theory exists, bypassing the roundabouts of its historical development. This is far from being the case at present.
\par Finally, it is perhaps worth pointing out that, to the extent that a purely gravitational canonical approach, such as the ones based upon the use of Ashtekar variables, makes contact with the classical limit (which is an open problem from this point of view), the preceding line of argument should still carry on. \par It seems {\em unavoidable} with our present understanding that any theory of quantum gravity should recover, for example, the prediction that there are quantum corrections to the gravitational potential given by \cite{BjerrumBohr} \begin{equation} V(r)=-{G m_1 m_2 \over r}\left(1+3 {G\left(m_1+m_2\right)\over r}+{41 \over 10\pi}{G\hbar\over r^2}\right) \end{equation} (the second term is also a loop effect, in spite of the conspicuous absence of $\hbar$). Similarly, and although this has been the subject of some controversy, it seems now established that there are gravitational corrections to the running of gauge couplings, first uncovered by Robinson and Wilczek and given in standard notation by \begin{equation} \beta(g,E)=-{b_0\over (4\pi)^2}g^3 -3 {16\pi G\over (4\pi)^2 \hbar c^3}g E^2 \end{equation} Sometimes these effects are dismissed as perturbative, and therefore trivial. This is not a healthy attitude. \par Something that can be done is to ignore most of the conceptual problems of quantum gravity, and treat it as a gauge theory. This is possible because the action of diffeomorphisms is formally similar to that of the symmetry group of an ordinary gauge theory. The fact that the group of diffeomorphisms of a given manifold, $\text{Diff}(M)$, is not a fixed entity, but rather depends in a complicated way on the specific manifold considered, is of no concern for our perturbative analysis. All we aim at is to compute the quantum corrections to the gravitational action to first order in the coupling constant, $\kappa$.
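To get a sense of scale for the corrected potential, one can restore the factors of $c$ and $\hbar$ (so that the corrections read $3G(m_1+m_2)/rc^2$ and $41\,G\hbar/10\pi r^2 c^3$) and evaluate them numerically. A sketch for two protons a femtometre apart (an illustrative choice, not taken from the text):

```python
import math

# Relative size of the two correction terms in
#   V(r) = -(G m1 m2 / r) * (1 + 3 G (m1+m2)/(r c^2) + (41/(10 pi)) G hbar/(r^2 c^3))
# SI values; the configuration (two protons at 1 fm) is purely illustrative.
G    = 6.674e-11        # m^3 kg^-1 s^-2
hbar = 1.055e-34        # J s
c    = 2.998e8          # m / s
m_p  = 1.673e-27        # kg (proton mass)
r    = 1e-15            # m

classical_corr = 3 * G * (2 * m_p) / (r * c**2)              # post-Newtonian term
quantum_corr   = (41 / (10 * math.pi)) * G * hbar / (r**2 * c**3)
print(classical_corr, quantum_corr)
```

Both relative corrections come out around $10^{-39}$, which is why these predictions, however unavoidable, are hopelessly beyond direct measurement; the quantum term is just $41/10\pi$ times $(l_p/r)^2$.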
This was first done in a classic paper by 't Hooft and Veltman in 1973 \cite{tHooft}, as a byproduct of their analysis of one-loop amplitudes in non-abelian gauge theories. An essential tool of their analysis is the background field technique, first devised by DeWitt, to which we now turn. \newpage \section{Exact symmetries in quantum gravity.} There are some {\em folk theorems}, nicely summarized in \cite{Banks} ({\em confer} a detailed discussion in \cite{Harlow}, and \cite{Misner} for a pre-stringy approach), on which symmetries are consistently allowed in quantum gravity. The arguments for this theorem stem mostly from the consistency of the statistical interpretation of black hole thermodynamics, although in \cite{Harlow} the theorem is argued to be a consequence of AdS/CFT. \par In a nutshell, this theorem asserts that there are no global symmetries. Only gauge symmetries with compact gauge group are possible. Besides, given such a gauge group, the Hilbert space must include states transforming with every possible finite dimensional irreducible representation of such a gauge group. \par The (simplified) argument goes as follows. \par Imagine that there is a global symmetry group $G$. Then there must be some states of mass $m$ transforming with some representation $R$ of $G$. This implies the possible existence of black holes made of matter transforming with $\otimes^n R$, with $n$ arbitrarily large. The long-distance physics of this black hole will be independent of the representation $R$. This black hole cannot lose its global charge, so this implies a stable remnant once full Hawking evaporation has taken place. Stability is a consequence of the fact that any state with such a large representation of $G$ must be heavier than the remnant. Similar arguments still hold in the massless case, $m=0$ \cite{Banks}. This leads to an infinite number of remnant states. \par This in turn contradicts the {\em covariant entropy bound} (CEB) \cite{Fischler}\cite{Bousso}.
The CEB conjectures that, given any two-dimensional surface of area $A$, and taking $L$ to be the hypersurface generated by surface-orthogonal null geodesics with non-positive expansion, the entropy $S$ on $L$ does not exceed $A/4$, \begin{equation} S\leq {A\over 4} \end{equation} \par It is indeed easy to write down QFT models that violate this theorem. Consider, for example \cite{Harlow}, the Einstein-Hilbert Lagrangian coupled to two abelian $U(1)$ fields. This theory has a $\mathbb{Z}_2$ global symmetry exchanging both abelian gauge fields, and it does not have charged matter fields. \par According to the theorem, what happens is that this theory (and similar ones) cannot appear as the low energy limit of a consistent theory of quantum gravity; those theories are in the {\em swampland}, in Vafa's language \cite{Vafa}. \par All this approach relies heavily on the assumption that when we learn more about quantum gravity, we are not going to change our conception of how generic the (Schwarzschild) black hole states are, nor of the essential characteristics of their Hawking evaporation. This may or may not be true. Time will tell. \newpage \section{The background field approach in quantum field theory.} The only problem in quantum field theory that will concern us in these lectures is the computation of the partition function, which is nothing else than Schwinger's {\em vacuum persistence amplitude} in the presence of an external source, $J(x)$. It is useful to represent it as a functional integral \begin{equation} Z[J]\equiv e^{i W[J]}\equiv \int {\cal D} \phi~ e^{i S[\phi]+ i \int J(x)\phi(x)} \end{equation} where, in this formal analysis, we represent all fields (including the gravitational field) by $\phi(x)$, and we add a coupling to an arbitrary external source as a technical device to compute Green functions out of it, by taking functional derivatives of $Z[J]$ and then putting the sources equal to zero.
This trick was also invented by Schwinger. The partition function generates all Green functions, connected and disconnected. Its logarithm, $W[J]$, sometimes dubbed the {\em free energy}, generates connected functions only. These names come from a direct analogy with similar quantities in statistical physics. \par It is possible to give an intuitive meaning to the path integral in quantum mechanics as a transition amplitude from an initial state to a final state. This is actually the way Feynman derived it, and it is also the way of connecting it with {\em Schwinger's action principle}. It has already been pointed out that, in some sense, Feynman's approach is an integral version of Schwinger's differential approach to the quantum dynamics. \par In QFT the integration measure is not mathematically well-defined. For loop calculations, however, it is enough to {\em formally define} the Gaussian path integral as a functional determinant, that is, \begin{equation} \int {\cal D}\phi\, e^{i \left(\phi, K \phi\right)}=\left(\text{det}~K\right)^{-{1\over 2}} \end{equation} where the scalar product is defined as \begin{equation} \left(\phi, K \phi\right)\equiv \int d(vol)~\phi~ K~\phi \end{equation} where $d(vol)$ is the appropriate measure (often $d(vol)\equiv \sqrt{|g|}\,dx^0\wedge\ldots\wedge dx^{n-1}$), and $K$ is a differential operator, usually of the form \begin{equation} K=\Box+\text{something} \end{equation} There are implicit indices in the operator to pair the (also implicit) components of the field $\phi$. \par The only extra postulate needed is translation invariance of the measure, in the sense that \begin{equation} \int {\cal D}\phi\, e^{i ~\left(\left(\phi+\chi\right), K \left(\phi+\chi\right)\right)}=\int {\cal D}\phi\, e^{i \left(\phi, K \phi\right)} \end{equation} This is the crucial property that allows the computation of integrals in the presence of external sources by completing the square.
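The mechanism of completing the square is worth seeing in the simplest possible setting, with a single integration variable standing for the field (a zero-dimensional toy, not part of the text): $\int d\phi\, e^{-\frac{1}{2} k\phi^2+J\phi}=\sqrt{2\pi/k}\,e^{J^2/2k}$, the one-variable avatar of $(\det K)^{-1/2}$ dressed by the source term. A numerical sketch:

```python
import math

# One-dimensional analogue of the Gaussian functional integral:
#   integral dphi exp(-(1/2) k phi^2 + J phi) = sqrt(2 pi / k) * exp(J^2 / (2 k)),
# obtained by completing the square and shifting phi, which is legitimate
# precisely because the (Riemann) measure is translation invariant.

def gaussian_integral(k, J, L=40.0, n=400000):
    """Brute-force Riemann sum over [-L, L]."""
    h = 2 * L / n
    total = 0.0
    for i in range(n):
        x = -L + i * h
        total += math.exp(-0.5 * k * x**2 + J * x)
    return total * h

k, J = 1.7, 0.8
exact = math.sqrt(2 * math.pi / k) * math.exp(J**2 / (2 * k))
print(gaussian_integral(k, J), exact)
```

The shifted variable $\phi-J/k$ carries all the $J$ dependence into the prefactor, which is exactly what the translation-invariance postulate buys in the functional case.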
\par It is quite useful to introduce a generating function for one-particle irreducible (1-PI) Green functions. This is usually called the {\em effective action}, and is obtained through a Legendre transform, quite analogous to the one performed when passing from the Lagrangian to the Hamiltonian in classical mechanics. \par One defines the {\em classical field} as a functional of the external current by \begin{equation} \phi_c[J]\equiv {1\over i}~{\delta W[J]\over \delta J(x)} \end{equation} The Legendre transform then reads \begin{equation} \Gamma[\phi_c]\equiv W[J]-i \int d^n x J(x)\phi_c(x) \end{equation} It is a fact that \begin{equation} {\delta \Gamma\over \delta \phi_c(x)}=\int d^n z ~{\delta W\over \delta J(z)}~{\delta J(z)\over \delta \phi_c(x)}-i J(x)-i \int d^n z \phi_c (z)~{\delta J(z)\over \delta \phi_c(x)}=- i J(x) \end{equation} The background field technique was invented by Bryce DeWitt as a clever device to keep track of divergent terms in theories (such as gravity) with a complicated algebraic structure. The main idea is to split the integration fields into a {\em classical} and a {\em quantum} piece: \begin{equation} W_\mu\equiv \bar{A}_\mu+ A_\mu \end{equation} \par The gauge transformations are \begin{equation} \left(\overline{A}_\mu+A_\mu\right)^\prime=g\left(\overline{A}_\mu+A_\mu\right)g^{-1}+g\partial_\mu g^{-1} \end{equation} There is a subset of those, which we shall call {\em quantum gauge transformations}, under which the background field remains inert \begin{eqnarray}\label{quantum} &&\bar{A}_\mu^\prime=\bar{A}_\mu\nonumber\\ &&A^\prime_\mu=g\left(\bar{A}_\mu+A_\mu+\partial_\mu\right)g^{-1}-\bar{A}_\mu \end{eqnarray} that is \begin{eqnarray} &&\delta \overline{A}_\mu=0\nonumber\\ &&\delta A^a_\mu= i f^a_{~ b c}~\omega^b~\left(\overline{A}^c_\mu+ A^c_\mu\right)-\partial_\mu \omega^a=-\bar{\nabla}_\mu \omega^a + i f^a_{~bc} \omega^b A^c_\mu \end{eqnarray} Those are the gauge transformations that we have to gauge fix.
The point is that there is another, {\em background} gauge transformation, which can be preserved even when gauge fixing \eqref{quantum}. Namely \begin{eqnarray}\label{classical} &&\bar{A}_\mu^\prime=g\left(\bar{A}_\mu+\partial_\mu\right) g^{-1}\nonumber\\ &&A^\prime_\mu=g~A_\mu~g^{-1} \end{eqnarray} under which the quantum fields rotate in the adjoint. This is \begin{eqnarray} &&\delta \overline{A}^a_\mu= i f^a_{~ b c}~\omega^b~\overline{A}^c_\mu-\partial_\mu \omega^a\nonumber\\ &&\delta A^a_\mu= i f^a_{~ b c}~\omega^b~ A^c_\mu \end{eqnarray} \par Currents transform in such a way that \begin{equation} \delta_C \int J_a^\mu A^a_\mu=0 \end{equation} that is \begin{equation} \delta J^a_\mu=if^a_{~bc} \omega^b J^c_\mu \end{equation} We insist that the beauty of the background field method is that it is possible to gauge fix the quantum symmetry while preserving the classical gauge symmetry. All computations are then invariant under gauge transformations of the classical field, and so are the counterterms. This simplifies the heavy work involved in computing with gravity. \par The simplest background field gauge is \begin{equation} \bar{F}^a[A]\equiv \left(\bar{D}_\mu A^\mu \right)^a \end{equation} where $\bar{D}_\mu$ represents the covariant derivative with respect to the classical field. \par L. Abbott \cite{Abbott} was able to prove a beautiful theorem to the effect that the effective action computed by the background field method is simply related to the ordinary effective action \begin{equation} \Gamma_{BF}[A^{BF}_c,\bar{A}]=\left.\Gamma[A_c]\right|_{A_c=A_c^{BF}+\bar{A}} \end{equation} This means, in particular, that \begin{equation} \Gamma[A_c]=\Gamma_{BF}[0,\bar{A}=A_c] \end{equation} At one-loop order all this simplifies enormously.
Working in Euclidean space \begin{eqnarray} &&e^{-W[\overline{A}]}\equiv \int {\cal D}A\, e^{-S[\overline{A}]-\int A K[\overline{A}]A-\int JA}=\nonumber\\ &&=e^{-S[\overline{A}]-{1\over 2}\text{log~det}~K[\overline{A}]-{1\over 2} \int J K^{-1}[\overline{A}] J} \end{eqnarray} where the operator $K$ incorporates the contributions of $L_{gauge}$ as well as $L_{gf}$. This means that \begin{equation} A_c=- \int K^{-1}~[\overline{A}] J \end{equation} so that \begin{equation} J= -\int K[\overline{A}]~A_c \end{equation} and \begin{eqnarray} &&\Gamma^{BF}[A_c,\overline{A}]=W[J(A_c)]-\int JA_c=\nonumber\\ &&=S[\overline{A}]+{1\over 2}\text{log~det}~K[\overline{A}]+{1\over 2} \int K A_c K^{-1}[\overline{A}] K A_c-\int K A_c A_c=\nonumber\\ && =S[\overline{A}]+{1\over 2}\text{log~det}~K[\overline{A}]-{1\over 2}\int A_c K A_c \end{eqnarray} finally by Abbott's theorem \begin{equation} \Gamma(A_c)=\Gamma^{BF}[0,\overline{A}=A_c]=W[\overline{A}]\equiv S[\overline{A}]+{1\over 2}\text{log~det}~K[\overline{A}] \end{equation} In order to compute the counterterm at one-loop order, we need to take the effective action \begin{equation} e^{iW}=\int {\cal D }A\, e^{\frac{i}{\hbar}S[A]} \end{equation} with the background technique $A\rightarrow \bar{A}+A$, \begin{eqnarray} e^{iW}&&=\int {\cal D }A\, e^{\frac{i}{\hbar}S[\bar{A}+A]}\nonumber\\ &&=\int {\cal D }A\, e^{\frac{i}{\hbar}\left(S[\bar{A}]+\int S_1[\bar{A}]A+\frac{1}{2}\int S_2[\bar{A}]A^2\right)} \end{eqnarray} where \begin{equation} \left.S_n\left[\bar{A}\right]=\frac{\partial^n S}{\partial A^n}\right|_{\bar{A}}\end{equation} rescale now \begin{equation} A\rightarrow \hbar^{1\over 2}\,A \end{equation} \begin{eqnarray} e^{iW}&&=\int {\cal D }A\, e^{\left(\frac{i}{\hbar}S[\bar{A}]+\frac{i}{\hbar^{1/2}}\int S_1[\bar{A}]A+\frac{1}{2}\int S_2[\bar{A}]A^2\right)} \end{eqnarray} It can be proved (cf. \cite{Buchbinder}) that only even powers of $\hbar$ appear in the expansion; and also that only 1PI diagrams need to be considered. 
The linear term vanishes whenever the classical field is a solution of the equations of motion \begin{equation} S_1[\bar{A}]=0 \end{equation} the first nontrivial order is the one-loop contribution \begin{equation} e^{iW}=\int {\cal D}A\,e^{ {i\over 2} \int S_2[\bar{A}]A^2}=\left(\det\, S_2[\bar{A}]\right)^{- 1/2} \end{equation} when the field is complex, \begin{equation} e^{iW}=\int {\cal D}\varphi{\cal D}\bar{\varphi}\,e^{ {i\over 2}\bar{\varphi} S_2[\bar{A}]\varphi}=\left(\det\, S_2[\bar{A}]\right)^{-1} \end{equation} finally, for fermionic fields \begin{equation} e^{iW}=\int {\cal D}\psi{\cal D}\bar{\psi}\,e^{ {i\over 2}\bar{\psi} S_2[\bar{A}]\psi}=\left(\det\, S_2[\bar{A}]\right) \end{equation} \subsection{Gauge invariance of the one loop effective action.} We have just proved that when an appropriate gauge fixing term is used, background gauge invariance is maintained all the way except for the source term. It is then plausible that when $J=0$, that is, when \begin{equation} \overline{S}_1\equiv{\delta S[\overline{A}]\over \delta \overline{A}}=0 \end{equation} the effective action is background gauge invariant. \par In fact the gauge dependence of the effective action has been discussed extensively by Kallosh \cite{Kallosh}, who proved not only that this is indeed the case, but also that when $J=0$, that is, when \begin{equation} {\delta \Gamma[\overline{A}]\over \delta \overline{A}}=0 \end{equation} the background field effective action is independent of the gauge fixing term. This is a nontrivial statement. \par Let us show a simplified proof of this fact, following \cite{Buchbinder}.
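A finite-dimensional sketch of the first of these Gaussian formulas, in Euclidean signature where the integral converges (the $2\times 2$ matrix below is a hypothetical stand-in for $S_2$): $\int d^nx\,e^{-\frac{1}{2}x\cdot Ax}=(2\pi)^{n/2}\left(\det A\right)^{-1/2}$. A complex field amounts to two real ones, which squares the result and yields the power $-1$, while Grassmann integration produces the positive power.

```python
import numpy as np

# Brute-force check of the one-loop Gaussian formula in two dimensions:
# ∫ d^2x exp(-1/2 x.A.x) = 2*pi * det(A)^(-1/2)   (real field, Euclidean)
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])        # stand-in for S_2, positive definite

x = np.linspace(-10.0, 10.0, 801)
X, Y = np.meshgrid(x, x)
integrand = np.exp(-0.5 * (A[0, 0]*X**2 + 2*A[0, 1]*X*Y + A[1, 1]*Y**2))
dx = x[1] - x[0]
Z_numeric = integrand.sum() * dx * dx        # simple grid quadrature

Z_exact = 2.0 * np.pi * np.linalg.det(A) ** (-0.5)
```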
We begin with an action \begin{equation} S[A]=-\frac{1}{4}\int d^nx F_{\alpha\beta}F^{\alpha\beta}\end{equation} with \begin{equation} F_{\alpha\beta}=\partial_\alpha A_\beta-\partial_\beta A_\alpha-ig\left[A_\alpha,A_\beta\right]\end{equation} We shall restrict ourselves for simplicity to linear gauge fixing \begin{equation} \chi^\alpha\equiv t^\alpha_\beta A^\beta \end{equation} with $\partial_\mu t^\alpha_\beta\equiv \partial_{A^\mu}t^\alpha_\beta=0$, then the gauge fixing action will be \begin{equation} S_{gf}\equiv\frac{1}{2}\int d^nx g_{\alpha\beta} \chi^\alpha \chi^\beta \end{equation} and the corresponding ghost \begin{equation} S_{gh}\equiv\int d^nx\bar{c}^\alpha M^\beta_{~\alpha} c_\beta \equiv \int d^nx\bar{c}^\alpha t^\mu_\alpha R^\beta_\mu c_\beta \end{equation} where the generator of gauge transformations, acting on a gauge parameter $\omega^\beta$, is \begin{equation} R_\mu^\alpha\left[\omega\right]=\left(\delta^\alpha_\beta\partial_\mu+igf^\alpha_{~\beta\gamma}A^\gamma_\mu\right)\omega^\beta\end{equation} The partition function, with a source $J$, will then depend upon the gauge fixing through both $g_{\alpha\beta}$ and $\chi^\alpha$ \begin{equation} Z[J]\equiv \int {\cal D} A{\cal D} c{\cal D} \bar{c}\,e^{i\left( S[A]+S_{gf}+S_{gh}-{i\over 2}{\rm tr\,}\log\,g_{\alpha\beta}+J A\right)}\equiv\langle 1\rangle \end{equation} then under an arbitrary variation $\delta g_{\alpha\beta}$ and $\delta t^\alpha_\beta$ \begin{eqnarray} \delta Z&&=i\bigg\langle -{1\over 2} g^{\alpha\beta} \delta g_{\alpha\beta}+{1\over 2} \delta g_{\alpha\beta}\chi^\alpha \chi^\beta+\frac{1}{2}g_{\alpha\beta}\left( \delta t^\alpha_\mu t^\beta_\nu+ t^\alpha_\mu \delta t^\beta_\nu\right)A^\mu A^\nu-\nonumber\\ &&- (M^{-1})_\beta^\alpha \delta t^\mu_\alpha R^\beta_\mu+J_\alpha\delta A^\alpha\bigg\rangle \end{eqnarray} Next, we perform a gauge transformation on the fields \begin{equation} A^\alpha=A^\alpha+ R^\alpha_\mu\, \xi^\mu \end{equation} with parameter \begin{equation} \xi^\mu=-(M^{-1})^\mu_\nu\left(\delta t^\nu_\alpha+{1\over 2}
g^{\nu\lambda}\delta g_{\lambda\tau} t^\tau_\alpha\right)A^\alpha \end{equation} Then \begin{eqnarray}\label{Z} &&\delta Z=i\bigg\langle -{1\over 2} g^{\alpha\beta} \delta g_{\alpha\beta}+{1\over 2} \delta g_{\alpha\beta}\chi^\alpha \chi^\beta+\frac{1}{2}g_{\alpha\beta}\left( \delta t^\alpha_\mu t^\beta_\nu+ t^\alpha_\mu \delta t^\beta_\nu\right)A^\mu A^\nu- \nonumber\\ &&-(M^{-1})_\beta^\alpha \delta t^\mu_\alpha R^\beta_\mu+\frac{1}{2}g_{\alpha\beta}t^\alpha_\mu t^\beta_\nu \left(A^\mu R^\nu_\lambda+A^\nu R^\mu_\lambda\right)\xi^\lambda-R^\alpha_\mu\xi^\mu_{,\alpha}-\nonumber\\ &&-(M^{-1})^\alpha_\beta t^\beta_ \mu R^\mu_{\alpha,\lambda}R^\lambda_\tau\xi^\tau+J_\alpha A^\alpha+J_\alpha R^\alpha_\mu\xi^\mu\bigg\rangle \end{eqnarray} but \begin{eqnarray} &&\frac{1}{2}g_{\alpha\beta}t^\alpha_\mu t^\beta_\nu \left(A^\mu R^\nu_\lambda+A^\nu R^\mu_\lambda\right)\xi^\lambda=-\frac{1}{2}g_{\alpha\beta}\left( \delta t^\alpha_\mu t^\beta_\nu+ t^\alpha_\mu \delta t^\beta_\nu\right)A^\mu A^\nu-{1\over 2} \delta g_{\alpha\beta}\chi^\alpha \chi^\beta\nonumber\\ \end{eqnarray} and the remaining terms, except for the source ones, \begin{eqnarray} &&-{1\over 2} g^{\alpha\beta} \delta g_{\alpha\beta}- (M^{-1})_\beta^\alpha \delta t^\mu_\alpha R^\beta_\mu-R^\alpha_\mu\xi^\mu_{,\alpha}-(M^{-1})^\alpha_\beta t^\beta_ \mu R^\mu_{\alpha,\lambda}R^\lambda_\tau\xi^\tau=\nonumber\\ &&=-f^\alpha_{\alpha\beta}(M^{-1})^\beta_\tau\left(\delta t^\tau_\alpha+\frac{1}{2}g^{\tau\lambda}\delta g_{\lambda\mu} t^\mu_\alpha\right)A^\alpha=0 \end{eqnarray} taking into account that the gauge algebra implies that \begin{equation} R^j_\gamma \partial_j R^i_\beta-R^j_\beta \partial_j R^i_\gamma= f^\delta\,_{\gamma\beta} R^i_\delta \end{equation} and that in dimensional regularization (where by definition $\left.{d^n\over dx^n} \delta(x)\right|_{x=0}=0\quad \forall n$) \begin{equation} f^\beta\,_{\beta\gamma}=0 \end{equation} then only the term proportional to the external source \begin{equation} \delta
Z=i\left\langle J_\alpha A^\alpha+J_\alpha R^\alpha_\mu\xi^\mu\right\rangle \end{equation} survives. In conclusion, if there is no source, \begin{equation} 0=J={\partial \Gamma[A]\over \partial A} \end{equation} this suffices for gauge-fixing independence. Note that this argument (which seems to have its origin in DeWitt \cite{DeWitt}) is independent of the background field approach. \par In \cite{Kallosh}, Kallosh was also able to show that when a counterterm vanishes owing to the equations of motion, there is always a different gauge fixing term for which the divergences vanish even off shell. \newpage \section{Geometric computation of the one loop effective action.} We have just proved that to one loop order all functional integral computations reduce to gaussian integrals, which can in turn be formally represented as functional determinants. This is hardly of any advantage when computing finite parts of correlators. In contrast, there exists a geometric approach for computing the {\em divergent piece} of the effective action. This approach was pioneered by Julian Schwinger and Bryce DeWitt (a former student of Schwinger's). \par When breaking the total gravitational field $g_{\mu\nu}(x)$ into a {\em background part}, $\bar{g}_{\mu\nu}(x)$, and a quantum fluctuation, $h_{\mu\nu}(x)$, we are working in a {\em background manifold}, $\overline{M}$, with metric $\bar{g}_{\mu\nu}(x)$, thereby avoiding most of the problems of principle of quantum gravity. Quantum gravitational fluctuations are treated as ordinary gauge fluctuations. This approach culminated in the brilliant work of 't Hooft and Veltman \cite{tHooft}, where it was shown that pure quantum gravity is one loop finite on shell. This is no longer true as soon as some matter is added. Even pure quantum gravity is divergent on shell at two loops, as was shown by Goroff and Sagnotti \cite{Goroff}.
\par The formalism is such that in order to compute the divergent piece of the effective action, background gauge invariance can be maintained, so that we do not commit to any specific background, although we assume that some such background always exists. \par Were we to compute correlators, the particular Green function appropriate to each background would be needed, and then all the subtle points associated with background horizons and singularities would reappear. The Unruh radiation is the simplest manifestation of these. \par It is to be emphasized that quantum Diff invariance is spontaneously broken in this approach. The background gauge transformations read \begin{eqnarray} && \delta \bar{g}_{\mu\nu}=\xi^\lambda\partial_\lambda \bar{g}_{\mu\nu}+\bar{g}_{\lambda\nu}\partial_\mu \xi^\lambda +\bar{g}_{\mu\lambda}\partial_\nu\xi^\lambda =\bar{\nabla}_\mu\xi_\nu+\bar{\nabla}_\nu\xi_\mu\nonumber\\ &&\delta h_{\mu\nu}=\xi^\lambda\partial_\lambda h_{\mu\nu}+h_{\lambda\nu}\partial_\mu \xi^\lambda +h_{\mu\lambda} \partial_\nu\xi^\lambda \end{eqnarray} and the quantum gauge transformations read \begin{eqnarray} &&\delta \bar{g}_{\mu\nu}=\xi^\lambda\partial_\lambda \bar{g}_{\mu\nu}\nonumber\\ &&\delta h_{\mu\nu}=\xi^\lambda\partial_\lambda h_{\mu\nu}+\partial_\mu \xi^\lambda \left(\bar{g}_{\lambda\nu}+h_{\lambda\nu}\right)+\partial_\nu\xi^\lambda \left(\bar{g}_{\mu\lambda}+h_{\mu\lambda}\right) \end{eqnarray} Working to one loop order, they simplify to \begin{eqnarray} && \delta \bar{g}_{\mu\nu}=\xi^\lambda\partial_\lambda \bar{g}_{\mu\nu}+\bar{g}_{\lambda\nu}\partial_\mu \xi^\lambda +\bar{g}_{\mu\lambda}\partial_\nu\xi^\lambda =\bar{\nabla}_\mu\xi_\nu+\bar{\nabla}_\nu\xi_\mu\nonumber\\ &&\delta h_{\mu\nu}=\xi^\lambda\partial_\lambda h_{\mu\nu} \end{eqnarray} and to \begin{eqnarray} &&\delta \bar{g}_{\mu\nu}=\xi^\lambda\partial_\lambda \bar{g}_{\mu\nu}\nonumber\\ &&\delta h_{\mu\nu}=\xi^\lambda\partial_\lambda h_{\mu\nu}+\bar{g}_{\lambda\nu}\partial_\mu \xi^\lambda
+\bar{g}_{\mu\lambda}\partial_\nu\xi^\lambda \end{eqnarray} they still act nonlinearly on the quantum fluctuations owing to the inhomogeneous terms. Physically, this means that the quantum fluctuations behave as Goldstone bosons of broken Diff invariance. \par To study the Diff invariant phase would mean to compute with \begin{equation} \overline{g}_{\mu\nu}=0 \end{equation} which is not possible, because there is then no background geometry. For starters, it is not possible to define even the inverse metric, $\bar{g}^{\mu\nu}$, and consequently neither can the Christoffels be computed. \par In some cases, and using the first order formalism, it is possible to functionally integrate without the restriction that the determinant of the metric does not vanish, $\bar{g}\neq 0$. An example is Witten's treatment \cite{Witten3} of three-dimensional quantum gravity as a gauge theory. \par It is not clear what conclusions to draw for the four-dimensional case. \newpage \section{Zeta function} Consider the partition function in euclidean signature \begin{equation} Z\equiv\int{\cal D}\phi~e^{-{1\over 2}\int \sqrt{|g|}d^n x~\phi A\phi} \end{equation} this means that the dimension of the fields $\phi$ must be ${n-d_A\over 2}$, where $d_A$ is the mass dimension of the operator $A$; usually $d_A=2$. The eigenvalue equation for this operator is \begin{equation} A \phi_n=\lambda_n\phi_n \end{equation} the dimension of $\lambda_n$ must necessarily be that of the operator $A$. We can fool around with the dimension of $\phi_n$, or fix it through normalization: \begin{equation} \langle\phi_n|\phi_m\rangle\equiv \int \sqrt{|g|}~d^n x~ \phi^*_n~ \phi_m=\delta_{mn} \end{equation} the dimension of $\phi_m$ is then ${n\over 2}$ in the Kronecker case, or $0$ in the continuous case when the Kronecker delta is replaced by a Dirac delta of momentum $\delta^n(k)$.
\par If the set of eigenfunctions is complete in the functional space, it is possible to formally expand \begin{equation} \phi\equiv\sum a_n~\phi_n \end{equation} the dimension of the expansion coefficients $a_n$ is ${n-d_A\over 2}-{n\over 2}=-{d_A\over 2}$ with the discrete normalization. \par It is tempting to {\em define the functional measure} as the dimensionless quantity \begin{equation} {\cal D}\phi\equiv \prod_n \mu^{d_A\over 2}~da_n \end{equation} then the gaussian integral is represented by the infinite product \begin{eqnarray} &&Z=\prod_n \mu^{d_A\over 2}\int da_n~e^{-{1\over 2}\int \sqrt{|g|}d^n x~a_n\phi_n Aa_n\phi_n}=\nonumber\\ &&=\prod_n \mu^{d_A\over 2}\int da_n~e^{-{a_n^2\over 2}\lambda_n}=\prod_n \mu^{d_A\over 2}\sqrt{2\pi\over \lambda_n} \end{eqnarray} The zeta-function associated to the operator $A$ is now defined by analogy with Riemann's zeta function \begin{equation} \zeta(s)\equiv \sum_n \left({\lambda_n \over \mu^{d_A}}\right)^{-s} \end{equation} differentiating, we find \begin{equation} \zeta^\prime(s)=-\sum_n~\text{log}~\left({\lambda_n \over \mu^{d_A}}\right)~\left({\lambda_n \over \mu^{d_A}}\right)^{-s} \end{equation} so that \begin{equation} -\zeta^\prime(0)=\sum_n~\text{log}~\left({\lambda_n \over \mu^{d_A}}\right)~=\text{log}~\text{det}~A \end{equation} then the determinant of the operator itself is defined by analytic continuation as \begin{equation} \text{det}~ A\equiv e^{-\zeta^\prime\left(0\right)} \end{equation} this definition was first proposed in the mathematical literature by Ray and Singer \cite{Ray}; in physics it was first used by Dowker and Critchley \cite{Dowker} and by Hawking \cite{Hawking}, who studied its conformal properties and rederived the conformal anomaly.
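For a finite-dimensional matrix the zeta-function prescription reproduces the ordinary determinant; a minimal numerical sketch (with $\mu=1$ and an arbitrary toy spectrum):

```python
import numpy as np

# For a finite positive matrix, det A = exp(-zeta'(0)) reproduces the ordinary
# determinant: zeta(s) = sum_n lambda_n^(-s), so zeta'(0) = -sum_n log(lambda_n).
A = np.diag([1.0, 2.0, 3.0, 4.0])          # toy "operator" (mu = 1)
lam = np.linalg.eigvalsh(A)

def zeta(s):
    return np.sum(lam ** (-s))

# central finite difference for zeta'(0)
h = 1e-6
zeta_prime_0 = (zeta(h) - zeta(-h)) / (2 * h)

det_zeta = np.exp(-zeta_prime_0)
det_direct = np.prod(lam)                   # ordinary determinant, here 24
```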
\par It would seem that this definition immediately implies \begin{equation} \det\,(AB)=\det\,A\det\,B \end{equation} this is in fact obvious in the finite case for simultaneously diagonalizable operators, because \begin{equation} \zeta_{AB}(s)=\sum_{n} \left(\lambda^A_n \lambda^B_n\right)^{-s} \end{equation} in such a way that \begin{equation} \zeta^\prime_{AB}(0)=-\sum_{n}\log\,\left(\lambda^A_n \lambda^B_n\right)=-\sum_n \log\,\lambda^A_n-\sum_n\,\log\,\lambda^B_n=\zeta^\prime_A(0)+\zeta^\prime_B(0) \end{equation} In general this is not so, and the {\em multiplicative anomaly} \cite{Elizalde}\cite{Kontsevich}\cite{McKenzie-Smith} is defined as \begin{equation} a_{AB}\equiv \log\,\det\,(AB)-\log \det\,A-\log\det\,B \end{equation} Let us work out in detail the most basic of all determinants, that of the flat space d'Alembertian. The dimensionless eigenfunctions are plane waves \begin{equation} \phi_k\equiv {1\over (2\pi)^{n\over 2}}~e^{i k x} \end{equation} and are normalized in such a way that \begin{equation} \int d^n x~\phi_k^*(x) \phi_{k^\prime}(x)=\delta^n~\left(k-k^\prime\right) \end{equation} the eigenvalues are simply \begin{equation} \lambda_k=-k^2 \end{equation} the continuum normalization means that fields are expanded as \begin{equation} \phi(x)=\int d^n k ~a_k~ \phi_k(x) \end{equation} this means that the dimension of the expansion coefficients is now \begin{equation} \left[a_k\right]=-{n+d_A\over 2} \end{equation} the zeta function is given by \begin{equation} \zeta(s)=\int {d^n k\over (2\pi)^n}~\left({-{k^2\over \mu^2}}\right)^{- s}=\int {d^n k\over (2\pi)^n}~e^{-s~\text{log}~\left({-{k^2\over \mu^2}}\right)} \end{equation} This leads to the expression for the determinant of the ordinary d'Alembert operator \begin{equation}\label{flat} \text{log~det}~\Box=\int {d^n~ k\over (2\pi)^n} \text{log}\left({-{k^2\over \mu^2}}\right) \end{equation} \newpage \section{Heat kernel} Let us now follow a slightly different route, which is however intimately related.
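The absence of the anomaly in finite dimensions can be verified directly, since the matrix determinant is multiplicative; the anomaly is a genuinely infinite-dimensional (regularization) effect. A sketch with arbitrary positive-definite matrices:

```python
import numpy as np

# In finite dimensions the zeta-regularized determinant is the ordinary one,
# so the multiplicative anomaly a_AB = log det(AB) - log det A - log det B
# vanishes identically.
rng = np.random.default_rng(1)
M1, M2 = rng.normal(size=(2, 3, 3))
A = M1 @ M1.T + 3.0 * np.eye(3)   # two positive-definite "operators"
B = M2 @ M2.T + 3.0 * np.eye(3)

def logdet(X):
    # sign is +1 here since det A, det B > 0
    return np.linalg.slogdet(X)[1]

anomaly = logdet(A @ B) - logdet(A) - logdet(B)
```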
We begin, following Schwinger, by considering the divergent integral, which naively is independent of $\lambda$, \begin{equation} I(\lambda)\equiv\int_0^{\infty} \frac{dx}{x}~e^{- x\lambda} \end{equation} the integral is actually divergent, so before we begin speaking about it, it has to be regularized. It can be defined through \begin{equation} I(\lambda)\equiv \lim_{\epsilon\rightarrow 0} I(\epsilon,\lambda)\equiv \lim_{\epsilon\rightarrow 0}\int_\epsilon^{\infty} \frac{dx}{x}e^{- x\lambda} \end{equation} such that \begin{equation} \lim_{\epsilon\rightarrow 0}\frac{\partial I(\epsilon,\lambda)}{\partial\lambda}=-\int_\epsilon^\infty dx\,e^{-x\lambda}=\left.{e^{-x\lambda}\over \lambda}\right|_\epsilon^\infty= -\frac{1}{\lambda} \end{equation} it follows that \begin{equation} I(\lambda)=-\log{\lambda}+C \end{equation} \par It is natural to define, for trace class operators\footnote{ In the physical Lorentzian signature, all quantities will be computed from analytic continuations from Riemannian configurations where they are better defined. This procedure is not always unambiguous when gravity is present.}, \begin{equation} \log\det \Delta= \text{tr}\log\Delta\equiv \sum_n \log\lambda_n \end{equation} \par Now given an operator (with purely discrete, positive spectrum) we can generalize the above idea (Schwinger) \begin{equation} \log\det\Delta\equiv -\int_0^\infty \frac{d\tau}{\tau} \text{tr}~ e^{-\tau\Delta} \end{equation} the trace here encompasses not only discrete indices, but also includes a space-time integral.
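Numerically, the statement $I(\lambda)=-\log\lambda+C$ means that differences of the regularized integral at fixed $\epsilon$ are finite and equal to log-ratios; a rough quadrature check (the grid sizes below are arbitrary):

```python
import numpy as np

# I(eps, lam) = ∫_eps^∞ dx/x e^(-x lam); individually it diverges as eps -> 0,
# but I(eps, lam1) - I(eps, lam2) -> log(lam2/lam1), exhibiting I = -log(lam) + C.
def I(eps, lam, n=200001):
    x = np.geomspace(eps, 60.0 / lam, n)    # log grid resolves the 1/x region
    y = np.exp(-lam * x) / x
    return np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(x))   # trapezoid rule

eps = 1e-8
diff = I(eps, 1.0) - I(eps, 3.0)            # expect log(3) up to O(eps)
```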
Let us define now the {\em heat kernel} associated to that operator as the operator \begin{equation} K(\tau)\equiv e^{-\tau \Delta} \end{equation} formally the inverse operator is given through \begin{equation} \Delta^{-1}\equiv \int_0^\infty d\tau~K(\tau) \end{equation} where the kernel obeys the heat equation \begin{equation} \left(\frac{\partial}{\partial\tau}+\Delta\right)K(\tau)=0 \end{equation} in all cases that will interest us, the operator $\Delta$ will be a differential operator. Then the heat equation is a parabolic equation \begin{equation} \left(\frac{\partial}{\partial\tau}+\Delta\right)K(\tau;x,y)=0 \end{equation} which needs to be solved with the boundary condition \begin{equation} K(x,y,0)=\delta^{(n)}(x-y) \end{equation} \par The mathematicians have studied operators which are deformations of the laplacian of the type \begin{equation} \Delta\equiv -D^{\mu}D_{\mu}+Y \end{equation} where $D_\mu$ is a gauge covariant derivative \begin{equation} D_{\mu}\equiv \nabla_{\mu}+X_{\mu} \end{equation} and $\nabla_\mu$ is the usual covariant space-time derivative. \par In the simplest case $X=Y=0$ and $\nabla_\mu=\partial_\mu$, the flat space solution corresponding to the Euclidean Laplacian is given by \begin{equation} K_0(x,y;\tau)=\frac{1}{(4\pi \tau)^{n/2}}e^{-\frac{\sigma(x,y)}{2\tau}} \end{equation} where $\sigma(x,y)$ is Synge's world function \cite{Synge}, which in flat space is simply given by \begin{equation} \sigma(x,y)\equiv {1\over 2}(x-y)^2 \end{equation} This can be easily checked by direct computation.
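The proper-time representation of the inverse can be checked in finite dimensions; here a positive-definite matrix plays the role of $\Delta$ and the heat kernel is built from its spectral decomposition (a sketch; the grid and cutoff are arbitrary):

```python
import numpy as np

# Finite-dimensional check that Delta^{-1} = ∫_0^∞ dtau e^{-tau Delta}
# for a positive-definite "operator" Delta.
rng = np.random.default_rng(2)
M = rng.normal(size=(3, 3))
Delta = M @ M.T + 2.0 * np.eye(3)            # eigenvalues >= 2

evals, evecs = np.linalg.eigh(Delta)
tau = np.linspace(0.0, 30.0, 40001)          # e^{-2*30} makes the tail negligible

# heat kernel in the eigenbasis: diag(exp(-tau * lambda_i)); trapezoid rule in tau
channels = np.exp(-np.outer(tau, evals))     # shape (N_tau, 3)
weights = np.diff(tau)
integrals = ((channels[1:] + channels[:-1]) * 0.5 * weights[:, None]).sum(axis=0)

Delta_inv_heat = evecs @ np.diag(integrals) @ evecs.T
Delta_inv_direct = np.linalg.inv(Delta)
```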
\begin{eqnarray} \langle x \left|K_0(\tau)\right| x^\prime\rangle&&=\langle x \left|e^{\tau\Box}\right| x^\prime\rangle=e^{\tau\Box}\langle x| x^\prime\rangle=\nonumber\\ &&=e^{\tau\Box}\delta^{(n)}\left(x-x^\prime\right)=\int{d^n k\over (2\pi)^n}\,e^{-\tau k^2+i k(x-x^\prime)}=\nonumber\\ &&=\int {d^n k\over (2\pi)^n} e^{-\left(k\sqrt{\tau}-i{x-x^\prime\over 2\sqrt{\tau}}\right)^2}e^{-{(x-x^\prime)^2\over 4 \tau}}={1\over (4 \pi\tau)^{n\over 2}}\,e^{-{(x-x^\prime)^2\over 4\tau}}\nonumber\\ \end{eqnarray} \par Lorentzian signature leads to the replacement of the heat equation by Schr\"odinger's one (cf. \cite{HartleHawking}) \begin{equation} S\equiv {1\over 2}\int d^4 x \phi(-\Box)\phi \end{equation} the one-loop operator is $-\Box$. Let us define \begin{equation} K(s)=e^{-is \Box} \end{equation} so that \begin{equation} i{\partial \over \partial s} K(s)= \Box K(s) \end{equation} then \begin{eqnarray} \Delta^{-1}&&\equiv \int_0^\infty ds \int \frac{d^4 k}{(2\pi)^4} e^{i s k^2}e^{i k x}=\nonumber\\ &&=\int_0^\infty ds \int \frac{d^4 k}{(2\pi)^4} e^{is(k+{x\over 2s})^2-i {x^2\over 4 s}}=\int_0^\infty ds\, {1\over \left(4\pi i s\right)^2}\, e^{-{i \sigma\over 2 s}} \end{eqnarray} it is possible to regularize the UV divergence at the coincidence limit $x=y$ by substituting \begin{equation} \sigma(x,y)\rightarrow \lim_{\epsilon\rightarrow 0^+}\,\sigma(x,y)- i\epsilon \end{equation} It is unfortunately quite difficult to get explicit solutions of the heat equation except in very simple cases. This limits the applicability of the method for computing finite determinants. These determinants are, however, divergent in all cases of interest in QFT, and their divergence is due to the lower limit of the proper time integral. If we were able to know the solution close to the lower limit, we could get at least some information on the structure of the divergences. This is exactly how far it is possible to go.
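The direct computation above is easy to reproduce numerically in $n=1$: the Fourier integral of $e^{-\tau k^2+ik(x-x^\prime)}$ agrees with the closed form $(4\pi\tau)^{-1/2}e^{-(x-x^\prime)^2/4\tau}$ (the values of $\tau$ and $x-x^\prime$ below are arbitrary):

```python
import numpy as np

# Check K_0(tau; x, y) = (4 pi tau)^(-1/2) exp(-(x-y)^2 / (4 tau)) against its
# Fourier representation ∫ dk/(2 pi) exp(-tau k^2 + i k (x-y)) in n = 1.
tau, dx = 0.3, 0.7

k = np.linspace(-60.0, 60.0, 240001)
integrand = np.exp(-tau * k**2) * np.cos(k * dx)   # odd part integrates to zero
K_fourier = np.sum((integrand[1:] + integrand[:-1]) * 0.5 * np.diff(k)) / (2 * np.pi)

K_closed = (4 * np.pi * tau) ** -0.5 * np.exp(-dx**2 / (4 * tau))
```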
\par The small proper time expansion of Schwinger and DeWitt is given by a Taylor expansion \begin{equation} K\left(\tau;x,y\right)=K_0 \left(\tau;x,y\right)~\sum_{p=0}^\infty~a_p \left(x,y\right)\tau^p \end{equation} with \begin{equation} a_0(x,x)=1 \end{equation} the coefficients $a_p\left(x,y\right)$ characterize the operator whose determinant is to be computed. They are {\em universal} in the sense that they are independent of the dimension of the space-time manifold, as well as of the non-invariant characteristics of the gauge fields; they only depend on {\em local} geometrical and gauge invariants \cite{Gilkey}. Actually, for the purpose at hand, only their diagonal part, ${\rm tr\,}\,a_n\left(x,x\right)$, is relevant. \par The integrated diagonal coefficients will be denoted by capital letters \begin{equation} A_n\equiv \int \sqrt{|g|}~d^n x~ a_n(x,x) \end{equation} in such a way that \begin{equation} A_0=vol\equiv \int_M \sqrt{|g|}~d^n x \end{equation} the determinant of the operator is then given by a still divergent integral; the short time expansion by itself does not improve anything in that respect. This integral has to be regularized by some procedure. One of the possibilities is to keep $x\neq y$ in the exponent, so that \begin{eqnarray} &&\text{log~det}~\Delta\equiv -\int_0^\infty \frac{d\tau}{\tau} \text{tr}~K(\tau)\equiv \nonumber\\ &&\equiv -\lim_{\sigma\rightarrow 0}\int_0^{\infty}\frac{d\tau}{\tau}\frac{1}{(4\pi \tau)^{n/2}}\sum_{p=0}^\infty\tau^p \text{tr}~ a_p(x,y)~ e^{-\frac{\sigma}{2\tau}} \end{eqnarray} we have regularized the determinant by point-splitting. For consistency, the off-diagonal part of the short-time coefficients ought to be kept as well. \par All ultraviolet divergences are given by the behavior at the $\tau\sim 0$ endpoint.
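The proper-time integral that produces the $\Gamma$ functions in the next step is elementary (substitute $u=\sigma/2\tau$): $\int_0^\infty d\tau\,\tau^{p-{n\over 2}-1}e^{-{\sigma\over 2\tau}}=\Gamma\left({n\over 2}-p\right)\left({\sigma\over 2}\right)^{p-{n\over 2}}$, valid for $p<{n\over 2}$. A quick numerical check (the values of $n$, $p$, $\sigma$ are arbitrary):

```python
import math
import numpy as np

# Check ∫_0^∞ dtau tau^(p - n/2 - 1) e^(-sigma/(2 tau))
#       = Gamma(n/2 - p) * (sigma/2)^(p - n/2)
n, p, sigma = 4, 0, 0.8

tau = np.geomspace(1e-6, 1e4, 400001)   # wide log grid; integrand decays at both ends
f = tau ** (p - n / 2 - 1) * np.exp(-sigma / (2 * tau))
integral = np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(tau))    # trapezoid rule

closed = math.gamma(n / 2 - p) * (sigma / 2) ** (p - n / 2)
```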
Changing the order of integration, and performing first the proper time integral, the Schwinger-DeWitt expansion leads to \begin{equation} \log\det~\Delta=-\frac{1}{(4\pi)^{n/2}}\int \sqrt{|g|}~d^n x~\lim_{\sigma\rightarrow 0}\sum_{p=0}^\infty \frac{ \sigma^{p-{n\over 2}}}{2^{p-{n\over 2}} }~\Gamma\left({n\over 2}-p\right)~{\rm tr\,}\, a_p (x,y) \end{equation} here we have not included the $\sigma$ dependence of \begin{equation} \lim_{\sigma\rightarrow 0}~a_p\left(x,y\right) \end{equation} in flat space this corresponds to \begin{equation} \left(x-y\right)^2=2 \sigma\rightarrow 0 \end{equation} assuming this dependence is analytic, it could only yield higher powers of $\sigma$, as will become plain in a moment. The term $p=0$ diverges in four dimensions when $\sigma\rightarrow 0$ as \begin{equation} {1\over \sigma^2} \end{equation} but this divergence is common to all operators and can be absorbed by a counterterm proportional to the total volume of the space-time manifold. This renormalizes the cosmological constant. \par The next term corresponds to $p=2$, and is independent of $\sigma$. In order to pinpoint the divergences, take $n=4-\epsilon$; the term is then given by \begin{equation} \log\det\left.\Delta\right|_{n=4}\equiv\frac{1}{(4\pi)^2}\frac{2}{\epsilon}~ A_2 \end{equation} from this term on, the limit $\sigma\rightarrow 0$ kills everything. A different way to proceed is to take $\sigma=0$ from the beginning and put explicit IR ($\mu$) and UV ($\Lambda$) proper time cutoffs, such that ${\Lambda\over\mu}\gg 1$. It should be emphasized that these cutoffs are not cutoffs in momentum space; in particular, they respect all gauge symmetries the theory may enjoy.
\begin{equation} \log\det\Delta\equiv -\int\frac{d\tau}{\tau} \text{tr}~K(\tau)\equiv -\int_{1\over \Lambda^2}^{1\over \mu^2}\frac{d\tau}{\tau}\frac{1}{(4\pi \tau)^{n/2}}\sum_{p=0}\tau^p \text{tr}~ A_p\left[\Delta\right]~ \end{equation} this yields, for example in $n=4$ dimensions \begin{equation} \log\det\Delta=-{1\over (4\pi)^2}\Big[{1\over 2}\left(\Lambda^4-\mu^4\right)+ A_1\left[\Delta\right]~\left(\Lambda^2-\mu^2\right) +A_2\left[\Delta\right]~ \text{log}~{\Lambda^2\over \mu^2}\Big] \end{equation} there are finite contributions that are not captured by the small proper time expansion; those are much more difficult to compute and, as has already been pointed out, the heat kernel method is not particularly helpful in that respect. \par It is possible to relate the $\zeta$ function to the heat kernel, namely \begin{equation} \zeta(s)={1\over \Gamma(s)}\,{\rm tr\,}\,\int_0^\infty d\tau\, \tau^{s-1}\,K(\tau) \end{equation} in such a way that \begin{equation} {\rm tr\,}\,K(\tau)={1\over 2 \pi i}\oint ds~\tau^{-s}\,\Gamma(s)\,\zeta(s) \end{equation} Another possible covariant regulator of the effective action is \begin{equation} W\equiv\lim_{s\rightarrow 0}W_s\equiv \lim_{s\rightarrow 0}\int_0^\infty\,{d\tau\over \tau^{1-s}} K(\tau)= \lim_{s\rightarrow 0} \Gamma(s) \zeta(s)= \end{equation} \begin{equation} =\lim_{s\rightarrow 0}\left({1\over s}-\gamma_E\right)\left(\zeta(0)+ s \zeta^\prime(0)\right)=\lim_{s\rightarrow 0} {\zeta(0)\over s}+\zeta^\prime(0)-\gamma_E\zeta(0) \end{equation} the split between the divergent and the finite parts is clearly seen here.
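The $n=4$ cutoff formula is just the term-by-term proper-time integral of the short-time expansion; a numerical check with illustrative values for the integrated coefficients (not taken from any particular operator):

```python
import numpy as np

# Check the n = 4 cutoff formula term by term:
# -∫_{1/Lam^2}^{1/mu^2} dtau/tau (4 pi tau)^{-2} (a0 + a1 tau + a2 tau^2)
#   = -(1/(4 pi)^2) [a0 (Lam^4 - mu^4)/2 + a1 (Lam^2 - mu^2) + a2 log(Lam^2/mu^2)]
a0, a1, a2 = 1.0, 0.3, -0.7          # illustrative integrated coefficients
Lam, mu = 10.0, 0.5

tau = np.geomspace(1 / Lam**2, 1 / mu**2, 400001)
f = (a0 / tau**3 + a1 / tau**2 + a2 / tau) / (4 * np.pi) ** 2
numeric = -np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(tau))    # trapezoid rule

closed = -((Lam**4 - mu**4) / 2 * a0 + (Lam**2 - mu**2) * a1
           + np.log(Lam**2 / mu**2) * a2) / (4 * np.pi) ** 2
```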
Introducing the short time expansion of the heat kernel before point splitting \begin{eqnarray} &&W=\lim_{s\rightarrow 0}\int_0^\infty\,{d\tau\over \tau^{1-s}} {1\over (4\pi\tau)^{n\over 2}} \sum_p \lim_{\sigma\rightarrow 0}\, e^{-{\sigma\over 2 \tau}} a_p(\sigma) \tau^p= \nonumber\\ &&={1\over (4\pi)^{n\over 2}}\lim_{s\rightarrow 0}\sum_p {1\over 2^{s +p-{n\over 2}}}\Gamma\left({n\over 2}-p-s\right)\lim_{\sigma\rightarrow 0}\,a_p(\sigma)\, \sigma^{s+p-{n\over 2}} \end{eqnarray} this also yields a somewhat symbolic identity valid for $s\sim 0$ \begin{eqnarray} && \zeta(s)={W_s\over \Gamma(s)}= {1\over (4\pi)^{n\over 2}}{1\over \Gamma(s)}\sum_p {1\over 2^{s +p-{n\over 2}}}\Gamma\left({n\over 2}-p-s\right)\lim_{\sigma\rightarrow 0}\,a_p(\sigma)\, \sigma^{s+p-{n\over 2}}\nonumber\\ \end{eqnarray} there is no logarithmic singularity as long as $s\neq 0$. Introducing both IR (${1\over \mu^2}$) and UV (${1\over \Lambda^2}$) cutoffs in proper time, we are led to \begin{equation} W={1\over (4\pi)^{n/2}}\,\sum_{p=0}^{{n\over 2}-1}\,{a_p\over {p-\frac{n}{2}}}\,\Big[\left(\mu^2\right)^{{n\over 2}-p}-\left(\Lambda^2\right)^{{n\over 2}-p}\Big]+{1\over (4\pi)^{n/2}}\,\,a_{n\over 2}\,\log\,{\Lambda^2\over \mu^2} \end{equation} In particular, for the Klein-Gordon operator $\Delta_{KG}=-\Box+m^2$ the heat kernel reads \begin{equation} K_{KG}(\tau;x-y)={1\over (4\pi\tau)^{n\over 2}}\,e^{-{(x-y)^2\over 4 \tau}-m^2 \tau} \end{equation} then the $\zeta$-function can be recovered from the heat kernel through \begin{eqnarray} \zeta_{KG}(s)&&={1\over \Gamma(s)}\int_0^\infty {d\tau\over \tau^{1-s}}{1\over (4\pi\tau)^{n\over 2}} e^{-m^2\tau}\sum_{p=0}^\infty a_p\,\tau^p=\sum_{p=0}^{\infty}a_p{\left(m^2\right)^{{n\over 2}-p-s}\over (4\pi)^{n\over 2}}\,\frac{\Gamma\left(s+p-{n\over 2}\right)}{\Gamma(s)}\nonumber\\ \end{eqnarray} in even dimension, $n\in 2 \mathbb{N}$, \begin{equation} \frac{\Gamma\left(s+p-{n\over 2}\right)}{\Gamma(s)}=\frac{1}{(s-1)(s-2)\ldots \left(s+p-{n\over 2}\right)} \end{equation} so that \begin{equation}
\zeta_{KG}(s)=\sum_{p=0}^{\infty}a_p{\left(m^2\right)^{{n\over 2}-p-s}\over (4\pi)^{n\over 2}(s-1)(s-2)\ldots \left(s+p-{n\over 2}\right)} \end{equation} it follows that \begin{equation} \zeta_{KG}(0)=\sum_{p=0}^{n\over 2}a_p {(-1)^{{n\over 2}-p}\,m^{n-2p}\over (4\pi)^{n\over 2} \left({n\over 2}-p\right)!} \end{equation} \newpage \section{Covariant perturbation theory.} Let us begin by constructing a perturbation theory for the operator in flat space \cite{Mukhanov} \begin{equation} \Delta\equiv -\Box-E \end{equation} if $E$ can be treated as a small correction to the operator $\Box$, we can assume that \begin{equation} K(\tau)=\sum_{n=0}^\infty K_n(\tau) \end{equation} where $K_n(\tau)$ is of order $E^n$, then the heat equation \begin{equation} {\partial \over \partial \tau} K(\tau)=\left(\Box+E\right)\, K(\tau) \end{equation} implies that \begin{equation} {\partial K_n\over \partial \tau}=\Box K_n+ E K_{n-1} \end{equation} We start the recurrence with \begin{equation} K_0(\tau)=e^{\tau\Box} \end{equation} (in the general curved-space case, the zeroth-order kernel also involves the {\em parallel displacement operator} $a_0(x,x^\prime)$, which parallel-transports the field from $x^\prime$ to $x$: $\phi(x)=a_0(x,x^\prime)\phi_0(x)$). It is clear that a formal solution to the recurrence reads \begin{equation} K_n(\tau)= e^{\tau\Box}\int_0^\tau ds\,e^{-s\Box} E\,K_{n-1}(s)=K_0(\tau)\int_0^\tau ds\,K_0(-s)\, E\,K_{n-1}(s) \end{equation} we can compute the trace \begin{eqnarray} &&\langle x\left|K_1(\tau)\right| x\rangle=\left\langle x\left|K_0(\tau)\int_0^\tau ds\,K_0(-s)\, E\,K_0(s)\right| x\right\rangle=\nonumber\\ &&=\int d^n y\,d^n z\int_0^\tau ds\,\left\langle x\left|K_0(\tau-s)\right|y\right\rangle\langle y\left|E(z)\right| z\rangle\langle z\left| K_0(s)\right|x\rangle=\nonumber\\ &&={1\over (4\pi \tau)^{n\over 2}}\int_0^\tau\, ds\, e^{{s(\tau-s)\over \tau}\Box}\,E(x) \end{eqnarray} when the background is not flat, the trick introduced by Barvinsky and Vilkovisky \cite{Barvinsky} consists in postulating an auxiliary metric, $g_{\mu\nu}^a$ such that
$R_{\mu\nu\rho\sigma}(g^a)=0$ as well as $\left[\nabla_\mu^a,\nabla_\nu^a\right]\,\phi=0$. We already know how to construct a perturbation theory for the auxiliary space with the operator \begin{equation} \Delta\equiv -\Box^a - E^a \end{equation} then we write \begin{eqnarray} &g_{\alpha\beta}= g_{\alpha\beta}^a+ h_{\alpha\beta}\nonumber\\ &\nabla_\mu\phi=\left(\nabla^a_\mu+\Gamma_\mu\right)\phi \end{eqnarray} and expand in powers of $h$, then \begin{equation} E(x)=\frac{1}{4}\delta_{\mu\nu}\Box h^{\mu\nu}(x)-V(x)+{\cal O}(h^2)\end{equation} The full expression to second order reads \cite{Barvinsky} \begin{eqnarray} {\rm tr\,}\,K(\tau)&&={1\over (4\pi\tau)^{n\over 2}}\int d^n x \sqrt{g}\, {\rm tr\,}\bigg\{1+\tau \left(\frac{R}{6}-V\right) +\nonumber\\ &&+\frac{\tau^2}{2}\left(V-\frac{R}{6}\right) f_1(-\tau\Box) V+\tau^2 V f_2(-\tau\Box) R+\nonumber\\ &&+\tau^2R f_3(-\tau\Box) R+\tau^2R_{\mu\nu}f_4(-\tau\Box)R^{\mu\nu}\bigg\} \end{eqnarray} where the different form factors are given by \begin{eqnarray} &&f_1(x)=\int_0^1\, d\alpha\, e^{-\alpha(1-\alpha) x}\nonumber\\ &&f_2(x)=-\frac{f_1(x)}{6}-\frac{f_1(x)-1}{2x}\nonumber\\ &&f_3(x)=\frac{f_1(x)}{32}+\frac{f_1(x)-1}{8x}-\frac{f_4(x)}{8}\nonumber\\ &&f_4(x)=\frac{f_1(x)-1+\frac{x}{6}}{x^2} \end{eqnarray} \newpage \section{Flat space determinants} Let us see in detail how the heat equation can be iterated to get the coefficients of the short time expansion for operators pertaining to flat space gauge theories.
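The form factors are smooth at $x=0$ despite the explicit $1/x$ and $1/x^2$: expanding the integral gives $f_1(x)=1-\frac{x}{6}+\frac{x^2}{60}+\ldots$, so in particular $f_4(x)\to\frac{1}{60}$ as $x\to 0$. A quick numerical check of $f_1$ and of this limit:

```python
import numpy as np

# Barvinsky-Vilkovisky form factor f1(x) = ∫_0^1 d(alpha) exp(-alpha(1-alpha)x).
# Small-x expansion: f1 = 1 - x/6 + x^2/60 + ...,
# hence f4(x) = (f1(x) - 1 + x/6)/x^2 -> 1/60 as x -> 0.
def f1(x, n=200001):
    a = np.linspace(0.0, 1.0, n)
    y = np.exp(-a * (1.0 - a) * x)
    return np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(a))   # trapezoid rule

def f4(x):
    return (f1(x) - 1.0 + x / 6.0) / x**2

small = f4(1e-3)      # should be close to 1/60
```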
\par Consider the operator \begin{equation} \Delta\equiv -D_\mu D^\mu+Y \end{equation} where \begin{equation} D_\mu\equiv \partial_\mu+A_\mu \end{equation} The small proper time expansion of the heat kernel \begin{eqnarray} &&K\left(\tau;x,y\right)\equiv\frac{1}{(4\pi \tau)^{n/2}}e^{-\frac{\sigma}{2\tau}}\sum_{p=0}^\infty\tau^p a_p(x,y) \end{eqnarray} when substituted into the heat equation leads to \begin{eqnarray} &&{\partial \over \partial \tau}~K\left(\tau;x,y\right) = {1\over (4\pi)^{n\over 2}}~e^{-\frac{\sigma}{2\tau}}\sum_{p=0}^\infty\left(a_p{\sigma\over 2} +(p-{n\over 2}-1)a_{p-1}\right)\tau^{p-2-{n\over 2}}\nonumber\\ \end{eqnarray} \begin{eqnarray} &&D_\mu K\left(\tau;x,y\right)= {1\over (4 \pi \tau)^{n\over 2}}e^{-\frac{\sigma}{2\tau}}\sum_{p=0}^\infty \left(-{\sigma_\mu\over 2\tau}a_p+D_\mu a_p\right)\tau^p\end{eqnarray} \begin{eqnarray} &&\sum_\mu D_\mu^2 K\left(\tau;x,y\right)=\nonumber\\ &&={1\over (4\pi)^{n\over 2}}~e^{-\frac{\sigma}{2\tau}}~\sum_{p=0}^\infty\left(- {n\over 2}~a_{p-1}+{\sigma\over 2}a_p-\sum_\mu~\sigma^\mu~ D_\mu a_{p-1}+ D^2 a_{p-2}\right)\tau^{p-2-{n\over 2}}\nonumber\\\end{eqnarray} \begin{eqnarray} &&-\Delta K\left(\tau;x,y\right)=\left(D_\mu^2-Y\right) K=\nonumber\\ &&={1\over (4\pi)^{n\over 2}}~e^{-\frac{\sigma}{2\tau}}~\sum_{p=0}^\infty\left(- {n\over 2}~a_{p-1}+{\sigma\over 2}a_p-\sum_\mu~\sigma^\mu~ D_\mu a_{p-1}- \Delta a_{p-2}\right)\tau^{p-2-{n\over 2}}\nonumber\\ \end{eqnarray} where $\sigma_\mu\equiv (x_\mu-y_\mu)$. \par The most divergent terms are those in $\tau^{-2-{n\over 2}}$, but they do not give anything new \begin{equation} {a_0\over 4}(x-y)^2={a_0\over 4}(x-y)^2 \end{equation} The next divergent term (only even p contribute to the expansion in a manifold without boundaries) is $\tau^{-1-{n\over 2}}$ \begin{equation} -{n\over 2} a_0=-{n\over 2} a_0-\sigma^\mu D_\mu a_0 \end{equation} so that we learn that \begin{equation}\label{cero} \sigma^\mu D_\mu a_0=0.
\end{equation} Generically, \begin{equation} \left(p-{n\over 2}-1\right)a_{p-1}=-{n\over 2} a_{p-1}-\sigma^\mu D_\mu a_{p-1}-\Delta a_{p-2} \end{equation} which is equivalent to \begin{equation}\label{iter} (p+1)a_{p+1}+\sigma^\mu D_\mu a_{p+1}+\Delta a_p=0 \end{equation} Taking the covariant derivative of \eqref{cero} \begin{equation}\label{uno} D_{\lambda}(\sigma^{\mu}D_{\mu}a_0)=D_\lambda a_0+\sigma^\mu D_\lambda D_\mu a_0=0 \end{equation} The first {\em coincidence limit} ($x\rightarrow y$) follows \begin{equation} \left[D_{\mu}a_0\right]\equiv \lim_{x\rightarrow y}D_{\mu}a_0(x,y) =0 \end{equation} Note that $[a_0]=1$, which we knew already, does not by itself imply this result. Taking a further derivative of \eqref{uno}, we get \begin{equation} \left[(D_{\mu}D_{\nu}+D_{\nu}D_{\mu})a_0\right]=0 \end{equation} whose trace reads \begin{equation} \left[D^2 a_0\right]=0 \end{equation} The usual definition of the gauge field strength \begin{equation} F_{\mu\nu}\equiv \left[D_{\mu},D_{\nu}\right] \end{equation} then implies \begin{equation} \left[D_{\mu}D_{\nu}~a_0\right]=\frac{1}{2}\left[([D_{\mu},D_{\nu}]+\{D_{\mu},D_{\nu}\})a_0\right]=\frac{1}{2}F_{\mu\nu} \end{equation} where the fact has been used that \begin{equation} \left[a_0\right]=1 \end{equation} Taking $p=0$ in (\ref{iter}) \begin{equation} a_1=-\sigma^\mu D_\mu a_1-\Delta a_0 \end{equation} so that \begin{equation} \left[a_1\right]=-\left[\Delta a_0\right]=-Y \end{equation} since $\Delta=-D^2+Y$.
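In one dimension and with no gauge field ($D_\mu=\partial_\mu$), the $p=0$ step of \eqref{iter} can be integrated in closed form, $a_1(x,y)=-(x-y)^{-1}\int_y^x Y(t)\,dt$, whose coincidence limit is indeed $-Y(y)$. A sympy sketch, taking an illustrative polynomial potential purely for concreteness:

```python
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
c0, c1, c2 = sp.symbols('c0 c1 c2')      # coefficients of an illustrative potential
Y = c0 + c1*(t - y) + c2*(t - y)**2      # Y(t) expanded around the base point y

# closed-form solution of the p=0 recursion a1 + (x-y) a1' + Y(x) = 0
a1 = -sp.integrate(Y, (t, y, x)) / (x - y)

# it solves the recursion identically ...
assert sp.simplify(a1 + (x - y)*sp.diff(a1, x) + Y.subs(t, x)) == 0
# ... and the coincidence limit x -> y gives -Y(y) = -c0
a1_reg = sp.cancel(a1)                   # remove the spurious (x-y) denominator
assert a1_reg.subs(x, y) == -c0
print(a1_reg.subs(x, y))
```

The same integrating-factor trick, $\partial_x\left[(x-y)^{p+1}a_{p+1}\right]=-(x-y)^p\,\Delta a_p$, generates all higher coefficients in this simplified setting.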
When $p=1$ in (\ref{iter}) \begin{equation} -2 a_2 =\Delta a_1 +\sigma^\mu D_\mu a_2 \end{equation} so that \begin{equation} \left[a_2\right]=-\frac{1}{2}\left[\Delta a_1\right] \end{equation} Let us differentiate the $p=0$ expression once more before taking the coincidence limit: \begin{equation} D_{\mu}a_1=-D_{\mu}\Delta a_0-D_\mu\left(\sigma^\nu D_\nu a_1\right)= -D_\mu \Delta a_0-D_{\mu}a_1-\sigma^{\lambda}D_{\mu}D_{\lambda}a_1 \end{equation} then \begin{equation} 2 D_\mu a_1=-D_\mu\Delta a_0-\sigma^\lambda D_\mu D_\lambda a_1 \end{equation} which, upon one more differentiation, implies at the coincidence limit \begin{equation} \left[D^2 a_1\right]=-\frac{1}{3}\left[D^2 \Delta a_0\right] \end{equation} that is \begin{equation} \left[\Delta a_1\right]\equiv -\left[D^2 a_1\right]+\left[Y a_1\right]=-\frac{1}{3}\left[D^2 D^2 a_0\right]-Y^2 +\frac{1}{3}D^2 Y \end{equation} Now differentiating equation (\ref{cero}) four times \begin{equation} (D_{\delta}D_{\sigma}D_{\rho}D_{\mu}+D_{\delta}D_{\sigma}D_{\mu}D_{\rho}+ D_{\delta}D_{\rho}D_{\mu}D_{\sigma}+D_{\sigma}D_{\rho}D_{\mu}D_{\delta}+ \sigma^{\lambda}D_{\delta}D_{\sigma}D_{\rho}D_{\mu}D_{\lambda})a_0=0 \end{equation} contracting with $\eta^{\delta\sigma}\eta^{\rho\mu}$ \begin{equation} \left[(D^2 D^2+D^{\mu}D^2 D_{\mu})a_0\right]=0 \end{equation} and contracting instead with $\eta^{\delta\rho}\eta^{\sigma\mu}$ \begin{equation} \left[(D^{\mu}D^{\nu}D_{\mu}D_{\nu})a_0\right]=0 \end{equation} now \begin{equation} \left[(D^{\sigma}D^{\mu}D_{\mu}D_{\sigma})a_0\right]=\left[\left(D^{\mu}D^{\sigma}D_{\mu}D_{\sigma}+ F^{\sigma\mu}D_\mu D_\sigma\right) a_0\right] \end{equation} it follows that \begin{equation} \left[D^\alpha D^2 D_\alpha a_0\right]=0+F^{\sigma\mu}\left[D_\mu D_\sigma a_0\right]=-{1\over 2} F^2_{\mu\nu} \end{equation} so that \begin{equation} \left[D^2 D^2 a_0\right]= \frac{1}{2} F^2_{\mu\nu} \end{equation} and finally \begin{equation} \left[a_2\right]=-\frac{1}{2}\left[\Delta a_1\right]=\frac{1}{6}\left[D^2 D^2 a_0\right]+\frac{1}{2}Y^2- \frac{1}{6}D^2
Y=\frac{1}{12}F^2_{\mu\nu} +\frac{1}{2}Y^2-\frac{1}{6}\Box Y \end{equation} The final expression for the divergent piece of the determinant of the flat space gauge operator reads \begin{equation}\label{ados} \text{log~ det}~ \Delta =\frac{1}{ (4\pi)^2}\frac{2}{4-n}\int d^n x~ \text{tr}~\left(\frac{1}{12}\,F^2_{\mu\nu}+\frac{1}{2}Y^2\right) \end{equation} The term in $\Box Y$ is a surface term which does not contribute in the absence of boundaries. This computation is immediately applicable to gauge theories in flat space \cite{Osborn}. The simplest gauge invariant action is \begin{equation} S\equiv -{1\over 4 g^2}\int d^4 x\, F_{\mu\nu}F^{\mu\nu} \end{equation} the gauge fixing term reads \begin{equation} S_{gf}\equiv-{1\over 2 g^2\xi}\int d^4 x\,\partial^\mu A_\mu \partial^\nu A_\nu \end{equation} and the ghost action is \begin{equation} S_{gh}\equiv -{1\over 2 g^2}\int d^4 x\,\partial^\mu\bar{c} \partial_\mu c \end{equation} the expansion up to quadratic order reads \begin{equation} S_2=-\int d^n x\left(\frac{1}{2}A_\mu\Delta^{\mu\nu}A_\nu-\bar{c}\Box c\right)\end{equation} where \begin{equation} \Delta_{\mu\nu}=-\eta_{\mu\nu}\Box+\left(1-\frac{1}{\xi}\right)\partial_\mu\partial_\nu \end{equation} and we have removed the factor $1/g^2$ by redefining \begin{eqnarray} &&A_\mu\rightarrow gA_\mu\nonumber\\ &&c\rightarrow gc \end{eqnarray} Then the one-loop effective action in Minkowski space is given by \begin{equation} \Gamma(\overline{A})\equiv {1\over 2}\left(\log\det\,\Delta_{\mu\nu}- 2\log\det \,\Box\right) \end{equation} Please note the factor and the sign of the ghost contribution.
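As an aside, the scalar part of the coefficient obtained above, $[a_2]=\frac{1}{2}Y^2-\frac{1}{6}\Box Y$ when $F_{\mu\nu}=0$, can be cross-checked against an exactly solvable case: for the one-dimensional operator $\Delta=-\partial_x^2+x^2$ the diagonal heat kernel is given in closed form by Mehler's formula, and its small $\tau$ expansion must reproduce $a_1=-Y$ and $a_2=\frac{1}{2}Y^2-\frac{1}{6}Y''$ with $Y=x^2$. A sympy sketch:

```python
import sympy as sp

x, tau = sp.symbols('x tau', positive=True)
Y = x**2                 # Delta = -d^2/dx^2 + x^2 (harmonic oscillator)

# Mehler's diagonal heat kernel <x|exp(-tau*Delta)|x>, with the flat
# prefactor (4 pi tau)^(-1/2) already stripped off:
K_norm = sp.sqrt(2*tau/sp.sinh(2*tau)) * sp.exp(-Y*sp.tanh(tau))

series = sp.expand(sp.series(K_norm, tau, 0, 3).removeO())
a1, a2 = series.coeff(tau, 1), series.coeff(tau, 2)

assert sp.simplify(a1 + Y) == 0                              # [a1] = -Y
assert sp.simplify(a2 - (Y**2/2 - sp.diff(Y, x, 2)/6)) == 0  # [a2] = Y^2/2 - Y''/6
print(a1, a2)
```

Here the curvature is zero and there is no gauge field, so only the $Y$-dependent terms of $[a_2]$ are being tested; the $F^2_{\mu\nu}$ piece would require a nontrivial connection.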
The divergent contributions can be read directly from our analysis: \begin{equation} \log\det\,\Delta_{\mu\nu}=-{2\over 4-n}{C\over (4\pi)^2}{20\over 3}\int d^n x F_{\mu\nu}^2 \end{equation} \begin{equation} \log\det \,\Box={2\over 4-n}{C\over (4\pi)^2}{1\over 3}\int d^nx F_{\mu\nu}^2 \end{equation} Collecting both results \begin{equation} \Gamma(\overline{A})=-{1\over \epsilon}{C\over (4\pi)^2}{22\over 3}\int d^nx F_{\mu\nu}^2 \end{equation} where $\epsilon=4-n$. In order to renormalize the gauge theory to this order we have to write \begin{equation} {1\over g_0^2}=\mu^{-\epsilon}\left({1\over g^2}+{22\over 3}{C\over (4\pi)^2}\frac{1}{\epsilon}\right) \end{equation} The origin of the term $\mu^{-\epsilon}$ is that in $n$ complex dimensions the coupling constant has dimensions of \begin{equation} \left[ g^2_0\right]=4-n \end{equation} (the gauge field has mass dimension one in any spacetime dimension). This means that the beta function (more on this momentarily) is given by \begin{equation} \left. \mu{\partial \over \partial \mu} g\right|_{g_0}=-\frac{1}{2}\epsilon g+\beta(g) \end{equation} differentiating $g_0^{-2}$ with respect to $\mu$ then gives \begin{equation} \epsilon\left({1\over g^2}+{22\over 3}{C\over (4\pi)^2}\frac{1}{\epsilon}\right)=\left[-\frac{1}{2}\epsilon g+\beta(g)\right]\frac{\partial}{\partial g}\frac{1}{g^2}\end{equation} finally \begin{equation} \beta(g)=-{11\over 3}{C\over (4\pi)^2}\,g^3 \end{equation} In the usual case when $G=SU(N)$ the index is $C=N$. \newpage \section{The conformal anomaly} Every renormalizable massless QFT without dimensionful coupling constants (masses count as coupling constants) enjoys scale invariance in flat space under \begin{equation} x^\mu\rightarrow \lambda x^\mu \end{equation} The conformal group $SO(4,2)$ \cite{Blumenhagen} is an extension of the Lorentz group (which has six generators: three {\em rotations} and three {\em boosts}).
This extension has 15 generators; besides the ones of the Lorentz group, one {\em dilatation}, four {\em translations} and four {\em special conformal transformations} \begin{eqnarray} &&x^\mu\rightarrow y^\mu \nonumber\\ &&{y^\mu\over y^2}={x^\mu\over x^2}-b^\mu \end{eqnarray} Although the inversion is not connected to the identity, the special conformal transformation is. An elegant way of characterizing the full set of conformal transformations is as generated by {\em conformal Killing vectors}, under which the Minkowski metric is not fully invariant, but rather mapped into a multiple of itself, \begin{equation} \eta_{\mu\nu}\rightarrow \Omega^2(x) \eta_{\mu\nu} \end{equation} This in turn gives the clue as to how to generalize this property to curved spacetimes as Weyl rescalings of the metric tensor, which we shall do in a moment. Even when a theory is classically scale invariant, it fails to be so when quantum corrections are considered. This is because the theory has to be regularized, and this introduces a mass scale, say $\mu$ as in dimensional regularization. Even after renormalization has gotten rid of all infinities, the coupling constants remember the scale that has been introduced. This is the idea of the {\em beta function}. Somewhat symbolically, under a scale transformation, \begin{equation} \delta {\cal L}=\sum_i \beta(g_i){\partial {\cal L}\over \partial g_i} \end{equation} where \begin{equation} \beta(g_i)\equiv \mu\,{\partial g_i\over \partial \mu} \end{equation} We shall see later on that {\em in addition} to this scale violation by quantum effects, there are other violations once gravitational fields are present; those occur because those background fields are not scale invariant; there is always a scale associated with them. \par Let us consider a generic scalar field theory. Nonzero spin does not substantially change the analysis, so that our results will be quite generic.
\begin{equation} S=\int d^4 x~\left({1\over 2}\partial_\mu\phi\partial^\mu\phi-{m^2\over 2}\phi^2-{g\over 4!}\phi^4+g_1 \phi^6+\ldots\right) \end{equation} Let us perform a scale transformation \begin{eqnarray} &&x\equiv \lambda x^\prime\nonumber\\ &&\phi=\lambda^{-1}\phi^\prime \end{eqnarray} In terms of the new fields \begin{equation} S=\int d^4 x^\prime~ ~\left({1\over 2}\partial_{\mu^\prime}\phi^\prime\partial^{\mu^\prime}\phi^\prime-{m^2 \lambda^2\over 2}\left(\phi^\prime\right)^2-{g \over 4!}\left(\phi^\prime\right)^4+g_1 \lambda^{-2} \left(\phi^\prime\right)^6+\ldots\right) \end{equation} Consider the theory in the vicinity of the Gaussian fixed point, which corresponds to a free scalar field, that is \begin{equation} S=\int d^4 x^\prime~ ~\left({1\over 2}\partial_{\mu^\prime}\phi^\prime\partial^{\mu^\prime}\phi^\prime\right) \end{equation} When we run towards the infrared ($\lambda\rightarrow \infty$) we see that \begin{eqnarray} &&m^\prime\rightarrow\infty\nonumber\\ &&g_1\rightarrow 0\nonumber\\ &&g\rightarrow g \end{eqnarray} Operators like $m^2 \phi^2$ of dimension less than four are called {\em relevant}. Operators like $g_1\phi^6$ of scaling dimension bigger than four are called {\em irrelevant}, and operators like $g \phi^4$ of scaling dimension exactly equal to four are called {\em marginal}. \par Any physical observable should be independent of the scale $\mu$, which has been introduced as an intermediate step in the regularization and is thus completely arbitrary.
Observables then obey \begin{eqnarray} &&0=\left.{d\over d\mu}S\left[p_i,g_0,m_0\right]\right|_{g_0,m_0}=\left.{d\over d\mu}S\left[p_i,g_R,m_R,\mu\right]\right|_{g_0,m_0}=\nonumber\\ &&=\bigg[{\partial\over\partial ~\text{log}~\mu}+\left.\beta(g_R,m_R)~{\partial\over\partial g_R}\right|_{\mu,m_R}-\nonumber\\ &&-\left.\gamma_m(g_R,m_R){\partial\over \partial\text{log}~m_R}\right|_{g_R,\mu}\bigg]S\left[p_i,g_R,m_R,\mu\right]\nonumber\\ \end{eqnarray} This is dubbed the {\em renormalization group equation} (RGE). We have defined \begin{eqnarray} &&\beta(g_R,m_R)\equiv \left.\mu{\partial\over \partial\mu}g_R(\mu)\right|_{g_0,m_0}\nonumber\\ &&\gamma_m(g_R,m_R)\equiv \left.-{\mu\over m_R}{\partial\over\partial \mu}~m_R(\mu)\right|_{g_0,m_0} \end{eqnarray} Since the function $S\left[p_i,g_R,m_R,\mu\right]$ is analytic at $n=4$, it is natural to expect that both functions $\beta$ and $\gamma_m$ are analytic as well. In order to compute these universal functions, remember that our renormalization counterterms are defined as \begin{equation} m_0=Z_m^{1\over 2}~m_R \end{equation} and \begin{equation} g_0=Z_g~ g_R~ \mu^{\epsilon\over 2} \end{equation} where $\epsilon=4-n$; then \begin{eqnarray} &&\beta(g_R,m_R)\equiv \left.g_0 \mu{\partial\over \partial\mu}{1\over Z_g \mu^{\epsilon\over 2}} \right|_{g_0,m_0}\nonumber\\ &&\gamma_m(g_R,m_R)\equiv \left.-{m_0\over m_R}\mu{\partial\over\partial \mu}~Z_m^{-{1\over 2}}\right|_{g_0,m_0} \end{eqnarray} All this is much simpler in a mass independent renormalization scheme, where the renormalization constants are independent of $m_R$ and $\mu$. In fact {\em minimal subtraction schemes} MS (or $\overline{MS}$) are such schemes.
\par It is plain that \begin{equation} \mu~{\partial \over \partial \mu}~g_0=0=\mu^{\epsilon\over 2}\left({\epsilon\over 2} Z_g g_R+\mu~{\partial\over \partial \mu }\left(Z_g g_R\right)\right) \end{equation} now \begin{equation} g_R~ Z_g=g_R+\sum_{p=1}^\infty a_p(g_R) \left({2\over \epsilon}\right)^p \end{equation} let us make the Laurent-type ansatz (we shall see later that it is actually necessary) \begin{equation} \beta(g_R)\equiv \beta_0(g_R)+\beta_1(g_R)\epsilon+\ldots \end{equation} we get \begin{eqnarray} &&{\epsilon\over 2}\left(g_R+a_1(g_R) {2\over \epsilon}+\ldots\right)+\beta_0(g_R)+\beta_1(g_R)\left(\epsilon\right)+\ldots+\nonumber\\ &&+ {da_1(g_R)\over d g_R}\left(\beta_0(g_R)+\beta_1(g_R)\epsilon+\ldots\right){2\over \epsilon}+\ldots=0 \end{eqnarray} terms of ${\cal O}(\epsilon^0)$ (which are now seen to be necessary) yield \begin{equation} a_1(g_R)+\beta_0(g_R)+2\beta_1(g_R) {da_1(g_R)\over d g_R}=0\end{equation} and the terms of ${\cal O}(\epsilon^1)$ \begin{equation} {g_R\over 2}+\beta_1(g_R)=0 \end{equation} so that $\beta_1(g_R)=-{g_R\over 2}$; the terms involving $\beta_0\,{da_1\over dg_R}$ appear instead at order $\epsilon^{-1}$, together with $a_2$. There are recursion relations worked out by 't Hooft \cite{tHooftS} to compute all $a_p, p>1$ from the knowledge of $a_1$.
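To see this machinery at work: the ${\cal O}(\epsilon)$ terms fix $\beta_1=-g_R/2$, and the ${\cal O}(\epsilon^0)$ relation then determines $\beta_0$ from $a_1$ alone. Feeding in the standard one-loop residue of $\phi^4_4$, $a_1(\lambda)=3\lambda^2/(16\pi^2)$ (quoted here as an outside input, not derived in these notes), reproduces the beta function quoted in the next paragraph. A sympy sketch:

```python
import sympy as sp

lam, b0 = sp.symbols('lambda beta0', positive=True)

a1 = 3*lam**2/(16*sp.pi**2)   # one-loop 1/eps residue of phi^4 in n=4 (external input)
beta1 = -lam/2                # fixed by the O(eps) terms in minimal subtraction

# O(eps^0) relation: a1 + beta0 + 2*beta1*da1/dlam = 0
beta0 = sp.solve(a1 + b0 + 2*beta1*sp.diff(a1, lam), b0)[0]

assert sp.simplify(beta0 - 3*lam**2/(16*sp.pi**2)) == 0
print(beta0)   # -> 3*lambda**2/(16*pi**2)
```

The pattern $\beta_0=g_R\,a_1'(g_R)-a_1(g_R)$ holds for any theory once its $a_1$ is known, which is the content of the 't Hooft recursion at lowest order.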
\par For example, in the theory $\phi^4_4$ the result is \begin{equation} \beta=3~{\lambda^2\over (4\pi)^2} \end{equation} whereas for the (also renormalizable) six-dimensional theory $\phi^3_6$ \begin{equation} \beta=-3~{\lambda^3\over 4 (4\pi)^3} \end{equation} in QED \begin{equation} \beta={e^3\over 12\pi^2} \end{equation} and for a nonabelian $SU(N)$ gauge theory with $n_f$ fermion flavors in the fundamental representation, \begin{equation} \beta=-{g^3\over (4\pi)^2}~\left({11\over 3} N-{4\over 3} n_f T(F)\right) \end{equation} Let us take a generic beta function of the form \begin{equation} \beta\equiv b \lambda^3 \end{equation} which, together with the definition \begin{equation} \beta=\frac{d\lambda}{d \text{log}~\mu}\end{equation} means that \begin{equation} {d\lambda\over \lambda^3}= b~d~\text{log}~\mu \end{equation} integrating with the boundary condition that \begin{equation} \lambda=\lambda_0 \end{equation} when \begin{equation} \mu=\mu_0 \end{equation} yields the dependence of the coupling constant on the RG scale, $\mu$ \begin{equation} \lambda^2={\lambda^2_0\over 1-2b \lambda_0^2~\text{log}~{\mu\over\mu_0}} \end{equation} When $b$ is positive (like in $\phi^4_4$ or QED), there is a {\em Landau pole} at \begin{equation} \Lambda\equiv\mu=\mu_0~e^{\frac{1}{2b~\lambda_0^2}} \end{equation} Those theories are {\em infrared safe}, but they do not enjoy a consistent UV limit. \par When $b<0$ (this is precisely what happens for $\phi^3_6$ and also for ordinary gauge theories) there is a pole at \begin{equation} \Lambda\equiv\mu= \mu_0~e^{-{1\over 2\left|b\right|~\lambda_0^2}} \end{equation} The paradigm of these theories is QCD. They are {\em asymptotically free} but {\em infrared slave}. The Landau pole is now located in the infrared region.
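The closed-form running and the location of the pole are easy to check numerically, integrating $d\lambda/d\log\mu=b\lambda^3$ step by step and comparing with the solution above. The numerical values below are purely illustrative, not tied to any particular theory:

```python
import math

b, lam0 = 0.05, 0.8      # illustrative values; positive b puts the Landau pole in the UV

def lam_exact(t):
    # lambda^2 = lambda0^2 / (1 - 2 b lambda0^2 t),  with t = log(mu/mu0)
    return lam0 / math.sqrt(1 - 2*b*lam0**2*t)

def lam_numeric(t, steps=4000):
    # integrate d(lambda)/dt = b lambda^3 with classical Runge-Kutta
    lam, h = lam0, t/steps
    f = lambda l: b*l**3
    for _ in range(steps):
        k1 = f(lam); k2 = f(lam + h*k1/2)
        k3 = f(lam + h*k2/2); k4 = f(lam + h*k3)
        lam += h*(k1 + 2*k2 + 2*k3 + k4)/6
    return lam

assert abs(lam_numeric(3.0) - lam_exact(3.0)) < 1e-6
t_pole = 1/(2*b*lam0**2)                 # log(Lambda/mu0) = 1/(2 b lambda0^2)
print(t_pole, lam_exact(0.99*t_pole))    # the coupling grows without bound near the pole
```

Flipping the sign of $b$ reproduces the asymptotically free situation, with the pole migrating to $t<0$, i.e., to the infrared.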
This scale, also denoted by $\Lambda$, is obviously renormalization group invariant; experimentally its value is \begin{equation} \Lambda\sim 217~\text{MeV} \end{equation} and it signals the scale at which QCD starts being strongly coupled. \par Green functions also obey somewhat different renormalization group equations, because they are multiplicatively renormalized. The starting point is that \begin{equation} \mu~{\partial \over \partial \mu}~ \Gamma^0=0 \end{equation} then \begin{eqnarray} &&\bigg\{\mu~\left.{\partial\over \partial\mu}\right|_{g_R,m_R}+\left.\beta(g_R){\partial\over \partial g_R}\right|_{\mu,m_R}-\gamma_m(g_R)\left.{\partial\over \partial m_R}\right|_{\mu,g_R }-\nonumber\\ &&-n \gamma_\phi(g_R)\bigg\}~\Gamma_R\left(p_i, g_R,m_R,\mu\right)=0\nonumber\\ \end{eqnarray} where \begin{equation} \gamma_m(g_R)=-\frac{\mu}{m_R}\frac{\partial m_R}{\partial\mu}\end{equation} and we have defined the {\em anomalous dimension} \begin{equation} \gamma_\phi(g_R)\equiv {1\over 2}~\mu{\partial \over \partial \mu} \text{log}~Z_\phi={1\over 2}~\beta(g_R)~{\partial \over \partial g_R} \text{log}~Z_\phi \end{equation} for example, for the theory $\phi^3_6$ \begin{equation} \gamma_\phi(g)={1\over 12}{\lambda^3\over (4\pi)^3}+{13\over 432}~\left({\lambda^2\over (4\pi)^3} \right)^2 \end{equation} The RG equations for 1PI functions in gauge theories are best written by first defining the operator \begin{equation} {\cal D}\equiv \mu\frac{\partial}{\partial \mu}+\beta(g_R)\frac{\partial}{\partial g_R}-\gamma_m(g_R)\frac{\partial}{\partial m_R}+\delta(g_R,\lambda_R)\frac{\partial}{\partial \alpha_R}-n_A\gamma_A-n_f \gamma_\psi -n_c \gamma_c \end{equation} the 1PI equation itself reads \begin{equation} {\cal D}~ \Gamma_{R,n}(g_R,m_R,\zeta_R)=0 \end{equation} Here $n_A$, $n_f$ and $n_c$ denote the number of external gauge fields, fermions and ghosts, with corresponding anomalous dimensions $\gamma_A$, $\gamma_\psi$ and $\gamma_c$, where the generic definition of the {\em anomalous dimension} reads
\begin{equation} \gamma\equiv \frac{1}{2}\mu\frac{\partial}{\partial \mu}\log\,Z \end{equation} These objects are in general gauge dependent. \par We have already mentioned the fact that when a nontrivial gravitational field is present, the generalization of the conformal group $SO(4,2)$ in flat space is given by Weyl rescalings of the spacetime metric \begin{equation} g_{\mu\nu}\rightarrow \Omega^2(x) g_{\mu\nu} \end{equation} when $\Omega$ is constant, a given operator $\Delta$ scales under Weyl transformations as \begin{equation} \Delta\rightarrow \Omega^{-\lambda} \Delta \end{equation} where $\lambda$ is the so called {\em conformal weight} of the operator in question. Recalling that the energy momentum tensor is defined as \begin{equation} \delta W={1\over 2} \int d(vol) T^{\mu\nu}\delta g_{\mu\nu} \end{equation} under a Weyl transformation $\delta g_{\mu\nu}=2 \Omega \,g_{\mu\nu}\delta \Omega$ the change in the effective action is proportional to the trace of the energy momentum tensor: \begin{equation} \delta_\Omega W={1\over 2} \int d(vol)2 \Omega \, T\delta \Omega \end{equation} This means that if the action is Weyl invariant, the trace of the energy momentum tensor vanishes.
Let us now compute in an explicit way the change in the effective action, following \cite{Hawking}. \par The zeta function associated to $\Delta$ transforms under such a global rescaling as \begin{equation} \zeta(s)\rightarrow \Omega^{-\lambda s} \zeta(s) \end{equation} then \begin{equation} \zeta^{'}(s)\rightarrow -\lambda\,\log\Omega\;\Omega^{-\lambda s} \zeta(s)+\Omega^{-\lambda s} \zeta^{'}(s)\end{equation} which, since $\log\,\det\,\Delta=-\zeta^{'}(0)$, conveys the fact that \begin{equation} \log\,\det\, \Delta \rightarrow \lambda\, \zeta(0)\,\log\Omega+\log\,\det\,\Delta \end{equation} At the level of the effective action, an infinitesimal rescaling $\Omega=1+\delta\Omega$ then produces \begin{equation} \delta W={1\over 2}\, \delta \log\,\det\,\Delta={1\over 2}\lambda\, \zeta(0)\,\delta \Omega \end{equation} This can be computed from the heat kernel through \begin{equation} \zeta(0)=\lim_{s\rightarrow 0} \frac{1}{\Gamma(s)}\,\int_0^\infty d\tau\, \tau^{s-1} K(\tau) \end{equation} This result means that the {\em conformal} (or {\em trace}) anomaly is proportional to the divergent part of the effective action when computed in dimensional regularization. In order to calculate the one-loop conformal anomaly it is enough to determine the corresponding counterterm. \newpage \section{Vacuum energy} It is well known that when gravitational effects are neglected, the value of the energy is defined only up to an additive constant. The zero point of energy is arbitrary, and can be selected at will. When gravitation is taken into account this is not so, and vacuum energy gravitates (owing to the $\sqrt{|g|}$ in the volume element). The corresponding operator is \begin{equation} {\cal O}\equiv \int \sqrt{|g|} d^n x\,V_0 \end{equation} \par Even in flat space, it is possible to compute the difference of energies between two different vacua. The simplest instance occurs when this difference between vacua is due to the presence of boundary conditions.
In the Casimir effect, those boundary conditions are due to the presence of conducting plates, where the electromagnetic field should vanish. \par In the few instances where the vacuum energy can be computed explicitly \cite{Parker}, the zeta function proves itself useful. Consider, as a vanilla example, a free scalar field with one dimension compactified on a circle of radius $R$, that is, obeying the periodicity condition \begin{equation} \phi(x,y)=\phi(x,y+2\pi R) \end{equation} the heat kernel can be expanded as \begin{equation} K(\tau, x,x',y,y')=\sum_{p=-\infty}^{\infty}\frac{1}{2\pi R}K_p(\tau,x,x')e^{\frac{i}{R}(y-y')p} \end{equation} The heat equation \begin{equation} \frac{\partial}{\partial\tau}K(\tau,x,x',y,y')=-\left(\Box^x_{n}+m^2\right)K(\tau,x,x',y,y') \end{equation} then implies that each Fourier component satisfies \begin{equation} \frac{\partial}{\partial\tau}K_p(\tau,x,x')=-\left(\Box^x_{n-1}+m^2+\frac{p^2}{R^2}\right)K_p(\tau,x,x') \end{equation} whose solution is \begin{equation} K_p(\tau,x,x')=(4\pi\tau)^{-(n-1)/2}e^{-\frac{(x-x')^2}{4\tau}-\left(m^2+p^2/R^2\right)\tau} \end{equation} The effective action is easily computed \begin{eqnarray} W&&=-\hslash\int\frac{d\tau}{\tau}K(\tau, x,x,y,y)=\nonumber\\ &&=-\hslash\int\frac{d\tau}{\tau}\sum_{p=-\infty}^{\infty}\frac{1}{2\pi R}(4\pi\tau)^{-(n-1)/2}e^{-\left(m^2+p^2/R^2\right)\tau}=\nonumber\\ &&=-\frac{\hslash}{2\pi R}(4\pi)^{-(n-1)/2}\sum_{p=-\infty}^{\infty}\left(m^2+\frac{p^2}{R^2}\right)^{(n-1)/2}\int d\tau \tau^{-1-(n-1)/2}e^{-\tau}=\nonumber\\ &&=-\frac{\hslash}{2\pi R}(4\pi)^{-(n-1)/2}\Gamma\left(\frac{1-n}{2}\right)\sum_{p=-\infty}^{\infty}\left(m^2+\frac{p^2}{R^2}\right)^{(n-1)/2}\end{eqnarray} For a massless scalar \begin{eqnarray} W&&=-\frac{\hslash}{2\pi R^n}(4\pi)^{-(n-1)/2}\Gamma\left(\frac{1-n}{2}\right)\sum_{p=-\infty}^{\infty}|p|^{n-1}=\nonumber\\ &&=-\frac{\hslash}{2\pi R^n}(4\pi)^{-(n-1)/2}\Gamma\left(\frac{1-n}{2}\right)2\zeta(1-n) \end{eqnarray} which in $n=4$ dimensions reduces to \begin{eqnarray}
W&&=-\frac{\hslash}{720\pi^2 R^4} \end{eqnarray} Quite often it is expressed in terms of $L=2\pi R$ \begin{eqnarray} W&&=-\frac{\hslash\pi^2}{45 L^4} \end{eqnarray} \newpage \section{The DeWitt computation of gravitational determinants} When the gravitational interaction is physically relevant, things are much more complicated, \cite{DeWitt}. First of all, the space-time manifold is not flat, so that the flat space free solution has to be generalized. All computations should be covariant. It is precisely at this point that the world function comes in handy. On the other hand it is when dealing with this sort of problem that the real power of the heat kernel technique is visible. We shall keep denoting the {\em coincidence limit} of any bi-scalar function by \begin{equation} \left[W\right]\equiv \lim_{x\rightarrow x^\prime}W(x,x^\prime) \end{equation} There is a general rule, called {\em Synge's rule} for computing such limits. The rule as applied to the world function states that \begin{equation} \left[\nabla_{\alpha^\prime} \sigma_{\ldots}\right]=\nabla_{\alpha^\prime}\left[\sigma_{\ldots}\right]-\left[\nabla_\alpha \sigma_{\ldots}\right] \end{equation} where the dots indicate further derivations. Let us prove it. \begin{proof} Consider some bi-scalar (that is, a scalar both at $x$ and $x^\prime$) \begin{equation} \Omega_{A B^\prime}(x,x^\prime) \end{equation} where $A,B,\ldots$ are multi-indexes. Further consider a physical quantity $P^A (x)$ with the same multi-index structure as $A$; and another one $Q^{B^\prime} (x^\prime)$ with the same multi-index structure as $B^\prime$.
Both objects are parallel propagated \begin{equation} u^\alpha \nabla_\alpha P^A(x)= u^{\alpha^\prime}\nabla_{\alpha^\prime} Q^{B^\prime}(x^\prime)=0 \end{equation} the bi-scalar \begin{equation} H(x,x^\prime)\equiv \Omega_{A B^\prime}(x,x^\prime) P^A(x) Q^{B^\prime}(x^\prime) \end{equation} can be Taylor expanded in two different ways \begin{eqnarray} &&H(\lambda_1,\lambda_0)=H(\lambda_0,\lambda_0)+(\lambda_1-\lambda_0)\left.{\partial H\over \partial\lambda_1}\right|_{\lambda_1=\lambda_0}+\ldots=\nonumber\\ &&=H(\lambda_1,\lambda_1)-(\lambda_1-\lambda_0)\left.{\partial H\over \partial \lambda_0}\right|_{\lambda_0=\lambda_1}+\ldots \end{eqnarray} Therefore \begin{equation} H(\lambda_0,\lambda_0)\equiv \left[\Omega_{A B^\prime}\right] P^A Q^{B^\prime} \end{equation} obeys \begin{eqnarray} &&{d\over d \lambda_0} H(\lambda_0,\lambda_0)\equiv \lim_{\lambda_1\rightarrow \lambda_0}{H(\lambda_1,\lambda_1)-H(\lambda_0,\lambda_0)\over \lambda_1-\lambda_0}=\nonumber\\ &&=\left.{\partial H\over \partial \lambda_0}\right|_{\lambda_0=\lambda_1}+\left.{\partial H\over \partial \lambda_1}\right|_{\lambda_1=\lambda_0}=\nonumber\\ &&=u^{\alpha^\prime}\left[\nabla_{\alpha^\prime}\Omega_{A B^\prime}\right] P^A Q^{B^\prime}+ u^\mu \left[\nabla_\mu \Omega_{A B^\prime}\right] P^A Q^{B^\prime} \end{eqnarray} Finally \begin{equation} \nabla_{\alpha^\prime}\left[ \Omega_{A B^\prime}\right]=\left[\nabla_{\alpha^\prime}\Omega_{A B^\prime}\right]+\left[\nabla_\alpha \Omega_{A B^\prime}\right] \end{equation} which is the desired result. \end{proof} \subsection{Coincidence limits of covariant derivatives of the world function} A basic structural unit in the heat kernel technique is the world function $\sigma(x,x^\prime)$, which obeys \begin{equation} \label{basic}\sigma_\mu \sigma^\mu-2\sigma=0\end{equation} with \begin{equation} \sigma_\mu\equiv \partial_\mu \sigma \end{equation} Let us prove it.
\begin{proof} Start from the action of a free particle \begin{equation} S\equiv\int_{x,\tau}^{x^\prime,\tau^\prime}~d\tau~ {1\over 2} g_{\mu\nu} \dot{x}^\mu \dot{x}^\nu\equiv{\sigma(x,x^\prime)\over \tau^\prime-\tau} \end{equation} where the integral is taken over the geodesic $x^\mu=x^\mu(\tau)$ that goes from the {\em base point} $x^\prime$ at value $\tau^\prime$ of the parameter to the {\em field point} $x$ at value $\tau$ of the same parameter. This defines the square of the geodesic distance between the points $x^\prime$ and $x$. It is a scalar under independent Einstein (coordinate) transformations of the base and field points. This is exactly Synge's {\em world function}. He used the notation $\Omega$ for it, but nowadays the notation $\sigma$ is much more common. When the geodesic is timelike and parametrized with the proper time \begin{equation} \sigma(x,x^\prime)={(\tau-\tau^\prime)^2\over 2} \end{equation} \par The canonical momentum is given by \begin{equation} p_\mu\equiv \partial_\mu S= {\nabla_\mu\sigma \over \tau^\prime- \tau} \end{equation} The Hamilton-Jacobi equation for the free particle, \begin{equation} {\partial S\over \partial \tau^\prime} + H=-{\sigma\over \left(\tau^\prime-\tau\right)^2}+{1\over 2}{\sigma_\mu \sigma^\mu\over \left(\tau^\prime- \tau\right)^2}=0 \end{equation} leads to the basic equation obeyed by the world function \begin{equation} \sigma_\mu \sigma^\mu= 2 \sigma \end{equation} \end{proof} \par It is instructive to study a more pedestrian derivation. \begin{proof} Consider a variation of the world function \begin{equation} \delta \sigma\equiv \sigma(x+\delta x,x^\prime)-\sigma(x,x^\prime) \end{equation} where we rescale the parameters in such a way that $(\lambda_0,\lambda_1)$ label the ends of the new geodesic.
\par The variation can be computed in a standard way \begin{eqnarray} &&\delta \sigma=(\lambda_1-\lambda_0)\int_{\lambda_0}^{\lambda_1}d\lambda\left(g_{\mu\nu}\dot{z}^\mu\delta\dot{z}^\nu+{1\over 2}\partial_\lambda g_{\mu\nu}\dot{z}^\mu\dot{z}^\nu\delta z^\lambda\right)=\nonumber\\ &&=(\lambda_1-\lambda_0)\left. g_{\alpha\beta}\dot{z}^\alpha \delta z^\beta\right|_{\lambda_0}^{\lambda_1}-(\lambda_1-\lambda_0)\int_{\lambda_0}^{\lambda_1}\left(g_{\alpha\beta}\ddot{z}^\beta+\Gamma_{\alpha\beta\gamma}\dot{z}^\beta\dot{z}^\gamma\right)\delta z^\alpha d\lambda \nonumber\\ \end{eqnarray} Inserting the information that the line integral is taken over a geodesic, we learn that \begin{equation} \delta\sigma=(\lambda_1-\lambda_0) g_{\alpha\beta}u^\alpha\delta x^\beta \end{equation} This means that the derivative of the world function is proportional to the tangent vector \begin{equation} \nabla_\alpha \sigma\equiv \sigma_\alpha=(\lambda_1-\lambda_0) u_\alpha=+\sqrt{2 \sigma}~u_\alpha \end{equation} also \begin{equation} \nabla_{\alpha^\prime}\sigma\equiv \sigma_{\alpha^\prime}=-(\lambda_1-\lambda_0) u_{\alpha^\prime} \end{equation} and it is now obvious that \begin{equation} \sigma_\mu\sigma^\mu=\sigma_{\mu^\prime}\sigma^{\mu^\prime}=2 \sigma \end{equation} This implies that the equation of parallel transport of any quantity $T$ can be written as \begin{equation} u^\mu \nabla_\mu T=\sigma^\mu \nabla_\mu T=0 \end{equation} \end{proof} Let us proceed carefully to work out coincidence limits of covariant derivatives of the world function. It is plain that \begin{equation} \boxed{\left[\sigma\right]=\left[\sigma_\mu\right]=0} \end{equation} (The second equation is true because there is no preferred vector in the manifold.)
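In flat space, where all curvature terms drop out, these statements can be checked mechanically with $\sigma(x,y)={1\over 2}\delta_{\mu\nu}(x-y)^\mu(x-y)^\nu$ (Euclidean signature for simplicity). A sympy sketch verifying \eqref{basic}, the boxed coincidence limits, and the flat-space second-derivative limit $[\sigma_{\mu\nu}]=\delta_{\mu\nu}$:

```python
import sympy as sp

n = 3
xs = sp.symbols('x0:3', real=True)
ys = sp.symbols('y0:3', real=True)

# flat-space world function: half the squared distance
sigma = sum((xi - yi)**2 for xi, yi in zip(xs, ys)) / 2

# basic equation: sigma_mu sigma^mu = 2 sigma
grad = [sp.diff(sigma, xi) for xi in xs]
assert sp.simplify(sum(g**2 for g in grad) - 2*sigma) == 0

coinc = dict(zip(xs, ys))                      # coincidence limit x -> y
assert sigma.subs(coinc) == 0                  # [sigma] = 0
assert all(g.subs(coinc) == 0 for g in grad)   # [sigma_mu] = 0
# [sigma_{mu nu}] = g_{mu nu}, here the flat metric delta_{mu nu}
hess = sp.Matrix(n, n, lambda i, j: sp.diff(sigma, xs[i], xs[j]))
assert hess.subs(coinc) == sp.eye(n)
# [sigma_{mu nu lambda}] = 0: third derivatives of a quadratic vanish
assert all(sp.diff(sigma, xs[i], xs[j], xs[k]).subs(coinc) == 0
           for i in range(n) for j in range(n) for k in range(n))
print("flat-space coincidence limits verified")
```

In a curved background the second and higher limits pick up the curvature corrections derived next, which a symbolic flat-space check is of course blind to.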
Differentiating equation \eqref{basic} covariantly once, \begin{equation} \sigma_\mu \sigma^{\mu\alpha}=\sigma^{\alpha} \end{equation} Differentiating again, \begin{equation} \sigma_{\mu}^{~\beta}\sigma^{\mu\alpha}+\sigma_\mu\sigma^{\mu\alpha\beta}=\sigma^{\alpha\beta} \end{equation} obviously \begin{equation} \Box \sigma =\sigma_{\mu\nu}\sigma^{\mu\nu}+\sigma_\mu\Box \sigma^\mu \end{equation} It follows that \begin{equation} \boxed{\left[\sigma_{\mu\nu}\right]=g_{\mu\nu}} \end{equation} as well as its trace \begin{equation} \boxed{\left[\Box \sigma\right]=n} \end{equation} Differentiating \eqref{basic} a third time \begin{eqnarray} &&\sigma_{\mu}^{~\beta\gamma}\sigma^{\mu\alpha}+\sigma_{\mu}^{~\beta}\sigma^{\mu\alpha\gamma}+\sigma_{\mu}^{~\gamma}\sigma^{\mu\alpha\beta}+\sigma_\mu\sigma^{\mu\alpha\beta\gamma}=\sigma^{\alpha\beta\gamma} \end{eqnarray} At the coincidence limit \begin{equation} \label{c1}[\sigma_{\beta\alpha\gamma}]+[\sigma_{\gamma\alpha\beta}]=0\end{equation} Ricci's formula for the commutation of covariant derivatives implies that \begin{equation} \sigma_{\alpha\beta\gamma}-\sigma_{\alpha\gamma\beta}=-\sigma_\mu R^\mu_{~\alpha\gamma\beta} \end{equation} (Our conventions are different from \cite{Poisson}).
It is easy to prove that the coincidence limit of three derivatives vanishes \begin{equation} \boxed{\left[\sigma_{\mu\nu\lambda}\right]=0} \end{equation} The fourth derivative reads \begin{eqnarray} &&\sigma_{\mu}^{~\beta\gamma\delta}\sigma^{\mu\alpha}+\sigma_{\mu}^{~\beta\gamma}\sigma^{\mu\alpha\delta}+\sigma_{\mu}^{~\beta\delta}\sigma^{\mu\alpha\gamma}+\sigma_{\mu}^{~\beta}\sigma^{\mu\alpha\gamma\delta}+\nonumber\\ &&+\sigma_{\mu}^{~\gamma\delta}\sigma^{\mu\alpha\beta}+\sigma_{\mu}^{~\gamma}\sigma^{\mu\alpha\beta\delta}+\sigma_{\mu}^{~\delta}\sigma^{\mu\alpha\beta\gamma}+\sigma_\mu\sigma^{\mu\alpha\beta\gamma\delta}=\sigma^{\alpha\beta\gamma\delta} \end{eqnarray} At coincidence \begin{equation} \left[\sigma_{\mu\nu\alpha\beta}\right]+\left[\sigma_{\beta\nu\alpha\mu}\right]+\left[\sigma_{\alpha\nu\beta\mu}\right]=0 \end{equation} Commuting covariant derivatives \begin{equation} \sigma_{\mu\nu\alpha\beta}=\sigma_{\nu\mu\alpha\beta} \end{equation} and \begin{equation} \sigma_{\alpha\beta\mu\nu}-\sigma_{\alpha\mu\beta\nu}=-R^{\tau}_{~\alpha\mu\beta}\sigma_{\tau\nu}-\sigma_\tau\nabla_\nu R^\tau_{~\alpha\mu\beta}\end{equation} Then \begin{equation} [\sigma_{\alpha\beta\mu\nu}]-[\sigma_{\alpha\mu\beta\nu}]=-R_{\nu\alpha\mu\beta}\end{equation} In the same way \begin{equation} \left[\sigma_{\alpha\beta\mu\nu}\right]-\left[\sigma_{\alpha\beta\nu\mu}\right]=-R_{\beta\alpha\nu\mu}-R_{\alpha\beta\nu\mu}=0 \end{equation} Summarizing \begin{equation} [\sigma_{\alpha\beta\mu\nu}]-[\sigma_{\mu\nu\alpha\beta}]=0\end{equation} Combining these relations, \begin{equation}\label{4s} \boxed{\left[\sigma_{\alpha\beta\mu\nu}\right]=-{1\over 3}\left(R_{\alpha\mu\beta\nu}+R_{\alpha\nu\beta\mu}\right)} \end{equation} and in particular \begin{equation} \boxed{\left[\nabla_{\alpha}\nabla_{\beta} \Box\sigma\right]=-{2\over 3}~R_{\alpha\beta}} \end{equation} The expression with five derivatives of \eqref{basic} is also needed \begin{eqnarray} 
&&\sigma_{\mu}^{~\beta\gamma\delta\epsilon}\sigma^{\mu\alpha}+\sigma_{\mu}^{~\beta\gamma\delta}\sigma^{\mu\alpha\epsilon}+\sigma_{\mu}^{~\beta\gamma\epsilon}\sigma^{\mu\alpha\delta}+\sigma_{\mu}^{~\beta\gamma}\sigma^{\mu\alpha\delta\epsilon}+\sigma_{\mu}^{~\beta\delta\epsilon}\sigma^{\mu\alpha\gamma}+\nonumber\\ &&+\sigma_{\mu}^{~\beta\delta}\sigma^{\mu\alpha\gamma\epsilon}+\sigma_{\mu}^{~\beta\epsilon}\sigma^{\mu\alpha\gamma\delta}+\sigma_{\mu}^{~\beta}\sigma^{\mu\alpha\gamma\delta\epsilon}+\sigma_{\mu}^{~\gamma\delta\epsilon}\sigma^{\mu\alpha\beta}+\sigma_{\mu}^{~\gamma\delta}\sigma^{\mu\alpha\beta\epsilon}+\nonumber\\ &&+\sigma_{\mu}^{~\gamma\epsilon}\sigma^{\mu\alpha\beta\delta}+\sigma_{\mu}^{~\gamma}\sigma^{\mu\alpha\beta\delta\epsilon}+\sigma_{\mu}^{~\delta\epsilon}\sigma^{\mu\alpha\beta\gamma}+\sigma_{\mu}^{~\delta}\sigma^{\mu\alpha\beta\gamma\epsilon}+\sigma_{\mu}^{~\epsilon}\sigma^{\mu\alpha\beta\gamma\delta}+\nonumber\\ &&+\sigma_\mu\sigma^{\mu\alpha\beta\gamma\delta\epsilon}=\sigma^{\alpha\beta\gamma\delta\epsilon} \end{eqnarray} At the coincidence limit \begin{equation} \left[\sigma_{\beta\alpha\gamma\delta\epsilon}\right]+\left[\sigma_{\gamma\alpha\beta\delta\epsilon}\right]+\left[\sigma_{\delta\alpha\beta\gamma\epsilon}\right]+\left[\sigma_{\epsilon\alpha\beta\gamma\delta}\right]=0 \end{equation} Using again Ricci's identity \begin{eqnarray} &&\left[\sigma_{\beta\alpha\gamma\delta\epsilon}\right]=\left[\sigma_{\alpha\beta\gamma\delta\epsilon}\right] \nonumber\\ &&\left[\sigma_{\gamma\alpha\beta\delta\epsilon}\right]=\left[\sigma_{\alpha\beta\gamma\delta\epsilon}\right]-\nabla_\delta R_{\gamma\beta\alpha\epsilon}-\nabla_\epsilon R_{\gamma\beta\alpha\delta} \nonumber\\ &&\left[\sigma_{\delta\alpha\beta\gamma\epsilon}\right]=\left[\sigma_{\alpha\beta\gamma\delta\epsilon}\right]-\nabla_\gamma R_{\delta\beta\alpha\epsilon}-\nabla_\epsilon R_{\delta\beta\alpha\gamma} \nonumber\\ 
&&\left[\sigma_{\epsilon\alpha\beta\gamma\delta}\right]=\left[\sigma_{\alpha\beta\gamma\delta\epsilon}\right]-\nabla_\gamma R_{\epsilon\beta\alpha\delta}-\nabla_\delta R_{\epsilon\beta\alpha\gamma} \end{eqnarray} Putting it all together, \begin{eqnarray} &&4\left[\sigma_{\alpha\beta\gamma\delta\epsilon}\right]-\nabla_\delta R_{\gamma\beta\alpha\epsilon}-\nabla_\epsilon R_{\gamma\beta\alpha\delta}-\nabla_\gamma R_{\delta\beta\alpha\epsilon}-\nabla_\epsilon R_{\delta\beta\alpha\gamma}-\nonumber\\ &&-\nabla_\gamma R_{\epsilon\beta\alpha\delta}-\nabla_\delta R_{\epsilon\beta\alpha\gamma}=0\nonumber\\ \end{eqnarray} Contracting $\alpha=\beta$ \begin{equation} \boxed{\left[\sigma^\alpha_{~\alpha\gamma\delta\epsilon}\right]=-{1\over 2}\left(\nabla_\delta R_{\gamma\epsilon}+\nabla_\epsilon R_{\gamma\delta}+\nabla_\gamma R_{\epsilon\delta}\right)} \end{equation} and using Bianchi's identity $\nabla_\alpha R^{\alpha\beta}=\frac{1}{2}\nabla^\beta R$ \begin{equation} \boxed{\left[\sigma^{\alpha~\beta}_{~\alpha~\beta\lambda}\right]=-\nabla_\lambda R} \end{equation} Finally, the expression with six derivatives of \eqref{basic} reads \begin{eqnarray} &&\sigma_{\mu}^{~\beta\gamma\delta\epsilon\sigma}\sigma^{\mu\alpha}+\sigma_{\mu}^{~\beta\gamma\delta\epsilon}\sigma^{\mu\alpha\sigma}+\sigma_{\mu}^{~\beta\gamma\delta\sigma}\sigma^{\mu\alpha\epsilon}+\sigma_{\mu}^{~\beta\gamma\delta}\sigma^{\mu\alpha\epsilon\sigma}+\sigma_{\mu}^{~\beta\gamma\epsilon\sigma}\sigma^{\mu\alpha\delta}+\nonumber\\ &&+\sigma_{\mu}^{~\beta\gamma\epsilon}\sigma^{\mu\alpha\delta\sigma}+\sigma_{\mu}^{~\beta\gamma\sigma}\sigma^{\mu\alpha\delta\epsilon}+\sigma_{\mu}^{~\beta\gamma}\sigma^{\mu\alpha\delta\epsilon\sigma}+\sigma_{\mu}^{~\beta\delta\epsilon\sigma}\sigma^{\mu\alpha\gamma}+\sigma_{\mu}^{~\beta\delta\epsilon}\sigma^{\mu\alpha\gamma\sigma}+\nonumber\\ 
&&+\sigma_{\mu}^{~\beta\delta\sigma}\sigma^{\mu\alpha\gamma\epsilon}+\sigma_{\mu}^{~\beta\delta}\sigma^{\mu\alpha\gamma\epsilon\sigma}+\sigma_{\mu}^{~\beta\epsilon\sigma}\sigma^{\mu\alpha\gamma\delta}+\sigma_{\mu}^{~\beta\epsilon}\sigma^{\mu\alpha\gamma\delta\sigma}+\sigma_{\mu}^{~\beta\sigma}\sigma^{\mu\alpha\gamma\delta\epsilon}+\nonumber\\ &&+\sigma_{\mu}^{~\beta}\sigma^{\mu\alpha\gamma\delta\epsilon\sigma}+\sigma_{\mu}^{~\gamma\delta\epsilon\sigma}\sigma^{\mu\alpha\beta}+\sigma_{\mu}^{~\gamma\delta\epsilon}\sigma^{\mu\alpha\beta\sigma}+\sigma_{\mu}^{~\gamma\delta\sigma}\sigma^{\mu\alpha\beta\epsilon}+\sigma_{\mu}^{~\gamma\delta}\sigma^{\mu\alpha\beta\epsilon\sigma}+\nonumber\\ &&+\sigma_{\mu}^{~\gamma\epsilon\sigma}\sigma^{\mu\alpha\beta\delta}+\sigma_{\mu}^{~\gamma\epsilon}\sigma^{\mu\alpha\beta\delta\sigma}+\sigma_{\mu}^{~\gamma\sigma}\sigma^{\mu\alpha\beta\delta\epsilon}+\sigma_{\mu}^{~\gamma}\sigma^{\mu\alpha\beta\delta\epsilon\sigma}+\sigma_{\mu}^{~\delta\epsilon\sigma}\sigma^{\mu\alpha\beta\gamma}+\nonumber\\ &&+\sigma_{\mu}^{~\delta\epsilon}\sigma^{\mu\alpha\beta\gamma\sigma}+\sigma_{\mu}^{~\delta\sigma}\sigma^{\mu\alpha\beta\gamma\epsilon}+\sigma_{\mu}^{~\delta}\sigma^{\mu\alpha\beta\gamma\epsilon\sigma}+\sigma_{\mu}^{~\epsilon\sigma}\sigma^{\mu\alpha\beta\gamma\delta}+\sigma_{\mu}^{~\epsilon}\sigma^{\mu\alpha\beta\gamma\delta\sigma}+\nonumber\\ &&+\sigma_{\mu}^{~\sigma}\sigma^{\mu\alpha\beta\gamma\delta\epsilon}+\sigma_\mu\sigma^{\mu\alpha\beta\gamma\delta\epsilon\sigma}=\sigma^{\alpha\beta\gamma\delta\epsilon\sigma} \end{eqnarray} yields at coincidence \begin{eqnarray} &&[\sigma_{\mu}^{~\beta\gamma\delta}\sigma^{\mu\alpha\epsilon\sigma}]+[\sigma_{\mu}^{~\beta\gamma\epsilon}\sigma^{\mu\alpha\delta\sigma}]+[\sigma_{\mu}^{~\beta\gamma\sigma}\sigma^{\mu\alpha\delta\epsilon}]+[\sigma_{\mu}^{~\beta\delta\epsilon}\sigma^{\mu\alpha\gamma\sigma}]+[\sigma_{\mu}^{~\beta\delta\sigma}\sigma^{\mu\alpha\gamma\epsilon}]+\nonumber\\ 
&&+[\sigma_{\mu}^{~\beta\epsilon\sigma}\sigma^{\mu\alpha\gamma\delta}]+[\sigma^{\beta\alpha\gamma\delta\epsilon\sigma}]+[\sigma_{\mu}^{~\gamma\delta\epsilon}\sigma^{\mu\alpha\beta\sigma}]+[\sigma_{\mu}^{~\gamma\delta\sigma}\sigma^{\mu\alpha\beta\epsilon}]+[\sigma_{\mu}^{~\gamma\epsilon\sigma}\sigma^{\mu\alpha\beta\delta}]+\nonumber\\ &&+[\sigma^{\gamma\alpha\beta\delta\epsilon\sigma}]+[\sigma_{\mu}^{~\delta\epsilon\sigma}\sigma^{\mu\alpha\beta\gamma}]+[\sigma^{\delta\alpha\beta\gamma\epsilon\sigma}]+[\sigma^{\epsilon\alpha\beta\gamma\delta\sigma}]+[\sigma^{\sigma\alpha\beta\gamma\delta\epsilon}]=0 \end{eqnarray} Using Ricci's identity once more, we are led to \begin{eqnarray} &&\left[\sigma^{\beta\alpha\gamma\delta\epsilon\sigma}\right]=\left[\sigma^{\alpha\beta\gamma\delta\epsilon\sigma}\right]\end{eqnarray} \begin{eqnarray} &&\left[\sigma^{\gamma\alpha\beta\delta\epsilon\sigma}\right]=\left[\sigma^{\alpha\beta\gamma\delta\epsilon\sigma}\right]+\nabla^\epsilon\nabla^\delta R^{\alpha\sigma\beta\gamma}+\nabla^\sigma\nabla^\delta R^{\alpha\epsilon\beta\gamma}+\nabla^\sigma\nabla^\epsilon R^{\alpha\delta\beta\gamma}+\nonumber\\ &&+R^{\alpha\lambda\beta\gamma}\left[\sigma_{\lambda}^{~\delta\epsilon\sigma}\right]\end{eqnarray} \begin{eqnarray} &&\left[\sigma^{\delta\alpha\beta\gamma\epsilon\sigma}\right]=\left[\sigma^{\alpha\beta\gamma\delta\epsilon\sigma}\right]-\nabla^\epsilon\nabla^\gamma R^{\delta\beta\alpha\sigma}-\nabla^\sigma\nabla^\gamma R^{\delta\beta\alpha\epsilon}-\nabla^\sigma\nabla^\epsilon R^{\delta\beta\alpha\gamma}-\nonumber\\ &&-R^{\delta\gamma\beta\lambda}\left[\sigma_{\lambda}^{~\alpha\epsilon\sigma}\right]-R^{\delta\gamma\alpha\lambda}\left[\sigma_{\lambda}^{~\beta\epsilon\sigma}\right]-R^{\delta\beta\alpha\lambda}\left[\sigma_{\lambda}^{~\gamma\epsilon\sigma}\right]\end{eqnarray} \begin{eqnarray} &&\left[\sigma^{\epsilon\alpha\beta\gamma\delta\sigma}\right]=\left[\sigma^{\alpha\beta\gamma\delta\epsilon\sigma}\right]+\nabla^\delta\nabla^\gamma 
R^{\alpha\sigma\beta\epsilon}+\nabla^\sigma\nabla^\gamma R^{\alpha\delta\beta\epsilon}+\nabla^\sigma\nabla^\delta R^{\alpha\gamma\beta\epsilon}+\nonumber\\ &&+R^{\gamma\lambda\delta\epsilon}\left[\sigma_{~~~\lambda}^{\alpha\beta~\sigma}\right]+R^{\beta\lambda\delta\epsilon}\left[\sigma_{\lambda}^{~\alpha\gamma\sigma}\right]+R^{\alpha\lambda\delta\epsilon}\left[\sigma_{\lambda}^{~\beta\gamma\sigma}\right]+R^{\beta\lambda\gamma\epsilon}\left[\sigma_{\lambda}^{~\alpha\delta\sigma}\right]+\nonumber\\ &&+R^{\alpha\lambda\gamma\epsilon}\left[\sigma_{\lambda}^{~\beta\delta\sigma}\right]+R^{\alpha\lambda\beta\epsilon}\left[\sigma_{\lambda}^{~\gamma\delta\sigma}\right]\end{eqnarray} \begin{eqnarray} &&\left[\sigma^{\sigma\alpha\beta\gamma\delta\epsilon}\right]=\left[\sigma^{\alpha\beta\gamma\delta\epsilon\sigma}\right]+\nabla^\delta\nabla^\gamma R^{\alpha\epsilon\beta\sigma}+\nabla^\epsilon\nabla^\gamma R^{\alpha\delta\beta\sigma}+\nabla^\epsilon\nabla^\delta R^{\alpha\gamma\beta\sigma}+\nonumber\\ &&+R^{\delta\lambda\epsilon\sigma}\left[\sigma_{~~~~\lambda}^{\alpha\beta\gamma}\right]+R^{\gamma\lambda\epsilon\sigma}\left[\sigma_{~~\lambda}^{\alpha\beta~\delta}\right]+R^{\beta\lambda\epsilon\sigma}\left[\sigma_{\lambda}^{~\alpha\gamma\delta}\right]+R^{\alpha\lambda\epsilon\sigma}\left[\sigma_{\lambda}^{~\beta\gamma\delta}\right]+\nonumber\\ &&+R^{\gamma\lambda\delta\sigma}\left[\sigma_{~~\lambda}^{\alpha\beta~\epsilon}\right]+R^{\beta\lambda\delta\sigma}\left[\sigma_{\lambda}^{~\alpha\gamma\epsilon}\right]+R^{\alpha\lambda\delta\sigma}\left[\sigma_{\lambda}^{~\beta\gamma\epsilon}\right]+R^{\beta\lambda\gamma\sigma}\left[\sigma_{\lambda}^{~\alpha\delta\epsilon}\right]+\nonumber\\ &&+R^{\alpha\lambda\gamma\sigma}\left[\sigma_{\lambda}^{~\beta\delta\epsilon}\right]+R^{\alpha\lambda\beta\sigma}\left[\sigma_{\lambda}^{~\gamma\delta\epsilon}\right] \end{eqnarray} Now, putting it all together, \begin{eqnarray}\label{6D} 
&&5\left[\sigma^{\alpha\beta\gamma\delta\epsilon\sigma}\right]+\nabla^\epsilon\nabla^\delta \left(R^{\alpha\sigma\beta\gamma}+R^{\alpha\gamma\beta\sigma}\right)+\nabla^\sigma\nabla^\delta\left( R^{\alpha\epsilon\beta\gamma}+R^{\alpha\gamma\beta\epsilon}\right)+\nonumber\\ &&+\nabla^\sigma\nabla^\epsilon\left( R^{\alpha\delta\beta\gamma}+R^{\alpha\gamma\beta\delta}\right)+\nabla^\delta\nabla^\gamma\left(R^{\alpha\epsilon\beta\sigma}+R^{\alpha\sigma\beta\epsilon}\right)+\nonumber\\ &&+\nabla^\sigma\nabla^\gamma \left(R^{\alpha\delta\beta\epsilon}+R^{\alpha\epsilon\beta\delta}\right)+\nabla^\epsilon\nabla^\gamma \left(R^{\alpha\delta\beta\sigma}+R^{\alpha\sigma\beta\delta}\right)+\nonumber\\ &&+R^{\alpha\lambda\beta\gamma}\left[\sigma_{\lambda}^{~\delta\epsilon\sigma}\right]+R^{\gamma\lambda\delta\epsilon}\left[\sigma_{~~~\lambda}^{\alpha\beta~\sigma}\right]+R^{\beta\lambda\delta\epsilon}\left[\sigma_{\lambda}^{~\alpha\gamma\sigma}\right]+R^{\alpha\lambda\delta\epsilon}\left[\sigma_{\lambda}^{~\beta\gamma\sigma}\right]+\nonumber\\ &&+R^{\beta\lambda\gamma\epsilon}\left[\sigma_{\lambda}^{~\alpha\delta\sigma}\right]+R^{\alpha\lambda\gamma\epsilon}\left[\sigma_{\lambda}^{~\beta\delta\sigma}\right]+R^{\alpha\lambda\beta\epsilon}\left[\sigma_{\lambda}^{~\gamma\delta\sigma}\right]+R^{\delta\lambda\epsilon\sigma}\left[\sigma_{~~~~\lambda}^{\alpha\beta\gamma}\right]+\nonumber\\ &&+R^{\gamma\lambda\epsilon\sigma}\left[\sigma_{~~\lambda}^{\alpha\beta~\delta}\right]+R^{\beta\lambda\epsilon\sigma}\left[\sigma_{\lambda}^{~\alpha\gamma\delta}\right]+R^{\alpha\lambda\epsilon\sigma}\left[\sigma_{\lambda}^{~\beta\gamma\delta}\right]+\nonumber\\ &&+R^{\gamma\lambda\delta\sigma}\left[\sigma_{~~\lambda}^{\alpha\beta~\epsilon}\right]+R^{\beta\lambda\delta\sigma}\left[\sigma_{\lambda}^{~\alpha\gamma\epsilon}\right]+R^{\alpha\lambda\delta\sigma}\left[\sigma_{\lambda}^{~\beta\gamma\epsilon}\right]+R^{\beta\lambda\gamma\sigma}\left[\sigma_{\lambda}^{~\alpha\delta\epsilon}\right]+\nonumber\\ 
&&+R^{\alpha\lambda\gamma\sigma}\left[\sigma_{\lambda}^{~\beta\delta\epsilon}\right]+R^{\alpha\lambda\beta\sigma}\left[\sigma_{\lambda}^{~\gamma\delta\epsilon}\right]-R^{\delta\gamma\beta\lambda}\left[\sigma_{\lambda}^{~\alpha\epsilon\sigma}\right]-R^{\delta\gamma\alpha\lambda}\left[\sigma_{\lambda}^{~\beta\epsilon\sigma}\right]-\nonumber\\ &&-R^{\delta\beta\alpha\lambda}\left[\sigma_{\lambda}^{~\gamma\epsilon\sigma}\right]+\left[\sigma_{\mu}^{~\beta\gamma\delta}\sigma^{\mu\alpha\epsilon\sigma}\right]+\left[\sigma_{\mu}^{~\beta\gamma\epsilon}\sigma^{\mu\alpha\delta\sigma}\right]+\left[\sigma_{\mu}^{~\beta\gamma\sigma}\sigma^{\mu\alpha\delta\epsilon}\right]+\nonumber\\ &&+\left[\sigma_{\mu}^{~\beta\delta\epsilon}\sigma^{\mu\alpha\gamma\sigma}\right]+\left[\sigma_{\mu}^{~\beta\delta\sigma}\sigma^{\mu\alpha\gamma\epsilon}\right]+\left[\sigma_{\mu}^{~\beta\epsilon\sigma}\sigma^{\mu\alpha\gamma\delta}\right]+\nonumber\\ &&+\left[\sigma_{\mu}^{~\gamma\delta\epsilon}\sigma^{\mu\alpha\beta\sigma}\right]+\left[\sigma_{\mu}^{~\gamma\delta\sigma}\sigma^{\mu\alpha\beta\epsilon}\right]+ \left[\sigma_{\mu}^{~\gamma\epsilon\sigma}\sigma^{\mu\alpha\beta\delta}\right]+\left[\sigma_{\mu}^{~\delta\epsilon\sigma}\sigma^{\mu\alpha\beta\gamma}\right]=0\nonumber\\ \end{eqnarray} and using the expression for $\left[\sigma_{\mu\nu\alpha\beta}\right]$, \eqref{4s}, we have at coincidence \begin{equation}\label{6Dtrace} \boxed{\left[\Box\Box\Box \sigma\right]=-\frac{8}{5}\Box R+{4\over 15} R^2_{\mu\nu}-{4\over 15} R^2_{\mu\nu\alpha\beta}} \end{equation} We also need another scalar combination of six derivatives of $\sigma$, to wit \begin{equation} \boxed{\left[\nabla^\lambda \Box \nabla_\lambda\Box \sigma\right]=\frac{2}{5}\Box R-{2\over 5} R^2_{\mu\nu}-{4\over 15} R^2_{\mu\nu\alpha\beta}} \end{equation} \subsection{Coincidence limits of derivatives of the van Vleck determinant.} As we shall see in a moment, another important piece in DeWitt's approach is the van Vleck determinant, defined by 
\begin{equation} \Delta(x,x^\prime)\equiv \text{det}~\Delta^{\alpha^\prime}\,_{\beta^\prime}(x,x^\prime)\equiv \text{det}~\left(-g^{\alpha^\prime}_\alpha (x^\prime,x)\sigma^\alpha_{\beta^\prime}(x,x^\prime)\right) \end{equation} The parallel propagator is defined in terms of frames (tetrads) at both $x$ and $x^\prime$ as \begin{equation} g^{\alpha^\prime}\,_\alpha (x,x^\prime)\equiv e_a^{\alpha^\prime}(x^\prime) e^a_\alpha(x) \end{equation} in such a way that \begin{equation} \text{det}~\left(g^{\alpha^\prime}\,_\alpha (x,x^\prime)\right)={e(x)\over e^\prime(x^\prime)} \end{equation} Taking determinants in the definition yields \begin{equation} \Delta(x,x^\prime)=-{\text{det}~\left(-\sigma^{\alpha^\prime}_{~\beta^\prime}(x,x^\prime)\right)\over e e^\prime}\equiv -{{\cal D}(x,x^\prime)\over e e^\prime} \end{equation} It is plain that \begin{equation} \left[\Delta^{\alpha^\prime}_{~\beta^\prime}\right]=\delta^{\alpha^\prime}_{~\beta^\prime} \end{equation} \begin{equation} \left[\Delta\right]=1 \end{equation} It is a fact \cite{Poisson} that the van Vleck-Morette determinant obeys the fundamental equation \begin{equation} \nabla_\alpha\left(\Delta~\sigma^\alpha\right)=\sigma^\alpha\nabla_\alpha \Delta+\Delta \Box \sigma=\sigma^\alpha\nabla_\alpha \Delta+\left(1+\sqrt{2 \sigma}~\theta\right)\Delta=n \Delta \end{equation} The failure of the van Vleck determinant to be parallel propagated is measured by the expansion of the geodesic congruence: \begin{equation} \sigma^\alpha\nabla_\alpha\left(\text{log}~\Delta\right)=(n-1)-\sqrt{2 \sigma}~\theta \end{equation} Indeed, starting from \begin{equation} \Delta^{\alpha^\prime}\,_{\beta^\prime}=-g^{\alpha^\prime\alpha}\left(\sigma^\gamma\,_\alpha\sigma_{\gamma\beta^\prime}+\sigma^\gamma\sigma_{\alpha\beta^\prime\gamma}\right)=g^{\alpha^\prime}\,_\alpha g^\gamma_{\gamma^\prime}\sigma^\alpha\,_\gamma \Delta^{\gamma^\prime}\,_{\beta^\prime}+\nabla_\gamma \Delta^{\alpha^\prime}_{\beta^\prime}\sigma^\gamma \end{equation} multiplying 
by the inverse matrix $\Delta^{-1}$ and taking the trace \begin{equation} n= \Box \sigma+(\Delta^{-1})^{\beta^\prime}\,_{\alpha^\prime} \sigma^\gamma\nabla_\gamma\Delta^{\alpha^\prime}_{\beta^\prime} \end{equation} which implies the desired identity. \par The coincidence limit of the fundamental equation holds trivially. \par Let us now draw some consequences for the coincidence limits of derivatives of the van Vleck determinant. First rewrite the fundamental equation \begin{equation} \nabla_\mu\left(\Delta \sigma^\mu\right)=n\Delta \end{equation} as \begin{equation} \label{fundamental} 2\sigma^\mu\Delta^{1/2}_\mu+\Delta^{1/2}\Box\sigma=n\Delta^{1/2} \end{equation} with the boundary condition $\left[\Delta\right]=1$. Taking the derivative of \eqref{fundamental}, \begin{equation} 2\sigma^\mu_{~\alpha}\Delta^{1/2}_\mu+2\sigma^\mu\Delta^{1/2}_{\mu\alpha}+\Delta^{1/2}_{\alpha}\Box\sigma+\Delta^{1/2}\Box\sigma_\alpha=n\Delta^{1/2}_\alpha \end{equation} At coincidence, using the former results about $\sigma$, this implies \begin{equation} \boxed{\left[\Delta^{1/2}_\rho\right]=0} \end{equation} The second derivative of \eqref{fundamental} leads to \begin{eqnarray} &&2\sigma^\mu_{~\alpha\beta}\Delta^{1/2}_\mu+2\sigma^\mu_{~\alpha}\Delta^{1/2}_{\mu\beta}+2\sigma^\mu_{~\beta}\Delta^{1/2}_{\mu\alpha}+2\sigma^\mu\Delta^{1/2}_{\mu\alpha\beta}+\Delta^{1/2}_{\alpha\beta}\Box\sigma+\Delta^{1/2}_{\alpha}\Box\sigma_\beta+\nonumber\\ &&+\Delta^{1/2}_\beta\Box\sigma_\alpha+\Delta^{1/2}\Box\sigma_{\alpha\beta}=n\Delta^{1/2}_{\alpha\beta} \end{eqnarray} At coincidence \begin{equation} n\left[\Delta^{1/2}_{\rho\sigma}\right]+\left[\nabla_\rho\nabla_\sigma\Box \sigma\right]+4 \left[\Delta^{1/2}_{\rho\sigma}\right]= n\left[\Delta^{1/2}_{\rho\sigma}\right] \end{equation} so that \begin{equation} \boxed{\left[ \Delta^{1/2}_{\alpha\beta}\right]={1\over 6} R_{\alpha\beta}} \end{equation} Taking the trace \begin{equation} \boxed{\left[\Box \Delta^{1/2}\right]={1\over 6} R} \end{equation} The 
third derivative of \eqref{fundamental} leads to \begin{eqnarray} &&2\sigma^\mu_{~\alpha\beta\lambda}\Delta^{1/2}_\mu+2\sigma^\mu_{~\alpha\beta}\Delta^{1/2}_{\mu\lambda}+2\sigma^\mu_{~\alpha\lambda}\Delta^{1/2}_{\mu\beta}+2\sigma^\mu_{~\alpha}\Delta^{1/2}_{\mu\beta\lambda}+2\sigma^\mu_{~\beta\lambda}\Delta^{1/2}_{\mu\alpha}+\nonumber\\ &&+2\sigma^\mu_{~\beta}\Delta^{1/2}_{\mu\alpha\lambda}+2\sigma^\mu_{~\lambda}\Delta^{1/2}_{\mu\alpha\beta}+2\sigma^\mu\Delta^{1/2}_{\mu\alpha\beta\lambda}+\Delta^{1/2}_{\alpha\beta\lambda}\Box\sigma+\Delta^{1/2}_{\alpha\beta}\Box\sigma_\lambda+\nonumber\\ &&+\Delta^{1/2}_{\alpha\lambda}\Box\sigma_\beta+\Delta^{1/2}_{\alpha}\Box\sigma_{\beta\lambda}+\Delta^{1/2}_{\beta\lambda}\Box\sigma_\alpha+\Delta^{1/2}_\beta\Box\sigma_{\alpha\lambda}+\Delta^{1/2}_\lambda\Box\sigma_{\alpha\beta}+\nonumber\\ &&+\Delta^{1/2}\Box\sigma_{\alpha\beta\lambda}=n\Delta^{1/2}_{\alpha\beta\lambda} \end{eqnarray} At coincidence this yields \begin{equation} 4\left[\Delta^{1/2}_{\alpha\beta\lambda}\right]+2\left[\Delta^{1/2}_{\lambda\alpha\beta}\right]+\left[\Box\sigma_{\alpha\beta\lambda}\right]=0 \end{equation} Using the commutation of derivatives \begin{equation} \left[\Delta^{1/2}_{\lambda\alpha\beta}\right]=\left[\Delta^{1/2}_{\alpha\beta\lambda}\right]\end{equation} which together with the previous result for $\left[\Box\sigma_{\alpha\beta\lambda}\right]$, implies that \begin{equation} \boxed{\left[\Delta^{1/2}_{\alpha\beta\lambda}\right]=\frac{1}{12}\left(\nabla_\alpha R_{\beta\lambda}+\nabla_\beta R_{\alpha\lambda}+\nabla_\lambda R_{\alpha\beta}\right)} \end{equation} whose trace reads \begin{equation} \boxed{\left[\nabla_\lambda\Box\Delta^{1/2}\right]=\frac{1}{6}\nabla_\lambda R} \end{equation} Finally, the fourth derivative of \eqref{fundamental} \begin{eqnarray} 
&&2\sigma^\mu_{~\alpha\beta\lambda\tau}\Delta^{1/2}_\mu+2\sigma^\mu_{~\alpha\beta\lambda}\Delta^{1/2}_{\mu\tau}+2\sigma^\mu_{~\alpha\beta\tau}\Delta^{1/2}_{\mu\lambda}+2\sigma^\mu_{~\alpha\beta}\Delta^{1/2}_{\mu\lambda\tau}+2\sigma^\mu_{~\alpha\lambda\tau}\Delta^{1/2}_{\mu\beta}+\nonumber\\ &&+2\sigma^\mu_{~\alpha\lambda}\Delta^{1/2}_{\mu\beta\tau}+2\sigma^\mu_{~\alpha\tau}\Delta^{1/2}_{\mu\beta\lambda}+2\sigma^\mu_{~\alpha}\Delta^{1/2}_{\mu\beta\lambda\tau}+2\sigma^\mu_{~\beta\lambda\tau}\Delta^{1/2}_{\mu\alpha}+2\sigma^\mu_{~\beta\lambda}\Delta^{1/2}_{\mu\alpha\tau}+\nonumber\\ &&+2\sigma^\mu_{~\beta\tau}\Delta^{1/2}_{\mu\alpha\lambda}+2\sigma^\mu_{~\beta}\Delta^{1/2}_{\mu\alpha\lambda\tau}+2\sigma^\mu_{~\lambda\tau}\Delta^{1/2}_{\mu\alpha\beta}+2\sigma^\mu_{~\lambda}\Delta^{1/2}_{\mu\alpha\beta\tau}+2\sigma^\mu_{~\tau}\Delta^{1/2}_{\mu\alpha\beta\lambda}+\nonumber\\ &&+2\sigma^\mu\Delta^{1/2}_{\mu\alpha\beta\lambda\tau}+\Delta^{1/2}_{\alpha\beta\lambda\tau}\Box\sigma+\Delta^{1/2}_{\alpha\beta\lambda}\Box\sigma_\tau+\Delta^{1/2}_{\alpha\beta\tau}\Box\sigma_\lambda+\Delta^{1/2}_{\alpha\beta}\Box\sigma_{\lambda\tau}+\nonumber\\ &&+\Delta^{1/2}_{\alpha\lambda\tau}\Box\sigma_\beta+\Delta^{1/2}_{\alpha\lambda}\Box\sigma_{\beta\tau}+\Delta^{1/2}_{\alpha\tau}\Box\sigma_{\beta\lambda}+\Delta^{1/2}_{\alpha}\Box\sigma_{\beta\lambda\tau}+\Delta^{1/2}_{\beta\lambda\tau}\Box\sigma_\alpha+\nonumber\\ &&+\Delta^{1/2}_{\beta\lambda}\Box\sigma_{\alpha\tau}+\Delta^{1/2}_{\beta\tau}\Box\sigma_{\alpha\lambda}+\Delta^{1/2}_\beta\Box\sigma_{\alpha\lambda\tau}+\Delta^{1/2}_{\lambda\tau}\Box\sigma_{\alpha\beta}+\Delta^{1/2}_\lambda\Box\sigma_{\alpha\beta\tau}+\nonumber\\ &&+\Delta^{1/2}_\tau\Box\sigma_{\alpha\beta\lambda}+\Delta^{1/2}\Box\sigma_{\alpha\beta\lambda\tau}=n\Delta^{1/2}_{\alpha\beta\lambda\tau}\end{eqnarray} At coincidence this reads \begin{eqnarray} 
&&2\left[\sigma^\mu_{~\alpha\beta\lambda}\Delta^{1/2}_{\mu\tau}\right]+2\left[\sigma^\mu_{~\alpha\beta\tau}\Delta^{1/2}_{\mu\lambda}\right]+2\left[\sigma^\mu_{~\alpha\lambda\tau}\Delta^{1/2}_{\mu\beta}\right]+2\left[\Delta^{1/2}_{\alpha\beta\lambda\tau}\right]+\nonumber\\ &&+2\left[\sigma^\mu_{~\beta\lambda\tau}\Delta^{1/2}_{\mu\alpha}\right]+2\left[\Delta^{1/2}_{\beta\alpha\lambda\tau}\right]+2\left[\Delta^{1/2}_{\lambda\alpha\beta\tau}\right]+2\left[\Delta^{1/2}_{\tau\alpha\beta\lambda}\right]+\left[\Delta^{1/2}_{\alpha\beta}\Box\sigma_{\lambda\tau}\right]+\nonumber\\ &&+\left[\Delta^{1/2}_{\alpha\tau}\Box\sigma_{\beta\lambda}\right]+\left[\Delta^{1/2}_{\alpha\lambda}\Box\sigma_{\beta\tau}\right]+\left[\Delta^{1/2}_{\beta\lambda}\Box\sigma_{\alpha\tau}\right]+\left[\Delta^{1/2}_{\beta\tau}\Box\sigma_{\alpha\lambda}\right]+\nonumber\\ &&+\left[\Delta^{1/2}_{\lambda\tau}\Box\sigma_{\alpha\beta}\right]+\left[\Box\sigma_{\alpha\beta\lambda\tau}\right]=0\nonumber\\\end{eqnarray} Commuting derivatives \begin{eqnarray} \left[\Delta^{1/2}_{\lambda\alpha\beta\tau}\right]&&=\left[\Delta^{1/2}_{\alpha\beta\lambda\tau}\right]+\frac{1}{6}R_{\alpha\rho\beta\lambda}R^\rho_{~\tau}\nonumber\\ \left[\Delta^{1/2}_{\tau\alpha\beta\lambda}\right]&&=\left[\Delta^{1/2}_{\alpha\beta\lambda\tau}\right]+\frac{1}{6}R_{\beta\rho\lambda\tau}R^\rho_{~\alpha}+\frac{1}{6}R_{\alpha\rho\lambda\tau}R^\rho_{~\beta}+\frac{1}{6}R_{\alpha\rho\beta\tau}R^\rho_{~\lambda}\nonumber\\ \end{eqnarray} leads to \begin{eqnarray}&&12\left[\Delta^{1/2}_{\alpha\beta\lambda\tau}\right]-\frac{2}{9}\left(R_{\alpha\beta}R_{\lambda\tau}+R_{\alpha\lambda}R_{\beta\tau}+R_{\alpha\tau}R_{\beta\lambda}\right)-\nonumber\\ &&-\frac{1}{9}\Big[\left(R^\mu_{~\beta\alpha\lambda}+R^\mu_{~\lambda\alpha\beta}+3R^\mu_{~\alpha\beta\lambda}\right)R_{\mu\tau}+\left(R^\mu_{~\beta\alpha\tau}+R^\mu_{~\tau\alpha\beta}+3R^\mu_{~\alpha\beta\tau}\right)R_{\mu\lambda}+\nonumber\\ 
&&+\left(R^\mu_{~\lambda\alpha\tau}+R^\mu_{~\tau\alpha\lambda}+3R^\mu_{~\alpha\lambda\tau}\right)R_{\mu\beta}+\left(R^\mu_{~\lambda\beta\tau}+R^\mu_{~\tau\beta\lambda}+3R^\mu_{~\beta\lambda\tau}\right)R_{\mu\alpha}\Big]+\nonumber\\ &&+\left[\Box\sigma_{\alpha\beta\lambda\tau}\right]=0\nonumber\\ \end{eqnarray} Using \eqref{6Dtrace}, we get the trace as \begin{equation} \left[\Box\Box\Delta^{1/2}\right]=\frac{1}{5}\Box R+\frac{1}{36}R^2-\frac{1}{30}R_{\mu\nu}^2-\frac{1}{30}R_{\mu\nu\alpha\beta}^2\end{equation} \subsection{Schwinger-DeWitt coefficients.} Let us now come back to the real thing, namely the computation of the coincidence limits of the Schwinger-DeWitt coefficients themselves. Start from the equation \begin{equation}\label{s0} \sigma^\mu \nabla_\mu a_0=0 \end{equation} Differentiating once, \begin{equation} \sigma^\mu\,_\nu\nabla_\mu a_0+ \sigma^\mu\nabla_\nu\nabla_\mu a_0=0 \end{equation} it follows that \begin{equation} \boxed{\left[\nabla_\mu a_0\right]=0} \end{equation} Differentiating again, \begin{equation} \sigma^{\mu}\,_{\nu\lambda} \nabla_\mu a_0+\sigma^\mu\,_\nu\nabla_\lambda \nabla_\mu a_0+\sigma^\mu\,_\lambda\nabla_\nu\nabla_\mu a_0+\sigma^\mu \nabla_\lambda\nabla_\nu\nabla_\mu a_0=0 \end{equation} we get \begin{equation} \left[\left(\nabla_\mu\nabla_\nu+\nabla_\nu\nabla_\mu\right) a_0\right]=0 \end{equation} Obviously \begin{equation} \nabla_\mu\nabla_\nu a_0=\nabla_\nu\nabla_\mu a_0\end{equation} so that \begin{equation} \boxed{\left[\nabla_\mu\nabla_\nu a_0\right]=0} \end{equation} whose trace is \begin{equation} \boxed{\left[\Box a_0\right]=0} \end{equation} The third derivative of \eqref{s0} reads \begin{eqnarray} &&\sigma^{\mu}_{~\nu\lambda\tau} \nabla_\mu a_0+\sigma^{\mu}_{~\nu\lambda} \nabla_\tau\nabla_{\mu} a_0+\sigma^\mu_{~\nu\tau}\nabla_\lambda \nabla_\mu a_0+\sigma^\mu_{~\nu}\nabla_\tau\nabla_\lambda \nabla_\mu a_0+\nonumber\\ &&+\sigma^\mu_{~\lambda\tau}\nabla_\nu\nabla_\mu a_0+\sigma^\mu_{~\lambda}\nabla_\tau\nabla_\nu\nabla_\mu a_0+\sigma^\mu_{~\tau} 
\nabla_\lambda\nabla_\nu\nabla_\mu a_0+\sigma^\mu \nabla_\tau\nabla_\lambda\nabla_\nu\nabla_\mu a_0=0\nonumber\\ \end{eqnarray} The fourth derivative of \eqref{s0} \begin{eqnarray} &&\sigma^{\mu}_{~\nu\lambda\tau\alpha} \nabla_\mu a_0+\sigma^{\mu}_{~\nu\lambda\tau}\nabla_\alpha \nabla_\mu a_0+\sigma^{\mu}_{~\nu\lambda\alpha} \nabla_\tau\nabla_{\mu} a_0+\sigma^{\mu}_{~\nu\lambda} \nabla_\alpha\nabla_\tau\nabla_{\mu} a_0+\nonumber\\ &&+\sigma^\mu_{~\nu\tau\alpha}\nabla_\lambda \nabla_\mu a_0+\sigma^\mu_{~\nu\tau}\nabla_\alpha\nabla_\lambda \nabla_\mu a_0+\sigma^\mu_{~\nu\alpha}\nabla_\tau\nabla_\lambda \nabla_\mu a_0+\nonumber\\ &&+\sigma^\mu_{~\nu}\nabla_\alpha\nabla_\tau\nabla_\lambda \nabla_\mu a_0+\sigma^\mu_{~\lambda\tau\alpha}\nabla_\nu\nabla_\mu a_0+\sigma^\mu_{~\lambda\tau}\nabla_\alpha\nabla_\nu\nabla_\mu a_0+\nonumber\\ &&+\sigma^\mu_{~\lambda\alpha}\nabla_\tau\nabla_\nu\nabla_\mu a_0+\sigma^\mu_{~\lambda}\nabla_\alpha\nabla_\tau\nabla_\nu\nabla_\mu a_0+\sigma^\mu_{~\tau\alpha} \nabla_\lambda\nabla_\nu\nabla_\mu a_0+\nonumber\\ &&+\sigma^\mu_{~\tau} \nabla_\alpha\nabla_\lambda\nabla_\nu\nabla_\mu a_0+\sigma^\mu_{~\alpha} \nabla_\tau\nabla_\lambda\nabla_\nu\nabla_\mu a_0+\sigma^\mu \nabla_\alpha\nabla_\tau\nabla_\lambda\nabla_\nu\nabla_\mu a_0=0\nonumber\\ \end{eqnarray} At coincidence \begin{equation} \left[\nabla_\mu\nabla_\tau\nabla_\lambda\nabla_\nu a_0+\nabla_\mu\nabla_\tau\nabla_\nu\nabla_\lambda a_0+\nabla_\mu\nabla_\lambda\nabla_\nu\nabla_\tau a_0+\nabla_\tau\nabla_\lambda\nabla_\mu\nabla_\nu a_0\right]=0 \end{equation} Ricci's identities imply that \begin{eqnarray} &&\left[\nabla_\mu\nabla_\tau\nabla_\nu\nabla_\lambda a_0\right]=\left[\nabla_\mu\nabla_\tau\nabla_\lambda\nabla_\nu a_0\right]\nonumber\\ &&\left[\nabla_\mu\nabla_\lambda\nabla_\nu\nabla_\tau a_0\right]=\left[\nabla_\mu\nabla_\tau\nabla_\lambda\nabla_\nu a_0\right]\nonumber\\ &&\left[\nabla_\tau\nabla_\lambda\nabla_\mu\nabla_\nu 
a_0\right]=\left[\nabla_\mu\nabla_\tau\nabla_\lambda\nabla_\nu a_0\right] \end{eqnarray} then \begin{equation} \left[\nabla_\mu\nabla_\nu\nabla_\alpha\nabla_\beta a_0\right]=0\end{equation} so that \begin{equation} \boxed{\left[\Box\Box a_0\right]=0} \end{equation} \par The relevant expansion has been worked out by Bryce DeWitt \cite{DeWitt}. Let us proceed in a pedestrian way, by writing \begin{equation} K\left(\tau;x,x^\prime\right)={1\over (4\pi\tau)^{n\over 2}}N(x,x^\prime)~e^{-{\sigma(x,x^\prime)\over 2 \tau}}~\sum_{p=0}^\infty a_p (x,x^\prime) \tau^p \end{equation} where we have left an arbitrary global coefficient $N(x,x^\prime)$, to be determined later, in front of the Taylor expansion. The purpose here is to show that it must be equal to the square root of the van Vleck determinant. In order to do that, let us now substitute the short-time expansion into the heat equation \begin{eqnarray} &&{\partial \over \partial \tau}~K\left(\tau;x,x^\prime\right) = {1\over (4\pi)^{n\over 2}}~N~e^{-{\sigma\over 2\tau}}~\sum_{p=0}\left(a_p{\sigma\over 2} +\left(p-1-{n\over 2}\right)a_{p-1}\right)\tau^{p-2-{n\over 2}}\nonumber\\ \end{eqnarray} Let us do the computation for (minus) the ordinary laplacian \begin{eqnarray} &&\nabla_\mu K\left(\tau;x,x^\prime\right)= {1\over (4 \pi \tau)^{n\over 2}}~e^{-{\sigma\over 2 \tau}}~\sum_p\left(\nabla_\mu ~N~a_p +N~\left(\nabla_\mu a_p-{\sigma_\mu\over 2\tau}a_p\right)\right)\tau^p\nonumber\\ \end{eqnarray} \begin{eqnarray} &&\nabla^2 K\left(\tau;x,x^\prime\right)={e^{-{\sigma\over 2 \tau}}\over (4 \pi )^{n\over 2}}~\sum_{p=0} ~\bigg\{(\nabla^2N) a_{p-2}+ 2 N^\mu \nabla_\mu a_{p-2} -\sigma^\mu (N_\mu) a_{p-1}+\nonumber\\ &&+N\Big({1\over 4} \sigma_\mu \sigma^\mu a_p- \sigma^\mu \nabla_\mu a_{p-1}-{1\over 2}(\Box\sigma) a_{p-1}+\nabla^\mu\nabla_\mu a_{p-2}\Big)\bigg\}\tau^{p-2-{n\over 2}} \end{eqnarray} where we have set \begin{equation} a_p=0 \end{equation} for negative values of the index $p$. 
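As a consistency check of this ansatz, consider a numerical sketch in flat one-dimensional space, where $N=\Delta^{1/2}=1$, $a_0=1$ and all higher $a_p$ vanish, so that the series truncates and the kernel $K=(4\pi\tau)^{-1/2}e^{-\sigma/2\tau}$ with $\sigma=(x-x^\prime)^2/2$ solves the heat equation exactly:

```python
import math

# Flat R^1 sketch: the expansion truncates (N = Delta^{1/2} = 1, a_0 = 1,
# a_p = 0 for p >= 1), and
#   K(tau; x, x') = (4 pi tau)^{-1/2} exp(-sigma/(2 tau)),  sigma = (x-x')^2/2,
# solves d_tau K = d_x^2 K exactly; we verify this by finite differences.
def K(tau, x, y):
    return math.exp(-((x - y) ** 2) / (4 * tau)) / math.sqrt(4 * math.pi * tau)

tau, x, y, h = 0.3, 0.4, -0.2, 1e-4
dK_dtau = (K(tau + h, x, y) - K(tau - h, x, y)) / (2 * h)
d2K_dx2 = (K(tau, x + h, y) - 2 * K(tau, x, y) + K(tau, x - h, y)) / h ** 2
assert abs(dK_dtau - d2K_dx2) < 1e-5
print("flat heat kernel solves the heat equation")
```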
The heat kernel equation provides the recursion relation \begin{eqnarray} &&(\nabla^\mu\nabla_\mu N) a_{p-2}+ 2 \nabla^\mu N \nabla_\mu a_{p-2}-\sigma^\mu (\nabla_\mu N) a_{p-1}+N\Bigg\{{1\over 4} \sigma_\mu \sigma^\mu a_p- \nonumber\\ &&-\sigma^\mu \nabla_\mu a_{p-1}-{1\over 2}(\Box\sigma) a_{p-1}+\nabla^\mu\nabla_\mu a_{p-2}-a_p{\sigma\over 2} -\left(p-1-{n\over 2}\right)a_{p-1}\Bigg\}=0 \nonumber\\\end{eqnarray} We can simplify this expression using the exact identity \begin{eqnarray} &&\sigma_\mu \sigma^\mu=2\sigma\end{eqnarray} which cancels the terms proportional to $a_p$, and by demanding that $N$ obey the fundamental equation \eqref{fundamental}, \begin{equation}2\sigma^\mu\nabla_\mu N+N\,\Box\sigma=n\,N\end{equation} which identifies $N=\Delta^{1/2}$ and collapses the terms proportional to $a_{p-1}$: \begin{eqnarray} &&(\nabla^\mu\nabla_\mu N) a_{p-2}+ 2 \nabla^\mu N \nabla_\mu a_{p-2}+N\nabla^\mu\nabla_\mu a_{p-2}-N\sigma^\mu \nabla_\mu a_{p-1}-\nonumber\\ &&-N\left(p-1\right)a_{p-1}=0 \nonumber\\\end{eqnarray} Shifting the index $p$ by two units, this can be written in terms of $\Delta\equiv N^2$ as \begin{eqnarray}\label{rr} &&\Box(\Delta^{1/2}a_p)-\Delta^{1/2}\sigma^\mu \nabla_\mu a_{p+1}-\Delta^{1/2}\left(p+1\right)a_{p+1}=0\end{eqnarray} In the simplest case $p=0$, we have \begin{eqnarray}\label{p0} &&\Box(\Delta^{1/2}a_0)-\Delta^{1/2}\sigma^\mu \nabla_\mu a_{1}-\Delta^{1/2}a_{1}=0\end{eqnarray} At the coincidence limit \begin{eqnarray} &&\left[\Box\Delta^{1/2}\right]-\left[a_{1}\right]=0\end{eqnarray} then \begin{equation}\boxed{\left[a_{1}\right]=\frac{1}{6}R}\end{equation} Taking the derivative of \eqref{p0} \begin{eqnarray} &&\nabla_\alpha\Box(\Delta^{1/2}a_0)-\nabla_\alpha\Delta^{1/2}\sigma^\mu \nabla_\mu a_{1}-\Delta^{1/2}\sigma^\mu_{~\alpha} \nabla_\mu a_{1}-\Delta^{1/2}\sigma^\mu \nabla_\alpha \nabla_\mu a_{1}-\nonumber\\ &&-\nabla_\alpha(\Delta^{1/2})a_{1}-\Delta^{1/2}\nabla_\alpha a_{1}=0\end{eqnarray} At coincidence \begin{eqnarray} &&\left[\nabla_\alpha\Box(\Delta^{1/2}a_0)\right]-2\left[ \nabla_\alpha a_{1}\right]=0\end{eqnarray} then \begin{equation}\left[\nabla_\alpha a_{1}\right]=\frac{1}{12}\nabla_\alpha R\end{equation} Differentiating 
again \begin{eqnarray} &&\nabla_\beta\nabla_\alpha\Box(\Delta^{1/2}a_0)-\nabla_\beta\nabla_\alpha\Delta^{1/2}\sigma^\mu \nabla_\mu a_{1}-\nabla_\alpha\Delta^{1/2}\sigma^\mu_{~\beta} \nabla_\mu a_{1}-\nonumber\\ &&-\nabla_\alpha\Delta^{1/2}\sigma^\mu \nabla_\beta\nabla_\mu a_{1}-\nabla_\beta\Delta^{1/2}\sigma^\mu_{~\alpha} \nabla_\mu a_{1}-\Delta^{1/2}\sigma^\mu_{~\alpha\beta} \nabla_\mu a_{1}-\nonumber\\ &&-\Delta^{1/2}\sigma^\mu_{~\alpha} \nabla_\beta\nabla_\mu a_{1}-\nabla_\beta\Delta^{1/2}\sigma^\mu \nabla_\alpha \nabla_\mu a_{1}-\Delta^{1/2}\sigma^{\mu}_{~\beta} \nabla_\alpha \nabla_\mu a_{1}-\nonumber\\ &&-\Delta^{1/2}\sigma^\mu \nabla_\beta\nabla_\alpha \nabla_\mu a_{1}-\nabla_\beta\nabla_\alpha(\Delta^{1/2})a_{1}-\nabla_\alpha(\Delta^{1/2})\nabla_\beta a_{1}-\nonumber\\ &&-\nabla_\beta\Delta^{1/2}\nabla_\alpha a_{1}-\Delta^{1/2}\nabla_\beta\nabla_\alpha a_{1}=0\end{eqnarray} At coincidence \begin{eqnarray} &&\left[\nabla_\beta\nabla_\alpha\Box(\Delta^{1/2}a_0)\right]-3\left[\nabla_\alpha\nabla_\beta a_{1}\right]-\left[\nabla_\beta\nabla_\alpha(\Delta^{1/2})a_{1}\right]=0\end{eqnarray} whose trace reads \begin{equation} \left[\Box a_{1}\right]=\frac{1}{15}\Box R+\frac{1}{90}R_{\mu\nu\alpha\beta}^2-\frac{1}{90}R_{\mu\nu}^2\end{equation} Next, we take $p=1$ in the recursion relation \begin{eqnarray}\label{p1} &&\Box(\Delta^{1/2}a_1)-\Delta^{1/2}\sigma^\mu \nabla_\mu a_{2}-2\Delta^{1/2}a_{2}=0\end{eqnarray} The coincidence limit reads \begin{eqnarray} &&\left[\Box(\Delta^{1/2}a_1)\right]-2\left[a_{2}\right]=0\end{eqnarray} Collecting all results \begin{equation} \boxed{\left[a_2\right]=\frac{1}{30}\Box R+\frac{1}{180}R_{\mu\nu\alpha\beta}^2-\frac{1}{180}R_{\mu\nu}^2+\frac{1}{72}R^2} \end{equation} \newpage \section{Recursion relations for the coefficients of the short time expansion of the heat kernel.} There are some recursion relations obtained by Gilkey \cite{Gilkey} in a remarkable paper. 
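As a quick consistency check of the boxed coincidence limits $\left[a_1\right]$ and $\left[a_2\right]$ obtained above (an illustrative aside, with the curvature invariants of the unit two-sphere inserted by hand): for $S^2$ one has $R=2$, $R_{\mu\nu\alpha\beta}^2=4$, $R_{\mu\nu}^2=2$, and $\Box R=0$, giving $\left[a_1\right]=1/3$ and $\left[a_2\right]=1/15$, the standard values for the unit sphere. In exact arithmetic:

```python
from fractions import Fraction as F

def a1_coincidence(R):
    # boxed result: [a_1] = R/6
    return F(R, 6)

def a2_coincidence(box_R, riem2, ric2, R):
    # boxed result:
    # [a_2] = (1/30) Box R + (1/180) Riem^2 - (1/180) Ric^2 + (1/72) R^2
    return F(1, 30)*box_R + F(1, 180)*riem2 - F(1, 180)*ric2 + F(1, 72)*R**2

# curvature invariants of the unit two-sphere:
# R = 2, Riem^2 = 4, Ric^2 = 2, Box R = 0
print(a1_coincidence(2))           # 1/3
print(a2_coincidence(0, 4, 2, 2))  # 1/15
```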
These relations greatly simplify the computation of the Schwinger-DeWitt coefficients. We are referring to an operator on a Riemannian manifold, $M$, \begin{equation} \Delta\equiv -\left(g^{\mu\nu}\partial_\mu\partial_\nu+ A^\sigma \partial_\sigma+ B\right) \end{equation} There is a unique endomorphism $E$ allowing us to rewrite the operator as \begin{equation} \Delta=-(g^{\mu\nu}\nabla_\mu\nabla_\nu+E)\end{equation} where \begin{eqnarray} &&A^\lambda=2 \omega^\lambda -g^{\mu\nu}\Gamma^\lambda_{\mu\nu}\nonumber\\ &&B=\partial_\mu \omega^\mu-g^{\mu\nu}\Gamma^\lambda_{\mu\nu}\omega_\lambda+\omega_\mu\omega^\mu+E \end{eqnarray} First of all, it can be proved that the general form of the Schwinger-DeWitt coefficients is \begin{eqnarray} a_0(f,\Delta)&&=\int d^n x\sqrt{g}\,{\rm tr\,}\left\{f(\alpha_0)\right\} \nonumber\\ a_2(f,\Delta)&&={1\over 6}\int d^n x\sqrt{g}\,{\rm tr\,}\left\{f(\alpha_1 E+\alpha_2 R)\right\} \nonumber\\ a_4(f,\Delta)&&={1\over 360}\int d^n x\sqrt{g}\,{\rm tr\,}\Big\{f(\alpha_3 \Box E+ \alpha_4 R E+\alpha_5 E^2+\alpha_6 \Box R+\nonumber\\ &&+\alpha_7 R^2+\alpha_8 R_{\mu\nu}^2+\alpha_9 R_{\mu\nu\rho\sigma}^2+\alpha_{10} W_{\mu\nu}^2)\Big\} \end{eqnarray} There are three lemmas that we are going to use in the sequel in order to determine $\alpha_i$.
{\bf Lemma 1.} \begin{equation} \boxed{\left.{d\over d\epsilon}a_m\left(e^{-2 \epsilon f}\Delta\right)\right|_{\epsilon=0}=(n-m)a_m\left(f,\Delta\right)}\label{l1} \end{equation} \begin{proof} Consider the family \begin{equation} \Delta(\epsilon)\equiv e^{2 \epsilon f} \Delta-\epsilon F \end{equation} it so happens that \begin{equation} \left.{d\over d\epsilon}\Delta(\epsilon)\right|_{\epsilon=0}=-2 f \Delta-F \end{equation} now we can use the lemma in Gilkey's book \cite{Gilkey}, page 78, asserting that whenever $[P,Q]=0$, \begin{equation} \left.{d\over d \epsilon}{\rm tr\,}\, Q(\epsilon) e^{-t P(\epsilon)}\right|_{\epsilon=0}={\rm tr\,}\, \left(-t \dot{P} Q+ \dot{Q}\right)\,e^{-t P} \end{equation} where $Q\equiv Q(0)$, $P\equiv P(0)$. It follows that \begin{eqnarray} \left.{d\over d \epsilon}{\rm tr\,}\, e^{-t \Delta(\epsilon)}\right|_{\epsilon=0}&&=t\, {\rm tr\,}\,\left( 2 f \Delta\, e^{-t \Delta}\right)+ t\,{\rm tr\,}\, \left(F e^{-t \Delta}\right)=\nonumber\\ &&=-2 t {\partial\over \partial t}\, {\rm tr\,}\,\left( f \, e^{-t \Delta}\right)+ t\,{\rm tr\,}\, \left(F e^{-t \Delta}\right) \end{eqnarray} that is \begin{eqnarray} &&\left.{d\over d \epsilon}\sum a_p\left(e^{-2 \epsilon f}\,\Delta-\epsilon F\right)\right|_{\epsilon=0}\,t^{p-n\over 2}=\nonumber\\ &&=\sum(p-n) a_p(f,\Delta)t^{p-n\over 2}+\sum a_p(F,\Delta)\,t^{p-n+2\over 2} \end{eqnarray} making $F=0$ we get the result \end{proof} {\bf Lemma 2.} \begin{equation} \boxed{\left.{d\over d \epsilon} a_p\left(\Delta-\epsilon F\right)\right|_{\epsilon=0}=a_{p-2}\left(F,\Delta\right)}\label{l2} \end{equation} \begin{proof} Consider now the operator \begin{equation} \Delta_{\epsilon,\delta}\equiv e^{-2 f \epsilon}\,\left(\Delta-\delta F\right) \end{equation} then \begin{eqnarray} &&\partial_\epsilon a_n\left(\Delta_{\epsilon,\delta}\right)=\partial_\delta \partial_\epsilon a_n\left(\Delta_{\epsilon,\delta}\right)=\partial_\epsilon \partial_\delta a_n\left(\Delta_{\epsilon,\delta}\right)=\nonumber\\ 
&&=\partial_\epsilon a_{n-2}\left(e^{-2 f \epsilon}F,e^{-2\epsilon f} \Delta\right)=0 \end{eqnarray} \end{proof} {\bf Lemma 3.} \begin{equation} \boxed{\left.{d\over d \epsilon} a_{n-2}\left(e^{-2 f \epsilon}F,e^{-2\epsilon f} \Delta\right)\right|_{\epsilon=0}=0}\label{l3} \end{equation} Next, let us determine the coefficients $\alpha_i$ \begin{itemize} \item The coefficient $\alpha_0$ follows from the heat kernel expansion for the scalar laplacian, so that \boxed{\alpha_0=1} \item Now we use \eqref{l2} with $p=2$ \begin{equation} {1\over 6}{\rm tr\,}\,\left( \alpha_1 F\right)={\rm tr\,}\, F \end{equation} and we can directly extract \boxed{\alpha_1= 6} \item Taking now \eqref{l2} with $p=4$, \begin{equation} {1\over 360}{\rm tr\,}\,\left(\alpha_4 F R+2 \alpha_5 F E\right)={1\over 6} {\rm tr\,}\,\left(\alpha_1 F E +\alpha_2 F R\right)\end{equation} from this equation we get \boxed{\alpha_5=180} \quad \text{and}\quad \boxed{\alpha_4= 60 \alpha_2} \end{itemize} To proceed further we have to take into account local scale transformations defined in \eqref{l1} and \eqref{l3}. 
A list of the relevant transformations reads \begin{eqnarray}\label{ee} &&\left.\frac{d}{d\epsilon}\bar{E}\right|_{\epsilon=0}=-2f E+\frac{n-2}{2}\Box f\nonumber\\ &&\left.\frac{d}{d\epsilon}\bar{R}\right|_{\epsilon=0}=-2f R-2(n-1)\Box f\nonumber\\ &&\left.\frac{d}{d\epsilon}\bar{\Box}\bar{E}\right|_{\epsilon=0}=-4f \Box E-2E\Box f+\frac{n-2}{2}\Box^2 f-2\nabla_\mu f\nabla^\mu E\nonumber\\ &&\left.\frac{d}{d\epsilon}\bar{R}\bar{E}\right|_{\epsilon=0}=-4f R E+\frac{n-2}{2}R\Box f-2(n-1)E\Box f\nonumber\\ &&\left.\frac{d}{d\epsilon}\bar{E}^2\right|_{\epsilon=0}=-4f E^2+(n-2)E\Box f\nonumber\\ &&\left.\frac{d}{d\epsilon}\bar{\Box}\bar{R}\right|_{\epsilon=0}=-4f\Box R -2R\Box f-2(n-1)\Box^2 f-2\nabla_\mu f\nabla^\mu R\nonumber\\ &&\left.\frac{d}{d\epsilon}\bar{R}^2\right|_{\epsilon=0}=-4fR^2 -4(n-1)R\Box f\nonumber\\ &&\left.\frac{d}{d\epsilon}\bar{R}_{\mu\nu}^2\right|_{\epsilon=0}=-4fR_{\mu\nu}^2-2R\Box f -2(n-2)\nabla_\mu\nabla_\nu R\nabla^\mu\nabla^\nu f\nonumber\\ &&\left.\frac{d}{d\epsilon}\bar{R}_{\mu\nu\rho\sigma}^2\right|_{\epsilon=0}=-4fR_{\mu\nu\rho\sigma}^2 -8\nabla_\mu\nabla_\nu R\nabla^\mu\nabla^\nu f\nonumber\\ &&\left.\frac{d}{d\epsilon}\bar{W}_{\mu\nu}^2\right|_{\epsilon=0}=-4fW_{\mu\nu}^2 \end{eqnarray} Using these relations we can employ \eqref{l3} to compute the coefficients. \begin{itemize} \item Applying \eqref{l3} to $(n,p)=(4,2)$ we have \begin{equation} \left.{d\over d\epsilon}\,a_2\left(e^{-2\epsilon f}\,F,e^{-2\epsilon f}\,\Delta\right)\right|_{\epsilon=0}=0 \end{equation} with \eqref{ee}, it follows that \begin{equation} 0={\rm tr\,}\,\left(\alpha_1-6 \alpha_2\right)F\,\Box f \end{equation} ergo \boxed{\alpha_2=1} and \boxed{\alpha_4=60} \item Consider now a product metric $M=M_1\times M_2$ with laplacian $\Delta=\Delta_1 + \Delta_2$. 
It can be shown that \begin{equation} a_4(\Delta)=a_4(\Delta_1)+a_4(\Delta_2)+a_2(\Delta_1)\,a_2(\Delta_2) \end{equation} the only cross term comes from \begin{equation} R^2(M_1 \times M_2)=R^2(M_1)+R^2(M_2)+2 R(M_1)R(M_2) \end{equation} i.e. \begin{equation} 2{1\over 360}\alpha_7=\left({1\over 6} \alpha_2\right)^2 \end{equation} this means that \boxed{\alpha_7=5} \item Let us now apply \eqref{l3} again with $(n,p)=(6,4)$. It follows \begin{equation} \left.\frac{d}{d\epsilon} a_4\left(e^{-2\epsilon f}\,F,e^{-2\epsilon f} \Delta\right)\right|_{\epsilon=0}=0 \end{equation} with \eqref{ee}, then \begin{eqnarray} &0={\rm tr\,}\,\bigg\{F\left(-2 \alpha_3-10\alpha_4 + 4 \alpha_5\right)E\Box f +\left(2 \alpha_3 -10\alpha_6\right)\Box^2 f+\nonumber\\ &+\left(2\alpha_4-2 \alpha_6-20\alpha_7-2\alpha_8\right)R\Box f -8\left(\alpha_8+\alpha_9\right)\nabla_\mu\nabla_\nu R\nabla^\mu\nabla^\nu f\bigg\}\nonumber\\ \end{eqnarray} we conclude \boxed{\alpha_3=60}, \boxed{\alpha_6=12}, \boxed{\alpha_8=-2} and \boxed{\alpha_9=2} \item In order to get $\alpha_{10}$ we shall follow Fujikawa's method as worked out by Nepomechie \cite{Nepomechie,Fujikawa}.
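The last four boxed values amount to solving a small linear system; as a sketch in exact rational arithmetic (with $\alpha_4=60$, $\alpha_5=180$, and $\alpha_7=5$ taken from the previous steps), the four bracketed coefficients above give:

```python
from fractions import Fraction as F

# inputs fixed in earlier steps: alpha_4 = 60, alpha_5 = 180, alpha_7 = 5
a4, a5, a7 = F(60), F(180), F(5)

# constraints read off from the vanishing scale variation:
#   -2 a3 - 10 a4 + 4 a5 = 0          (coefficient of F E Box f)
#    2 a3 - 10 a6        = 0          (coefficient of F Box^2 f)
#    2 a4 - 2 a6 - 20 a7 - 2 a8 = 0   (coefficient of F R Box f)
#    a8 + a9 = 0                      (last term)
a3 = (4*a5 - 10*a4) / 2
a6 = a3 / 5
a8 = a4 - a6 - 10*a7
a9 = -a8

print(a3, a6, a8, a9)  # 60 12 -2 2
```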
Consider the operator {\em in flat space} \begin{equation} \Delta\equiv -\left(\delta^{\mu\nu} \nabla_\mu\nabla_\nu+E\right) \end{equation} where \begin{equation} \nabla_\mu\equiv \partial_\mu+A_\mu \end{equation} The heat kernel will be given by \begin{eqnarray} K(\tau;x,y)&&=\langle x\left|e^{-\tau \Delta}\right|y\rangle=e^{-\tau \Delta} \langle x|y\rangle=\nonumber\\ &&=\int {d^n k\over (2\pi)^n}\,e^{-\tau \Delta}\langle x|k\rangle\langle k|y\rangle=\int {d^n k\over (2\pi)^n}\, e^{-i k y} e^{-\tau \Delta} e^{ikx}\nonumber\\ \end{eqnarray} Now it so happens that \begin{equation} \nabla_\mu e^{ikx}=e^{i kx}\,\left(ik_\mu+\partial_\mu+A_\mu\right)=e^{i kx}\,\left(ik_\mu+\nabla_\mu\right) \end{equation} as well as \begin{eqnarray} \nabla^\mu\nabla_\mu e^{ikx}&&=\nabla_\mu\,e^{i kx}\,\left(ik_\mu+\nabla_\mu\right)=e^{ikx}\left(i k_\mu+\nabla_\mu\right)\left(i k^\mu +\nabla^\mu\right)=\nonumber\\ &&=e^{ikx}\left(- k^2+2 i k^\mu\nabla_\mu+\nabla^2\right) \end{eqnarray} ergo \begin{equation} \Delta e^{ikx}=-\left(\nabla^\mu\nabla_\mu+E\right)e^{i k x}=e^{ikx}\left( k^2-2 i k.\nabla+\Delta\right) \end{equation} Rescaling now \begin{equation} k\rightarrow {k\over \sqrt{\tau}} \end{equation} we arrive at \begin{eqnarray} K(\tau;x,y)&&={1\over \tau^{n\over 2}}\int{ d^n k\over (2\pi)^n}\,e^{-k^2}\,e^{2 i\sqrt{\tau} k.\nabla-\tau \Delta} =\nonumber\\ &&={1\over \tau^{n\over 2}}\int{ d^n k\over (2\pi)^n}\,e^{-k^2}\sum_{p=0}^\infty {\left(2 i\sqrt{\tau} k.\nabla-\tau \Delta\right)^p\over p!} \end{eqnarray} We would like to single out the power of $\tau^2$ in the expansion.
We notice that in the expansion of $\left(\sqrt{\tau}A+ \tau B\right)^p$ the coefficients of $\tau^2$ come from the terms \begin{equation} {1\over 4!} A^4+{1\over 3!}\left(A^2 B+ B A^2+ A B A\right)+ {1\over 2} B^2 \end{equation} where in our case \begin{eqnarray} &A\equiv 2 i k^\mu\nabla_\mu\nonumber\\ &B\equiv -\Delta=\nabla_\mu\nabla^\mu+E \end{eqnarray} let us write those terms explicitly \begin{eqnarray} &&\frac{2}{3}\Big[k^\mu\nabla_\mu k^\nu\nabla_\nu k^\rho\nabla_\rho k^\sigma\nabla_\sigma-k^\mu\nabla_\mu k^\nu\nabla_\nu\Box-k^\mu\nabla_\mu k^\nu\nabla_\nu E-\nonumber\\ &&-\Box k^\mu\nabla_\mu k^\nu\nabla_\nu-E k^\mu\nabla_\mu k^\nu\nabla_\nu-k^\mu\nabla_\mu \Box k^\nu\nabla_\nu-k^\mu\nabla_\mu Ek^\nu\nabla_\nu\Big]+\nonumber\\ &&+\frac{1}{2}\Big[\Box^2+\Box E+E\Box+E^2\Big] \end{eqnarray} the momentum integrations (normalized so that the zeroth moment, which carries the overall $(4\pi)^{-n/2}$ prefactor, equals one) are given by \begin{eqnarray} &&\int {d^n k\over (2\pi)^n}e^{-k^2}=1\nonumber\\ &&\int {d^n k\over (2\pi)^n}e^{-k^2}k_\mu k_\nu={1\over 2} \delta_{\mu\nu}\nonumber\\ &&\int {d^n k\over (2\pi)^n}e^{-k^2}k_\mu k_\nu k_\alpha k_\beta={1\over 4}\left(\delta_{\mu\nu}\delta_{\alpha\beta}+\delta_{\mu\alpha}\delta_{\nu\beta}+\delta_{\mu\beta}\delta_{\nu\alpha}\right) \end{eqnarray} therefore \begin{eqnarray} a_4&&\rightarrow\frac{1}{6}\Big[\Box^2+\nabla^\mu\nabla^\nu\nabla_\mu\nabla_\nu+\nabla^\mu\Box\nabla_\mu\Big]-\nonumber\\ &&-\frac{1}{3}\Big[2\Box^2+\Box E+E\Box+\nabla^\mu\Box\nabla_\mu+\nabla_\mu E\nabla^\mu\Big]+\nonumber\\ &&+\frac{1}{2}\Big[\Box^2+\Box E+E\Box+E^2\Big] \end{eqnarray} discarding surface terms and combining the derivatives \begin{equation} a_4\rightarrow\left({1\over 2} E^2+{1\over 12} W_{\mu\nu}^2+{1\over 6}\Box E\right) \end{equation} which implies that $\boxed{\alpha_{10}=30}$.
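These momentum integrals factorize into one-dimensional moments of the Gaussian weight $e^{-x^2}$, for which $\langle x^2\rangle=1/2$ and $\langle x^4\rangle=3/4$; a quick numerical check by plain quadrature (an illustrative sketch):

```python
import math

def gauss_moment(p, lo=-10.0, hi=10.0, nstep=100000):
    # <x^p> = (integral of x^p e^{-x^2}) / (integral of e^{-x^2}),
    # evaluated by a simple Riemann sum over [lo, hi]
    h = (hi - lo) / nstep
    num = den = 0.0
    for i in range(nstep + 1):
        x = lo + i * h
        w = math.exp(-x * x)
        num += w * x**p
        den += w
    return num / den

# <k_mu k_nu> -> delta/2 : the one-dimensional second moment is 1/2
assert abs(gauss_moment(2) - 0.5) < 1e-7
# <k_mu^4> -> 3/4 (three coincident Kronecker deltas)
assert abs(gauss_moment(4) - 0.75) < 1e-7
# <k_mu^2 k_nu^2> with mu != nu -> 1/4, by factorization
assert abs(gauss_moment(2) ** 2 - 0.25) < 1e-7
```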
\end{itemize} In conclusion, the heat kernel coefficients are \begin{eqnarray} a_0(f,D)&&=1 \nonumber\\ a_2(f,D)&&={1\over 6}\int d^n x\sqrt{g}\,{\rm tr\,}\left\{6 E+ R\right\} \nonumber\\ a_4(f,D)&&={1\over 360}\int d^n x\sqrt{g}\,{\rm tr\,}\Big\{60 \Box E+ 60 R E+180 E^2+12 \Box R+\nonumber\\ &&+5R^2-2 R_{\mu\nu}^2+2 R_{\mu\nu\rho\sigma}^2+30 W_{\mu\nu}^2\Big\} \end{eqnarray} in flat space we recover \begin{eqnarray} \left[a_0\right]&&=1\nonumber\\ \left[a_2\right]&&=-Y\nonumber\\ \left[a_4\right]&&=\frac{1}{12}W_{\mu\nu}^2+\frac{1}{2}Y^2-\frac{1}{6}\Box Y \end{eqnarray} where $E=-Y$. \newpage \section*{Acknowledgements} We acknowledge partial financial support by the Spanish MINECO through the Centro de Excelencia Severo Ochoa Program under Grant CEX2020-001007-S, funded by MCIN/AEI/10.13039/501100011033. We also acknowledge the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 860881-HIDDeN, as well as Grant PID2019-108892RB-I00 funded by MCIN/AEI/10.13039/501100011033 and by ``ERDF A way of making Europe''. \newpage
\section{Introduction} The last three decades have provided a prodigious number of exoplanet detections and observations. From knowing very little about exoplanets, even whether they exist at all, this observationally-driven progress has completely revolutionized our understanding of exoplanets and their occurrence in the universe \citep{Deeg17}. The dedicated {\it Kepler} mission \citep{Kepler10} has provided vast statistical information about transiting exoplanet masses, sizes, orbital separations, and inclinations, that is expected to greatly increase with the upcoming {\it TESS} mission \citep{TESS15}. This wealth of transit observations is supplemented by other exoplanet observational techniques, such as radial velocities \citep[see review by][]{Fischer16}, gravitational microlensing \citep[e.g.,][]{Bond04}, and direct imaging \citep[e.g.,][]{Lagrange10}. The growing amount of new data has led to a shift in the theoretical investigations in the field from {\it detection} to {\it characterization} of the formation, evolution, internal structure, and atmospheres of exoplanets. This theoretical shift is being accompanied by growing observational effort to detect spectral emission from exoplanetary atmospheres \cite[see review][]{Bailey14}, Lyman $\alpha$ signatures of atmospheric evaporation \citep[e.g.,][]{Vidal-Madjar03,Ehrenreich15,Bourrier16,Salz16,Spake18}, and chromospheric signatures of magnetic star-planet interaction \citep[e.g.,][]{shkolnik03,Shkolnik05,Shkolnik08,Fares10,Gurdemir12,Shkolnik18}. Unfortunately, all these characterization methods are extremely hard to realize due to the intrinsically weak planetary signal. Such data is expected to remain very limited even with the upcoming {\it JWST} mission \citep{JWST06}. An additional path in the search and characterization of exoplanets is observations in the radio bands, which can shed light on plasma processes that lead to the generation of radio signal. 
Relevant processes are expected to operate in the low-frequency range of the radio spectrum, around a few tens of MHz and below. However, this range of radio frequencies could be masked by the plasma cutoff frequency of the Earth's ionosphere \citep[e.g.,][]{Davies:69,Yeh.Liu:82}, making ground-based observations of these radio sources extremely challenging. Of particular interest in the context of radio observations of exoplanets are the radio waves generated by the planet itself as a radio source, in addition to the ambient stellar radio background. Recent theoretical work has focused on estimating radio emissions that are generated by the interaction between the stellar wind and the planetary magnetosphere. The interaction leads to particle acceleration that is manifested in auroral emissions and magnetosphere-ionosphere field-aligned currents, both associated with known mechanisms for generating radio waves \citep[e.g.,][]{Zarka07,Lazio07,Grismeier07,Vidotto15,See15,Nichols16,Alvarado-Gomez.etal:16b,BurkhartLoeb17,Turnpenney18,Lynch18}. \cite{See15} used Zeeman-Doppler imaging (ZDI) maps to estimate the temporal variations of radio emissions from exoplanets directly from the magnetic maps, together with an empirical estimate of the radio power (assuming planetary auroral emissions). \cite{Llama18} presented a more detailed calculation of the coronal radio emissions from V374 Peg using a potential-field approximation and a hydrostatic coronal density (a non-MHD solution). However, neither of these studies included an actual planet in the simulations. Here, we take an alternative approach to investigate the detectability of exoplanets in the radio bands. Instead of detecting the planet {\it as a radio source}, we estimate the {\it planet's induced modulation of the background coronal radio emission}.
As a starting point, we explore this effect in a limited range of stellar and planetary parameters, assuming the exoplanet is stationary in the frame rotating with the stellar rotation, and demonstrate that our model can provide predictions for exoplanetary radio modulations. In particular, we narrow down the radio bands needed for potential observations of these modulations. We describe our model and the synthetic radio image tool in Section~\ref{Model}, which is based on the development of \citet{Moschou.etal:18}, and detail the results in Section~\ref{Results}. We discuss our findings and state the next step of our investigation of exoplanetary radio modulations in Section~\ref{Discussion}, and conclude our work in Section~\ref{Conclusions}. \section{Synthetic Radio Imaging of Stellar Coronae} \label{Model} \subsection{MHD Model for the Stellar Corona} \label{MHDmodel} Our method employs a numerical model of the corona and wind of a star computed self-consistently with an orbiting planet stationary in the frame rotating with the stellar rotation. To produce a solution for the stellar corona, we use the {\it BATS-R-US} MHD model \citep{Powell:99,Toth.etal:12} and its version for the solar corona and solar wind \citep{Vanderholst.etal:14}. The model is driven by photospheric magnetic field data, while taking into account the stellar radius, mass, and rotation period. The non-ideal MHD equations are solved on a spherical grid which is stretched in the radial direction, taking into account Alfv\'en wave coronal heating and wind acceleration that are manifested as additional momentum and heating terms. The model also takes into account thermodynamic heating and cooling effects, Poynting flux that enters the corona, and empirical turbulent length-scales. We refer the reader to \cite{Sokolov.etal:13} and \cite{Vanderholst.etal:14} for a complete model description and validation.
The model provides a steady-state, self-consistent, three-dimensional MHD solution for the hot corona and accelerated stellar wind (in the reference frame rotating with the star), assuming the given, data-driven boundary conditions. Thus, it provides the three-dimensional distribution of the plasma density, temperature, velocity and magnetic field (the complete set of MHD plasma properties). With all the plasma parameters defined everywhere, we can also deduce the plasma frequency, $\omega_p$, at each cell, which enables us to track the refraction of the radio waves through the model domain. In this study, we use a simple, solar-like dipole field of $10~G$, and solar values for the radius, mass, and rotation period. As the baseline for our investigation, we choose to study a Sun-like star, but we also study a case with a stellar dipole field of $100~G$ representing a moderately active star or a non-active M-dwarf star. As we demonstrate in Section~\ref{Results}, this simplified setting might be sufficient to qualitatively capture the main radio modulation effects regardless of the particular star we use. The strength of the planetary field is chosen to be $0.3~G$ (Earth-like) and $1~G$ (Jupiter-like), with semi-major axes, $a$, of $6,~9,~12,$ and $15~R_\star$ located along the $x=0$ axis. These distances translate to $0.028,~0.042,~0.056,$ and $0.070~AU$. The planet is embedded as a second boundary condition as described in \cite{Cohen11}, where we use a planetary boundary number density of $10^7$~cm$^{-3}$ and a planetary boundary temperature of $10^4$~K. These values produce a thermal outflow from the planet in the range of $10^6-10^7~g~s^{-1}$, which is much lower than observed in hot Jupiters \citep[$10^{10}$~g~s$^{-1}$, e.g.,][]{Vidal-Madjar03,Murray-Clay09,Linsky10}, but is sufficiently high to modulate the background coronal density (a stronger outflow will only intensify the modulations).
For reference, the escape rate from Saturn is estimated to be between $10^2-10^4~g~s^{-1}$ \citep{Glocer07}. Future studies that focus on specific planetary systems will require a more detailed planetary and outflow description. Such details could be obtained by coupling the coronal model described here with a model for the planetary magnetosphere \citep[such as code coupling in the Space Weather Modeling Framework, see][]{Toth.etal:12}. We require at least 10 grid cells across the planetary body in order to resolve it properly. Therefore, we use grid refinement with very high resolution around the planet so that the grid size near the planet is $\Delta x \le 0.01~R_\star$. In cases where the planet is closer to the star, the initial spherical grid refinement is sufficient. When the planet is further out from the star, we add an additional ring of high resolution along the orbit of the planet. Due to the grid limitations, we set the planet size to be $0.3~R_\star$, which is roughly three Jupiter radii. We performed several tests showing that a smaller planet size would only require much higher resolution around it; the results were similar down to $R_p=0.15~R_\star$, with the radio modulations remaining at a similar magnitude. This is because the modulations are also produced by the plasma surrounding the planet, not only by the planet itself. We limit ourselves to the case of steady-state solutions, with a stationary planet, that are viewed from different angles within the orbital plane to mimic the orbital phase. Nevertheless, the simulated radio wave modulations from these static cases can only be enhanced by time-dependent effects due to the extra contributions by the dynamic interaction of the planetary magnetosphere with the stellar corona. Thus, here we provide a lower limit for the radio wave modulations.
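As a small cross-check of the orbital separations quoted above (a sketch; the solar-radius-to-AU conversion factor is an assumed value, not taken from the text):

```python
R_STAR_AU = 0.00465047  # assumed: one solar radius in AU

separations_rstar = (6, 9, 12, 15)
separations_au = [round(a * R_STAR_AU, 3) for a in separations_rstar]
# reproduces the 0.028, 0.042, 0.056, 0.070 AU quoted in the text
assert separations_au == [0.028, 0.042, 0.056, 0.070]
```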
It is possible that the modulations of the radio waves induced by the planet have a similar magnitude to the stellar variation of the background radio flux, just like a dip in the visible flux might be attributed to a starspot instead of a planet transit. \subsection{Synthetic Radio Images} \label{RadioImages} A new tool to create synthetic radio images has been recently added to {\it BATS-R-US} \citep{Benkevitch10,Benkevitch12,Moschou.etal:18}. The tool accounts for the free-free Bremsstrahlung radiation that is created in the corona, and propagates through the non-uniform density of the circumstellar medium \citep[e.g.,][]{Kundu:65book,Oberoi.etal:11,Casini.etal:17,Mohan.Oberoi:17}. The wave refraction depends on the local plasma density and the wave frequency. Thus, the radio waves of a given frequency propagate along curved rather than straight lines. The new radio image tool performs ray-tracing of the curved (i.e., not straight line-of-sight) propagation of the waves for a particular frequency, and calculates the integrated intensity of the radio wave at the end of the ray path, at a given pixel on the observing plane. The collection of all the pixels provides a radio image for the particular frequency. The intensity of each pixel, $I_\nu$, for a given frequency, $\nu$, is the integral over the emissivity along the ray. Thus, the intensity is given by \begin{equation} I_\nu=\int B_\nu(T) \kappa_\nu ds. \end{equation} For Bremsstrahlung emission, where $h\nu \ll k_BT$, the Planckian black-body intensity reduces to the Rayleigh-Jeans form \citep{Karzas.Latter:61} \begin{equation} B_\nu(T) = \frac{2k_BT_e\nu^2}{c^2} \end{equation} where $k_B$ is the Boltzmann constant, $T_e$ the electron temperature, and $c$ the speed of light.
The absorption coefficient, $\kappa_\nu$, is \begin{equation} \kappa_\nu =\frac{n^2_ee^6}{\nu^2(k_BT_e)^{3/2} m^{3/2}_e c}\langle g_{ff}\rangle \end{equation} Here $n_e$ is the electron number density, $m_e$ is the electron mass, $e$ is the electron charge, and $\langle g_{ff}\rangle$ is the Gaunt factor, which is assumed to be equal to 10 \citep{Karzas.Latter:61}. The ray-tracing path for a given angular frequency, $\omega=2\pi \nu$, is defined by the radio wave refraction from one grid cell to the next one. The index of refraction is related to the dielectric permittivity, $\epsilon$, and is given by \begin{equation} n^2=\epsilon=1-\frac{\omega^2_p}{\omega^2}, \end{equation} where $\omega_p$, with $\omega^2_p=4\pi e^2 n_e/m_e$, is the plasma frequency (in rad~s$^{-1}$). Assuming plasma quasi-neutrality, where the densities of electrons and ions are the same, we can write the mass density as $\rho=m_pn_e$ with $m_p$ being the proton mass. Thus we have \begin{equation} \label{Refraction} \epsilon=1-\frac{\rho}{\rho_c}, \end{equation} where $\rho_c=m_pm_e\omega^2/(4\pi e^2)$ is the critical plasma density at which the refraction index vanishes and the wave can no longer propagate. Eq.~\ref{Refraction} demonstrates that higher-frequency radio waves can penetrate deeper into the solar atmosphere, where the density is higher. Thus, the synthesized images for higher frequencies capture more detailed structures, such as active regions, in contrast to the images synthesized at lower frequencies, which capture the lower-density regions at the top of the corona \citep[as demonstrated in][]{Moschou.etal:18}. \section{Results} \label{Results} Our simulations provide the average radio flux intensity over the radio image (for a given frequency and orbital phase). The intensity modulation is estimated by normalizing the flux for each case, frequency, and phase by the associated flux obtained in the case where there is no planet embedded in the simulation (i.e., the ambient flux).
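As a concrete illustration of the cutoff criterion of Eq.~\ref{Refraction} (a sketch in Gaussian units; the numerical constants are assumed values, not taken from the text), the plasma frequency and the critical electron density follow directly from $\omega^2_p=4\pi e^2 n_e/m_e$:

```python
import math

# Gaussian (cgs) constants -- assumed values
E_ESU = 4.8032e-10   # electron charge [esu]
M_E   = 9.1094e-28   # electron mass [g]

def plasma_freq_hz(n_e):
    # nu_p = omega_p / (2 pi), with omega_p^2 = 4 pi e^2 n_e / m_e
    return math.sqrt(4 * math.pi * E_ESU**2 * n_e / M_E) / (2 * math.pi)

def critical_density(nu_hz):
    # electron density [cm^-3] at which epsilon = 1 - omega_p^2/omega^2 = 0
    return math.pi * M_E * nu_hz**2 / E_ESU**2

# the familiar rule of thumb nu_p ~ 8.98 kHz * sqrt(n_e [cm^-3])
assert abs(plasma_freq_hz(1.0) - 8980) / 8980 < 0.01
# a 30 MHz wave is reflected wherever n_e exceeds ~1.1e7 cm^-3
assert 1.0e7 < critical_density(30e6) < 1.2e7
```

This makes explicit why the low-frequency bands probe only the tenuous upper corona: their critical densities are reached far above the stellar surface.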
It is assumed that in order for the modulations to be observable, the ambient flux itself should be of an observable magnitude (see Section~\ref{Discussion}). Figure~\ref{fig:f1} shows the radio flux intensity for the different frequencies as obtained from our synthetic radio images for the ambient stellar corona, with stellar magnetic fields of $10~G$ and $100~G$, and without the planet. Overall, the synthetic flux intensities match rather well the observed flux densities for the quiet Sun as observed from the Earth \cite[see Figure 1 in][]{Zarka07}. The deviations are clearly due to the lack of any active regions in our dipolar stellar field, which provide additional radio flux, especially in the higher frequency range. For the same reason, the synthetic radio flux becomes flat above $750~MHz$. The flux in the case of a stellar field of $100~G$ is higher because of an overall increase in the coronal plasma density since the plasma is confined in coronal loops with larger scale-heights, compared to the $10~G$ dipole case. Figure~\ref{fig:f1} also shows how these synthetic radio spectra appear if the source, which is solar-like, is located at $10~pc$. Figure~\ref{fig:f2} shows a three dimensional view of the solutions with planetary magnetic fields of $0.3$ and $1~G$, a stellar dipole field of $10G$, and different orbital separations. The plots are colored with the number density, where selected magnetic field lines are also shown. The star and the planet are shown as red and blue spheres, respectively. The most notable feature in the plots is that the ambient coronal density is modulated by the planet and its magnetosphere. When the planet is closer, at $0.028$ or $0.042~AU$, the lower-density planetary magnetosphere takes over a higher coronal density region, while in the cases of further orbital separations, at $0.056$ and $0.070~AU$, the ambient coronal density is lower than the density of the planetary magnetosphere. 
It should be noted that the main driver for the modulations in the radio intensity is the density contrast between the ambient corona and the planetary densities at the planetary orbit. This contrast leads to a change in the local plasma frequency, and as a result, the refraction of the radio wave, as well as the radio flux intensity, are modulated. Thus, if the planet significantly modifies the density in a region, it will significantly affect the radio wave refraction, and the question is by how much. Alternatively, if the density contrast between the corona and the planetary magnetosphere is small, we expect weak modulation of the radio flux intensity. Figures~\ref{fig:f3} and \ref{fig:f4} show a three dimensional view, as well as synthetic radio images for frequencies of $30~MHz$ and $250~MHz$ for cases with a planetary field of $0.3~G$ and a planet located at $0.028~AU$ and $0.070~AU$, respectively. The plots are displayed from four viewpoints on the simulation domain, which represent four particular phases along the planet's orbit. In the first viewpoint, labeled ``L'', the planet appears to the left of the star (pre-transit), in the second viewpoint, labeled ``T'', the planet is transiting the star, in the third viewpoint, labeled ``R'', the planet appears to the right of the star (post-transit), and in the fourth viewpoint, labeled ``E'', the planet is being eclipsed by the star. The local radio flux intensity is in units $[W\;m^{-2}\;Hz^{-1}]$. The figures clearly show that the modulations of the ambient stellar corona plasma by the planet are reflected in the radio images, and that the modulations are different for the different orbital phases. Figure~\ref{fig:f5} shows synthetic light curves of the intensity modulation of the radio flux as a function of orbital phase. The transit phase is located in the middle of each plot (at a phase of $0.5$), while the planetary eclipse is located at phase $0$.
Each curve represents the relative intensity modulation at a given frequency, with respect to the flux intensity of that frequency for the no-planet, ambient case. The sampled frequencies are $10,~30,~100,~250,~500,~750~MHz$ and $1,~10~GHz$. This range covers the potential frequencies that could be used to detect the planet modulations. It can be seen that in most cases, the low frequency range of $10-30~MHz$ is visibly modulated by the planet. Some weaker modulations occur in the $100~MHz$ and above bands. \subsection{Short Orbits Intensity Modulations} When the planet resides at $a=0.028~AU$, the intensity modulations are driven by the strong star-planet interaction since the planet is located at or close to the Alfv\'en surface (see top panels in Figure~\ref{fig:f2}). While the majority of the background stellar radio emission comes from the dense, helmet streamer regions that face the observer, in the case of $a=0.028~AU$, the edge of the helmet streamer is disrupted by the interaction with the planet. This disruption could involve plasma compression, mixing of coronal and magnetospheric plasmas, as well as the creation of plasma cavities. As a result, the radio intensity in different bands can be modulated by this local interaction region. Since the helmet streamer emissions are blocked during transit, the contribution to the background emission depends on the emissions generated at the interaction region. Looking closely at the synthetic radio images and Figure~\ref{fig:f2}, we find that for the case of a planetary field of $1~G$, the intensity contribution of the interaction region in both the $10~MHz$ and $30~MHz$ bands is greater than the ambient intensity from the helmet streamers. Thus, there is a significant intensity increase in these bands during transit. A similar trend is found in the $30~MHz$ intensity for the case with a planetary field of $0.3~G$.
However, the intensity contribution in the $10~MHz$ band with this field strength is found to be negligible compared to the ambient helmet streamer intensity, which is blocked during transit. Thus, the overall trend we find for this case is an intensity drop in the $10~MHz$ band during transit. In the higher frequency bands, even in the $10~GHz$ band, we find a small but noticeable drop of about 10\% in the intensity due to the blockage of the helmet streamers by the planet. An interesting feature of the $a=0.028~AU$, weak planetary field case is that the emission peaks are slightly shifted from the transit point, by $\sim15$ degrees. This shift seems to arise from a late but still strong interaction between the planetary magnetosphere and the stellar corona beyond the transit point, and from the fact that the interaction between the planetary and stellar plasma occurs at or within the Alfv\'enic point. This asymmetry is not visible in any of the other cases. \subsection{Mid-range Orbits Intensity Modulations} For the $a=0.042~AU$ cases, we find trends similar to those of the $a=0.028~AU$ case, but with a significantly reduced magnitude, within the 10\% range of intensity increase or decrease. While the planet and the helmet streamers can still interact at this orbit, the interaction is much weaker than in the case with $a=0.028~AU$. For the $a=0.056~AU$ cases, almost no signs of star-planet interaction are visible, with the exception of a small increase in the $10~MHz$ intensity. This increase is slightly larger for the case with the stronger planetary field, where the magnetosphere is larger than in the $0.3~G$ case and the plasma compression at the magnetopause is stronger. Interestingly, the transit shadowing of the helmet streamer emissions in the $100~MHz$ band is still visible, with a decrease of almost 20\% during transit. This particular feature, which has significant modulation beyond the very low frequency range, could potentially be observed. 
\subsection{Long Orbits Intensity Modulations} At the larger orbital separation of $a=15~R_\star$ with a planetary field of $0.3~G$, there is a significant enhancement in the $10~MHz$ band. This enhancement is due to a cavity created in front of the planet (see bottom-left panel of Figure~\ref{fig:f2}), which seems to compress the top of the helmet streamer and increase the emissions in this band. This cavity does not appear in the case with a stronger planetary field, due to the increase in plasma density near its magnetopause. The intensity of the $30~MHz$ band in the case of the weaker planetary field is reduced in the form of two ``wings" in the light curve. This pattern suggests that the $30~MHz$ band traces the flanks or edge of the planetary magnetosphere, which shadows the $30~MHz$ ambient emissions. The magnetosphere for the case with a planetary field of $1~G$ is larger. As a result, the density of the magnetospheric plasma in this case is slightly lower, leading to a smaller enhancement in the $10~MHz$ band. The ambient emissions in the $30~MHz$ band are still shadowed by the planet, but the magnetospheric impact on the shape of the light curve is not noticeable in the case of the stronger planetary field. The transit intensity drop in the $100~MHz$ band is still noticeable at this longer orbit. \section{Discussion} \label{Discussion} As shown in recent papers \citep[e.g.,][]{Zarka07,BurkhartLoeb17,Turnpenney18}, the detection of exoplanetary radio emissions as a source of the radio signal (due to auroral emission) seems to be challenging due to the very low flux in the low-frequency range and due to the fact that ground-based observations cannot be made for frequencies below the ionospheric cutoff frequency of $10~MHz$. For the ideal Sun-like cases presented here (as seen from Figure~\ref{fig:f1}), the fluxes are obviously too small for detection. 
However, there are known radio sources, even solar-like stars \citep{Villadsen14}, that are observable and potentially could host a planet. For example, HD 225239, a G2V star, is $18.4~pc$ away from us and has a radio flux of $0.18~mJy$ in the $8.44~GHz$ band \citep{Wendker95}. In addition, recent radio observations of stars with known exoplanets have revealed feasible radio fluxes, for example, an intensity of up to a few $mJy$ at $150~MHz$ in HD 189733 and HD 209458 \citep[GMRT,][]{LecavelierDesEtangs11}, and an intensity of a few $mJy$ at $1~GHz$ from Proxima Centauri \citep[The Australia Telescope and Anglo-Australian Telescope,][]{Slee03}. Our results show that close-in exoplanets could modulate the ambient coronal radio emissions by a significant amount (10\% or more). This means that {\it if the ambient flux itself could be observed, so could the modulations} in many cases. The most significant modulations in the stationary cases are seen at low frequencies ($10-100~MHz$). This is not surprising, since these frequencies are associated with emissions from coronal regions with lower densities, which the planet disturbs the most (higher frequencies are emitted from regions much closer to the stellar surface). From our numerical simulations, we were able to identify two different mechanisms that contribute to the modulation of the stellar radio corona. Our results indicate that the largest modulations of the ambient radio emissions are created by the strong star-planet interaction and the interaction of the planet with the stellar helmet streamers. The other main modulation is created by the shadowing of the emitting streamer regions during transit. This shadowing is visible in a significant manner even in the higher frequency range, and it is probably the main feature that could be observed with current radio observing facilities. 
Our results also indicate that the most notable modulations occur either when the planet is very close to the star and the star-planet interaction is strong, or when the planet is located at rather long orbits, where it dominates the low-density ambient plasma but still affects the background emission (this effect is probably reduced with greater orbital separation). The modulations are smallest for the intermediate cases, where the ambient plasma density is still quite high but the star-planet interaction is weaker. In our investigation, we use a limited parameter space that covers a solar-like stellar magnetic field, two possible planetary fields, and the semi-major axis. In order to extend our parameter space, we also look at the impact of the planetary field polarity and of the magnitude of the stellar field. In both cases, we only test the cases where the planet is very close to or far from the star ($6$ and $15~R_\star$), with a planetary field strength of $1~G$. \subsection{The Effect of the Planetary Magnetic Field Polarity} \label{PMeffect} Figure~\ref{fig:f6} compares the modulations for the cases where the planetary field is $\pm1~G$ (with respect to the stellar dipole polarity). It can be clearly seen that when the planet is further away from the star, the results are not affected at all by the polarity of the planetary field. However, when the planet is close to the star, we find a greater difference between the two cases. The planetary field polarity has a stronger impact on the plasma density profile near the planet at closer orbits, while the effect is significantly reduced at further orbits. This happens because at closer orbits, where the planet is located at or below the Alfv\'en point, the star-planet interaction is more sensitive to the polarity of the planetary field. 
In particular, the strong enhancements in the $10$ and $30~MHz$ bands for the case where the planetary field polarity is the same as that of the star are generated by local plasma enhancements through the star-planet interaction. When the field polarity is opposite, magnetospheric plasma is allowed to escape, so the local density enhancements are reduced, resulting in a suppression of the intensity enhancements in the low frequency bands. \subsection{The Effect of the Stellar Magnetic Field Strength} \label{StrongB} Finally, we investigate how a stronger stellar magnetic field affects the modulations of the coronal radio emission induced by the planet. This is an important factor when considering M-dwarf stars, which are known to potentially have extremely strong magnetic fields of up to a few kG \citep{Reiners.Basri:10}, and which are much more magnetically active than the Sun. These strong stellar fields could potentially produce strong coronal radio emissions \citep[such as in V374 Peg,][]{Llama18}. We repeat the simulations with a stellar dipole field of $100~G$, and the comparison is shown in Figure~\ref{fig:f7}. In general, increasing the stellar dipole strength leads to larger coronal loops, so the helmet streamers extend to a greater distance compared to the case with a weaker stellar dipole field. The coronal plasma is trapped in these larger closed loops, resulting in an overall enhancement of the coronal density and a reduction of the density drop with radius. When a planet resides at a certain distance from the star, it is surrounded by plasma of a certain density; by increasing the stellar field strength, we effectively move the planet to a higher density region than before. This behavior is clearly seen in Figure~\ref{fig:f7}. The planet in the case of $a=0.028~AU$ is initially located (with a stellar field of $10~G$) at the top of the helmet streamers, and experiences a strong interaction with the lower density plasma there. This interaction is visible in the $10-30~MHz$ bands. 
When we increase the stellar field to $100~G$, the planet is surrounded by, and interacts with, a much denser plasma. As a result, the significant modulations in the low frequency range are reduced, and the modulations become more visible in the $100~MHz$ band. When the planet is at $a=0.070~AU$, the increase in the stellar field strength effectively moves the planet inwards, and the intensity modulation trends resemble those for the weaker stellar field with a planet at $a=0.028~AU$ (top-right panel of Figure~\ref{fig:f2}). \subsection{Realistic Radio Observations, Temporal Modulations, and Simulations of Real Planetary Systems} Our simplified approach here uses an idealized, dipolar stellar magnetic field and a static, steady-state solution for the structure of the stellar corona with a planet embedded in it. The phase variations are mimicked by viewing the static, three-dimensional solution from different angles. A number of factors, if included, could immediately provide additional variability of the radio intensity. In reality, the structure of the stellar magnetic field and the stellar corona is more complex than the axisymmetric dipolar geometry we use here, and the corona, which hosts the planet, has sectors with different plasma properties along the planetary orbit. Thus, one should expect variations in the plasma density along the planetary orbit as the planet crosses from one plasma sector to another. For short orbits of a few days, the size of the coronal sectors should be of the order of the spatial coverage of a large helmet streamer over the orbital plane. Such variations could be captured by modeling more realistic stellar systems. Simulations using ZDI magnetic maps \citep{Donati.etal:89} have been performed with our code \citep[e.g.,][]{Cohen.etal:10,Cohen.etal:14,Garraffo.etal:16,Alvarado-Gomez.etal:16a,Alvarado-Gomez.etal:16b,Garraffo17,Pognan18} to simulate the coronae and winds of specific stars. 
Synthetic radio images could be produced for these more realistic coronal solutions in the same manner as the results presented here. The orbital motion of the planet with respect to the ambient coronal plasma, not included in our static simulation, could be included in a time-dependent model as presented in \cite{Cohen11a,Cohen11b}. We expect that the implementation of the planetary orbital motion will enhance the star-planet interaction, and potentially increase the modulations of the radio intensity, depending on the particular geometry of the planetary and stellar magnetic fields (see discussion in Section~\ref{PMeffect}). An additional factor to consider in a realistic case is the temporal variation of the radio signal itself. Variations in the intensity of the radio signal can be due to photospheric convective and diffusive motions, coronal waves, the stellar wind, and stellar rotation. All these processes create temporal density variations in the medium through which the radio waves propagate. These variations extend over a wide range of temporal scales --- seconds, minutes, hours, and days. Of course, it is important to identify these variations in the radio signal in order to isolate the variations that are associated with the planetary orbital motion, which should be of the order of a few hours or more for a planetary orbit of a few days. It is also important to remember that a radio observation is not the typical time-series of a point source: the emission is spatially diffuse, and it is sometimes hard to identify the exact source location due to refraction and scattering effects. It is reasonable to assume that short time variations, of the order of less than an hour, are associated with plasma variations in the low corona and close to the photosphere. Radio emissions of the dense plasma associated with these regions appear in the higher frequency range of the radio spectrum, at $1~GHz$ or above. 
Thus, we expect the ambient large-scale variations of the radio signal, originating from higher altitudes and in the range below $1~GHz$, to have time scales longer than minutes. The coronal helmet streamers tend to persist for days and even months. Therefore, signal variations in the $10-100~MHz$ range, which originate from the helmet streamers, should have similarly long timescales compared to the variation timescales at higher frequencies. Stellar flares can also dramatically disrupt the coronal density structure and affect the radio signal. There is an active search for such radio signals in an attempt to observe stellar Coronal Mass Ejections \citep[CMEs, e.g.,][]{Villadsen17,Crosley18}. Of course, stellar flares can mask the radio modulation created by an exoplanet. However, stellar flares are typically visible at other wavelengths, and a different, flare-like emission mechanism would be easy to distinguish by using, for example, polarization measurements. Therefore, we should have a good indication of whether or not a flare occurred during a radio exoplanet observation, so that period could be excluded to avoid uncertainties. Our new radio tool could provide predictions for the frequency range at which it is most likely to detect a signal for specific targets. Such predictions can be used by observational radio facilities, such as LOFAR\footnote{\url{http://www.lofar.org}}, MWA\footnote{\url{http://www.mwatelescope.org/}}, Effelsberg\footnote{\url{https://www.mpifr-bonn.mpg.de/en/effelsberg}} and the VLA\footnote{\url{https://public.nrao.edu/telescopes/vla/}}. It is important to note that here we investigate planets with a very short orbital period in order to maximize the modulation effect, and we find modulations of 50\% or more in some cases. While we expect that planets with a larger orbital separation will have a much smaller modulation effect, the modulations can still be of the order of a few percent. 
In addition, as described in Section~\ref{StrongB}, in systems where the stellar field is much stronger, such as M-dwarf systems, the reduction due to the orbital separation can be compensated by the increase in stellar field strength and coronal density. \section{Conclusions} \label{Conclusions} We use the modeling tool presented by \citet{Moschou.etal:18}, which provides synthetic radio images of the free-free bremsstrahlung radiation of stellar coronae, to calculate the modulations of this coronal radio emission by a close-orbit exoplanet. The source of the modulations is the modification of the radio wave refraction pattern as a result of the change in the ambient plasma density by the planet. We find that the absolute magnitude of the modulation is significant, and can reach above 100\% in the $10-100~MHz$ bands and between 2-10\% at frequencies above $250~MHz$ in some cases. Thus, our model shows that exoplanet radio transit signals could be detectable if the ambient coronal radio emissions are observable, potentially even in the higher-frequency radio bands. We find that the intensity modulation is driven by the star-planet interaction for short-orbit planets, and by the density contrast between the planet and the ambient coronal plasma for longer-orbit planets. We find that the strength of the stellar magnetic field affects the modulation, while the polarity of the planetary magnetic field matters only for the short-orbit cases. We plan to apply the new radio tool to specific planetary systems. These simulations will include a realistic stellar magnetic field and the relative motion between the star and the planet. Thus, such simulations could provide predictions for exoplanetary radio searches in these systems. \acknowledgments We thank the anonymous referee for useful comments and suggestions. The work presented here was funded by NASA Living with a Star grant NNX16AC11G and NASA NExSS grant NNX15AE05G. 
JDAG was supported by Chandra grants AR4-15000X and GO5-16021X. Simulation results were obtained using the Space Weather Modeling Framework, developed by the Center for Space Environment Modeling, at the University of Michigan with funding support from NASA ESS, NASA ESTO-CT, NSF KDI, and DoD MURI. Simulations were performed on the Massachusetts Green High Performance Computing Center (MGHPCC) cluster.
\section{The algorithm} \label{sec:algorithm} The reconstruction of the mass of the $\PHiggs$ boson pair is based on maximizing the likelihood function: \begin{align} & \mathcal{P}(\bm{p}^{\ensuremath{\textrm{vis}}(1)},\bm{p}^{\ensuremath{\textrm{vis}}(2)},\bm{p}^{\ensuremath{\textrm{vis}}(3)},\bm{p}^{\ensuremath{\textrm{vis}}(4)};\ensuremath{p_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{p_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}}|m_{\textrm{X}}) = \frac{32\pi^{4}}{s} \, \int \, d\Phi_{n} \, \cdot \nonumber \\ & \quad \delta\left( \left(\sum_{i=1}^{4} \, \hat{E}_{\Pgt(i)}\right)^{2} - \left(\sum_{i=1}^{4} \, \bm{\hat{p}}^{\Pgt(i)}\right)^{2} - m_{\textrm{X}}^{2} \right) \delta\left( \ensuremath{\hat{p}_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}} + \sum_{i=1}^{4} \ensuremath{\hat{p}_{\textrm{x}}}\xspace^{\Pgt(i)} \right) \cdot \delta\left( \ensuremath{\hat{p}_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} + \sum_{i=1}^{4} \ensuremath{\hat{p}_{\textrm{y}}}\xspace^{\Pgt(i)} \right) \cdot \nonumber \\ & \quad \vert \ensuremath{\textrm{BW}}^{(1)}_{\Pgt} \vert^{2} \cdot \vert \mathcal{M}^{(1)}_{\Pgt\to\cdots}(\bm{\hat{p}}) \vert^{2} \cdot W(\bm{p}^{\ensuremath{\textrm{vis}}(1)}|\bm{\hat{p}}^{\ensuremath{\textrm{vis}}(1)}) \, \cdot \, \vert \ensuremath{\textrm{BW}}^{(2)}_{\Pgt} \vert^{2} \cdot \vert \mathcal{M}^{(2)}_{\Pgt\to\cdots}(\bm{\hat{p}}) \vert^{2} \cdot W(\bm{p}^{\ensuremath{\textrm{vis}}(2)}|\bm{\hat{p}}^{\ensuremath{\textrm{vis}}(2)}) \, \cdot \nonumber \\ & \quad \vert \ensuremath{\textrm{BW}}^{(3)}_{\Pgt} \vert^{2} \cdot \vert \mathcal{M}^{(3)}_{\Pgt\to\cdots}(\bm{\hat{p}}) \vert^{2} \cdot W(\bm{p}^{\ensuremath{\textrm{vis}}(3)}|\bm{\hat{p}}^{\ensuremath{\textrm{vis}}(3)}) \, \cdot \, \vert \ensuremath{\textrm{BW}}^{(4)}_{\Pgt} \vert^{2} \cdot \vert \mathcal{M}^{(4)}_{\Pgt\to\cdots}(\bm{\hat{p}}) \vert^{2} \cdot W(\bm{p}^{\ensuremath{\textrm{vis}}(4)}|\bm{\hat{p}}^{\ensuremath{\textrm{vis}}(4)}) \, \cdot \nonumber \\ & \quad 
W_{\ensuremath{\textrm{rec}}}( \ensuremath{p_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{p_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} | \ensuremath{\hat{p}_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{\hat{p}_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} ) \label{eq:likelihood} \end{align} with respect to the parameter $m_{\textrm{X}}$, the mass of the postulated heavy particle $\textrm{X}$ that decays into a pair of $\PHiggs$ bosons. We refer to the electron, muon, or hadrons produced in each $\Pgt$ decay as the ``visible'' $\Pgt$ decay products. Their energy (momentum) is denoted by the symbol $E_{\ensuremath{\textrm{vis}}(i)}$ ($\bm{p}^{\ensuremath{\textrm{vis}}(i)}$), where the index $i$ ranges between $1$ and $4$. The symbol $E_{\Pgt(i)}$ ($\bm{p}^{\Pgt(i)}$) denotes the energy (momentum) of the $i$-th $\Pgt$ lepton. Bold letters represent vector quantities. The true values of energies and momenta are indicated by a hat, while symbols without a hat represent the measured values. We use a Cartesian coordinate system, the $z$-axis of which is defined by the proton beam direction. The symbol $d\Phi_{n} = \prod_{i}^{n} \, \frac{d^{3}\bm{p}^{(i)}}{(2\pi)^{3} \, 2 E_{(i)}}$ denotes the differential $n$-particle phase-space element, where $n$ refers to the number of particles in the final state. The symbol $\vert \ensuremath{\textrm{BW}}^{(i)}_{\Pgt} \vert^{2} \cdot \vert \mathcal{M}^{(i)}_{\Pgt\to\cdots}(\bm{\hat{p}}) \vert^{2}$ denotes the squared modulus of the ME for the decay of the $i$-th $\Pgt$ lepton. The $\delta$-function $\delta\left( \left(\sum_{i=1}^{4} \, \hat{E}_{\Pgt(i)}\right)^{2} - \left(\sum_{i=1}^{4} \, \bm{\hat{p}}^{\Pgt(i)}\right)^{2} - m_{\textrm{X}}^{2} \right)$ enforces the condition that the mass of the system of four $\Pgt$ leptons equals the value of the parameter $m_{\textrm{X}}$ given on the left-hand side of the equation. 
The functions $W(\bm{p}^{\ensuremath{\textrm{vis}}(i)}|\bm{\hat{p}}^{\ensuremath{\textrm{vis}}(i)})$ and $W_{\ensuremath{\textrm{rec}}}( \ensuremath{p_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{p_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} | \ensuremath{\hat{p}_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{\hat{p}_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} )$ are referred to as ``transfer functions'' (TF). They quantify the experimental resolutions with which the momenta of particles are measured in the detector. The nomenclature $W(\bm{p}|\bm{\hat{p}})$ has the following meaning: The value of the function $W(\bm{p}|\bm{\hat{p}})$ represents the probability density to observe the measured momentum $\bm{p}$, given that the true value of the momentum is $\bm{\hat{p}}$. The function $W(\bm{p}^{\ensuremath{\textrm{vis}}(i)}|\bm{\hat{p}}^{\ensuremath{\textrm{vis}}(i)})$ represents the resolution for measuring the momentum of the visible $\Pgt$ decay products, while the function $W_{\ensuremath{\textrm{rec}}}( \ensuremath{p_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{p_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} | \ensuremath{\hat{p}_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{\hat{p}_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} )$ quantifies the resolution for measuring the momentum, in the $x$-$y$ plane, of the hadronic recoil. The hadronic recoil is defined as the vectorial sum of all particles in the event that do not originate from the decay of the two $\PHiggs$ bosons. 
Conservation of momentum in the plane transverse to the beam direction implies that the components $\ensuremath{\hat{p}_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}}$ and $\ensuremath{\hat{p}_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}}$ of its true momentum are equal to the negative sum of the momentum components $\ensuremath{\hat{p}_{\textrm{x}}}\xspace^{\Pgt(i)}$ and $\ensuremath{\hat{p}_{\textrm{y}}}\xspace^{\Pgt(i)}$ of the four $\Pgt$ leptons, \begin{equation*} \ensuremath{\hat{p}_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}} = -\left( \sum_{i=1}^{4} \, \ensuremath{\hat{p}_{\textrm{x}}}\xspace^{\Pgt(i)} \right) \quad \mbox{ and } \quad \ensuremath{\hat{p}_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} = -\left( \sum_{i=1}^{4} \, \ensuremath{\hat{p}_{\textrm{y}}}\xspace^{\Pgt(i)} \right) \, , \end{equation*} as enforced by the two $\delta$-functions $\delta\left( \ensuremath{\hat{p}_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}} + \sum_{i=1}^{4} \ensuremath{\hat{p}_{\textrm{x}}}\xspace^{\Pgt(i)} \right)$ and $\delta\left( \ensuremath{\hat{p}_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} + \sum_{i=1}^{4} \ensuremath{\hat{p}_{\textrm{y}}}\xspace^{\Pgt(i)} \right)$ in the integrand. The TF for the visible $\Pgt$ decay products and for the hadronic recoil are taken from Ref.~\cite{SVfitMEM}. 
The resolution on the $\ensuremath{p_{\textrm{T}}}\xspace$ of $\ensuremath{\Pgt_{\textrm{h}}}\xspace$ is modelled by the function: \begin{equation} W_{\ensuremath{\textrm{h}}}( \ensuremath{p_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}} | \ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}} ) = \begin{cases} \mathcal{N} \, \xi_{1} \, \left( \frac{\alpha_{1}}{x_{1}} - x_{1} - \frac{x - \mu}{\sigma} \right)^{-\alpha_{1}} \, & \mbox{if } x < x_{1} \\ \mathcal{N} \, \exp\left( -\frac{1}{2} \, \left( \frac{x - \mu}{\sigma} \right)^{2} \right) \, & \mbox{if } x_{1} \leq x \leq x_{2} \\ \mathcal{N} \, \xi_{2} \, \left( \frac{\alpha_{2}}{x_{2}} - x_{2} + \frac{x - \mu}{\sigma} \right)^{-\alpha_{2}} \, & \mbox{if } x > x_{2} \, , \end{cases} \label{eq:tf_tauToHadDecays_pT} \end{equation} where $x = \ensuremath{p_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}} / \ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}}$ and we use the values $\mu = 1.0$, $\sigma = 0.03$, $x_{1} = 0.97$, $\alpha_{1} = 7$, $x_{2} = 1.03$, and $\alpha_{2} = 3.5$ for its parameters, while the $\eta$, $\phi$, and mass of the $\ensuremath{\Pgt_{\textrm{h}}}\xspace$ are assumed to be reconstructed with negligible experimental resolution. The latter assumption is also made for the $\ensuremath{p_{\textrm{T}}}\xspace$, $\eta$, and $\phi$ of electrons and muons. 
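The piecewise resolution model above can be sketched in a few lines of Python. A few choices, not spelled out in the text, are made explicit here as assumptions: the constants $\xi_{1}$ and $\xi_{2}$ are fixed by requiring continuity at $x_{1}$ and $x_{2}$, the overall normalization $\mathcal{N}$ is set to unity, and the sign of the $(x-\mu)/\sigma$ term in the upper power-law tail is taken such that the tail falls off at large $x$. The function name is illustrative.

```python
import math


def tau_h_pt_tf(x, mu=1.0, sigma=0.03, x1=0.97, alpha1=7.0, x2=1.03, alpha2=3.5):
    """Transfer function W_h as a function of x = pT_vis / pT_vis_true.

    Gaussian core with power-law tails on both sides. Assumptions: xi1 and
    xi2 are chosen so that the function is continuous at x1 and x2, and the
    overall normalization N is set to 1.
    """
    gauss = lambda y: math.exp(-0.5 * ((y - mu) / sigma) ** 2)
    # Lower tail for x < x1, as written in the text.
    low = lambda y: (alpha1 / x1 - x1 - (y - mu) / sigma) ** (-alpha1)
    # Upper tail for x > x2; sign chosen so the tail decreases at large x.
    high = lambda y: (alpha2 / x2 - x2 + (y - mu) / sigma) ** (-alpha2)
    xi1 = gauss(x1) / low(x1)    # continuity at x1 (assumption)
    xi2 = gauss(x2) / high(x2)   # continuity at x2 (assumption)
    if x < x1:
        return xi1 * low(x)
    if x <= x2:
        return gauss(x)
    return xi2 * high(x)
```

With these choices, the function peaks at $x = \mu = 1$ and falls off monotonically on both sides, with the lower tail ($\alpha_{1} = 7$) steeper than the upper one ($\alpha_{2} = 3.5$).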
The momentum of the hadronic recoil is modelled by a two-dimensional normal distribution and assumed to be reconstructed with a resolution of $\sigma_{\textrm{x}} = \sigma_{\textrm{y}} = 10$~\ensuremath{\textrm{GeV}}\xspace on each of its components $\ensuremath{p_{\textrm{x}}}\xspace$ and $\ensuremath{p_{\textrm{y}}}\xspace$: \begin{align} W_{\ensuremath{\textrm{rec}}}( \ensuremath{p_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{p_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} | \ensuremath{\hat{p}_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{\hat{p}_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} ) = & \frac{1}{2\pi \, \sqrt{\vert V \vert}} \, \exp \left( -\frac{1}{2} \left( \begin{array}{c} \Delta\ensuremath{p_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}} \\ \Delta\ensuremath{p_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} \end{array} \right)^{T} \cdot V^{-1} \cdot \left( \begin{array}{c} \Delta\ensuremath{p_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}} \\ \Delta\ensuremath{p_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} \end{array} \right) \right) \, , \nonumber \\ \quad \mbox{ with } \quad V = & \left( \begin{array}{cc} \sigma_{x}^{2} & 0 \\ 0 & \sigma_{y}^{2} \end{array} \right) \, . \end{align} The number of particles in the final state, $n$, depends on how many $\Pgt$ leptons decay to electrons or muons and how many decay to hadrons. Following the formalism developed in Ref.~\cite{SVfitMEM}, we treat hadronic $\Pgt$ decays as two-body decays into a hadronic system $\ensuremath{\Pgt_{\textrm{h}}}\xspace$ and a $\Pnut$. Correspondingly, $n$ increases by $3$ for each $\Pgt$ lepton that decays to an electron or a muon and by $2$ units for each $\Pgt$ lepton that decays hadronically. Particles that are part of the hadronic recoil are treated as described in Section~2.2 of Ref.~\cite{SVfitMEM} and do not increase $n$. 
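Since the covariance matrix $V$ is diagonal, $\sqrt{\vert V \vert} = \sigma_{x}\sigma_{y}$ and the two-dimensional normal density factorizes into two one-dimensional normal densities. A minimal sketch of evaluating $W_{\ensuremath{\textrm{rec}}}$ (momenta in GeV; the function name is illustrative):

```python
import math


def recoil_tf(px, py, px_true, py_true, sigma_x=10.0, sigma_y=10.0):
    """W_rec for a diagonal covariance matrix V = diag(sigma_x^2, sigma_y^2).

    With V diagonal, the 2D normal density is the product of two 1D normal
    densities in Delta_px and Delta_py.
    """
    dx, dy = px - px_true, py - py_true
    norm = 1.0 / (2.0 * math.pi * sigma_x * sigma_y)  # 1 / (2 pi sqrt(|V|))
    return norm * math.exp(-0.5 * ((dx / sigma_x) ** 2 + (dy / sigma_y) ** 2))
```

At zero displacement this evaluates to $1/(2\pi\,\sigma_{x}\sigma_{y})$, and a one-sigma displacement in either component suppresses the density by a factor $e^{-1/2}$.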
The dimensionality of the integration over the phase-space element $d\Phi_{n}$ can be reduced by means of analytic transformations. Two (three) variables are sufficient to fully parametrize the kinematics of hadronic (leptonic) $\Pgt$ decays. Following Ref.~\cite{SVfitMEM}, we choose to parametrize hadronic $\Pgt$ decays by the variables $z$ and $\phi_{\ensuremath{\textrm{inv}}}$, and leptonic $\Pgt$ decays by the variables $z$, $\phi_{\ensuremath{\textrm{inv}}}$, and $m_{\ensuremath{\textrm{inv}}}$. The variable $z$ corresponds to the fraction of $\Pgt$ lepton energy, in the laboratory frame, that is carried by the visible $\Pgt$ decay products. The variable $\phi_{\ensuremath{\textrm{inv}}}$ specifies the orientation of the $\bm{p}^{\ensuremath{\textrm{inv}}}$ vector relative to the $\bm{p}^{\ensuremath{\textrm{vis}}}$ vector (see Fig.~\ref{fig:tauDecayParametrization} for illustration), where the vector $\bm{p}^{\ensuremath{\textrm{inv}}}$ refers to the vectorial sum of the momenta of the two neutrinos (to the momentum of the single $\Pnut$) produced in leptonic (hadronic) $\Pgt$ decays. The variable $m_{\ensuremath{\textrm{inv}}}$ denotes the mass of the neutrino pair produced in leptonic $\Pgt$ decays. \begin{figure}[h] \begin{center} \includegraphics*[height=58mm]{figures/tauDecayParametrization.pdf} \end{center} \caption{ Illustration of the variable $\phi_{\ensuremath{\textrm{inv}}}$ that specifies the orientation of the $\bm{p}^{\ensuremath{\textrm{inv}}}$ vector relative to the $\bm{p}^{\ensuremath{\textrm{vis}}}$ vector. The angle $\theta_{\ensuremath{\textrm{inv}}}$ between the $\bm{p}^{\ensuremath{\textrm{inv}}}$ vector and the $\bm{p}^{\ensuremath{\textrm{vis}}}$ vector is related to the variable $z$, as described in Section 2.4 of Ref.~\cite{SVfitMEM}, from which the illustration was taken. 
} \label{fig:tauDecayParametrization} \end{figure} Expressions for the product of the phase-space element $d\Phi_{n}$ with the squared moduli $\vert \ensuremath{\textrm{BW}}^{(i)}_{\Pgt} \vert^{2} \cdot \vert \mathcal{M}^{(i)}_{\Pgt\to\cdots}(\bm{\hat{p}}) \vert^{2}$ of the ME for the $\Pgt$ decays, obtained by the aforementioned transformations, are given by Eq.~(33) in Ref.~\cite{SVfitMEM}. The expressions read: \begin{align} \vert \ensuremath{\textrm{BW}}_{\Pgt} \vert^{2} \cdot \vert \mathcal{M}^{(i)}_{\Pgt\to\cdots}(\bm{\tilde{p}}) \vert^{2} \, d\Phi^{(i)}_{\ensuremath{\tauh \, \Pnu_{\kern-0.10em \Pgt}}\xspace} = & \, \frac{\pi}{m_{\Pgt} \, \Gamma_{\Pgt}} \, f_{\ensuremath{\textrm{h}}}\left(\bm{\hat{p}}^{\ensuremath{\textrm{vis}}(i)}, m^{\ensuremath{\textrm{vis}}(i)}, \bm{\hat{p}}^{\ensuremath{\textrm{inv}}(i)}\right) \, \frac{d^{3}\bm{\hat{p}}^{\ensuremath{\textrm{vis}}}}{2 \hat{E}_{\ensuremath{\textrm{vis}}}} \, dz \, d\phi_{\ensuremath{\textrm{inv}}} \quad \mbox{ and } \nonumber \\ \vert \ensuremath{\textrm{BW}}_{\Pgt} \vert^{2} \cdot \vert \mathcal{M}^{(i)}_{\Pgt\to\cdots}(\bm{\tilde{p}}) \vert^{2} \, d\Phi^{(i)}_{\ensuremath{\Plepton \, \APnu_{\kern-0.10em \Plepton} \, \Pnu_{\kern-0.10em \Pgt}}\xspace} = & \, \frac{\pi}{m_{\Pgt} \, \Gamma_{\Pgt}} \, f_{\ell}\left(\bm{\hat{p}}^{\ensuremath{\textrm{vis}}(i)}, m^{\ensuremath{\textrm{vis}}(i)}, \bm{\hat{p}}^{\ensuremath{\textrm{inv}}(i)}\right) \, \frac{d^{3}\bm{\hat{p}}^{\ensuremath{\textrm{vis}}}}{2 \hat{E}_{\ensuremath{\textrm{vis}}}} \, dz \, dm^{2}_{\ensuremath{\textrm{inv}}} \, d\phi_{\ensuremath{\textrm{inv}}} \nonumber \, , \end{align} where the functions $f_{\ensuremath{\textrm{h}}}$ and $f_{\ell}$ are defined as: \begin{align} f_{h}\left(\bm{p}^{\ensuremath{\textrm{vis}}}, m_{\ensuremath{\textrm{vis}}}, \bm{p}^{\ensuremath{\textrm{inv}}}\right) = & \frac{\vert\mathcal{M}^{\ensuremath{\textrm{eff}}}_{\Pgt \to \ensuremath{\Pgt_{\textrm{h}}}\xspace\Pnut}\vert^{2}}{256\pi^{6}} \cdot 
\frac{E_{\ensuremath{\textrm{vis}}}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}}\vert \, z^{2}} \quad \mbox{ and } \nonumber \\ f_{\Plepton}\left(\bm{p}^{\ensuremath{\textrm{vis}}}, m_{\ensuremath{\textrm{vis}}}, \bm{p}^{\ensuremath{\textrm{inv}}}\right) = & \frac{I_{\ensuremath{\textrm{inv}}}}{512\pi^{6}} \cdot \frac{E_{\ensuremath{\textrm{vis}}}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}}\vert \, z^{2}} \nonumber \, , \end{align} with: \begin{align} \vert \mathcal{M}^{\textrm{eff}}_{\Pgt \to \ensuremath{\tauh \, \Pnu_{\kern-0.10em \Pgt}}\xspace} \vert^{2} = & 16 \pi \, m_{\Pgt} \, \Gamma_{\Pgt} \cdot \frac{m_{\Pgt}^{2}}{m_{\Pgt}^{2} - m_{\ensuremath{\textrm{vis}}}^{2}} \cdot \mathcal{B}(\Pgt \to \textrm{hadrons} + \Pnut) \quad \mbox { and } \nonumber \\ I_{\ensuremath{\textrm{inv}}} = & \, \frac{1}{2} \, m_{\ensuremath{\textrm{inv}}} \, \int \, \frac{d\Omega_{v}}{(2\pi)^{3}} \, \vert\mathcal{M}_{\Pgt \to \ensuremath{\Plepton \, \APnu_{\kern-0.10em \Plepton} \, \Pnu_{\kern-0.10em \Pgt}}\xspace}\vert^{2} \, , \quad \mbox { where } \nonumber \\ \vert\mathcal{M}_{\Pgt \to \ensuremath{\Plepton \, \APnu_{\kern-0.10em \Plepton} \, \Pnu_{\kern-0.10em \Pgt}}\xspace} \vert^{2} = & 64 \, G^{2}_{F} \, \left( E_{\Pgt} \, E_{\APnu_{\Plepton}} - \bm{p}^{\Pgt} \cdot \bm{p}^{\APnu_{\Plepton}} \right) \, \left( E_{\Plepton} \, E_{\Pnut} - \bm{p}^{\Plepton} \cdot \bm{p}^{\Pnut} \right) \nonumber \end{align} and $\mathcal{B}(\Pgt \to \textrm{hadrons} + \Pnut) = 0.648$~\cite{PDG} denotes the measured branching fraction for $\Pgt$ leptons to decay hadronically. The knowledge that the four $\Pgt$ leptons originate from the decay of two $\PHiggs$ bosons is incorporated into the likelihood function $\mathcal{P}$ by suitably chosen constraints. 
For the purpose of defining the constraints, it is useful to enumerate the $\Pgt$ leptons such that the two $\Pgt$ leptons with indices $i=1$ and $i=2$ (and similarly the two $\Pgt$ leptons with indices $i=3$ and $i=4$) are interpreted as originating from the same $\PHiggs$ boson. We then require that the visible $\Pgt$ decay products corresponding to the indices $i=1$ and $i=2$ have opposite charge, and the same applies to the visible $\Pgt$ decay products corresponding to the indices $i=3$ and $i=4$. As the width of the $\PHiggs$ boson is known to be small~\cite{HIG-14-002,Aad:2015xua} compared to the experimental resolution that we aim to achieve on $m_{\PHiggs\PHiggs}$, we choose to neglect it and use the narrow-width approximation (NWA) for each $\PHiggs$ boson. The NWA introduces two $\delta$-functions, \begin{align} & \delta\left( (\hat{E}_{\Pgt(1)} + \hat{E}_{\Pgt(2)})^{2} - (\bm{\hat{p}}^{\Pgt(1)} + \bm{\hat{p}}^{\Pgt(2)})^{2} - m_{\PHiggs}^{2} \right) \quad \mbox{ and } \nonumber \\ & \delta\left( (\hat{E}_{\Pgt(3)} + \hat{E}_{\Pgt(4)})^{2} - (\bm{\hat{p}}^{\Pgt(3)} + \bm{\hat{p}}^{\Pgt(4)})^{2} - m_{\PHiggs}^{2} \right) \end{align} into Eq.~(\ref{eq:likelihood}). For the purpose of evaluating the $\delta$-functions, we make the simplifying assumption that the angle between the vectors $\bm{\hat{p}}^{\ensuremath{\textrm{vis}}(i)}$ and $\bm{\hat{p}}^{\ensuremath{\textrm{inv}}(i)}$ is negligible. The assumption is justified by the fact that at the LHC the $\ensuremath{p_{\textrm{T}}}\xspace$ of the visible $\Pgt$ decay products are typically large compared to the mass, $m_{\Pgt} = 1.777$~\ensuremath{\textrm{GeV}}\xspace~\cite{PDG}, of the $\Pgt$ lepton. 
With this assumption, the $\delta$-functions simplify to: \begin{equation*} \delta\left(\frac{m_{\ensuremath{\textrm{vis}}(12)}^{2}}{z_{1} \, z_{2}} - m_{\PHiggs}^{2}\right) \quad \mbox{ and } \quad \delta\left(\frac{m_{\ensuremath{\textrm{vis}}(34)}^{2}}{z_{3} \, z_{4}} - m_{\PHiggs}^{2}\right) \, , \end{equation*} where we denote by the symbol $m_{\ensuremath{\textrm{vis}}(ij)}$ the ``visible mass'' of the decay products of $\Pgt$ leptons $i$ and $j$: \begin{equation*} m_{\ensuremath{\textrm{vis}}(ij)}^{2} = (\hat{E}_{\ensuremath{\textrm{vis}}(i)} + \hat{E}_{\ensuremath{\textrm{vis}}(j)})^{2} - (\bm{\hat{p}}^{\ensuremath{\textrm{vis}}(i)} + \bm{\hat{p}}^{\ensuremath{\textrm{vis}}(j)})^{2} \, . \end{equation*} The $\delta$-functions are used to eliminate the integration over the variables $z_{2}$ and $z_{4}$. The $\delta$-function rule, \begin{equation*} \delta \left( g(x) \right) = \sum_{k} \, \frac{\delta \left( x - x_{k} \right)}{\vert g'(x_{k}) \vert} \, , \end{equation*} where the sum extends over all roots $x_{k}$ of the function $g(x)$, yields the two factors: \begin{equation*} \frac{z_{2}}{m_{\PHiggs}^{2}} \quad \mbox{ and } \quad \frac{z_{4}}{m_{\PHiggs}^{2}} \, , \end{equation*} with the roots: \begin{equation*} z_{2} = \frac{m_{\ensuremath{\textrm{vis}}(12)}^{2}}{m_{\PHiggs}^{2} \, z_{1}} \quad \mbox{ and } \quad z_{4} = \frac{m_{\ensuremath{\textrm{vis}}(34)}^{2}}{m_{\PHiggs}^{2} \, z_{3}} \, . \end{equation*} The condition $\delta\left( \left(\sum_{i=1}^{4} \, \hat{E}_{\Pgt(i)}\right)^{2} - \left(\sum_{i=1}^{4} \, \bm{\hat{p}}^{\Pgt(i)}\right)^{2} - m_{\textrm{X}}^{2} \right)$ is used to eliminate the integration over the variable $z_{3}$.
It yields the factor: \begin{equation} \Bigl\lvert \frac{z_{1} \, z_{3}^{2}}{b \, z_{3}^{2} - c} \Bigr\rvert \, , \label{eq:deltaFuncFactor} \end{equation} with the two roots: \begin{equation*} z_{3}^{(+)} = \frac{a + \sqrt{b}}{c} \quad \mbox{ and } \quad z_{3}^{(-)} = \frac{a - \sqrt{b}}{c} \, , \end{equation*} where: \begin{align} a = & (m_{\textrm{X}}^{2} - 2 \, m_{\PHiggs}^{2}) \, z_{1} \, , \nonumber \\ b = & \frac{m_{\ensuremath{\textrm{vis}}(14)}^{2}}{m_{\ensuremath{\textrm{vis}}(34)}^{2}} \, m_{\PHiggs}^{2} + \frac{m_{\ensuremath{\textrm{vis}}(24)}^{2}}{m_{\ensuremath{\textrm{vis}}(12)}^{2} \, m_{\ensuremath{\textrm{vis}}(34)}^{2}} \, m_{\PHiggs}^{4} \, z_{1}^{2} \quad \mbox{ and } \nonumber \\ c = & m_{\ensuremath{\textrm{vis}}(13)}^{2} + \frac{m_{\ensuremath{\textrm{vis}}(23)}^{2}}{m_{\ensuremath{\textrm{vis}}(12)}^{2}} \, z_{1}^{2} \, . \end{align} The requirement that the energies of electrons, muons, and $\ensuremath{\Pgt_{\textrm{h}}}\xspace$ as well as the energies of the neutrinos produced in the $\Pgt$ decays are positive restricts the variable $z_{3}$ to the range $0 < z_{3} \leq 1$. If both roots $z_{3}^{(+)}$ and $z_{3}^{(-)}$ lie within this range, the integrand is evaluated separately for each root and the two contributions are summed. Otherwise, only the root satisfying the condition $0 < z_{3} \leq 1$ is retained. Expressions for the likelihood function $\mathcal{P}$, obtained after performing these analytic transformations, are given by Eqs.~(\ref{eq:likelihood_thththth}) to~(\ref{eq:likelihood_llll}) in the Appendix.
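The $\delta$-function manipulations above lend themselves to a quick numerical spot-check. The sketch below (all numerical inputs are illustrative assumptions) verifies that the Jacobian factor for the root $z_{2}$ reproduces $z_{2}/m_{\PHiggs}^{2}$, and implements the summation of the integrand over the physical roots $z_{3} \in (0, 1]$; \texttt{integrand} is a stand-in for the full likelihood integrand:

```python
import math

# Hedged numerical spot-check of the delta-function manipulations above.
# All input values are illustrative; 'integrand' stands in for the full
# likelihood integrand of the algorithm.

mH = 125.0           # Higgs boson mass hypothesis [GeV]
m2_vis_12 = 70.0**2  # squared visible mass of the first tau pair [GeV^2]
z1 = 0.6             # visible-energy fraction of the first tau

# Root of g(z2) = m2_vis_12 / (z1 * z2) - mH^2 and the Jacobian 1/|g'(z2)|,
# which should equal the factor z2 / mH^2 quoted in the text.
z2 = m2_vis_12 / (mH**2 * z1)
g_prime = -m2_vis_12 / (z1 * z2**2)
assert abs(1.0 / abs(g_prime) - z2 / mH**2) < 1e-12

def sum_over_physical_roots(a, b, c, integrand):
    """Evaluate the integrand at each root z3 = (a +/- sqrt(b)) / c that lies
    in the physical range 0 < z3 <= 1, and sum the contributions."""
    if b < 0.0 or c == 0.0:
        return 0.0  # no real roots, or degenerate condition
    roots = [(a + math.sqrt(b)) / c, (a - math.sqrt(b)) / c]
    return sum(integrand(z3) for z3 in roots if 0.0 < z3 <= 1.0)
```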
We refer to the different decay channels of the four $\Pgt$ leptons as $\ensuremath{\Pgt_{\textrm{h}}}\xspace\tauh\ensuremath{\Pgt_{\textrm{h}}}\xspace\tauh$, $\Plepton\ensuremath{\Pgt_{\textrm{h}}}\xspace\tauh\ensuremath{\Pgt_{\textrm{h}}}\xspace$, $\Plepton\Plepton\ensuremath{\Pgt_{\textrm{h}}}\xspace\tauh$, $\Plepton\Plepton\Plepton\ensuremath{\Pgt_{\textrm{h}}}\xspace$, and $\Plepton\Plepton\Plepton\Plepton$, where the symbol $\Plepton$ refers to an electron or muon, and the neutrinos produced in the $\Pgt$ decays are omitted from the nomenclature. The dimension of integration varies between $5$ for events in the $\ensuremath{\Pgt_{\textrm{h}}}\xspace\tauh\ensuremath{\Pgt_{\textrm{h}}}\xspace\tauh$ decay channel and $9$ for events in the $\Plepton\Plepton\Plepton\Plepton$ channel. The expressions given in the Appendix correspond to one particular association of reconstructed electrons, muons, and $\ensuremath{\Pgt_{\textrm{h}}}\xspace$ to the indices $1$, $2$, $3$, and $4$, which enumerate the $\Pgt$ decay products in Eqs.~(\ref{eq:likelihood_thththth}) to~(\ref{eq:likelihood_llll}). Expressions for alternative associations can be obtained by appropriate permutations of the indices. For any one of these associations the best estimate, $m_{\PHiggs\PHiggs}$, for the mass of the $\PHiggs$ boson pair is obtained by finding the value of $m_{\textrm{X}}$ that maximizes the value of $\mathcal{P}$. The integrand in Eqs.~(\ref{eq:likelihood_thththth}) to~(\ref{eq:likelihood_llll}) is evaluated for a series of mass hypotheses $m_{\textrm{X}}^{(i)}$. 
Starting from the initial value $m_{\textrm{X}}^{(0)} = 1.0125 \cdot \max (2 \, m_{\PHiggs}, m_{\PHiggs\PHiggs}^{\ensuremath{\textrm{vis}}})$, where $m_{\PHiggs\PHiggs}^{\ensuremath{\textrm{vis}}} = \sqrt{\left(\sum_{i=1}^{4} \, E_{\ensuremath{\textrm{vis}}(i)}\right)^{2} - \left(\sum_{i=1}^{4} \, \bm{p}^{\ensuremath{\textrm{vis}}(i)}\right)^{2}}$, the next mass hypothesis in the series is defined by the recursive relation $m_{\textrm{X}}^{(i+1)} = (1 + \delta) \cdot m_{\textrm{X}}^{(i)}$. The step size $\delta = 0.025$ is chosen such that it is small compared to the resolution on $m_{\PHiggs\PHiggs}$ that we expect our algorithm to achieve. The evaluation of the integral is performed numerically, using the VAMP algorithm~\cite{VAMP}, an improved implementation of the VEGAS algorithm~\cite{VEGAS}. For each mass hypothesis $m_{\textrm{X}}^{(i)}$, the integrand is evaluated $20\,000$ times. We note in passing that our algorithm alternatively supports an integration method based on a custom implementation of Markov-Chain integration with the Metropolis--Hastings algorithm~\cite{Metropolis_Hastings}. The latter also allows the reconstruction of the $\ensuremath{p_{\textrm{T}}}\xspace$, pseudo-rapidity $\eta$, and azimuthal angle $\phi$ of the resonance $\textrm{X}$. In this paper, however, we focus on the reconstruction of the mass. A remaining issue for the algorithm is that in $\PHiggs\PHiggs \to \Pgt\Pgt\Pgt\Pgt$ events there exist two possibilities for building pairs of $\Pgt$ leptons of opposite charge. The ambiguity is resolved, and a unique value of $m_{\PHiggs\PHiggs}$ is obtained for each event, by first discarding pairings for which either $m_{\ensuremath{\textrm{vis}}(12)}$ or $m_{\ensuremath{\textrm{vis}}(34)}$ exceeds $m_{\PHiggs}$ and then selecting the pairing for which the likelihood function $\mathcal{P}$ attains the maximal value (for any $m_{\textrm{X}}$).
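The pairing-ambiguity resolution just described can be sketched as follows; this is a minimal illustration, assuming per-object charges, a dictionary of visible pair masses, and a stand-in \texttt{max\_likelihood} for $\max_{m_{\textrm{X}}} \mathcal{P}$ of a given pairing:

```python
# Hedged sketch of the pairing-ambiguity resolution described above. Charges,
# visible masses, and the per-pairing likelihood are assumed inputs; the scan
# over m_X is stubbed out by 'max_likelihood', which stands in for the maximal
# value of the likelihood P attained by a given pairing.

from itertools import combinations

M_HIGGS = 125.0  # Higgs boson mass [GeV]

def opposite_charge_pairings(charges):
    """All ways to split objects 0..3 into two opposite-charge pairs."""
    pairings = []
    for pair1 in combinations(range(4), 2):
        if pair1[0] != 0:
            continue  # avoid double-counting: first pair contains object 0
        pair2 = tuple(i for i in range(4) if i not in pair1)
        if (charges[pair1[0]] + charges[pair1[1]] == 0
                and charges[pair2[0]] + charges[pair2[1]] == 0):
            pairings.append((pair1, pair2))
    return pairings

def select_pairing(charges, m_vis, max_likelihood):
    """Discard pairings with a visible pair mass above m_H, then pick the
    pairing for which the likelihood attains its maximal value."""
    candidates = [p for p in opposite_charge_pairings(charges)
                  if m_vis[p[0]] <= M_HIGGS and m_vis[p[1]] <= M_HIGGS]
    return max(candidates, key=max_likelihood) if candidates else None
```

For four objects there are three ways to form two pairs; the opposite-charge requirement reduces these to at most two, matching the two possibilities mentioned above.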
We will demonstrate in Section~\ref{sec:performance} that this choice yields the correct pairing for the majority of events. \section{Appendix} \label{sec:appendix} \subsubsection{$\PHiggs\PHiggs \to \Pgt\Pgt\Pgt\Pgt \to \ensuremath{\Pgt_{\textrm{h}}}\xspace\tauh\ensuremath{\Pgt_{\textrm{h}}}\xspace\tauh$ decay channel} \begin{align} & \mathcal{P}(\bm{p}^{\ensuremath{\textrm{vis}}(1)},\bm{p}^{\ensuremath{\textrm{vis}}(2)},\bm{p}^{\ensuremath{\textrm{vis}}(3)},\bm{p}^{\ensuremath{\textrm{vis}}(4)};\ensuremath{p_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{p_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}}|m_{\textrm{X}}) = \frac{32\pi^{8}}{m_{\Pgt} \, \Gamma_{\Pgt} \, s} \, \nonumber \\ & \qquad \int \, dz_{1} \, d\phi_{\ensuremath{\textrm{inv}}(1)} \, d\phi_{\ensuremath{\textrm{inv}}(2)} \, d\phi_{\ensuremath{\textrm{inv}}(3)} \, d\phi_{\ensuremath{\textrm{inv}}(4)} \, \sum_{z_{3}^{+},z_{3}^{-}} \, \Bigr\lvert \frac{z_{1} \, z_{3}^{2}}{b \, z_{3}^{2} - c} \Bigr\rvert \cdot \nonumber \\ & \qquad \frac{\vert\mathcal{M}^{\ensuremath{\textrm{eff}}(1)}_{\Pgt \to \ensuremath{\Pgt_{\textrm{h}}}\xspace\Pnut}\vert^{2}}{256\pi^{6}} \frac{E_{\ensuremath{\textrm{vis}}(1)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(1)}\vert \, z_{1}^{2}} \, \frac{d\ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(1)}}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(1)}} \cdot W_{\ensuremath{\textrm{h}}}( \ensuremath{p_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(1)} | \ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(1)} ) \cdot \nonumber \\ & \qquad \frac{\vert\mathcal{M}^{\ensuremath{\textrm{eff}}(2)}_{\Pgt \to \ensuremath{\Pgt_{\textrm{h}}}\xspace\Pnut}\vert^{2}}{256\pi^{6}} \frac{E_{\ensuremath{\textrm{vis}}(2)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(2)}\vert \, z_{2}^{2}} \, \frac{d\ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(2)}}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(2)}} \cdot W_{\ensuremath{\textrm{h}}}( 
\ensuremath{p_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(2)} | \ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(2)} ) \cdot \nonumber \\ & \qquad \frac{\vert\mathcal{M}^{\ensuremath{\textrm{eff}}(3)}_{\Pgt \to \ensuremath{\Pgt_{\textrm{h}}}\xspace\Pnut}\vert^{2}}{256\pi^{6}} \frac{E_{\ensuremath{\textrm{vis}}(3)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(3)}\vert \, z_{3}^{2}} \, \frac{d\ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(3)}}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(3)}} \cdot W_{\ensuremath{\textrm{h}}}( \ensuremath{p_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(3)} | \ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(3)} ) \cdot \nonumber \\ & \qquad \frac{\vert\mathcal{M}^{\ensuremath{\textrm{eff}}(4)}_{\Pgt \to \ensuremath{\Pgt_{\textrm{h}}}\xspace\Pnut}\vert^{2}}{256\pi^{6}} \frac{E_{\ensuremath{\textrm{vis}}(4)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(4)}\vert \, z_{4}^{2}} \, \frac{d\ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(4)}}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(4)}} \cdot W_{\ensuremath{\textrm{h}}}( \ensuremath{p_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(4)} | \ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(4)} ) \cdot \nonumber \\ & \qquad W_{\ensuremath{\textrm{rec}}}( \ensuremath{p_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{p_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} | \ensuremath{\hat{p}_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{\hat{p}_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} ) \label{eq:likelihood_thththth} \end{align} \subsubsection{$\PHiggs\PHiggs \to \Pgt\Pgt\Pgt\Pgt \to \Plepton\ensuremath{\Pgt_{\textrm{h}}}\xspace\tauh\ensuremath{\Pgt_{\textrm{h}}}\xspace$ decay channel} \begin{align} & 
\mathcal{P}(\bm{p}^{\ensuremath{\textrm{vis}}(1)},\bm{p}^{\ensuremath{\textrm{vis}}(2)},\bm{p}^{\ensuremath{\textrm{vis}}(3)},\bm{p}^{\ensuremath{\textrm{vis}}(4)};\ensuremath{p_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{p_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}}|m_{\textrm{X}}) = \frac{32\pi^{8}}{m_{\Pgt} \, \Gamma_{\Pgt} \, s} \, \nonumber \\ & \qquad \int \, dz_{1} \, dm^{2}_{\ensuremath{\textrm{inv}}(1)} \, d\phi_{\ensuremath{\textrm{inv}}(1)} \, d\phi_{\ensuremath{\textrm{inv}}(2)} \, d\phi_{\ensuremath{\textrm{inv}}(3)} \, d\phi_{\ensuremath{\textrm{inv}}(4)} \, \sum_{z_{3}^{+},z_{3}^{-}} \, \Bigr\lvert \frac{z_{1} \, z_{3}^{2}}{b \, z_{3}^{2} - c} \Bigr\rvert \cdot \nonumber \\ & \qquad \frac{I_{\ensuremath{\textrm{inv}}(1)}}{512\pi^{6}} \frac{E_{\ensuremath{\textrm{vis}}(1)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(1)}\vert \, z_{1}^{2}} \, \frac{1}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(1)}} \cdot \frac{\vert\mathcal{M}^{\ensuremath{\textrm{eff}}(2)}_{\Pgt \to \ensuremath{\Pgt_{\textrm{h}}}\xspace\Pnut}\vert^{2}}{256\pi^{6}} \frac{E_{\ensuremath{\textrm{vis}}(2)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(2)}\vert \, z_{2}^{2}} \, \frac{d\ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(2)}}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(2)}} \cdot W_{\ensuremath{\textrm{h}}}( \ensuremath{p_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(2)} | \ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(2)} ) \cdot \nonumber \\ & \qquad \frac{\vert\mathcal{M}^{\ensuremath{\textrm{eff}}(3)}_{\Pgt \to \ensuremath{\Pgt_{\textrm{h}}}\xspace\Pnut}\vert^{2}}{256\pi^{6}} \frac{E_{\ensuremath{\textrm{vis}}(3)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(3)}\vert \, z_{3}^{2}} \, \frac{d\ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(3)}}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(3)}} \cdot W_{\ensuremath{\textrm{h}}}( \ensuremath{p_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(3)} | 
\ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(3)} ) \cdot \nonumber \\ & \qquad \frac{\vert\mathcal{M}^{\ensuremath{\textrm{eff}}(4)}_{\Pgt \to \ensuremath{\Pgt_{\textrm{h}}}\xspace\Pnut}\vert^{2}}{256\pi^{6}} \frac{E_{\ensuremath{\textrm{vis}}(4)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(4)}\vert \, z_{4}^{2}} \, \frac{d\ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(4)}}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(4)}} \cdot W_{\ensuremath{\textrm{h}}}( \ensuremath{p_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(4)} | \ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(4)} ) \cdot \nonumber \\ & \qquad W_{\ensuremath{\textrm{rec}}}( \ensuremath{p_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{p_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} | \ensuremath{\hat{p}_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{\hat{p}_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} ) \label{eq:likelihood_lththth} \end{align} \subsubsection{$\PHiggs\PHiggs \to \Pgt\Pgt\Pgt\Pgt \to \Plepton\Plepton\ensuremath{\Pgt_{\textrm{h}}}\xspace\tauh$ decay channel} \begin{align} & \mathcal{P}(\bm{p}^{\ensuremath{\textrm{vis}}(1)},\bm{p}^{\ensuremath{\textrm{vis}}(2)},\bm{p}^{\ensuremath{\textrm{vis}}(3)},\bm{p}^{\ensuremath{\textrm{vis}}(4)};\ensuremath{p_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{p_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}}|m_{\textrm{X}}) = \frac{32\pi^{8}}{m_{\Pgt} \, \Gamma_{\Pgt} \, s} \, \nonumber \\ & \qquad \int \, dz_{1} \, dm^{2}_{\ensuremath{\textrm{inv}}(1)} \, d\phi_{\ensuremath{\textrm{inv}}(1)} \, dm^{2}_{\ensuremath{\textrm{inv}}(2)}\, d\phi_{\ensuremath{\textrm{inv}}(2)} \, d\phi_{\ensuremath{\textrm{inv}}(3)} \, d\phi_{\ensuremath{\textrm{inv}}(4)} \, \sum_{z_{3}^{+},z_{3}^{-}} \, \Bigr\lvert \frac{z_{1} \, z_{3}^{2}}{b \, z_{3}^{2} - c} \Bigr\rvert \cdot \nonumber \\ & \qquad \frac{I_{\ensuremath{\textrm{inv}}(1)}}{512\pi^{6}} 
\frac{E_{\ensuremath{\textrm{vis}}(1)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(1)}\vert \, z_{1}^{2}} \, \frac{1}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(1)}} \cdot \frac{I_{\ensuremath{\textrm{inv}}(2)}}{512\pi^{6}} \frac{E_{\ensuremath{\textrm{vis}}(2)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(2)}\vert \, z_{2}^{2}} \, \frac{1}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(2)}} \cdot \nonumber \\ & \qquad \frac{\vert\mathcal{M}^{\ensuremath{\textrm{eff}}(3)}_{\Pgt \to \ensuremath{\Pgt_{\textrm{h}}}\xspace\Pnut}\vert^{2}}{256\pi^{6}} \frac{E_{\ensuremath{\textrm{vis}}(3)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(3)}\vert \, z_{3}^{2}} \, \frac{d\ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(3)}}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(3)}} \cdot W_{\ensuremath{\textrm{h}}}( \ensuremath{p_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(3)} | \ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(3)} ) \cdot \nonumber \\ & \qquad \frac{\vert\mathcal{M}^{\ensuremath{\textrm{eff}}(4)}_{\Pgt \to \ensuremath{\Pgt_{\textrm{h}}}\xspace\Pnut}\vert^{2}}{256\pi^{6}} \frac{E_{\ensuremath{\textrm{vis}}(4)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(4)}\vert \, z_{4}^{2}} \, \frac{d\ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(4)}}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(4)}} \cdot W_{\ensuremath{\textrm{h}}}( \ensuremath{p_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(4)} | \ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(4)} ) \cdot \nonumber \\ & \qquad W_{\ensuremath{\textrm{rec}}}( \ensuremath{p_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{p_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} | \ensuremath{\hat{p}_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{\hat{p}_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} ) \label{eq:likelihood_llthth} \end{align} \subsubsection{$\PHiggs\PHiggs \to \Pgt\Pgt\Pgt\Pgt \to \Plepton\Plepton\Plepton\ensuremath{\Pgt_{\textrm{h}}}\xspace$ decay 
channel} \begin{align} & \mathcal{P}(\bm{p}^{\ensuremath{\textrm{vis}}(1)},\bm{p}^{\ensuremath{\textrm{vis}}(2)},\bm{p}^{\ensuremath{\textrm{vis}}(3)},\bm{p}^{\ensuremath{\textrm{vis}}(4)};\ensuremath{p_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{p_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}}|m_{\textrm{X}}) = \frac{32\pi^{8}}{m_{\Pgt} \, \Gamma_{\Pgt} \, s} \, \nonumber \\ & \qquad \int \, dz_{1} \, dm^{2}_{\ensuremath{\textrm{inv}}(1)} \, d\phi_{\ensuremath{\textrm{inv}}(1)} \, dm^{2}_{\ensuremath{\textrm{inv}}(2)} \, d\phi_{\ensuremath{\textrm{inv}}(2)} \, dm^{2}_{\ensuremath{\textrm{inv}}(3)} \, d\phi_{\ensuremath{\textrm{inv}}(3)} \, d\phi_{\ensuremath{\textrm{inv}}(4)} \, \sum_{z_{3}^{+},z_{3}^{-}} \, \Bigr\lvert \frac{z_{1} \, z_{3}^{2}}{b \, z_{3}^{2} - c} \Bigr\rvert \cdot \nonumber \\ & \qquad \frac{I_{\ensuremath{\textrm{inv}}(1)}}{512\pi^{6}} \frac{E_{\ensuremath{\textrm{vis}}(1)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(1)}\vert \, z_{1}^{2}} \, \frac{1}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(1)}} \cdot \frac{I_{\ensuremath{\textrm{inv}}(2)}}{512\pi^{6}} \frac{E_{\ensuremath{\textrm{vis}}(2)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(2)}\vert \, z_{2}^{2}} \, \frac{1}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(2)}} \cdot \nonumber \\ & \qquad \frac{I_{\ensuremath{\textrm{inv}}(3)}}{512\pi^{6}} \frac{E_{\ensuremath{\textrm{vis}}(3)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(3)}\vert \, z_{3}^{2}} \, \frac{1}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(3)}} \cdot \frac{\vert\mathcal{M}^{\ensuremath{\textrm{eff}}(4)}_{\Pgt \to \ensuremath{\Pgt_{\textrm{h}}}\xspace\Pnut}\vert^{2}}{256\pi^{6}} \frac{E_{\ensuremath{\textrm{vis}}(4)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(4)}\vert \, z_{4}^{2}} \, \frac{d\ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(4)}}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(4)}} \cdot W_{\ensuremath{\textrm{h}}}( \ensuremath{p_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(4)} | 
\ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}(4)} ) \cdot \nonumber \\ & \qquad W_{\ensuremath{\textrm{rec}}}( \ensuremath{p_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{p_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} | \ensuremath{\hat{p}_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{\hat{p}_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} ) \label{eq:likelihood_lllth} \end{align} \subsubsection{$\PHiggs\PHiggs \to \Pgt\Pgt\Pgt\Pgt \to \Plepton\Plepton\Plepton\Plepton$ decay channel} \begin{align} & \mathcal{P}(\bm{p}^{\ensuremath{\textrm{vis}}(1)},\bm{p}^{\ensuremath{\textrm{vis}}(2)},\bm{p}^{\ensuremath{\textrm{vis}}(3)},\bm{p}^{\ensuremath{\textrm{vis}}(4)};\ensuremath{p_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{p_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}}|m_{\textrm{X}}) = \frac{32\pi^{8}}{m_{\Pgt} \, \Gamma_{\Pgt} \, s} \, \nonumber \\ & \qquad \int \, dz_{1} \, dm^{2}_{\ensuremath{\textrm{inv}}(1)} \, d\phi_{\ensuremath{\textrm{inv}}(1)} \, dm^{2}_{\ensuremath{\textrm{inv}}(2)} \, d\phi_{\ensuremath{\textrm{inv}}(2)} \, dm^{2}_{\ensuremath{\textrm{inv}}(3)} \, d\phi_{\ensuremath{\textrm{inv}}(3)} \, dm^{2}_{\ensuremath{\textrm{inv}}(4)} \, d\phi_{\ensuremath{\textrm{inv}}(4)} \, \sum_{z_{3}^{+},z_{3}^{-}} \, \Bigr\lvert \frac{z_{1} \, z_{3}^{2}}{b \, z_{3}^{2} - c} \Bigr\rvert \cdot \nonumber \\ & \qquad \frac{I_{\ensuremath{\textrm{inv}}(1)}}{512\pi^{6}} \frac{E_{\ensuremath{\textrm{vis}}(1)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(1)}\vert \, z_{1}^{2}} \, \frac{1}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(1)}} \cdot \frac{I_{\ensuremath{\textrm{inv}}(2)}}{512\pi^{6}} \frac{E_{\ensuremath{\textrm{vis}}(2)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(2)}\vert \, z_{2}^{2}} \, \frac{1}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(2)}} \cdot \nonumber \\ & \qquad \frac{I_{\ensuremath{\textrm{inv}}(3)}}{512\pi^{6}} 
\frac{E_{\ensuremath{\textrm{vis}}(3)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(3)}\vert \, z_{3}^{2}} \, \frac{1}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(3)}} \cdot \frac{I_{\ensuremath{\textrm{inv}}(4)}}{512\pi^{6}} \frac{E_{\ensuremath{\textrm{vis}}(4)}}{\vert\bm{p}^{\ensuremath{\textrm{vis}}(4)}\vert \, z_{4}^{2}} \, \frac{1}{2 \, \hat{E}_{\ensuremath{\textrm{vis}}(4)}} \cdot \nonumber \\ & \qquad W_{\ensuremath{\textrm{rec}}}( \ensuremath{p_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{p_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} | \ensuremath{\hat{p}_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{\hat{p}_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} ) \label{eq:likelihood_llll} \end{align} \section{Introduction} \label{sec:introduction} The discovery of the Higgs ($\PHiggs$) boson by the ATLAS and CMS experiments at the LHC~\cite{Higgs-Discovery_CMS,Higgs-Discovery_ATLAS} represents a major step towards our understanding of electroweak symmetry breaking (EWSB) and of the mechanism that generates the masses of quarks and leptons, which constitute the ``ordinary'' matter in our Universe. In a combined analysis of the data recorded by ATLAS and CMS, the mass, $m_{\PHiggs}$, of the $\PHiggs$ boson has been measured to be $m_{\PHiggs} = 125.09 \pm 0.24$~\ensuremath{\textrm{GeV}}\xspace~\cite{HIG-14-042}. The Standard Model (SM) of particle physics makes precise predictions for all properties of the $\PHiggs$ boson, given its mass and the vacuum expectation value $v=246$~\ensuremath{\textrm{GeV}}\xspace~\cite{PDG} of the Higgs field. So far, all properties that have been measured agree with the expectation for a SM $\PHiggs$ boson~\cite{HIG-15-002}. 
The rate for its decay to a pair of $\Pgt$ leptons has been measured recently and found to be consistent with the SM expectation within the uncertainties of these measurements, which at present amount to $20$--$30\%$~\cite{HIG-13-004,Aad:2015vsa,HIG-15-002,HIG-16-043,ATLAS:2018lur}. One important prediction of the SM has yet to be verified experimentally, however: the $\PHiggs$ boson self-interaction. The SM predicts $\PHiggs$ boson self-interactions via trilinear and quartic couplings. Measurements of the $\PHiggs$ boson self-interactions will ultimately either confirm or falsify that the Brout-Englert-Higgs mechanism of the SM is responsible for EWSB and the flavour hierarchy of the SM. The trilinear coupling ($\lambda_{\PHiggs\PHiggs\PHiggs}$) can be determined at the LHC by measuring the rate for $\PHiggs$ boson pair ($\PHiggs\PHiggs$) production. The measurement is challenging because of the small signal cross section, which results from the destructive interference of two competing production processes, and because of sizeable backgrounds. The leading order (LO) Feynman diagrams for SM $\PHiggs\PHiggs$ production are shown in Fig.~\ref{fig:FeynmanDiagrams_smHH}. The cross section amounts to about $\sigma = 34$~fb in proton-proton collisions at $\sqrt{s}=13$~\ensuremath{\textrm{TeV}}\xspace centre-of-mass energy. The ``triangle'' diagram shown on the left depends on $\lambda_{\PHiggs\PHiggs\PHiggs}$, while the ``box'' diagram shown on the right does not. The quartic coupling is not accessible at the LHC, as the cross section of the corresponding process, triple $\PHiggs$ boson production, is too small to be measured even with the large dataset that is expected to be collected by the end of the LHC operation. \begin{figure}[h!]
\setlength{\unitlength}{1mm} \begin{center} \begin{picture}(180,34)(0,0) \put(1.5, 0.0){\mbox{\includegraphics*[height=34mm] {figures/feynman_nonresonant_triangle.pdf}}} \put(81.5, 0.0){\mbox{\includegraphics*[height=34mm] {figures/feynman_nonresonant_box.pdf}}} \end{picture} \end{center} \caption{ LO Feynman diagrams for $\PHiggs\PHiggs$ production within the SM.} \label{fig:FeynmanDiagrams_smHH} \end{figure} Deviations of $\lambda_{\PHiggs\PHiggs\PHiggs}$ from its SM value of $\lambda_{\textrm{SM}}=\frac{m_{\PHiggs}^2}{2 \, v}$, referred to as anomalous $\PHiggs$ boson self-couplings, alter the interference between the two diagrams, resulting in a change in the $\PHiggs\PHiggs$ production cross section and a change in the distribution of the mass, $m_{\PHiggs\PHiggs}$, of the $\PHiggs$ boson pair. Regardless of the value of $\lambda_{\PHiggs\PHiggs\PHiggs}$, a broad distribution in $m_{\PHiggs\PHiggs}$ is expected, motivating the convention to refer to the interference of the box and triangle diagrams as ``non-resonant'' $\PHiggs\PHiggs$ production. The shape of the distribution in $m_{\PHiggs\PHiggs}$ provides a handle to determine $\lambda_{\PHiggs\PHiggs\PHiggs}$, complementary to measuring the $\PHiggs\PHiggs$ production cross section. Various scenarios beyond the SM feature anomalous $\PHiggs$ boson self-couplings, for example two-Higgs-doublet models~\cite{Branco:2011iw}, the minimal supersymmetric extension of the SM (MSSM)~\cite{Gunion:1989we}, and models with composite $\PHiggs$ bosons~\cite{Grober:2010yv,Contino:2012xk}. The prospects for improving the sensitivity to determine $\lambda_{\PHiggs\PHiggs\PHiggs}$ by utilising information on the mass of the $\PHiggs$ boson pair have been studied in events in which the $\PHiggs$ boson pair decays via $\PHiggs\PHiggs \to \PW\PW\PW\PW$, with subsequent decay of the $\PW$ bosons to electrons, muons, or jets, in Refs.~\cite{Baur:2002rb,Baur:2002qd}.
The information that can be extracted from the distribution in $m_{\PHiggs\PHiggs}$ is limited, however, by the fact that the distribution in $m_{\PHiggs\PHiggs}$ changes only moderately with $\lambda_{\PHiggs\PHiggs\PHiggs}$. The rate for $\PHiggs\PHiggs$ production may be enhanced significantly if an as yet undiscovered heavy resonance $\textrm{X}$ decays into pairs of $\PHiggs$ bosons. Several models beyond the SM give rise to such decays, for example Higgs portal models~\cite{Englert:2011yb,No:2013wsa} and models involving warped extra dimensions~\cite{Randall:1999ee}, as well as two-Higgs-doublet models and models with composite Higgs bosons. If the lifetime $t$ of the resonance is sufficiently large, $t \gtrsim 10^{-25}~\textrm{s} \cdot \left( 100~\textrm{GeV}/m_{\textrm{X}} \right)$, where $m_{\textrm{X}}$ denotes the mass of the resonance, the distribution in $m_{\PHiggs\PHiggs}$ is expected to exhibit a narrow peak at $m_{\textrm{X}}$. In this paper we present an algorithm for reconstructing the mass $m_{\PHiggs\PHiggs}$ of the $\PHiggs$ boson pair in events in which the $\PHiggs$ boson pair originates from the decay of a heavy resonance $\textrm{X}$ and decays via $\PHiggs\PHiggs \to \Pgt\Pgt\Pgt\Pgt$, with subsequent decay of the $\Pgt$ leptons via $\Pgt \to \ensuremath{\Pe \, \APnu_{\kern-0.10em \Pe} \, \Pnu_{\kern-0.10em \Pgt}}\xspace$, $\Pgt \to \ensuremath{\Pgm \, \APnu_{\kern-0.10em \Pgm} \, \Pnu_{\kern-0.10em \Pgt}}\xspace$, or $\Pgt \to \textrm{hadrons} + \Pnut$. We refer to $\Pgt$ decays to an electron or muon (to hadrons) as ``leptonic'' (``hadronic'') $\Pgt$ decays. The system of hadrons produced in a hadronic $\Pgt$ decay is denoted by the symbol $\ensuremath{\Pgt_{\textrm{h}}}\xspace$. The decay of $\PHiggs$ boson pairs to four $\Pgt$ leptons ($\PHiggs\PHiggs \to \Pgt\Pgt\Pgt\Pgt$) has not been discussed in the literature so far. This decay channel has a small branching fraction, but is expected to benefit from comparatively low backgrounds.
The resolution on $m_{\PHiggs\PHiggs}$ achieved by our algorithm varies between $7$ and $22\%$, depending on the mass of the resonance. We expect that the reconstruction of $m_{\PHiggs\PHiggs}$ will significantly improve the separation of the $\PHiggs\PHiggs \to \Pgt\Pgt\Pgt\Pgt$ signal from residual backgrounds, thereby increasing the sensitivity either to find evidence for the presence of such a signal in the LHC data or to set stringent exclusion limits. The reconstruction of $m_{\PHiggs\PHiggs}$ in $\PHiggs\PHiggs \to \Pgt\Pgt\Pgt\Pgt$ events is based on the formalism, developed in Ref.~\cite{SVfitMEM}, for treating $\Pgt$ lepton decays in the so-called matrix element (ME) method~\cite{Kondo:1988yd,Kondo:1991dw}. The algorithm presented in this paper does not employ the full ME treatment, but is based on a simplified likelihood approach. The simplified approach is motivated by the studies performed in Ref.~\cite{SVfitMEM}: for the task of reconstructing the mass of the $\PHiggs$ boson in events containing a single $\PHiggs$ boson that decays via $\PHiggs \to \Pgt\Pgt$, the difference in mass resolution between the approximate likelihood treatment and the full ME formalism was found to be small, while the likelihood approach provides a significant reduction in computing time. Our algorithm for reconstructing the mass of the $\PHiggs$ boson pair in $\PHiggs\PHiggs \to \Pgt\Pgt\Pgt\Pgt$ events is presented in Section~\ref{sec:algorithm}. The resolution achieved by the algorithm in reconstructing $m_{\PHiggs\PHiggs}$ for hypothetical resonances $\textrm{X}$ of different mass is studied in Section~\ref{sec:performance}. The paper concludes with a summary in Section~\ref{sec:summary}. \section{Performance} \label{sec:performance} The performance of the algorithm is quantified in terms of the resolution achieved in reconstructing $m_{\PHiggs\PHiggs}$.
The resolution is studied using simulated samples of events in which a heavy resonance $\textrm{X}$ decays into a pair of $\PHiggs$ bosons, and the $\PHiggs$ bosons subsequently decay to four $\Pgt$ leptons. Samples are produced for $m_{\textrm{X}} = 300$, $500$, and $800$~\ensuremath{\textrm{GeV}}\xspace. We expect the resolution to be similar for resonances of spin $0$ and spin $2$, but focus on studying resonances of spin $0$ in this paper. Events are generated for proton-proton collisions at $\sqrt{s} = 13$~\ensuremath{\textrm{TeV}}\xspace centre-of-mass energy, using the leading-order program MadGraph, version MadGraph\_aMCatNLO 2.3.2.2~\cite{MadGraph_aMCatNLO}, with the NNPDF3.0 set of parton distribution functions~\cite{NNPDF1,NNPDF2,NNPDF3}. Parton shower and hadronization processes are modelled using the generator PYTHIA 8.2~\cite{pythia8} with the tune CUETP8M1~\cite{PYTHIA_CUETP8M1tune_CMS}. The decays of $\Pgt$ leptons, including polarization effects, are modelled by PYTHIA. We select events in the decay channel $\textrm{X} \to \PHiggs\PHiggs \to \Pgt\Pgt\Pgt\Pgt \to \Plepton\Plepton\ensuremath{\Pgt_{\textrm{h}}}\xspace\tauh$ and study them at generator level. Reconstruction effects are simulated by varying the generator-level quantities within their experimental resolution, which we perform by randomly sampling from the TFs $W_{\ensuremath{\textrm{h}}}( \ensuremath{p_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}} | \ensuremath{\hat{p}_{\textrm{T}}}\xspace^{\ensuremath{\textrm{vis}}} )$ and $W_{\ensuremath{\textrm{rec}}}( \ensuremath{p_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{p_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} | \ensuremath{\hat{p}_{\textrm{x}}}\xspace^{\ensuremath{\textrm{rec}}},\ensuremath{\hat{p}_{\textrm{y}}}\xspace^{\ensuremath{\textrm{rec}}} )$ described in Section~\ref{sec:algorithm}.
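A minimal sketch of this smearing procedure is given below. The Gaussian resolutions used here are illustrative placeholders only, not the actual transfer functions $W_{\ensuremath{\textrm{h}}}$ and $W_{\ensuremath{\textrm{rec}}}$ of the algorithm:

```python
# Hedged sketch of the smearing procedure described above: generator-level
# quantities are varied within an assumed experimental resolution by random
# sampling. The Gaussian widths below are illustrative placeholders for the
# transfer functions W_h and W_rec.

import random

def smear_tauh_pt(pt_gen, rel_resolution=0.10, rng=random):
    """Smear a generator-level tau_h pT [GeV] by a relative Gaussian width,
    truncating at zero so the smeared pT stays physical."""
    return max(0.0, rng.gauss(pt_gen, rel_resolution * pt_gen))

def smear_met(px_gen, py_gen, sigma=10.0, rng=random):
    """Smear the x and y components of the missing momentum [GeV] with an
    absolute Gaussian width sigma."""
    return rng.gauss(px_gen, sigma), rng.gauss(py_gen, sigma)
```

Sampling repeatedly from such TFs and re-running the mass reconstruction on each smeared event yields the resolution estimates quoted below.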
The electrons, muons, and $\ensuremath{\Pgt_{\textrm{h}}}\xspace$ are required to satisfy conditions on $\ensuremath{p_{\textrm{T}}}\xspace$ and $\eta$, which are typical for data analyses performed by the ATLAS and CMS collaborations during LHC Run $2$. Electrons (muons) are required to be within $\vert\eta\vert < 2.5$ ($\vert\eta\vert < 2.4$). The lepton of higher (lower) $\ensuremath{p_{\textrm{T}}}\xspace$ is required to pass a $\ensuremath{p_{\textrm{T}}}\xspace$ threshold of $25$ ($15$)~\ensuremath{\textrm{GeV}}\xspace. Each of the two $\ensuremath{\Pgt_{\textrm{h}}}\xspace$ is required to satisfy $\ensuremath{p_{\textrm{T}}}\xspace > 20$~\ensuremath{\textrm{GeV}}\xspace and $\vert\eta\vert < 2.3$. The resolution on $m_{\PHiggs\PHiggs}$ is studied in terms of the ratio between the reconstructed value of $m_{\PHiggs\PHiggs}$ and the true mass $m_{\PHiggs\PHiggs}^{\textrm{true}}$ of the $\PHiggs$ boson pair. Distributions in this ratio are shown in Fig.~\ref{fig:massDistributions}. They are shown separately for chosen pairings (those that maximize the likelihood function $\mathcal{P}$) and for discarded (other) pairings, and for events in which electrons, muons, and $\ensuremath{\Pgt_{\textrm{h}}}\xspace$ are correctly associated to $\PHiggs$ boson pairs as well as events with spurious pairings. The correct pairing is chosen in $87$, $98$, and $>99\%$ of the events with $m_{\textrm{X}} = 300$, $500$, and $800$~\ensuremath{\textrm{GeV}}\xspace, respectively. The resolution on $m_{\PHiggs\PHiggs}$ for the chosen pairings amounts to $22$, $7$, and $9\%$, relative to the true mass of the $\PHiggs$ boson pair. The mass resolution for resonances of $m_{\textrm{X}} = 300$~\ensuremath{\textrm{GeV}}\xspace, near the kinematic threshold $m_{\textrm{X}} \approx 2 \, m_{\PHiggs}$, is limited by the fact that the wrong pairing is chosen in $13\%$ of events. 
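One common convention for extracting such a relative resolution from the ratio distribution is half the width of its central 68\% interval; a minimal sketch in plain Python (the precise definition used for the quoted numbers may differ):

```python
def relative_resolution(ratios):
    """Half-width of the central 68% interval of m_rec / m_true,
    one common way to quote a relative mass resolution."""
    s = sorted(ratios)
    n = len(s)
    lo = s[int(round(0.16 * (n - 1)))]
    hi = s[int(round(0.84 * (n - 1)))]
    return 0.5 * (hi - lo)

# A toy, perfectly symmetric ratio distribution around 1.
toy = [0.8, 0.9, 1.0, 1.1, 1.2]
res = relative_resolution(toy)
```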
For events in which the correct pairing is chosen, the resolution on $m_{\PHiggs\PHiggs}$ amounts to $4$, $6$, and $8\%$ for $m_{\textrm{X}} = 300$, $500$, and $800$~\ensuremath{\textrm{GeV}}\xspace, respectively. We leave the optimization of the choice of the correct pairing for resonances of low mass to future studies. \begin{figure} \setlength{\unitlength}{1mm} \begin{center} \begin{picture}(180,212)(0,0) \put(-4.5, 152.0){\mbox{\includegraphics*[height=60mm] {plots/makeSVfit4tauPlots_x_to_hh_300_chosen_log.pdf}}} \put(81.5, 152.0){\mbox{\includegraphics*[height=60mm] {plots/makeSVfit4tauPlots_x_to_hh_300_discarded_log.pdf}}} \put(-4.5, 78.0){\mbox{\includegraphics*[height=60mm] {plots/makeSVfit4tauPlots_x_to_hh_500_chosen_log.pdf}}} \put(81.5, 78.0){\mbox{\includegraphics*[height=60mm] {plots/makeSVfit4tauPlots_x_to_hh_500_discarded_log.pdf}}} \put(-4.5, 4.0){\mbox{\includegraphics*[height=60mm] {plots/makeSVfit4tauPlots_x_to_hh_800_chosen_log.pdf}}} \put(81.5, 4.0){\mbox{\includegraphics*[height=60mm] {plots/makeSVfit4tauPlots_x_to_hh_800_discarded_log.pdf}}} \put(38.0, 148.0){\small (a)} \put(124.0, 148.0){\small (b)} \put(38.0, 74.0){\small (c)} \put(124.0, 74.0){\small (d)} \put(38.0, 0.0){\small (e)} \put(124.0, 0.0){\small (f)} \end{picture} \end{center} \caption{ Distributions in $m_{\PHiggs\PHiggs}$, relative to the true mass $m_{\PHiggs\PHiggs}^{\textrm{true}} = m_{\textrm{X}}$ of the $\PHiggs$ boson pair, in events in which a heavy resonance $\textrm{X}$ of mass $m_{\textrm{X}} = 300$ (a,b), $500$ (c,d), and $800$~\ensuremath{\textrm{GeV}}\xspace (e,f) decays via $\textrm{X} \to \PHiggs\PHiggs \to \Pgt\Pgt\Pgt\Pgt \to \Plepton\Plepton\ensuremath{\Pgt_{\textrm{h}}}\xspace\tauh$. The distributions are shown separately for the chosen (a,c,e) and for the discarded (b,d,f) pairings, and are further subdivided into correct and spurious pairings. The axis of abscissae ranges from $0.2$ to $5$. 
} \label{fig:massDistributions} \end{figure} The algorithm typically requires $2$~s of CPU time per event to reconstruct $m_{\PHiggs\PHiggs}$. \section{Summary} \label{sec:summary} An algorithm for the reconstruction of the mass $m_{\PHiggs\PHiggs}$ of the $\PHiggs$ boson pair in events in which the Higgs boson pair decays via $\PHiggs\PHiggs \to \Pgt\Pgt\Pgt\Pgt$ and the $\Pgt$ leptons subsequently decay into electrons, muons, or hadrons has been presented. The resolution on $m_{\PHiggs\PHiggs}$ has been studied in simulated events and amounts to $22$, $7$, and $9\%$, relative to the true mass of the $\PHiggs$ boson pair, for events containing resonances $\textrm{X}$ of mass $m_{\textrm{X}} = 300$, $500$, and $800$~\ensuremath{\textrm{GeV}}\xspace, respectively. The mass resolution for resonances of low mass, near the kinematic threshold $m_{\textrm{X}} \approx 2 \, m_{\PHiggs}$, is limited by the fact that the algorithm chooses the wrong association of electrons, muons, and $\ensuremath{\Pgt_{\textrm{h}}}\xspace$ to $\PHiggs$ boson pairs in $13\%$ of events. The probability to choose a spurious pairing decreases for resonances of higher mass and becomes negligible for $m_{\textrm{X}} \gtrsim 500$~\ensuremath{\textrm{GeV}}\xspace. The optimization of the choice of the correct pairing for resonances of low mass is left to future studies. We expect that our algorithm will be useful in searches for heavy resonances decaying to $\PHiggs$ boson pairs at the LHC.
\section{Introduction} Robotic dexterous hands provide a promising basis for supplanting human hands in the execution of tedious and dangerous tasks. While autonomous manipulation with dexterous hands still has to cope with complex perception, teleoperation remains superior to intelligent programming when it comes to making fast decisions and dealing with corner cases. Unlike contacting or wearable device-based teleoperation, markerless vision-based teleoperation \cite{tele2007advances} offers the advantages of showing natural human-limb motions and of being less invasive. Analytical vision-based teleoperation falls into two categories: model- and appearance-based approaches. Model-based approaches \cite{markerless2gripper, hand-arm} provide continuous solutions but are computationally costly and typically depend on the availability of a multicamera system \cite{model-based-vision-teleop}. Conversely, appearance-based approaches \cite{phdthesis, gesture_teleop} recognize a discrete number of hand poses, typically corresponding to the method's training set, without high computational cost and hardware complexity. Recently, an increasing number of researchers have been focusing on data-driven vision-based teleoperation methods, which first estimate the 3D hand pose or recognize the hand gesture class with a deep convolutional neural network (CNN) and then map the estimated locations or corresponding poses to the robot. However, all these solutions not only depend strongly on the accuracy of the hand pose estimation or the classification but also suffer from the time cost of post-processing. We instead seek to take a noisy depth image of the human hand as input and produce joint angles of the robot hand as output by training a deep CNN. End-to-end vision-based teleoperation can be a natural and intuitive way to manipulate a remote robot and is friendly to novice teleoperators. 
Therefore, it is essential to design an efficient network that can learn the corresponding robot pose features in the human pose space. Since the end-to-end method depends on massive human-robot teleoperation pairings, we also aim to explore an efficient method for collecting synchronized hand data for both the robot and the human. \begin{figure}[t] \includegraphics[width=0.35\textheight]{img/intro_new-crop.pdf} \caption{Our vision-based teleoperation architecture. (Center) TeachNet is trained offline to predict robot joint angles from depth images of a human hand using our 400k pairwise human-robot hand dataset. (Left) Depth images of the operator's hand are captured by a depth camera and fed to TeachNet. (Right) The joint angles produced by TeachNet are executed on the robot to imitate the operator's hand pose. } \label{intro} \vskip -0.15in \end{figure} In this paper, we present a novel scheme for teleoperating the Shadow dexterous hand based on a single depth image (see Fig. \ref{intro}). Our primary contributions are: 1) We propose an end-to-end teacher-student network (TeachNet), which learns the kinematic mappings between the robot and the human hand. 2) We build a pairwise human-robot hand dataset that includes pairs of depth images in the same gesture, as well as the corresponding joint angles of the robot hand. 3) We design an optimized mapping method that matches the Cartesian positions and link directions of the Shadow hand to the human hand pose and properly takes into account possible self-collisions. In the network evaluation, TeachNet achieves higher accuracy and lower error than other end-to-end baselines. As illustrated in our robotic experiments, our method allows the Shadow robot to imitate human gestures and to complete grasp tasks significantly faster than state-of-the-art data-driven vision-based teleoperation. 
\section{Related Work} \noindent \textbf{Markerless Vision-Based Teleoperation.} Human teleoperation of robots has usually been implemented through contacting devices such as tracking sensors \cite{SCHUNKS5FHteleop}, gloves instrumented with angle sensors \cite{fang2015robotic, fang20183d}, inertial sensors \cite{miller2004motion}, and joysticks \cite{cho2010teleoperation}. Stanton \textit{et al}.~ \cite{humanoid2012pairs} propose end-to-end teleoperation of a 23 degree-of-freedom (DOF) robot by training one feed-forward neural network per DOF to learn the mapping between sensor data from a motion capture suit and the angular position of the robot actuator to which each network is allocated. However, wearable devices are customized for a certain size range of the human hand or body, and contacting methods may hinder natural human-limb motion. Compared to these methods, markerless vision-based teleoperation is less invasive and permits natural and comfortable gestures. Visual model-based methods, such as \cite{markerless2gripper, hand-arm}, compute continuous 3D positions and orientations of the thumb and index finger from segmented images based on a camera system and control a parallel-jaw gripper mounted on a six-axis robot arm. Romero \cite{phdthesis} classifies human grasps into grasp classes and approaches based on human hand images and then maps them to a discrete set of corresponding robot grasp classes following the external observation paradigm. Compared to analytical methods, data-driven techniques place more weight on object representation and perceptual processing, e.g., feature extraction, object recognition or classification, and pose estimation. Michel \textit{et al}.~ \cite{markerless_humanpose} provide a teleoperation method for a NAO humanoid robot that tracks human body motion from markerless visual observations and then solves an inverse kinematics process. 
However, this method does not consider the physical constraints and joint limits of the robot, so it easily generates poses that the robot cannot reach. Moreover, these methods strongly depend on the accuracy of the hand pose estimation or the classification and spend considerable time on post-processing. In this work, we aim to design an end-to-end vision-based CNN that generates continuous robot poses and provides a fast and intuitive teleoperation experience. \noindent \textbf{Depth-Based 3D Hand Pose Estimation.} 3D hand pose estimation is typically one of the essential research fields in vision-based teleoperation. Although the field of 3D hand pose estimation has advanced rapidly, isolated 3D hand pose estimation only achieves low mean errors (10 mm) in the viewpoint range of [70, 120] degrees \cite{depth_survey}. According to the representation of the output pose, 3D hand pose estimation methods can be divided into detection-based and regression-based methods. Detection-based methods \cite{v2v} give a probability density map for each joint, while regression-based methods \cite{deepprior++, ren} directly map the depth image to the joint locations or the joint angles of a hand model. Regardless of whose hand the output joint pose belongs to, a regression-based network is similar to our end-to-end network. \begin{figure}[ht] \centering \includegraphics[width=0.35\textheight]{img/network5-crop.pdf} \caption{TeachNet Architecture. Top: human branch, Bottom: robot branch. The input depth images $I_H$ and $I_R$ are fed to the corresponding branch, which predicts the robot joint angles $\Theta_H$, $\Theta_R$. The residual module is a convolutional neural network with a similar architecture as ResNet~\cite{resnet}. FC denotes a fully-connected layer, BN denotes a batch normalization layer, R denotes a Rectified Linear Unit. 
} \label{teachnet} \vskip -0.15in \end{figure} \noindent \textbf{Master-Slave Pairing in Teleoperation.} To learn the pose features of the robot from images of the human hand, we have to consider how to obtain a vast number of human-robot pairings. Prior work \cite{humanoid2012pairs, gussian_pairs, tcn} acquired master-slave pairings by asking a human operator to imitate the robot motion synchronously. Pairing data collected in this way is costly and typically comes with noisy correspondences. Moreover, there is no exact correspondence between the human and the robot, because physiological differences make the imitation non-trivial and subjective to the imitator. In fact, the robot state is more accessible and relatively stable compared to the human hand, and many human hand datasets already exist. Since training on real images may require significant data collection time, an alternative approach is to learn on simulated images and to adapt the representation to real data \cite{openai}. Building on these observations, we propose a novel scheme for generating human-robot pairings: we use an existing dataset of labeled human hand depth images, manipulate the robot and record the corresponding joint angles and images in simulation, and perform extensive evaluations on a physical robot. \noindent \textbf{Teleoperation Mapping Methods.} Conventional teleoperation mapping methods are divided into three main categories: joint mapping, which is useful for power grasps \cite{joint_teleop}; fingertip mapping, which is suitable for precision grasps \cite{optimizedfingertip}; and pose mapping, which interprets the function of the human grasp rather than replicating the hand position \cite{meeker2018intuitive}. However, in most cases considering only one type of mapping method is not enough \cite{hybrid_teleop}. 
For example, fingertip mapping neglects the position and orientation of the phalanges and does not consider the particular mechanical differences between the slave and the master. \section{Teacher-Student Network} Solving joint regression problems directly from human images is quite challenging because the robot hand and the human hand occupy two different domains. Specifically, imagine that we have an image $I_R$ of a robotic hand and an image $I_H$ of a human hand, where the robotic hand in the image acts exactly the same as the human hand. The problem of mapping the human hand image to the corresponding robotic joints can be formulated as: \begin{align}\begin{split}\label{human_regress} & f_{feat}: I_H \in \mathbb{R}^2 \rightarrow z_{pose} \\ & f_{regress}: z_{pose} \rightarrow \Theta\text{.} \end{split}\end{align} To better process the geometric information in the input depth image and the complex constraints on joint regression, we adopt an encoder-decoder style deep neural network. The upper branch in Fig. \ref{teachnet} illustrates the network architecture we use. However, the human hand and the Shadow hand come from different domains, so it can be difficult for $f_{feat}$ to learn an appropriate latent feature $z_{pose}$ in pose space. In contrast, the mapping from $I_R$ to the joint target $\Theta$ is more natural, as it is exactly a well-defined hand pose estimation problem. Intuitively, we believe that for a paired human and robotic image, the latent pose features $z_{pose}$ should be encouraged to be consistent, as they represent the same hand pose and are finally mapped to the same joint target. Also, based on the observation that the mapping from $I_R$ to $\Theta$ performs better than that from $I_H$ (these preliminary results can be found in Fig. \ref{angle_eval}), the encoder $f_{feat}$ of $I_R$ can extract better pose features, which can significantly improve the regression results of the decoder. 
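The consistency idea sketched above can be illustrated in plain Python. The vectors and the combined objective below are shape-level stand-ins for the network's tensors and training losses; the values are hypothetical:

```python
def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def l2(a, b):
    """Euclidean distance between two latent feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical latent features of a paired human/robot image and the
# predicted vs. ground-truth joint angles of the human branch.
z_human = [0.2, -0.1, 0.5]
z_robot = [0.2, -0.1, 0.5]      # same pose, so the features should agree
theta_pred = [0.10, 0.32]
theta_true = [0.12, 0.30]

alpha = 1.0                      # weight of the consistency term
loss = mse(theta_pred, theta_true) + alpha * l2(z_human, z_robot)
```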
With these considerations, we propose a novel teacher-student network (TeachNet) to tackle the vision-based teleoperation problem~\eqref{human_regress} in an end-to-end fashion. TeachNet consists of two branches: the robot branch, which plays the role of a teacher, and the human branch, which acts as the student. \noindent \textbf{Joint angle loss.} Each branch is supervised with a mean squared error (MSE) loss $\mathcal{L}_{ang}$: \begin{equation} \label{angloss} \mathcal{L}_{ang} = \|\Theta - J \|^2\text{,} \end{equation} where $J$ denotes the ground-truth joint angles. Besides the encoder-decoder structure that maps the input depth image to a joint prediction, we define a consistency loss $\mathcal{L}_{cons}$ between the two latent features $z_{H}$ and $z_{R}$ to exploit the geometrical resemblance between the human hand and the robotic hand. Therefore, $\mathcal{L}_{cons}$ forces the human branch to be supervised by a pose space shared with the robot branch. To explore the most effective alignment mechanism, we design two kinds of consistency losses and two different alignment positions: \noindent \textbf{Hard consistency loss.} The most intuitive mechanism for feature alignment is to add an extra distance loss over the latent features of the two branches: \begin{align}\label{hard_loss} \mathcal{L}_{cons\_h} = \|z_{H} - z_{R} \|_2\text{.} \end{align} \noindent \textbf{Soft consistency loss.} Sometimes, \eqref{hard_loss} can distract the network from learning hand pose representations, especially in the early training stage. Inspired by~\cite{villegas2018neural}, we feed $z_{H}$ and $z_{R}$ into a discriminator network $D$~\cite{gan} that computes a \textit{realism score} for \textit{real} and \textit{fake} pose features. 
The soft consistency loss is basically the negative of this score: \begin{align}\label{soft_loss} \mathcal{L}_{cons\_s} = \log\left(1-D(z_{H})\right)\text{.} \end{align} As for the alignment position, we propose \textit{early teaching} and \textit{late teaching}, respectively. In the former, we put the alignment layer after the encoder and embedding module, while in the latter the alignment layer is positioned on the last but one layer of the whole model (which means that the regression module contains only one layer). In the following, we refer to early teaching by $\mathcal{L}_{cons\_s}$ as Teach Soft-Early, late teaching by $\mathcal{L}_{cons\_s}$ as Teach Soft-Late, early teaching by $\mathcal{L}_{cons\_h}$ as Teach Hard-Early, and late teaching by $\mathcal{L}_{cons\_h}$ as Teach Hard-Late. We also introduce an auxiliary loss to further improve our teleoperation model: \noindent \textbf{Physical loss.} The physical loss $\mathcal{L}_{phy}$, which enforces the physical constraints and joint limits, is defined by: \begin{equation} \label{phyloss} \mathcal{L}_{phy}(\Theta)=\sum\limits_i[\max(0, \theta_{min} - \Theta_i) + \max(0, \Theta_i - \theta_{max})]\text{.} \end{equation} Overall, the complete training objective for each branch is: \begin{equation} \label{teachloss} \mathcal{L}_{teach}(\Theta)=\mathcal{L}_{ang} + \mathcal{L}_{phy} \end{equation} \begin{equation} \label{studloss} \mathcal{L}_{stud}(\Theta)=\mathcal{L}_{ang} + \alpha \, \mathcal{L}_{cons} + \mathcal{L}_{phy}\text{,} \end{equation} where $\alpha=1$ for the hard consistency loss and $\alpha=0.1$ for the soft consistency loss. \section{Dataset Generation} \label{dataset} Training TeachNet to learn the kinematic mapping between the human hand and the robot hand relies on a massive dataset of human-robot pairings. We achieve this by using the off-the-shelf BigHand2.2M dataset \cite{bighand2} and an optimized mapping method, following the pipeline of Fig. \ref{map}. 
With this pipeline, we collect a training dataset that contains 400K pairs of simulated robot depth images and human hand depth images, with corresponding robot joint angles and poses. \begin{figure}[!t] \centering \includegraphics[width=0.35\textheight]{img/dataset2-crop.pdf} \caption{Pipeline for dataset generation. (Top left) The human hand model has 21 joints and moves with 31 degrees of freedom in the BigHand2.2M dataset. (Bottom left) A depth image example from the BigHand2.2M dataset. (Middle) Optimized mapping method from the human hand to the Shadow hand. (Top right) The Shadow hand with BioTac sensors has 24 joints and moves with 19 degrees of freedom. (Bottom right) The corresponding RGB and depth images of Shadow gestures obtained from Gazebo. The colored circles denote the joint keypoint positions on the hand, and the green triangles denote the common reference frame $F$.} \label{map} \vskip -0.15in \end{figure} \subsection{Human-Robot Hand Kinematic Configuration} The Shadow Dexterous Hand \cite{hand2013e1} used in this work is motor-controlled and equipped with five fingers; its kinematic chain is shown on the right side of Fig. \ref{error}. Each finger has a BioTac tactile sensor attached, which replaces the last phalanx and removes the controllability of the last joint. Each finger has four joints, the distal, middle, proximal, and metacarpal joint, but the first joint of each finger is stiff. The little finger and the thumb are provided with an extra joint for holding objects. In total, this makes 17 DOF in the fingers, which, together with two in the wrist, gives 19 DOF. In contrast to the robot hand, the human hand model from the BigHand2.2M dataset has 21 joints and can move with 31 DOF, as shown in Fig. \ref{map}. The main kinematic differences between the robot hand and the human hand are the limited angle ranges of the robot joints and the structure of the wrist joints. 
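The limited joint angle ranges noted above are exactly what the physical loss $\mathcal{L}_{phy}$ of the training objective penalizes; a minimal plain-Python sketch of such a joint-limit penalty (the limit values are hypothetical):

```python
def physical_loss(theta, theta_min, theta_max):
    """Joint-limit penalty: zero for angles inside [theta_min, theta_max]
    and growing linearly with the amount of violation outside."""
    return sum(max(0.0, lo - t) + max(0.0, t - hi)
               for t, lo, hi in zip(theta, theta_min, theta_max))

# Hypothetical limits (radians) for three joints.
lo = [0.0, -0.35, 0.0]
hi = [1.57, 0.35, 1.57]

in_range = physical_loss([0.5, 0.0, 1.0], lo, hi)    # no violation
violated = physical_loss([1.67, 0.0, 1.0], lo, hi)   # 0.1 rad over the limit
```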
To reduce the dissimilarity between the Shadow hand and the human hand, the two wrist joints of the Shadow hand are fixed at $0$ rad, and only 15 joint keypoints, namely the TIP, PIP, and MCP of each finger of the Shadow hand, are considered. \subsection{Optimized Mapping Method} Effectively mapping the robot pose from the human hand pose plays a significant role in our training dataset. In order to imitate the human hand pose, we propose an optimized mapping method that integrates position mapping and orientation mapping and properly takes into account possible self-collisions. First, we use a common reference frame $F$ located at the human wrist joint and 34 mm above the z-axis of the robot wrist joint. Note that 34 mm is the height from the wrist joint to the base joint of the thumb. These locations are chosen because they lie at positions of high kinematic similarity. Second, we enforce position mapping for the fingertips with a strong weight $\omega_{pf}$ and for the PIP joints with a minor weight $\omega_{pp}$. Third, direction mapping with weight $\omega_{d}$ is applied to the five proximal phalanges and the distal phalanx of the thumb. In our dataset, we set $\{\omega_{pf}, \omega_{pp}, \omega_{d}\} = \{1, 0.2, 0.2\}$. Taking advantage of the BioIK solver \cite{bioik} to determine the robot joint angles $\Theta \in R^{17}$, the robot executes the movements in Gazebo, and self-collisions are checked by MoveIt. 
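The weighted mapping just described can be sketched as a single scalar objective over the keypoints. The helper below is a plain-Python approximation of the weighted position and direction goals handed to BioIK, with made-up toy coordinates:

```python
def dist(p, q):
    """Euclidean distance between two 3D points."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def direction_error(u, v):
    """1 - cosine similarity between two non-zero link direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: sum(a * a for a in w) ** 0.5
    return 1.0 - dot / (norm(u) * norm(v))

def mapping_cost(tips_r, tips_h, pips_r, pips_h, dirs_r, dirs_h,
                 w_pf=1.0, w_pp=0.2, w_d=0.2):
    """Weighted sum of fingertip position, PIP position, and link
    direction errors between the robot (r) and human (h) hands."""
    cost = w_pf * sum(dist(p, q) for p, q in zip(tips_r, tips_h))
    cost += w_pp * sum(dist(p, q) for p, q in zip(pips_r, pips_h))
    cost += w_d * sum(direction_error(u, v) for u, v in zip(dirs_r, dirs_h))
    return cost

# A perfectly matched (toy) pose has zero cost.
tips = [(0.0, 0.0, 0.17)]
pips = [(0.0, 0.0, 0.10)]
dirs = [(0.0, 0.0, 1.0)]
perfect = mapping_cost(tips, tips, pips, pips, dirs, dirs)
```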
In case BioIK gives a self-colliding output, we define a cost function $F_{cost}$ that measures the distance between two links \begin{figure}[htb] \centering \includegraphics[width=0.16\textheight]{img/dataset_result_bright-crop.pdf} \caption{The Shadow depth images from nine viewpoints corresponding to one human gesture in our dataset.} \label{dataset_result} \vskip -0.15in \end{figure} \begin{equation} \label{collision} F_{cost}=\max(0, R_{col} - \|P_i - P_j \|)\text{,} \end{equation} where $P_i$ and $P_j$ denote the positions of links $i$ and $j$, respectively, and $R_{col}$ is the minimum collision-free radius between two links. Considering that the BigHand2.2M dataset spans a wide range of viewpoints on the human hand, it is indispensable to increase the diversity of viewpoints in the robot data as well. Thus we collect visual samples of the robot through nine simulated depth cameras at different observation positions in Gazebo and record nine depth images for each pose simultaneously. As an example, in Fig.~\ref{dataset_result} we present the depth images of the robot from nine viewpoints corresponding to the human hand pose at the bottom left of Fig.~\ref{map}. \section{Experiment} \subsection{TeachNet Evaluation} \label{network baseline} We examined whether TeachNet can learn indicative visual representations of the kinematic structure of the human hand. The proposed TeachNet was evaluated on our paired dataset with the following experiments: 1) To explore the appropriate position of the alignment layer and the proper alignment method, we compared the four proposed network structures: Teach Soft-Early, Teach Soft-Late, Teach Hard-Early, and Teach Hard-Late. 2) To validate the significance of the alignment layer, we designed an ablation analysis by removing the consistency loss $\mathcal{L}_{cons}$ and separately training the single human branch and the single robot branch. We refer to these two baselines as Single Human and Single Robot, respectively. 
3) We compared our end-to-end method with a data-driven vision-based teleoperation method that maps the robot pose from the human joint locations obtained by 3D hand pose estimation. We refer to this baseline as the HandIK solution. Three evaluation metrics are used in this work: 1) the fraction of frames whose maximum/average joint angle errors are below a threshold; 2) the fraction of frames whose maximum/average joint distance errors are below a threshold; 3) the average angle error over all angles in $\Theta$. The input depth images of all network evaluations were extracted from the raw depth image as a fixed-size cube around the hand and resized to $100 \times 100$. Note that although we have nine views of Shadow images corresponding to each human pose, during the training of TeachNet we randomly chose one view of the Shadow images to feed into the robot branch. For the HandIK solution, we trained the DeepPrior++ network on our dataset; we chose DeepPrior++ because its architecture is similar to a single branch of TeachNet. We obtained the $21 \times 3$ human joint locations from DeepPrior++ and then used the same mapping method as in Section \ref{dataset} to acquire the joint angles of the Shadow hand. \begin{figure*}[htbp] \centering \includegraphics[width=1.0\textwidth]{img/curve_4merged_w_legend.pdf} \caption{The fraction of frames whose maximum/average joint angle/distance errors are below a threshold for the Teach Hard-Late approach and different baselines on our test dataset. The curves show that the Teach Hard-Late approach has the best accuracy over all evaluation metrics. } \vskip -0.15in \label{angle_eval} \end{figure*} \begin{figure*}[htbp] \centering \includegraphics[width=\textwidth]{img/error_bar_w_hand.pdf} \caption{(Left) Comparison of the average angle error on individual joints between the Teach Hard-Late approach and different baselines on our test dataset. 
FF means the first finger, LF the little finger, MF the middle finger, RF the ring finger, and TH the thumb. (Right) The kinematic chain of the Shadow hand. In this work, joint 1 of each finger is stiff.} \label{error} \vskip -0.15in \end{figure*} \begin{table}[ht] \centering \caption{Accuracy under High-Precision Conditions} \vskip -0.15in \begin{tabular}{cccc} \multicolumn{3}{c}{}\\ \hlineB{2} Max Err. & Single Human & Teach Soft-Early & Teach Soft-Late \\ \hline 0.1 rad & 21.24\% & 12.31\% & 12.77\% \\ 0.15 rad & 45.57\% & 38.06\% & 10.37\% \\ 0.2 rad & 69.08\% & 63.18\% & 26.16\% \\ \hlineB{2} \multicolumn{3}{c}{}\\ \hlineB{2} Max Err. & Teach Hard-Early & Teach Hard-Late & Hand IK \\ \hline 0.1 rad & 7.40\% & \textbf{24.63\%} & 0.00\% \\ 0.15 rad & 24.67\% & \textbf{50.11\%} & 0.14\% \\ 0.2 rad & 45.63\% & \textbf{72.04\%} & 0.62\% \\ \hlineB{2} \end{tabular} \label{tab:high_precision_acc} \vskip -0.2in \end{table} The comparative results, shown in Fig. \ref{angle_eval} and Fig. \ref{error}, indicate that the Single Robot method performs best on all evaluation metrics and is thus capable of acting as the training ``supervisor''. Meanwhile, the Teach Hard-Late method outperforms the other baselines, which verifies that the single human branch is enhanced by the additional consistency loss. In particular, under high-precision conditions, only the Teach Hard-Late approach surpasses the Single Human method, with an accuracy below a given maximum joint angle error that is on average $3.63\%$ higher (Table \ref{tab:high_precision_acc}). We infer that the late feature space of the depth images contains more useful information and that the MSE-based alignment provides stronger supervision in our case. The regression-based HandIK method shows the worst performance on all three metrics. 
The unsatisfying performance of the HandIK solution is due not only to our network learning a better representation of the hand features, but also to the fact that this method does not consider the kinematic structure and the specific limitations of the robot. Furthermore, direct joint angle regression should have decent accuracy on angles, since that is its learning objective, whereas the missing $\mathcal{L}_{phy}$ also gives rise to poor accuracy. Moreover, Fig. \ref{error} demonstrates that the second joint, the third joint, and the base joint of the thumb are harder to learn. These results arise mainly because 1) the fixed distal joints of the robot in our work affect the accuracy of the second and third joints; 2) these joints have a bigger range than other joints, especially the base joint of the thumb; and 3) there is a big discrepancy between the human thumb and the Shadow thumb. \subsection{Robotic Experiments} To verify the reliability and intuitiveness of our method, real-world experiments were performed with five adult subjects. The slave hand of our teleoperation system is the Shadow dexterous hand with the first joint of each finger fixed. The depth sensor is an Intel RealSense F200, which is suitable for close-range tracking. The poses of the teleoperators' right hands are limited to the viewpoint range of [70$^{\circ}$, 120$^{\circ}$] and the distance range of [15mm, 40mm] from the camera. Since vision-based teleoperation is susceptible to lighting conditions, all experiments were carried out under a light source that was as uniform and bright as possible. The average computation time of the Teach Hard-Late method is 0.1051~s (Alienware 15 with an Intel Core i7-4720HQ CPU). Code and video are available at \href{https://github.com/TAMS-Group/TeachNet_Teleoperation}{https://github.com/TAMS-Group/TeachNet\_Teleoperation}. 
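The threshold-accuracy metric used throughout this evaluation, the fraction of frames whose maximum joint angle error stays below a threshold (as in Table~\ref{tab:high_precision_acc}), reduces to a short helper; the error values below are hypothetical:

```python
def fraction_below(max_errors, threshold):
    """Fraction of frames whose maximum joint angle error (radians)
    is below the given threshold."""
    return sum(e < threshold for e in max_errors) / len(max_errors)

# Hypothetical per-frame maximum absolute joint angle errors.
errs = [0.05, 0.12, 0.18, 0.25, 0.09]
acc_01 = fraction_below(errs, 0.1)   # 2 of 5 frames
acc_02 = fraction_below(errs, 0.2)   # 4 of 5 frames
```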
\subsubsection{Simulation Experiments} The five novice teleoperators stood in front of the depth sensor, performed the digits 0-9 in American Sign Language as well as random common gestures in arbitrary order, and thereby teleoperated the simulated Shadow robot. The operators did not need to know the control mechanism of the robot and carried out the experiment naturally. Qualitative results of teleoperation with the Teach Hard-Late method are illustrated in Fig. \ref{simulation_result}. We can see that the Shadow hand vividly imitates the gestures of human hands of different sizes. These experiments demonstrate that TeachNet enables a robot hand to perform continuous, online imitation of the human hand without explicitly specifying any joint-level correspondences. Since the two wrist joints of the Shadow hand are fixed, it does not matter whether the depth sensor captures the teleoperator's wrist. \begin{figure}[ht] \centering \subfigure[Successful teleoperation results]{ \begin{minipage}{0.45\textwidth} \centering\label{good} \includegraphics[width=1\textwidth]{img/good_result_line-crop.pdf} \end{minipage}} \subfigure[Failed teleoperation results]{ \begin{minipage}{0.36\textwidth} \centering\label{bad} \includegraphics[width=1\textwidth]{img/bad_result-crop.pdf} \end{minipage}} \caption{Teleoperation results using the Shadow hand on real-world data.} \label{simulation_result} \end{figure} However, visible errors occurred mainly at the second and third joints of the fingers and the base joint of the thumb, probably caused by the special kinematic structure of the slave, occlusions, and uncertain lighting conditions. \subsubsection{Manipulation Experiments} We compared the Teach Hard-Late method with the DeepPrior++-based HandIK method on the slave robot. To simplify our experiments, we set the control mode of the robot to trajectory control with a proper maximum force for each joint. We used the time to complete an in-hand grasp-and-release task as a metric for usability. 
We placed a series of objects, one at a time, in the slave hand, which was held in the \textit{open} pose to facilitate easier grasping with the robotic fingers, and asked subjects to grasp and then release them. The objects used for the grasp and release tasks were: a water bottle, a small mug, a plastic banana, a cylinder can, and a plastic apple. We required the operators to use a power grasp for the water bottle and the mug, and a precision grasp for the other objects. If a user did not complete the task in four minutes, they were considered to be unable to grasp the object. Table \ref{grasp} numerically shows the average time a novice took to grasp an object using each of the control methods. We find that the low accuracy, especially for the thumb, and the post-processing of the HandIK solution result in a longer time to finish the task. The users needed to open the thumb first and then perform the proper grasp action, so the HandIK solution shows worse performance for objects with a large diameter. Besides that, grasping the banana took the longest time with our method because the long, narrow object required more precise fingertip positioning. \begin{table}[ht] \centering \caption{Average Time (in Seconds) a Novice Took to Grasp and Release an Object} \begin{tabular}{ccccccc}\hlineB{2} Methods & Bottle & Mug & Banana & Can & Apple & Average\\ \hline Hand IK & 44.15& 46.32& 35.78& 25.50& 30.22& 36.394\\ Ours & 23.67& 18.82& 25.80& 19.75& 15.60& 20.728\\ \hlineB{2} \end{tabular} \label{grasp} \vskip -0.1in \end{table} \section{Conclusions and Future Work} This paper presents a systematic vision-based method for finding kinematic mappings between an anthropomorphic robot hand and the human hand. This method develops an end-to-end teacher-student network (TeachNet) and creates a dataset containing 400K pairs of human hand depth images and simulated robot depth images in the same poses, together with the corresponding robot joint angles.
This dataset generation method, which maps the robot's keypoint positions and link directions from the human hand via an improved mapping method, then manipulates the robot and records its state in simulation, is efficient and reliable. Through the network evaluation and the robotic experiments, we verify the applicability of the Teach Hard-Late method to model poses and the implicit correspondences between robot imitators and human demonstrators. The experimental results also show that our end-to-end teleoperation allows novice teleoperators to grasp in-hand objects faster and more accurately than the HandIK solution. Although our method performs well in real-world tasks, it has some limitations. First, it requires the operator's hand to stay within a fixed range and has higher error on occluded joints. Since 3D volumetric representations outperform 2D input in capturing the spatial structure of depth data, training an end-to-end model that combines a higher-level representation would likely lead to more efficient training. Second, when we teleoperated the robot in grasp tasks, we did not consider the tactile feedback of the robot. To perform more complicated robotic tasks, we are going to use the tactile modality of the Shadow hand combined with our teleoperation method. In addition, we would like to extend our method to teleoperating other body parts of robots. \small{ \section*{ACKNOWLEDGMENT} This research was funded jointly by the National Science Foundation of China (NSFC) and the German Research Foundation (DFG) in project Cross Modal Learning, NSFC 61621136008/DFG TRR-169. It was also partially supported by the National Science Foundation of China (Grant No.~91848206, U1613212) and project STEP2DYNA (691154). We would like to thank Chao Yang, Dr. Norman Hendrich, and Prof. Huaping Liu for their generous help and insightful advice. } \bibliographystyle{IEEEtran}
\section{INTRODUCTION}\label{intro} \par The solenoid scan is one of the most commonly used methods for the in-situ measurement of the thermal emittance of a photocathode in an electron gun \cite{bazarov2008thermal,hauri2010intrinsic,qian2012experimental,lee2015review,maxson2017direct,graves2001measurement,gulliford2013demonstration,miltchev2005measurements,bazarov2011thermal}. The measurement has a simple experimental configuration: an rf or dc photocathode gun followed by a transport line consisting of a solenoid and a drift. The photoelectron beam exits the gun at relatively high energy, after which it immediately enters the transport line, where it is focused by a solenoid onto a screen located at the end of a drift. The thermal emittance of the photocathode is obtained by measuring the beam's transverse size on the screen as a function of the solenoid focusing strength and then fitting these sizes according to the linear transfer matrix of the transport line. \par Driven by the desire for high brightness electron sources, the thermal emittance of both metal and semiconductor cathodes has been intensively measured over the past few decades \cite{hauri2010intrinsic,qian2012experimental,graves2001measurement,gulliford2013demonstration,miltchev2005measurements,prat2015measurements}. Some of these measurements have deviated significantly from the theoretical predictions. For copper cathodes illuminated by a 266~nm laser, the theoretical thermal emittance is 0.5~mm\,mrad/mm \cite{dowell2010cathode}, while the measured values vary from 0.57~mm\,mrad/mm \cite{prat2015measurements} to 1.17~mm\,mrad/mm \cite{qian2012experimental}. For cesium telluride cathodes and a 262~nm laser, the thermal emittance is predicted to be 0.9~mm\,mrad/mm \cite{dowell2010cathode}, while the measured values vary from 0.54~mm\,mrad/mm \cite{prat2015measurements} to 1.2~mm\,mrad/mm \cite{miltchev2005measurements}.
These discrepancies are only partially explained by actual increases of the thermal emittance (due to surface roughness and/or impurities) \cite{qian2012experimental,zhang2015analytical}, so the remainder of the discrepancies must be due to the measurement method itself. \par Accurate measurement of the thermal emittance via the solenoid scan method depends on three factors: (1) accurate beam size measurement at the screen, (2) accurate knowledge of the transfer matrix, and (3) reduction of the sources of emittance growth so that only the thermal emittance remains. The first factor can be improved by employing a high sensitivity CCD camera \cite{qian2012experimental} and a thin YAG:Ce screen \cite{maxson2017direct}. The second factor requires accurate knowledge of the fields of the beamline elements and the distances between the elements. Finally, in the third category, there are a number of well-known factors that increase the emittance of the beam, thus leading to an overestimation of the thermal emittance. Inside the gun, these known factors include the nonlinear effects from space charge (SC) \cite{lee2015review,miltchev2005measurements} and rf effects \cite{chae2011emittance} that increase the projected emittance. SC effects are mitigated by using low charge beams while rf effects are reduced by using short beams. After the gun, the solenoid's spherical and chromatic aberrations \cite{dowell2016sources, mcdonald1989methods} will also induce growth of the rms emittance. Both of these aberrations scale with the square of the transverse beam size. In theory, these effects can be mitigated by keeping the beam size small inside the bore of the solenoid, but in practice the beam often becomes large inside the solenoid during the scan. The chromatic aberration scales with the beam's energy spread and can therefore be mitigated by operating with a short bunch at low charge, consistent with mitigation of rf and SC effects in the gun.
In summary, the traditional solenoid scan method uses a bunch with low charge and short length to reduce the SC, rf, and chromatic sources of emittance growth. \par This paper presents a previously overlooked source of emittance growth, due to the coupled transverse dynamics aberration, that leads to an overestimation of the thermal emittance measured via the solenoid scan. The work presented in this paper was inspired by the recent publication by Dowell et al. \cite{dowell2018exact}. Whereas that reference focused on the sources of emittance growth due to the coupled transverse dynamics aberration and its elimination, here we focus on the impact of this aberration on the thermal emittance measurement via the solenoid scan, which arises inescapably because the scanning solenoid itself is the source of the aberration. This situation was not addressed in Ref.~\cite{dowell2018exact}, so this paper presents a systematic study of the thermal emittance overestimation from the coupled transverse dynamics aberrations using the solenoid scan technique. This aberration arises when the beam motion in the $x-x'$ plane becomes correlated with that in the $y-y'$ plane, causing an emittance growth in the 2D phase space. Two aberration sources (thoroughly explained in Ref.~\cite{dowell2018exact}) exist in the measurement beamline: (1) an rf quadrupole field, in the rf gun, followed by a rotation in the solenoid, and (2) a constant quadrupole field, inside or before the solenoid, followed by a rotation in the solenoid. In the remainder of this paper, these are referred to as the gun quadrupole and the solenoid quadrupole. \par This paper is organized as follows. Section \ref{section2} describes the gun and solenoid quadrupole focusing in the solenoid scan beamline. Section \ref{section3} discusses the emittance growth due to the coupled transverse dynamics aberration in the solenoid scan beamline.
In Section \ref{s3}, the thermal emittance overestimation due to the aberrations in the solenoid scan technique is studied both analytically and numerically. In Section \ref{ex}, the overestimation is experimentally verified using an L-band 1.6-cell photocathode rf gun with a cesium telluride cathode. Finally, we propose a flexible and compact quadrupole corrector in Section \ref{corrector} to minimize the coupled transverse dynamics aberrations so as to improve the accuracy of thermal emittance measurement using the solenoid scan. \section{Quadrupole focusing in the solenoid scan beamline}\label{section2} \par In this section we derive transfer matrices for the quadrupole focusing that arises in the solenoid scan beamline. The layout of the beamline is shown in FIG.~\ref{FIG.beamline_show}. Here the cathode of the rf gun, the solenoid entrance, the solenoid exit and the YAG screen are marked as Position 0 to 3 respectively. In this section only the beamline from the cathode (Position 0) to the solenoid exit (Position 2) is used. Undesired quadrupole fields often exist in rf photocathode guns and, to the best of our knowledge, always exist in the solenoid field of all fabricated solenoid magnets used with rf guns. The quadrupole fields can be aligned either normally or rotated about the z-axis (beam transport direction). The focusing due to quadrupole components is presented for three cases: the rf gun alone, the solenoid alone, and simultaneously in both the gun and solenoid. \begin{figure}[hbtp] \centering \includegraphics[width=0.43\textwidth]{setup_PRAB5} \caption{\label{FIG.beamline_show} Thermal emittance measurement setup at AWA.} \end{figure} \subsection{quadrupole focusing from the rf gun} \par Quadrupole fields often exist in rf photocathode guns due to the asymmetric geometry of the cells caused by openings in the side walls: rf coupling ports, pumping ports, laser ports, etc.
Recent rf guns have eliminated the quadrupole field by using racetrack \cite{Xiao2005Dual} or four-port \cite{hong2011high,zheng2016development} geometries in the cells. However, many older rf guns, without these symmetrizing features, are still in operation since the redesign, fabrication, and commissioning of a new gun is time-consuming and expensive. The gun used in our study (the drive gun \cite{conde2001argonne} at the Argonne Wakefield Accelerator (AWA) facility) is of the older style and has only one rf coupling port and a small vacuum port on the opposite side of the cell, as illustrated in FIG.~\ref{FIG.gun_image}(a). This gun has a strong quadrupole field, as illustrated in the CST Microwave Studio simulation \cite{CST} shown in FIG.~\ref{FIG.gun_image}(b). \begin{figure}[hbtp] \centering \includegraphics[width=0.4\textwidth]{gun_image} \caption{\label{FIG.gun_image} The drive gun at the AWA. (a) 3D model in CST Microwave Studio; (b) Azimuthal magnetic field $h_{\phi}$ along the angular direction at the center of the full cell with a 15~mm radius. A Fourier analysis shows that the quadrupole strength with respect to the monopole one is 7.2$\times10^{-3}$, which is much larger than that of the LCLS gun (1.2$\times10^{-4}$) \cite{houjunphdthesis} and the newly designed Tsinghua gun (5.6$\times10^{-5}$) \cite{houjunphdthesis}.} \end{figure} \par The AWA rf drive gun has a normal quadrupole component due to the location of its rf coupling port and vacuum pumping port in the vertical direction. The normalized transverse momentum due to the gun quadrupole can be expressed as \cite{chae2011emittance} \begin{equation}\label{kick} \frac{p_ \bot }{p_z} = -2a\alpha L\sin {\varphi _0}(x\hat x - y\hat y) \end{equation} where $p_ \bot$ is the transverse momentum and $p_z$ is the longitudinal momentum.
$a$ is the parameter characterizing the relative strength of the quadrupole field to the monopole one, $\alpha$ is the normalized rf field strength, $L$ is the full cell length, and $\varphi _0$ is the phase when the electron arrives at the full cell entrance. \par The beam trajectory in the trace space $x$ and $x'$ after the gun quadrupole is given by \begin{equation}\label{trajectory} \left[ {\begin{array}{*{20}{c}} x\\ {x'} \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} 1&0\\ { - \frac{1}{{{f_g}}}}&1 \end{array}} \right]\left[ {\begin{array}{*{20}{c}} {{x_0}}\\ {{{x'}_0}} \end{array}} \right] \end{equation} where ${f_g} $ is the equivalent focal length of the gun quadrupole. \par Based on Eqn.~(\ref{kick}) and (\ref{trajectory}) and assuming $x'_0=0$, the slope of the trajectory after the gun quadrupole should be $x' = \frac{{{p_x}}}{{{p_{z}}}} = - 2a\alpha L\sin {\varphi _0}{x_0} = - \frac{{{x_0}}}{{{f_g}}}$. Therefore, the focal length due to the gun quadrupole is \begin{equation}\label{fgun} {f_g} = \frac{1}{{2a\alpha L\sin {\varphi _0}}} \end{equation} and the transfer matrix due to the normal quadrupole focusing in the rf gun can be expressed as \begin{equation} {R_{gunquad}} = \left[ {\begin{array}{*{20}{c}} 1&0&0&0\\ {-\frac{1}{{{f_g}}}}&1&0&0\\ 0&0&1&0\\ 0&0&{\frac{1}{{{f_g}}}}&1 \end{array}} \right] \end{equation} \subsection{quadrupole focusing from the solenoid} \par The AWA solenoid has quadrupole fields due to the asymmetry of the solenoid's yoke and/or coil windings. While its quadrupole field has not been measured via a rotating wire, it should be similar in character to the LCLS solenoid shown in FIG.~2 of Ref.~\cite{dowell2018exact} where the solenoid field was measured to have a rotated quadrupole component at the entrance and exit of the solenoid. (This assumption is validated in Section~\ref{ex} where the angle of the rotated quadrupole is measured using a beam-based method.) 
We only need to consider the quadrupole located at the solenoid entrance, since the focusing followed by the rotation is the source of the coupling between the transverse planes, as explained above. Let the rotated quadrupole field in the solenoid have focal length ${f_s}$ and rotation angle $\eta$; then its transfer matrix can be written as \begin{equation} {R_{solquad}} = \left[ {\begin{array}{*{20}{c}} 1&0&0&0\\ { - \frac{{\cos 2\eta }}{{{f_s}}}}&1&{ - \frac{{\sin 2\eta }}{{{f_s}}}}&0\\ 0&0&1&0\\ { - \frac{{\sin 2\eta }}{{{f_s}}}}&0&{\frac{{\cos 2\eta }}{{{f_s}}}}&1 \end{array}} \right] \end{equation} \subsection{combined quadrupole focusing from the rf gun and solenoid} \par When both quadrupole fields are present, the above two transfer matrices of the gun and the solenoid quadrupoles can be easily combined as \begin{equation}\label{zuhe} \begin{aligned} {R_{(g + s)quad}} =& \left[ {\begin{array}{*{20}{c}} 1&0&0&0\\ { - \frac{{\cos 2\eta }}{{{f_s}}}}&1&{ - \frac{{\sin 2\eta }}{{{f_s}}}}&0\\ 0&0&1&0\\ { - \frac{{\sin 2\eta }}{{{f_s}}}}&0&{\frac{{\cos 2\eta }}{{{f_s}}}}&1 \end{array}} \right]\left[ {\begin{array}{*{20}{c}} 1&0&0&0\\ { - \frac{1}{{{f_g}}}}&1&0&0\\ 0&0&1&0\\ 0&0&{\frac{1}{{{f_g}}}}&1 \end{array}} \right]\\ =& \left[ {\begin{array}{*{20}{c}} 1&0&0&0\\ { - \frac{1}{{{f_g}}} - \frac{{\cos 2\eta }}{{{f_s}}}}&1&{ - \frac{{\sin 2\eta }}{{{f_s}}}}&0\\ 0&0&1&0\\ { - \frac{{\sin 2\eta }}{{{f_s}}}}&0&{\frac{{\cos 2\eta }}{{{f_s}}} + \frac{1}{{{f_g}}}}&1 \end{array}} \right]\\ =& \left[ {\begin{array}{*{20}{c}} 1&0&0&0\\ { - \frac{{\cos 2\theta }}{{{f_c}}}}&1&{ - \frac{{\sin 2\theta }}{{{f_c}}}}&0\\ 0&0&1&0\\ { - \frac{{\sin 2\theta }}{{{f_c}}}}&0&{\frac{{\cos 2\theta }}{{{f_c}}}}&1 \end{array}} \right] \end{aligned} \end{equation} where the combined focusing strength $f_c$ and rotation angle $\theta$ of the combined transfer matrix (Eqn.~\ref{zuhe}) are given by \begin{equation}\label{combined} \left\{ \begin{aligned} {f_c} &= \frac{1}{{\sqrt {\frac{1}{{{f_g}^2}}
+ \frac{{2\cos 2\eta }}{{{f_g}{f_s}}} + \frac{1}{{{f_s}^2}}} }}\\ \theta &= \frac{1}{2}\arcsin \left( {\frac{{{f_c}}}{{{f_s}}}\sin 2\eta } \right)\\ \end{aligned} \right. \end{equation} \section{Emittance growth due to the coupled aberrations in the solenoid scan beamline}\label{section3} \par In this section, we present an analytical estimate of the emittance growth due to the coupled transverse dynamics aberration present in the solenoid scan and verify the estimate with numerical simulations. The transverse coupling aberration in the solenoid scan beamline is generated when the quadrupole focusing first deforms the beam into an elliptical shape and the solenoid then rotates it, which couples the transverse planes. We analyze the emittance growth for the same three cases outlined in the previous section; once again only the beamline from the cathode (Position 0) to the solenoid exit (Position 2) is used here. \par The emittance of the beam, after passing through the gun and solenoid, is given by \begin{equation}\label{total_emittacne} {\varepsilon} = \sqrt {{\varepsilon _{therm}}^2 + {\varepsilon _{coupled}}^2 + {\varepsilon _{other}}^2} \end{equation} where $\varepsilon _{therm}$ is the thermal emittance, $\varepsilon _{coupled}$ is the emittance growth due to the transverse coupled dynamics, and $\varepsilon _{other}$ is the emittance growth due to space charge, rf, spherical and chromatic aberrations, etc. Therefore, to estimate $\varepsilon$ we need separate estimates of its three components. In the ideal solenoid scan case the final emittance is equal to the thermal emittance, so the second and third terms of Eqn.~\ref{total_emittacne} should be zero. This is true for the last term, $\varepsilon _{other}$, since the solenoid scan parameters are chosen to minimize it as described in Section~\ref{intro}, so $\varepsilon _{other}\approx0$ for our analytical estimate.
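Before estimating the individual terms, the combined-quadrupole algebra of Eqn.~\ref{zuhe} and Eqn.~\ref{combined} can be cross-checked numerically. The sketch below (Python with NumPy; the focal lengths and rotation angle are arbitrary illustrative values, not AWA parameters) multiplies the two thin-lens matrices and confirms that the product is a single rotated quadrupole with the combined $f_c$ and $\theta$:

```python
import numpy as np

def quad_gun(fg):
    # Normal quadrupole in the gun: focus in x, defocus in y (R_gunquad)
    R = np.eye(4)
    R[1, 0] = -1.0 / fg
    R[3, 2] = 1.0 / fg
    return R

def quad_rotated(fs, eta):
    # Quadrupole rotated by eta about the beam axis (R_solquad)
    R = np.eye(4)
    R[1, 0] = -np.cos(2 * eta) / fs
    R[1, 2] = -np.sin(2 * eta) / fs
    R[3, 0] = -np.sin(2 * eta) / fs
    R[3, 2] = np.cos(2 * eta) / fs
    return R

# Illustrative (hypothetical) focal lengths [m] and rotation angle
fg, fs, eta = 2.0, 3.0, np.deg2rad(20.0)

combined = quad_rotated(fs, eta) @ quad_gun(fg)

# Combined strength and angle from Eqn. (combined)
fc = 1.0 / np.sqrt(1 / fg**2 + 2 * np.cos(2 * eta) / (fg * fs) + 1 / fs**2)
theta = 0.5 * np.arcsin(fc * np.sin(2 * eta) / fs)

# The product of the two thin lenses is exactly one rotated quadrupole
assert np.allclose(combined, quad_rotated(fc, theta))
print(f"f_c = {fc:.4f} m, theta = {np.rad2deg(theta):.2f} deg")
```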
However, we show below that the middle term is, in general, not zero and that this term causes a growth of the final emittance. \par The thermal emittance $\varepsilon _{therm}$ is estimated with the three-step model. It can be expressed as $\epsilon=\sigma_l\sqrt{\frac{2E_K}{3m_ec^2}}$, where $\sigma_l$ is the rms laser spot size and $m_ec^2$ is the electron rest energy~\cite{flottmann1997note}. The excess energy of the cesium telluride cathode can be expressed as $2E_K=\phi_l-E_g-E_a+\phi_{Sch}$, where $\phi_l$ is the photon energy, $E_g$ is the gap energy, $E_a$ is the electron affinity, and $\phi_{Sch}=\sqrt{\frac{e^3}{4\pi\epsilon_0}\beta E_c}$ is the barrier reduction by the applied electric field due to the Schottky effect \cite{chen2012surface}. The typical cathode barrier $E_g+E_a$ of cesium telluride is reported to be 3.5~eV \cite{dowell2010cathode,miltchev2005measurements}. By assuming the field enhancement factor $\beta=1$ and using a 248~nm UV laser ($\phi_l=5$~eV), the theoretical thermal emittance should be 1.05~mm\,mrad/mm. In the numerical simulation and during the experiment, the initial electron beam spot size has a uniform transverse distribution with 12~mm diameter (rms spot size 3~mm). Therefore, the estimated rms thermal emittance is 3~mm$\times$(1.05~mm\,mrad/mm), or 3.15~mm\,mrad. \par The coupled emittance estimate after passing through the solenoid, according to Ref.~\cite{dowell2018exact}, is given by \begin{equation}\label{emittacne_coupled} {\varepsilon _{coupled}} = \beta \gamma \frac{{{\sigma _{x,sol}}{\sigma _{y,sol}}}}{{{f_c}}}\left| {\sin 2(KL + \theta )} \right| \end{equation} where $K = \frac{eB_0}{2\beta\gamma mc}$, $L$, $KL$, and $B_0$ denote the strength, the effective length, the Larmor angle, and the peak magnetic field of the solenoid, respectively.
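The two estimates above can be reproduced in a few lines. The sketch below (Python; physical constants are CODATA values, and taking the full 32~MV/m cathode gradient as the launch field in the Schottky term is an assumption, so the result agrees with the quoted numbers only to the percent level) evaluates the three-step thermal emittance and defines Eqn.~\ref{emittacne_coupled} as a function:

```python
import numpy as np

# Three-step-model thermal emittance for cesium telluride (values from the text)
phi_l = 5.0              # photon energy of the 248 nm UV laser [eV]
barrier = 3.5            # cathode barrier E_g + E_a [eV]
E_c = 32e6               # applied field at the cathode [V/m] (assumed launch field)
mc2 = 0.511e6            # electron rest energy [eV]
e = 1.602176634e-19      # elementary charge [C]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]

# Schottky barrier reduction for beta = 1, expressed in eV
phi_sch = np.sqrt(e**3 * E_c / (4 * np.pi * eps0)) / e

excess = phi_l - barrier + phi_sch               # excess energy 2 E_K [eV]
emit_per_mm = np.sqrt(excess / (3 * mc2)) * 1e3  # [mm mrad per mm of rms spot]
sigma_l = 3.0                                    # rms laser spot size [mm]
print(f"thermal emittance: {emit_per_mm:.2f} mm mrad/mm, "
      f"rms: {sigma_l * emit_per_mm:.2f} mm mrad")  # close to 1.05 and 3.15

# Coupled-aberration emittance growth, Eqn. (emittacne_coupled)
def eps_coupled(betagamma, sx_sol, sy_sol, f_c, KL, theta):
    return betagamma * sx_sol * sy_sol / f_c * abs(np.sin(2 * (KL + theta)))
```

Note that `eps_coupled` vanishes whenever $KL+\theta$ is a multiple of $\pi/2$, which is the oscillatory behavior exploited in the simulations below.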
Therefore, the analytical estimate of $\varepsilon$ has a minimum value of 3.15~mm\,mrad due to $\varepsilon _{therm}$ added in quadrature to the sinusoidal oscillation term of $\varepsilon _{coupled}$ given in Eqn.~\ref{emittacne_coupled}. \par To verify the analytic estimate of the emittance growth, an ASTRA \cite{floettmann2011astra} beam dynamics simulation was performed. In the simulation results below, the cathode gradient is 32~MV/m, corresponding to a maximum acceleration phase of 37$^{\circ}$, and an ideal solenoid (i.e. one without quadrupole focusing) is used, with its peak field fixed at 0.1974~T corresponding to a Larmor angle $KL$ of -30$^{\circ}$. \par To minimize $\varepsilon _{other}$ in the ASTRA simulation, a short pulse, low charge bunch is used. The rf emittance growth due to the phase-dependent rf kick (including dipole, quadrupole and higher order fields) \cite{chae2011emittance} was minimized to 1.4\% by the use of a 1.5~ps FWHM pulse length and verified by ASTRA simulations. The emittance growth due to space charge is zero since the charge is set to zero during the ASTRA simulations. This was done to speed up the simulations, but we also confirmed that a sub-picocoulomb charge has less than 1\% contribution to the final emittance. Note that this short pulse, low charge combination also minimizes the emittance growth due to the chromatic aberration of the solenoid to 1.6\%. \subsection{emittance growth due to the gun quadrupole} \par For the first case, only the gun quadrupole is taken into consideration while ignoring the one in the solenoid. In the ASTRA simulation, a 3D rf field map was used for the gun (exported from CST Microwave Studio) and an ideal 1D field map (exported from POISSON) for the solenoid. The total simulated emittance after the ideal solenoid as a function of the laser injection phase is shown in FIG.~\ref{FIG.emt_phase}.
This simulation result can be compared to the analytic one by setting $f_c=f_g$, $f_s=\infty$, and $\varepsilon _{therm}=3.15$~mm\,mrad. The emittance growth due to the coupled aberration can be found by substituting Eqn.~\ref{fgun} into Eqn.~\ref{emittacne_coupled}, which will be zero when $\varphi_0=0^{\circ}$. This corresponds to a laser injection phase of 49$^{\circ}$, which was found by simulating the electron travel time from the cathode to the full cell. The minimum total emittance is close to the thermal emittance value (3.15~mm\,mrad) when the laser injection phase is 49${^\circ}$, which demonstrates good agreement between the simulation and analytic results. The difference between the minimal total emittance in the ASTRA simulation and the thermal value is mainly caused by the aforementioned rf and chromatic effects. \begin{figure}[hbtp] \centering \includegraphics[width=0.4\textwidth]{emt_phase.eps} \caption{\label{FIG.emt_phase} Emittance at Position 2 based on the realistic rf gun (includes quadrupole) and ideal solenoid (no quadrupole component) at different laser injection phases. The gun launch phase corresponding to $\varphi_0=0^\circ$ is 49${^\circ}$.} \end{figure} \subsection{emittance growth due to the solenoid quadrupole} For the second case, only the solenoid quadrupole is taken into consideration while ignoring the one in the gun. In the ASTRA simulation, a 1D rf field map is used for the gun instead of the 3D field map and, once again, an ideal 1D field map (exported from POISSON) is used for the solenoid. To model the solenoid quadrupole, a quadrupole element was added to ASTRA at the same location as the solenoid. Its longitudinal field profile is the same as the ideal solenoid and its strength is set to 77~Gauss/m based on the experimental study introduced in Sec.~\ref{ex}.
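To connect the quoted 77~Gauss/m gradient to the thin-lens focal length $f_s$ used in the theory above, one can use $1/f_s = gL_q/(B\rho)$. A minimal sketch follows (Python; the beam momentum and effective quadrupole length are hypothetical placeholders, not measured AWA values, so the printed focal length is purely illustrative):

```python
# Thin-lens focal length of a quadrupole: 1/f_s = g * L_q / (B rho),
# with magnetic rigidity B rho = p[MeV/c] / 299.792458 [T m].
g = 77e-4    # quadrupole gradient [T/m] (the 77 Gauss/m quoted in the text)
L_q = 0.2    # assumed effective length [m] (hypothetical)
p = 8.0      # assumed beam momentum [MeV/c] (hypothetical)

b_rho = p / 299.792458   # magnetic rigidity [T m]
f_s = b_rho / (g * L_q)  # thin-lens focal length [m]
print(f"f_s = {f_s:.1f} m")
```

A focal length of tens of meters is weak compared to the solenoid itself, yet, as the scans below show, the resulting coupling is enough to distort the measured emittance.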
The simulated emittance after the solenoid/quad location (Position 2) as a function of the rotation angle of the solenoid quadrupole $\eta$ is shown in FIG.~\ref{FIG.emt_eta_solenoid_only}. Comparing this simulation result to the analytic expression in Eqn.~(\ref{emittacne_coupled}) (by setting $f_c=f_s$, $f_g=\infty$, and $\varepsilon _{therm}=3.15$~mm\,mrad), we see the emittance oscillates sinusoidally with $\theta=\eta$, which again demonstrates good agreement between the simulation and analytic results. \begin{figure}[hbtp] \centering \includegraphics[width=0.4\textwidth]{emt_eta_solenoid_only.eps} \caption{\label{FIG.emt_eta_solenoid_only} Emittance at Position 2 based on the ideal rf gun (no quadrupole component) and realistic solenoid (includes quadrupole) as a function of the rotation angle of the solenoid quadrupole.} \end{figure} \subsection{emittance growth due to both the gun and the solenoid quadrupole} For the third and final case, both the gun quadrupole and the solenoid quadrupole are taken into consideration. In the ASTRA simulation, a 3D rf field map was used for the gun (exported from CST Microwave Studio), an ideal 1D field map (exported from POISSON) for the solenoid, and a quadrupole element was added to the beamline as described above. The laser injection phase is fixed at 43$^{\circ}$, which produces a negative $\varphi_0=-6^\circ$ and $f_g=-137$~m. The total simulated emittance after the solenoid/quad location (Position 2) as a function of the rotation angle of the solenoid quadrupole $\eta$ is shown in FIG.~\ref{FIG.emt_eta_gun_solenoid}. Note that the emittance oscillation curves are different in comparison to FIG.~\ref{FIG.emt_eta_solenoid_only} due to the combined focal length $f_c$ and the angle $\theta$. The gun quadrupole can partially cancel or add to the solenoid quadrupole when $\eta$ is around $150^\circ$ or $60^\circ$, making the emittance growth smaller or larger.
\begin{figure}[hbtp] \centering \includegraphics[width=0.4\textwidth]{emt_eta_gun_solenoid.eps} \caption{\label{FIG.emt_eta_gun_solenoid} Emittance at Position 2 based on the realistic rf gun (includes quadrupole) and solenoid (includes quadrupole) as a function of the rotation angle of the solenoid quadrupole.} \end{figure} \section{Thermal emittance overestimation in solenoid scan}\label{s3} \par In this section we show that the emittance measured by the solenoid scan, i.e., the fitted emittance $\varepsilon _{fit}$ based on the fitting of the rms beam size versus the solenoid strength, is an overestimation of the thermal emittance. Further, $\varepsilon _{fit}$ is approximately equal to the quadrature sum of the thermal emittance, $\varepsilon _{therm}$, and the emittance growth due to the transverse coupled aberration, $\varepsilon _{coupled}$. \subsection{solenoid scan formalism}\label{formalism} \par First we present the formalism for the normal solenoid scan, i.e. without any aberrations. As in the previous section, $\varepsilon _{other}\approx0$ since the solenoid scan parameters are chosen to minimize it as described in Section~\ref{intro}. The solenoid scan beamline begins at the solenoid entrance and ends at the YAG screen, i.e. from Positions $1\rightarrow3$ (FIG.~\ref{FIG.beamline_show}). Its transfer matrix is given by the linear transfer matrices of the solenoid, $R_{sol}$, and the drift, $R_d$, applied successively (quadrupole components are not considered in the normal solenoid scan). The solenoid's matrix can be expressed as \begin{equation}\label{sol_matrix} {R_{sol}} = \left[ {\begin{array}{*{20}{c}} {{C^2}}&{\frac{{SC}}{K}}&{SC}&{\frac{{{S^2}}}{K}}\\ { - KSC}&{{C^2}}&{ - K{S^2}}&{SC}\\ { - SC}&{ - \frac{{{S^2}}}{K}}&{{C^2}}&{\frac{{SC}}{K}}\\ {K{S^2}}&{ - SC}&{ - KSC}&{{C^2}} \end{array}} \right]=R_{rot}R_{foc} \end{equation} where $S\equiv{\rm sin}(KL)$, $C\equiv{\rm cos}(KL)$.
$R_{rot}$ and $R_{foc}$ are the rotation matrix and the focusing matrix, respectively. \begin{equation} {R_{rot}} = \left[ {\begin{array}{*{20}{c}} C&0&S&0\\ 0&C&0&S\\ { - S}&0&C&0\\ 0&{ - S}&0&C \end{array}} \right] \end{equation} \begin{equation} {R_{foc}} = \left[ {\begin{array}{*{20}{c}} C&{\frac{S}{K}}&0&0\\ { - KS}&C&0&0\\ 0&0&C&{\frac{S}{K}}\\ 0&0&{ - KS}&C \end{array}} \right] \end{equation} The drift's matrix can be expressed as \begin{equation}\label{R_d} {R_d} = \left[ {\begin{array}{*{20}{c}} 1&{{L_d}}&0&0\\ 0&1&0&0\\ 0&0&1&{{L_d}}\\ 0&0&0&1 \end{array}} \right] \end{equation} where $L_{d}$ is the length of the drift. The thermal emittances $\epsilon_x$ and $\epsilon_y$ are complicated to deduce because the beam trajectories in the x and y directions are coupled due to the rotation matrix of the solenoid. For simplicity, the rotation term $R_{rot}$ is usually ignored in the beam moments calculation \cite{qian2012experimental,lee2015review,Scifo2018}, and the transfer matrix of the solenoid scan beamline without aberrations is expressed as $R\equiv R_d R_{foc}$. Therefore, the beam size squared at the end of the drift is \begin{equation}\label{eq10} \begin{aligned} {\sigma _{3}}^2 =& {R(1,1)}^2\langle {x_1}^2\rangle + 2{R(1,1)}{R(1,2)}\langle {x_1}{x'_1}\rangle\\ &+{R(1,2)}^2\langle {x'_1}^2\rangle \end{aligned} \end{equation} where $\langle {x_1}^2\rangle$, $\langle {x_1}{{x'}_1}\rangle$, and $\langle {{x'}_1}^2\rangle$ are the beam moments at the solenoid entrance. Note that the solenoid scan method requires prior knowledge of $R$ to find the beam moments and thus the emittance.
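The factorization $R_{sol}=R_{rot}R_{foc}$ in Eqn.~\ref{sol_matrix} is easy to verify numerically. The sketch below (Python; $K$ and $L$ are arbitrary illustrative values) builds all three matrices and checks the identity:

```python
import numpy as np

def sol_matrices(K, L):
    """Return (R_sol, R_rot, R_foc) for solenoid strength K [1/m] and length L [m]."""
    S, C = np.sin(K * L), np.cos(K * L)
    R_sol = np.array([[C * C,      S * C / K,  S * C,      S * S / K],
                      [-K * S * C, C * C,      -K * S * S, S * C],
                      [-S * C,     -S * S / K, C * C,      S * C / K],
                      [K * S * S,  -S * C,     -K * S * C, C * C]])
    R_rot = np.array([[C, 0, S, 0],
                      [0, C, 0, S],
                      [-S, 0, C, 0],
                      [0, -S, 0, C]])
    R_foc = np.array([[C,      S / K, 0,      0],
                      [-K * S, C,     0,      0],
                      [0,      0,     C,      S / K],
                      [0,      0,     -K * S, C]])
    return R_sol, R_rot, R_foc

K, L = 1.5, 0.2  # illustrative strength [1/m] and effective length [m]
R_sol, R_rot, R_foc = sol_matrices(K, L)
assert np.allclose(R_sol, R_rot @ R_foc)  # Eqn. (sol_matrix) factorization holds
```

The same construction also confirms that $R_{sol}$ is symplectic (unit determinant), as required of a linear transport map.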
Substituting in the values from the transfer matrices, we obtain an analytical expression for the expected x-beam spot size squared at Position 3, \begin{equation}\label{eq33a} \begin{aligned} {\sigma _{3}}^2 =& {\left( {C - {L_d}KS} \right)^2}\langle {x_{1}}^2\rangle \\ & + 2\left( {C - {L_d}KS} \right) \left( {S/K + C{L_d}} \right)\langle {x_{1}}{{x'}_{1}}\rangle \\ & + {\left( {S/K + C{L_d}} \right)^2}\langle {{x'}_{1}}^2\rangle \\ \end{aligned} \end{equation} \subsection{Measured spot sizes on the screen} \par In this section, we simulate the measured spot sizes on the screen by propagating the initial beam moments (Position 1 in FIG.~\ref{FIG.beamline_show}) through the solenoid scan beamline transfer matrix $M$ (which now includes aberrations) to the YAG screen (Position 3) for different solenoid strength settings $K$. This beamline is the same as the one in Sec.~\ref{formalism} except that a thin quadrupole lens is now placed at the solenoid entrance (Position 1). For simplicity, let the thin quadrupole lens have a normal orientation (zero rotation angle) and a focal length of $f$, so its transfer matrix can be expressed as \begin{equation}\label{R_f1} {R_f} = \left[ {\begin{array}{*{20}{c}} 1&0&0&0\\ { - \frac{1}{f}}&1&0&0\\ 0&0&1&0\\ 0&0&{\frac{1}{f}}&1 \end{array}} \right] \end{equation} and the transfer matrix of the solenoid scan beamline with aberrations is $M \equiv R_d R_{sol} R_f$. \par The initial beam is characterized by the beam sigma matrix at the solenoid entrance (Position 1 in FIG.~\ref{FIG.beamline_show}). In order to keep the analysis simple, we assume the initial beam has zero emittance, a uniform transverse distribution with the same mean square beam size in the x and y directions ($\langle {x_1}^2\rangle$), and perfectly parallel rays.
The initial beam matrix and initial emittance at the solenoid entrance (Position 1 in FIG.~\ref{FIG.beamline_show}) can be expressed as \begin{equation}\label{eq150} \begin{array}{l} {\Sigma _1} = \left( {\begin{array}{*{20}{c}} {\langle {x_1}^2\rangle}&0&0&0\\ 0&0&0&0\\ 0&0&{\langle {x_1}^2\rangle}&0\\ 0&0&0&0 \end{array}} \right) \\ \varepsilon_{1}=0 \end{array} \end{equation} Eqn.~\ref{eq150} completely specifies the initial beam conditions. \par The beam sigma matrix at the screen (Position 3 in FIG.~\ref{FIG.beamline_show}) can be expressed as \begin{equation}\label{eq31} {\Sigma _{scr}} = M{\Sigma _1}{M^T} \end{equation} so that the measured x-beam sizes squared at the screen $\sigma_{scr}^2$ as a function of the solenoid strength $K$ can be expressed as \begin{equation}\label{eq100} \begin{array}{l} {\sigma _{scr}}^2 = {\Sigma _{scr}}(1,1) = \\ \frac{\langle {x_1}^2\rangle}{{{f^2}{K^2}}}\left[ \begin{array}{l} {\left( {K(f + {L_d})SC + (1 - f{K^2}{L_d}){S^2}} \right)^2} + \\ {\left( {K(f - {L_d}){C^2} - (1 + f{K^2}{L_d})SC} \right)^2} \end{array} \right] \end{array} \end{equation} \subsection{Fitting} \par To retrieve the emittance measured by the solenoid scan, the x-spot sizes squared at the screen (Eqn.~\ref{eq100}) are compared to the analytical expectation of the beam sizes squared (Eqn.~\ref{eq33a}) in order to obtain the fitted beam moments at Position 1: $\langle {x_{fit}}^2\rangle$, $\langle {x_{fit}}{{x'}_{fit}}\rangle$ and $\langle {{x'}_{fit}}^2\rangle$. Note that if the solenoid scan beamline had no aberrations then the fitting routine would retrieve the initial beam moments (Eqn.~\ref{eq150}); i.e. $\langle {x_{fit}}^2\rangle=\langle {x_{1}}^2\rangle$, $\langle {x_{fit}}{{x'}_{fit}}\rangle=\langle {{x'}_{fit}}^2\rangle=0$, and the measured (fitted) emittance $\varepsilon_{fit}=0$. However, as we will show this is not the case due to the transverse coupled aberration. 
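The matrix propagation of Eqn.~\ref{eq31} and the closed form of Eqn.~\ref{eq100} can be cross-checked numerically. The sketch below builds $M = R_d R_{sol} R_f$ explicitly (rotation term included); all parameter values are illustrative assumptions.

```python
import numpy as np

def solenoid(K, L):
    """Full solenoid matrix R_sol = R_rot @ R_foc (rotation included).
    For a solenoid the two factors commute, so the order is immaterial."""
    C, S = np.cos(K * L), np.sin(K * L)
    rot = np.array([[ C, 0, S, 0],
                    [ 0, C, 0, S],
                    [-S, 0, C, 0],
                    [ 0,-S, 0, C]])
    foc = np.array([[     C, S / K,      0,     0],
                    [-K * S,     C,      0,     0],
                    [     0,     0,      C, S / K],
                    [     0,     0, -K * S,     C]])
    return rot @ foc

def sigma_screen_11(K, L, Ld, f, x2):
    """Sigma_scr(1,1) by direct matrix propagation (Eq. eq31)."""
    Rd = np.eye(4); Rd[0, 1] = Rd[2, 3] = Ld
    Rf = np.eye(4); Rf[1, 0] = -1.0 / f; Rf[3, 2] = 1.0 / f   # thin quad
    M = Rd @ solenoid(K, L) @ Rf
    Sigma1 = np.diag([x2, 0.0, x2, 0.0])    # zero-emittance parallel beam
    return (M @ Sigma1 @ M.T)[0, 0]

def sigma_screen_11_closed(K, L, Ld, f, x2):
    """Closed form of Eq. (eq100)."""
    C, S = np.cos(K * L), np.sin(K * L)
    a = K * (f + Ld) * S * C + (1 - f * K**2 * Ld) * S**2
    b = K * (f - Ld) * C**2 - (1 + f * K**2 * Ld) * S * C
    return x2 / (f**2 * K**2) * (a**2 + b**2)
```

The two functions agree for any parameter choice, which confirms that Eqn.~\ref{eq100} is the $(1,1)$ element of $M\Sigma_1 M^T$.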
The fitted beam sizes squared ${\sigma _{fit}}^2$ as a function of the fitted beam moments can be expressed as \begin{equation}\label{eq33} \begin{aligned} {\sigma _{fit}}^2 =& {\left( {C - {L_d}KS} \right)^2}\langle {x_{fit}}^2\rangle \\ & + 2\left( {C - {L_d}KS} \right) \left( {S/K + C{L_d}} \right)\langle {x_{fit}}{{x'}_{fit}}\rangle \\ & + {\left( {S/K + C{L_d}} \right)^2}\langle {{x'}_{fit}}^2\rangle \\ \end{aligned} \end{equation} and the goal of the fitting routine is to minimize $\left| {{\sigma _{fit}}^2 - {\sigma _{scr}}^2} \right|$ to retrieve the fitted beam moments $\langle {x_{fit}}^2\rangle$, $\langle {x_{fit}}{{x'}_{fit}}\rangle$ and $\langle {{x'}_{fit}}^2\rangle$ and thus the fitted emittance. \par The next step is to scan the solenoid strength $K$ in order to generate a series of spot sizes on the screen $\sigma_{scr}^2$ (Eqn.~\ref{eq100}) and then fit them to $\sigma_{fit}^2$ (Eqn.~\ref{eq33}). A Taylor expansion method is used to scan $K$ about its value at the beam waist ($k_0$). During the solenoid scan, the maximum beam size at the screen is typically limited to about twice the minimum beam size (at the waist) to ensure accuracy~\cite{houjunphdthesis}. As a result, the range of the solenoid strength $K$ during the scan is small compared to $k_0$. For example, $K$ varies by only 5.1\% during the solenoid scan introduced in Sec.~\ref{ex}. Therefore, the solenoid strength $K$ can be expanded about $k_0$ \begin{equation}\label{eq019} K=k_0+\Delta k \end{equation} where $k_0$ is the solenoid strength that focuses the beam to the waist, and $\Delta k \ll k_0$ over the scan range.
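Because Eqn.~\ref{eq33} is linear in the three beam moments, the fitting step described above reduces to an ordinary linear least-squares problem. The following sketch uses illustrative parameters (not the AWA values) and generates the "measured" data from the closed form of Eqn.~\ref{eq100}:

```python
import numpy as np

# Illustrative parameters: solenoid length L, focal length f of the
# spurious quad, 3 mm rms beam size at the solenoid entrance.
L, f, x2 = 0.4, 1.5, (3e-3) ** 2
k0 = 2.0                                       # waist-focusing strength
Ld = np.cos(k0 * L) / (k0 * np.sin(k0 * L))    # drift length, Ld = cot(k0 L)/k0

def sigma2_screen(K):
    """'Measured' spot sizes squared: zero-emittance parallel beam through
    thin quad + solenoid (rotation included) + drift."""
    C, S = np.cos(K * L), np.sin(K * L)
    a = K * (f + Ld) * S * C + (1 - f * K**2 * Ld) * S**2
    b = K * (f - Ld) * C**2 - (1 + f * K**2 * Ld) * S * C
    return x2 / (f**2 * K**2) * (a**2 + b**2)

# ~5% scan of the solenoid strength about k0, as in the text.
Ks = k0 * np.linspace(0.975, 1.025, 21)

# The aberration-free model is linear in <x^2>, <xx'>, <x'^2>,
# so the fit is a linear least-squares solve.
C, S = np.cos(Ks * L), np.sin(Ks * L)
r11, r12 = C - Ld * Ks * S, S / Ks + C * Ld
A = np.column_stack([r11**2, 2 * r11 * r12, r12**2])
m, *_ = np.linalg.lstsq(A, sigma2_screen(Ks), rcond=None)

eps_fit = np.sqrt(m[0] * m[2] - m[1] ** 2)                  # fitted emittance
eps_3 = abs(2 * x2 * np.cos(k0 * L) * np.sin(k0 * L) / f)   # emittance at screen
print(eps_fit, eps_3)   # nonzero fit although the true initial emittance is zero
```

Even though the initial emittance is exactly zero, the fit returns a nonzero emittance of the same order as the coupled emittance at the screen, which is the overestimation analyzed in the remainder of this section.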
The relationship between $k_0$ and the drift length $L_d$ is given by \begin{equation}\label{eq020} {L_d} = \frac{\cot (k_0L)}{k_0} \end{equation} \par To obtain the Taylor series expansion of the screen beam spot size squared ($\sigma_{scr}^2$), we substitute Eqn.~\ref{eq019} and Eqn.~\ref{eq020} into Eqn.~\ref{eq100}, which gives, to second order in $\Delta k$, \begin{widetext} \begin{equation}\label{eq32} \begin{aligned} {\sigma _{scr}}^2 =& \frac{\langle {x_{1}}^2\rangle }{f^2 {k_0}^2}\left( 3c^4 + \frac{c^6}{s^2} + 3c^2s^2 + s^4 \right) + \frac{2\langle {x_{1}}^2\rangle }{f^2 {k_0}^3}\left( f{k_0}^2Lc^4 - c^4 + fk_0\frac{c^5}{s} + f{k_0}^2L\frac{c^6}{s^2} - 2c^2s^2 - f{k_0}^2Lc^2s^2 - fk_0cs^3 - s^4 - f{k_0}^2Ls^4 \right)\Delta k\\ &+ \frac{\langle {x_{1}}^2\rangle }{f^2 {k_0}^4}\left( \begin{array}{l} 2c^4 + f^2{k_0}^2c^4 - 10f{k_0}^2Lc^4 - 3{k_0}^2L^2c^4 + 3f^2{k_0}^4L^2c^4 - 2k_0L\frac{c^5}{s} + 2f^2{k_0}^3L\frac{c^5}{s} - 8f{k_0}^3L^2\frac{c^5}{s} \\ + 2f{k_0}^2L\frac{c^6}{s^2} - {k_0}^2L^2\frac{c^6}{s^2} + f^2{k_0}^4L^2\frac{c^6}{s^2} - 2fk_0c^3s - 4k_0Lc^3s + 4f^2{k_0}^3Lc^3s - 16f{k_0}^3L^2c^3s \\ + 5c^2s^2 + f^2{k_0}^2c^2s^2 - 10f{k_0}^2Lc^2s^2 - 3{k_0}^2L^2c^2s^2 + 3f^2{k_0}^4L^2c^2s^2 + 2fk_0cs^3 - 2k_0Lcs^3 \\ + 2f^2{k_0}^3Lcs^3 - 8f{k_0}^3L^2cs^3 + 3s^4 + 2f{k_0}^2Ls^4 - {k_0}^2L^2s^4 + f^2{k_0}^4L^2s^4 \end{array} \right)(\Delta k)^{2} + {\rm O}((\Delta k)^{3}) \end{aligned} \end{equation} \end{widetext} where $s \equiv \sin(k_0L)$ and $c \equiv \cos(k_0L)$. \par Similarly, to obtain the Taylor series expansion of the fitted beam spot size squared ($\sigma_{fit}^2$), we substitute Eqn.~\ref{eq019} and Eqn.~\ref{eq020} into Eqn.~\ref{eq33} \begin{widetext} \begin{equation}\label{eq34} \begin{aligned} {\sigma _{fit}}^2 =& \frac{\langle {{x'}_{fit}}^2\rangle }{{k_0}^2}\left( \frac{c^2}{s} + s \right)^2 \\ &- \frac{2}{{k_0}^3}\left( \langle {{x'}_{fit}}^2\rangle c^2 + \langle {{x'}_{fit}}^2\rangle s^2 + 2\langle {x_{fit}}{{x'}_{fit}}\rangle {k_0}^2Lc^2 + \langle {x_{fit}}{{x'}_{fit}}\rangle k_0\frac{c^3}{s} + \langle {x_{fit}}{{x'}_{fit}}\rangle {k_0}^2L\frac{c^4}{s^2} + \langle {x_{fit}}{{x'}_{fit}}\rangle k_0cs + \langle {x_{fit}}{{x'}_{fit}}\rangle {k_0}^2Ls^2 \right)\Delta k\\ &+ \left( \frac{\langle {x_{fit}}^2\rangle }{{k_0}^2}\left( c + k_0L\frac{c^2}{s} + k_0Ls \right)^2 - \frac{2\langle {x_{fit}}{{x'}_{fit}}\rangle }{{k_0}^3}\left( k_0L\frac{c^4}{s^2} - cs - k_0Ls^2 \right) + \frac{\langle {{x'}_{fit}}^2\rangle }{{k_0}^4}\left( s^2 + \left( \frac{c^2}{s} + s \right)\left( 2s - 2k_0Lc - {k_0}^2L^2\frac{c^2}{s} - {k_0}^2L^2s \right) \right) \right)(\Delta k)^{2} + {\rm O}((\Delta k)^{3}) \end{aligned} \end{equation} \end{widetext} \par By comparing the coefficients of each power of $\Delta k$ in Eqn.~\ref{eq32} and Eqn.~\ref{eq34}, the fitted beam moments can be solved for as \begin{widetext} \begin{equation}\label{eq35} \begin{array}{l} \langle {x_{fit}}^2\rangle = \dfrac{\langle {x_{1}}^2\rangle\left( \begin{array}{l} 30f^2{k_0}^3Lc^8 - 160f{k_0}^3L^2c^8 + 35f^2{k_0}^5L^3c^8 + 4f^2{k_0}^2\frac{c^9}{s} - 32f{k_0}^2L\frac{c^9}{s} + 45f^2{k_0}^4L^2\frac{c^9}{s} - 120f{k_0}^4L^3\frac{c^9}{s} \\ + 15f^2{k_0}^3L\frac{c^{10}}{s^2} - 80f{k_0}^3L^2\frac{c^{10}}{s^2} + 21f^2{k_0}^5L^3\frac{c^{10}}{s^2} + f^2{k_0}^2\frac{c^{11}}{s^3} - 8f{k_0}^2L\frac{c^{11}}{s^3} + 18f^2{k_0}^4L^2\frac{c^{11}}{s^3} - 48f{k_0}^4L^3\frac{c^{11}}{s^3} \\ + 3f^2{k_0}^3L\frac{c^{12}}{s^4} - 16f{k_0}^3L^2\frac{c^{12}}{s^4} + 7f^2{k_0}^5L^3\frac{c^{12}}{s^4} + 3f^2{k_0}^4L^2\frac{c^{13}}{s^5} - 8f{k_0}^4L^3\frac{c^{13}}{s^5} + f^2{k_0}^5L^3\frac{c^{14}}{s^6} \\ + 6f^2{k_0}^2c^7s - 48f{k_0}^2Lc^7s + 60f^2{k_0}^4L^2c^7s - 160f{k_0}^4L^3c^7s + 30f^2{k_0}^3Lc^6s^2 - 160f{k_0}^3L^2c^6s^2 + 35f^2{k_0}^5L^3c^6s^2 \\ + 4f^2{k_0}^2c^5s^3 - 32f{k_0}^2Lc^5s^3 + 45f^2{k_0}^4L^2c^5s^3 - 120f{k_0}^4L^3c^5s^3 + 15f^2{k_0}^3Lc^4s^4 - 80f{k_0}^3L^2c^4s^4 + 21f^2{k_0}^5L^3c^4s^4 \\ + f^2{k_0}^2c^3s^5 - 8f{k_0}^2Lc^3s^5 + 18f^2{k_0}^4L^2c^3s^5 - 48f{k_0}^4L^3c^3s^5 + 3f^2{k_0}^3Lc^2s^6 - 16f{k_0}^3L^2c^2s^6 + 7f^2{k_0}^5L^3c^2s^6 \\ + 3f^2{k_0}^4L^2cs^7 - 8f{k_0}^4L^3cs^7 + f^2{k_0}^5L^3s^8 \end{array} \right)}{f^2{k_0}^2\left( \frac{c^2}{s} + s \right)^2\left( c + k_0L\frac{c^2}{s} + k_0Ls \right)^2\left( 2k_0Lc^2 + \frac{c^3}{s} + k_0L\frac{c^4}{s^2} + cs + k_0Ls^2 \right)}\\[2ex] \langle {x_{fit}}{{x'}_{fit}}\rangle = \dfrac{\langle {x_{1}}^2\rangle\left( 2f{k_0}^2Lc^6 + 2fk_0\frac{c^7}{s} + 3f{k_0}^2L\frac{c^8}{s^2} + fk_0\frac{c^9}{s^3} + f{k_0}^2L\frac{c^{10}}{s^4} - 2f{k_0}^2Lc^4s^2 - 2fk_0c^3s^3 - 3f{k_0}^2Lc^2s^4 - fk_0cs^5 - f{k_0}^2Ls^6 \right)}{f^2k_0\left( \frac{c^2}{s} + s \right)^2\left( 2k_0Lc^2 + \frac{c^3}{s} + k_0L\frac{c^4}{s^2} + cs + k_0Ls^2 \right)}\\[2ex] \langle {{x'}_{fit}}^2\rangle = \dfrac{\langle {x_{1}}^2\rangle\left( 3c^4 + \frac{c^6}{s^2} + 3c^2s^2 + s^4 \right)}{f^2\left( \frac{c^2}{s} + s \right)^2} \end{array} \end{equation} \end{widetext} \par Finally, using these fitted beam moments, we can calculate the measured (fitted) emittance at Position 1 as \begin{equation}\label{eq33c} \varepsilon_{fit} = \sqrt {\langle {x_{fit}}^2\rangle \langle {{x'}_{fit}}^2\rangle - {\langle {x_{fit}}{{x'}_{fit}}\rangle }^2} \end{equation} which is, in general, not equal to the actual emittance at Position 1, $\varepsilon _{1}=0$. \par It is informative to calculate the actual emittance at the YAG screen (Position 3) for the beam waist condition. Using Eqn.~\ref{eq31} when $K=k_0$ we find \begin{equation}\label{eq33b} {\varepsilon _{3}} = \sqrt{\left| {{\Sigma _{scr}}(1:2,1:2)} \right|} = \left| {\frac{2\langle {x_{1}}^2\rangle cs}{f}} \right| \end{equation} where $\left| {{\Sigma _{scr}}(1:2,1:2)} \right|$ denotes the determinant of the upper-left $2\times2$ block of $\Sigma_{scr}$. Note that this equation can also be derived with Eqn.~\ref{emittacne_coupled}. \par FIG.~\ref{FIG.emt_KL} compares the measured (fitted) emittance from the solenoid scan $\varepsilon_{fit}$ (Eqn.~\ref{eq33c}) to the final emittance at the end of the solenoid scan beamline $\varepsilon_{3}=\varepsilon_{coupled}$ (Eqn.~\ref{eq33b}) for the special case of zero initial emittance. With $\sqrt{\langle {x_{1}}^2\rangle}=3$~mm and $L=0.4$~m, $\varepsilon_{fit}$ and $\varepsilon_{3}$ are plotted as a function of the Larmor angle $k_0L$ and the focal length $f$ in FIG.~\ref{FIG.emt_KL} and show good agreement. The difference between them is relatively large only when the Larmor angle $k_0L$ is very small and $f$ is large, conditions that are not common for realistic beamline parameters. This plot shows that the measured (fitted) emittance $\varepsilon_{fit}$ is equal to the emittance after the solenoid beamline $\varepsilon _{3}$ for realistic beamline parameters. In this specific case of zero initial emittance ($\varepsilon _{1}=0$), the final emittance is equal to the emittance growth from the coupled aberration, $\varepsilon_{3}=\varepsilon_{coupled}$.
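The waist-condition emittance at the screen can be verified by direct matrix propagation; the sketch below uses illustrative parameters and compares $\sqrt{\det}$ of the $2\times2$ x-block of $\Sigma_{scr}$ with $|2\langle x_1^2\rangle cs/f|$.

```python
import numpy as np

# Illustrative parameters (not the AWA settings)
L, f, x2 = 0.4, 1.5, (3e-3) ** 2
k0 = 2.0
Ld = np.cos(k0 * L) / (k0 * np.sin(k0 * L))   # waist condition Ld = cot(k0 L)/k0

C, S = np.cos(k0 * L), np.sin(k0 * L)
rot = np.array([[ C, 0, S, 0], [ 0, C, 0, S], [-S, 0, C, 0], [ 0,-S, 0, C]])
foc = np.array([[ C, S/k0, 0, 0], [-k0*S, C, 0, 0],
                [ 0, 0, C, S/k0], [ 0, 0, -k0*S, C]])
Rd = np.eye(4); Rd[0, 1] = Rd[2, 3] = Ld
Rf = np.eye(4); Rf[1, 0] = -1/f; Rf[3, 2] = 1/f
M = Rd @ rot @ foc @ Rf

Sigma1 = np.diag([x2, 0.0, x2, 0.0])          # zero-emittance parallel beam
Sigma3 = M @ Sigma1 @ M.T
eps3 = np.sqrt(np.linalg.det(Sigma3[:2, :2])) # actual emittance at the screen
print(eps3, abs(2 * x2 * C * S / f))          # the two values agree
```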
As we show next, for the general solenoid scan case with $\varepsilon_{1}=\varepsilon_{therm}$, the measured (fitted) emittance $\varepsilon_{fit}$ is still equal to the final emittance, but the latter is now the quadrature sum of $\varepsilon_{therm}$ and $\varepsilon_{coupled}$. \begin{figure}[hbtp] \centering \includegraphics[width=0.4\textwidth]{emt_k0L2.eps} \caption{\label{FIG.emt_KL} Measured (fitted) emittance in the solenoid scan $\varepsilon_{fit}$ and the final emittance on the YAG screen $\epsilon_3$ as a function of Larmor angle and focal length.} \end{figure} \par Next, the above analytic results for the solenoid scan are verified with numerical simulations using ASTRA, now including a non-zero initial emittance. The beamline layout and the parameter settings used in this simulation are the same as in the experiment introduced in Sec.~\ref{ex}. The initial emittance is equal to the thermal emittance at the cathode, $\varepsilon_{0}=\varepsilon_{therm}$. A realistic laser intensity distribution was used (instead of a uniform distribution) in the ASTRA simulation, leading to a slight difference in the thermal emittance between the x and y planes. The CST 3D field map containing the quadrupole field is used for the gun, and a quadrupole component is added to the solenoid. The longitudinal field profile of the solenoid quadrupole is the same as that of the solenoid, and the solenoid quadrupole strength is proportional to the solenoid field strength. \par In the ASTRA simulation of the solenoid scan measurement, the solenoid strength is scanned and a series of spot sizes at the screen are generated for the four different solenoid quadrupole settings shown in Table~\ref{t12}. The emittance measured by the solenoid scan is calculated with the fitting method of Eqn.~\ref{eq33} and the ASTRA screen spot sizes.
The simulation results are listed in Table~\ref{t12}, which include the initial emittance $\epsilon_{0}$, the actual emittance on the screen $\epsilon_{3}$, and the measured (fitted) emittance by solenoid scan $\epsilon_{fit}$. The variation of the actual emittance at the screen $\epsilon_{3}$ is due to the dependence of $\epsilon_{3}$ on the solenoid and solenoid quadrupole strength according to Eqn.~\ref{emittacne_coupled}. The results prove that the measured (fitted) emittance from the solenoid scan $\epsilon_{fit}$ is very close to the actual emittance at the screen $\epsilon_3$ (the quadrature sum of $\varepsilon_{therm}$ and $\varepsilon_{coupled}$). Therefore, the coupled transverse dynamics aberration can lead to an overestimation of the thermal emittance in the solenoid scan method. \begin{table*}[hbtp] \caption{\label{t12} Comparing the initial emittance on the cathode $\epsilon_{0}$, the actual emittance at the screen $\epsilon_{3}$, and the measured emittance with solenoid scan $\epsilon_{fit}$ under different strength and rotation angle of the solenoid quadrupole. 
The unit of the emittances is mm\,mrad.} \renewcommand\tabcolsep{11pt} \renewcommand\arraystretch{1.3} \begin{tabular}{*{7}{c}} \toprule[1.5pt] & $\epsilon_{x0}$ & $\epsilon_{x3}$ & $\epsilon_{xfit}$ & $\epsilon_{y0}$ & $\epsilon_{y3}$ & $\epsilon_{yfit}$ \\ \midrule[1pt] 0.005 T/m, $\eta$=0 deg & 2.938 & 3.181-3.209 & 3.206 & 2.718 & 3.052-3.081 & 3.080 \\ 0.005 T/m, $\eta$=45 deg & 2.938 & 3.667-3.682 & 3.677 & 2.718 & 3.757-3.781 & 3.799 \\ 0.01 T/m, $\eta$=0 deg & 2.938 & 3.947-4.039 & 4.027 & 2.718 & 3.774-3.866 & 3.827 \\ 0.01 T/m, $\eta$=45 deg & 2.938 & 5.478-5.526 & 5.495 & 2.718 & 5.624-5.680 & 5.716 \\ \bottomrule[1.5pt] \end{tabular} \end{table*} \section{thermal emittance measurement}\label{ex} \par The thermal emittance of a cesium telluride photocathode was measured with the solenoid scan method at the Argonne Wakefield Accelerator (AWA) facility to experimentally demonstrate the overestimation of the thermal emittance due to the coupled transverse dynamics aberration. The layout of the beamline is shown in FIG.~\ref{FIG.beamline_show}. The cesium telluride photocathode in an L-band rf gun is illuminated by a 248~nm UV laser. The transverse profile of the laser is homogenized with a micro-lens array~\cite{halavanau2017spatial}. The cathode gradient is 32 MV/m and the laser launch phase with respect to the rf is $43^\circ$, resulting in a beam energy of 3.2~MeV. The bunch charge is kept below 1~pC to make the space charge effect negligible. A PI-MAX Intensified CCD (ICCD) camera~\cite{camera} with 100~ns shutter gating is employed to capture the beam images on the YAG screen. The resolution was measured to be 60 $\mu$m with a USAF target. \subsection{thermal emittance in the linear regime} \par The measured thermal emittance (i.e. the fitted emittance from the solenoid scan) for different laser spot sizes is shown in FIG.~\ref{FIG.beamsize1}.
The measured results can be classified into two regimes: a linear regime where the rms spot size is small ($<$0.75~mm) and a nonlinear regime where it is large ($>$0.75~mm). In the linear regime, the measured emittance has a linear relationship with the spot size, and the slope of this line ($1.05\pm0.04$ mm\,mrad/mm) can be used to extrapolate an accurate measurement of the thermal emittance from the fitted values. This slope is in good agreement with the theoretical value of the thermal emittance of a cesium telluride photocathode illuminated by a 248~nm laser, as introduced in Sec.~\ref{section3}. In the nonlinear regime, the data deviate from the linear fit due to the coupled transverse dynamics aberrations. The deviation of the data from the linear fit is the overestimation of the thermal emittance; it becomes larger with increasing laser spot size, reaching 35\% at the largest laser rms spot size of 2.7~mm. \par These results show that the thermal emittance can be measured with the solenoid scan method as long as one is able to extrapolate the measurements into the linear regime. In our example of the AWA L-band (1.3 GHz) rf photoinjector, we achieved an accurate measurement of the thermal emittance when the laser rms spot size is less than 0.75~mm. However, this becomes more difficult at higher frequencies since the quadrupole fields in the gun and the solenoid become relatively stronger. Therefore, accurate measurements of the thermal emittance in rf photoinjectors are more difficult at S-band (2.856 GHz) and even more so at X-band (11.424 GHz). If we assume that the size of the beamline elements scales as the inverse of the frequency, then the laser rms spot size should be less than 0.34~mm for S-band photoinjectors and less than 0.085~mm for X-band photoinjectors to suppress the coupled transverse dynamics aberrations and obtain an accurate measurement of the thermal emittance using the solenoid scan.
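The quoted spot-size limits follow directly from the assumed $1/f_{\rm rf}$ scaling of the beamline dimensions:

```python
# Maximum usable laser rms spot size, scaled inversely with rf frequency
# from the measured L-band (1.3 GHz) limit of 0.75 mm.
limit_L = 0.75  # mm
for name, f_rf in [("S-band", 2.856), ("X-band", 11.424)]:
    print(name, round(limit_L * 1.3 / f_rf, 3), "mm")  # 0.341 mm, 0.085 mm
```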
From this we can conclude that lower frequency rf photoinjectors are preferred for accurate measurements of the thermal emittance. \begin{figure}[hbtp] \centering \includegraphics[width=0.4\textwidth]{emt_beamsize_paper1_2} \caption{\label{FIG.beamsize1} Measured emittance as a function of laser spot size. The slope of the line in the linear regime (laser spot size $<$0.75~mm) gives a thermal emittance of $1.05\pm0.04$ mm\,mrad/mm.} \end{figure} \subsection{Beam-based method for the measurement of the solenoid quadrupole in the nonlinear regime} The strength and rotation angle of the effective solenoid quadrupole term can be inferred by using the solenoid scan in the nonlinear regime. This method can be used in lieu of the measurement of the actual solenoid quadrupole term with a rotating wire when it is not practical to remove the solenoid from the beamline. Note that the strength and rotation angle of the gun quadrupole are known from the 3D gun field map. A large laser spot size (diameter=11~mm or rms=2.7~mm) is used in the following measurements to determine the solenoid quadrupole. Figure~\ref{FIG.images} shows a series of beam spots on the screen (Position 3) for various solenoid currents. The top row corresponds to the solenoid current flowing in one direction while the current has been flipped in the bottom row (hereinafter these directions are referred to as counterclockwise (ccw) and clockwise (cw)). The images of all the electron beams are elliptical for both the ccw and cw directions; however, the tilt angles differ between the two current directions. Notice that the beam images in the top row are tilted and therefore have a strong x-y correlation, while the bottom row has beams that are nearly normally oriented and therefore have a weak x-y correlation. Since the coupled aberration is due to the x-y correlation, we expect that when the solenoid current is in the cw direction (bottom row) the emittance growth should be less.
This is indeed the case, as we show below. \begin{figure}[hbtp] \centering \includegraphics[width=0.48\textwidth]{image_paper1} \caption{\label{FIG.images} Electron beam images on the YAG screen for different solenoid currents. The upper and lower rows correspond to ccw and cw solenoid current directions respectively.} \end{figure} \par Figure~\ref{FIG.ex_result} shows the $x$ and $y$ rms beam spot sizes as a function of the solenoid strength during the scan for both directions of the solenoid current. While the theoretical thermal emittance is 2.835~mm\,mrad, both solenoid scan fits yield higher emittance values, as expected from the coupled aberration. Note that the emittance overestimation is larger for the ccw direction than for the cw direction, consistent with the beam images in FIG.~\ref{FIG.images}. \begin{figure}[hbtp] \centering \includegraphics[width=0.4\textwidth]{reverse_solenoid.eps} \caption{\label{FIG.ex_result} Solenoid scan experiment data: rms beam size as a function of the solenoid field strength, with the fitted emittances from the solenoid scan experiment.} \end{figure} \par To estimate the strength and rotation angle of the effective solenoid quadrupole, an ASTRA simulation of the experimental results is performed. The simulation uses the 3D rf gun field map (therefore the gun quadrupole is assumed known), an ideal solenoid, and an ASTRA quad element to model the solenoid quadrupole. The ASTRA quadrupole element has: (i) length equal to that of the solenoid and (ii) strength proportional to the solenoid current. Two variables of the quadrupole element, its strength and rotation angle, are numerically scanned to fit the simulation to the experimental results of Figure~\ref{FIG.ex_result}. The best fit of the simulation is shown in FIG.~\ref{FIG.ex_result_si}. \begin{figure}[hbtp] \centering \includegraphics[width=0.4\textwidth]{reverse_sol_simulation.eps} \caption{\label{FIG.ex_result_si} Solenoid scan simulation fit results. rms beam size as a function of the solenoid field in ASTRA.
The solenoid quadrupole was scanned to fit the simulation to the data.} \end{figure} \par The fit of the ASTRA simulation to the experimental results yields the strength of the solenoid quadrupole as 77~Gauss/m at a solenoid field of 0.1974~T. The rotation angles of the solenoid quadrupole are different for the ccw and cw current directions. The values of the rotation angles are illustrated in FIG.~\ref{FIG.solenoid_reverse_demo}. For the ccw current direction, the solenoid field is -0.1974~T, the corresponding Larmor angle is $15^\circ$, and the rotation angle of the quadrupole is $12^\circ$. For the cw current direction, the solenoid field is 0.1974~T, the corresponding Larmor angle is $-15^\circ$, and the rotation angle of the quadrupole is $-78^\circ$. The emittance growth contributed by the gun quadrupole is constant regardless of the solenoid current direction, so the difference in the emittances between the two current directions is determined by the quadrupole in the solenoid. According to Eqn.~\ref{emittacne_coupled}, the emittance growth due to the solenoid quadrupole is proportional to $\left| {\sin 2(KL + \eta )} \right|$. For the parameters shown in FIG.~\ref{FIG.solenoid_reverse_demo}, $\left| {\sin 2(KL + \eta )} \right|=0.809$ and $0.1045$ for the ccw and the cw current direction respectively, which explains the larger emittance overestimation for the ccw current direction. In summary, this beam-based method shows that the AWA solenoid quadrupole has strength 77~Gauss/m and rotation angle $12^\circ$ for the ccw direction and $-78^\circ$ for the cw direction. \begin{figure}[hbtp] \centering \includegraphics[width=0.43\textwidth]{solenoid_reverse_demo5} \caption{\label{FIG.solenoid_reverse_demo} Left: The solenoid field is in the $-z$ direction for the ccw solenoid current direction, and the rotation angle of the solenoid quadrupole is $12^\circ$.
Right: The solenoid field is in the $+z$ direction for the cw solenoid current direction, and the rotation angle of the solenoid quadrupole becomes $-78^\circ$. The rotation angle is defined as the angle between the quadrupole focusing direction and the x-axis.} \end{figure} \section{quadrupole corrector}\label{corrector} \par To eliminate the emittance growth due to the coupled transverse dynamics aberration, several types of quadrupole correctors have been proposed. A quadrupole corrector is useful not only for the specific case of improving the solenoid scan fidelity but also for the more general goal of reducing emittance growth \cite{dowell2018exact,bartnik2015operational,schietinger2016commissioning,krasilnikov2018electron}. A dedicated quadrupole corrector has also been designed for use at the AWA to cancel the emittance overestimation in the solenoid scan, as shown in FIG.~\ref{FIG.corrector}. The corrector consists of a pair of normal and skew quadrupoles in order to obtain a quadrupole with variable strength and rotation angle. \begin{figure}[hbtp] \centering \includegraphics[width=0.43\textwidth]{corrector_drawing} \caption{\label{FIG.corrector} The mechanical design of the quadrupole corrector at AWA.} \end{figure} The quadrupole corrector will be installed at the solenoid exit. An ASTRA simulation is employed to study the correction effect. The solenoid quadrupole used in this simulation is in the ccw direction as described in Sec.~\ref{ex}. The laser spot is uniform in the transverse direction with an rms spot size of 2.7~mm. The other parameters are kept the same as in the experiment. Since the scan range of the solenoid strength is small in the solenoid scan (5.1\% of the solenoid strength), a constant strength of the quadrupole corrector is sufficient to cancel the emittance growth. Figure~\ref{FIG.emt_theta_paper1} shows the measured emittance as a function of the quadrupole corrector strength and rotation angle.
The figure clearly shows that the amount of the emittance overestimation due to the coupled transverse dynamics aberration depends on the strength and rotation angle of the quadrupole corrector. Moreover, the emittance measured by the solenoid scan can be made equal to the thermal emittance if the quadrupole corrector setting is chosen appropriately. \begin{figure}[hbtp] \centering \includegraphics[width=0.43\textwidth]{emt_theta_paper1} \caption{\label{FIG.emt_theta_paper1} Simulated emittance correction by scanning the strength and rotation angle $\alpha$ of the quadrupole corrector. The emittance is normalized by the laser spot size.} \end{figure} \section{conclusion} \par The overestimation of the thermal emittance due to the coupled transverse dynamics aberration in the solenoid scan has been systematically studied in this paper. Two sources of aberrations that lead to emittance growth were analyzed: the quadrupole field in the rf gun followed by a solenoid, and the quadrupole field of the solenoid. Analytical expressions and beam dynamics simulations demonstrated that the emittance measured by the solenoid scan is an overestimate that is very close to the quadrature sum of the thermal emittance and the coupled emittance. \par The overestimation effect in the solenoid scan was demonstrated with a thermal emittance measurement experiment at the AWA facility. The experiment measured the thermal emittance of a cesium telluride photocathode in the AWA drive gun, an L-band 1.6-cell rf gun. Elliptical beam images were observed on the YAG screen, indicating the existence of quadrupole components in the beamline. The measured emittance deviates from a linear dependence on the laser spot size, showing a thermal emittance overestimation of 35\% at a 2.7~mm rms laser spot size. A beam-based method was used to measure the solenoid quadrupole by flipping the solenoid current direction and matching the simulation with the experimental results.
Its strength is found to be 77~Gauss/m at a solenoid field of 0.1974~T, and its rotation angle is $12^{\circ}$ and $-78^{\circ}$ for the two opposite solenoid current directions, respectively. \par A compact and flexible quadrupole corrector is proposed to be installed at the exit of the solenoid; it will eliminate the overestimation effect due to the coupled transverse dynamics aberrations and thereby improve the accuracy of thermal emittance measurements with the solenoid scan method. \begin{acknowledgments} \par This work is supported by the U.S. Department of Energy, Offices of HEP and BES, under Contract No. DE-AC02-06CH11357. It is also funded by the National Natural Science Foundation of China (NSFC) No. 11435015 and No. 11375097. \end{acknowledgments}
\section{Introduction} Colloidal systems of spherical particles have been studied extensively, and support a wide range of behavior, including solid and fluid phases in one-component systems, and liquid-vapor phase transitions in colloid-polymer mixtures~\cite{Poon02}. The behavior of \emph{anisotropic} colloids is even richer, including liquid crystals~\cite{Onsager1949,Mukhija:2011aa}, exotic crystalline phases~\cite{Rossi:2015aa,Chen:2011eu}, and liquids with unusual structure~\cite{Ashton2015-porous,Ruzicka:2011ud}. New colloidal synthesis methods have stimulated recent work in this area, with a range of anisotropic particles now available, including indented (lock-and-key) particles~\cite{Sacanna:2010ys,Sacanna:2013kq}, fused spheres~\cite{Sacanna:2013aa,Kraft12}, ellipsoids~\cite{Dugyala:2013aa} and superballs~\cite{Rossi:2015aa,Rossi:2011qd}. When such anisotropic particles are mixed with a non-adsorbing polymer, one finds depletion forces between colloids that depend strongly on their orientations \cite{Florea:2014aa, Karas:2016aa, Anders:2014la}, which can lead to self-assembly of complex structures. The microscopic mechanism for depletion forces is well-understood~\cite{Asakura1954,Lekkerkerker:2011} -- mixing colloids with a non-adsorbing depletant leads to unbalanced osmotic pressures on the colloids, resulting in an effective attraction. Depletion forces between spherical colloids can be characterised theoretically by integrating out the depletant~\cite{Likos:2001fk,Asakura1954,Dijkstra1999,Lekkerkerker:2011} -- whether this is easy or difficult depends on the type of depletant particles, but quantitative theoretical predictions can be made, at least for two-body effective interactions between the colloids\cite{Ashton:2011kx}. Three- and higher-body interactions are usually harder to calculate but are expected to be negligible in comparison to two-body forces if the ratio of depletant size to colloid size is small\cite{Ashton2014-jcp}.
By contrast, accurate characterisation of two-body effective interactions between anisotropic particles is difficult in general. For instance, in the case of uniaxial particles (whose orientations can be described by a single unit vector) the effective potential is a function of four variables -- such functions may not be easy to infer or parameterise theoretically. Particles of lower symmetry require an even greater number of variables. In this article, we introduce a general strategy for developing approximate (coarse-grained) interaction potentials between anisotropic particles, and we apply it to a system of indented (lock-and-key) colloids~\cite{Sacanna:2010ys,Sacanna:2013kq}. \rlj{Such systems have been studied quite extensively in theory and simulation~\cite{Odriozola:2008uq,Marechal:2010fk,Odriozola2013,Ahmed:zr,Melendez2015,Villegas2016,Chang:2015aa}: despite their simplicity, they exhibit strong directional bonds, which can lead to rich phenomenology, both for packing~\cite{Ahmed:zr,Ashton2013} and phase behavior~\cite{Ashton2015-porous,Ashton2015-wall}.} Our general coarse-graining method is designed to yield piecewise-constant interaction potentials that match the binding free energies for the different regimes in which anisotropic colloids can associate with each other. \rlj{For systems of spherical particles with short-ranged interactions, the extended law of corresponding states~\cite{Noro2000} means that matching these free energies leads to coarse-grained models that are very effective in reproducing systems' phase behaviour.
For anisotropic particles with strong directional binding, Wertheim's theory~\cite{Wertheim:1984zr} indicates that these free energies again control the behaviour of the system, as found for lock-and-key colloids in Ref.~\onlinecite{Ashton2013}.} These free energies are characterised in terms of the second virial coefficient of the coarse-grained system, so for spherical colloids, the simplest version of the method would yield a square-well attraction between the colloids, with a second virial coefficient chosen to match the fully-interacting system. For anisotropic particles, one arrives at a more complex effective interaction, but the physical motivation is similar, so one can hope that the coarse-grained model will match the full system at a similar level of accuracy. \rlj{Hence, our method, which is tailored towards colloidal systems with hard cores and short-ranged interactions, differs from methods used in molecular or polymeric systems~\cite{Noid2008,Prapotnik2007,Likos:2001fk,Shelley2001}.} The paper is organised as follows: Sec.~\ref{sec:model} describes our model and Sec.~\ref{sec:theory} describes the general theory that we use to develop a coarse-grained effective interaction. In Sec.~\ref{sec:lock-key} we describe the relatively simple case of an effective interaction between an indented colloid (a lock) and a hard sphere (a key). In Sec.~\ref{sec:lock-lock} we discuss the effective interaction between two lock particles, which depends in a complex way on the relative orientations of the two particles. Sec.~\ref{sec:simp} addresses the relationship between our approach here and a simplified version of this effective potential that was used in Ref.~\onlinecite{Ashton2015-porous}. Our conclusions are summarized in Sec.~\ref{sec:conc}.
\section{Model: lock-and-key colloids} \label{sec:model} Our model system is based on the experimental system of Sacanna and co-workers~\cite{Sacanna:2010ys}: it was introduced in Ref.~\onlinecite{Ashton2013} and further studied in Refs.~\onlinecite{Ashton2015-porous,Ashton2015-wall}. Similar model systems have also been studied in theory and simulation~\cite{Odriozola:2008uq,Marechal:2010fk,Odriozola2013,Ahmed:zr,Melendez2015,Villegas2016,Chang:2015aa}. The model consists of hard particles of different sizes and shapes, as shown in Fig.~\ref{fig:model}. To define the anisotropic particles, consider a hard spherical particle of diameter $\sigma$, in which we make a concave indentation by cutting away its intersection with a second sphere of diameter $\sigma_{\rm c}$. The distance between the centers of the original sphere and the cutting sphere is $d_{\rm c}$. The orientation of an indented colloid is described by a unit vector $\bm{n}$ that points from the center of the original sphere towards the center of the cutting sphere. These indented particles interact with a depletant consisting of smaller hard spheres of diameter $q\sigma$. In some cases, we also mix these two components with additional hard spheres of diameter $\sigma_{\rm K}$. To make contact with Refs.~\onlinecite{Ashton2013,Ashton2015-porous,Ashton2015-wall}, note that if $\sigma_{\rm c}=\sigma$ then the depth of the indentation (measured from the lip) is $h=(\sigma - d_{\rm c})/2$, so specifying the depth $h$ is equivalent to specifying the shape parameter $d_{\rm c}$. \begin{figure} \includegraphics[width=7cm]{fig1.pdf} \caption{Illustration of the different hard particles considered in this work. (a) Indented colloid (lock) particle, defined by considering a sphere of diameter $\sigma$ and cutting away its intersection with a second sphere of diameter $\sigma_{\rm c}$. (b) A spherical colloid (key) particle with diameter $\sigma_{\rm K}$, comparable with $\sigma$.
(c) Smaller depletant particle of diameter $q\sigma$: in this work we take $q=0.1$ so the depletant is significantly smaller than the colloidal particles.} \label{fig:model} \end{figure} We refer to the indented particles as \emph{lock particles}, since the spherical keys fit within the indentation, leading to \emph{lock-and-key binding}. Compared with the interaction between spherical particles, this binding is strong, due to the complementary shapes of the lock and key particles~\cite{Kinoshita:2002kx,Konig2008,Odriozola:2008uq,Sacanna:2010ys,Anders:2014la}. We refer to both the indented particles and the keys as \emph{colloidal particles}, to distinguish them from the depletant. We have in mind that both species of colloidal particles have comparable sizes, while the depletant particles are considerably smaller. (In this work we take $\sigma_{\rm K}=\sigma$ and $q=0.1$ throughout.) It is therefore useful to \emph{integrate out} the depletant degrees of freedom, to arrive at a coarse-grained system in which only the colloids survive, and the effect of the depletant is captured via a two-body effective interaction~\cite{Dijkstra1999}. This interaction depends on the chemical potential of the depletant, which we describe in terms of its (reservoir) volume fraction $\eta$. Our numerical method for integrating out the depletant involves explicit simulations of a pair of colloids in a depletant fluid. To the extent that such simulations are feasible, it is applicable to any type of depletant fluid and is thus quite general. We note that one can avoid explicit simulation of the depletant fluid if one assumes that the depletant is `ideal', in which case the depletion potential can be estimated via a numerical integration scheme. This is the approach taken in Ref.~\onlinecite{Villegas2016}, which, in a spirit similar to the present work, makes estimates of how the binding free energy of lock and key colloids depends on their geometry and relative orientations. 
We have performed Monte Carlo simulations of interacting lock and key colloids with a hard sphere depletant, and separately, colloids that interact with each other through an effective interaction that is designed to mimic the full colloid-depletant mixture. In all cases we use the geometrical cluster algorithm~\cite{Dress1995,Liu2004} (GCA) to move the particles, following the same methods as in Refs.~\onlinecite{Ashton2013,Ashton2015-porous}. \section{Theory} \label{sec:theory} \newcommand{\overline{\rho}}{\overline{\rho}} \newcommand{k_{\rm B}}{k_{\rm B}} Before describing results for indented colloids, we present our general method for inferring (from simulation data) the effective interactions between anisotropic colloids. We begin with a brief review of the situation for isotropic (spherical) particles. In this case, the effective interaction can be defined in terms of the radial distribution function, in the dilute limit. Given a large system of colloidal particles interacting with a depletant at (reservoir) volume fraction $\eta$, one defines \begin{equation} g_\eta(r) = \frac{\rho_\eta^{(2)}(\bm{R},\bm{R}')}{\overline{\rho}^2} \end{equation} where translational invariance means that the right-hand side depends only on $r=|\bm{R}-\bm{R}'|$, we have introduced the mean colloid density $\overline{\rho}$, and the two-body density \begin{equation} \rho_\eta^{(2)}(\bm{R},\bm{R}') = \langle \rho(\bm{R}) \rho(\bm{R}') \rangle_\eta - \overline{\rho}\delta(\bm{R}-\bm{R}'). \end{equation} Angle brackets $\langle \cdot \rangle_\eta$ indicate equilibrium averages in the colloid-depletant mixture, with depletant volume fraction $\eta$. Given these definitions, the (dimensionless) effective potential between the colloids can be defined as \begin{equation} W_{\rm eff}^\eta(r) = \lim_{\overline{\rho}\to0} \left[ - \log \frac{g_\eta(r)}{g_0(r)} \right] \label{equ:weff-sph} \end{equation} for all $r$ where $g_0(r)>0$, and $W_{\rm eff}(r)=0$ otherwise. 
Here $g_0(r)=g_{\eta=0}(r)$ is the radial distribution function in the absence of depletant. For hard spherical colloids of diameter $\sigma$, we have that $g_0(r)\to\Theta(r-\sigma)$ as $\overline{\rho}\to0$, so the denominator in (\ref{equ:weff-sph}) is not required, but we include it for later convenience. For anisotropic colloids, there is a corresponding two-body density $\rho_\eta^{(2)}(\bm{R},\Omega,\bm{R}',\Omega')$ which depends on the positions $\bm{R}$ and orientations $\Omega$ of both colloids. The corresponding effective potential is (by analogy with (\ref{equ:weff-sph})) \begin{equation} W_{\rm eff}^\eta(\bm{R},\Omega,\bm{R}',\Omega') = - \lim_{\overline{\rho}\to0} \left[ \log \frac{\rho_\eta^{(2)}(\bm{R},\Omega,\bm{R}',\Omega')}{\rho_0^{(2)}(\bm{R},\Omega,\bm{R}',\Omega')} \right] \label{equ:weff-aniso} \end{equation} In contrast to the spherical case where the two-body density depends only on the distance between the particles, this two-body density depends on more than one variable. For example, if the orientation of each colloid can be described in terms of a single orientation vector (as for the indented colloids considered here), then $W_{\rm eff}^\eta$ depends on the distance between the particles and on three angular co-ordinates that describe the relative orientation of the two colloids (see Fig.~\ref{fig:lock-lock-coord}). This makes estimation of $W_{\rm eff}^\eta$ much more challenging for anisotropic colloids, because while $g(r)$ can be inferred from simulation data via a simple one-dimensional histogram, the direct generalisation of that method to anisotropic particles would require assembly of a four-dimensional histogram. \rlj{ For a one-dimensional histogram, one might expect to represent the effective potential accurately using a histogram with around 100 bins. 
To obtain a four-dimensional histogram at similar accuracy, one would require $100^4$ bins, and assembly of such a histogram would require a data set with at least $100$ times as many samples as there are bins in the histogram. One easily sees that this method quickly becomes infeasible. Moreover, it does not provide a simple or intuitive representation of the effective interaction.} The method that we now present shows how the complexity of these high-dimensional distributions can be reduced by an appropriate choice of co-ordinate system, leading to a parameterisation of the effective potential. These procedures require physical insight into the interacting system, but we show that accurate results are still available even if the complicated four-dimensional function $W_{\rm eff}$ is simplified very considerably. \begin{figure} \includegraphics[width=8cm]{fig2.pdf} \caption{(a) Co-ordinate system describing the relative orientation of two indented colloids. We define $\theta_{1,2}$ as the angles between the particles' orientation vectors $\bm{n}_{1,2}$ and the interparticle displacement vector. [The interparticle vector is defined between the geometric centres of the locks: here we show the case $d_{\rm c}=\sigma/2$, for which the geometrical centre of a lock lies on its concave surface.] We define $\theta_{\rm R}=\min(\theta_1,\theta_2)$ and $\theta_{\rm I}=\max(\theta_1,\theta_2)$. The orientation vectors $\bm{n}_{1,2}$ are not in general co-planar with the interparticle vector, so in order to describe the relative position and orientation of the two particles, we must also specify the angle $\phi=\cos^{-1}(\bm{n}_1\cdot\bm{n}_2)$. All angles take values in the range $[0,\pi]$. Interchanging the particle labels 1 and 2 leaves the angles $\theta_{\rm R},\theta_{\rm I},\phi$ invariant, so any effective potential that depends on only these angles and the particle separation is automatically independent of particle labelling.
(b) The specific (lock-and-key) binding regime is associated with small values of $\theta_{\rm R}$ and $\phi$ (in this case $\theta_{\rm R}=\theta_1\approx0$), and large values of $\theta_{\rm I}$. (c) The non-specific (back-to-back) regime is associated with large values of $\theta_{\rm R}$, in which case the interaction strength also depends weakly on $\theta_{\rm I},\phi$. (d) The \emph{mouth-to-mouth} regime is associated with small values of $\theta_{\rm R},\theta_{\rm I}$ and large values of $\phi$.} \label{fig:lock-lock-coord} \end{figure} \subsection{Second virial coefficients} The key to the accuracy of our scheme is that we develop an approximate effective potential which matches precisely the second virial coefficient associated with the true effective potential. In the absence of a depletant, we describe the interactions between colloidal particles via a two-body potential $v_0(\bm{R},\Omega,\bm{R}',\Omega')$. We then define a second virial coefficient associated with the effective interactions among the colloids, which is \begin{equation} B_2^\eta = \frac12 \int [ 1 - {\rm e}^{-\beta v_{\rm eff}(\bm{R},\Omega,\bm{R}',\Omega')} ] \mathrm{d}\bm{R}' \mathrm{d}\Omega' \label{equ:b2} \end{equation} where $\beta v_{\rm eff} = \beta v_0 + W_{\rm eff}$, and $\beta=1/k_{\rm B} T$ is the inverse temperature. The right hand side of (\ref{equ:b2}) is independent of $(\bm{R},\Omega)$ since the system is translationally and rotationally invariant. For later convenience, we define the orientational integral to be normalised such that $\int\mathrm{d}\Omega=1$ so if the orientation of the particle can be described by a single unit vector $\bm{n}$ then $\mathrm{d}\Omega = \mathrm{d}^2\bm{n}/(4\pi)$.
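As an illustrative aside (not part of the original analysis): in the isotropic special case, Eq.~(\ref{equ:b2}) reduces to $B_2=\frac12\int[1-\mathrm{e}^{-\beta v(r)}]\,4\pi r^2\,\mathrm{d}r$. The sketch below, with a hard-sphere-plus-square-well potential standing in for the true effective interaction, checks a direct quadrature of this integral against the known closed form.

```python
import math

def b2_radial(v, r_max=5.0, n=200000):
    """B2 = (1/2) * integral of [1 - exp(-v(r))] * 4*pi*r^2 dr,
    for an isotropic pair potential v(r) in units of kT (trapezoidal rule)."""
    dr = r_max / n
    total = 0.0
    for i in range(n + 1):
        r = i * dr
        f = (1.0 - math.exp(-v(r))) * 4.0 * math.pi * r * r
        total += (0.5 if i in (0, n) else 1.0) * f
    return 0.5 * total * dr

# Hard core of diameter sigma plus a square well of depth eps (in kT units)
# extending to lam*sigma -- a stand-in for a short-ranged depletion attraction.
sigma, lam, eps = 1.0, 1.1, 2.0

def v_sw(r):
    if r < sigma:
        return float("inf")   # hard-core exclusion
    if r < lam * sigma:
        return -eps           # attractive well
    return 0.0

# Closed form for this potential: B2 = (2*pi/3)*sigma^3*[1 - (e^eps - 1)*(lam^3 - 1)]
b2_exact = (2.0 * math.pi / 3.0) * sigma**3 * (1.0 - (math.exp(eps) - 1.0) * (lam**3 - 1.0))
```

For these parameters $B_2<0$: the attraction from the well outweighs the excluded-volume contribution, the situation relevant for strong depletion forces.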
Now imagine fixing the position and orientation $\bm{R},\Omega$ of the first colloidal particle, and decomposing the domain of the integral in (\ref{equ:b2}) into several regions -- each region will correspond to a particular set of positions and orientations of a second particle. For example, for the lock-shaped colloids shown in Fig.~\ref{fig:lock-lock-coord}, one such region will involve the two particles bonded in the ``back-to-back'' binding mode (Fig.~\ref{fig:lock-lock-coord}c). The contribution of region $X$ to the second virial coefficient is \begin{equation} B_2^\eta(X) = \frac12 \int_X [ 1 - {\rm e}^{-\beta v_{\rm eff}(\bm{R},\Omega,\bm{R}',\Omega')} ] \mathrm{d}\bm{R}' \mathrm{d}\Omega' . \label{equ:B2X} \end{equation} Our aim in this work is to define an approximate parameterisation $W_{\rm app}$ of the effective potential so that for each relevant region $X$, the integral $B_2(X)$ evaluated with the approximated potential matches the value $B_2^\eta(X)$ obtained with the depletant in place. That is, we define \begin{equation} B_2^{\rm app}(X) = \frac12 \int_X [ 1 - {\rm e}^{-\beta v_{\rm app}(\bm{R},\Omega,\bm{R}',\Omega')}]\mathrm{d}\bm{R}' \mathrm{d}\Omega' , \label{equ:B2app} \end{equation} with $\beta v_{\rm app}=\beta v_0 + W_{\rm app}$ and we choose $W_{\rm app}$ such that for a specific set of regions $X$, we have $B_2^{\rm app}(X) = B_2^\eta(X)$. We choose $W_{\rm app}$ according to this criterion instead of (for example) matching the \emph{values} of $W_{\rm eff}$ and $W_{\rm app}$, since $B_2(X)$ determines the probability that two particles bind together with a relative orientation $X$, and this is the most important quantity for the physical properties of the coarse-grained system. 
This approach is also useful in other settings, for example in understanding the phase behaviour of systems with short-ranged interactions\cite{Noro2000,Vliegenthart2000}, or the application to anisotropic particles of Wertheim's theory of associating fluids\cite{Wertheim:1984zr,Ashton2013}. The second virial coefficients for different binding regimes are also related to equilibrium constants associated with binding/unbinding\cite{Melendez2015,Villegas2016} [in the simplest case, one has an equilibrium constant for binding in regime $X$ which is $K_X\approx -B_2(X)$, where the approximate equality is accurate when the effective interactions are strong ($\mathrm{e}^{-\beta v_{\rm eff}}\gg 1$).] \subsection{Estimation of piecewise constant effective interactions} Within this scheme, the simplest way to define an approximate effective potential is to choose a set of regions $X_1,X_2,\dots$, and take $W_{\rm app}$ to be a piecewise constant function, with a different (constant) value in each region. (This approach is used, for example, when describing the attraction between spherical colloids by a square-well potential.) To this end, it is useful to consider a system of just two colloidal particles in the presence of a depletant. Integrating out the depletant yields $Z_2^\eta = (V/2) \int {\rm e}^{-\beta v_{\rm eff}(\bm{R},\Omega,\bm{R}',\Omega')} \mathrm{d}\bm{R}' \mathrm{d}\Omega'$: standard results from liquid-state theory ensure that the effective potential $W_{\rm eff}$ that appears in this integral (via $v_{\rm eff}$) is the same as that defined in (\ref{equ:weff-aniso}). From the definition of $Z_2^\eta$, one immediately sees that $Z_2^\eta = V [(V/2) - B_2^\eta]$.
It follows that if a large set of simulation data samples the configuration space of this system, the fraction of data points for which the relative colloid co-ordinates are in region $X$ will be \begin{align} P_2^\eta(X) & = \frac{V}{2Z_2^\eta} \int_X {\rm e}^{-\beta v_{\rm eff}(\bm{R},\Omega,\bm{R}',\Omega')} \mathrm{d}\bm{R}' \mathrm{d}\Omega' \nonumber \\ & = \frac{V(X) - 2B_2^\eta(X)}{V-2B_2^\eta} \label{equ:P2B2} \end{align} where $V(X) = \int_X \mathrm{d}\bm{R}' \mathrm{d}\Omega'$ is the volume of region $X$. From the definition of $B_2^{\rm app}$ in (\ref{equ:B2app}) and using the fact that $W_{\rm app}$ is constant within region $X$, we obtain $B_2^{\rm app}(X) = \frac12 [ V(X) - \mathrm{e}^{-W_{\rm app}(X)} \int_X \mathrm{e}^{-\beta v_0} \mathrm{d}\bm{R}' \mathrm{d}\Omega']$. Choosing the value of $W_{\rm app}(X)$ such that $B_2^{\rm app}(X) = B_2^\eta(X)$, we obtain \begin{align} \mathrm{e}^{-W_{\rm app}(X)} &= \frac{V(X) - 2B_2^\eta(X)}{\int_X \mathrm{e}^{-\beta v_0} \mathrm{d}\bm{R}' \mathrm{d}\Omega'} \nonumber \\ & = \frac{V(X) - 2B_2^{\rm app}(X)}{V(X) - 2B_2^0(X)} \end{align} where the notation $B_2^0$ indicates $B_2^{\eta=0}$, so the second equality follows from (\ref{equ:b2}). Hence we can use (\ref{equ:P2B2}) to write \begin{align} W_{\rm app}(X) = -\log \left[ \frac{P_2^\eta(X)}{P_2^0(X)} \cdot \frac{1-(2B_2^\eta/V)}{1 - (2B_2^0/V)}\right] . \label{equ:Wapp-P} \end{align} Given simulation data for two colloids interacting with depletant, and accompanying data for two colloids alone in the simulation box, the right hand side of (\ref{equ:Wapp-P}) can be estimated (see below). This provides a value for $W_{\rm app}(X)$. In this case, the choice of the regions $X$ uniquely determines the approximate (coarse-grained) potential -- whatever choice is made, the $B_2^{\rm app}(X)$ will match exactly the $B_2^\eta(X)$ evaluated for the true depletion potential. 
Similarly, the total virial coefficient $B_2$ for the coarse-grained model exactly matches that of the true effective potential. Computationally, the scheme is efficient because $P_2^\eta(X)$ is a simple probability -- it avoids the requirement for binning and making histograms from high-dimensional data sets. We note that the factor $1-(2B_2^\eta/V)$ that appears in~(\ref{equ:Wapp-P}) can also be obtained from the data for two interacting particles. Given a range $R$ that is larger than the range of the effective interaction (but smaller than half the periodic box), the probability that the two particles have a separation $r>R$ is easily verified to be $P_R^\eta=(V-4\pi R^3/3)/(V-2B^\eta_2)$, from which we obtain \begin{equation} \left(1-\frac{2B_2^\eta}{V}\right) = \frac{2Z_2^\eta}{V^2} = \left( 1-\frac{4\pi R^3}{3V}\right) \frac{1}{P_R^\eta} \label{equ:B2-Z-PR} \end{equation} Using this result, the right hand side of (\ref{equ:Wapp-P}) can be evaluated from simulation data. Eq.~(\ref{equ:B2-Z-PR}) also allows straightforward estimation of the second virial coefficient \cite{Ashton2014-pre,Ashton2014-jcp}. Finally, we note that the use of a piecewise constant interaction potential is convenient because of the very simple expression (\ref{equ:Wapp-P}) that allows estimation of $W_{\rm app}$. Such a potential is often appropriate if the resulting coarse-grained system is to be simulated by a Monte Carlo method. In other cases, a continuous approximation to the effective potential may be required. Matching of second virial coefficients for different regions $X$ can still be achieved in this case, but is slightly more complicated. An example is given in Sec.~\ref{sec:lock-lock}, below. 
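A minimal sketch of this estimation pipeline, assuming synthetic counts (the numbers below are placeholders, not values from the paper's simulations); the formulas are Eqs.~(\ref{equ:Wapp-P}) and (\ref{equ:B2-Z-PR}):

```python
import math

def z_factor(n_far, n_samples, R, V):
    """Eq. (B2-Z-PR): estimate (1 - 2*B2/V) from the fraction P_R of
    two-colloid samples whose separation exceeds R (R larger than the
    interaction range, smaller than half the periodic box)."""
    P_R = n_far / n_samples
    return (1.0 - 4.0 * math.pi * R**3 / (3.0 * V)) / P_R

def w_app(nX_eta, n_eta, z_eta, nX_0, n_0, z_0):
    """Eq. (Wapp-P): piecewise-constant effective potential for region X,
    from the fractions of samples falling in X with (eta) and without (0)
    the depletant, and the corresponding (1 - 2*B2/V) factors."""
    return -math.log((nX_eta / n_eta) / (nX_0 / n_0) * (z_eta / z_0))

# Synthetic illustration: region X is visited 100x more often with the
# depletant present, and the normalisation factors happen to be equal,
# giving W_app = -ln(100), i.e. a strong attraction.
W = w_app(nX_eta=5000, n_eta=10**6, z_eta=1.0, nX_0=50, n_0=10**6, z_0=1.0)
```

Note that, as stated in the text, only region counts are needed: no high-dimensional histogram is ever assembled.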
\section{Results -- depletion potential for lock-key binding} \label{sec:lock-key} \begin{figure} \includegraphics[width=8.5cm]{fig3.pdf} \caption{(a) Effective potential $W_{\rm eff}(r,\cos\theta)$ for a lock and sphere, given a depletant with $\eta=0.08$. (b) Piecewise-constant approximation $W_{\rm app}$ for the same potential, inferred according to our scheme. The potential at a given point is the free energy cost for introducing a sphere whose centre is located at that point, given that the lock particle is positioned as shown.} \label{fig:LK-eff} \end{figure} \begin{figure} \includegraphics[width=7.0cm]{fig4.pdf} \caption{Configuration space for a lock interacting with a sphere, partitioned into three regimes and their underlying regions. For illustrative purposes, this figure shows a case where the range of the interaction is larger than in Fig.~\ref{fig:LK-eff}.} \label{fig:LK-schem} \end{figure} So far, we have assumed that all colloids in the system are of a single species. However, the theory presented above can easily be extended to mixtures of colloids. In this section, we consider the effective potential between the indented (lock) particles and spherical (key) particles. While considering mixtures might appear complicated, this situation is in fact rather simple because the effective potential depends on just two co-ordinates\cite{Odriozola:2008uq,Villegas2016}. Let the positions of one lock and one key be $\bm{R}_{\rm L}$ and $\bm{R}_{\rm K}$ and let the orientation of the lock be $\bm{n}_{\rm L}$. Then the effective potential depends only on $r=|\bm{R}_{\rm L} - \bm{R}_{\rm K}|$ and the angle $\theta$ between the lock orientation and the interparticle vector, which can be calculated from $\cos\theta = \bm{n}_{\rm L}\cdot(\bm{R}_{\rm K} - \bm{R}_{\rm L}) /r$, taking $0\leq \theta \leq \pi$.
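As a small illustrative helper (assumed for this sketch, not taken from the paper's code), the coordinates $(r,\theta)$ follow directly from the particle positions and the lock orientation:

```python
import math

def lock_key_coords(R_L, n_L, R_K):
    """Return (r, theta): the lock-key separation and the angle between
    the lock orientation n_L (a unit vector) and the interparticle vector,
    cos(theta) = n_L . (R_K - R_L) / r, with 0 <= theta <= pi."""
    d = [k - l for l, k in zip(R_L, R_K)]
    r = math.sqrt(sum(x * x for x in d))
    cos_t = sum(n * x for n, x in zip(n_L, d)) / r
    # clamp against floating-point excursions outside [-1, 1]
    return r, math.acos(max(-1.0, min(1.0, cos_t)))
```

A key sitting directly in front of the indentation has $\theta=0$; a key behind the lock (the back-to-back side) has $\theta=\pi$.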
\subsection{Perfectly-fitting lock and key} \label{subsec:lk-exact} We first consider a lock particle with $\sigma_{\rm c}=\sigma$ and $d_{\rm c}=0.5\sigma$, and a spherical (key) particle of diameter $\sigma_{\rm K}=\sigma$, that fits exactly within the lock. We used the GCA to simulate one lock and one key particle, interacting with depletant particles at various volume fractions. We constructed two-dimensional histograms of the separation $r$ and angular co-ordinate $\cos\theta$. Taking each bin of the histogram to be a region $X$ and using (\ref{equ:Wapp-P}), we arrive at a potential $W_{\rm app}$ that accurately represents $W_{\rm eff}$. (In the limit where the bin size of the histogram goes to zero, this $W_{\rm app}$ converges exactly to $W_{\rm eff}$.) The resulting estimate of $W_{\rm eff}$ is shown in Fig.~\ref{fig:LK-eff}(a). We used the same simulation results to infer an approximate (coarse-grained) effective potential $W_{\rm app}$, as we will describe shortly. We then performed GCA simulations for one lock and one key particle, interacting by this effective potential (in the absence of the depletant). Fig.~\ref{fig:LK-eff}(b) shows results for this coarse-grained system. We note that Figs.~\ref{fig:LK-eff}(a,b) are both generated from GCA simulations, using the same data analysis routines -- the differences between these figures arise because one set of GCA simulations includes the depletant explicitly while the other set uses a coarse-grained model of interacting colloids. The visual agreement between $W_{\rm eff}$ and $W_{\rm app}$ is good: as noted above, the contributions of the relevant binding regimes to the second virial coefficient also match exactly. To construct $W_{\rm app}$, we follow the general approach described above. We partition the two-dimensional space parameterised by $(r,\cos\theta)$ into several different regions, as illustrated in Fig.~\ref{fig:LK-schem}. Given this partitioning, the potential $W_{\rm app}$ follows directly.
(For separations outside the shaded regions in Fig.~\ref{fig:LK-schem}, we take $W_{\rm app}=0$.) We describe the various regions in turn before summarising our main results and their dependence on the depletant volume fraction $\eta$. The range of $r,\theta$ for which we obtain simulation data is limited by the hard core repulsion of the colloids (small-$r$) and by the finite box size (large-$r$). For the case of a perfectly fitting key of the same size as the lock ($\sigma=\sigma_{\rm K}=\sigma_{\rm c}$), and for any $d_{\rm c}$, the region forbidden by hard-core repulsion is $r(\theta) < r_0(\theta)$ with \begin{equation} r_0(\theta) = \begin{cases} \sigma, & \text{if}\ \theta\geq\theta^* \\ \sigma\cos(\theta^*-\theta), & \text{if}\ \theta<\theta^* \end{cases} \label{equ:r0-overlap} \end{equation} [The angle $\theta^*$ is defined as in Fig.~\ref{fig:LK-schem} as the angular co-ordinate of the lip of the lock. In this section we have $\sigma_{\rm c}=\sigma_{\rm K}$, and hence $\cos\theta^*=d_{\rm c}/\sigma$.] \subsubsection{Bound regime (specific lock-and-key binding)} The distance of closest approach between lock and key is $d_{\rm c}$ since in this case the key coincides with the cutting sphere that is used to define the lock shape. The strongest effective interaction between lock and key occurs when $r$ is close to $d_{\rm c}$. This is only possible for small angles $\theta$, due to the colloidal shapes. It is therefore sufficient to define the bound (lock-and-key) regime solely in terms of $r$: we define three regions (indexed by $n=1,2,3$) based on the distance between the colloids, which are denoted by $X_{\mathrm{LK},n}$. As shown in Fig.~\ref{fig:LK-schem}, the $n$th region includes separations $r$ satisfying \begin{equation} \tfrac13(n-1)q\sigma\leq (r-d_{\rm c})<\tfrac13 nq\sigma. \label{equ:lk-bound-ineq} \end{equation} Recall that $q\sigma$ is the diameter of a depletant particle, which determines the range of the depletion attraction.
Based on numerical simulations of two colloidal particles interacting with the depletant, we evaluated (\ref{equ:Wapp-P}) for each of these three regions, and for a range of depletant volume fractions $\eta$. For $\eta=0.08$ (the case illustrated in Fig.~\ref{fig:LK-eff}) the values of the effective potential in the three regions are $W_{\rm app}=-13.1,-9.1,-4.9$, consistent with the expected strong lock-and-key binding. (Recall $W_{\rm eff}$ and $W_{\rm app}$ are dimensionless potentials, normalised by $k_{\rm B} T$, so large negative values of $W_{\rm app}$ correspond to strong attractive forces.) \subsubsection{Non-specific regime} Lock-and-key binding occurs when the key particle approaches the concave surface of the lock. However, there are also significant depletion attractions when the key approaches the convex surface of the lock~\cite{Ashton2015-porous,Melendez2015,Villegas2016}: this is similar to the depletion attraction between two spheres. To account for this effect, we define two regions (indexed by $n=1,2$) which are denoted by $X_{\mathrm{BB},n}$ and illustrated in Fig.~\ref{fig:LK-schem}. These regions are specified by \begin{align} \theta & > \theta^\dag \nonumber \\ r^{{\rm BB}}_{n-1} & < (r-\sigma) < r^{{\rm BB}}_n \end{align} with $r^{{\rm BB}}_n=(0,q\sigma/3,q\sigma)$ for $n=0,1,2$, and the angle $\theta^\dag$ satisfies $\cos\theta^\dag=0.1$. We use a radial decomposition into just two regions for simplicity: the effective potential in this case resembles that between two spheres, and depends strongly on $r$ for small $r$ while the dependence for larger $r$ is weaker. The choice of the angle $\theta^\dag$ will be discussed in the next subsection. For $\eta=0.08$ the effective potential in these two back-to-back regions is $W_{\rm app}=-0.9,0.1$, showing that this potential is weaker than the lock-key binding (as expected).
However, this attraction can still be significant, particularly since the entropy (or number of configurations) compatible with this binding mode is much larger than for the lock-and-key case. \begin{figure} \includegraphics[width=8cm]{fig5.pdf} \caption{Approximated effective potential in the intermediate regime, as a function of $\cos\theta$, for separations $r$ between $r_0(\theta)$ and $r_0(\theta)+q\sigma/3$. The different values of $\theta$ are illustrated by sketches, with shaded regions indicating the depletion volume: large depletion volumes correspond to strong attractive forces. For small values of $\cos\theta$, the behaviour is similar to the non-specific binding regime. For $\cos\theta\approx 1$, the system approaches the lock-and-key binding regime, although the strongly bound configurations are included in the bound regions, leading to the relatively weak effective potential for this regime. In the intermediate regime, the key rolls around the lip of the lock, leading to a reduced depletion volume and therefore a reduced effective potential.} \label{fig:LK-ang} \end{figure} \subsubsection{Intermediate binding regime} The specific and non-specific binding modes considered so far tend to dominate the behaviour of this system. In between, there is an intermediate regime, as shown in Fig.~\ref{fig:LK-schem}. Within this regime, the effective potential depends on both the separation $r$ and the angle $\theta$. To capture this, we defined regions $X_{\mathrm{int},n,m}$ by \begin{align} r^{{\rm BB}}_{n-1} & < r-r_0(\theta) < r^{{\rm BB}}_n \nonumber \\ \theta_{m-1} & < \theta < \theta_m \nonumber \\ r &> d_{\rm c} + \tfrac13 q\sigma \label{equ:lk-roll-ineq} \end{align} Here $n$ is an index associated with the particle separation ($n=1,2$) and $m=1\dots M$ is associated with the angular co-ordinate $\theta$, for which we use a larger number of bins, equally spaced in $\cos\theta$.
(In this work we have taken $M=180$ although a smaller number of regions would also be possible.) The third inequality in (\ref{equ:lk-roll-ineq}) simply ensures that these intermediate regions do not overlap with the lock-and-key bound regions defined in (\ref{equ:lk-bound-ineq}). For the inner region ($n=1$) we show the values of $W_{\rm app}$ in Fig.~\ref{fig:LK-ang}, as a function of the angular co-ordinate (or equivalently the index $m$). For large $\theta$ (or small $\cos\theta$) the behaviour is similar to the non-specifically bound regime and $W_{\rm app}\approx -0.9$, consistent with that case. The largest angle that falls inside the intermediate regime is $\theta_M=\theta^\dag$, and $\theta^\dag$ is chosen large enough so that the intermediate regime includes all angles for which the behaviour differs significantly from the non-specifically bound regime, hence our choice $\cos\theta^\dag=0.1$. For small $\theta$ (or large $\cos\theta$), the system approaches the lock-and-key binding state, although there is no overlap with the lock-and-key bound regions. As one passes through the intermediate regime, there is a maximum in $W_{\rm app}$. To explain this, we sketch in Fig.~\ref{fig:LK-ang} the depletion volume for representative configurations. To obtain this volume, we consider for each colloid the volume that is inaccessible to the centre of a depletant particle. As two colloids approach each other, the depletion volume is the intersection between their inaccessible volumes, which provides an estimate of the strength of the interaction~\cite{Asakura1954}. The reduction in depletion volume as the key passes through the intermediate regime explains the maximum in $W_{\rm app}$. \begin{figure} \includegraphics[width=7.0cm]{fig6.pdf} \caption{(a) Strength of the effective interactions as a function of depletant volume fraction $\eta$. (b) Contributions of these two binding modes to the second virial coefficient (measured in units where $\sigma=1$).
One sees that even if the effective potential is strong for lock-and-key (specific) binding, the small volume $V(X)$ of the bound regions means that the contribution of this binding mode to the second virial coefficient becomes significant only for $\eta\gtrsim 0.06$, after which it quickly becomes very strong.} \label{fig:lk-b2} \end{figure} \subsubsection{Summary} The good agreement between exact and approximated effective interactions is shown in Fig.~\ref{fig:LK-eff}. The approximated interaction includes three parameters associated with the lock-and-key (specific) binding state, two parameters associated with back-to-back (non-specific) binding, and a lookup table for the intermediate regime. All parameters are inferred automatically from data for the two-colloid system (with and without depletant). Depending on the accuracy required for the effective potential in the intermediate regime, we anticipate that a considerably reduced approximate description would still be feasible and would capture the most important features of the system. See also Sec.~\ref{sec:simp} below. In Fig.~\ref{fig:lk-b2}(a) we plot the well depths associated with the specific and non-specific binding regimes, as a function of the depletant volume fraction $\eta$. (These are the values of the effective potential in the innermost regions, $n=1$.) The lock-and-key interaction is strong and increases rapidly with $\eta$, as expected, while the non-specific binding is weaker. However, lock-and-key binding requires localisation of the key particle in a small binding region, so the effective potential itself does not reflect the probability of binding in a given model. In Fig.~\ref{fig:lk-b2}(b) we show the contributions of the two binding modes to the second virial coefficient (measured in units where $\sigma=1$).
The larger volume (and hence larger entropy) associated with the non-specific binding means that this binding mode is preferred for small $\eta$ (where the interactions are weak in any case), but the lock-and-key binding regime depends strongly on $\eta$ and dominates for $\eta\gtrsim 0.07$. \subsection{Imperfectly fitting lock and key} \begin{figure} \includegraphics[width=8.5cm]{fig7.pdf} \caption{(a) Effective potential $W_{\rm eff}$ at $\eta=0.08$ for a spherical particle that does not exactly fit the indentation in the lock particle. (b) Piecewise constant approximation $W_{\rm app}$ to this interaction. The repulsive region in the lock mouth is not captured: this is due to layering of the depletant particles in this region. This effect could be captured by introducing an extra region in the approximated interaction, but we chose to ignore it, for simplicity.} \label{fig:LK-rattle-eff} \end{figure} To illustrate the general applicability of this method, we now apply it to the effective interaction between a lock particle and a spherical `key' whose diameter does not precisely fit the indentation on the lock. Specifically, we take a lock particle with $d_{\rm c}=0.5\sigma$ (as before) but $\sigma_{\rm c}=0.7\sigma$, and we keep the same key particle as before ($\sigma_{\rm K}=\sigma$). Fig.~\ref{fig:LK-rattle-eff} shows results for $W_{\rm eff}$ and $W_{\rm app}$ in this case, which can be compared with Fig.~\ref{fig:LK-eff}. Considering first $W_{\rm eff}$, the general structure is very similar. The notable differences are that the bound region is more spread out and the depletion attraction is weaker ($W_{\rm eff}$ less negative), both due to the imperfect fit of the key within the lock. There is also a region where the depletion interaction is repulsive, which is located close to the bound region. This effect is due to layering of the (hard) depletant particles near the surface of the lock.
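Once the regions are fixed, the inference step itself is simple: for each region one compares its occupation probability in runs with and without depletant. A minimal sketch of this estimator, written in the free-energy-difference form $W=-\log[(p^\eta/p^0)(Z_2^\eta/Z_2^0)]$ used in this paper; the visit counts and the partition-function ratio are assumed inputs from the two-colloid simulations:

```python
import math

def w_region(counts_eta, total_eta, counts_0, total_0, z_ratio=1.0):
    """Piecewise-constant potential value for one region X, estimated
    from visit counts with depletant (counts_eta of total_eta samples)
    and without (counts_0 of total_0).  z_ratio stands for Z2^eta/Z2^0."""
    p_eta = counts_eta / total_eta   # occupation probability with depletant
    p_0 = counts_0 / total_0         # ... and without
    return -math.log((p_eta / p_0) * z_ratio)
```

A region visited twenty times more often with depletant than without corresponds to a well of depth $\log 20\approx 3$ (in units of $k_{\rm B}T$).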
The method for inferring the approximate effective potential $W_{\rm app}$ follows closely that of the previous section. The main differences are as follows. The specific binding regime is encapsulated by a single region with $d_c \leq r < d_c+q\sigma/3$, which plays the part of the innermost region $X_{{\rm LK},n=1}$ for the exactly-fitting key. The non-specific region is identical to that of the previous section. The intermediate regime is treated in the same way as before (separated into two regions according to the separation and 180 regions according to angle), the only difference being that the function $r_0(\theta)$ which describes the excluded volume of the lock is slightly more complex. Comparing Fig.~\ref{fig:LK-rattle-eff}(a,b), one sees that the region of repulsion between lock and key is not captured by this method. Generalisation of the method to include this region would be straightforward, but the effect is relatively weak in this case so we have ignored it for the purposes of this study. Also, it is apparent from Fig.~\ref{fig:LK-rattle-eff} that the strength of the depletion potential in the intermediate regime (as defined here) is comparable with its strength in the specifically bound regime, due to the more delocalised nature of the bound state. \subsection{Summary of results for effective interactions between lock and sphere particles} We have demonstrated that the method of Sec.~\ref{sec:theory} can be used to describe the effective interactions between lock and key particles. This method can be fully automated, so even if the parameterised effective interactions depend on a large number of parameters, these can be easily extracted from available simulation data. However, two comments are in order.
First, the aim of the effective potential is to allow efficient simulation of a system of many interacting colloids, but any such application requires an effective interaction potential between the locks -- we discuss this interaction in the next section. Second, for the simple systems considered so far, where the effective interactions depend on only two co-ordinates, one can imagine defining the effective potential by using large lookup tables based on the results in Figs.~\ref{fig:LK-eff}(a) and \ref{fig:LK-rattle-eff}(a), without defining regions associated with lock-and-key and non-specific binding. However, for the lock-lock interactions described in the next section, the effective potential cannot be described by a simple two-dimensional histogram -- it depends on a set of four co-ordinates which are required in order to specify the relative position and orientation of the locks. In that case a direct parameterisation of the effective potential would require a four-dimensional histogram instead of the two-dimensional histograms in Fig.~\ref{fig:LK-eff}(a). This is impractical, so the theoretical approach described in Sec.~\ref{sec:theory} becomes essential. Indeed, we note that most previous studies have concentrated on interactions between locks and spheres~\cite{Odriozola:2008uq,Odriozola2013,Melendez2015,Villegas2016}, presumably because of the difficulty of characterising the lock-lock interaction. \section{Results -- interaction between lock particles} \label{sec:lock-lock} In this section we consider effective interactions between indented colloids with $\sigma_{\rm c}=\sigma$ and $d_{\rm c}=0.5\sigma$: these are the same particles considered in Sec.~\ref{subsec:lk-exact}. However, as far as possible, we describe our methods in a way that is easily generalized to other values of the lock shape parameters.
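The dimensionality argument above is easy to quantify. A back-of-envelope sketch, assuming a (hypothetical) resolution of 100 bins per co-ordinate and around $10^2$ samples per histogram cell for usable statistics:

```python
# Cell counts for direct lookup-table parameterisations of the
# effective potential.  The resolution and samples-per-cell target
# are illustrative assumptions, not values from the text.
bins = 100
samples_per_cell = 100

cells_2d = bins ** 2            # lock-sphere case: (r, cos theta)
cells_4d = bins ** 4            # lock-lock case: (r, theta_R, theta_I, phi)

print(cells_2d)                       # 10^4 cells: easily populated
print(cells_4d)                       # 10^8 cells
print(cells_4d * samples_per_cell)    # ~10^10 samples to populate them
```

Even at this modest resolution, populating the four-dimensional table requires around four orders of magnitude more data than the two-dimensional one.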
\subsection{Choice of co-ordinate system and analogy with lock-key binding} \newcommand{\theta_{\rm R}}{\theta_{\rm R}} \newcommand{\theta_{\rm I}}{\theta_{\rm I}} The definition of an approximate effective potential $W_{\rm app}$ for interacting lock particles requires a suitable co-ordinate system, which we take as in Fig.~\ref{fig:lock-lock-coord}. Specifically, consider two particles whose geometrical centres are at $\bm{R}_{1},\bm{R}_{2}$ and whose orientations are $\bm{n}_1,\bm{n}_2$. The distance between them is $r = |\bm{R}_1-\bm{R}_2|$. We define three angles by $\cos\theta_1=\bm{n}_1\cdot (\bm{R}_2-\bm{R}_1)/r$, $\cos\theta_2=\bm{n}_2\cdot (\bm{R}_1-\bm{R}_2)/r$ and $\cos\phi=\bm{n}_1\cdot\bm{n}_2$, with all three angles chosen in the range $[0,\pi]$. To ensure that the potential is symmetric under interchange of the two particles, it is useful to define a \emph{relevant} angle $\theta_{\rm R}=\min(\theta_1,\theta_2)$ and an \emph{irrelevant} angle $\theta_{\rm I}=\max(\theta_1,\theta_2)$. The naming of the irrelevant angle anticipates the fact that our approximate effective interaction will not depend explicitly on $\theta_{\rm I}$: see below. On the other hand, the role of the relevant angle $\theta_{\rm R}$ in this interaction is analogous to the role of the angle $\theta$ in the lock-sphere interaction considered in Sec.~\ref{sec:lock-key}. To illustrate the analogy with the lock-sphere interaction, we define the probability density for $(r,\cos\theta_{\rm R})$, based on the data for the two particle system.
That is, \begin{multline} p^\eta(\hat{r},\hat{c}_r) = \frac{1}{2Z_2^\eta} \int \delta(\hat{r}-r) \delta(\hat{c}_r-\cos\theta_{\rm R}) \\ \times \mathrm{e}^{-v_{\rm eff}(\bm{R}_1,\Omega_1,\bm{R}_2,\Omega_2)} \mathrm{d}\bm{R}_1 \mathrm{d}\Omega_1 \mathrm{d}\bm{R}_2 \mathrm{d}\Omega_2 \end{multline} [The notation here is that $r=|\bm{R}_1-\bm{R}_2|$ is the particle separation (which is being integrated over) and $\hat{r}$ is the value of the separation at which the probability density is evaluated. Similarly $\theta_{\rm R}$ is the relevant angle (which is being integrated over) and $\hat{c}_r$ is the argument of the probability density. In the following, we omit the hats in cases where this does not lead to any ambiguity.] The distribution $p^\eta({r},{c}_r)$ can be estimated numerically by binning and histogramming data for $r$ and $\cos\theta_{\rm R}$ from a computer simulation of two colloidal particles, interacting with a depletant. Then we define a free energy (strictly, a free energy {difference}) that depends on these two co-ordinates, as \begin{equation} w^\eta(r,c_r) = -\log \left[ \frac{p^\eta(r,c_r)}{p^0(r,c_r)} \cdot \frac{Z_2^\eta}{Z_2^0} \right] \end{equation} where $p^0$ indicates $p^{\eta=0}$, as usual. The partition functions $Z_2^{\eta,0}$ are evaluated using (\ref{equ:B2-Z-PR}). For the lock-key system, this free energy is equal to the effective interaction. In the lock-lock system, this is not the case because the effective interaction $W_{\rm eff}$ depends on two additional variables $\phi,\theta_{\rm I}$, as we discuss below. However, $w(r,c_r)$ is a useful quantity to measure, because it shows the values of $r$ and $\theta_{\rm R}$ which are enhanced or suppressed by the depletant. \begin{figure} \includegraphics[width=8.5cm]{fig8.pdf} \caption{(a) Free energy $w(r,c_r)=w(r,\cos\theta)$ evaluated for two locks interacting with a depletant.
Comparing with Fig.~\ref{fig:LK-eff}, one sees regions associated with specific and non-specific binding, as expected. The dotted line is Eq.~(\ref{equ:r0-overlap}) which is the distance of closest approach of a spherical key particle. In contrast to the lock-key case, there is a finite probability of the two particles approaching closer than this boundary: this corresponds to lock particles approaching each other in a ``mouth-to-mouth'' configuration. (b)~Free energy $w(r,c_r)$ calculated for locks interacting via the effective potential (without depletant). (c)~Enlarged figure showing the free energy in the lock-key bound region, where the system with depletant (data from panel (a)) is compared with the coarse-grained (approximate) potential (data from panel (b)). } \label{fig:ll-eff-r-cr} \end{figure} The free energy $w(r,c_r)$ is plotted in Fig.~\ref{fig:ll-eff-r-cr}(a). This shows that the effective potential is attractive in two main regions, which correspond to the specific (lock-and-key) and non-specific (back-to-back) binding modes shown in Fig.~\ref{fig:lock-lock-coord}(b,c). In contrast to Fig.~\ref{fig:LK-eff}, there is also a finite probability density for the lock particles to approach each other more closely than is possible for a lock and key. This effect arises when the two lock mouths (indentations) are oriented towards each other, as in Fig.~\ref{fig:lock-lock-coord}(d). In that case one sees that a particle overlap would result if either lock was replaced by a sphere. \subsection{Choice of regions $X$} As for the case of a sphere interacting with a lock, we define the approximate effective potential $W_{\rm app}$ by dividing the parameter space into regions and considering them in turn.
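For reference, the co-ordinates $(r,\theta_{\rm R},\theta_{\rm I},\phi)$ defined in the previous subsection translate directly into code. A minimal sketch in plain Python, assuming unit orientation vectors:

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _clamp(c):
    # guard acos against rounding slightly outside [-1, 1]
    return max(-1.0, min(1.0, c))

def lock_lock_coords(R1, n1, R2, n2):
    """Return (r, theta_R, theta_I, phi), with all angles in [0, pi]."""
    d = [x2 - x1 for x1, x2 in zip(R1, R2)]        # R2 - R1
    r = math.sqrt(_dot(d, d))
    theta1 = math.acos(_clamp(_dot(n1, d) / r))    # cos t1 = n1.(R2-R1)/r
    theta2 = math.acos(_clamp(-_dot(n2, d) / r))   # cos t2 = n2.(R1-R2)/r
    phi = math.acos(_clamp(_dot(n1, n2)))          # cos phi = n1.n2
    return r, min(theta1, theta2), max(theta1, theta2), phi
```

With $\bm{n}_1$ along the separation axis (so that $\theta_{\rm R}=0$), this function reproduces the purely geometrical identity $\phi=\pi-\theta_{\rm I}$ for any choice of $\bm{n}_2$.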
Given the similarities between Fig.~\ref{fig:ll-eff-r-cr}a and Fig.~\ref{fig:LK-eff}a, the natural choice is to retain the three main regimes shown in Fig.~\ref{fig:LK-schem}, associated with specific (lock-and-key) binding, non-specific (back-to-back) binding and an intermediate regime. In addition, there is a fourth regime which consists of those values of $r,c_r$ which were inaccessible in the lock-sphere case: this corresponds to the mouth-to-mouth case in Fig.~\ref{fig:lock-lock-coord}(d). In the following, we consider these four regimes in turn. Each regime is defined by constraints on the values of $r$ and $c_r=\cos\theta$. Within each regime, we identify different regions and we either use (\ref{equ:Wapp-P}) to assign a value of $W_{\rm app}$, or in some cases we use an alternative but similar method (see below). The resulting approximate effective interaction is independent of $\theta_{\rm I}$. In the back-to-back (non-specific) regime and the intermediate regime it is also independent of $\phi$, and the effective interaction is very similar to the lock-sphere interaction. In the lock-and-key regime and the mouth-to-mouth regime, the effective potential does depend on the angle $\phi$. \subsubsection{Specific (lock-and-key) binding} \label{subsec:ll-specific} \begin{figure} \includegraphics[width=7cm]{fig9.pdf} \caption{(a) Free energy as a function of $(c_{\rm I},c_\phi)=(\cos\theta_{\rm I},\cos\phi)$, given that the particles are in the lock-and-key binding regime. The attraction is strongest when $\phi$ is small ($c_\phi\approx 1$). (b) Analogous free energy as a function of $c_\phi$ only, but subdivided into three regions according to the distance between the lock particles. The points show the measured free energy and the lines show the $\phi$-dependence of the approximate coarse-grained potential $W_{\rm app}$.
The angular dependence of $W_{\rm app}$ does not match perfectly but the contributions of the $\phi$-dependent regions to the second virial coefficient do match the true effective potential. } \label{fig:ll-ci-cphi} \end{figure} The specific (lock-and-key) binding regime in Fig.~\ref{fig:LK-schem} has an analogue for the interacting lock case which is \begin{equation} r_0(\theta) < r < d_c+q\sigma \label{equ:ineq-A} \end{equation} This region corresponds to the bound area in Fig.~\ref{fig:LK-eff}. The explicit lower bound on $r$ comes from (\ref{equ:r0-overlap}) and means that this specific binding regime only includes states where one of the locks could be replaced by a spherical key particle without overlapping the other lock. (Particles approaching more closely than this will be considered in the mouth-to-mouth regime, see below.) Within this regime, the effective interaction depends strongly on the angle $\phi$. To quantify this, we define an (un-normalised) probability density for the cosines of the angles $\theta_{\rm I},\phi$, for states restricted to this lock-and-key binding regime: \begin{multline} p^\eta(\hat{c}_i,\hat{c}_\phi|{\rm LK}) = \frac{1}{2Z_2^\eta} \int_{\rm LK} \delta(\hat{c}_i-\cos\theta_{\rm I}) \delta(\hat{c}_\phi-\cos\phi) \\ \times \mathrm{e}^{-v_{\rm eff}(\bm{R}_1,\Omega_1,\bm{R}_2,\Omega_2)} \mathrm{d}\bm{R}_1 \mathrm{d}\Omega_1 \mathrm{d}\bm{R}_2 \mathrm{d}\Omega_2 \label{equ:pcicp} \end{multline} where the integral domain is given by (\ref{equ:ineq-A}). There is an associated free energy \begin{equation} w^\eta(c_i,c_\phi|{\rm LK}) = -\log \left[ \frac{p^\eta(c_i,c_\phi|{\rm LK})}{p^0(c_i,c_\phi|{\rm LK})} \cdot \frac{Z_2^\eta}{Z_2^0} \right] \end{equation} which is negative if the depletant enhances the probability of finding a particular value of these co-ordinates, given that (\ref{equ:ineq-A}) is satisfied. This free energy is plotted in Fig.~\ref{fig:ll-ci-cphi}(a).
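Estimators for free energies of this type follow the same pattern throughout: histogram the sampled co-ordinates with and without depletant, then take minus the log of the ratio. A sketch using NumPy; the $(N,2)$ sample arrays of $(\cos\theta_{\rm I},\cos\phi)$, already restricted to the lock-and-key regime, and the ratio $Z_2^\eta/Z_2^0$ are assumed inputs:

```python
import numpy as np

def conditional_free_energy(samples_eta, samples_0, bins=50, z_ratio=1.0):
    """Estimate w(c_i, c_phi | LK) on a 2D grid from sampled cosines.

    samples_eta, samples_0: (N, 2) arrays of (cos theta_I, cos phi)
    from runs with and without depletant; z_ratio stands for Z2^eta/Z2^0.
    """
    edges = np.linspace(-1.0, 1.0, bins + 1)
    h_eta, _, _ = np.histogram2d(samples_eta[:, 0], samples_eta[:, 1],
                                 bins=[edges, edges], density=True)
    h_0, _, _ = np.histogram2d(samples_0[:, 0], samples_0[:, 1],
                               bins=[edges, edges], density=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        w = -np.log((h_eta / h_0) * z_ratio)
    return w   # NaN/inf where either histogram is empty
```

Cells that are unvisited in either run come out as NaN or infinite, corresponding to the inaccessible (white) regions in the figure.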
Two points are noteworthy: first, all the data lie close to a diagonal line in this two-dimensional space. The reason is purely geometrical -- if one fixes $\theta_{\rm R}=0$ then one must have $\phi=\pi-\theta_{\rm I}$, just from the definition of the co-ordinate system and independent of the colloid shapes. Given that all data in Fig.~\ref{fig:ll-ci-cphi} come from the specific binding regime and therefore have small values of $\theta_{\rm R}$, this explains the inaccessible (white) regions in Fig.~\ref{fig:ll-ci-cphi}. Second, there is significant dependence of the effective potential on these angles, with strong interactions when $\phi$ is small ($c_\phi\approx 1$) and $\theta_{\rm I}$ is large ($c_i\approx -1$). The reason for this strong dependence is illustrated by the snapshots in the same figure, which show that only when $c_\phi$ is large does one observe strong lock-and-key binding. Motivated by the one-dimensional structure in Fig.~\ref{fig:ll-ci-cphi}(a), we define an analogous free energy for $c_\phi$ alone, \begin{equation} w^\eta(c_\phi|X) = -\log \left[ \frac{p^\eta(c_\phi|X)}{p^0(c_\phi|X)} \cdot \frac{Z_2^\eta}{Z_2^0} \right] \end{equation} as a function of the single variable $c_\phi$, now restricted to a specific region $X$. We subdivide the lock-and-key binding regime into three regions $X_{{\rm LK},n}$, according to the same distance cutoffs used for the analogous case in (\ref{equ:lk-bound-ineq}). Fig.~\ref{fig:ll-ci-cphi}(b) shows the effective interactions as a function of $\phi$ for these three regions. The $\phi$-dependence of $W_{\rm eff}$ could be captured approximately with a piecewise constant function but we choose an alternative approach here. For $\cos\phi \geq \cos\phi^*$ and within each region $X_{{\rm LK},n}$, we use a constant value of $W_{\rm app}$.
We take $\cos\phi^*=0.4$, consistent with Fig.~\ref{fig:ll-ci-cphi}, and $W_{\rm app}$ is determined from (\ref{equ:Wapp-P}) using the regions $X_{{\rm LKf},n}$ obtained from $X_{{\rm LK},n}$ by restricting also to $\cos\phi\geq\cos\phi^*$. For $-1\leq\cos\phi<\cos\phi^*$ we take $W_{\rm app}$ to have linear dependence on $\cos\phi$, to reflect the structure in $w^\eta(c_\phi|X)$. The result is shown in Fig.~\ref{fig:ll-ci-cphi}(b). In the linear regime, we fix $W_{\rm app}$ to be continuous at $\phi^*$ and we choose the value of the intercept $W_{\rm app}(c_\phi=-1)$ so that the contribution of region $X_{{\rm LK},n}$ to the second virial coefficient matches the corresponding value of the exact effective interaction: see Appendix~\ref{app:wapp-fit}. One sees from Fig.~\ref{fig:ll-ci-cphi} that this method overestimates the strength of the effective interaction for negative values of $c_\phi$, but this is not a serious approximation since such configurations are rather rare in any case. The poor agreement for $c_\phi\approx-1$ occurs because the second virial coefficient is dominated by values of $\phi$ for which the interaction is strong: in that case the factor ${\rm e}^{-\beta v_{\rm eff}}$ in (\ref{equ:b2}) is large. Hence, the method of matching second virial coefficients tends to parameterise the effective interaction most accurately in regions of strong binding (in this case, large $c_\phi$). In fact, this feature is a key strength of the method, since regions of strong binding are the most important feature that the coarse-grained model should capture. \rlj{Note that an alternative strategy might have been to fit the \emph{values} of the effective potential in Fig.~\ref{fig:ll-ci-cphi}(b) instead of matching $B_2(X)$.
This would lead to a better apparent fit in the figure, but the physical behaviour of the system is controlled by $B_2(X)$, so we would expect the resulting model to be less accurate in predicting this physical behaviour.} (One might also improve the approximate effective interaction shown in Fig.~\ref{fig:ll-ci-cphi}, for example by following the parameterisation strategy used in the intermediate regime for the lock-sphere interaction. The difficulty in this case is that the function $w^\eta(c_\phi)$ is rather expensive to estimate numerically, since such configurations are very rare in the simulations of locks without depletant. Hence our use of a piecewise linear approximation.) As a final test of our effective potential in this regime, we consider Fig.~\ref{fig:ll-eff-r-cr}(c), which is an enlarged plot of $w^\eta(r,c_r)$, concentrating on lock-and-key binding. The true free energy is shown in the upper panel, and is compared with the same free energy evaluated using the approximate effective interaction $W_{\rm app}$. The agreement is good. \subsubsection{Intermediate and back-to-back regimes} The effective potential in the back-to-back (non-specific) regime and the intermediate regime follows exactly the procedure described in Sec.~\ref{subsec:lk-exact}, except that the definition of the intermediate regime includes a constraint that $r>r_0(\theta)$, as in (\ref{equ:ineq-A}). Since the regions are defined in this way, $W_{\rm app}$ does not depend on $\theta_{\rm I},\phi$ within these areas. Of course this represents an approximation (particularly in the intermediate regime), but the probability of binding in that regime is not high, so this approximation does not have a strong impact on the resulting coarse-grained model. \begin{figure} \includegraphics[width=7cm]{fig10.pdf} \caption{Free energy $w^\eta(c_\phi|X_{{\rm MM},n})$ for mouth-to-mouth regions, analogous to the bound case shown in Fig.~\ref{fig:ll-ci-cphi}b.
Points show the free energy evaluated from simulation and solid lines show the effective potential $W_{\rm app}$. Configurations with high and low values of $\cos\phi$ are illustrated. The difference between this case and Fig.~\ref{fig:ll-ci-cphi} is that if the locks labelled 2 in these configurations were replaced by spheres then they would overlap with the locks labelled 1: this is the distinction between the mouth-to-mouth and lock-and-key binding regimes.} \label{fig:ll_mm_cp} \end{figure} \subsubsection{Mouth-to-mouth regime} The mouth-to-mouth binding regime is defined by those values of $(r,\cos\theta_{\rm R})$ that would not be possible for a lock interacting with a spherical particle, that is \begin{equation} r<r_0(\theta) \label{equ:reg-mm} \end{equation} with $r_0(\theta)$ given by (\ref{equ:r0-overlap}). These are binding modes which are impossible for a spherical key interacting with a lock, but are possible for two locks. The resulting effective interactions are strong only for small $r$, so within the mouth-to-mouth regime we take $W_{\rm app}\neq0$ only for \begin{equation} r < d_{\rm c} + q\sigma \label{equ:mm-cut} \end{equation} which is the same large-$r$ cutoff as we used for the specific (lock-and-key) binding regime. Insisting always that (\ref{equ:reg-mm}) holds we then define three regions $X_{{\rm MM},n}$ (with $n=1,2,3$) using the same distance cutoffs (\ref{equ:lk-bound-ineq}) as in the lock-and-key (specific) binding regime. The primary contribution to the resulting effective interaction arises from configurations that are very close to the bound regime. The generalisation of Fig.~\ref{fig:ll-ci-cphi}(b) for these regions is shown in Fig.~\ref{fig:ll_mm_cp}. Compared to the lock-and-key binding regime, the accessible range of $c_\phi$ is reduced, due to the excluded volume interactions. For the $c_\phi$ values that are accessible, we choose a linear dependence of the effective potential on $\phi$, as in the lock-and-key binding regime.
We fix the potential to zero at $c_\phi=-1$ and we adjust the slope of the effective potential using the method described in Appendix~\ref{app:wapp-fit}, so that the contribution of this region to the second virial coefficient matches between exact and approximate effective potentials. The potentials themselves do not agree exactly but this has little effect on the overall physics because the probability of binding in this regime is much smaller than that of regular lock-and-key binding. [In fact, the entropy associated with binding in this regime is very low, due to the strong geometrical constraints on $(r,\theta_{\rm R},\phi)$.] Finally, we note that the good agreement between the fully interacting model and the coarse-grained model in Fig.~\ref{fig:ll-eff-r-cr}(c) does depend on a suitable $\phi$-dependent parameterisation of $W_{\rm app}$ in the mouth-to-mouth regime, since the range of accessible values of $\phi$ depends strongly on $r,c_r$. In particular, using the regions $X_{{\rm MM},n}$ but neglecting the $\phi$-dependence of $W_{\rm app}$ in this regime leads to $w(r,c_r)$ depending only on $r$, which is not consistent with the true free energy. \begin{figure} \includegraphics[width=7cm]{fig11.pdf} \caption{Fraction $X_{\rm LL}$ of unoccupied lock-and-key binding sites in a system of indented colloids, as a function of the depletant volume fraction. The ``exact'' data are taken from Ref.~\onlinecite{Ashton2013}, and involve expensive simulation of lock particles interacting with a depletant. The ``coarse-grained'' data were obtained from a (much shorter) simulation of colloids interacting by the effective potential described in this work. The simulations involve $N=60$ colloids at number density $\rho=0.2\sigma^{-3}$.} \label{fig:ll_chain} \end{figure} \subsection{Verification} With these effective potentials in place, it is straightforward to simulate systems of indented colloids, interacting by the approximate effective potential $W_{\rm app}$.
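As an indication of what such a simulation involves, here is a bare-bones Metropolis sketch for particles in a periodic box. This is illustrative only: the pair potential below is a placeholder square well standing in for the tabulated $W_{\rm app}$, orientation moves and the angular dependence are omitted, and the box size is an assumption chosen to match the state point $N=60$, $\rho\approx 0.2\sigma^{-3}$ of Fig.~\ref{fig:ll_chain}.

```python
import math
import random

L = 6.7          # box length: 60 / 6.7^3 ~ 0.2 sigma^-3 (assumed)
N = 60
MAX_STEP = 0.1   # maximum translation per trial move (assumed)

def w_pair(r):
    # placeholder: hard core + square well, standing in for W_app
    if r < 1.0:
        return float("inf")
    if r < 1.1:
        return -3.0
    return 0.0

def dist_pbc(a, b):
    # minimum-image distance in the periodic box
    s = 0.0
    for x, y in zip(a, b):
        d = abs(x - y) % L
        d = min(d, L - d)
        s += d * d
    return math.sqrt(s)

def energy_of(i, pos):
    return sum(w_pair(dist_pbc(pos[i], pos[j]))
               for j in range(len(pos)) if j != i)

def sweep(pos, rng):
    """One Monte Carlo sweep of single-particle translation moves."""
    for _ in range(len(pos)):
        i = rng.randrange(len(pos))
        old, e_old = pos[i], energy_of(i, pos)
        pos[i] = tuple((c + rng.uniform(-MAX_STEP, MAX_STEP)) % L
                       for c in old)
        de = energy_of(i, pos) - e_old
        if not (de <= 0.0 or rng.random() < math.exp(-de)):
            pos[i] = old   # reject the trial move
```

Moves that would create a hard-core overlap have $\Delta E=\infty$ and are always rejected, so a non-overlapping initial condition is preserved throughout the run.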
Fig.~\ref{fig:ll_chain} shows results for this case, compared to results for the fully interacting system of colloids and depletant, from Ref.~\onlinecite{Ashton2013}. As in that work, it is convenient to measure the number of lock-and-key bonds $N_{\rm LL}$ between the colloids, and to normalize this by the total number of particles, $N$. Then we define $X_{\rm LL} = 1 - (N_{\rm LL}/N)$ which is the fraction of colloidal indentations that are not involved in any lock-key bond. Hence $X_{\rm LL}$ decreases from a value close to unity at $\eta=0$ to a value close to zero when the interactions are very strong. The agreement in Fig.~\ref{fig:ll_chain} between the exact and coarse-grained models is good. Deviations are visible for large $\eta$: we note that in this case, equilibration of simulations with depletant is challenging, and it is possible that the deviations between exact and coarse-grained models are due to a failure to equilibrate the fully-interacting system. We also note that theoretical predictions of $X_{\rm LL}$ can be obtained from the contribution to the second virial coefficient from lock-and-key bonding, using Wertheim's theory\cite{Ashton2013}, so the fact that the coarse-grained interaction matches this contribution means that agreement between exact and coarse-grained models should be expected. However, the agreement of the coarse-grained model with the exact results in Fig.~\ref{fig:ll_chain} is significantly better than the agreement with Wertheim's theory in Ref.~\onlinecite{Ashton2013}, showing that it is not sufficient just to match this second virial coefficient: a reasonably accurate description of the effective interaction is also required in order to achieve this agreement.
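For completeness, the observable itself is a one-liner once the bonds have been counted; the bond detection (deciding whether a pair of colloids sits in the lock-and-key region) is assumed to have been done already:

```python
def x_ll(n_bonds, n_colloids):
    """X_LL = 1 - N_LL/N: fraction of indentations with no lock-key bond."""
    return 1.0 - n_bonds / n_colloids

# e.g. (hypothetical numbers) N = 60 colloids with 45 lock-key bonds
assert x_ll(45, 60) == 0.25   # strongly bonded system
assert x_ll(0, 60) == 1.0     # no bonds: weak-depletant limit
```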
\section{Simplified lock-lock potential} \label{sec:simp} \newcommand{W_{\epsilon}}{W_{\epsilon}} Finally, we discuss the connection of the results of this work to the effective potential used in Ref.~\onlinecite{Ashton2015-porous}, which was developed with the aid of some of the results presented here. We refer to that interaction potential as $W_{\epsilon}$ to avoid confusion with the coarse-grained potential $W_{\rm app}$ discussed here. \subsection{Comparison with results for colloidal polymers} The potential $W_{\epsilon}$ is defined in Ref.~\onlinecite{Ashton2015-porous}: we give a brief recap here. In the back-to-back regime, $W_{\epsilon}=-\epsilon_{\rm BB}$ throughout a region defined by $\sigma<r<\sigma(1+\xi)$ and $\theta_{\rm R}>\theta^*$. The value of $\xi$ is fixed at $0.1$. \rlj{In terms of Fig.~\ref{fig:LK-schem}, this corresponds to using a single region for the whole non-specific regime, instead of two regions as in this work.} \rlj{In the intermediate regime identified in this work (recall Fig.~\ref{fig:LK-schem}), we take $W_{\epsilon}=0$, for simplicity. (The justification for this assumption is that the intermediate binding regime is rarely seen in practice since it competes with specific lock-and-key binding, which is much stronger.)} For the specific (lock-and-key) regime, $W_{\epsilon}$ is defined in terms of a single region, $r<d_c+\sigma\xi$, which includes both the specific binding and mouth-to-mouth binding regimes described here for the lock-lock interaction. \rlj{Comparing with Fig.~\ref{fig:LK-schem}, the three regions in the ``specific binding'' regime are replaced by a single region, which also includes the ``mouth-to-mouth'' regime (not shown in Fig.~\ref{fig:LK-schem}). Within this regime $W_{\epsilon}$ depends on $\phi$, in a similar way to $W_{\rm app}$, except for two simplifications.
First, the linear segment of the effective potential shown in Fig.~\ref{fig:ll-ci-cphi}b is constrained to reach zero when $\cos\phi=-1$; second, the linear segment of the effective potential in Fig.~\ref{fig:ll_mm_cp} is taken to be exactly equal to that in Fig.~\ref{fig:ll-ci-cphi}b. From inspection of Figs.~\ref{fig:ll-ci-cphi},\ref{fig:ll_mm_cp}, these additional constraints will lead to a slightly less accurate representation of the exact data. However, the key advantage is that when inferring the effective potential $W_{\epsilon}$ from data for colloids interacting with depletant, the value of $W_\epsilon$ at $\cos\phi=+1$ is a single adjustable parameter that is chosen to match $B_2(X)$ for this region. This is much simpler than inferring $W_{\rm app}$, which requires six parameters for the bound regime and three more for the mouth-to-mouth regime. } \begin{figure} \includegraphics[width=8cm]{fig12.pdf} \caption{\rlj{Data showing contributions to the second virial coefficient for specific (lock-key) and non-specific (back-to-back) binding, for a system with AO depletant with size ratio $q=0.126$, as used in Ref.~\onlinecite{Ashton2015-porous}. The behaviour is very similar to Fig.~\ref{fig:lk-b2}, although the larger $q$-value means that the relative strengths of specific and non-specific interactions are more similar and the overall strength of the interaction is lower (at fixed volume fraction $\eta$).}} \label{fig:ao-well} \end{figure} \rlj{As a result of these simplifications, $W_{\epsilon}$ depends on just three parameters: the range $\xi$ and the interaction strengths for specific and non-specific binding. Fixing $\xi=0.1$ and given a depletant at volume fraction $\eta$ with size ratio $q$, one may derive an effective potential $W_{\epsilon}$ by matching contributions to the second virial coefficient from the lock-and-key and back-to-back regions.
The relevant contributions to the second virial coefficient are shown in Fig.~\ref{fig:ao-well}, for an AO depletant with size ratio $q=0.126$ (see also the Supplemental Material of Ref.~\onlinecite{Ashton2015-porous}). Fig.~\ref{fig:ao-well} is qualitatively similar to the results for a lock-sphere system shown in Fig.~\ref{fig:lk-b2} of this work, although we note that Fig.~\ref{fig:ao-well} shows results for a \emph{lock-lock} interaction mediated by an \emph{ideal} (Asakura-Oosawa~\cite{Asakura1954}, AO) depletant. (As expected, the larger depletant used in Fig.~\ref{fig:ao-well} leads to weaker interactions compared to Fig.~\ref{fig:lk-b2}, when comparing at fixed volume fraction $\eta$. It also leads to a smaller difference in strength between specific and non-specific binding modes).} \rlj{The argument in Ref.~\onlinecite{Ashton2015-porous} is that $W_{\epsilon}$ provides a semi-quantitative model of lock-and-key colloids interacting with a depletant, and that matching the two values $B_2(X)$ shown in Fig.~\ref{fig:ao-well} allows the behaviour of a range of lock-and-key systems to be modelled using a single effective potential. Using this model revealed novel phase behaviour, including porous liquid phases~\cite{Ashton2015-porous}, with similarities to those found in patchy-particle models\cite{Bianchi2006,Bianchi2008,Ruzicka:2011ud}. 
In order to obtain accurate results for a \emph{specific} microscopic model such as the hard-sphere depletant used in this work, we expect that using the more complicated potential $W_{\rm app}$ would yield a more accurate match with the underlying microscopic model, but we would expect the observed phase behaviour to be robust, especially given the theoretical predictions (based on Wertheim's theory~\cite{Wertheim:1984zr}) that this behavior is controlled by the second virial coefficients for specific/non-specific binding~\cite{Ashton2015-porous}.} \rlj{ \subsection{Comparison between hard sphere and ideal depletant} We noted above that the behavior discussed in Ref.~\onlinecite{Ashton2015-porous} is based on a model with an ideal (AO) depletant. In that case, the (spherical) depletant particles describe polymer chains so they can overlap with each other, although they cannot overlap with the colloidal particles. This feature means that the strength of the AO interaction can be expressed geometrically in terms of the overlap between geometrical shapes. However, in contrast to the simple situation of spheres interacting with each other or with walls, analytic calculations of AO interactions between indented colloids are limited to idealised geometries~\cite{Ashton2015-wall}, although numerically exact calculations of lock-key interactions have been performed~\cite{Villegas2016}. For the full lock-lock interaction considered here, one encounters the same difficulties in the AO case as for the hard sphere depletant -- the effective interaction is a function of four variables and requires an approximate representation. We already showed results in Fig.~\ref{fig:ao-well} for the second virial coefficients based on an AO depletant, which reveal similar qualitative behaviour to Fig.~\ref{fig:lk-b2}. If we compare the effective potentials for AO and hard sphere depletants in more detail, we find the behavior is very similar. 
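The matching step that underlies both $W_{\epsilon}$ and $W_{\rm app}$ --- choosing a constant well depth so that a region's contribution to the second virial coefficient is reproduced --- can be made concrete in a few lines. The following Python sketch is ours (the function name and interface are illustrative, not taken from either paper): for a constant attractive potential $W=-\epsilon$ over a region of configuration-space volume $V(X)$, the contribution is $B_2(X)=\tfrac12 V(X)(1-\mathrm{e}^{\beta\epsilon})$, which can be inverted for the depth $\epsilon$.

```python
import math

def well_depth_from_b2(b2_contrib, region_volume, beta=1.0):
    """Depth eps > 0 of a constant attractive well W = -eps on a region X,
    chosen so that (1/2) * V(X) * (1 - exp(beta*eps)) equals b2_contrib.
    A negative b2_contrib (net attraction in region X) gives eps > 0."""
    arg = 1.0 - 2.0 * b2_contrib / region_volume
    if arg <= 1.0:
        raise ValueError("b2_contrib must be negative for an attractive well")
    return math.log(arg) / beta
```

Applied region by region, this reproduces the target $B_2(X)$ values exactly, which is the thermodynamic-consistency condition used in the text.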
Since the AO interaction is not analytically tractable in this system, a full analysis of these differences would require the calculation shown here to be repeated for the AO case, which is beyond the scope of this paper. However, we note that the main difference between AO and hard-sphere depletant is the layering effect of the hard-sphere depletant near the colloid surfaces, which leads to the repulsive interactions between colloids that are apparent for intermediate separations in Fig.~\ref{fig:LK-rattle-eff}. A similar effect is present in Figs.~\ref{fig:LK-eff}a and \ref{fig:ll-eff-r-cr}a but is weaker in those cases and hence not so visible in the plots. In all cases, this effect has been neglected in the effective potential, for simplicity. In this sense, the effective potential that we describe in this work is also a rather accurate model for a system with an ideal depletant. } \section{Conclusions} \label{sec:conc} We have presented a general technique for obtaining coarse-grained effective potentials that approximate the interactions between anisotropic colloids, immersed in a depletant. The method takes data for the joint probability distribution of the relative positions and orientations of a pair of colloids, obtained from simulations that include the depletant explicitly. These data were obtained in this case by the geometrical cluster algorithm, although other methods can also be used. These data are then used to derive an approximate depletion potential which is a piecewise-constant function of the relative positions and orientations of the colloids. Deriving this potential requires a decomposition of the two-particle configuration space, which is chosen according to physical reasoning. This decomposition fully specifies the effective potential, which can then be inferred automatically from the simulation data, by matching the second virial coefficients between the full and approximate effective potentials in each region.
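For the piecewise-constant regions, this matching reduces to reading off a single number per region. A minimal Python sketch, in our own notation (dictionaries keyed by region label; `z_ratio` stands for the partition-function ratio $Z_2^\eta/Z_2^0$ that appears in the matching condition derived in the Appendix):

```python
import math

def infer_piecewise_W(P_eta, P_zero, z_ratio):
    """Piecewise-constant effective potential (in units of kT) per region X,
    from two-particle occupation probabilities measured with (P_eta) and
    without (P_zero) depletant, and the partition-function ratio Z2_eta/Z2_0.
    Matching B2 region by region gives exp(-W(X)) = z_ratio * P_eta/P_zero."""
    return {X: -math.log(z_ratio * P_eta[X] / P_zero[X]) for X in P_eta}
```

Regions that become more populated in the presence of depletant acquire negative (attractive) values of $W$, and vice versa.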
This method thereby ensures thermodynamic consistency at the level of the free energy of bonding. We have illustrated our approach for lock and key colloids, showing how one decomposes the domain of position and orientation into appropriate regions and implements the matching strategy within each. The resulting depletion potentials are not simple because describing the relative position and orientation of anisotropic particles requires several co-ordinates (for example, the interaction potential between uniaxial particles depends on one distance and three angles). Nevertheless, the accurate effective potentials that we derive allow quantitative agreement with fully interacting systems of many colloids. This was tested via a comparison of the self assembly of indented colloids into chains, for which the depletion potential gave excellent agreement with a GCA simulation of the full systems of hard indented colloids plus depletant (Fig.~\ref{fig:ll_chain}), but at a fraction of the computational cost. There are many instances in which colloidal anisotropy is expected to lead to interesting self assembly behaviour controlled by depletion. By its entropic nature, the interaction is strongest between surfaces with complementary shapes. However, the overall scale of the depletion interaction and the ratio of the strength of specific to non-specific interactions can be controlled by changing the volume fraction of depletant and the depletant-colloid size ratio. This makes depletion a versatile interaction for the control of self assembly. With the increasing ability to create colloids with a wide variety of shapes, it is becoming practical to use depletion to assemble these building blocks into designer structures~\cite{Anders:2014la,Ashton2015-porous,Rossi:2015aa}.
If simulation is to keep up with these advances and provide predictions as to the types of assembled structures that might occur, it will be necessary to have reliable coarse-graining strategies for describing the effective interactions. Our method should be of use here and it would be interesting to apply it to other anisotropic colloids\cite{Rossi:2015aa,Rossi:2011qd,Sacanna:2013aa,Mukhija:2011aa,Dugyala:2013aa}. \begin{acknowledgments} We thank the Engineering and Physical Sciences Research Council (EPSRC) for funding through grant EP/I036192/1. \end{acknowledgments} \begin{appendix} \section{Deriving potentials $W_{\rm app}$ that are not piecewise constant} \label{app:wapp-fit} In Sec.~\ref{subsec:ll-specific}, we explained that the specific interaction between lock particles is described by a piecewise linear effective potential. Since this potential is not piecewise constant, its values cannot be inferred using (\ref{equ:Wapp-P}) and so a different method is required. To explain this procedure in a general way, we consider an effective potential $W_{\rm app}$ that depends on just one co-ordinate (in the case of Fig.~\ref{fig:ll-ci-cphi}b, this co-ordinate is $c_\phi$) and we describe the effective potential by a parameter $y$ (in this case $y$ is the intercept of the effective potential at $c_\phi=-1$). Our aim is to find the value of $y$ such that $B_2^{\rm app}(X)$ defined in (\ref{equ:B2app}) matches the second virial contribution $B_2^\eta(X)$ defined in (\ref{equ:B2X}). Recall that $p^\eta(c_\phi|X)$ is the unnormalised distribution of $c_\phi$ within region $X$, defined by analogy with (\ref{equ:pcicp}). (Note that this distribution depends on the system size $V$ through the partition function $Z_2^\eta$.) 
It is convenient to write \begin{align} B_2^\eta(X) & = \tfrac12 \int_X (1-\mathrm{e}^{-\beta v_{\rm eff}} ) \mathrm{d}\bm{R}' \mathrm{d}\Omega' \nonumber \\ & = \tfrac12 V(X) - \tfrac{1}{2V} \int_X \mathrm{e}^{-\beta v_{\rm eff}} \mathrm{d}\bm{R}\, \mathrm{d}\Omega\, \mathrm{d}\bm{R}' \mathrm{d}\Omega' \nonumber \\ & = \tfrac12 V(X) - (Z_2^\eta/V) \int p^\eta(c_\phi|X) \mathrm{d}c_\phi \nonumber \\ & = \tfrac12 V(X) - Z_2^\eta P_2^\eta(X) / V \end{align} Similarly \begin{align} B_2^{\rm app}(X) & = \tfrac12 \int_X (1-\mathrm{e}^{-\beta v_{\rm app}} ) \mathrm{d}\bm{R}' \mathrm{d}\Omega' \nonumber \\ & = \tfrac12 V(X) - \tfrac{1}{2V} \int_X \mathrm{e}^{-\beta v_{\rm app}} \mathrm{d}\bm{R}\, \mathrm{d}\Omega\, \mathrm{d}\bm{R}' \mathrm{d}\Omega' \nonumber \\ & = \tfrac12 V(X) - (Z_2^0/V) \int_X p^0(c_\phi|X) \mathrm{e}^{-W_{\rm app}(c_\phi|X)} \mathrm{d}c_\phi \end{align} where $W_{\rm app}(c_\phi|X)$ is the ($c_\phi$-dependent) approximate effective potential in region $X$ (which depends on the parameter $y$). Noting from (\ref{equ:P2B2}) that $P_2^0(X)=\int p^0(c_\phi|X) \mathrm{d}c_\phi$, we define a normalised probability distribution for $c_\phi$ (within region $X$) as $\tilde p^0(c_\phi|X) = p^0(c_\phi|X) / P_2^0(X)$. Finally, enforcing the constraint that $B_2^\eta(X)=B_2^{\rm app}(X)$ we obtain \begin{equation} \int_X {\tilde p}^0(c_\phi|X) \mathrm{e}^{-W_{\rm app}(c_\phi|X)} \mathrm{d}c_\phi = \frac{P_2^\eta(X)}{P_2^0(X) } \cdot \frac{Z_2^\eta}{Z_2^0} . \end{equation} The right hand side of this equation is the same quantity that appears in (\ref{equ:Wapp-P}) and can be calculated from simulation data. Calculation of the left-hand side from simulation data requires sufficient data to estimate (as a histogram) the distribution $\tilde p^0(c_\phi|X)$. With these data in hand, the left hand side can then be calculated as an average with respect to this distribution, which depends on the parameter $y$.
A simple search over values of the parameter $y$ then yields an effective potential for which $B_2^\eta(X)=B_2^{\rm app}(X)$. \end{appendix}
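A minimal Python sketch of this one-parameter search, assuming a trial potential that interpolates linearly in $c_\phi$ between a known value $w_{\rm min}$ at $c_\phi=+1$ and the unknown intercept $y$ at $c_\phi=-1$; the function name, the sampled representation of $\tilde p^0$, and the bracketing interval are our own choices:

```python
import numpy as np

def match_linear_segment(c_phi, target, w_min, tol=1e-10):
    """Bisect on the intercept y at c_phi = -1 of the linear potential
    W(c) = w_min*(1+c)/2 + y*(1-c)/2, so that the average of exp(-W)
    over the sampled c_phi values equals the target B2-matching ratio."""
    c = np.asarray(c_phi, dtype=float)

    def avg(y):
        return np.mean(np.exp(-(w_min * (1.0 + c) / 2.0
                                + y * (1.0 - c) / 2.0)))

    lo, hi = -50.0, 50.0          # avg is decreasing in y on this bracket
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if avg(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Since the average of $\mathrm{e}^{-W}$ is monotone in $y$, simple bisection converges; in practice the samples would be the histogram of $c_\phi$ within the region $X$.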
\section{\protect\bigskip Introduction} Over the last years, wavelet techniques have been used to achieve remarkable results in the field of statistics, in particular in the framework of minimax estimation in nonparametric settings. The pioneering work in this area was provided by Donoho et al. in \cite{donoho1}, where the authors proved that nonlinear wavelet estimators based on thresholding techniques attain nearly optimal minimax rates, up to logarithmic terms, for a large class of unknown density and regression functions. Since then, this research area has been deeply investigated and extended - we suggest for instance \cite{WASA} as a textbook reference: specifically, we focus on the wavelet block thresholding procedure, among the other techniques. Loosely speaking, this method keeps or annihilates blocks of wavelet coefficients on each given level (for more details, see \cite{WASA}), hence representing an intermediate way between local and global thresholding, which fix a threshold for each single coefficient and for the whole set of coefficients, respectively. This procedure, initially suggested in \cite{efroim} for orthogonal series based estimators and later applied by \cite{hkp} for both wavelet and kernel density estimation on $\mathbb{R}$ (see also \cite{hkp2}), was used in \cite{caibloc} jointly with Oracle inequalities; overlapping block thresholding estimators were studied in \cite{caisilver}. The block thresholding was also applied to study adaptivity in density estimation in \cite{chickencai}, a data-driven block thresholding procedure for wavelet regression is instead investigated in \cite{caizhou}, while wavelet-based block thresholding rules on maxisets are proposed by \cite{autin}.
\newline Even if a huge number of results concerns estimation with the thresholding paradigm in standard Euclidean frameworks, such as $\mathbb{R}$ or $\mathbb{R}^{n}$, more recent applications are being established in more general settings, such as spherical data or more general manifolds. In particular, we focus on a highly successful construction of a second-generation wavelet system on the sphere, the so-called needlets. The needlets were introduced by Narcowich, Petrushev and Ward in \cite{npw1}, \cite{npw2}; their stochastic properties, when exploited on spherical random fields, were studied in \cite{bkmpAoS}, \cite{bkmpBer}, \cite{ejslan} and \cite{spalan}. This approach has been extended to more general manifolds by \cite{gm1}, \cite{gm2}, \cite{gm3}, while their generalization to spin fiber bundles on the sphere was described in \cite{gelmar}, \cite{gelmar2010}. Much of this research can be motivated in view of applications to Cosmology and Astrophysics: for instance, a huge amount of spherical data, concerning the Cosmic Microwave Background radiation, are being provided by the satellite missions WMAP and Planck, see \cite{pbm06}, \cite{mpbb08}, \cite{pietrobon1}, \cite{fay08}, \cite{pietrobon2}, \cite{rudjord1}, \cite{dela08}, \cite{rudjord2}, \cite{dlm1} and \cite{dlm2} for more details. The applications mentioned here, however, do not concern thresholding estimation, but rather they can be related to the study of random fields on the sphere, such as angular power spectrum estimation, higher-order spectra, testing for Gaussianity and isotropy, and several others (see also \cite{cama}). As another example, we mention experiments concerning incoming directions of Ultra High Energy Cosmic Rays, such as the AUGER Observatory (http://www.auger.org). The Ultra-High Energy Cosmic Rays are particles with energy above $10^{18}$ eV reaching the Earth.
Even if they were discovered almost a century ago, their origin and their mechanisms of acceleration and propagation are still unknown. As described in \cite{bkmpAoSb}, see also \cite{faytest}, an efficient nonparametric estimation of the density function of these data would explain the origin of the High Energy Cosmic Rays: if it is uniform, they are generated by cosmological effects, such as the decay of the massive particles generated during the Big Bang; if, on the other hand, it is highly non-uniform and, moreover, strongly correlated with the local distribution of nearby Galaxies, it means that the Cosmic Rays are generated by astrophysical phenomena, as for instance acceleration into Active Galactic Nuclei. Massive amounts of data in this area are expected to be available in the next few years. Also in view of this application, the needlet approach was recently applied within the thresholding paradigm to the estimation of directional data: the seminal contribution in this field is due to \cite{bkmpAoSb}, see also \cite{Kerkypicard}, \cite{knp}, while applications to astrophysical data are still under way, see for instance \cite{fay08}, \cite{faytest} and \cite{Iuppa}. Minimax estimators for spherical data, outside the needlets approach, were also studied by Kim and coauthors (see \cite{kim}, \cite{kimkoo}, \cite{kookim}). Furthermore, adaptive nonparametric regression estimators of spin-functions, based on the spin pure and mixed needlets defined in \cite{gelmar}, \cite{gelmar2010}, were investigated in \cite{dgm}. In this case, the needlet nonparametric regression estimators were built on spin fiber bundles on the sphere, i.e. the function to be estimated takes as its values not scalars but algebraic curves living on the tangent plane at each point of the sphere. This work, hence, extends the results established in \cite{bkmpAoSb} and \cite{dgm} towards the needlet block thresholding procedure following two main directions.
First of all, we will suggest a construction of blocks of needlet coefficients, exploiting the Voronoi cells based on the geodesic distance on the sphere. Then, we will define the needlet block thresholding estimator, for which we will establish an optimal convergence rate. In view of this aim, we will use both the needlet properties established in \cite{npw1}, \cite{npw2} (see also \cite{marpecbook}) and a set of well consolidated standard techniques, introduced by \cite{donoho1} (see also \cite{WASA}), remarking that this approach has also been applied in the needlet framework to local thresholding by \cite{bkmpAoSb} and \cite{dgm}. Section \ref{sec:need} will recall some preliminary notions, such as needlets, their main properties and the Besov spaces. Section \ref{sec:block} will describe the block thresholding procedure we suggest for needlet regression estimation, while Section \ref{sec:minimax} will present the main minimax results. Section \ref{sec:aux} will collect some useful auxiliary results, while Section \ref{sec:proof} will present the proof of the main result of this work, Theorem \ref{maintheorem}. \section{Background results \label{sec:need}} In this Section, we will review briefly a few well-known features of the Voronoi cells on the sphere, the spherical needlet construction and the Besov spaces. For what concerns Voronoi cells, we follow closely \cite{bkmpBer}: further details can be found for instance in the textbook \cite{marpecbook}, see also \cite{bkmpAoS} and \cite{npw2}. From now on, given two positive sequences $\left\{ a_{j}\right\} $ and $\left\{ b_{j}\right\} $, we write $a_{j}\approx b_{j}$ if there exists a constant $c>0$ so that $c^{-1}a_{j}\leq b_{j}\leq ca_{j}$ for all $j$.
Furthermore, $B_{x_{0}}\left( \alpha \right) =\left\{ x\in \mathbb{S}^{2}:d\left( x,x_{0}\right) <\alpha \right\} $ and $\overline{B}_{x_{0}}\left( \alpha \right) =\left\{ x\in \mathbb{S}^{2}:d\left( x,x_{0}\right) \leq \alpha \right\} $ denote respectively standard open and closed balls on $\mathbb{S}^{2}$ around $x_{0}\in \mathbb{S}^{2}$, while $\left\vert A\right\vert $ is the spherical measure of a general subset $A\subset \mathbb{S}^{2}$. Given $\varepsilon >0$, the set $\Xi _{\varepsilon }=\left\{ x_{1},...,x_{N}\right\} $ of points on $\mathbb{S}^{2}$, such that for $i\neq j$ we have $d\left( x_{i},x_{j}\right) >\varepsilon $, is called a \emph{maximal $\varepsilon $-net} if it satisfies $d\left( x,\Xi _{\varepsilon }\right) <\varepsilon $ for $x\in \mathbb{S}^{2}$, $\cup _{x_{i}\in \Xi _{\varepsilon }}B_{x_{i}}\left( \varepsilon \right) =\mathbb{S}^{2}$ and $B_{x_{i}}\left( \varepsilon /2\right) \cap B_{x_{j}}\left( \varepsilon /2\right) =\varnothing $ for $i\neq j$. For all $x_{i}\in \Xi _{\varepsilon }$, a family of Voronoi cells is defined as \begin{equation} \mathcal{V}\left( x_{i}\right) =\left\{ x\in \mathbb{S}^{2}:\text{for }j\neq i,\text{ }d\left( x,x_{i}\right) <d\left( x,x_{j}\right) \right\} \text{.} \label{voronoi} \end{equation} In \cite{bkmpBer} it is proved that \begin{equation*} B_{x_{i}}\left( \frac{\varepsilon }{2}\right) \subset \mathcal{V}\left( x_{i}\right) \subset B_{x_{i}}\left( \varepsilon \right) \text{ .} \end{equation*} We now summarize some features of the scalar needlet construction, referring to \cite{npw1}, \cite{npw2} for a more detailed discussion, see also \cite{bkmpAoSb} and \cite{marpecbook}. A needlet system describes a well-localized tight frame on the sphere: it is a well-known fact (cf. \cite{npw1}) that any function belonging to $L^{2}\left( \mathbb{S}^{2}\right) $ can be represented as a linear combination of the components of that frame, preserving furthermore some fundamental properties of needlets.
Indeed, let us recall that the space $L_{2}\left( \mathbb{S}^{2}\right) $ of square-integrable functions on the sphere can be decomposed as the direct sum of the spaces $H_{l}$ of harmonic polynomials of degree $l$, spanned by the spherical harmonics $\left\{ Y_{lm}\right\} _{m=-l}^{l}$, whose definition and properties can be found in \cite{vmk} and \cite{bkmpAoSb}. If we consider \begin{equation*} \Pi _{l}=\underset{l^{\prime }=0}{\overset{l}{\bigoplus }}H_{l^{\prime }}\text{,} \end{equation*} the space of the restrictions to $\mathbb{S}^{2}$ of the polynomials of degree less than or equal to $l$, the following quadrature formula holds (see for instance \cite{bkmpAoSb}): given $l\in \mathbb{N}$, there exists a finite subset $\chi _{l}$ such that a positive real number $\lambda _{\xi }$ (the cubature weight) corresponds to each $\xi \in \chi _{l}$ (the cubature point) and, for all $f\in \Pi _{l}$, \begin{equation*} \int_{\mathbb{S}^{2}}f\left( x\right) dx=\underset{\xi \in \chi _{l}}{\sum }\lambda _{\xi }f\left( \xi \right) \text{.} \end{equation*} Given $B>1$ and a resolution level $j$, we call $\chi _{\left[ B^{2\left( j+1\right) }\right] }=\mathcal{Z}_{j}$, $card\left( \mathcal{Z}_{j}\right) =N_{j}$; from now on, any element of the set of cubature points and weights, $\left\{ \xi _{jk},\lambda _{jk}\right\} $, will be indexed by $j$, the resolution level, and $k$, the cardinality over $j$, belonging to $\mathcal{Z}_{j}$.
Furthermore, we choose $\left\{ \mathcal{Z}_{j}\right\} _{j\geq 1}$ to be nested, so that \begin{equation} N_{j}\approx B^{2j},\quad \lambda _{jk}\approx B^{-2j}\text{ .} \label{lambdaeN} \end{equation} We consider a symmetric, real-valued, non-negative function $b\left( \cdot \right) $ (see again \cite{bkmpAoSb}) such that \begin{enumerate} \item it has compact support on $\left[ B^{-1},B\right] $; \item $b\in C^{\infty }\left( \mathbb{R} \right) $; \item the following \textit{unitary property} holds for $\left\vert \xi \right\vert \geq 1$: \end{enumerate} \begin{equation*} \underset{j\geq 0}{\sum }b^{2}\left( \frac{\xi }{B^{j}}\right) =1\text{ .} \end{equation*} For each $\xi _{jk}\in \mathcal{Z}_{j}$, given $b\left( \cdot \right) $ and $B$, scalar needlets are defined as \begin{equation*} \psi _{jk}\left( x\right) =\sqrt{\lambda _{jk}}\underset{B^{j-1}<l<B^{j+1}}{\sum }b\left( \frac{l}{B^{j}}\right) L_{l}\left( \left\langle x,\xi _{jk}\right\rangle \right) \text{ ,} \end{equation*} where $L_{l}\left( \left\langle x,y\right\rangle \right) =\sum_{m=-l}^{l}Y_{lm}\left( x\right) \overline{Y}_{lm}\left( y\right) $; loosely speaking, the needlet is a weighted convolution of the projection operator $L_{l}\left( \left\langle x,y\right\rangle \right) $. The properties of the function $b\left( \cdot \right) $ reflect three basic features of the needlets. Indeed, from the infinite differentiability of $b\left( \cdot \right) $, we have a quasi-exponential localization property (see for instance \cite{npw2}), which states that for $k\in \mathbb{N}$, there exists $c_{k}$ such that for $x\in \mathbb{S}^{2}$ \begin{equation} \left\vert \psi _{jk}\left( x\right) \right\vert \leq \frac{c_{k}B^{j}}{\left( 1+B^{j}d\left( \xi _{jk},x\right) \right) ^{k}}\text{,} \label{localization} \end{equation} where $d\left( \xi _{jk},x\right) $ is the geodesic distance on the sphere.
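A function $b(\cdot)$ with these three properties can be constructed explicitly from a $C^{\infty}$ bump. The following Python sketch is our own minimal implementation of this standard recipe (the smooth step is obtained by numerical integration on a grid, so the construction is numerical rather than symbolic, but the unitary property holds by telescoping):

```python
import numpy as np

def needlet_b(xi, B=2.0, npts=2000):
    """Needlet window with b^2(t) = phi(t/B) - phi(t), where phi is a
    smooth step equal to 1 on [0, 1/B] and 0 on [1, infinity)."""
    grid = np.linspace(-1.0, 1.0, npts)
    # C-infinity bump exp(-1/(1-t^2)) on (-1, 1), zero outside
    bump = np.where(np.abs(grid) < 1.0,
                    np.exp(-1.0 / np.clip(1.0 - grid ** 2, 1e-300, None)),
                    0.0)
    step = np.cumsum(bump)
    step /= step[-1]                      # smooth CDF: 0 at -1, 1 at +1

    def phi(t):
        t = np.atleast_1d(np.asarray(t, dtype=float))
        # map [1/B, 1] onto [1, -1] and evaluate the smooth step there
        u = 1.0 - 2.0 * B / (B - 1.0) * (t - 1.0 / B)
        out = np.interp(u, grid, step)
        out[t <= 1.0 / B] = 1.0
        out[t >= 1.0] = 0.0
        return out

    xi = np.atleast_1d(np.asarray(xi, dtype=float))
    return np.sqrt(np.clip(phi(xi / B) - phi(xi), 0.0, None))
```

Since $\sum_{j\geq 0} b^{2}(\xi/B^{j})$ telescopes to $1-\phi(\xi)$ in the limit, the unitary property can be verified numerically for any $|\xi|\geq 1$, and the support of $b$ is $(B^{-1},B)$ as required.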
In view of this property, it is possible to fix upper and lower bounds for the norms of needlets on $L^{p}\left( \mathbb{S}^{2}\right) $, for $1\leq p\leq +\infty $. Given $p$, there exist two positive constants $c_{p}$ and $C_{p}$ such that \begin{equation} c_{p}B^{j\left( 1-\frac{2}{p}\right) }\leq \left\Vert \psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }\leq C_{p}B^{j\left( 1-\frac{2}{p}\right) }\text{ .} \label{boundnorm} \end{equation} Because the function $b\left( \cdot \right) $ has compact support in $\left[ B^{-1},B\right] $, it follows that $b\left( \frac{l}{B^{j}}\right) $ has compact support in $\left[ B^{j-1},B^{j+1}\right] $, hence needlets have compact support in the harmonic domain. Finally, the unitary property leads to the following reconstruction formula (see again \cite{npw1}): for $f\in L^{2}\left( \mathbb{S}^{2}\right) $, in the $L^{2}$ sense, \begin{equation} f(x)=\sum_{j,k}\beta _{jk}\psi _{jk}(x)\text{ ,} \label{reconst} \end{equation} \begin{equation} \beta _{jk}:=\left\langle f,\psi _{jk}\right\rangle _{L_{2}\left( \mathbb{S}^{2}\right) }=\int_{\mathbb{S}^{2}}\overline{\psi }_{jk}\left( x\right) f\left( x\right) dx\text{ ,} \label{needcoeffic} \end{equation} where $\beta _{jk}$ are the so-called needlet coefficients. Before concluding this Section, we recall the definition and some main properties of the Besov spaces, referring again to \cite{bkmpAoSb}, \cite{dgm} and \cite{WASA} for further theoretical details and discussions. Let $f\in L_{\pi }\left( \mathbb{S}^{2}\right) $; we define \begin{equation*} G_{k}\left( f,\pi \right) =\inf_{H\in \mathcal{H}_{k}}\left\Vert f-H\right\Vert _{L_{\pi }\left( \mathbb{S}^{2}\right) }\text{ ,} \end{equation*} which is the approximation error when replacing $f$ by an element in $\mathcal{H}_{k}$.
The Besov space $\mathcal{B}_{\pi q}^{r}$ is therefore defined as the space of functions $f\in L_{\pi }\left( \mathbb{S}^{2}\right) $ such that \begin{equation*} \left( \sum_{k=0}^{\infty }\frac{1}{k}\left( k^{r}G_{k}\left( f,\pi \right) \right) ^{q}\right) <\infty \text{ .} \end{equation*} The last condition is equivalent to \begin{equation*} \left( \sum_{j=0}^{\infty }\left( B^{jr}G_{B^{j}}\left( f,\pi \right) \right) ^{q}\right) <\infty \text{ .} \end{equation*} Moreover, $f\in \mathcal{B}_{\pi q}^{r}$ if and only if, for every $j=1,2,\ldots $, \begin{equation*} \left( \sum_{k}\left( \left\vert \beta _{jk}\right\vert \left\Vert \psi _{jk}\right\Vert _{L_{\pi }\left( \mathbb{S}^{2}\right) }\right) ^{\pi }\right) ^{\frac{1}{\pi }}=\varepsilon _{j}B^{-jr}\text{ ,} \end{equation*} where $\varepsilon _{j}\in \ell _{q}$ and $B>1$. The Besov norm is defined as follows: \begin{equation*} \left\Vert f\right\Vert _{\mathcal{B}_{\pi q}^{r}}=\left\{ \begin{matrix} \left\Vert f\right\Vert _{L_{\pi }\left( \mathbb{S}^{2}\right) }+\left[ \sum_{j}B^{jq\left( r+\frac{1}{2}-\frac{1}{\pi }\right) }\left\{ \sum_{k}\left\vert \beta _{jk}\right\vert ^{\pi }\right\} ^{\frac{q}{\pi }}\right] ^{\frac{1}{q}} & \text{ \ }q<\infty \\ \left\Vert f\right\Vert _{L_{\pi }\left( \mathbb{S}^{2}\right) }+\underset{j}{\sup }B^{j\left( r+\frac{1}{2}-\frac{1}{\pi }\right) }\left\Vert \left( \beta _{jk}\right) _{k}\right\Vert {_{\ell _{\pi }}} & \text{ \ }q=\infty \end{matrix}\right. \text{.} \end{equation*} As shown for instance in \cite{bkmpAoSb}, if $\max \left( 0,1/\pi -1/q\right) <r$ and $\pi ,q>1$, then we have \begin{equation*} f\in \mathcal{B}_{\pi q}^{r}\Leftrightarrow \left\Vert f\right\Vert _{\mathcal{B}_{\pi q}^{r}}<\infty \text{ .} \end{equation*} The Besov spaces present, among their properties, some embeddings which will be pivotal in our proofs below.
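For $q<\infty $, the sequence part of this norm translates directly into code. In the following Python sketch (our own layout: coefficients stored per level in a dictionary), the weight $B^{j(r+1/2-1/\pi)}$ multiplies the $\ell_{\pi}$ norm of the level-$j$ coefficients:

```python
import numpy as np

def besov_seq_norm(beta, r, pi, q, B=2.0):
    """Sequence part of the Besov B^{r}_{pi q} norm (q < infinity) from
    needlet coefficients beta = {j: array of beta_{jk} over k}."""
    total = 0.0
    for j, beta_j in beta.items():
        # l_pi norm of the coefficients at resolution level j
        lpi = np.sum(np.abs(np.asarray(beta_j, dtype=float)) ** pi) ** (1.0 / pi)
        total += (B ** (j * (r + 0.5 - 1.0 / pi)) * lpi) ** q
    return total ** (1.0 / q)
```

Membership of a Besov ball then amounts to a uniform bound on this quantity (plus the $L_{\pi}$ norm of $f$ itself).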
As proven in \cite{bkmpAoSb} and \cite{dgm}, we have that, for $\pi _{1}\leq \pi _{2}$, $q_{1}\leq q_{2}$, \begin{equation} \mathcal{B}_{\pi q_{1}}^{r}\subset \mathcal{B}_{\pi q_{2}}^{r}\text{ },\text{ }\mathcal{B}_{\pi _{2}q}^{r}\subset \mathcal{B}_{\pi _{1}q}^{r}\text{ , }\mathcal{B}_{\pi _{1}q}^{r}\subset \mathcal{B}_{\pi _{2}q}^{r-\frac{1}{\pi _{1}}+\frac{1}{\pi _{2}}}. \label{embeddings} \end{equation} \section{Needlet Block Thresholding on the Sphere\label{sec:block}} In this Section we will present needlet estimators for nonparametric regression problems; we will then suggest a procedure to fix blocks for any given resolution level $j$ and, consequently, we will define the so-called needlet block thresholding estimator. The first step is close to the one described in \cite{bkmpAoSb}, \cite{dgm} for local thresholding, the other one being an adaptation to the sphere of the procedure developed for $\mathbb{R}$ in \cite{hkp}, \cite{hkp2}, see also \cite{WASA}. In order to introduce the nonparametric regression estimator, let us initially define the so-called uncentered isonormal Gaussian process with mean $f$ (see ********).
More precisely, we assume to have a family of Gaussian variables such that for all $h_{1},h_{2}\in \mathfrak{H}$, $\left\{ X(h_{1}),X(h_{2})\right\} $ are jointly Gaussian with mean \begin{equation*} \mathbb{E}X(h)=\left\langle h,f\right\rangle =\int_{S^{2}}f(x)h(x)dx \end{equation*} and covariance \begin{equation*} \mathbb{E}\left( X(h_{1})-\mathbb{E}X(h_{1})\right) \left( X(h_{2})-\mathbb{E}X(h_{2})\right) =\left\langle h_{1},h_{2}\right\rangle _{L^{2}(S^{2})}\text{ .} \end{equation*} We shall in fact be concerned with sequences $\left\{ X_{n}\right\} $ of such processes, where we assume that \begin{equation*} \mathbb{E}X_{n}(h)=\left\langle h,f\right\rangle =\int_{S^{2}}f(x)h(x)dx \end{equation*} and covariance \begin{equation*} \mathbb{E}\left( X_{n}(h_{1})-\mathbb{E}X_{n}(h_{1})\right) \left( X_{n}(h_{2})-\mathbb{E}X_{n}(h_{2})\right) =\frac{1}{n}\left\langle h_{1},h_{2}\right\rangle _{L^{2}(S^{2})}\text{ .} \end{equation*} Consider now the usual needlet system $\left\{ \psi _{jk}\right\} _{j,k}$ and let $f\in L^{p}(S^{2})$; we have the following: \begin{eqnarray} \beta _{jk} &=&\mathbb{E}X_{n}(\psi _{jk})=\left\langle \psi _{jk},f\right\rangle =\int_{S^{2}}f(x)\psi _{jk}(x)dx\text{ ,} \notag \\ \widehat{\beta }_{jk} &=&X_{n}(\psi _{jk})=\beta _{jk}+\varepsilon _{jk;n}\text{ , } \label{needcoeff2} \end{eqnarray} where \begin{eqnarray} \mathbb{E}\varepsilon _{jk;n} &=&\mathbb{E}\left( X_{n}(\psi _{jk})-\mathbb{E}X_{n}(\psi _{jk})\right) =0\text{ , } \notag \\ \mathbb{E}\varepsilon _{jk;n}^{2} &=&\frac{1}{n}\left\langle \psi _{jk},\psi _{jk}\right\rangle _{L^{2}(S^{2})}=\frac{1}{n}\left\Vert \psi _{jk}\right\Vert _{L^{2}(S^{2})}^{2}\text{ , } \label{varnoise} \\ \mathbb{E}\varepsilon _{jk_{1};n}\varepsilon _{jk_{2};n} &=&\frac{1}{n}\left\langle \psi _{jk_{1}},\psi _{jk_{2}}\right\rangle _{L^{2}(S^{2})} \notag \\ &=&\frac{1}{n}\frac{\sum_{l}b^{2}(\frac{l}{B^{j}})\frac{2l+1}{4\pi }P_{l}(\left\langle \xi _{jk_{1}},\xi _{jk_{2}}\right\rangle )}{\sum_{l}b^{2}(\frac{l}{B^{j}})\frac{2l+1}{4\pi }}\text{ .} \notag \end{eqnarray} In a formal sense, one could consider the Gaussian white noise measure on the sphere such that for all $A,B\subset S^{2}$, we have \begin{equation*} \mathbb{E}W(A)W(B)=\int_{A\cap B}dx\text{ ,} \end{equation*} so that \begin{equation*} \varepsilon _{jk;n}=\frac{1}{\sqrt{n}}\int_{S^{2}}\psi _{jk}(x)W(dx)\text{ .} \end{equation*} As described above (see also \cite{bkmpAoSb}, \cite{dgm}), $f$ can be described in terms of needlet coefficients, up to a constant, as \begin{equation*} f=\sum_{j\geq 0}\sum_{k=1}^{N_{j}}\beta _{jk}\psi _{jk}\text{ .} \end{equation*} Let us now define the blocks on which we will apply the thresholding procedure: as anticipated in the Introduction, differently from \cite{hkp}, the structure itself of the needlet framework suggests a natural way to build them. Let us fix $j>0$: recall that for each resolution level $j$, we have $N_{j}\approx B^{2j}$ cubature points. Given the size of the blocks, i.e. the number of cubature points belonging to each of them - let us say $\ell _{j}$ - we will build, using (\ref{voronoi}), a set of Voronoi cells, each containing $\ell _{j}$ cubature points. For each cell, we choose a cubature point $\xi _{js}$ to index it: we define $S_{j}\left( \ell _{j}\right) $ as the number of Voronoi cells obtained by splitting the cubature points into groups of cardinality $\ell _{j}$. Let us define the set \begin{equation*} R_{j;s}=\left\{ k:\xi _{jk}\in \mathcal{V}\left( \xi _{js}\right) \right\} \text{ , }s=1,...,S_{j}. \end{equation*} From (\ref{voronoi}), it is immediate to see that each cubature point $\xi _{jk}$ belongs to a unique Voronoi cell.
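The grouping of cubature points into Voronoi blocks amounts to a nearest-centre search under the geodesic distance. A minimal Python sketch (function name and data layout are our own; points and centres are unit vectors in $\mathbb{R}^{3}$, and since $d(x,y)=\arccos \left\langle x,y\right\rangle $ is decreasing in the inner product, maximising the dot product is equivalent to minimising the geodesic distance):

```python
import numpy as np

def voronoi_blocks(points, centres):
    """Partition cubature points (rows: unit vectors in R^3) into the
    Voronoi cells of the given centres under geodesic distance on S^2.
    Returns a dict {s: indices of the points lying in cell s}."""
    # nearest centre by geodesic distance == largest dot product
    labels = np.argmax(points @ centres.T, axis=1)
    return {s: np.flatnonzero(labels == s) for s in range(len(centres))}
```

In the construction of the text, the centres would be the chosen representatives $\xi_{js}$ and the resulting index sets play the role of the $R_{j;s}$.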
We choose $\ell _{j}$ such that
\begin{equation*}
\ell _{j}=\left[ N_{j}^{\eta }\right] \approx N_{j}^{\eta }\text{ ,}
\end{equation*}
where $\left[ \cdot \right] $ denotes the integer part and $0<\eta <1$, so that
\begin{equation*}
S_{j}=\frac{N_{j}}{\ell _{j}}\approx \left( B^{2j}\right) ^{1-\eta }\text{ .}
\end{equation*}
We finally define, for any integer $p\geq 1$,
\begin{equation*}
A_{js;p}:=\frac{1}{\ell _{j}}\sum_{k\in R_{j;s}}\beta _{jk}^{p}\text{ ,}
\end{equation*}
and its corresponding estimator
\begin{equation*}
\widehat{A}_{js;p}=\frac{1}{\ell _{j}}\sum_{k\in R_{j;s}}\widehat{\beta }_{jk}^{p}\text{ ,}
\end{equation*}
similar to the ones suggested in \cite{hkp}, Remark 4.7. Let us define the following weight function
\begin{equation*}
w_{js;p}=I\left( \left\vert \widehat{A}_{js;p}\right\vert >\kappa t_{n}^{p}\right) \text{ ; }
\end{equation*}
we have:
\begin{equation}
f^{\ast }=\sum_{j=0}^{J_{n}}\sum_{s=1}^{S_{j}}\left( \sum_{k\in R_{j;s}}\widehat{\beta }_{jk}\psi _{jk}\right) w_{js;p}\text{ ,}  \label{densityest}
\end{equation}
where:

\begin{itemize}
\item $J_{n}$ is the highest resolution level considered, taken such that
\begin{equation*}
B^{J_{n}}=n^{\frac{1}{2}}\text{ ;}
\end{equation*}

\item $\kappa $ is the threshold constant (for further discussion see for instance \cite{bkmpAoSb}, \cite{dgm});

\item the scaling factor $t_{n}$ depends on the sample size; we fix
\begin{equation*}
t_{n}=n^{-\frac{1}{2}}\text{ .}
\end{equation*}
\end{itemize}

\section{Minimax $L^{p}$-risk rates of convergence\label{sec:minimax}}

Our main purpose is to describe the performance of the procedure and the optimality of its convergence rates with respect to general $L^{p}\left( S^{2}\right) $-loss functions. This result is achieved in the next theorem.

\begin{theorem}
\label{maintheorem}Let $f\in \mathcal{B}_{\pi q}^{r}\left( G\right) $ belong to the Besov ball of radius $M$, i.e. $\left\Vert f\right\Vert _{\mathcal{B}_{\pi q}^{r}\left( G\right) }\leq M<+\infty $, with $r-\frac{2}{\pi }>0$.
Consider $f^{\ast }$ as defined by (\ref{densityest}). For $p\in \mathbb{N}$, there exists a constant $c_{p}=c_{p}\left( p,r,q,M,B\right) $ such that
\begin{equation*}
\sup_{f\in \mathcal{B}_{\pi q}^{r}\left( G\right) }\mathbb{E}\left\Vert f^{\ast }-f\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}\leq c_{p}n^{-\alpha \left( r,\pi ,p\right) }\text{ ,}
\end{equation*}
where
\begin{equation*}
\alpha \left( r,\pi ,p\right) =\left\{
\begin{array}{ll}
\frac{rp}{2\left( r+1\right) } & \text{for }\pi \geq \frac{2p}{2\left( r+2\right) }\text{ ,} \\
\frac{p\left( r-2\left( \frac{1}{\pi }-\frac{1}{p}\right) \right) }{2\left( r-2\left( \frac{1}{\pi }-\frac{1}{2}\right) \right) } & \text{for }\pi <\frac{2p}{2\left( r+2\right) }\text{ .}
\end{array}
\right.
\end{equation*}
If $p=+\infty $, there exists a constant $c_{\infty }=c_{\infty }\left( r,q,M,B\right) $ such that
\begin{equation*}
\sup_{f\in \mathcal{B}_{\pi q}^{r}\left( G\right) }\mathbb{E}\left\Vert f^{\ast }-f\right\Vert _{\infty }\leq c_{\infty }n^{-\alpha \left( r,\pi ,p\right) }\text{ ,}
\end{equation*}
where
\begin{equation*}
\alpha \left( r,\pi ,p\right) =\frac{r-\frac{2}{\pi }}{2\left( r-2\left( \frac{1}{\pi }-\frac{1}{2}\right) \right) }\text{ .}
\end{equation*}
\end{theorem}

\begin{remark}
This procedure achieves the minimax rates established in \cite{bkmpAoSb} and \cite{dgm}; see also \cite{WASA}.
\end{remark}

\section{Auxiliary Results\label{sec:aux}}

In order to prove Theorem \ref{maintheorem}, we will need the following

\begin{lemma}
\label{mainlemma} Consider $\widehat{\beta }_{jk}$ as described in (\ref{needcoeff2}).
There exist constants $C_{p},C_{\infty },C_{A}$ such that, for $B^{j}\leq n^{\frac{1}{2}}$, $j=0,...,J_{n}$,
\begin{equation}
\mathbb{E}\left[ \left\vert \widehat{\beta }_{jk}-\beta _{jk}\right\vert ^{p}\right] \leq C_{p}n^{-p/2}\text{ , }p\geq 1\text{ ,}  \label{E_p}
\end{equation}
\begin{equation}
\mathbb{E}\left[ \sup_{k=1,...,N_{j}}\left\vert \widehat{\beta }_{jk}-\beta _{jk}\right\vert ^{p}\right] \leq C_{\infty }\left( j+1\right) ^{p}n^{-p/2}\text{ , }p\geq 1\text{ ,}  \label{E_inf}
\end{equation}
and for all $\gamma >0$ there exists $\kappa >0$ such that
\begin{equation}
\mathbb{P}\left( \left\vert \widehat{A}_{js}-A_{js}\right\vert >\kappa t_{n}^{p}\right) \leq C_{p,\gamma }\frac{1}{n^{\gamma }}\text{ .}  \label{P_A}
\end{equation}
\end{lemma}

\begin{proof}
First of all, consider that, from Equations (\ref{needcoeff2}) and (\ref{varnoise}), we have
\begin{eqnarray*}
\mathbb{E}\left( \left\vert \widehat{\beta }_{jk}-\beta _{jk}\right\vert ^{p}\right) &=&\mathbb{E}\left( \left\vert \varepsilon _{jk;n}\right\vert ^{p}\right) \\
&=&\left( Var\left( \varepsilon _{jk;n}\right) \right) ^{\frac{p}{2}}\frac{2^{\frac{p}{2}}\Gamma \left( \frac{p+1}{2}\right) }{\sqrt{\pi }} \\
&=&\frac{1}{n^{\frac{p}{2}}}\left\Vert \psi _{jk}\right\Vert _{L^{2}(S^{2})}^{p}\frac{2^{\frac{p}{2}}\Gamma \left( \frac{p+1}{2}\right) }{\sqrt{\pi }} \\
&=&O\left( n^{-\frac{p}{2}}\right) \text{ ,}
\end{eqnarray*}
which yields (\ref{E_p}).
Now, by Mill's inequality, if $Z\sim N\left( 0,1\right) $ we have $\mathbb{P}\left( \left\vert Z\right\vert \geq x\right) \leq \sqrt{2/\pi }\exp \left( -x^{2}/2\right) /x$, which leads us, from (\ref{varnoise}), to
\begin{eqnarray*}
\mathbb{P}\left( \left\vert \varepsilon _{jk;n}\right\vert \geq x\right) &=&\mathbb{P}\left( \left\vert Z\right\vert \geq \sqrt{n}x\right) \\
&\leq &\sqrt{\frac{2}{\pi }}\frac{e^{-\frac{nx^{2}}{2}}}{\sqrt{n}x} \\
&\leq &C_{\varepsilon }e^{-\frac{nx^{2}}{2}}\text{ .}
\end{eqnarray*}
On the other hand, we have
\begin{eqnarray*}
\mathbb{E}\left[ \sup_{k=1,...,N_{j}}\left\vert \widehat{\beta }_{jk}-\beta _{jk}\right\vert ^{p}\right] &=&p\int_{\mathbb{R}^{+}}x^{p-1}\mathbb{P}\left( \sup_{k=1,...,N_{j}}\left\vert \widehat{\beta }_{jk}-\beta _{jk}\right\vert \geq x\right) dx \\
&=&p\int_{\mathbb{R}^{+}}x^{p-1}\mathbb{P}\left( \sup_{k=1,...,N_{j}}\left\vert \varepsilon _{jk;n}\right\vert \geq x\right) dx \\
&=&E_{1}+E_{2}\text{ ,}
\end{eqnarray*}
where
\begin{eqnarray*}
E_{1} &=&p\int_{0\leq x\leq \frac{2\sqrt{2}}{\sqrt{n}}j}x^{p-1}dx\text{ ,} \\
E_{2} &=&C\int_{x>\frac{2\sqrt{2}}{\sqrt{n}}j}x^{p-1}B^{2j}\max_{k}\mathbb{P}\left( \left\vert \varepsilon _{jk;n}\right\vert \geq x\right) dx\text{ .}
\end{eqnarray*}
We can easily see that
\begin{equation*}
E_{1}=C_{1}j^{p}n^{-\frac{p}{2}}\text{ ,}
\end{equation*}
while on the other hand, considering that for $x>2\sqrt{2/n}\,j$
\begin{equation*}
B^{2j}e^{-\frac{nx^{2}}{2}}\leq e^{-\frac{nx^{2}}{4}}e^{-\frac{nx^{2}}{4}+2j\log B}\leq e^{-\frac{nx^{2}}{4}}\text{ ,}
\end{equation*}
we obtain
\begin{eqnarray*}
E_{2} &\leq &C\int_{x>\frac{2\sqrt{2}}{\sqrt{n}}j}x^{p-1}B^{2j}e^{-\frac{nx^{2}}{2}}dx \\
&\leq &C_{2}n^{-\frac{p}{2}}\text{ ,}
\end{eqnarray*}
so we achieve (\ref{E_inf}).
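The Gaussian tail bound just invoked can also be verified numerically; the following is a minimal check (not part of the proof; the function names are illustrative), comparing the exact two-sided tail of a standard Gaussian with the Mill's-type bound $\sqrt{2/\pi }\,e^{-x^{2}/2}/x$.

```python
import math

def gaussian_tail(x):
    # Exact two-sided tail P(|Z| >= x) for Z ~ N(0, 1),
    # via the complementary error function: 2(1 - Phi(x)) = erfc(x / sqrt(2)).
    return math.erfc(x / math.sqrt(2.0))

def mills_bound(x):
    # The bound sqrt(2/pi) * exp(-x^2 / 2) / x used in the proof (valid for x > 0).
    return math.sqrt(2.0 / math.pi) * math.exp(-x * x / 2.0) / x

# The bound dominates the exact tail at every positive x we try.
checks = [(gaussian_tail(x), mills_bound(x)) for x in (0.5, 1.0, 2.0, 4.0)]
```

Note that the bound is loose for small $x$ and sharp in the far tail, which is exactly the regime $x>2\sqrt{2/n}\,j$ where it is applied above.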
It remains to prove (\ref{P_A}), which amounts to proving that for all $\gamma >0$ there exists $\kappa >0$ such that
\begin{equation*}
\Pr \left\{ \left( \frac{1}{\ell _{j}}\sum_{k=1}^{\ell _{j}}\widehat{\beta }_{jk}^{p}-\mathbb{E}\widehat{\beta }_{jk}^{p}\right) ^{1/p}>\frac{\kappa }{\sqrt{n}}\right\} \leq C_{p,\gamma }\frac{1}{n^{\gamma }}\text{ .}
\end{equation*}
We start by rewriting
\begin{equation*}
\widetilde{\beta }_{jk}=\sqrt{n}\beta _{jk}+\sqrt{n}\varepsilon _{jk;n}=\sqrt{n}\beta _{jk}+\varepsilon _{jk}\text{ ,}
\end{equation*}
where $\varepsilon _{jk}$ is standard Gaussian. So we need to analyze terms of the form
\begin{equation}
\left( \frac{1}{\ell _{j}}\sum_{k=1}^{\ell _{j}}\varepsilon _{jk}^{p}+\frac{p\sqrt{n}}{\ell _{j}}\sum_{k=1}^{\ell _{j}}\beta _{jk}\varepsilon _{jk}^{p-1}+...+\frac{pn^{(p-1)/2}}{\ell _{j}}\sum_{k=1}^{\ell _{j}}\beta _{jk}^{p-1}\varepsilon _{jk}\right) ^{1/p}\text{ .}  \label{pino}
\end{equation}
We start by observing that
\begin{equation*}
\frac{1}{\ell _{j}}\sum_{k=1}^{\ell _{j}}\beta _{jk}^{p-1}\varepsilon _{jk}\leq \left( \frac{1}{\ell _{j}}\sum_{k=1}^{\ell _{j}}\beta _{jk}^{2p-2}\right) ^{1/2}\left( \frac{1}{\ell _{j}}\sum_{k=1}^{\ell _{j}}\varepsilon _{jk}^{2}\right) ^{1/2}\text{ .}
\end{equation*}
Now we know that
\begin{equation*}
\sum_{k=1}^{\ell _{j}}\beta _{jk}^{2p-2}\leq \sum_{k=1}^{N_{j}}\beta _{jk}^{2p-2}=O\left( B^{-js}B^{-j(1-\frac{1}{p-1})}\right) =O\left( B^{-js}B^{-j(\frac{p-2}{p-1})}\right) \text{ .}
\end{equation*}
On the other hand, by Lemma \ref{epsilonlemma}, it is easy to see that for all $p,\gamma >0$, there exists $\kappa >0$ such that
\begin{equation*}
\Pr \left\{ \frac{1}{\ell _{j}}\sum_{k=1}^{\ell _{j}}\left\vert \varepsilon _{jk}\right\vert ^{p}>\kappa \right\} \leq \frac{C_{p,\gamma }}{\ell _{j}^{\gamma /2}}\text{ .}
\end{equation*}
It is immediate to see that
\begin{equation*}
\frac{pn^{(p-1)/2}}{\ell _{j}}\sum_{k=1}^{\ell _{j}}\beta _{jk}^{p-1}\varepsilon _{jk}\leq C\frac{n^{(p-1)/2}}{\ell _{j}^{\frac{\gamma +2}{2}}}B^{-j(\frac{p-2}{p-1}+s)}\text{ ;}
\end{equation*}
by choosing
suitable $s$ and $\gamma $, we have
\begin{equation*}
\frac{pn^{(p-1)/2}}{\ell _{j}}\sum_{k=1}^{\ell _{j}}\beta _{jk}^{p-1}\varepsilon _{jk}=o\left( \frac{1}{\ell _{j}}\sum_{k=1}^{\ell _{j}}\varepsilon _{jk}^{p}\right) \text{ .}
\end{equation*}
The same holds for all the other mixed terms in Equation (\ref{pino}).
\end{proof}

\begin{lemma}
\label{epsilonlemma}Assume that $E\varepsilon _{jk}=0$, $E\varepsilon _{jk}^{2}=1$, and
\begin{equation*}
E\varepsilon _{jk_{1}}\varepsilon _{jk_{2}}\leq \frac{C_{M}}{\left\{ 1+B^{j}d(\xi _{jk_{1}},\xi _{jk_{2}})\right\} ^{M}}\text{ , for all }M>0\text{ .}
\end{equation*}
Then for all $p\in \mathbb{N}$, $\gamma >0$ there exists $\kappa >0$ such that
\begin{equation*}
\Pr \left\{ \frac{1}{\ell _{j}}\sum_{k=1}^{\ell _{j}}\left\vert \varepsilon _{jk}\right\vert ^{p}>\kappa \right\} \leq \frac{C_{p,\gamma }}{\ell _{j}^{\gamma /2}}\text{ .}
\end{equation*}
\end{lemma}

\begin{proof}
Without loss of generality we can take $p$ to be even; note indeed that
\begin{equation*}
\Pr \left\{ \frac{1}{\ell _{j}}\sum_{k=1}^{\ell _{j}}\left\vert \varepsilon _{jk}\right\vert ^{p}>\kappa \right\} \leq \Pr \left\{ \frac{1}{\ell _{j}}\sum_{k=1}^{\ell _{j}}\varepsilon _{jk}^{2p}>\kappa ^{2}\right\} \text{ .}
\end{equation*}
Now we can write
\begin{equation*}
\frac{1}{\ell _{j}}\sum_{k=1}^{\ell _{j}}\varepsilon _{jk}^{p}=E\varepsilon _{jk}^{p}+\sum_{\tau =1}^{p}c_{\tau }\frac{1}{\ell _{j}}\sum_{k=1}^{\ell _{j}}H_{\tau }(\varepsilon _{jk})\text{ ,}
\end{equation*}
whence
\begin{equation*}
\Pr \left\{ \frac{1}{\ell _{j}}\sum_{k=1}^{\ell _{j}}\varepsilon _{jk}^{p}>p(\kappa +E\varepsilon _{jk}^{p})\right\} \leq \sum_{\tau =1}^{p}\Pr \left\{ \frac{1}{\ell _{j}}\sum_{k=1}^{\ell _{j}}H_{\tau }(\varepsilon _{jk})>\frac{\kappa }{c_{\tau }}\right\} \text{ .}
\end{equation*}
By a simple application of Markov's inequality, the result will hence follow if we simply show that
\begin{equation*}
E\left[ \frac{1}{\ell _{j}}\sum_{k=1}^{\ell _{j}}H_{\tau }(\varepsilon _{jk})\right] ^{\gamma }\leq \frac{C}{\ell _{j}^{\gamma /2}}\text{ .}
\end{equation*}
Now let us take for notational simplicity $\tau =2$; the argument for the other terms is identical. We have
\begin{equation*}
E\left[ \frac{1}{\ell _{j}}\sum_{k=1}^{\ell _{j}}H_{\tau }(\varepsilon _{jk})\right] ^{\gamma }=\frac{1}{\ell _{j}^{\gamma }}\sum_{k_{1},..,k_{\gamma }=1}^{\ell _{j}}E\left\{ H_{\tau }(\varepsilon _{jk_{1}})...H_{\tau }(\varepsilon _{jk_{\gamma }})\right\}
\end{equation*}
\begin{equation*}
=\frac{1}{\ell _{j}^{\gamma }}\left\{ \sum_{k_{1}k_{2}}^{\ell _{j}}\left[ E(\varepsilon _{jk_{1}}\varepsilon _{jk_{2}})\right] ^{2}\right\} ^{\gamma /2}
\end{equation*}
\begin{equation*}
+\frac{1}{\ell _{j}^{\gamma }}\left\{ \sum_{k_{1}k_{2}}^{\ell _{j}}\left[ E(\varepsilon _{jk_{1}}\varepsilon _{jk_{2}})\right] ^{2}\right\} ^{\frac{\gamma }{2}-2}\left\{ \sum_{k_{1}...k_{4}}^{\ell _{j}}E(\varepsilon _{jk_{1}}\varepsilon _{jk_{2}})E(\varepsilon _{jk_{2}}\varepsilon _{jk_{3}})E(\varepsilon _{jk_{3}}\varepsilon _{jk_{4}})E(\varepsilon _{jk_{4}}\varepsilon _{jk_{1}})\right\}
\end{equation*}
\begin{equation*}
+\frac{1}{\ell _{j}^{\gamma }}\left\{ \sum_{k_{1}k_{2}}^{\ell _{j}}\left[ E(\varepsilon _{jk_{1}}\varepsilon _{jk_{2}})\right] ^{2}\right\} ^{\frac{\gamma }{2}-4}\left\{ \sum_{k_{1}...k_{6}}^{\ell _{j}}E(\varepsilon _{jk_{1}}\varepsilon _{jk_{2}})...E(\varepsilon _{jk_{6}}\varepsilon _{jk_{1}})\right\}
\end{equation*}
\begin{equation*}
+...+\frac{1}{\ell _{j}^{\gamma }}\left\{ \sum_{k_{1}...k_{\gamma }}^{\ell _{j}}E(\varepsilon _{jk_{1}}\varepsilon _{jk_{2}})...E(\varepsilon _{jk_{\gamma }}\varepsilon _{jk_{1}})\right\}
\end{equation*}
\begin{equation*}
=O(\ell _{j}^{-\gamma /2})+O(\ell _{j}^{-\frac{\gamma }{2}-1})+...+O(\ell _{j}^{-\gamma +1})\text{ ,}
\end{equation*}
because
\begin{eqnarray*}
\sum_{k_{1}...k_{\gamma }}^{\ell _{j}}E(\varepsilon _{jk_{1}}\varepsilon _{jk_{2}})...E(\varepsilon _{jk_{\gamma }}\varepsilon _{jk_{1}}) &\leq &\sum_{k_{1}...k_{\gamma }}^{\ell _{j}}\left\vert E(\varepsilon _{jk_{1}}\varepsilon
_{jk_{2}})\right\vert ...\left\vert E(\varepsilon _{jk_{\gamma -1}}\varepsilon _{jk_{\gamma }})\right\vert \\
&\leq &\ell _{j}\left\{ \sum_{k_{2}}^{\ell _{j}}\left\vert E(\varepsilon _{jk_{1}}\varepsilon _{jk_{2}})\right\vert \right\} ^{\gamma -1}=O(\ell _{j})\text{ .}
\end{eqnarray*}
\end{proof}

\section{Proof of Theorem \protect\ref{maintheorem} (upper bound)\label{sec:proof}}

First of all, we stress that, where explicitly mentioned, the proofs are entirely analogous to the standard thresholding case described in \cite{bkmpAoSb}, and we will therefore omit them. Considering that $\sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}=\sum_{k=1}^{N_{j}}$, it is easy to see that
\begin{eqnarray*}
\mathbb{E}\left\Vert f^{\ast }-f\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p} &=&\mathbb{E}\left\Vert \sum_{j=0}^{J_{n}}\sum_{s=1}^{S_{j}}\left( \sum_{k\in R_{j;s}}\widehat{\beta }_{jk}\psi _{jk}\right) w_{js}-\sum_{j\geq 0}\sum_{k=1}^{N_{j}}\beta _{jk}\psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p} \\
&=&\mathbb{E}\left\Vert \sum_{j=0}^{J_{n}}\sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left( w_{js}\widehat{\beta }_{jk}-\beta _{jk}\right) \psi _{jk}-\sum_{j>J_{n}}\sum_{k=1}^{N_{j}}\beta _{jk}\psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p} \\
&\leq &2^{p-1}\left( \mathbb{E}\left\Vert \sum_{j=0}^{J_{n}}\sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left( w_{js}\widehat{\beta }_{jk}-\beta _{jk}\right) \psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}+\left\Vert \sum_{j>J_{n}}\sum_{k=1}^{N_{j}}\beta _{jk}\psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}\right) \\
&=:&I+II\text{ .}
\end{eqnarray*}

\textbf{CASE\ I:} \emph{Regular\ Zone}

Consider $p<+\infty $. For $p\leq \pi $, we have $\mathcal{B}_{\pi q}^{r}\subset \mathcal{B}_{pq}^{r}$: we can therefore take $\pi =p$.
Consider instead the case $p>\pi $: we use the embedding $\mathcal{B}_{\pi q}^{r}\subset \mathcal{B}_{pq}^{r-\frac{2}{p}+\frac{2}{\pi }}$, and moreover we assume
\begin{equation*}
r\geq \frac{2}{p}\text{ , }\frac{r}{2r+2}=\frac{rp}{2\left( r+1\right) p}\leq \frac{r\pi }{2p}\text{ ,}
\end{equation*}
so that, as in \cite{bkmpAoSb},
\begin{equation*}
II\leq O\left( n^{-\frac{pr}{2r+2}}\right) \text{ ,}
\end{equation*}
as claimed. Consider now the variance term. First of all, we easily obtain from the Loève inequality
\begin{eqnarray*}
I &\leq &C\mathbb{E}\left\Vert \sum_{j=0}^{J_{n}}\sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left( w_{js}\widehat{\beta }_{jk}-\beta _{jk}\right) \psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p} \\
&\leq &CJ_{n}^{p-1}\sum_{j\leq J_{n}}\mathbb{E}\left\Vert \sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left( w_{js}\widehat{\beta }_{jk}-\beta _{jk}\right) \psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}\text{ .}
\end{eqnarray*}
As described in \cite{bkmpAoSb}, see also \cite{marpecbook}, we have the following needlet property:
\begin{equation*}
\mathbb{E}\left\Vert \sum_{k}\alpha _{k}\psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}=\left\Vert \psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}\sum_{k}\mathbb{E}\left\vert \alpha _{k}\right\vert ^{p}\text{ .}
\end{equation*}
Hence we have
\begin{equation*}
\sum_{j\leq J_{n}}\mathbb{E}\left\Vert \sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left( w_{js}\widehat{\beta }_{jk}-\beta _{jk}\right) \psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}
\end{equation*}
\begin{eqnarray*}
&=&\sum_{j\leq J_{n}}\mathbb{E}\left\Vert \sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left( w_{js}\widehat{\beta }_{jk}-\beta _{jk}\right) \psi _{jk}I\left( \left\vert \widehat{A}_{js}\right\vert \geq t_{n}^{p}\right) I\left( \left\vert A_{js}\right\vert \geq \frac{t_{n}^{p}}{2}\right) \right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p} \\
&&+\sum_{j\leq
J_{n}}\mathbb{E}\left\Vert \sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left( w_{js}\widehat{\beta }_{jk}-\beta _{jk}\right) \psi _{jk}I\left( \left\vert \widehat{A}_{js}\right\vert \geq t_{n}^{p}\right) I\left( \left\vert A_{js}\right\vert <\frac{t_{n}^{p}}{2}\right) \right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p} \\ &&+\sum_{j\leq J_{n}}\mathbb{E}\left\Vert \sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left( w_{js}\widehat{\beta }_{jk}-\beta _{jk}\right) \psi _{jk}I\left( \left\vert \widehat{A}_{js}\right\vert <t_{n}^{p}\right) I\left( \left\vert A_{js}\right\vert \geq 2t_{n}^{p}\right) \right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p} \\ &&+\sum_{j\leq J_{n}}\mathbb{E}\left\Vert \sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left( w_{js}\widehat{\beta }_{jk}-\beta _{jk}\right) \psi _{jk}I\left( \left\vert \widehat{A}_{js}\right\vert <t_{n}^{p}\right) I\left( \left\vert A_{js}\right\vert <2t_{n}^{p}\right) \right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p} \end{eqnarray* \begin{eqnarray*} &\leq &C\left\{ \sum_{j\leq J_{n}}\sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left\Vert \psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}\right. 
\mathbb{E}\left[ \left( \widehat{\beta }_{jk}-\beta _{jk}\right) ^{p}I\left( \left\vert \widehat{A}_{js}\right\vert \geq t_{n}^{p}\right) I\left( \left\vert A_{js}\right\vert \geq \frac{t_{n}^{p}}{2}\right) \right] \\
&&+\sum_{j\leq J_{n}}\sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left\Vert \psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}\mathbb{E}\left[ \left( \widehat{\beta }_{jk}-\beta _{jk}\right) ^{p}I\left( \left\vert \widehat{A}_{js}\right\vert \geq t_{n}^{p}\right) I\left( \left\vert A_{js}\right\vert <\frac{t_{n}^{p}}{2}\right) \right] \\
&&+\sum_{j\leq J_{n}}\sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left\Vert \psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}\left\vert \beta _{jk}\right\vert ^{p}\mathbb{E}\left[ I\left( \left\vert \widehat{A}_{js}\right\vert <t_{n}^{p}\right) I\left( \left\vert A_{js}\right\vert \geq 2t_{n}^{p}\right) \right] \\
&&+\left. \sum_{j\leq J_{n}}\sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left\Vert \psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}\left\vert \beta _{jk}\right\vert ^{p}\mathbb{E}\left[ I\left( \left\vert \widehat{A}_{js}\right\vert <t_{n}^{p}\right) I\left( \left\vert A_{js}\right\vert <2t_{n}^{p}\right) \right] \right\}
\end{eqnarray*}
\begin{equation*}
=Aa+Au+Ua+Uu\text{ .}
\end{equation*}
We have thus split the sum into four terms: in the first, $Aa$, both $\widehat{A}_{js}$ and $A_{js}$ are large; in $Uu$ they are both small; and in the remaining two, $Au$ and $Ua$, the distance between $\widehat{A}_{js}$ and $A_{js}$ is large. In the first two cases, in order to achieve the minimax rate of convergence, we will split the sum into two parts and show the convergence of each term by using mainly (\ref{boundnorm}), (\ref{E_p}) and (\ref{E_inf}). The convergence of the last two terms will be proved by using (\ref{P_A}).
We start by studying
\begin{eqnarray*}
Aa &\leq &C\sum_{j\leq J_{n}}\sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left\Vert \psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}\mathbb{E}\left[ \left\vert \widehat{\beta }_{jk}-\beta _{jk}\right\vert ^{p}\right] I\left( \left\vert A_{js}\right\vert \geq \frac{t_{n}^{p}}{2}\right) \\
&\leq &C\sum_{j\leq J_{n}}\sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}B^{j\left( p-2\right) }I\left( \left\vert A_{js}\right\vert \geq \frac{t_{n}^{p}}{2}\right) \mathbb{E}\left[ \left\vert \widehat{\beta }_{jk}-\beta _{jk}\right\vert ^{p}\right] \text{ .}
\end{eqnarray*}
As in \cite{bkmpAoSb}, we fix $J_{1n}$ such that
\begin{equation*}
B^{J_{1n}}=O\left( n^{\frac{1}{2\left( r+1\right) }}\right) \text{ ,}
\end{equation*}
hence we have
\begin{eqnarray*}
Aa &\leq &Cn^{-p/2}\left( \sum_{j\leq J_{1n}}\sum_{s=1}^{S_{j}}\ell _{j}B^{j\left( p-2\right) }I\left( \left\vert A_{js}\right\vert \geq \frac{t_{n}^{p}}{2}\right) +\sum_{j=J_{1n}}^{J_{n}}\sum_{s=1}^{S_{j}}\ell _{j}B^{j\left( p-2\right) }I\left( \left\vert A_{js}\right\vert \geq \frac{t_{n}^{p}}{2}\right) \right) \\
&\leq &Cn^{-p/2}\left( \sum_{j\leq J_{1n}}\left( S_{j}\cdot \ell _{j}\right) B^{j\left( p-2\right) }+\sum_{j=J_{1n}}^{J_{n}}\sum_{s=1}^{S_{j}}\ell _{j}B^{j\left( p-2\right) }I\left( \left\vert A_{js}\right\vert \geq \frac{t_{n}^{p}}{2}\right) \right) \\
&\leq &Cn^{-p/2}\left( \sum_{j\leq J_{1n}}B^{jp}+\sum_{j=J_{1n}}^{J_{n}}\sum_{s=1}^{S_{j}}\ell _{j}B^{j\left( p-2\right) }I\left( \left\vert A_{js}\right\vert \geq \frac{t_{n}^{p}}{2}\right) \right) \\
&\leq &Cn^{-p/2}\left( B^{pJ_{1n}}+\sum_{j=J_{1n}}^{J_{n}}\sum_{s=1}^{S_{j}}\ell _{j}B^{j\left( p-2\right) }I\left( \left\vert A_{js}\right\vert \geq \frac{t_{n}^{p}}{2}\right) \right) \text{ .}
\end{eqnarray*}
Observe now that
\begin{eqnarray*}
\sum_{j=J_{1n}}^{J_{n}}\sum_{s=1}^{S_{j}}\ell _{j}B^{j\left( p-2\right) }I\left( \left\vert A_{js}\right\vert \geq \frac{t_{n}^{p}}{2}\right) &\leq &\sum_{j=J_{1n}}^{J_{n}}\ell _{j}B^{j\left( p-2\right) }\sum_{s=1}^{S_{j}}\left\vert A_{js}\right\vert \left( \frac{t_{n}^{p}}{2}\right) ^{-1} \\
&\leq &Ct_{n}^{-p}\sum_{j=J_{1n}}^{J_{n}}\ell _{j}B^{j\left( p-2\right) }\sum_{s=1}^{S_{j}}\frac{1}{\ell _{j}}\sum_{k\in R_{j;s}}\left\vert \beta _{jk}\right\vert ^{p} \\
&\leq &Ct_{n}^{-p}\sum_{j=J_{1n}}^{J_{n}}B^{j\left( p-2\right) }\sum_{k=1}^{N_{j}}\left\vert \beta _{jk}\right\vert ^{p} \\
&\leq &Cn^{\frac{p}{2}}\sum_{j=J_{1n}}^{J_{n}}\sum_{k=1}^{N_{j}}\left\vert \beta _{jk}\right\vert ^{p}B^{j\left( p-2\right) }\text{ .}
\end{eqnarray*}
Now, because $f\in \mathcal{B}_{pq}^{r}$, we have
\begin{equation*}
\sum_{k=1}^{N_{j}}\left\vert \beta _{jk}\right\vert ^{p}B^{j\left( p-2\right) }=C\sum_{k=1}^{N_{j}}\left\vert \beta _{jk}\right\vert ^{p}\left\Vert \psi _{jk}\right\Vert _{p}^{p}\leq CB^{-prj}\text{ .}
\end{equation*}
Hence, as in \cite{bkmpAoSb},
\begin{equation*}
n^{\frac{p}{2}}\sum_{j=J_{1n}}^{J_{n}}\sum_{k=1}^{N_{j}}\left\vert \beta _{jk}\right\vert ^{p}B^{j\left( p-2\right) }\leq Cn^{\frac{p}{2\left( r+1\right) }}\leq CB^{pJ_{1n}}\text{ .}
\end{equation*}
Finally,
\begin{equation*}
Aa\leq Cn^{-p/2}B^{pJ_{1n}}=Cn^{\frac{-pr}{2\left( r+1\right) }}\text{ .}
\end{equation*}
Consider now the term $Uu$.
We have
\begin{eqnarray*}
Uu &\leq &C\sum_{j\leq J_{n}}\sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left\Vert \psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}\left\vert \beta _{jk}\right\vert ^{p}I\left( \left\vert A_{js}\right\vert <2t_{n}^{p}\right) \\
&\leq &C\sum_{j\leq J_{n}}B^{j\left( p-2\right) }\sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left\vert \beta _{jk}\right\vert ^{p}I\left( \left\vert A_{js}\right\vert <2t_{n}^{p}\right) \\
&\leq &C\sum_{j\leq J_{n}}\ell _{j}B^{j\left( p-2\right) }\sum_{s=1}^{S_{j}}A_{js}I\left( \left\vert A_{js}\right\vert <2t_{n}^{p}\right) \\
&\leq &C\left[ \sum_{j\leq J_{1n}}N_{j}B^{j\left( p-2\right) }2t_{n}^{p}+\sum_{j=J_{1n}}^{J_{n}}\ell _{j}B^{j\left( p-2\right) }\sum_{s=1}^{S_{j}}A_{js}\right] \\
&\leq &C\left[ \sum_{j\leq J_{1n}}N_{j}B^{j\left( p-2\right) }n^{-\frac{p}{2}}+\sum_{j=J_{1n}}^{J_{n}}\sum_{k=1}^{N_{j}}\left\vert \beta _{jk}\right\vert ^{p}\left\Vert \psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}\right] \\
&\leq &C\left[ n^{-\frac{p}{2}}B^{pJ_{1n}}+B^{-prJ_{1n}}\right] =O\left( n^{-\frac{pr}{2\left( r+1\right) }}\right) \text{ .}
\end{eqnarray*}
Let us study now $Au$ and $Ua$.
As in \cite{bkmpAoSb}, we have
\begin{eqnarray*}
Au &\leq &\sum_{j\leq J_{n}}\sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left\Vert \psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}\left( \mathbb{E}\left[ \left\vert \widehat{\beta }_{jk}-\beta _{jk}\right\vert ^{2p}\right] \right) ^{\frac{1}{2}}\left( \mathbb{P}\left( \left\vert \widehat{A}_{js}-A_{js}\right\vert \geq \frac{\kappa n^{-\frac{p}{2}}}{2}\right) \right) ^{\frac{1}{2}} \\
&\leq &CB^{pJ_{n}}n^{-\frac{p}{2}}n^{-\gamma }\leq Cn^{-\gamma }\text{ ;} \\
Ua &\leq &\sum_{j\leq J_{n}}\sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left\Vert \psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}\left\vert \beta _{jk}\right\vert ^{p}\mathbb{P}\left( \left\vert \widehat{A}_{js}-A_{js}\right\vert \geq \kappa n^{-\frac{p}{2}}\right) \leq Cn^{-\gamma }\left\Vert f\right\Vert _{p}^{p}\text{ .}
\end{eqnarray*}
Because, choosing $\gamma \geq \frac{1}{2}$, we have
\begin{equation*}
n^{-\gamma }\leq n^{-\frac{1}{2}}\leq n^{-\frac{r}{2\left( r+1\right) }}\text{ ,}
\end{equation*}
the result is proved.
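As a sanity check on the exponent just obtained, consider the worked instance $p=2$ in the regular zone (an illustration added here, not part of the original argument):

```latex
% For p = 2 (with pi in the regular zone) the exponent reduces to
%   alpha(r, pi, 2) = 2r / (2(r+1)) = r / (r+1),
% so the risk bound of the theorem reads
\begin{equation*}
\mathbb{E}\left\Vert f^{\ast }-f\right\Vert _{L^{2}\left( \mathbb{S}^{2}\right) }^{2}
\leq c_{2}\,n^{-\frac{r}{r+1}}=c_{2}\,n^{-\frac{2r}{2r+2}}\text{ ,}
\end{equation*}
% i.e. the classical nonparametric rate n^{-2r/(2r+d)} in dimension d = 2,
% consistent with the sphere S^2 being two-dimensional.
```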
Consider now $p=+\infty $: we assume $f\in \mathcal{B}_{\infty \infty }^{r}$, to obtain
\begin{equation*}
\mathbb{E}\left\Vert f^{\ast }-f\right\Vert _{\infty }\leq \mathbb{E}\left\Vert \sum_{j=0}^{J_{n}}\sum_{k=1}^{N_{j}}\left( w_{j}\widehat{\beta }_{jk}-\beta _{jk}\right) \psi _{jk}\right\Vert _{L^{\infty }\left( \mathbb{S}^{2}\right) }+\left\Vert \sum_{j>J_{n}}\sum_{k=1}^{N_{j}}\beta _{jk}\psi _{jk}\right\Vert _{L^{\infty }\left( \mathbb{S}^{2}\right) }=I+II\text{ .}
\end{equation*}
As in \cite{bkmpAoSb}, we have
\begin{equation*}
II=O\left( n^{-\frac{r}{2\left( r+1\right) }}\right) \text{ .}
\end{equation*}
For what concerns $I$, we instead have
\begin{equation*}
I\leq \sum_{j=0}^{J_{n}}\mathbb{E}\left\Vert \sum_{k=1}^{N_{j}}\left( w_{j}\widehat{\beta }_{jk}-\beta _{jk}\right) \psi _{jk}\right\Vert _{L^{\infty }\left( \mathbb{S}^{2}\right) }\leq C\sum_{j=0}^{J_{n}}B^{j}\mathbb{E}\left[ \sup_{k}\left\vert w_{j}\widehat{\beta }_{jk}-\beta _{jk}\right\vert \right]
\end{equation*}
\begin{eqnarray*}
&\leq &C\sum_{j=0}^{J_{n}}B^{j}\mathbb{E}\left[ \sup_{k}\left\vert \widehat{\beta }_{jk}-\beta _{jk}\right\vert \right] I\left( \left\vert A_{j}\right\vert \geq \frac{\kappa n^{-\frac{1}{2}}}{2}\right) \\
&&+C\sum_{j=0}^{J_{n}}B^{j}\mathbb{E}\left[ \sup_{k}\left\vert \widehat{\beta }_{jk}-\beta _{jk}\right\vert I\left( \left\vert \widehat{A}_{j}-A_{j}\right\vert \geq \frac{\kappa n^{-\frac{1}{2}}}{2}\right) \right] \\
&&+C\sum_{j=0}^{J_{n}}B^{j}\sup_{k}\left\vert \beta _{jk}\right\vert \mathbb{E}\left[ I\left( \left\vert \widehat{A}_{j}-A_{j}\right\vert \geq \kappa n^{-\frac{1}{2}}\right) \right] \\
&&+C\sum_{j=0}^{J_{n}}B^{j}\sup_{k}\left\vert \beta _{jk}\right\vert I\left( \left\vert A_{j}\right\vert <2\kappa n^{-\frac{1}{2}}\right) \\
&=&Aa+Au+Ua+Uu\text{ .}
\end{eqnarray*}
Again, we choose $J_{1,n}$ such that
\begin{equation*}
B^{J_{1,n}}=\kappa ^{\prime }n^{\frac{1}{2\left( r+1\right) }}\text{ , \ }I\left( \left\vert A_{j}\right\vert \geq \frac{\kappa n^{-\frac{1}{2}}}{2}\right) =0\text{ for }j>J_{1,n}\text{ ,}
\end{equation*}
and similarly to \cite{bkmpAoSb}, we obtain
\begin{eqnarray*}
Aa &\leq &CJ_{1,n}n^{-\frac{1}{2}}B^{J_{1,n}}\leq Cn^{-\frac{r}{2\left( r+1\right) }}\text{ ;} \\
Uu &\leq &C\left\{ B^{-J_{1,n}\left( r+1\right) }+B^{-J_{1,n}}\right\} \leq Cn^{-\frac{r}{2\left( r+1\right) }}\text{ .}
\end{eqnarray*}
The other two terms, $Au$ and $Ua$, are handled as in the case previously described. For general $\pi $ and $q$, we observe that $\mathcal{B}_{\pi q}^{r}\subset \mathcal{B}_{\infty \infty }^{r^{\prime }}$, $r^{\prime }=r-2/\pi $. Hence we obtain
\begin{equation*}
\mathbb{E}\left\Vert f^{\ast }-f\right\Vert _{L^{\infty }\left( \mathbb{S}^{2}\right) }\leq CJ_{n}n^{-\frac{r^{\prime }}{2\left( r^{\prime }+1\right) }}=CJ_{n}n^{-\frac{r-2/\pi }{2\left( r-2\left( 1/\pi -1/2\right) \right) }}\text{ ,}
\end{equation*}
as claimed.

\textbf{CASE\ II:} \emph{Sparse Zone}

The proof follows the same steps as in the regular case. Indeed we have $\mathcal{B}_{\pi q}^{r}\subset \mathcal{B}_{pq}^{r-2\left( \frac{1}{\pi }-\frac{1}{p}\right) }$. Hence we have
\begin{eqnarray*}
\mathbb{E}\left\Vert f^{\ast }-f\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p} &\leq &2^{p-1}\left( \mathbb{E}\left\Vert \sum_{j=0}^{J_{n}}\sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left( w_{js}\widehat{\beta }_{jk}-\beta _{jk}\right) \psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}+\left\Vert \sum_{j>J_{n}}\sum_{k=1}^{N_{j}}\beta _{jk}\psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}\right) \\
&=:&I+II\text{ .}
\end{eqnarray*}
Also in this case, as in \cite{bkmpAoSb}, because $r-2/\pi +1\geq 1$, we have for the bias term
\begin{equation*}
II=O\left( n^{-p\left( r-2\left( \frac{1}{\pi }-\frac{1}{p}\right) \right) /\left( 2\left( r-2\left( \frac{1}{\pi }-\frac{1}{2}\right) \right) \right) }\right) \text{ .}
\end{equation*}
On the other hand, we split $I$ again into four terms as above.
On one hand, we obtain
\begin{eqnarray*}
Au &\leq &\sum_{j\leq J_{n}}\sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left\Vert \psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}\left( \mathbb{E}\left[ \left\vert \widehat{\beta }_{jk}-\beta _{jk}\right\vert ^{2p}\right] \right) ^{\frac{1}{2}}\left( \mathbb{P}\left( \left\vert \widehat{A}_{j}-A_{j}\right\vert \geq \frac{\kappa t_{n}^{p}}{2}\right) \right) ^{\frac{1}{2}}\text{ ,} \\
Ua &\leq &\sum_{j\leq J_{n}}\sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}\left\Vert \psi _{jk}\right\Vert _{L^{p}\left( \mathbb{S}^{2}\right) }^{p}\left\vert \beta _{jk}\right\vert ^{p}\mathbb{P}\left( \left\vert \widehat{A}_{j}-A_{j}\right\vert \geq \kappa t_{n}^{p}\right) \text{ ,}
\end{eqnarray*}
whose upper bounds follow exactly the same procedure developed in the regular zone. On the other hand, consider initially
\begin{equation*}
Aa\leq Cn^{-\frac{p}{2}}\sum_{j\leq J_{n}}\sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}B^{j\left( p-2\right) }I\left( \left\vert A_{js}\right\vert \geq \frac{\kappa n^{-\frac{p}{2}}}{2}\right) \text{ .}
\end{equation*}
In this case, we fix $J_{2n}$ so that
\begin{equation*}
B^{J_{2n}}=O\left( n^{\frac{1}{2\left( r-\frac{2}{\pi }+1\right) }}\right) \text{ , }I\left( \left\vert A_{js}\right\vert \geq \frac{t_{n}^{p}}{2}\right) \equiv 0\text{ for }j\geq J_{2n}\text{ ,}
\end{equation*}
to obtain
\begin{eqnarray}
Aa &\leq &Cn^{-\frac{p}{2}}\sum_{j\leq J_{2n}}\sum_{s=1}^{S_{j}}\sum_{k\in R_{j;s}}B^{j\left( p-2\right) }I\left( \left\vert A_{js}\right\vert \geq \frac{t_{n}^{p}}{2}\right)  \label{Aaeqn} \\
&\leq &Cn^{-\frac{p}{2}}\sum_{j\leq J_{2n}}B^{j\left( p-2\right) }N_{j}\frac{\left\vert A_{j}\right\vert ^{\frac{1}{p}}}{t_{n}}  \notag \\
&\leq &Cn^{-\frac{p}{2}}t_{n}^{-1}\sum_{j\leq J_{2n}}B^{j\left( p-2\right) }N_{j}^{1-\frac{1}{p}}\left( \sum_{k=1}^{N_{j}}\left\vert \beta _{jk}\right\vert ^{p}\right) ^{\frac{1}{p}}  \notag \\
&\leq &Cn^{-\frac{p}{2}}t_{n}^{-1}\sum_{j\leq J_{2n}}B^{j\left( p-2\right) }N_{j}^{1-\frac{1}{p}}\left( \sum_{k=1}^{N_{j}}\left\vert \beta _{jk}\right\vert ^{\pi }\right) ^{\frac{1}{\pi }}  \notag \\
&\leq &Cn^{-\frac{p}{2}}t_{n}^{-1}\sum_{j\leq J_{2n}}B^{j\left( p-2\right) }N_{j}^{1-\frac{1}{p}}\left( \sum_{k=1}^{N_{j}}\left\vert \beta _{jk}\right\vert ^{\pi }\left\Vert \psi _{jk}\right\Vert _{L^{\pi }\left( \mathbb{S}^{2}\right) }^{\pi }\right) ^{\frac{1}{\pi }}B^{-j\left( 1-\frac{2}{\pi }\right) }  \notag \\
&\leq &Cn^{-\frac{p}{2}}t_{n}^{-1}\sum_{j\leq J_{2n}}B^{j\left( p-2\right) }B^{2j\left( 1-\frac{1}{p}\right) }B^{-rj}B^{-j\left( 1-\frac{2}{\pi }\right) }  \notag \\
&\leq &Cn^{\frac{1-p}{2}}B^{J_{2n}\left( p-2-\left( r+1-\frac{2}{\pi }\right) \right) }  \notag \\
&\leq &Cn^{-\frac{p\left( r+2\left( \frac{1}{p}-\frac{1}{\pi }\right) \right) }{2\left( r-\frac{2}{\pi }+1\right) }}  \notag
\end{eqnarray}
because in the last inequality we used
\begin{equation*}
\frac{1-p}{2}+\frac{p-2-\left( r+1-\frac{2}{\pi }\right) }{2\left( r-\frac{2}{\pi }+1\right) }=-\frac{p\left( r+2\left( \frac{1}{p}-\frac{1}{\pi }\right) \right) }{2\left( r-\frac{2}{\pi }+1\right) }\text{ .}
\end{equation*}
Consider now
\begin{eqnarray*}
Uu &\leq &C\sum_{j\leq J_{n}}B^{j\left( p-2\right) }\ell _{j}\sum_{s=1}^{S_{j}}A_{js}I\left( \left\vert A_{js}\right\vert <2t_{n}^{p}\right) =C\sum_{j\leq J_{2n}}B^{j\left( p-2\right) }\ell _{j}\sum_{s=1}^{S_{j}}A_{js}I\left( \left\vert A_{js}\right\vert <2t_{n}^{p}\right) \\
&&+\sum_{j=J_{2n}}^{J_{n}}B^{j\left( p-2\right) }\ell _{j}\sum_{s=1}^{S_{j}}A_{js}I\left( \left\vert A_{js}\right\vert <2t_{n}^{p}\right) =Uu_{1}+Uu_{2}\text{ .}
\end{eqnarray*}
Now fix
\begin{equation*}
m=\frac{p-2}{r-\frac{2}{\pi }+1}\text{ ,}
\end{equation*}
so that
\begin{eqnarray*}
p-m &=&p\frac{r-2\left( \frac{1}{\pi }-\frac{1}{p}\right) }{r-\frac{2}{\pi }+1}>0\text{ ;} \\
m-\pi &=&\frac{p-\pi \left( r+1\right) }{r-\frac{2}{\pi }+1}>0\text{ .}
\end{eqnarray*}
Simple calculations lead to
\begin{eqnarray*}
Uu_{1} &=&C\sum_{j\leq J_{2n}}B^{j\left(
p-2\right) }\ell _{j}\sum_{s=1}^{S_{j}}A_{js}I\left( \left\vert A_{js}\right\vert <2t_{n}^{p}\right) \\ &\leq &C\sum_{j\leq J_{2n}}B^{j\left( p-2\right) }\ell _{j}\sum_{s=1}^{S_{j}}A_{js}\left( \frac{A_{js}}{t_{n}^{p}}\right) ^{\frac{ }{p}-1} \\ &\leq &Ct_{n}^{p-1}\sum_{j\leq J_{2n}}B^{j\left( p-2\right) }\ell _{j}\sum_{s=1}^{S_{j}}A_{js}^{\frac{1}{p}} \\ &\leq &Ct_{n}^{p-1}\sum_{j\leq J_{2n}}B^{j\left( p-2\right) }\ell _{j}^{1 \frac{1}{p}}\sum_{s=1}^{S_{j}}\left( \sum_{k\in R_{js}}\beta _{jk}^{p}\right) ^{\frac{1}{p}} \\ &\leq &Ct_{n}^{p-1}\sum_{j\leq J_{2n}}B^{j\left( p-2\right) }\left( \ell _{j}\cdot S_{j}\right) ^{1-\frac{1}{p}}S_{j}^{\frac{1}{p}}\left( \sum_{k=1}^{N_{j}}\beta _{jk}^{m}\right) ^{\frac{1}{m}} \\ &\leq &Ct_{n}^{p-1}\sum_{j\leq J_{2n}}B^{j\left( p-2\right) }N_{j}\left( \sum_{k=1}^{N_{j}}\left\vert \beta _{jk}\right\vert ^{m}\left\Vert \psi _{jk}\right\Vert _{L^{m}\left( \mathbb{S}^{2}\right) }^{m}\right) ^{\frac{1} m}}B^{-j\left( 1-\frac{2}{m}\right) } \\ &\leq &Ct_{n}^{p-1}\sum_{j\leq J_{2n}}B^{j\left( p-1+\frac{2}{m}\right) }B^{-jr} \\ &\leq &Ct_{n}^{p-1}\sum_{j\leq J_{2n}}B^{j\left( p-1+\frac{2}{m}\right) }B^{-j\left( r-\frac{2}{\pi }+\frac{2}{m}\right) } \\ &=&Ct_{n}^{p-1}\sum_{j\leq J_{2n}}B^{j\left( p-2-\left( r-1-\frac{2}{\pi \right) \right) } \\ &=&Ct_{n}^{p-1}B^{J_{2n}\left( p-2-\left( r-1-\frac{2}{\pi }\right) \right) } \\ &\leq &Cn^{\frac{1-p}{2}}B^{J_{2n}\left( p-2-\left( r-1-\frac{2}{\pi \right) \right) }\text{ ,} \end{eqnarray* exactly as above in Equation (\ref{Aaeqn}). 
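The proof hinges on two exact cancellations among the exponents: the identity invoked after Equation (\ref{Aaeqn}) and the vanishing exponent in the bound for $Uu_{2}$ below. Both are pure algebra and can be double-checked with exact rational arithmetic; the following Python sketch (illustrative parameter values only, not part of the proof) verifies them:

```python
from fractions import Fraction as F

def lhs(p, r, pi):
    # (1-p)/2 + (p-2-(r+1-2/pi)) / (2(r-2/pi+1))
    return F(1 - p, 2) + (p - 2 - (r + 1 - F(2) / pi)) / (2 * (r - F(2) / pi + 1))

def rhs(p, r, pi):
    # -p(r+2(1/p-1/pi)) / (2(r-2/pi+1))
    return -p * (r + 2 * (F(1) / p - F(1) / pi)) / (2 * (r - F(2) / pi + 1))

def uu2_exponent(p, r, pi):
    # (p-m) - m(r-2/pi+2/m) with m = (p-2)/(r-2/pi+1); vanishes identically
    m = F(p - 2) / (r - F(2) / pi + 1)
    return (p - m) - m * (r - F(2) / pi + 2 / m)

# Illustrative parameter choices with r - 2/pi + 1 > 0; the identities hold
# as pure algebra for any admissible (p, r, pi).
for p, r, pi in [(4, 2, 1), (6, 3, 2), (10, 2, 2)]:
    assert lhs(p, r, pi) == rhs(p, r, pi)
    assert uu2_exponent(p, r, pi) == 0
```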
We have to study just the last term
\begin{equation*}
Uu_{2}=C\sum_{j=J_{2n}}^{J_{n}}N_{j}B^{j\left( p-2\right) }A_{j}I\left( \left\vert A_{j}\right\vert <2t_{n}^{p}\right) \text{ .}
\end{equation*}
We have
\begin{eqnarray*}
Uu_{2} &=&C\sum_{j=J_{2n}}^{J_{n}}B^{j\left( p-2\right) }\ell _{j}\sum_{s=1}^{S_{j}}A_{js}I\left( \left\vert A_{js}\right\vert <2t_{n}^{p}\right) \\
&\leq &C\sum_{j=J_{2n}}^{J_{n}}B^{j\left( p-2\right) }\ell _{j}\sum_{s=1}^{S_{j}}A_{js}^{\frac{m}{p}}A_{js}^{1-\frac{m}{p}}I\left( \left\vert A_{js}\right\vert <2t_{n}^{p}\right) \\
&\leq &Ct_{n}^{p-m}\sum_{j=J_{2n}}^{J_{n}}B^{j\left( p-2\right) }\ell _{j}\sum_{s=1}^{S_{j}}A_{js}^{\frac{m}{p}} \\
&\leq &Ct_{n}^{p-m}\sum_{j=J_{2n}}^{J_{n}}B^{j\left( p-2\right) }\ell _{j}^{1-\frac{m}{p}}\sum_{s=1}^{S_{j}}\left( \sum_{k\in R_{js}}\left\vert \beta _{jk}\right\vert ^{p}\right) ^{\frac{m}{p}} \\
&\leq &Ct_{n}^{p-m}\sum_{j=J_{2n}}^{J_{n}}B^{j\left( p-2\right) }\ell _{j}S_{j}\left( \sum_{k=1}^{N_{j}}\left\vert \beta _{jk}\right\vert ^{p}\right) ^{\frac{m}{p}} \\
&\leq &Ct_{n}^{p-m}\sum_{j=J_{2n}}^{J_{n}}B^{j\left( p-2\right) }\left( \ell _{j}\cdot S_{j}\right) \left( \sum_{k}\left\vert \beta _{jk}\right\vert ^{m}\right) \\
&\leq &Ct_{n}^{p-m}\sum_{j=J_{2n}}^{J_{n}}B^{j\left( p-2\right) }N_{j}\left( \sum_{k}\left\vert \beta _{jk}\right\vert ^{m}\right) \\
&\leq &Ct_{n}^{p-m}\sum_{j=J_{2n}}^{J_{n}}B^{j\left( p-m\right) }\left( \sum_{k}\left\vert \beta _{jk}\right\vert ^{m}\left\Vert \psi _{jk}\right\Vert _{L^{m}\left( \mathbb{S}^{2}\right) }^{m}\right) \\
&\leq &Ct_{n}^{p-m}\sum_{j=J_{2n}}^{J_{n}}B^{j\left( p-m\right) }B^{-jm\left( r-\frac{2}{\pi }+\frac{2}{m}\right) }\text{ .}
\end{eqnarray*}
We can easily see that
\begin{equation*}
\left( p-m\right) -m\left( r-\frac{2}{\pi }+\frac{2}{m}\right) =p-2-\frac{p-2}{r-\frac{2}{\pi }+1}\left( 1+r-\frac{2}{\pi }\right) =0\text{ .}
\end{equation*}
Hence
\begin{equation*}
Uu_{2}\leq Ct_{n}^{p-m}=O\left( n^{-\frac{p\left( r+2\left( \frac{1}{p}-\frac{1}{\pi }\right) \right) }{2\left( r-\frac{2}{\pi }+1\right) }}\right) \text{ ,}
\end{equation*}
as claimed.
\begin{acknowledgement}
The author thanks Prof. Domenico Marinucci for his helpful suggestions and for the discussions that contributed to the development of this work.
\end{acknowledgement}
\section{Introduction} \label{sec:intro} It is now believed that up to 80\% of all galaxies host nuclear star clusters (NSCs) in their centres \citep[e.g.][]{carollo98,boeker02,cote06}. Typically, low- and intermediate-luminosity galaxies show a central light excess inside a characteristic radius $r_\mathrm{b}\sim0.02 R_{eff}$ above the inner extrapolation of the global light profile \citep{cote07}. These NSCs usually reside in the photometric centre of the galaxy \citep{binggeli00,boeker02} and their location overlaps with the kinematic centre \citep{neumayer11}. NSCs are usually brighter than typical globular clusters, compact ($r\sim5$ pc), massive ($M\sim10^{7}M_{\odot}$), may be flattened, and often contain multiple stellar populations and complex structures \citep[e.g.][]{walcher05, walcher06,cote06,rossa06,seth06,seth08,barth09,turner12,pl12}. Their masses seem to correlate with the mass of the host galaxies \citep{ferrarese06,wh06}, extending the super-massive black hole scaling relations to the low-mass end of galaxies. In some cases, NSCs appear to co-exist with central black holes \citep[e.g. review by][and references therein]{graham09} and recently \citet{neumayer12} suggested that NSCs may be the precursors of massive black holes in galaxy nuclei. However, only a handful of detailed studies on the properties of NSCs exist, and these are mainly focused on such objects in late-type galaxies. Characterising NSCs in early-type galaxies is a non-trivial task, both because of the high surface brightness of the underlying galaxy and because of the compact sizes of NSCs. With the availability of adaptive-optics-fed integral field unit (IFU) instruments, this task is now becoming feasible. For example, \citet{seth10} have shown that the nucleus of NGC\,404, a nearby S0 galaxy, hosts several morphologically and dynamically distinct components. 
The NSC in this galaxy shows a modest rotation aligned with the galaxy, a gas disk that rotates perpendicularly to the stars, and probably an intermediate-mass black hole ($\sim10^{5}\,M_{\odot}$). Such a complicated structure inevitably poses the question of how NSCs have formed. Currently, there are two main scenarios proposed. The first involves the dissipationless infall of star clusters to the galaxy centre due to dynamical friction \citep{tremaine75}. The second suggests NSCs to be the result of dissipational sinking of gas to the galactic centre \citep{mihos94}. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{./Lyubenova_FCC277_fig01.eps}} \caption{\label{fig:acs_mge} HST/ACS $z$-band image of FCC\,277. The star that we used as a NGS for the AO correction is masked out here. The SINFONI FoV ($3\arcsec\times3\arcsec$) is indicated with a white box. The overlaid contours are from the MGE light model, discussed in Sect.~\ref{sec:dyn_model}. } \end{figure} Numerical simulations of globular cluster infall have had some success in reproducing the observed surface brightness profiles of nucleated galaxies, although with larger nuclei sizes compared with what is observed \citep[e.g.][]{oh00,cd08a,cd08b}. However, more recently \citet{hartmann11} showed that star cluster accretion onto a pre-existing nuclear disk did not produce the observed line-of-sight kinematics of NSCs. They suggested that purely stellar dynamical mergers cannot be solely responsible for the formation of NSCs and that gas dissipation must also play a significant role in assembling the cluster's mass. What is the exact origin of this gas and how it gets transported to the galaxy nucleus is still under debate. \citet{bekki06} have shown that the dissipative merging of stellar and gaseous clumps formed from nuclear gaseous spiral arms in a gas disk eventually produces nuclei that rotate, are flattened and have a range of ages and metallicities. 
\citet{pflamm09} concluded that compact star clusters with masses $\geq 10^{6} M_{\odot}$ act as cloud condensation nuclei and are able to accrete gas recurrently from a warm interstellar medium. This may cause further star formation events and account for multiple stellar populations in the most massive globular and nuclear star clusters. Recently, \citet{turner12} concluded that the dominant mechanism for nucleus growth in low mass early-type galaxies is probably infall of star clusters through dynamical friction, while at higher masses, gas accretion resulting from mergers and torques becomes dominant. In this paper we present a detailed study of the nucleus in the intermediate-mass early-type galaxy FCC\,277 (NGC\,1428) and discuss our observations in the light of current scenarios for nucleus formation. This galaxy is a member of the Fornax cluster and is part of the ACS Fornax Cluster Survey \citep{jordan07}. Its basic properties, as well as the main parameters of the NSC, are listed in Table~\ref{tab:fcc277_tab}. In Fig.~\ref{fig:acs_mge} we show part of the HST/ACS $z$-band image, together with the field-of-view of VLT/SINFONI that we used to complete our study. In Fig.~\ref{fig:acs_colour} {\it (left panel)} we plotted the $z$-band surface brightness profile, together with the two S\'{e}rsic fits (dashed lines) that describe the galaxy light. The outer galaxy light is represented with a S\'{e}rsic fit with $n=1.8$. The point where the nucleus starts to dominate over the inner extrapolation of the S\'{e}rsic fit is called the break radius and for FCC\,277 has the value $r_\mathrm{b}=0\farcs25$ (indicated with an arrow in Fig.~\ref{fig:acs_colour}). The nucleus is fitted with another S\'{e}rsic profile with $n=1.7$. For a detailed description of the fitting process see \citet{turner12,ferrarese13}. 
\begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=0]{./Lyubenova_FCC277_fig02.eps}} \caption{\label{fig:acs_colour} An HST/ACS $z$-band profile {\it (left panel)} and $(g-z)$ colour {\it (right)} of FCC\,277 \citep{turner12,ferrarese13}. The coloured curves show the S\'{e}rsic fits to the two components: nucleus and galaxy. The arrows at 0\farcs25 indicate the break radius, $r_\mathrm{b}$, at which the nucleus component starts to dominate the galaxy light. } \end{figure} The paper is organised as follows: in Sect.~\ref{sec:obs_datared} we describe our observations and data reduction. In Sect.~\ref{sec:comp_sinf_hst} we compare the light profiles obtained from HST/ACS and VLT/SINFONI coupled with adaptive optics. Sect.~\ref{sec:kinematics} is devoted to the kinematics analysis of our IFU data, while in Sect.~\ref{sec:dyn_model} we present results from dynamical modelling of the galaxy and the NSC. In Sect.~\ref{sec:stellar_pop} we explore the stellar populations of the nucleus of this galaxy. In Sect.~\ref{sec:discussion} we discuss our findings in the light of current galaxy and NSC formation models. We conclude in Sect.~\ref{sec:conclusions}. 
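As a quick cross-check of the virial mass quoted in Table~\ref{tab:fcc277_tab}, the estimator $M_\mathrm{vir} = 5.0 \, R_\mathrm{eff} \, \sigma^{2}/G$ of \citet{cappellari06} can be evaluated directly from the tabulated effective radius, distance, and velocity dispersion. The short Python sketch below does so; the value of $G$ in pc\,(km\,s$^{-1}$)$^{2}\,M_{\odot}^{-1}$ is the only input not taken from the table, and it reproduces the quoted $\sim8\times10^{9}\,M_{\odot}$:

```python
# Inputs from Table 1 of this paper
r_eff_arcsec = 10.2      # effective radius [arcsec]
distance_mpc = 20.7      # distance [Mpc]
sigma_kms    = 81.7      # velocity dispersion [km/s]

# Gravitational constant in convenient units: pc (km/s)^2 / M_sun
G = 4.301e-3

# Convert the effective radius to parsecs (small-angle approximation)
r_eff_pc = r_eff_arcsec * distance_mpc * 1e6 / 206265.0

# Virial mass estimator of Cappellari et al. (2006)
m_vir = 5.0 * r_eff_pc * sigma_kms**2 / G

print(f"R_eff = {r_eff_pc:.0f} pc, M_vir = {m_vir:.2e} M_sun")  # ~8e9 M_sun
```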
\begin{table} \caption{\label{tab:fcc277_tab} Basic properties of FCC\,277 and its Nuclear Star Cluster.} \flushleft \begin{tabular}{l l c } \hline \hline FCC\,277 & & Reference \\ \hline Morphological Type & E5 & \citet{ferguson89}\\ B$_{T}$ & 13.8$^{m}$\ & \arcsec \\ Effective radius & 10.2\arcsec & \arcsec \\ $(g-z)$ colour & 1.31$\pm$0.01 & \citet{blakeslee09}\\ Distance & 20.7$\pm$0.7 Mpc & \arcsec\\ Major axis position angle & 115\degr & \citet{graham98} \\ Velocity dispersion & 81.7 $\mbox{km s}^{-1}$ & \citet{wegner03}\\ $M_\mathrm{vir}$& $ \sim 8\times10^{9} M_{\odot}$ & $^{1)}$\\ \hline \hline NSC & & \\ \hline Effective radius & 0.09\arcsec$\sim$ 9 pc & \citet{turner12}\\ $g$ (mag) & 20.08$^{m}\pm$0.16 & \arcsec \\ $(g-z)$ & 1.33$\pm$0.18& \arcsec\\ \hline \hline \end{tabular} {$^{1)}$ Using $M_{vir} = 5.0 \, R_\mathrm{eff} \, \sigma^{2}/G$ \citep{cappellari06} } \end{table} \section{Observations and data reduction} \label{sec:obs_datared} \subsection{Observations} \label{sec:obs} We obtained integral field spectroscopy of the Fornax E5 galaxy FCC\,277 (NGC\,1428) using VLT/SINFONI \citep{eis03,bonnet04} in Natural Guide Star adaptive optics mode on October 6, 7, and 10, 2007 (programme ID 380.B-0530, PI L. Infante). We used the $K$-band grating (1.95 -- 2.45 $\mu$m) that gives a spectral resolution R$\sim$3500 (6.2 \AA\/ FWHM as measured on sky lines). Our observations cover the central 3\arcsec $\times$ 3\arcsec, with a spatial sampling of 0\farcs05 $\times$ 0\farcs10. As a natural guide star we used a $R=14^{m}$ star located at 3\farcs5 to the North of the galaxy centre (see Fig.~\ref{fig:acs_mge}). Due to its proximity to the galaxy centre, this star does not appear in the guide star catalogue or the USNO catalogue as a separate entry. Its celestial coordinates are $\alpha(J2000) =$ 03:42:22.9 and $\delta(J2000) =$ -35:09:11.4. Our observations were carried out in service mode. For the observations we used the standard near-IR nodding technique. 
Each observing block consisted of a sequence of object and sky frames (OOSOOSOOS), each individual integration was 300 s, the sky fields were offset by 50\arcsec to the North. Science frames were dithered by 0\farcs05 and 0\farcs15 in order to reject bad pixels. There were six observing blocks. The total on-source integration time was 3 hours. Additionally, after each observing block and at a similar airmass, we observed a B dwarf to act as a telluric star. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=0]{./Lyubenova_FCC277_fig03.eps}} \caption{\label{fig:light_profile} Comparison between isophotal parameters obtained from space and adaptive optics ground-based imaging. The red symbols show the light profile of FCC\,277 as derived from our SINFONI $K$-band reconstructed image. The black symbols denote $HST/ACS$ $z$-band imaging. The two vertical dashed lines in panel {\it (a)} show the area where the two profiles were normalised (0\farcs5 $<r<$ 0\farcs7). The vertical arrows in each panel indicate the break radius, as in Fig.~\ref{fig:acs_colour}. In panel {\it (b)} we compare the $4^{th}$ cosine term of the isophotal fits that is indicative for deviations from pure elliptical shape. In panel {\it (c)} the measured position angles are displayed, and in panel {\it (d)} the ellipticity of the isophotes is shown. } \end{figure} \subsection{Data reduction} \label{sec:data_red} We used the ESO SINFONI pipeline v2.0.5 to perform the basic data reduction on each observing block, consisting of six object and three sky exposures. In brief, the pipeline extracts the raw data, applies distortion, bad pixels and flat-field corrections, wavelength calibration, and stores the combined sky-subtracted spectra from one observing block in a 3-dimensional data cube. For each resulting data cube, we then ran the {\tt lac3d} code \citep{davies10} to detect and correct residual bad pixels identified using a 3D Laplacian edge detection method. 
We reduced the telluric stars in the same way as the science frames. Then for each telluric star we extracted a one-dimensional spectrum, removed the hydrogen Brackett\,$\gamma$ absorption line at $2.166\,\mu$m after fitting it with a Lorentzian profile, and divided the star spectrum by a black body spectrum with the same temperature as the star. The last step in preparing the telluric spectrum was to apply small shifts ($<$0.05 pixels) and scalings to minimise the residuals of the telluric features. To do this, we extracted a central one-dimensional spectrum from each science data cube and cross-correlated and fitted it with the corresponding telluric spectrum. Then we divided each individual spaxel in the six galaxy data cubes by the corresponding best fitting telluric spectrum. In this way we also obtained a relative flux calibration. Finally, we combined the six galaxy data cubes, using a $3\sigma$-clipping pixel reject algorithm. We also reconstructed a two-dimensional image of the galaxy, after integrating the spectral dimension of the final data cube in the range 2.1 -- 2.4~$\mu$m, where the contamination from sky lines residuals is minimal. To be able to robustly measure the velocity and velocity dispersion from the spectra of the galaxy, a minimum signal-to-noise ratio (S/N) of 20 per pixel is required. Because the SINFONI pipeline does not provide error propagation during data reduction, we estimated the noise in each spectrum of the data cube as the r.m.s. of the residuals after subtracting a smoothed model of the spectrum. Then we used this noise estimate to bin the final galaxy data cube to achieve an approximately constant S/N$\sim$25 using the Voronoi 2D binning method of \citet{cc03}. This S/N allowed us to preserve a good spatial resolution (our smallest bins in the centre are $\sim$0\farcs1 across), while we are still able to reliably extract the stellar kinematics. 
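The noise estimate described above (the r.m.s. of the residuals after subtracting a smoothed model of each spectrum) can be sketched in a few lines. The boxcar smoothing, its width, and the toy spectrum below are illustrative stand-ins for the actual pipeline products:

```python
import random

def smooth(spectrum, width=25):
    """Boxcar-smooth a spectrum; a simple stand-in for the smoothed model."""
    half = width // 2
    out = []
    for i in range(len(spectrum)):
        window = spectrum[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def estimate_snr(spectrum):
    """S/N estimate: mean signal over r.m.s. of (spectrum - smoothed model)."""
    model = smooth(spectrum)
    residuals = [s - m for s, m in zip(spectrum, model)]
    rms = (sum(r * r for r in residuals) / len(residuals)) ** 0.5
    mean_signal = sum(spectrum) / len(spectrum)
    return mean_signal / rms

# Toy example: flat continuum at 100 counts with ~5 counts of Gaussian noise,
# so the recovered S/N should be close to 20
random.seed(42)
spec = [100.0 + random.gauss(0.0, 5.0) for _ in range(1000)]
snr = estimate_snr(spec)
print(f"estimated S/N ~ {snr:.1f}")
```

Spaxels would then be co-added with the Voronoi binning of \citet{cc03} until each bin reaches the target S/N.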
\section{Comparison between VLT/SINFONI and HST/ACS light profiles} \label{sec:comp_sinf_hst} \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=90]{./Lyubenova_FCC277_fig04.eps}} \caption{\label{fig:fit_spec} The spectrum from the central bin (S/N$\sim$25) of FCC\,277 with an over-plotted best fitting template spectrum, derived by the pPXF code (red line; fit residuals are shown in grey). The location of the strongest near-IR absorption features in the $K$-band is indicated. The dashed vertical lines show the location of the strongest sky emission lines. } \end{figure} In Fig.~\ref{fig:light_profile} we compare the light profiles of FCC\,277 derived from HST/ACS imaging and VLT/SINFONI observations. This galaxy is part of the ACS Fornax Cluster Survey \citep{jordan07} and high spatial resolution imaging in the $g$-band and $z$-band from the ACS is available. We used the IRAF task {\em ellipse} to fit elliptical isophotes to the ACS $z$-band image (black symbols in Fig.~\ref{fig:light_profile}) and to the SINFONI reconstructed image (before binning; red symbols). In panel {\em (a)} the two luminosity profiles are compared, after being normalised in the region 0\farcs5 -- 0\farcs7 (dashed vertical lines). Isophotal parameters are plotted against the semi-major axis length. The vertical arrows denote the break radius, $r_\mathrm{b}$$=0\farcs25$, at which point the nuclear component starts to dominate the surface brightness profile \citep[][see also Fig.~\ref{fig:acs_colour}]{cote07}. Although observed with adaptive optics, the SINFONI light profile is less steep than the ACS profile within the inner $\sim$0\farcs3. Assuming this is due to the lower spatial resolution of the SINFONI data, we estimated their PSF to be 0\farcs165 (FWHM) by convolving the ACS image \citep[using a Tiny Tim PSF,][]{krist95} with a given Gaussian PSF until it matches the light distribution in the SINFONI image (observations of stellar PSFs were not obtained during the SINFONI run). 
In panel {\em (b)} of Fig.~\ref{fig:light_profile} we plotted the cosine $4^{th}$ order Fourier coefficient of the isophotes, divided by the semi-major axis length. Positive values of this parameter are indicative of disky isophotes, as are observed in both the $z$- and $K$-band images in the inner 1\arcsec. There are two peaks in the $a4/a$ profile, one at $\sim$0\farcs2, coinciding with the break radius, and a stronger second peak at $\sim$0\farcs6, coinciding with the peak in the velocity field (see Sect.~\ref{sec:kinematics}). In panel {\em (c)} no significant variations of the position angle outside of the break radius are observed and the mean PA is consistent with the one derived at larger radii (see Table~\ref{tab:fcc277_tab}). In panel {\em (d)} the ellipticity reaches a maximum at $\sim$0\farcs35 for both the ACS and SINFONI profiles, although with different amplitudes. These differences are expected due to the differences in the PSF of the two images; a larger PSF leads to rounder isophotes \citep{peletier90}. The comparison of the two profiles led us to the conclusion that the SINFONI ground-based adaptive-optics-assisted observations are similar in quality to the HST/ACS images. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=0]{./Lyubenova_FCC277_fig05.eps}} \caption{\label{fig:kin_maps} Velocity ({\it top panel}) and velocity dispersion ({\it bottom panel}) maps of FCC\,277 and corresponding errors {\em (right panels)}. Over-plotted are contours with constant surface brightness, as derived from our reconstructed SINFONI $K$-band image.} \end{figure} The observed features of the isophotal parameters point to a picture where, within the break radius, the nuclear star cluster may be flattened or, alternatively, may be the superposition of a round NSC and a larger scale disk. Such a nuclear disk beyond the break radius is evident in the diskiness parameter $a4/a$ at $\sim$0\farcs6 \citep[see also][]{turner12}. 
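The diskiness parameter used above is the amplitude of the $\cos 4\theta$ term in a Fourier expansion of the intensity sampled along each fitted ellipse: positive values indicate disky, negative values boxy isophotes. A minimal sketch of the measurement on synthetic data (a discrete Fourier projection, not the IRAF {\em ellipse} implementation):

```python
import math

def fourier_a4(intensities):
    """Coefficient of cos(4*theta) from intensities sampled uniformly in theta."""
    n = len(intensities)
    return (2.0 / n) * sum(
        val * math.cos(4.0 * 2.0 * math.pi * i / n)
        for i, val in enumerate(intensities)
    )

# Synthetic disky isophote: small positive cos(4*theta) deviation on a flat level
n = 360
theta = [2.0 * math.pi * i / n for i in range(n)]
profile = [1.0 + 0.02 * math.cos(4.0 * t) for t in theta]

a4 = fourier_a4(profile)
print(f"a4 = {a4:.4f}")  # recovers the input amplitude of 0.02
```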
\section{Stellar kinematics} \label{sec:kinematics} \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=0]{./Lyubenova_FCC277_fig06.eps}} \caption{\label{fig:rad_kin} Behaviour of the stellar kinematics at different spatial scales. Long-slit kinematics data from \citet[][diamond symbols]{graham98}, \citet[][blue squares]{spolaor10} and \citet[][red triangles]{koleva11} are shown. With solid symbols we plotted the innermost kinematics profile of FCC\,277, extracted from the SINFONI maps using kinemetry.} \end{figure} We used the pPXF code \citep{ce04} to derive the first and second moments of the line-of-sight velocity distribution, working with a library of seven template spectra of K and M giant stars. These templates were observed with the same instrument and the same setup as our science target. To find the best fitting composite template spectrum we used the region between 2.1 and 2.36~$\mu$m, where several strong absorption features allow accurate measurements (see Fig.~\ref{fig:fit_spec}), and masked the strong near-IR sky lines \citep{rousselot00}. In Fig.~\ref{fig:fit_spec} we show the spectrum of a central bin in the galaxy with the over-plotted best fitting composite template as derived by the pPXF code (in red), as well as the residuals (in grey). Our stellar mean velocity and velocity dispersion maps are shown in Fig.~\ref{fig:kin_maps}. Using kinemetry, described by \citet{krajnovic06}, we extracted the velocity and velocity dispersion profiles, shown with filled symbols in Fig.~\ref{fig:rad_kin}. We observe rotation around the minor axis of the galaxy up to $\pm$25~$\mbox{km s}^{-1}$ at $r\sim$0\farcs6, which is outside the break radius of the luminosity profile ($r_\mathrm{b}$$=0\farcs25$, see Fig.~\ref{fig:acs_colour}, left panel) where the NSC is supposed to reside. 
At the centre of the galaxy the velocity dispersion approaches $\sim$55~$\mbox{km s}^{-1}$ and then it increases in the outer parts of the field of view to reach $\sim$90 $\mbox{km s}^{-1}$ at $r\sim$~1\arcsec. In Fig.~\ref{fig:rad_kin} we also compare our own SINFONI data with a compilation of literature measurements, derived using long slits, aligned along the major axis of the galaxy. \citet[diamond symbols]{graham98} used a spectrograph with a slit width 2\arcsec\/ on the Australian National University’s 2.3 m telescope at Siding Spring Observatory. Their data do not cover the inner 4\arcsec of the galaxy, due to the presence of a relatively bright star close to the nucleus (that we used as a natural guide star for the AO). \citet[blue squares]{spolaor10} used the GEMINI/GMOS instrument with a slit width of 1\arcsec. The seeing during these observations was in the range 0\farcs7\,--\,1\arcsec. \citet[red triangles]{koleva11} reanalysed the same observations. Based on Fig.~\ref{fig:rad_kin} we conclude that the rotating substructure in the nucleus of FCC\,277 co-rotates with the main body of the galaxy and that the velocity dispersion in the outer regions of the SINFONI field-of-view reaches similar values as the long slit studies. We note that the spatial resolution of our adaptive optics supported data is much higher than the resolution achieved by the other three studies, thus we cannot directly compare the radial profiles at small galactocentric radii. The significantly worse spatial resolution of the long-slit observations means that the inner rotation and velocity dispersion dip are washed out. The observed rotation, taken together with the drop in the velocity dispersion, indicates the presence of a co-rotating cold substructure in the inner 0\farcs6\/ of the galaxy. We fitted the kinematic position angle of this substructure using the method described in Appendix C of \citet{krajnovic06}. 
The measured value is 118\degr$\pm10\degr$, which is consistent with the photometric position angle of the main body of the galaxy, derived at larger radii (see Table~\ref{tab:fcc277_tab}). For early-type galaxies the apparent stellar angular momentum $\lambda_{R_e}$ and the galaxy flattening are now a commonly used tool to classify galaxies into \emph{fast}- and \emph{slow}-rotators \citep{emsellem07,cap07}. The method needs IFU data to measure $\lambda_{R_e}$ inside one-effective radius. For FCC\,277 only long slit data are available out to the effective radius. Thus we used our best-fit Schwarzschild model from Sect.~\ref{sec:dyn_model} to simulate the velocity and velocity dispersion as they would be observed by an IFU. We measured $\lambda_{R_e}=0.3$. Using the most recent classification from \citet{emsellem11}, we found that this galaxy is a \emph{fast}-rotator and lies slightly above the dividing line between the two classes, with its ellipticity $\epsilon=0.3$. \begin{figure*} \resizebox{\hsize}{!}{\includegraphics[angle=0]{./Lyubenova_FCC277_fig07.eps}} \caption{\label{fig:dyn_model_data} Comparison between our symmetrised SINFONI kinematics {\em (left panels)} and the ones obtained by the best fitting dynamical model {\em (middle panels)}. The {\em right panels} show the resultant kinematics maps from our best fitting model when we do not include stars on counter-rotating orbits. } \end{figure*} \section{Dynamical modelling} \label{sec:dyn_model} \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=0]{./Lyubenova_FCC277_fig08.eps}} \caption{\label{fig:MLSCcontours} Confidence interval of the dynamical models of FCC\,277 for the mass-to-light ratio in $z$-band. The black dots indicate the location of the models and the contours indicate 1, 2, and 3$\sigma$ intervals, where the 3-sigma level is indicated by a thick line. 
} \end{figure} To measure the mass distribution and the orbit configuration of the inner part of FCC\,277 we used \citet{schw79} modelling. This method \citep{rvdb08} works by constructing a trial mass model of the galaxy, including a black hole, stars and dark halo. Then, the gravitational potential is inferred from the mass model and representative orbits are integrated numerically, while keeping track of the paths and orbital velocities of each orbit. We can then create a mass model of the galaxy by assigning an amount of mass to each orbit so that the overall stellar mass distribution is reproduced, while simultaneously fitting the observed stellar kinematics. The effect of the PSF on the observed stellar kinematics is an integral part of the dynamical model. These models have the advantage that they do not require any assumptions regarding the orbital anisotropy of the galaxy. The models were constructed as follows. First, we parametrized the galaxy stellar surface brightness using the multi-Gaussian expansion (MGE) method, described by \citet{cap02}, on the ACS $z$-band image. In Fig.~\ref{fig:acs_mge} we show this image with overlaid contours of the MGE light model. There were 13 Gaussians with varying flattening fitted in total, the first two of them describing the NSC. The galaxy shows strong rotation around the minor axis and we therefore assumed the galaxy is oblate axisymmetric, which is the most common configuration \citep[e.g.][]{padila08}. The galaxy is also strongly flattened, with a minimum flattening 0.6 at 15\arcsec, and can thus not be seen more face-on than $i=65^{\circ}$. Then we used our symmetrised \citep[using the method described in Appendix A of][]{rvdb10} SINFONI IFU kinematics from Sect.~\ref{sec:kinematics} and the Schwarzschild orbit superposition method \citep{rvdb08} to construct a realistic dynamical model for the galaxy and the NSC. 
We also included the long-slit data of \citet{graham98}, to be able to constrain the mass-to-light ratio of the main body of the galaxy. We did not include the data of \citet{spolaor10} and \citet{koleva11}, which result from two different reductions of the same data set, because they do not match with our kinematics measurements for the inner parts of the galaxy. We probed the following parameters: the central black hole mass and separate mass-to-light ratio (M/L) for the galaxy and the NSC, using 5000 dynamical models. Changes in the inclination between 65\degr and 90\degr\/ led to insignificant changes in the M/L, thus we marginalised over it. In Fig.~\ref{fig:dyn_model_data} we show the input symmetrised SINFONI kinematics velocity and velocity dispersion maps {\em (left panels)} together with the resulting kinematics obtained by the best fitting dynamical model {\em (middle panels)}. The reduced $\chi^2$ of the best models is $\sim$0.21 over the 81 SINFONI bins. The low value of the reduced $\chi^2$ is due to the very conservative estimate of our kinematics errors. The best-fit $M/L_{z}$ of the galaxy and NSC is $3.2\pm0.4$ and $3.0\pm1.0$ respectively, as shown in Fig.~\ref{fig:MLSCcontours}. Confidence intervals are determined using $\Delta\chi^2$ statistics, assuming two degrees-of-freedom. Thus, the mass of the NSC is $1.4\pm 0.4 \times 10^{7} M_{\odot}$. The black hole mass is unconstrained, as the uncertainties on the central kinematics are too large. The difference in velocity dispersion between a 10$^{5}$ and a 10$^{7} M_{\odot}$ black hole is 5\,$\mbox{km s}^{-1}$\/ and the uncertainties on $\sigma$ are $\sim15$\,$\mbox{km s}^{-1}$. Black hole masses above 10$^{7} M_{\odot}$ do yield significantly worse fits and hence we place an upper limit of 10$^{7} M_{\odot}$. 
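The confidence contours in Fig.~\ref{fig:MLSCcontours} follow from $\Delta\chi^{2}$ statistics with two degrees of freedom, for which the chi-squared cumulative distribution has the closed form $1-e^{-\Delta\chi^{2}/2}$ and the thresholds can therefore be computed directly. A sketch (the enclosed probabilities are the usual Gaussian-equivalent $1$, $2$, $3\sigma$ levels):

```python
import math

def delta_chi2_two_dof(prob):
    """Delta chi^2 threshold enclosing probability `prob` for 2 degrees of freedom."""
    # For 2 dof the chi-squared CDF is 1 - exp(-x/2), which inverts analytically.
    return -2.0 * math.log(1.0 - prob)

# Gaussian-equivalent 1, 2, 3 sigma enclosed probabilities
thresholds = {n: delta_chi2_two_dof(p)
              for n, p in [(1, 0.682689), (2, 0.954500), (3, 0.997300)]}
for n, d in thresholds.items():
    print(f"{n} sigma: delta chi^2 = {d:.2f}")  # ~2.30, 6.18, 11.83
```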
To robustly determine the black hole mass, higher S/N spectra of the nucleus need to be obtained to reduce the uncertainties on the central kinematics, and the PSF of the IFU data needs to be known precisely. The inclusion of a dark matter halo does not alter the $M/L$ of the NSC and only very weakly the $M/L$ of the galaxy. The $M/L_\mathrm{gal}$ is expected to contain only a small contribution from the dark matter \citep{cappellari06}, hence our final adopted model does not include a dark matter halo. Kinematics reaching much further out are needed to properly constrain the dark matter halo. Apart from the mass distribution of the galaxy, the models yield the orbital distribution as a function of radius. In Fig.~\ref{fig:frac_orbits} we show the orbital mass weights as a function of the average radius and spin $\bar \lambda_z = \bar J_z / (\bar r \, \bar \sigma)$, where $\bar J_z$ is the average angular momentum along the short $z$-axis and $\bar \sigma$ the average second moment of the orbits. We detect the presence of three distinct components: both a co- and counter-rotating component as well as a non-rotating bulge component. The relative contribution of each of these components is shown in the {\em bottom panel} of Fig.~\ref{fig:frac_orbits}. Both rotating components extend well inside the break radius and have similar contributions in the NSC region. The sigma drop seen in the stellar dispersion map coincides with a decrease of the non-rotating orbits. The question arises whether this is the only possible orbital configuration for this system. The orbits available to the Schwarzschild models are a fully representative set and the linear solver used to construct the models is guaranteed to find the global minimum \citep{vdv08,rvdb08}. This guarantees that the model finds the best-fitting orbital configuration. There could be other solutions that are also a good representation of the observations. 
As a consistency check, we attempted to fit models without counter-rotating orbits, which would exclude an opposite angular momentum accretion event as a formation scenario for the nucleus. However, this led to a significantly worse match of the stellar kinematics (see Fig.~\ref{fig:dyn_model_data}, {\em right panels}), which indicates that counter-rotating orbits are indeed required. We note that we were unable to use the Jeans' modelling approach to fit the stellar kinematics using the method described by \citet{cappellari08}. Although we could obtain reasonable fits to the second velocity moment ($V_\mathrm{RMS}$), the fits to the velocity field were unsatisfactory. This is because, at present, the JAM package does not allow the rotation parameter $\kappa$ to accept positive and negative values simultaneously for a given MGE Gaussian. To be able to quantitatively discuss the different orbital fractions, the dark matter halo of the galaxy and its global M/L, one would need improved long-slit or other large-scale kinematics. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=0]{./Lyubenova_FCC277_fig09.eps}} \caption{\label{fig:frac_orbits} {\em Top panel:} Distribution of mass along the orbits of our best fitting dynamical model, as a function of angular momentum and radius. Colour coding reflects a factor of 3.5 span in mass density (darker colour corresponds to higher mass). {\em Bottom panel:} Fraction of non-/rotating orbits as a function of radius. A non-rotating bulge ($-0.1<\lambda_{z}<0.1$) is denoted with a black line. 
The red line corresponds to the sum of all co-rotating orbits with $\lambda_{z}>0.1$, and the blue line to the sum of all counter-rotating orbits with $\lambda_{z}<-0.1$.} \end{figure} \section{Stellar population parameters} \label{sec:stellar_pop} In addition to the structure and dynamics, we can investigate the stellar population parameters of the NSC and/or nuclear disk in the heart of FCC\,277 and whether they differ from those of the main body of the galaxy. Usually, nuclei in low luminosity Fornax and Virgo galaxies are bluer compared to their hosts \citep{cote06,turner12}. In the right panel of Fig.~\ref{fig:acs_colour} we show the $(g-z)$ colour profile, as derived from HST/ACS imaging \citep{turner12,ferrarese13}. The integrated colour of the nucleus is $(g-z)=1.33\pm0.18$ \citep{turner12} and does not differ from the main body of the galaxy. If nuclei follow the same colour-metallicity relation as globular clusters in early-type galaxies do \citep{peng06}, then the red colour would be indicative of a higher metallicity of the NSC. However, age effects cannot be excluded, due to the well-known age-metallicity degeneracy of broad band colours. Using optical spectroscopy, \citet{koleva11} measured the age and metallicity of the core of FCC\,277 (within a 0\farcs5 radius aperture) to be 5.4~Gyr and [Fe/H]=$-0.07$, respectively. At the effective radius these values are 7.7~Gyr and [Fe/H]=$-0.50$. Their data lack the spatial resolution to differentiate the NSC and the disk; however, there is a pronounced negative age and positive metallicity gradient towards the nucleus. We used our near-IR IFU spectra to measure the line strengths of the \ion{Na}{I}\/ ($\sim$2.2~$\mu$m) and $^{12}$CO\,(2--0)\/ ($\sim$2.3~$\mu$m) absorption features (see Fig.~\ref{fig:fit_spec}).
From previous stellar population studies in the near-IR wavelength range we know that the \ion{Na}{I}\/ index increases with higher metallicity and younger ages \citep{silva2008,esther09,az10a}, and the $D_{CO}$\/ index is expected to increase with higher metallicity for ages above 3~Gyr \citep{maraston05}. We measured the two indices using the definition of \citet{frog01} for the \ion{Na}{I}\/ index and \citet{esther08} for the $D_{CO}$\/ index. Before measuring the indices, we broadened our spectra to 6.9~\AA\/ (FWHM, $\sim$~94~$\mbox{km s}^{-1}$) to match the spectral resolution of other stellar population studies of elliptical galaxies in the near-IR \citep[e.g.][]{silva2008,esther09}. Finally, we corrected the \ion{Na}{I}\/ index to zero velocity dispersion using the velocity dispersion corrections of \citet{silva2008}. \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=0]{./Lyubenova_FCC277_fig10.eps}} \caption{\label{fig:index_maps} \ion{Na}{I}\/ and $D_{CO}$\/ index maps with HST-like spatial resolution {\em (left panels)} and radial profiles {\em (right panels)}. The red solid lines are the least squares linear fits to the data.} \end{figure} In Fig.~\ref{fig:index_maps} we show our index maps as well as their radial profiles. We observe radial gradients for both indices. The increase in \ion{Na}{I}\/ towards the centre is consistent with increasing metallicity and/or younger age in the nuclear disk and NSC. On the $D_{CO}$\/ map we see that the strongest index values towards the centre seem to form an elongated shape, aligned with the major axis of the galaxy and the rotating structure visible on the velocity map (Fig.~\ref{fig:kin_maps}). This increase is again consistent with the redder colour and indicative of higher metallicity. In Fig.~\ref{fig:index_sig} we compare FCC\,277 with other early-type galaxies in the Fornax cluster in terms of their \ion{Na}{I}\/ and $D_{CO}$\/ indices versus their central velocity dispersion.
The spectra of \citet{silva2008} cover 1/8 of the effective radius of each galaxy and are marked with circles. Open circles represent galaxies with old stellar populations, solid symbols stand for galaxies that have optical signatures of recent ($<$3~Gyr) star formation. The dashed line illustrates the least-squares linear fit to the old galaxies only. Our SINFONI observations of FCC\,277 cover approximately the same radial extent as the other galaxies. We plotted the values measured on the integrated spectrum with solid blue squares. We extracted the NSC area ($r\leq0\farcs25$) and marked our measurements with orange asterisks. With open red diamonds we indicated the values for the so-called ``nuclear disk'', integrated over the range $0\farcs25<r\leq0\farcs7$. The \ion{Na}{I}\/ index of the NSC is much higher compared to the extrapolation (shown with a dotted line) of the $\sigma$-\ion{Na}{I}\/ relation for old Fornax galaxies and is closer to the systems with younger ($<$3~Gyr) stellar populations. There is no large difference in index strength between the nuclear disk and the galaxy as a whole, so from this plot alone one would infer a purely old age for these two components. The $D_{CO}$\/ index seems to saturate for galaxies with velocity dispersion higher than 100~$\mbox{km s}^{-1}$\/. Until reliable stellar population models for the near-IR become available, we cannot provide a quantitative estimate for the changes in stellar population parameters in the nuclei of early-type galaxies. At the moment we can only speculate that, taken together with the red colour of the NSC (as red as the host galaxy, which is unusual for NSCs in Fornax, as discussed above), these point to a combination of younger age and higher metallicity compared to the main body of the galaxy.
\begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=0]{./Lyubenova_FCC277_fig11.eps}} \caption{\label{fig:index_sig} \ion{Na}{I}\/ and $D_{CO}$\/ indices versus the central velocity dispersion of Fornax early-type galaxies. Open symbols denote old Fornax galaxies, solid symbols denote galaxies with optical signatures of recent ($<$3~Gyr) star formation \citep{silva2008}. The dashed line represents the least-squares linear fit to the old Fornax galaxies only; the dotted line denotes its extrapolation to lower $\sigma$. The NSC in FCC\,277 is marked with an orange asterisk, the so-called nuclear disk with a red diamond, and the whole galaxy, as observed with SINFONI, with a blue square. } \end{figure} \section{Formation mechanism of the NSC} \label{sec:discussion} So far we have collected evidence that: {\em i)} the nuclear star cluster inside the central 0\farcs25 of FCC\,277 does not rotate within our error of $\sim6$\,$\mbox{km s}^{-1}$; {\em ii)} the rotation visible at $\sim$0\farcs6 overlaps with a maximum in the diskiness of the isophotes, suggesting the existence of a nuclear disk around the NSC; {\em iii)} the existence of a central velocity dispersion drop is indicative of significant rotation in the same area; {\em iv)} dynamical modelling reveals significant rotation in the inner 1\arcsec\/ in {\em both} directions, i.e. co- and counter-rotation, explaining the low level of observed rotation, the low $\lambda_{R_e}$, and the sigma drop; {\em v)} there is no significant difference in the derived dynamical $M/L$ between the NSC and the galaxy. However, within the errors, a change in the stellar population parameters may not lead to an obvious change in the $M/L$ for the different components. The derived dynamical $M/L_{z}$ is, within the errors, consistent with the predictions of stellar population models for a Salpeter IMF and $5-10$~Gyr age \citep[e.g.][]{bc03,maraston05}.
If one compares to a Chabrier or Kroupa IMF, then our dynamical $M/L_{z}$ is about a factor of two larger; {\em vi)} there is evidence for differences in the stellar population parameters of the NSC, the nuclear disk, and the galaxy. This points to a scenario where the nucleus is younger and more metal rich. All this evidence for complex kinematics and stellar populations points to a scenario where the NSC and disk formed through multiple episodes of gas accretion and subsequent episodes of star formation. Counter-rotation points towards mergers with orbital angular momentum opposite to that of the host galaxy. \citet{seth10} reached the same conclusion for the nucleus in NGC\,404, a nearby S0 galaxy. Thus, FCC\,277 is the second early-type galaxy that has a NSC exhibiting a complex star formation history. \citet{turner12} noted that the lowest-mass galaxies with nuclei in the ACS Fornax and Virgo Cluster Surveys seemed to be structurally simple, having likely formed through star cluster infall. The more massive galaxies (such as FCC\,277) seemed to be more structurally complex in their inner regions. They interpreted that as evidence for an increased importance of gas infall at higher masses. Our results on the nucleus of FCC\,277 are consistent with this picture. Certainly, a larger sample is needed to study in more detail whether gas dissipation is a common mechanism for NSC growth in early-type galaxies, as it is in late-type ones \citep[e.g.][]{walcher06,hartmann11}. The NSC accounts for $\sim0.2\%$ of the total mass of FCC\,277, thus it is not an outlier of the typical scaling relations in early-type galaxies \citep[e.g.][]{ferrarese06}. \section{Concluding remarks} \label{sec:conclusions} In this paper we present a pilot study of the detailed properties of nuclear star clusters (NSCs) in early-type galaxies.
Although the nucleation frequency in galaxies is estimated to be $\sim80\%$, detailed data about the chemical and dynamical properties of NSCs exist mainly for spiral hosts. In early-type galaxies this task is observationally non-trivial due to the intrinsic brightness of the underlying galaxy light, as well as the small angular extent of the NSCs, which at the distance of Fornax span $\sim$0\farcs1 in diameter. We showed that with current technology it is indeed possible, and that valuable information about the formation mechanisms of NSCs can be obtained. As a pilot test case we chose the galaxy FCC\,277, a nucleated early-type galaxy that belongs to the Fornax cluster. This is the only member of this galaxy cluster that has a conveniently located bright star that one can use as a natural guide star for the adaptive optics (AO) system. Using SINFONI AO-assisted observations, we observed the central 3\arcsec$\times$3\arcsec\/ of the galaxy. Thus we obtained maps of the stellar kinematics with HST-like spatial resolution of 0\farcs165 (FWHM). Our velocity map (Fig.~\ref{fig:kin_maps}) shows clear rotation with a maximum at $\sim$0\farcs6 from the galaxy centre, which overlaps with a maximum in the diskiness of the fitted isophotes on the galaxy image (Fig.~\ref{fig:light_profile}). The NSC itself has an effective radius of 0\farcs08 and does not rotate within our detection limit of $\sim6$\,$\mbox{km s}^{-1}$. However, we observe a pronounced drop in the velocity dispersion in the central 1\arcsec\/ that suggests the existence of a dynamically cold rotating sub-structure. Our dynamical modelling reveals that the nucleus of this galaxy is complex: co- and counter-rotating, as well as non-rotating, stellar orbits are needed simultaneously to reproduce the observed kinematics (Fig.~\ref{fig:frac_orbits}). The NSC seems to be embedded in a disk that is most likely younger and more metal rich than the main body of the galaxy (Fig.~\ref{fig:index_sig}).
Due to insufficient S/N we can only provide a conservative upper limit of $10^{7}\,M_{\odot}$ for a possible black hole. All these facts point to a complex formation history of the nuclear region in FCC\,277. Most likely, gas dissipation and merging played an important role in shaping the nucleus of this galaxy. To check whether this is a common phenomenon among early-type galaxies, a larger sample is needed and can be obtained with current observing facilities. \section*{Acknowledgements} We are grateful to the ESO astronomers who obtained the data presented in this paper in service mode operations at La Silla Paranal Observatory. We acknowledge fruitful discussions with Eric Peng, David R. Silva, Jakob Walcher, Hans-Walter Rix, Jesus Falc\'{o}n-Barroso. We thank Alister Graham, Max Spolaor, and Mina Koleva for providing us with their results in tabular form. ML would like to thank the staff at the Astronomical Observatory of the University of Sofia for their hospitality, where parts of this research have been carried out. LI and AJ acknowledge Fondecyt, Fondap and Basal funding for this project. AJ is supported by the Chilean Ministry for the Economy, Development, and Tourism's Programa Iniciativa Cient\'{i}fica Milenio through grant P07-021-F, awarded to The Milky Way Millennium Nucleus, by Anillo ACT-086 and BASAL CATA PFB-06. We finally thank the referee for her/his valuable comments. This paper is dedicated to Mariika B. Ilieva (1926 - 2012) with a warm thank you for all the support. \bibliographystyle{mn2e}
\section{Introduction} Binary stars are prime targets to study stellar evolution: for some spectral types more than 50\% of stars are estimated to belong to binary or multiple-star systems \citep[e.g.,][]{duquennoy1991,lada2006}, and Kepler's third law allows us to retrieve their global physical parameters, which is of crucial importance to constrain stellar evolution models. It is possible to determine the masses of each component $(M_1,~M_2)$ for visual or eclipsing binaries, provided that spectral lines are detected for each star to track the Doppler shifts along their orbits, and the product $(M_1 +M_2) \sin^3 i$ for spectroscopic-only binaries, where $i$ is the orbital plane inclination. Moreover, close-in binary systems host unique and poorly-understood physical processes such as mass exchange between the two stars. The \textit{Kepler} satellite \citep{borucki2010} detected 2616 eclipsing binaries (hereafter EBs) by the end of January 2013, which represented about 1.4\,\% of the targets. These targets were listed by \citet{prsa2011}, then updated by \citet{slawson2011} and \citet{matijevic2012}. Within this list, binary systems were classified into four types. (i) Detached (D), where each component's radius is smaller than its Roche lobe, so the stars are spherical. The stars have no major effect on each other, and essentially evolve separately. Most binaries belong to this class. (ii) Semi-detached (SD), where the larger star fills its Roche lobe, leading to mass exchange. The mass transfer dominates the evolution of the system. In many cases, the inflowing gas forms an accretion disc around the accretor. (iii) Over-contact (OC), where both stars fill their Roche lobes and are in contact. The uppermost part of the stellar atmospheres forms a common envelope that surrounds both stars. (iv) Ellipsoidal variation (ELV), where no eclipse is observed but the system is detected in \textit{Kepler}'s light curves through the ellipsoidal shape of the stars.
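For a double-lined spectroscopic binary, the mass product mentioned above follows directly from Kepler's third law and the two radial-velocity semi-amplitudes. A minimal sketch under this standard relation (the function name and the illustrative system are ours, not from the paper):

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg

def total_mass_sin3i(P_days, K1_kms, K2_kms, e=0.0):
    """(M1 + M2) sin^3 i in solar masses for a double-lined spectroscopic
    binary, from the orbital period, both radial-velocity semi-amplitudes,
    and the eccentricity."""
    P = P_days * 86400.0
    K = (K1_kms + K2_kms) * 1e3
    return P * K**3 * (1.0 - e**2) ** 1.5 / (2.0 * math.pi * G) / M_SUN

# Sanity check: two solar-mass stars on a circular one-year orbit seen
# edge-on have K1 + K2 ~ 37.5 km/s, and the formula recovers ~2 Msun.
m_tot = total_mass_sin3i(365.25, 18.76, 18.76)
```

Without an inclination from eclipses or astrometry, this product is the strongest constraint radial velocities alone can deliver.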
In addition to the \textit{Kepler} list, \citet{coughlin2011} proposed a list of low-mass ($< 1 ~M_{\odot}$) EBs, of which many in fact belong to the lists of \citet{slawson2011} and \citet{matijevic2012}. Red giants are evolved stars that have depleted the hydrogen in their cores and are no longer able to generate energy from core hydrogen burning. The physical processes taking place in their interiors remain poorly understood. However, the study of the global pulsations of red giants (hereafter RGs) with asteroseismology is capable of unambiguously determining bulk properties such as mass, radius, temperature, and metallicity, as well as the evolutionary state of RGs. Indeed, the new era of space-based missions such as CoRoT \citep{baglin2009} and \textit{Kepler} has dramatically increased the amount and quality of the available asteroseismic data. In particular, global oscillations of several thousands of solar-like stars and RGs have been detected in \textit{Kepler} data, and analysis of the oscillation eigenmodes now allows robust seismic inferences to be drawn about their internal structure \citep[e.g.,][]{chaplin2011, hekker2009, bedding2010, mosser2010, huber2010, bedding2011}. Applying modern asteroseismology to EB systems for which photometric and radial velocity data exist leads to the best possible physical characterizations: this is because masses and radii may be measured in two independent ways. Such systems are cornerstones for testing stellar evolution models. Until recently, only one case of an oscillating RG belonging to an EB system had been reported \citep[KIC 8410637,][]{hekker2010}. As only one eclipse of that system was observed, no estimate of orbital period or eccentricity could be obtained, but the global oscillations of the RG star were clearly detected.
In addition, \citet{derekas2011} report the detection of a triple system containing a RG (\object{HD 181068}), and they explicitly mention that solar-like oscillations are not visible, even though most stars with similar parameters in the \textit{Kepler} database do clearly show such oscillations. In general, the \textit{Kepler} mission has succeeded in making breakthroughs in both the fields of binary stars and asteroseismology. For example, \textit{Kepler} helped reveal the presence of tidally-induced pulsations in the binary system KOI 54, which are the result of resonances between the dynamic tides at periastron and the free oscillation modes of one or both of the stars \citep{Welsh_2011}. Since most stars are observed at a 29-min cadence with \textit{Kepler}, global modes of main-sequence solar-like stars are not accessible; however, global modes of RG stars larger than $3.5~R_\sun$ are accessible \citep[e.g., Table 3 of][]{mosser2012}. Fortunately, an RG catalog from the \textit{Kepler} Team has been compiled and made public for the scientific community.\footnote{http://archive.stsci.edu/kepler/red\_giant\_release.html} In this paper, we establish a list of RG candidates that likely belong to EBs or multiple-star systems, which we obtain from the EB and RG public catalogs. We test whether these candidates are part of EB or multiple systems and characterize their main physical properties. Note we do not work with radial velocity measurements, for which data acquisition is in progress. We first present the results of a cross-correlation of the EB and RG catalogs (Section \ref{sec_2}), and the subsequent detailed analysis of their light curves to determine eclipse and asteroseismic properties (Section \ref{sec_3}). We identify 70 RGs possibly belonging to EB systems, of which 47 show clear global oscillation modes. Mean properties of the global modes are used to infer RG masses and radii. 
We study several ways to determine whether the oscillating RGs actually do belong to the EBs with which they are associated, and then describe details of several important cases in Section \ref{sec_4}. In Section \ref{sec_5} we conclude by defining the observations that are needed to fully characterize this set of stars and discuss general implications for oscillating stars in binary systems. \section{Data} \label{sec_2} \subsection{\textit{Kepler} data} \label{sec_21} All data used in this paper are photometric measurements obtained by the NASA satellite \textit{Kepler}, launched in March 2009 to search for exoplanets in the habitable zone \citep{borucki2010}. Since its launch, \textit{Kepler} has been monitoring about $156\,000$ stars in a 105~deg$^2$ field of view between the Cygnus and Lyra constellations. Data are subdivided into {\it quarters} Q, i.e., three-month runs at the end of which the satellite is rotated by 90~deg to maintain the Sun's position on its solar arrays and to keep the radiator pointed to deep space. The commissioning quarter Q0 and quarter Q1 lasted 10 and 35 days, respectively. Here, we only utilize public data that are available through Q13 as of January 2013. Light curve data released for the public on the MAST database can be obtained in {\it raw} or {\it corrected} form. Corrected fluxes have been processed by the Presearch Data Conditioning (PDC) pipeline, which removes signatures in the light curves that are correlated with systematic error sources from the telescope and spacecraft, such as pointing drift, focus changes, and thermal transients. The PDC attempts to correct for such errors while preserving planet transits and other astrophysically interesting signals. Further details of the pipeline are described by \citet{kinemuchi2012}, \citet{Stumpe_2012}, \citet{Smith_2012}, and the \textit{Kepler} handbook.\footnote{http://archive.stsci.edu/kepler/manuals/archive\_manual.pdf} We use corrected PDC fluxes in this work.
In addition, for each object there are {\it target pixel files}, consisting of the flux for all the pixels contained within a predefined mask, which are used to create the data found in the photometric light curve files. Each target pixel file contains these pixels as a time series of images in a binary FITS table. We use target pixel files in Section \ref{sec_35} to determine whether the eclipsing signal we observe is from a nearby contaminating object. The \textit{Kepler} observations are sampled at either {\it long cadence} (29.4244 minutes) or {\it short cadence} (58.85 seconds, for only 512 targets). Most targets in this study were observed at long cadence; for the few objects that were also observed at short cadence, it was never for more than 30 days. Therefore, we work primarily with long cadence data, and only consider the short-cadence data to search for high-frequency modes of the main-sequence or subgiant companion star belonging to the considered EB system. For long-cadence data, the Nyquist cut-off frequency is $\nu\ind{nyq} = 283.2\ \mu$Hz, which precludes performing asteroseismology on any object whose frequency of maximum amplitude satisfies $\nu\ind{max} \geq \nu\ind{nyq}$. By assuming an RG effective temperature of 4800~K and a mass of 1~$M_\odot$, asteroseismic scaling laws \citep{kjeldsen1995,huber2011,Belkacem_2011} predict that the lower limiting radius is approximately $3.5~R_\sun$. \subsection{Red-giant and binary-system catalogs} \label{sec_22} The {\it Kepler} team released a list of $13\, 698$ RGs on September 27, 2011, selected from the target list using color-magnitude estimates and considering effective temperatures $T\ind{eff} = 4800\pm300$ K and surface gravities $\log g = 2.9\pm0.5$ (cgs). This latter criterion cuts off RGs with predicted oscillation peak frequency larger than about 320~$\mu$Hz, slightly above the long-cadence Nyquist frequency.
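The Nyquist frequency and the limiting radius quoted above can be recovered from the cadence and the $\nu_{\rm max}$ scaling relation. A minimal sketch; the solar reference values used below are common literature choices, assumed here rather than taken from the text:

```python
import math

NUMAX_SUN = 3100.0   # muHz, assumed solar nu_max (common reference value)
TEFF_SUN = 5777.0    # K

# Long-cadence Nyquist frequency from the sampling interval.
cadence_s = 29.4244 * 60.0
nu_nyq = 1e6 / (2.0 * cadence_s)   # in muHz, ~283.2

# nu_max scales as (M/Msun)(R/Rsun)^-2 (Teff/Tsun)^-1/2 (Kjeldsen &
# Bedding 1995). Solving nu_max = nu_nyq for R, with M = 1 Msun and
# Teff = 4800 K, gives the lower limiting radius, ~3.5 Rsun.
teff = 4800.0
r_limit = (NUMAX_SUN / nu_nyq / math.sqrt(teff / TEFF_SUN)) ** 0.5
```

Any RG smaller than `r_limit` oscillates above the long-cadence Nyquist frequency and is therefore inaccessible in these data.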
We note that not all of the RGs were continuously observed, and they are typically the first targets to be dropped from the survey when pixel resources become scarce (see the \textit{Kepler} red giant database). The RG \textit{Kepler} magnitudes range from 7.9 to 14.0, which makes them sufficiently bright for ground-based spectrometry to determine atmospheric parameters (as the SDSS III APOGEE/\textit{Kepler} experiment APOKASC at Apache Point Observatory is currently carrying out). \citet{prsa2011} released a catalog of EBs identified in the {\it Kepler} field from the first data releases (Q0--Q1). The catalog was motivated by the exquisite quality of {\it Kepler} data, which has led to the discovery of hundreds of new systems, revolutionized accuracy in modeling EB systems, and provided an estimate of the frequency of occurrence of EBs. Their method uses the Transit Planet Search (TPS) algorithm, first developed to detect exoplanets in the {\it Kepler} data \citep{jenkins2010a,jenkins2010b}, by adapting it to search for eclipse durations consistent with EBs. The targets that present a positive detection are then down-selected to exclude objects already identified as exoplanets, variable or spotted stars that mimic eclipse shapes, blends from background stars, or pointing jitter artifacts. The eclipses are subsequently modeled with EBAI \citep[Eclipsing Binaries via Artificial Intelligence;][]{prsa2008}, which relies on trained neural networks using synthetic eclipse profiles to extract the physical parameters. For contact systems, orbits are assumed to be circular and the main physical parameters fitted are the temperature ratio $T_2/T_1$, the mass ratio $q=M_2/M_1$, the fillout factor $F$, which is a function of the gravitational potential, and the inclination via $\sin i$.
For semi-detached and detached binaries, the mass ratio cannot be estimated, and $F$ is replaced with the sum of fractional radii $\rho_1 + \rho_2 = (R_1 + R_2)/a$, where $a$ is the semi-major axis of the binary orbit. The eccentricity $e$ together with the argument of the periastron $\omega$ are further introduced through $e\cos\omega$ and $e\sin\omega$. A second catalog from \citet{slawson2011} updates the results by including data from Q2, adding some \textit{Kepler} Objects of Interest (KOIs) flagged as possible exoplanets and later determined to be EBs, adding objects with period longer than 44 days (i.e., Q0$+$Q1), rejecting variable stars initially considered to be EBs, and removing EBs that were initially blends situated at the edge of the photometric aperture later re-observed with a re-centered aperture. This updated catalog presents 2165 EBs, composed of 58\,\% detached (D), 7\,\% semi-detached (SD), 22\,\% overcontact (OC), 6\,\% ellipsoidal (ELV), and 7\,\% undetermined systems. These studies conclude that eclipsing binaries represent 1.4\,\% of the {\it Kepler} target list. A third update has been published that employs an automatic morphological classification scheme \citep{matijevic2012}, and additional EB candidates have been released on the \textit{Kepler} MAST EB database, leading to a total of 2616 EBs. \subsection{Cross-correlation of both lists} \label{sec_23} The {\it Kepler} database contains $13\, 698$ stars identified as RGs, while the eclipsing binary catalog contains 2616 targets, as of January 2013. The cross-correlation of both catalogs reveals that 70 stars are flagged as both EB and RG (hereafter RG/EB): 46 systems (66\,\%) are classified as D, seven (10\,\%) as SD, 11 (16\,\%) as OC, and two (3\,\%) as ELV; four (6\,\%) are unclassified.
Such proportions are close to those of the whole sample, which is initially surprising since we do not expect RGs to belong to close-in systems, in particular SD and OC, where orbital periods range from 0.2 to 6 days (see Table \ref{table_1}). For our purpose of considering stars flagged as EB and RG, it is prudent to keep in mind the recommendations of \citet{prsa2011}: \begin{enumerate}\itemsep 0cm \item ``The high star density leads to a non-negligible likelihood of associating an EB event with the wrong star;'' \item ``The EB interpretation should be taken with extreme caution for stars with high fraction of flux contamination;'' \item ``Stars with shallow eclipse events should also be regarded with caution even if the flux contamination is modest.'' \end{enumerate} Hereafter, when we mention RG/EBs, we imply \textit{candidate} RG/EB systems. To understand the specificities of RG/EBs with respect to the whole EB catalog and how likely they are to be misidentifications, we compare histograms of observing conditions and stellar atmospheric parameters (estimated under the assumption that the systems were single stars) from the \textit{Kepler} Input Catalog (KIC, \citealt{Brown_2011}), and orbital parameters published by \citet{slawson2011} in Figures \ref{fig1} and \ref{fig2}. \begin{figure} \centering \epsscale{1.15} \plottwo{f1_a-eps-converted-to.pdf}{f1_b-eps-converted-to.pdf} \plottwo{f1_c-eps-converted-to.pdf}{f1_d-eps-converted-to.pdf} \caption{From left to right and top to bottom, histograms of {\it Kepler} magnitude, contamination factor, effective temperature, and surface gravity for the entire EB catalog (dashed line, left $y$-axis) and for the sample flagged as RG and EB (solid line, right $y$-axis). Data are from the KIC.
\label{fig1}} \end{figure} \begin{figure} \centering \epsscale{1.15} \plottwo{f2_a-eps-converted-to.pdf}{f2_b-eps-converted-to.pdf} \plottwo{f2_c-eps-converted-to.pdf}{f2_d-eps-converted-to.pdf} \plotone{f2_e-eps-converted-to.pdf} \caption{From left to right and top to bottom, histograms of orbital period, sum of relative radii, stellar temperature ratio, orbital plane inclination $\sin i$, and eccentricity for the whole EB catalog (dashed line, left $y$-axis) and for the sample flagged as RG and EB (solid line, right $y$-axis). The $x$-axis is truncated at an eccentricity of 0.6 because so few systems are above this value. Data are from the \citet{slawson2011} EB catalog. \label{fig2}} \end{figure} It appears that RG/EBs are slightly brighter on average than the full EB catalog, but this is not significant since RGs were selected with a magnitude limit of 14. For stellar crowding, we only show contamination factors calculated on the apertures corresponding to the second of the four positions of the satellite (i.e., during Q1, Q5, and Q9). We also tested the three other satellite orientations without noting any relevant difference. We see that the contamination factor of RG/EBs is lower than the mean contamination of all EBs, which shows that our candidates are not overly predisposed to the risk of target misidentification or blending. Histograms of RG/EB effective temperature and surface gravity are not representative of the whole EB catalog, but are instead typical of RGs. This means that even if a RG does not belong to an associated EB, there is still one RG per photometric aperture, whose flux overwhelms that from any secondary star. We note two exceptions at temperatures around 6500 K (see Table~\ref{table_1}) that are classified as OC and are likely misidentified as RGs. The orbital period histograms of the RG/EBs and the total EB sample in Figure \ref{fig2} are similar, and show that most RG/EB systems have orbital periods $< 10$ days.
The sums of relative radii are also similar in both samples, with the exception of a lack of values lower than 0.1 among RG/EBs. The relative deficit of stars with a temperature ratio equal to one among RG/EBs can be related to the minor proportion of contact systems in the RG/EB sample. We note that the inclination and orbital eccentricity distributions seem to be clustered in the ranges $\sin i = [0.99, 1]$ and $e=[0.1,0.15]$, which suggests that the estimate of these parameters by \citet{slawson2011} could be biased. Measuring orbital eccentricity from photometric data alone, i.e., without radial velocities, is known to be inaccurate. \begin{figure} \centering \epsscale{1.25} \plotone{f3-eps-converted-to.pdf} \caption{Expected primary eclipse depth (given by the contours, in \%) by assuming a companion star of temperature $T_2$ and radius $R_2$ eclipsing a RG of temperature 4800~K and radius $10\,R_\sun$.\label{fig3}} \end{figure} To get an initial rough estimate of whether a RG is compatible with an EB, we determine the amplitude of the deepest eclipse measured in the RG/EB light curve. We assume the system is observed edge-on, i.e., from within the orbital plane, and that the RG is larger than its companion. We therefore adopt the convention that the deepest eclipse is the secondary eclipse (i.e., the companion star going behind the RG) if the companion's temperature is higher than the RG's, and that the deepest eclipse is the primary eclipse otherwise (i.e., when the RG's temperature is higher than that of the companion star). Primary and secondary eclipses have equal depths when the two stars' effective temperatures are equal.
A raw proxy of relative photometric dimming during primary and secondary eclipses can be obtained by neglecting stellar limb darkening and comparing simple luminosities: \begin{eqnarray}\nonumber \left(\frac{\delta I}{I}\right)\ind{primary} &=& \frac{R_2^2\ T_2^4}{R_1^2\ T_1^4\ +\ R_2^2\ T_2^4} \\ \left(\frac{\delta I}{I}\right)\ind{secondary} &=& \frac{R_2^2\ T_1^4}{R_1^2\ T_1^4\ +\ R_2^2\ T_2^4}, \label{eqn1} \end{eqnarray} where the subscripts $1$ and $2$ stand for the RG and the companion star, respectively. Note the numerator of the second equation is $R_2^2 T_1^4$ since the light dimming is due to the star of radius $R_2$, which hides an area $\pi R_2^2$ of the star of temperature $T_1$. In Figure \ref{fig3} we present the expected eclipse depth assuming a typical RG with radius $10\ R_\sun$ and effective temperature $4800$~K. We conclude that the range of photometric dimmings $\sim0.2- 20\,\%$, which we measure in all but nine of the RG/EBs, is compatible with a stellar pair that includes a RG and a main-sequence star (see Table \ref{table_1}). We note that primary eclipse depths will be lower than the theoretical proxy in Equation (\ref{eqn1}) for grazing eclipses and because of limb darkening. For the nine systems with very shallow eclipses, the depths range from 0.02\,\% to 0.14\,\%. These cases could correspond to RGs with a background EB, EB systems with grazing eclipses, or EB systems where the companion star's size is a few percent of the RG's size, as could be possible for brown dwarfs or giant planets. \begin{figure} \centering \epsscale{1.25} \plotone{f4-eps-converted-to.pdf} \caption{Orbital velocity contours (km s$^{-1}$) of a RG in a binary system as a function of the orbital separation and orbital period. A circular orbit is assumed. We further assume $(R_1,M_1) = (10R_\sun,1.4M_\sun)$ for the RG and $(R_2,M_2) = (1R_\sun,1M_\sun)$ for the companion star.
Red crosses represent the values of the RG/EB sample from \citet{slawson2011}. The thick black line indicates the critical velocity for a phase-locked RG. \label{fig4}} \end{figure} From the RG and EB catalogs, as well as from the associated KIC parameters $\log g$ and $T\ind{eff}$, it is assumed that each candidate system contains one EB and one RG in the photometric aperture. However, suspicion of blending arises from quick calculations regarding the orbital period distribution. Let us consider a simple configuration where a RG with radius and mass $(10\,R_\sun, 1.4\,M_\sun)$ is in a phase-locked binary system with a solar-like companion $(1\,R_\sun, 1\,M_\sun)$ in a circular orbit. (Such parameters for the RG are close to the median values of RG/EBs obtained with asteroseismology in Section \ref{sec_33}.) In Figure \ref{fig4}, we show the expected orbital velocities that the RG would have in that configuration, by varying both the orbital period $P$ and semi-major axis $a$: \begin{equation} v\ind{orb,1} = \frac{2\pi a}{P} \frac{M_2}{M_1+M_2}. \label{eqn2} \end{equation} By using the values for the sum of fractional radii and orbital periods estimated for 46 out of the 70 systems by \citet{slawson2011}, we see that 14 RGs (mostly of the OC class) would have an orbital velocity between 500 and 2600~km\,s$^{-1}$. Even if our assumptions about the companion star's mass and radius are incorrect, this indicates that some systems are not physically possible, as they would require an orbital velocity greater than the critical velocity at which the RG would begin to be torn apart. We consider a velocity to be ``critical'' when the rotation velocity at the equator equals the escape velocity. In the case of a phase-locked system, the rotation period is simply the orbital period.
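The critical-period argument can be reproduced in a few lines. Below is a minimal sketch, assuming the quoted stellar parameters, a circular Keplerian orbit, and phase locking (the function and constant names are ours):

```python
import math

G = 6.674e-11                       # m^3 kg^-1 s^-2
MSUN, RSUN = 1.989e30, 6.957e8      # kg, m
DAY = 86400.0

M1, R1 = 1.4 * MSUN, 10.0 * RSUN    # the RG
M2 = 1.0 * MSUN                     # solar-like companion

def orbital_velocity(P_days):
    """RG orbital speed (Equation 2) for a circular orbit of period P,
    with the semi-major axis tied to P by Kepler's third law."""
    P = P_days * DAY
    a = (G * (M1 + M2) * (P / (2 * math.pi))**2)**(1.0 / 3.0)
    return 2 * math.pi * a / P * M2 / (M1 + M2)        # m/s

# Critical condition for a phase-locked RG: the equatorial rotation speed
# 2*pi*R1/P reaches the surface escape velocity sqrt(2*G*M1/R1).
v_esc = math.sqrt(2 * G * M1 / R1)                     # ~230 km/s here
P_crit = 2 * math.pi * R1 / v_esc / DAY                # days
```

For these parameters the critical period comes out near 2 days, consistent with the statement that the shortest-period candidates cannot host an intact $10\,R_\sun$ giant as an eclipsing component.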
Any additional proper stellar rotation in the direction of the orbital motion (prograde, as for Earth) would lower the critical velocity, while a stellar rotation in the direction opposite to the orbital motion (retrograde, as for Venus, which is unlikely) would raise it. Based on this criterion, only 25 of the 46 systems for which we have orbital parameters have a period compatible with the critical velocity threshold. This is the first of several clues that many of the candidates are not true RG/EB systems. In the following sections, we use several analysis techniques to better characterize the systems and more accurately determine how many of our candidate systems are bona-fide RG/EBs. \section{Light curve analysis for eclipses and asteroseismology} \label{sec_3} \subsection{Search for contamination from surrounding stars} \label{sec_35} The cross correlation between the RG and EB databases resulted in 70 identifications. We consider four possible scenarios: \begin{enumerate} \item The RG is actually one of the EB stars. \item The RG is aligned within the pixel field of view with the EB and is part of a gravitationally bound multiple system, in which two close-in stars mutually eclipse, with the RG out of the EB's orbital plane. Eclipse Timing Variations (ETVs) may be observed in this case (see Section \ref{sec_41}). \item The RG is aligned within the pixel field of view with the EB, but is not gravitationally bound to it. \item The RG and the EB fall on different pixels in the aperture and are not gravitationally bound. \end{enumerate} The last case can be verified using the target pixel files associated with each star, by computing a map of the relative intensity variation ${\rm d}I/I$ to check whether the depth of the eclipse is correlated with the peak intensity source on the detector.
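This per-pixel test can be sketched as a toy implementation on a target-pixel-file flux cube (the function name, array layout, and the threshold-free comparison of brightest pixel versus deepest-eclipse pixel are our own simplifications of the procedure described here):

```python
import numpy as np

def contamination_check(pixel_flux, in_eclipse):
    """Scenario-4 test sketch: compare the per-pixel eclipse-depth map
    with the mean-intensity image.

    pixel_flux : array (n_times, ny, nx) of target-pixel-file fluxes
    in_eclipse : boolean array (n_times,) flagging in-eclipse cadences
    Returns True if the deepest-eclipse pixel coincides with the brightest
    pixel (i.e., the EB signal is consistent with the bright star's
    location); False suggests a contaminating background EB."""
    mean_img = pixel_flux.mean(axis=0)
    depth_map = (pixel_flux[~in_eclipse].mean(axis=0)
                 - pixel_flux[in_eclipse].mean(axis=0)) / mean_img  # dI/I per pixel
    return (np.unravel_index(np.argmax(mean_img), mean_img.shape)
            == np.unravel_index(np.argmax(depth_map), depth_map.shape))
```

A real analysis would first detrend each pixel light curve and fold it at the orbital period, as described below; this sketch only captures the final map comparison.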
Every quarter, when the spacecraft rotates, each star falls onto a different set of pixels, and the pixel mask (aperture) that defines the optimal output light curve often changes. In the case of a long-period EB, the eclipses only occur in a few of the quarters. We detrended each pixel light curve in a quarter with a low-order polynomial and then folded it at the period estimated from the global light curve. The relative intensity drop ${\rm d}I/I$ was then calculated, and an image of this eclipse depth was compared to an average intensity image for the quarter. \begin{figure} \epsscale{1.4} \centerline{\plotone{f16_a-eps-converted-to.pdf}} \vspace{-.3\textwidth} \centerline{\hspace{.15\textwidth}\large\color{white}{(a)}\hspace{.23\textwidth}\large\color{white}{(b)}\hspace{.22\textwidth}\large\color{black}{(c)}\hfill} \vspace{.26\textwidth} \epsscale{0.8} \centerline{\plotone{f16_b-eps-converted-to.pdf}} \vspace{-.75\textwidth} \centerline{\hspace{.14\textwidth}\large\color{white}{(d)}\hfill} \vspace{.73\textwidth} \caption{Identification of RG star KIC 4576968 and contaminating EB from the \textit{Kepler} Target Pixel Files. The top panels show (a) the map of the mean intensity, (b) the map of eclipse depth in each pixel, and (c) a segment of the light curve obtained by summing over all pixels (including outside the aperture). White corresponds to highest flux on a linear scale, and the red outlines the aperture used to compute the \textit{Kepler} light curves. The matrix in panel (d) shows the period-folded light curve in each target pixel normalized to the median value. The gray boxes denote the aperture. The strongest eclipse signature is outside of the aperture, while the highest flux pixel is within the aperture. All data are from Q7.
Note that the dispersion of the folded light curves is higher in the three pixels where the intensity is maximum because of the presence of RG oscillations.} \label{fig_jean} \end{figure} In 40 of the 47 cases, we find that the peak in intensity corresponds to the peak in eclipse depth, indicating that scenarios 1 or 3 above cannot be ruled out (for the nine brightest stars, saturation in the aperture buries any eclipse signature in individual pixels and prevents the use of this method). We find 7 clear cases where the pixel with the deepest eclipse lies away from the pixel with the peak mean intensity (and typically outside of the \textit{Kepler} aperture as well). An example is shown in Fig.~\ref{fig_jean} for KIC 4576968, which is classified in the binary catalog as an OC system. The overall peak in intensity of the mean photometry is well inside the defined aperture, yet the maximal eclipse depth occurs about two pixels outside of this region. The eclipse depth considering only pixels within the aperture is about 0.06\,\%, while outlying pixels reach about 2\,\%. Studying the individual folded light curves for each pixel confirms that the EB and the RG are on different pixels, and likely not part of the same system. All seven of these cases occur in EBs with short periods, approximately one day or less, for which it is indeed physically impossible for the RG to be one of the eclipsing stars and still remain intact. Five such systems (KIC 2711123, 4576968, 5652071, 7879404, 11968514) are cases where we are confident that the RG is a background contaminant and not actually a part of the EB, as scenario 4 describes. Two other cases (KIC 7031714, 7955301) show ETVs and are consistent with scenario 2, but are likely to be false positives.
Indeed, if these objects were gravitationally bound and separated by one or more four-arcsecond pixels, the physical separation at the distance of a magnitude-9 giant star would be on the order of several hundred to several thousand AU, which implies an orbital period of several thousand years. Hence these cases could correspond to a RG with a background triple system. In addition, for each star, we compute the power spectrum of each pixel. It appears that the global oscillations always originate from the pixels where the intensity is maximum, which means that, for the systems where oscillations are detected, the brightest star systematically corresponds to a red giant. We can even distinguish the RG oscillations by eye on the map of folded light curves across the photometric aperture, as a point dispersion larger than the average noise (see Figure \ref{fig_jean}, panel (d)). \subsection{Cleaning the time series} \label{sec_31} Two distinct ways of processing the light curves are required for modeling eclipses or analyzing seismic properties. Eclipse modeling consists of extracting the stellar parameters by fitting the shape of the light curve during eclipses, after removing instrumental biases and, ideally, other sources of stellar variability (e.g., activity, pulsations). By contrast, studying global mode properties requires that the time series from which we calculate power spectra be free of periodic signals such as instrumental systematics and stellar eclipses. In other words: global modes are noise for eclipse modeling and vice versa. \begin{figure} \epsscale{1.2} \plotone{f5-eps-converted-to.pdf} \caption{Cleaning the light curve of short-period binary KIC 3532985. The top panel shows a segment of the original light curve (red) and the corrected light curve (black) after eclipse removal, which is used for seismology analysis. The middle panel shows the original light curve folded on the period (black) and its rebinned version (red).
The bottom panel is the result after a smoothed light curve without eclipses is subtracted, and is used for light curve modeling. The dashed vertical lines in the top and bottom panel demarcate successive orbital periods. \label{fig_clean_1}} \end{figure} \begin{figure} \centering \epsscale{1.2} \plotone{f6-eps-converted-to.pdf} \caption{Cleaning the light curve of long-period binary system KIC 8430105. The top panel is the same as Figure \ref{fig_clean_1}, and shows a segment of the original light curve (red) and the corrected light curve (black) after eclipse removal, which is used for seismology analysis. The middle panel shows a zoomed view of a single eclipse from the original light curve, and the polynomial fit used to remove the eclipse signal for asteroseismology (red). The bottom panel is the result after a smoothed light curve without eclipses is subtracted, and is used for light curve modeling. The dashed vertical lines in the top and bottom panels demarcate the region highlighted in the middle panel. \label{fig_clean_2}} \end{figure} \begin{figure} \centering \epsscale{1.2} \plotone{f7-eps-converted-to.pdf} \caption{Cleaning a light curve with variable eclipse depth and timing (KIC 7955301). The top panel shows the entire original light curve (red) and the corrected light curve (black) after eclipse removal, which is used for seismology analysis. The middle panel shows a zoomed view of one of the eclipses (black) and an example of the function used to fit each eclipse (red). The bottom panel shows the result after the eclipses are subtracted from the original light curve, and upon which asteroseismology is performed. No light curve modeling is attempted in these cases. \label{fig_clean_3}} \end{figure} \begin{figure} \centering \epsscale{1.2} \plotone{f8-eps-converted-to.pdf} \caption{Cleaning an over-contact (OC) binary light curve with variable amplitude (KIC 7879404). 
The top panel shows the entire original light curve (black) and the smoothed version (red). Since the smoothing function is a weighted moving average, the smoothed light curve equals the signal at each interruption, giving an impression of spikes. The middle panel shows both the original light curve after subtraction of the smoothed version (black) and the local minima and maxima per half-period (red). The bottom panel is the result after the original light curve is divided by the smoothed maxima-to-minima distance. A zoomed view is shown for times between 710 and 711 days. This final light curve is used for asteroseismology, and we attempt no light curve modeling. \label{fig_clean_4}} \end{figure} Nevertheless, there are initial reduction procedures common to both cases. For example, it is absolutely necessary to remove long-term drifts and discontinuities, which are mostly of instrumental origin, particularly after \textit{Kepler} rotates between consecutive quarters. Note that we do not use the portion of each light curve between eclipses for our light curve models, due to difficulties in disentangling instrumental flux variations, stellar activity (spots, granulation), reflection, Doppler beaming, and ellipsoidal variations. We assume that the median fluxes of each quarter's data are equal, and normalize them to 1 to work with relative fluxes. The entire light curve is then concatenated, and all outliers (points deviating from the light curve's mean dispersion by an amount evaluated individually for each target) are eliminated. The next step depends on the specifics of each light curve. Five classes of datasets and procedures are described: \begin{itemize} \item Short orbital period EBs (e.g., Figure \ref{fig_clean_1}). When the time series is long enough to contain more than about 30 orbits, we proceed by assuming that the signal fluctuations, be they stellar or instrumental in origin, may be averaged out by folding and rebinning the light curve.
We proceed by subtracting the mean folded light curve from the original. This results in a time series with no eclipses (see the black curve in Figure \ref{fig_clean_1}, top panel), which retains the stellar variability that we use to search for global modes in the Fourier domain. Next, we smooth this time series with a moving average whose width is set by the characteristic time scale of the photometric variations to be cancelled. The smoothed time series is then subtracted from the original time series to get a light curve with a flat level between eclipses. In principle, this is the best method because it is simple and makes no assumption about the origin of any photometric fluctuations. In practice, we limited the use of this method to systems with orbital periods shorter than 20 days. \item EBs with stellar variations on the order of the orbital period, which is much longer than the eclipse duration (e.g., Figure \ref{fig_clean_2}). In this case, a stellar signal of amplitude similar to or higher than the eclipse's depth cannot be cancelled out by folding the data. We proceed by identifying the centers of the primary and secondary eclipses, bridging them with a second-order polynomial, and then subtracting the smoothed light curve from the original to yield a flattened light curve with equally deep eclipses. We use a second-order polynomial to ``fill'' each eclipse by fitting the short regions of the light curve on either side of it, each of duration equal to that of the eclipse. We find that a second-order polynomial is sufficient to account for the local behavior of the light curve. Next, an asteroseismic analysis is carried out on the light curve that has the eclipses filled with polynomials, to avoid periodic gaps. This technique cannot be applied if the eclipse duration is too large compared to the orbital period, because we would cancel the signal by filling the eclipses. \item Long orbital period EBs.
When the eclipse duration is less than $5 \%$ of the duration of the entire light curve, we postulate that simply removing the data where eclipses occur does not significantly change the duty cycle for the asteroseismic analysis. For eclipse modeling in such systems, we flatten the light curve after ``filling'' eclipses with second-order polynomials, as in the previous case. \item EBs with variable eclipse depth and timing (e.g., Figure \ref{fig_clean_3}). In several cases, we encounter light curves with eclipses whose depth and timing change with time, either of astrophysical or instrumental origin (e.g., varying flux contamination at each field rotation). We do not attempt to model such eclipses in this work because it often implies accounting for a third body when there is an astrophysical cause, or it requires careful modeling of systematic errors when the cause is instrumental. For asteroseismology, however, we fit individual eclipses with simple functions and then subtract them from the time series. The functions we use to fit eclipses are Gaussian when eclipses are grazing (no flat bottom, or ``v-shaped''), or the \citet{mandel2002} exoplanetary fitting function otherwise. \item Contact EBs with variable amplitude (e.g., Figure \ref{fig_clean_4}). As indicated in the previous cases, varying flux contamination can modify the apparent amplitude of eclipses, and the technique of fitting eclipses with Gaussian or exoplanetary functions is not possible for contact (OC) systems. Therefore, after subtracting a smooth version of the light curve, we measure positions and amplitudes of all local maxima and minima per half-orbit and smooth them. Then we normalize the time series with the smoothed amplitude of the light curve and subsequently measure asteroseismic parameters. No light-curve modeling is attempted for these systems. 
\end{itemize} \subsection{Asteroseismic analysis}\label{sec_33} \subsubsection{Detection and properties of global pulsation modes} We search for global oscillation modes in light curves where eclipses have been handled as described in Section \ref{sec_31}. In this paper, we only consider clear oscillatory excess power, and do not include low signal-to-noise ratio excess power that in principle could be detected with filtered autocorrelation techniques \citep[e.g.,][]{mosser2009}. Indeed, since the light curves studied here could retain remnants of eclipse features, the presence of harmonics of the orbital period would strongly perturb the computation of the autocorrelation function, leading to false mode detections. \begin{figure*} \centering \epsscale{1} \plotone{f11-eps-converted-to.pdf} \caption{Power density spectra (PDS) of the 47 light curves in which global, solar-like oscillations are detected, sorted by increasing $\nu\ind{max}$ from top to bottom and left to right. Some spectra are truncated at low frequency because of the filtering applied to remove the signature of an eclipse. Note the different frequency $x$-axis scales in each column. Each system is labeled by its \textit{Kepler} ID number.\label{fig_PDS}} \end{figure*} Power density spectra (PDS) are computed with the discrete Fourier transform, with no oversampling, up to the Nyquist frequency (283~$\mu$Hz). We detect global modes in 47 of the 70 RG/EB candidates. The breakdown by binary classification is 37~D, two SD, five OC, and two ELV. All power spectra showing solar-like oscillations are presented in Figure \ref{fig_PDS}. One of the asteroseismic aims of this paper is the measurement of RG global mode parameters, such as the frequency at maximum amplitude $\nu\ind{max}$ and the mean large frequency separation $\Delta\nu$. The pair ($\nu\ind{max}, ~\Delta\nu$) allows us to estimate the RG's mass and radius through well-known asteroseismic relations \citep{kjeldsen1995,huber2010}.
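The standard scaling relations behind this mass and radius estimate can be sketched in a few lines. This is a minimal version without the asymptotic corrections of \citet{Mosser_2012c}, which shift the results by a few percent; the solar reference values are common choices, not taken from the text:

```python
# Assumed solar reference values (muHz, muHz, K); conventions vary slightly.
NUMAX_SUN, DNU_SUN, TEFF_SUN = 3100.0, 135.2, 5777.0

def seismic_mass_radius(numax, dnu, teff):
    """Mass and radius in solar units from the uncorrected scaling
    relations (Kjeldsen & Bedding 1995; Huber et al. 2010)."""
    m = (numax / NUMAX_SUN)**3 * (dnu / DNU_SUN)**-4 * (teff / TEFF_SUN)**1.5
    r = (numax / NUMAX_SUN) * (dnu / DNU_SUN)**-2 * (teff / TEFF_SUN)**0.5
    return m, r

# A typical RG of this sample: numax ~ 30 muHz, dnu ~ 4 muHz, Teff ~ 4800 K
m, r = seismic_mass_radius(30.0, 4.0, 4800.0)
# gives roughly 0.9 Msun and 10 Rsun, inside the quoted sample ranges
```

The example values recover a star close to the median RG adopted earlier ($\sim10\,R_\sun$), illustrating how ($\nu\ind{max}$, $\Delta\nu$, $T\ind{eff}$) map onto mass and radius.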
\begin{figure} \centering \epsscale{1.2} \plotone{f12-eps-converted-to.pdf} \caption{Large frequency separation ($\Delta\nu$) versus frequency at maximum amplitude ($\nu_{\rm max}$) for the 47 pulsating RG/EB candidates derived from asteroseismology. The red line shows the empirical relationship established from a large sample of RGs from \citet{Mosser_2012b}. The dashed vertical line denotes the Nyquist frequency for the observation cadence, above which three RGs have their $\nu_{\rm max}$. \label{fig_numax_Dnu}} \end{figure} \begin{figure} \centering \epsscale{1} \plotone{f13-eps-converted-to.pdf} \caption{Histograms of masses and radii of the 47 pulsating RGs in RG/EB candidate systems. These have been derived asteroseismically and include new asymptotic scaling corrections \citep{Mosser_2012c}. The scatter plot shows the correlation between the two observables. \label{fig_mass_radius}} \end{figure} The frequency at maximum amplitude $\nu_{\rm max}$ is obtained by fitting the PDS with a Gaussian function for the mode envelope and a sum of semi-Lorentzian functions to model the stellar activity. This standard technique was first suggested by \citet{harvey1985} for the Sun, and is now widely used for solar-like stars \citep[e.g.,][]{chaplin2011}. The PDS is divided by the semi-Lorentzian background to produce a whitened spectrum, and autocorrelation is then used to extract the mean large separation $\Delta\nu$. Figure \ref{fig_numax_Dnu} shows $\Delta\nu$ versus $\nu\ind{max}$ for the RG/EBs compared with the empirical relationship that was established on thousands of red giants, subgiants, and main sequence stars with CoRoT and {\it Kepler} data \citep{hekker2009, Stello_2009, mosser2010, huber2011, Mosser_2012b}. Only the three stars with $\nu\ind{max}$ close to the Nyquist frequency are significantly different from expectations, because the determination of $\nu\ind{max}$ is biased by the PDS truncation. 
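The whitened-spectrum autocorrelation step can be illustrated with a short sketch. The function name, the window width, and the approximate bracketing relation $\Delta\nu \approx 0.28\,\nu_{\rm max}^{0.75}$ are our simplifications of the published pipelines, not the exact implementation used here:

```python
import numpy as np

def large_separation(freq, power, numax):
    """Estimate the mean large separation by autocorrelating the
    (background-whitened) power spectrum in a window around numax.
    The correlation peak is searched near an empirical first guess."""
    dnu_guess = 0.28 * numax**0.75                    # approximate dnu-numax relation
    sel = np.abs(freq - numax) < 4 * dnu_guess        # oscillation envelope window
    p = power[sel] - power[sel].mean()
    ac = np.correlate(p, p, mode="full")[p.size - 1:] # one-sided autocorrelation
    dfreq = freq[1] - freq[0]
    lags = np.arange(ac.size) * dfreq
    search = (lags > 0.5 * dnu_guess) & (lags < 1.5 * dnu_guess)
    return lags[search][np.argmax(ac[search])]
```

On a synthetic comb of radial modes spaced by 4~$\mu$Hz around $\nu\ind{max} = 30~\mu$Hz, this recovers the input spacing; real spectra require tapering and care with mixed modes.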
To compute RG masses and radii, estimates of their effective temperatures $T\ind{eff}$ are required. As we concluded from analyzing the $T\ind{eff}$ distribution in Section \ref{sec_23}, we can safely assume that temperatures from the {\it Kepler} database correspond to the red giants for the RG/EB sample. Using the asteroseismic parameters and these temperatures, along with updated asymptotic scaling factors for RGs \citep{Mosser_2012c}, Figure \ref{fig_mass_radius} shows the estimated masses and radii for the 47 pulsating RGs, which range from $0.8$ to $2.8\,M_\sun$ and from $3.7$ to $33.0\,R_\sun$, respectively. The new scaling changes the values of mass and radius for each star by up to a few percent. These results are presented in Table \ref{table_2}. \subsubsection{Mode identification and mixed modes} \begin{figure*} \centering \epsscale{1.1} \plotone{f14-eps-converted-to.pdf} \caption{\'Echelle diagrams of the 47 pulsating RG stars whose KIC names are labeled. The $x$-axis is the frequency modulo the large frequency spacing (i.e., from 0 to $\Delta\nu$), and the $y$-axis is frequency. The three noticeable dark mode ridges (for most stars) correspond to angular degrees $\ell=1$, $\ell=2$, and $\ell=0$, from left to right, indicated by dashed red, green, and blue lines, respectively. Each case has been shifted so that the radial modes are roughly vertically aligned for illustration purposes. Each system is identified by its \textit{Kepler} ID number. \label{fig_ech}} \end{figure*} Peaks in light curve power spectra correspond to unique stellar oscillations, and identifying these oscillations yields considerable insight into interior stellar properties. For example, modes that are a mixture of acoustic and gravity modes provide diagnostics for probing the stellar core.
Such mixed modes may be used to determine whether a star is on the red-giant branch (RGB), where it is still burning H in a shell surrounding the core, or whether it is part of the red clump (RC), having possibly experienced a He flash, and is now fusing He in its core \citep{bedding2010,bedding2011,mosser2011,mosser2012}. In addition, we may distinguish the secondary red clump (RC2), which consists of stars more massive than 1.8\,$M_\odot$. These stars have started to burn He in a non-degenerate core, which distinguishes them from low-mass stars in the main red clump \citep{Girardi_1999}. These mixed modes typically appear most clearly among the $\ell=1$ modes as a ``forest'' of peaks. Figure~\ref{fig_ech} shows the \'echelle diagrams of all 47 pulsating RGs in candidate RG/EBs. We detect the typical pair of ridges of solar-like oscillations ($\ell = 0, 2$ and $\ell = 1$, spaced by half the large separation) for 14 of the 47 oscillating RGs, while clear signatures of mixed $\ell = 1$ modes are evident in 23 stars. The remaining targets present confusing oscillatory spectra. The few RGs that show suppressed mixed modes certainly belong to the RG group identified by \cite{Mosser_2012b}. Such low-amplitude mixed modes have been observed in a group of stars starting the ascent of the RGB, as well as at all later evolutionary stages. We note that 11 of the 14 stars with suppressed mixed modes have the longest orbital periods of all the RG/EBs. Careful analysis of the dipole mixed-mode signatures allows us to classify red giants as either RGB or RC stars. This was first done by measuring the bumped mixed-mode spacings \citep{bedding2010, mosser2011}. However, using the asymptotic development of mixed modes provides the exact period spacing $\Delta\Pi_1$, which precisely characterizes the radiative core \citep{mosser2012}. It also gives access to the core rotation rate of these giants and provides an estimate of $P_{\rm rot}$ \citep{Beck_2012,Mosser_2012c}.
The values of these mixed-mode parameters for 12 of the 23 cases are given in Table \ref{table_2}. Interestingly, the cores are rotating with periods from 30 to about 385 days, seemingly uncorrelated with their masses and, in most cases, increasing with radius. Also provided in Table \ref{table_2} are the classifications of 27 RGs. We note that when neither the gravity-mode period spacing nor the bumped spacing is available, the value of the large separation $\Delta\nu$ can be used to classify RGs in certain cases: a RG with $\Delta\nu \ge 9\,\mu$Hz is on the RGB, and when $3.7 < \Delta\nu < 5.5\,\mu$Hz, the probability of having a clump star is higher than 90\,\% \citep[see Figure 3 of][]{mosser2012}. \subsection{Light curve modeling} \subsubsection{Modeling the light curves} \label{sec_32} To independently compare physical parameters of a subset of the binary systems, we model their light curves using the Eclipsing Light Curve (ELC) code \citep{orosz2000} and/or the JKTEBOP code \citep[e.g.,][]{southworth2009}. ELC in particular uses a genetic algorithm and Markov Chain Monte Carlo optimizers to simultaneously solve for a suite of stellar parameters. It is well-suited to this analysis because additional data, such as radial velocities from spectra, can be added as they become available. Using only the \textit{Kepler} light curves, we use ELC and JKTEBOP to solve for the relative fractional radii $R_1/a$ and $R_2/a$, the temperature ratio $T_2/T_1$, the orbital inclination $i$, the eccentricity $e$ and $e \cos \omega$ when applicable (where $\omega$ is the longitude of periastron), the \emph{Kepler} contamination factor, and the stellar limb darkening parameters for the quadratic limb darkening law. The orbital period was determined previously through a simple iterative technique and held fixed during this analysis.
In the limiting case where the two stars are sufficiently separated as to be spherical in shape, ELC has a ``fast analytic'' mode that uses the equations of \citet{gimenez2006}. We make this assumption here because our modeled subsample includes only well-detached binary systems. \begin{figure}[t] \epsscale{1.2} \plotone{f9-eps-converted-to.pdf} \caption{Modeled light curves for 13 of the 14 longest-period RG/EB pulsating systems. In each case, the secondary eclipse (defined here as the RG eclipsing the secondary star) has been set to an orbital phase of zero. The primary eclipses (secondary star eclipsing the RG) have also been aligned, and the given $b$ value indicates the phase of the primary eclipse with respect to the secondary. (An EB with a circular orbit would have $b = 0.5$.) The $y$-axis for each panel is the normalized relative flux. The best-fit parameters of the models are given in Table \ref{table_1}. \label{fig_lcmod}} \end{figure} \begin{figure}[t] \epsscale{1.2} \plotone{f10-eps-converted-to.pdf} \caption{Modeled light curves for the five longest-period RG/EBs with no RG pulsations. As in Figure \ref{fig_lcmod}, each secondary eclipse (defined here as the RG eclipsing the secondary star) has been set to an orbital phase of zero. The primary eclipses (secondary star eclipsing the RG) have also been aligned, and the given $b$ value indicates the phase of the primary eclipse with respect to the secondary. The $y$-axis for each panel is the normalized relative flux, and the best-fit parameters of the models are given in Table~\ref{table_1}. \label{fig_lcmodbis}} \end{figure} The derived parameters for a subset of 18 binary systems are presented in Table \ref{table_1}.\footnote{An online sortable version of this table is available at http://nsol2.nmsu.edu/solarstorm/index.php} We have analyzed the 18 most promising detached systems where a RG likely belongs to the EB (see Section \ref{sec_41}).
Thirteen of them present RG oscillations, and the other five are the longest-period systems, for which no detectable pulsations are observed. This subset corresponds to the systems with the longest periods and with no identified contamination from nearby stars (see Section \ref{sec_35}). The light curve data and models are shown in Figures \ref{fig_lcmod} and \ref{fig_lcmodbis}. \subsubsection{Search for eclipse timing variations} \label{sec_34} As discussed in Section \ref{sec_23}, a significant number of the candidate RG/EBs are probably not \textit{bona-fide} RGs in binary systems, but are more likely the result of blending between a nearly aligned RG and EB. A further possibility is the presence of RGs in triple (multiple) systems in which two ``small'' stars eclipse each other while orbiting a primary component \citep{derekas2011, carter2011}. Within the triple-system hypothesis, if the RG's orbit is elliptical and the distance of the RG to the pair of eclipsing stars is short enough to let tidal forces contribute to the evolution of the eclipsing system, we might detect eclipse timing variations (ETVs). This further requires that the RG orbital period be at most of the same order of magnitude as the observation length. In addition, we may expect some variations in the eclipse depth and width if the precession of the EB system is rapid and strong enough to perturb the inclination angle of the EB orbital plane. The detection of ETVs is a way to claim that a system is at least triple, but the absence of an ETV detection cannot be considered proof that a system is not multiple, since variations may be undetectable or on time scales much longer than the observation length. In addition, if the RG orbit is almost circular, no ETVs are expected.
The presence of ETVs in this particular sample suggests that the RG is the third body that perturbs the eclipse timing, but this can only be proven with mass determinations from detailed ETV models and/or the addition of radial velocities to the light curve analysis. We note that low-mass triple systems are common, particularly for systems with a close-in binary (e.g., \citealt{Tokovinin_2006}), so a chance alignment of a triple system with a background RG is also possible. Modeling ETVs is a common approach to derive masses and orbital parameters for exoplanetary systems with at least two planets (e.g., \citealt{Agol_2005, Ford_2012, Fabrycky_2012, Steffen_2013}). Modeling the ETVs of multiple stellar systems is beyond the scope of this paper; rather, we first determine whether ETVs are detectable and subsequently measure their amplitudes and periods. We use one of two methods to measure eclipse timings, depending on the type of binary system. For D and SD systems, we fit the primary and secondary eclipses with the \citet{mandel2002} function used for fitting exoplanetary transits in the small-planet approximation, which acceptably reproduces the behavior of stellar eclipses. However, exoplanetary transit functions are not suitable for OC systems. Therefore, for the OC binaries, we measure the time of minimum intensity by spline interpolation around the minimum of both eclipses. Such an approach would not work for D systems with long orbital periods, because the eclipse shape is usually strongly modulated by stellar spots, but it is suitable for SD systems. Once ETVs are detected, despite their typically asymmetric profiles, we estimate their periods by fitting a simple sine curve with the least-squares method. \begin{figure*} \epsscale{1.15} \plotone{f15-eps-converted-to.pdf} \caption{Eclipse timing variations (ETVs) in hours of the 12 systems that show detectable variations. The $y$-label ``O-C'' stands for (observed $-$ computed) eclipse epoch.
The timings are plotted for the deepest eclipses (black plusses) and for the shallow eclipses when detectable (grey dots). Red lines indicate sine-curve fits to the deepest eclipses. For KICs 3532985, 8255058, and 6762188, the sub-boxes zoom in for a more detailed view of the ETVs. \label{fig_ETV}} \end{figure*} Eleven systems exhibit ETVs at various signal-to-noise ratio levels, as shown in Figure \ref{fig_ETV}. KICs 7990843, 7031714, 4732015, 10991989, and 7955301 show ETVs that are unambiguously coherent between primary and secondary eclipses, and for which periods are shorter than or comparable to the total observation length. \citet{slawson2011} note that KIC 7955301 (shown in Figure \ref{fig_clean_3}) presents eclipse depth variations, but based only on Q1--Q2 data they could not determine whether these effects were real or due to aperture jitter from quarter to quarter. With a longer light curve now available, we see that this system presents the highest ETV amplitude, of about four hours. Precession effects are detectable in this system through the shift between primary and secondary eclipse timings and the long time-scale evolution of the eclipse depths. We also note the unexpected ETVs detected for contact systems 11135978 and 9181877, which show variations on time scales longer than the observation length (1400 and 5500 days, respectively) and with a mean amplitude of only about 0.83 minutes. This is much lower than the 29-minute observing cadence and is about half of the ETV standard deviation about the fitted sine curve of 1.68 minutes. 
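A minimal sketch of the timing measurement described above: spline interpolation around the eclipse minimum, followed by a least-squares sine fit to the O$-$C values. The synthetic eclipse profile below is an assumed toy example, not \textit{Kepler} data:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import curve_fit

def eclipse_minimum_time(t, flux):
    """Locate the time of minimum flux via cubic-spline interpolation,
    as done for the OC (and SD) eclipses."""
    cs = CubicSpline(t, flux)
    fine = np.linspace(t[0], t[-1], 10000)
    return fine[np.argmin(cs(fine))]

def fit_etv_sine(times, oc_hours, p0):
    """Least-squares fit of a simple sine curve to O-C timings."""
    def model(t, amp, period, phase, offset):
        return amp * np.sin(2 * np.pi * t / period + phase) + offset
    popt, _ = curve_fit(model, times, oc_hours, p0=p0)
    return popt  # amplitude, period, phase, offset

# Toy eclipse: a Gaussian dip with its minimum at t = 0.3 (arbitrary units)
t = np.linspace(0.0, 1.0, 50)
flux = 1.0 - 0.05 * np.exp(-((t - 0.3) ** 2) / 0.01)
t_min = eclipse_minimum_time(t, flux)
```

In practice, one minimum time per eclipse feeds the O$-$C series, and `fit_etv_sine` would be run with an initial guess near the expected period.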
\section{Discussion}\label{sec_4} \subsection{Most RGs do not belong to their associated EB} \label{sec_41} \subsubsection{Testing binarity with Kepler's law } \begin{figure} \centering \epsscale{1.2} \plotone{f17-eps-converted-to.pdf} \caption{Mass ratio $q$ as a function of orbital period and $a^3/M_1$ for the pulsating RG/EB candidate sample based on the orbital parameters in \citet{slawson2011} and those obtained in this work from asteroseismology. For all systems, the symbols are plotted assuming $R_2=R_\odot$, and the dashed lines show the extent of $a^3/M_1$ if the secondary star's radius spans $0.1 \ R_\odot < R_2 < 10 \ R_\odot$. The color at each system's position represents the mass ratio $q$. Contours of constant $q = 10$ and 100 are shown by the diagonal lines. The size of each symbol is proportional to the pulsating RG stellar radius, $R_1$, measured asteroseismically. The pulsating OC systems are not included here. The colors reflect the huge variation of mass ratio across the sample, and clearly distinguish which systems are likely to be a bona-fide RG in an EB (yellow, i.e., $q\sim1$) from those that are unlikely (blue, i.e., $q\geq10$).} \label{qfact} \end{figure} We have shown that contamination in the \textit{Kepler} target pixel files confirmed seven systems as false positive RG/EBs. To uncover more false positives, we can rearrange Kepler's third law to express the stellar mass ratio $q = M_2/M_1$ as a function of the orbital period $P$, the semi-major axis $a$, and the mass of the primary star $M_1$: \begin{equation} q = \left(\frac{2\pi}{P}\right)^2\ \frac{a^3}{GM_1}\ -\ 1. \label{q} \end{equation} Note that our convention defines star 1 to be the RG. We use RG masses and radii from the asteroseismic analysis (see Section \ref{sec_33}) with a range of radii for the secondary star, $0.1~R_\odot < R_2 < 10~R_\odot$, and also consider the relative stellar radii $(R_1+R_2)/a$ and $P$ from eclipse fitting and \citet{slawson2011}. 
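Equation (\ref{q}) can be checked numerically; a minimal sketch, where the physical constants and the test system (two equal solar-mass stars at 1 AU, our own construction) are assumptions for illustration:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
AU = 1.496e11      # m

def mass_ratio(p_seconds, a_meters, m1_kg):
    """q = M2/M1 rearranged from Kepler's third law:
    q = (2*pi/P)^2 * a^3 / (G*M1) - 1."""
    return (2 * math.pi / p_seconds) ** 2 * a_meters**3 / (G * m1_kg) - 1.0

# Consistency check: two 1 M_sun stars separated by 1 AU.
# Their period follows from Kepler's third law, and the formula
# must then return q = M2/M1 = 1.
m1 = M_SUN
m_total = 2 * M_SUN
p = 2 * math.pi * math.sqrt(AU**3 / (G * m_total))
q = mass_ratio(p, AU, m1)
```

As in the text, an unphysically negative $q$ falls out directly when the assumed $a^3/M_1$ is too small for the observed period.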
With all of this information taken together, we estimate the $q$ required to satisfy Kepler's third law for the pulsating sample. Figure~\ref{qfact} shows the distribution of $q$ in the phase space where $a^3/M_1$ and $P$ are the independent variables. The symbols are plotted for the case where $R_2=1~R_\odot$. The horizontal extent of possible values of $a^3/M_1$ is shown over the radius range, where larger radii shift the values to the right along the dashed lines. Upon considering modest values of $R_2$, it appears that only 12 of the pulsating RGs in candidate RG/EBs may truly belong to EBs (not including the ETV systems), as these have $q\lesssim 10$. Cases in which the RG would need to be part of a binary with a companion star $\gtrsim 10$ times more massive are considered unrealistic. In this analysis, several systems take on a value of $q<0$, which is of course unphysical: this is likely the result of inaccurate estimations of the relative stellar radii. The case of the likely 13th candidate (KIC 5640750) is not yet conclusive, as its orbital period is longer than the observation time. The likely 14th candidate (KIC 8095275) does not allow us to estimate $q$ since it is a ``heartbeat system'' whose orbital parameters are unknown and would require specific light curve modeling and radial velocities to determine (see Section \ref{sec_433} and \citealt{Thompson_2012}). However, as such ``heartbeat'' light curve features are unique, we are confident that it is a \textit{bona-fide} RG binary system. In summary, the analysis here and in Section \ref{sec_35} provides strong evidence that only 13 of the 47 pulsating systems are true RG/EBs in detached configurations, and one is a RG in a non-eclipsing binary system. All of these have orbital periods greater than 19~days. All of the shorter-period OC and SD systems are either contaminated or the RG is part of a multiple-star system. 
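The asteroseismic masses and radii used in this test (Section \ref{sec_33}) are typically derived from scaling relations; as a hedged illustration, here is the commonly used solar-calibrated form (the solar reference values are typical literature choices and may differ from the authors' exact calibration):

```python
# Solar reference values -- typical literature choices, an assumption here
NU_MAX_SUN = 3100.0   # frequency of maximum power, muHz
DNU_SUN = 135.0       # large frequency separation, muHz
TEFF_SUN = 5777.0     # effective temperature, K

def scaling_mass_radius(nu_max, delta_nu, teff):
    """Standard solar-calibrated asteroseismic scaling relations:
    R/Rsun = (nu_max/nu_max_sun) * (dnu/dnu_sun)^-2 * (Teff/Teff_sun)^0.5
    M/Msun = (nu_max/nu_max_sun)^3 * (dnu/dnu_sun)^-4 * (Teff/Teff_sun)^1.5
    """
    r = (nu_max / NU_MAX_SUN) * (delta_nu / DNU_SUN) ** -2 \
        * (teff / TEFF_SUN) ** 0.5
    m = (nu_max / NU_MAX_SUN) ** 3 * (delta_nu / DNU_SUN) ** -4 \
        * (teff / TEFF_SUN) ** 1.5
    return m, r

# Sanity check: solar inputs must return (M, R) = (1, 1)
m, r = scaling_mass_radius(3100.0, 135.0, 5777.0)
```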
The rest of the D cases must be ones where the EB and the RG fall along the same line of sight, although some may be weakly gravitationally bound. Finally, we note that testing the likelihood that RG/EB candidates are genuine is not possible when no RG pulsations are detectable. However, true RG/EBs may belong to this sample. In particular, the five non-pulsating RGs with the longest orbital periods (shown in Figure \ref{fig_lcmodbis}) are almost certainly true RG/EBs, as they have light curves very similar in appearance to the 13 pulsating \textit{bona-fide} RG/EBs. \subsubsection{Candidate multiple-star ($> 2$) systems} We find eleven stars with ETVs that suggest the RG is part of a multiple system composed of a close-in EB and a more distant RG in an elliptical orbit. This configuration, a ``hierarchical triple system with two low-mass stars,'' was also detected for two cases in \citet{carter2011} and \citet{derekas2011}. In \citet{derekas2011} the low-mass stars were co-planar with the primary component (a RG) and all eclipses were visible, while in \citet{carter2011}, each star separately eclipses the disk of the primary component (a subgiant). No solar-like oscillations were observed in either case, so here we report the first detection of global $p$-mode oscillations for a RG in such a system. In the eleven ETV detections, only the close-in eclipses are observed and the RG's presence is indirectly deduced as the perturber. We find this in seven D, two SD, and two OC cases. The system KIC 4758368 is the only one of the eleven where we do not detect any RG oscillations (see Table \ref{table_1}). \subsection{EB and multiple-star candidates} \label{sec_43} We classify the RGs displaying oscillations and presumably belonging to a double or triple system into one of three categories. 
First, \textit{fundamental} cases are those that deserve to be studied in more detail in the future, to be used as cornerstones for testing stellar-evolution models, and for which current \textit{Kepler} data are sufficient for accurate eclipse and asteroseismic modeling. Second, the \textit{promising} cases are those for which eclipse or asteroseismic modeling requires additional \textit{Kepler} observations to be considered \textit{fundamental}. Third, \textit{intriguing} cases are those in which we cannot rule out the detected signal as a false positive, and where more data could lead to unexpected discoveries. \subsubsection{Fundamental cases} The fundamental cases are EBs and one hierarchical triple system, with orbital periods over 19~days. Due to the high signal-to-noise ratio of their oscillation patterns, these systems are suitable for precise modeling of their interior properties. The candidates with highly eccentric orbits are interesting for studying tidal interactions in multiple-star systems. Their modeled light curves are shown in Figure \ref{fig_lcmod}. We list them here, sorted by the spectral type of the companion star: \begin{description}\itemsep 0cm \item[M or K dwarf companion.] We identify two cases where the EB is composed of a RG and a smaller, cooler companion, rendering the primary eclipse deeper than the secondary eclipse.\\ \textbf{- KIC 8702921} is a 19-day EB, which shows clear RG oscillations typical of the RGB. Its asteroseismic radius and mass are $5.1\pm0.1\,R_\odot$ and $1.4\pm0.1\,M_\odot$. We find that the secondary star is compatible with an M dwarf, as its mass is estimated to be $0.6\pm0.3\,M_\odot$, with a radius of $0.44\pm0.01~R_\odot$ and an effective temperature of $2600$~K. We see variations in the light curve that could be due to M-star activity (i.e., spots), tidal distortions, and ellipsoidal and Doppler beaming. 
We observe a 97.8 day (almost precisely five times the orbital period) modulation in the light curve of rather large amplitude, which is comparable to the eclipse depth. \\ \textbf{- KIC 5308778} has a 41-day orbital period and low signal-to-noise ratio RG oscillations, corresponding to an asteroseismic radius and mass of $9.4\pm0.8~R_\odot$ and $1.3\pm0.3~M_\odot$. From eclipse modeling, the companion's radius and mass $R_2=0.6\pm0.1~R_\odot$ and $M_2=0.3_{-0.3}^{+0.7}~M_\odot$ fit with an M dwarf, while its effective temperature $T_2 = 4140$ K fits with a K dwarf. Its light curve presents photometric variations of amplitude up to 6\,\%, which is 17 times larger than the deepest eclipse. These variations are quasi-periodic with a period almost equal to the orbital period. The absence of any orbital eccentricity suggests that the photometric variations are due to features on the RG, in a system that is tidally locked on a circularized orbit. The periodic fluctuations appear incompatible with spots on the companion star, since its relative brightness is only 0.2\,\% of the RG's. \item[G or late F dwarf companion.] Three systems are characterized by a RG with a G or late F star on an eccentric orbit with typical $e \sim 0.2$, and light curves that present large variations.\\ \textbf{- KIC 8430105} is a 63-day period EB with clear RG oscillations that correspond to an asteroseismic radius and mass of $7.4\pm0.3~R_\odot$ and $1.2\pm0.1~M_\odot$. No mixed modes are identified, but the large separation value and the seismic mass estimate suggest this RG cannot belong to the red clump and is more likely an RGB star. From eclipse modeling, the orbit appears to be eccentric with $e=0.26$, and we estimate the companion's radius, mass, and effective temperature to be $R_2 = 0.83\pm0.04~R_\odot$, $M_2 = 0.9\pm0.4~M_\odot$, and $T_2 = 5960$~K. Thus, we likely have an RG in orbit with a solar analog. 
A strong variation in the light curve is found, with a period of (almost precisely) twice the orbital period and an amplitude of five times the eclipse depth. At the bottom of the primary eclipse (the G star passing in front of the RG), we observe sharp peaks that can reach up to half of the eclipse depth and last about a third of the eclipse time. These could be strong flares on the solar analog, which seems unlikely, or hot spots.\\ \textbf{- KIC 10001167} is a 120-day EB with clear RG oscillations that are characteristic of an RGB star with $14.0\pm0.8 ~R_\sun$ and $1.1\pm0.2~M_\sun$. Modeling the eclipses indicates the orbit is eccentric with $e = 0.16$, and the companion is likely a G0 or F9 star with radius $R_2=1.1\pm0.1~R_\sun$, mass $M_2=1.0\pm0.6~M_\sun$, and effective temperature $T_2=6090$~K.\\ \textbf{- KIC 9970396} presents a 235-day period and clear modes of a RG with $8.3\pm0.3~R_\odot$ and $1.3\pm0.1~M_\odot$. The orbit appears to be eccentric with $e=0.20$, and its companion is likely a late F star of radius $ R_2=1.1\pm0.1~R_\odot$, mass $M_2 = 1.2\pm0.2~M_\odot$, and effective temperature $T_2 = 6060$~K. In contrast to the two previous cases, its light curve does not exhibit high photometric variations. \item[F type companion.] These four bona-fide RG/EBs are likely composed of an RGB star with an F type companion on an eccentric orbit ($e = 0.23-0.67$), with orbital periods ranging from 175 to 408 days. In these cases, the secondary F star's mass is estimated to be nearly the same as, or even slightly larger than, the RG star mass. However, the error bars on the masses are large, so the parameters are consistent with the RG mass being larger than the secondary star mass, as would be expected if the stars formed together and the more massive RG finished its main-sequence phase first. 
However, it is also possible that the RG could lose several tenths of a solar mass in a wind during the RG phase, or could have transferred mass to the companion since the orbits are highly eccentric, so that at the present time the RG has a slightly lower mass than its companion. \\ \textbf{- KIC 9540226} presents a 175.5-day orbital period and clear RG oscillations that are characteristic of an AGB or RGB star with $14.0\pm0.7 ~R_\sun$ and $1.6\pm0.2~M_\sun$. Eclipse modeling indicates the orbit is eccentric with $e = 0.39$, and the companion is likely an F star of radius $R_2=1.4\pm0.1~R_\sun$, mass $M_2=1.5\pm0.7~M_\sun$, and surface temperature $T_2=6920$~K. \\ \textbf{- KIC 5786154} presents a 197.9-day period and shows clear oscillations of a RG with $12.7\pm0.6~R_\odot$ and $1.4\pm0.2~M_\odot$. The orbit appears to be eccentric with $e = 0.38$, and the companion is likely an F star of radius $R_2=1.9\pm0.1~R_\odot$, mass $M_2 = 1.8\pm0.7~M_\odot$, and surface temperature $T_2 = 6545$~K. \\ \textbf{- KIC 7037405} presents a 207-day period and clear modes of an RGB star with $15.0\pm0.9~R_\odot$ and $1.4\pm0.2~M_\odot$. The orbit appears to be eccentric with $e=0.23$, and the companion is likely an F star of radius $ R_2=1.9\pm0.1~R_\odot$, mass $M_2 = 1.5\pm0.7~M_\odot$, and surface temperature $T_2 = 6400$~K.\\ \textbf{- KIC 8410637:} This system was the only EB previously known to host a RG displaying global oscillations \citep{hekker2010}. Its light curve is one of the most challenging among the set of EBs to model because of a high eccentricity ($e=0.67$), coupled with relatively small stellar radii compared to the semi-major axis, $(R_1+R_2)/a = 3.67\,\%$. We confirm the system is composed of a RG and an F star, whose respective sizes and masses are $(R_1, M_1) = (11.0\pm0.5 R_\sun,1.6\pm0.2 M_\sun)$ and $(R_2, M_2) = (1.6\pm0.1 R_\sun,1.8\pm0.7 M_\sun)$. \item[Probable $\delta$-Scuti companion.] 
The system KIC 4663185 presents a 57-day orbital period and is the only one from our sample that has a power spectrum with two clear sets of oscillations. On the one hand, RG oscillations are observed at $\nu\ind{max} = 23\,\mu$Hz: this corresponds to a rather massive RGB star with radius $18.2\pm1.2~R_\odot$ and mass $2.2\pm0.4~M_\odot$. On the other hand, a second oscillation spectrum is observed at $\nu\ind{max} \simeq 132~\mu$Hz (Fig. \ref{fig_de_trop}). These mode amplitudes are too high and the mode widths too narrow, i.e., the lifetimes are too long, to match a RG oscillation spectrum. The oscillation spectrum with $\nu\ind{max} \simeq 132~\mu$Hz is consistent with a $\delta$-Scuti or $\beta$-Cep star p-mode spectrum. However, a $\beta$-Cep star is probably too massive ($\sim10~M_\sun$) to have evolved together with an RGB star of current mass $2.2~M_\sun$. Regarding the light curve, only one type of eclipse is detected, and the eclipses appear to be shallow and grazing. We associate the detected eclipses with the secondary transit, i.e., the companion star is eclipsed, since the $\delta$-Scuti is likely to present a higher surface brightness than the RG. With this hypothesis, we put an upper limit on the companion's size of $R_2 < 1.5\pm0.1~R_\sun$. The companion's size estimate is consistent with a main-sequence star of mass $\sim1.6 M_\sun$, which fits with the interpretation that the secondary could be a $\delta$-Scuti star. In the future, we will obtain ground-based spectroscopic observations, and attempt to match the oscillation frequency spectrum with stellar pulsation models to confirm this hypothesis and better constrain the parameters for the secondary. \item[RG companion.] The system KIC 9246715 is a 171.3-day EB showing RG oscillations with a low signal-to-noise ratio compared to the other oscillating RGs, despite it being the brightest system (K$_p = 9.266$). 
From eclipse modeling it appears that this system has an eccentric orbit ($e =0.35$) and is composed of a pair of RGs of similar radii and temperature ($R_2/R_1 = 0.77$, $T_2/T_1 = 1.03$). However, we detect only one oscillation pattern ($\nu\ind{max} = 102~\mu$Hz), whose asteroseismic parameters correspond to a RG of $7.7\pm0.4~R_\sun$ and $1.7\pm0.3~M_\sun$. The large separation indicates it is a RGB star. From this, we deduce the companion's radius and mass to be $R_2=5.3\pm0.3~R_\sun$ and $M_2 = 0.8\pm0.7~M_\sun$. No pulsations from the companion star are detected: it likely pulsates with a lower signal-to-noise ratio in the same frequency range, since the companion's frequency at maximum amplitude is expected to be $\nu\ind{max} = 96 _{-84}^{+133}~\mu$Hz. In addition, the autocorrelation of the light curve reveals the presence of a sine modulation with a period of about 3 days and a mean peak-to-peak amplitude of about 100 ppm. \item[Hierarchical triple system.] The system KIC 7955301 displays high signal-to-noise ratio ETVs (4-h amplitude, Figure \ref{fig_ETV}) and clear RG pulsations, typical of an RGB star, with $1.2\pm0.1~M_\sun$, $5.9\pm0.2~R_\sun$, and a core rotation period of 30 days. From the eclipse timings, we are able to infer that the RG takes 210 days to orbit the more compact 15-day period EB. The components of this system are likely to interact strongly as indicated by the high and complex amplitude of their ETVs. The asymmetrical shape of the ETV curve with respect to a sine curve suggests the RG orbit is highly eccentric. In addition, the variable eclipse depth as a function of time, on a period of about 1800 days, suggests that the orbital plane of the EB pair is precessing due to tidal interactions with the RG. \end{description} \subsubsection{Promising cases} \label{sec_433} We sort these potentially fundamental cases by increasing orbital period. 
\begin{description}\itemsep 0cm \item[KIC 5179609:] This system has a value of $q$ in a plausible range (0.6) with a 43.9-day period. Unfortunately, the global modes of the RG are largely at frequencies higher than the Nyquist frequency, so we are only able to properly measure the mean large separation. Asteroseismic scaling laws give the RG's mass and radius as $1.3\pm0.1~M_\sun$ and $3.7\pm0.1~R_\sun$. In addition, we do not find a detectable signature of the secondary eclipse in this EB. Given that the standard deviation of the light curve is 164 ppm and the primary eclipse depth is rather significant (0.92\,\%), the ratio $R_2^2 T_2^4/(R_1^2 T_1^4 + R_2^2 T_2^4)$ is lower than 1.78\,\%. This implies a small or cool companion star. For simplicity, we can model the system as an exoplanetary system, i.e., as a dark planet eclipsing a star. With such an assumption, the radius of the companion would be $0.4 R_\sun$, which could then be an M, L, or brown dwarf. One quarter of \textit{Kepler} data at short cadence would allow us to better constrain this system and make it a cornerstone for testing asteroseismology. It deserves further study as it hosts the smallest companion star of our sample. \item[KIC 8095275:] Highly eccentric but non-eclipsing binary systems have been observed by \textit{Kepler}, and are commonly referred to as ``heartbeat stars'' due to the resemblance of their light curves to an electrocardiogram (Fig. \ref{fig_de_trop}; \citealt{Welsh_2011,Thompson_2012}). In these cases, the photometric variability is due to tidally induced distortions generated by the companion star as it passes close to the pericenter. The system KIC 8095275 presents a 23-day orbital period, a clear heartbeat signal of about 1\,\% amplitude, and RG oscillations that correspond to a radius of $7.6\pm0.2 ~R_\sun$ and mass of $1.2\pm0.1~M_\sun$. 
A more detailed model of the light curve coupled with radial velocity measurements should aid us in characterizing this system. We note that it was also detected in parallel to this work and publicly reported on the Internet, even though it is not yet mentioned in any peer-reviewed paper.\footnote{http://keplerlightcurves.blogspot.com/2012/09/three-giant-heartbeats.html} \item[KIC 4732015 and 10991989:] These D systems both present clear ETVs with periods on the order of the observation duration (1050 and 544 days, respectively) with a rather small amplitude (6.9 and 3.9 min). Both systems are probably hierarchical triple systems that have a more distant RG orbiting a pair of close main-sequence stars (0.93 and 0.97 day periods, respectively). However, these systems are significantly different physically. KIC 4732015 is an RGB star and is the biggest RG in our sample, with $(R_1, M_1) = (33.0\pm5.1~R_\sun,1.5\pm0.5~M_\sun)$, while KIC~10991989 is a quiet, massive RC2 star with $(R_1, M_1) = (9.6\pm0.5~R_\sun,2.5\pm0.4~M_\sun)$. \item[KIC 5640750:] This system is the only one whose orbital period is longer than the total observation time (Q0--Q13; $P\ge 1149.5$~days). It appears that the eclipse shape is similar to those of RGs orbiting F type companions. Modeling the light curve yields a degeneracy between the orbital period and the eccentricity because we see only one primary and one secondary eclipse so far. For our purpose, however, the temperature and radius ratios may still be estimated even if the mass and semi-major axis are not. To model the light curve, we assumed the orbital period to be twice the time between the primary and secondary eclipses and set the argument of the periastron to $\omega = \pi/2$, because such an assumption leads to the lower bound on the eccentricity. From asteroseismic measurements, the RG has radius and mass of $(R_1, M_1) = (14.3\pm0.8~R_\sun,1.4\pm0.2~M_\sun)$. 
Coupling this result with the preliminary eclipse modeling indicates that the companion seems to be an F star with radius $R_2 = 1.8\pm0.1~R_\sun$ and temperature $T_2=6497$~K, on an elliptical orbit with $e>0.18$. The high value of the RG's asteroseismic mass ($3.2\pm1.0~M_\sun$) suggests its period and semi-major axis are largely underestimated, which also suggests a higher eccentricity. \end{description} \begin{figure} \epsscale{2} \plottwo{f18-eps-converted-to.pdf}{f19-eps-converted-to.pdf} \caption{Top: oscillation spectrum of the RG's companion of KIC 4663185, which is likely a $\delta$-Scuti star. Bottom: folded light curve of the ``heartbeat'' star KIC 8095275, which is the only one from our sample that is a RG in a non-eclipsing binary. Black crosses indicate the folded signal, while the red line indicates the same signal rebinned into 30-min bins.} \label{fig_de_trop} \end{figure} \subsubsection{Intriguing cases} \begin{description}\itemsep 0cm \item[KIC 3532985 and 6762188:] These D systems both present ETVs with short periods (29 and 35 days, respectively) of small amplitude (3.2 and 6.5 min, respectively), with asymmetrical shapes. At first glance, the ETVs look like artifacts, but several methods to determine the eclipse timing were used, and all yielded ETVs in these systems but not in the other targets. The RGs associated with these systems are red clump stars: KIC 3532985 is a RC2 with $(R_1, M_1) = (7.8\pm0.2~R_\sun,1.7\pm0.1~M_\sun)$, while KIC 6762188 is a solar-mass RC1 with $(R_1, M_1) = (10.3\pm0.5~R_\sun,1.0\pm0.1~M_\sun)$. \item[KIC 8255058:] This D system presents the most puzzling ETV features from our sample. It appears similar to KICs 3532985 and 6762188, but its ETVs actually split into two components. We observe a rapid oscillation of period equal to the EB orbital period, with 5-minute amplitude, as well as a long period (868 days) of similar amplitude (6.7 min). 
The RG is an RGB star and is one of the smallest RGs in our sample with $(R_1, M_1) = (3.9\pm0.1~R_\sun,1.4\pm0.1~M_\sun)$. \item[KIC 7690843 and 7031714:] These SD systems both present clear ETVs with 75 and $> 1000$ day periods, respectively. The amplitudes of the ETVs are weak, on the order of a few minutes, i.e., shorter than Kepler's long-cadence sampling. However, KIC 7031714 presents evidence for stellar crowding and blending. \item[KIC 11135978 and 9181877:] These OC systems display very slowly-varying ETV trends ($>1400$ day) with low signal-to-noise ratios in amplitude. One would not expect that a third body could significantly perturb the orbital equilibrium of the two contact stars. \item[KIC 7377422, 4569590, 3955867, 9291629, \& 7943602:] These detached systems present orbital periods, eclipse depths, and eclipse shapes similar to the eight strongest RG/EB candidates where we detect RG pulsations (see Figures \ref{fig_lcmod} and \ref{fig_lcmodbis}). Oddly, the light curves of these five systems do not show pulsations. Their peculiarity, with respect to the pulsating RG/EB candidates, is their high quasi-periodic photometric variability (between 10 and 30\,\%). This suggests that either their modes are buried in the noise of photometric variations, or that their mode frequencies are above the Nyquist frequency. Alternatively, the fact that no solar-like oscillations are detected in these objects that display significant stellar variability agrees with \citet{Chaplin_2011a}, which suggests that stellar activity inhibits the amplitude of solar-like oscillations. This scenario does assume that the stellar variability originates from the RG and not the companion, which is unknown. \end{description} \section{Conclusion and prospects}\label{sec_5} We have identified a set of potentially useful targets to test theories of stellar evolution. 
We are confident that 13 (of 70) systems cross-listed from the RG and EB \textit{Kepler} catalogs are bona-fide pulsating red giants in eclipsing binaries. One is a red giant in a non-eclipsing binary system, and an additional five are likely red giants in eclipsing binaries even though we do not detect their pulsations. It is likely that at least 11 other systems are candidates for belonging to three-body configurations composed of a pair of eclipsing main-sequence stars and a red giant. Oddly, we detect no oscillations in 23 of the red giant stars in candidate RG/EB systems across all classes of binaries. We identify several reasons that may explain this. First, the star could be misidentified and is not a giant. Second, the treatment used for removing the eclipse signature may suppress any oscillations at very low frequency. Also, the giants could actually be subgiants or low-luminosity red giants pulsating at frequencies beyond the Nyquist frequency. Finally, some phenomena in the interactions of a multiple system may strongly damp the global oscillations of the RG \citep{Fuller_2013}. Therefore, in total, we believe many of the RG/EB candidates deserve to be followed up with short-cadence \textit{Kepler} observations. This would allow global modes to be detected in a red giant for which $\nu\ind{max}$ is larger than the long-cadence Nyquist frequency. Ideally, this would allow for pulsations of the main-sequence companions to be measured as well; however, the contribution to the total light from the companion star is only several percent in the best cases, making such detections very challenging in practice. Spectroscopic measurements from the ground will certainly help in understanding these systems too. First, the identification of overlapping spectra could indicate whether the system is indeed composed of a RG and a main-sequence star. 
Second, the measurement of radial velocities is a way of extracting accurate masses for the system that can be cross-checked with the asteroseismic inferences. Third, it is a way to refine the estimate of the stellar parameters, which are currently based on color photometry. We have already started observations of a subset of these 70 stars with the ARCES \'echelle spectrometer (resolution $R = 30\,000$) at the Apache Point Observatory (APO), New Mexico. Other targets from our sample are currently being observed by the APOGEE spectrometer on the Sloan Digital Sky Survey (SDSS) telescope at APO, in the context of the APOKASC program to support \textit{Kepler} observations in asteroseismology. This effort is coordinated by teams from the APOGEE project and the Kepler Asteroseismic Science Consortium (KASC). In addition, the KASC Working Group 8 is studying in detail the systems KIC 8410637, 5640750, and 9540226. If more precise estimates of the stellar parameters can be obtained and coupled with eclipse information and asteroseismology, these RG/EB systems have the potential to become some of the most accurately studied stars. \acknowledgements We gratefully acknowledge support from the Los Alamos National Laboratory Institute of Geophysics and Planetary Physics subcontract 150623-1. Part of this work was also funded by the \textit{Kepler Guest Observer Program} Cycle 2 project GO 20011, and by a NASA EPSCoR award to NMSU under contract \#~NNX09AP76A. We thank J.~Orosz for guidance using ELC, G. Vigeesh for guidance using JKTEBOP, and S. A. T. Appourchaux for science reviewing. \bibliographystyle{apj}
\section{Introduction} Sociability confers survival advantage~\cite{Dunbar03,Silk07}. As a result, humans have evolved a set of specialized skills for maintaining a large number of complex social connections~\cite{Dunbar,Herrmann07}. They use these skills daily to link to other individuals and then exploit the resulting social network for personal advantage. Social scientists have long been interested in the role of network structure in the performance of individuals and organizations. A pair of classic theories has linked position within a network to successful outcomes for individuals. Burt~\cite{Burt95,Burt04} attributed individuals' success (better job outcomes, higher compensation) to their brokerage positions within the network. These positions link distinct communities, thereby exposing brokers to diverse and novel information that others within their community may not have. Granovetter~\cite{granovetter1973} similarly argued that novel information tends to flow to us via weak ties, i.e., acquaintances from other communities with whom we interact infrequently. Our strong ties, on the other hand, are close friends who belong to the same community, and therefore have the same information we do. Aral \& Van Alstyne~\cite{Aral11} linked the two theories, arguing that positions of greater network diversity, i.e., linking to others who are not otherwise connected, are associated with a lower rate of interactions (low channel bandwidth). They further demonstrated, by analyzing a corpus of email communications, that this trade-off limits the amount of diverse and novel information that individuals in brokerage positions receive, though they did not speculate as to the origin of this trade-off. Recently, social media has emerged as an important platform for social interaction and information exchange. On social media sites, such as Twitter and Digg, users create social networks by subscribing to, or following, other users. 
When a user posts a message, or a hyperlink to content they found online, this message is broadcast to all followers, who may themselves choose to share it with their own followers, and so on, enabling the message to spread over the social network. Social media provides us with new data for testing and generalizing information brokerage theories. In this paper we study the interplay between network structure, user activity and information content. Despite significant differences between online social networks and the email networks studied by Aral \& Van Alstyne, we validate the main conclusion of their study, namely the trade-off between network structure and channel bandwidth. We find that the diversity of information that social media users are exposed to via friends depends on their position in the network. Users embedded within a community of strongly tied individuals are likely to share information on topics that other community members are interested in, while users in brokerage positions that bridge different communities receive information on more diverse topics. Users can increase their access to novel information by adding more friends. However, by adding friends, they also increase the quantity of information they are exposed to, often beyond their capacity to process it~\cite{Hodas12socialcom}. Furthermore, increasing the quantity of information does not necessarily increase its novelty or diversity, since network structure has important implications for information diversity. The paper makes the following contributions. In Section~\ref{sec:data}, we describe the data we use from the social news aggregator Digg, and define a set of network and information variables we use to characterize access to information in this network. In Section~\ref{sec:networkstructure}, we investigate the relation between the structure of the social network, the content of information, and the activity of users and their friends. 
We study how the amount of novel information available to the user, specifically, the number of distinct news stories, depends on network structure and user activity. We test the existence of a trade-off between network diversity and friends' activities in online social networks. Our study suggests that cognitive and structural bottlenecks limit access to novel information in online social networks. \section{Data and Methods} \label{sec:data} Social news aggregator Digg allows registered users to submit links to news stories and other users to vote for the stories they find interesting. Digg also allows users to follow the activity of other users. The follow links are not necessarily reciprocated: a user $b$ who follows user $a$ can see the messages $a$ posts, but not vice versa. We refer to $a$ as the \emph{friend} of $b$, and $b$ as the \emph{follower} of $a$. A user's social stream shows the stories his friends submitted or voted for. When the user votes for any story, this vote is broadcast to all of his followers, who can themselves see it in their social streams. At the time the datasets were collected, users were submitting tens of thousands of stories, from which Digg selected a handful (about 100 each day) to promote to its front page based on how popular the story was in the community. Before a story is promoted to the front page, it is visible in the upcoming stories queue and to the submitter's followers via their social stream (friends' interface). With each new vote, the story becomes visible to the voter's followers. \subsection{Data processing} We analyzed two data sets collected from Digg. The 2009 data set~\cite{Lerman10icwsm} contains information about the voting histories of 3.5K stories promoted to the Digg front page in June 2009, and contains 2.1 million votes by 70K users. The follower graph of these voters contains 1.7 million social links.
At the time this dataset was collected, Digg was assigning stories to one of 8 topics (Entertainment, Lifestyle, Science, Technology, World \& Business, Sports, Offbeat, and Gaming) and one of 50 subtopics (World News, Tech Industry News, General Sciences, Odd Stuff, Movies, Business, Politics, etc.). The 2010 data set~\cite{sharara:icwsm11} contains information about voting histories of 11,942 users over a six-month period (Jul--Dec 2010). It includes 48,554 stories with 1.9 million votes. The follower graph contains 1.3 million social links. At the time the data was collected, Digg assigned stories to 10 topics (Entertainment, Lifestyle, Technology, World News, Offbeat, Business, Sports, Politics, Gaming and Science), replacing the 2009 ``World \& Business'' category with ``World News,'' ``Business,'' and ``Politics''. We examine only the votes that the story accrues before promotion to the front page. During that time, it propagates mainly via friends' recommendations. After promotion, users are likely to be exposed to the story through the front page, and vote for it independently of friends' recommendations. In the 2009 data set, 28K users voted for 3,553 stories and in the 2010 data set, 4K users voted for 36,883 stories before promotion. We focused the data further by selecting only those users who voted at least 10 times, resulting in 2,390 users (who voted for 3,553 stories) in the 2009 data set and 2,330 users (who voted on 22,483 stories) in the 2010 data set. \subsection{Definition of Variables} Following Aral \& Van Alstyne~\cite{Aral11,aral2012anatomy} we define a set of variables to characterize access to information in networks. \remove{ \begin{table*} \center \begin{tabular}{|l|l|c|} \hline \textbf{Symbol} &\textbf{ Variable } & \textbf{Definition} \\ \hline $S_i$ & num.
active friends & $S_i=\sum_{u_j \in N^{frd}_i}{\delta(u_j)}$\\ $ND_i$ & network diversity & $ND_i=1-\frac{ |\{ e_{jk}: u_j, u_k \in N_i, e_{jk} \in E \} |} { |N_i| (|N_i| -1)} $ \\ $O_i$ & volume of outgoing info. & num. initiations ($O_{i}^s$) + num. adoptions ($O_{i}^a$) made by $u_i$ \\ $I_i$ & volume of incoming info. & $I_i=\sum_{k=1}^{N^{frd}_i}O_k$\\ $B_i$ & avg friend activity& $B_{i}= \frac{I_{i}}{S_{i}}$\\ $uB_i$ & num. adoptions & $uB_{i}= O_{i}^a$\\ $TD_i$ & friend topic diversity & $TD_{i}= \frac{ \sum_{j=1}^{N^{frd}_i} \sum_{k=1}^{N^{frd}_i} (1-Cos(\theta_{jt},\theta_{k}))}{S^{2}_{i}} $\\ $NRI_i$ & novel information & num. distinct stories received\\ $NRI^{frds}_i$ & novel info. potential & num. distinct stories received by friends of user $i$ \\ $R_i$ & novel info. rate & $R_i=NRI_i/I_i$ \\ $NAR_{i}$ & novel info. adoption rates & $NAR_{i} = O_{i}^a/NRI_{i}$ \\ $FNAR_{i} $ & friend novel info. adoption rates & $FNAR_{i} = NRI_{i}/NRI^{frds}_i$ \\ \hline \end{tabular} \center \caption{Variables used in the study and their definitions.} \label{tbl:definitions} \end{table*} } \begin{table} \center \begin{tabular}{|l|l|} \hline \textbf{Variable} &\textbf{ Description } \\ \hline $S$ & number of active friends \\ $ND$ & network diversity \\ \hline $O$ & volume of outgoing info. (\# votes by user) \\%& num. initiations ($O_{i}^s$) + num. adoptions ($O_{i}^a$) made by $u$ \\ $I$ & volume of incoming info. (friend recommendations)\\%& $I=\sum_{k=1}^{N^{frd}}O_k$\\ $B$ & avg friend activity \\ $uB$ & user activity (\# adopted recommendations) \\ $TD$ & friend topic diversity \\ \hline $NRI$ & novel information \\%& num. distinct stories received\\ $NRI^{frds}$ & novel information friends are exposed to \\%& num.
distinct stories received by friends of user $i$ \\ $NAR$ & fraction of novel information adopted by user \\%& $NAR_{i} = O_{i}^a/NRI_{i}$ \\ $FNAR $ & fraction of novel information adopted by friends \\%& $FNAR_{i} = NRI_{i}/NRI^{frds}$ \\ \hline \end{tabular} \center \caption{Variables used in the study.} \label{tbl:definitions} \end{table} \subsubsection{Network Variables} A social network can be represented as a graph $G = (U,E)$ consisting of a set of users $U$ and a set of edges $E$ between them. An edge $e_{ij} \in E$ exists if user $u_i$ follows user $u_j$. While in traditional social networks friendship links are reciprocated, resulting in an undirected graph, online social networks (e.g., Twitter and Digg) form a directed graph. This allows users to follow people with certain interests without having a reciprocal relationship. The neighborhood $N_i$ of user $u_i$ consists of both friends $N^{frd}_i$ and followers $N^{fol}_i$ of $u_i$. \paragraph{Network Size} Network size is an important variable that shows the breadth of contacts each user has. We define the size of $u_i$'s network, $S_i$, as the number of friends from whom user $u_i$ received messages during a certain time period $\Delta T$, which we take to be the time over which data was collected. Since not all friends were active during that period and thus had a chance to influence $u_i$'s votes, we focused on active friends, i.e., friends who had recommended stories during $\Delta T$. Therefore, network size is defined as \begin{equation} \begin{aligned} S_i=\sum_{u_j \in N^{frd}_i}{\delta(u_j)} \end{aligned} \end{equation} \noindent where $\delta(u_j)$ is one if and only if $u_j$ voted at least ten times during the time period $\Delta T$ and zero otherwise. Note that ten is the minimum number of messages to cover all topic categories in the 2009 and 2010 data sets. \paragraph{Network Diversity} Network diversity of user $u_i$ represents how many otherwise unconnected neighbors $u_i$ interacts with.
We measure network diversity using the local clustering coefficient \cite{watts1998small}, $C_i$, which quantifies how often the neighbors of $u_i$ are linked together (regardless of the direction of the edge): \begin{equation} \begin{aligned} {C_i= \frac{ |\{ e_{jk}: u_j, u_k \in N_i, e_{jk} \in E \} |} { |N_i| (|N_i| -1)} } \end{aligned} \end{equation} \noindent where $N_i$ is the set of neighbors of user $u_i$ and $|N_i|$ is the number of neighbors. The total number of possible connections among neighbors is $|N_i| (|N_i|-1)$. A high clustering coefficient implies low network diversity, and vice versa. Therefore, we define network diversity of user $u_i$ as $ND_i=1-C_i$. Aral \& Van Alstyne~\cite{Aral11,aral2012anatomy} defined network diversity as the lack of structural holes using the first and second order dimensions of link redundancy. We prefer to follow the definition of Watts et al.~\cite{watts1998small}, since clustering coefficients are more evenly distributed over the range from 0 to 1. \subsubsection{User Activity Variables} Access to information in a social network depends on the activity levels of users. In friendship networks, the strength of a tie reflects the frequency and intensity of interaction between a pair of individuals~\cite{granovetter1973}. Close friends --- strong ties --- interact more frequently than acquaintances (weak ties). In their analysis of email communications, Aral \& Van Alstyne used the quantity \emph{channel bandwidth} to represent the strength of a tie. They defined bandwidth as the number of messages sent across the tie. One-to-many directed broadcasts of social media differ in nature from email communication. We find it useful to separate activities into incoming messages and outgoing messages. \paragraph{Average Friend Activity} In social media, friends' activity determines the total volume of incoming information $I_{i}$ over a time period $\Delta T$.
We measure $I_i$ as the total number of stories friends of user $u_i$ recommended, i.e., voted for, during the time period $\Delta T$. Hence, we define the average (per link) volume of incoming information during $\Delta T$ as: \begin{equation} \begin{aligned} {B_{i}= \frac{I_{i}}{S_{i}} } \end{aligned} \end{equation} \paragraph{User Activity} Most social media sites, including Digg and Twitter, display items from friends as a chronologically sorted list, with the newest items at the top of the list. A user scans the list and, if he finds an item interesting, he may share it with his followers, e.g., by voting for it. He will continue scanning the list until he loses interest, gets bored or distracted~\cite{Hodas12socialcom}. When the user gets bored, he may start to seek out new information from outside his social network and recommend it to all his followers. User $u_i$'s activity is the sum of the number of new stories $O_{i}^s$ (seeded messages) the user discovered outside of his network by browsing the Web and other sections of Digg, and the number of stories $O_{i}^a$ he adopted from friends' recommendations. In the analysis presented in this paper, we focus on the component of user activity that corresponds to adoption events, i.e., cases where the user votes for a story after a friend has recommended it. Therefore, we measure the activity of the user $u_i$ as the number of adoptions the user made during the time period $\Delta T$: \begin{equation} \begin{aligned} {uB_{i}= O_{i}^a} \end{aligned} \end{equation} \subsubsection{Information Variables} We model information content in a user's network using the topic diversity of information and the total volume of novel information.
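As a concrete illustration of the network and activity variables defined above, the following sketch computes $S$, $I$, $B$ and $ND$ for a single user, assuming a hypothetical event log of (voter, story) records and a list of directed (follower, friend) edges; neither layout is the actual Digg dump format.

```python
from collections import Counter

def activity_variables(follows, votes, user, min_votes=10):
    """Compute S (active friends), I (incoming volume) and B = I / S
    for one user. `follows` holds (follower, friend) edges and `votes`
    holds (voter, story) events; both layouts are hypothetical."""
    friends = {b for (a, b) in follows if a == user}
    votes_per_user = Counter(v for (v, _) in votes)
    # Active friends: friends who voted at least `min_votes` times.
    active = {f for f in friends if votes_per_user[f] >= min_votes}
    S = len(active)
    I = sum(votes_per_user[f] for f in active)  # incoming recommendations
    return S, I, (I / S if S else 0.0)

def network_diversity(follows, user):
    """ND = 1 - local clustering coefficient; neighbors are friends plus
    followers, and edges among neighbors count regardless of direction."""
    nbrs = ({b for (a, b) in follows if a == user} |
            {a for (a, b) in follows if b == user})
    n = len(nbrs)
    if n < 2:
        return 0.0  # convention for users with fewer than two neighbors
    links = sum(1 for (a, b) in follows if a in nbrs and b in nbrs)
    return 1.0 - links / (n * (n - 1))
```

Counting directed edges among neighbors against the $|N_i|(|N_i|-1)$ denominator mirrors the clustering-coefficient formula above.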
\begin{figure}[] \begin{center} \begin{tabular}{c} \includegraphics[width=0.95\linewidth]{figs/percentage_of_stories.pdf} \end{tabular} \end{center} \caption{Topic distribution in the 2009 and 2010 data sets.} \label{fig:topicPercentage} \end{figure} We use the topics Digg assigns to each story to represent its content. Figure~\ref{fig:topicPercentage} shows the distribution of Digg-assigned topics in our data sets, i.e., the percentage of stories assigned to each topic. In both data sets, ``Offbeat,'' ``Entertainment,'' ``Lifestyle'' and ``Technology'' were the most popular topics, while ``Sports'' and ``Gaming'' were the least popular topics. Overall, there is no dominant topic in either data set, and the popularity rankings of the topics are almost identical. The topics assigned to stories by Digg provide useful evidence for identifying a user's topic preferences. We represent user $u_i$'s topic interest vector $\theta_{i}$ by computing the fraction of votes he made on each topic. \paragraph{Topic Diversity} The variance of the topics to which a user is exposed by his friends has important implications for modeling information in a social network. Topic diversity of a user's network neighborhood measures the variance of friends' topic interests: when most friends have orthogonal interests, topic diversity will be high, whereas when most friends have similar topic interests it will be low. Diversity can be captured in several different ways. Aral \& Van Alstyne~\cite{Aral11,aral2012anatomy} defined topic diversity as the average cosine distance of friends' topic vectors and their mean topic vector aggregated over all friends. Based on our experiments, Aral \& Van Alstyne's measurement is not able to capture topic diversity correctly for users with the same mean (based on friend topic vectors) but different numbers of friends. Instead, we define a user $u_i$'s topic interest vector $\theta_i$ in terms of the Digg-defined categories.
Each component of $\theta_i$ represents the fraction of all votes made by $u_i$ on stories belonging to that category. Then, we define topic diversity of a user's network by averaging pair-wise cosine distances of friends' topic interest vectors. \begin{equation} \begin{aligned} TD_{i}= \frac{ \sum_{j=1}^{S_i} \sum_{k=1}^{S_i} (1-Cos(\theta_{j},\theta_{k}))}{S^{2}_{i}} \end{aligned} \end{equation} \paragraph{Novel Information} The total amount of novel information is another important measure of the information content of networks. In many social media services, the same message or a piece of information can be recommended multiple times by multiple friends. Since most social media services provide a unique identifier for each message (e.g., original tweet id on Twitter or story id on Digg), we can measure the amount of novel information that a user is exposed to during a time period $\Delta T$ by counting the number of distinct messages, or stories on Digg, to which the user's friends expose him. Following Aral \& Van Alstyne, we refer to this quantity as $NRI_i$, or \emph{non-redundant information}, although in Aral \& Van Alstyne's studies, this quantity was not measured directly but derived from topic diversity and friend activity. In addition to the amount of novel information, we can also measure the novel information rate in a user's social network as $R_{i} = NRI_{i}/I_{i}$. Of the total volume of novel information ($NRI_i$) $u_i$ is exposed to through friends' recommendation activities, $u_i$ adopts a subset $O_{i}^a$ based on the topic or popularity of the information. We measure the novel information adoption rate by $NAR_{i} = O_{i}^a/NRI_{i}$. \paragraph{Novel Information Potential} In social media, a user's access to novel information is mainly determined by the activities of his friends.
We introduce a new variable to define the volume of novel information that a user could potentially be exposed to if his friends adopted all the information they themselves were exposed to. We measure $NRI^{frds}_i$, the potential amount of novel information $u_i$ could access, by counting the number of distinct stories that all friends of $u_i$ are exposed to. While the friends of $u_i$ have access to $NRI^{frds}_i$ novel information, they adopt a subset of this information based on their interests, exposing $u_i$ to $NRI_{i}$ novel information. We measure the friend novel information adoption rate by $FNAR_{i} = NRI_{i}/NRI^{frds}_i$. \section{How Structure and Activity\\ Shape Information Access} \label{sec:networkstructure} Sociologists have long noted a relationship between the structure of a social network and the frequency and intensity of interactions between two people, which they call the strength of a tie. Granovetter~\cite{granovetter1973} argued that social tie strength can be estimated from the local network structure of the two people, specifically, the number of common neighbors they have. Subsequently, a study of a massive mobile phone network established a correlation between the frequency and duration of phone calls (one measure of the strength of a tie) and the fraction of common neighbors the callers have~\cite{onnela2007structure}. This makes sense: close friends not only interact with each other frequently and intensely, but are also likely to move in the same social circles and, therefore, to share common friends. On the other hand, weak ties, or acquaintances, are pairs of people who interact infrequently. They are likely to come from different social circles or communities, and, therefore, do not share common friends. The relationship between tie strength and access to novel information is more subtle.
Though weak ties deliver novel information~\cite{granovetter1973}, for example, new job prospects, the volume of communication along these ties is low, and so is their potential to deliver novel information. This was confirmed by Aral \& Van Alstyne's analysis~\cite{Aral11,aral2012anatomy} of email communication within a corporate recruiting firm. They showed that structurally diverse networks provide access to diverse and novel information, though the positive effects of structural diversity are offset by lower volumes of communication (bandwidth), what they call the ``diversity--bandwidth trade-off.'' To date, little is known about how these factors operate in online social networks and how they compare to real-world and email networks. Ties in online social networks, including Digg, are often non-reciprocal, with users sharing messages with both friends they know in real life and strangers. We explore how users can broaden their access to information by controlling their position within the network and their activity level. \subsection{Access to Information} In the study of email communication within a corporate recruiting firm, Aral \& Van Alstyne observed that both the total volume of novel information ($NRI_i$) flowing to recruiters and its diversity ($TD_i$) increased with their network size, network diversity and channel bandwidth (the number of emails they received along each tie). We tested whether the same conclusions hold for the online social network of Digg: specifically, whether larger ($S_i$) or more structurally diverse ($ND_i$) networks, or higher friend activity ($B_i$), are likely to deliver more novel information ($NRI_i$) and topically diverse information ($TD_i$).
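Before examining these effects empirically, the information variables defined in Section~\ref{sec:data} can be sketched in code. The routines below follow the definitions of $TD_i$, $NRI_i$, $R_i$ and $NAR_i$; the record layouts (topic vectors as plain lists, recommendations as (friend, story) pairs) are hypothetical conventions for this sketch.

```python
import math

def cosine(p, q):
    """Cosine similarity of two topic interest vectors."""
    dot = sum(x * y for x, y in zip(p, q))
    norm_p = math.sqrt(sum(x * x for x in p))
    norm_q = math.sqrt(sum(y * y for y in q))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

def topic_diversity(friend_topic_vectors):
    """TD: average pairwise cosine distance over all ordered pairs of
    friends (including j == k), matching the S^2 normalization."""
    s = len(friend_topic_vectors)
    total = sum(1 - cosine(p, q)
                for p in friend_topic_vectors
                for q in friend_topic_vectors)
    return total / (s * s) if s else 0.0

def novel_information(friend_votes, adoptions):
    """friend_votes: (friend, story) recommendations the user receives;
    adoptions: set of recommended stories the user voted for.
    Returns (NRI, R, NAR)."""
    I = len(friend_votes)
    nri = len({s for (_, s) in friend_votes})   # distinct stories = NRI
    rate = nri / I if I else 0.0                # novel information rate R
    nar = len(adoptions) / nri if nri else 0.0  # adoption rate NAR
    return nri, rate, nar
```

Note that counting distinct story identifiers measures novelty directly, rather than deriving it from topic diversity and bandwidth as in Aral \& Van Alstyne's studies.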
\subsubsection{Effect of Network Size} \begin{figure}[tbh] \begin{center} \begin{tabular}{c} \\ \includegraphics[width=0.9\linewidth]{figs/SandNRI.png} \\ \end{tabular} \end{center} \caption{Amount of novel information ($NRI_i$) a user is exposed to as a function of the number of active friends ($S_i$) in the 2010 Digg data set. The line represents the total number of distinct stories in the data set. } \label{fig:nobodyaccess} \end{figure} One of the simplest ways users can control their position within a network is by adding friends. But does having more friends improve access to information in online social networks? We study how the volume of novel information a user can access, which we measure by the number of distinct stories to which the user is exposed by friends on Digg, varies with the number of friends. Figure~\ref{fig:nobodyaccess} shows the volume of novel information ($NRI$) users can access as a function of the number of friends ($S$). The amount of novel information increases as users add more friends, but saturates quickly. Surprisingly, no single user had access to all the information available in the network (shown as a line in Figure~\ref{fig:nobodyaccess}). The highest number of distinct stories any user was exposed to in the 2010 Digg data set was 29,558, or 80\% of the total (36,883 distinct stories). It appears that adding more friends in an online social network improves access to novel information, but very quickly, after about 100 friends, the returns diminish sharply, since doubling the number of friends raises the volume of novel information by only a few percentage points. \subsubsection{Effect of User Activity} \begin{figure}[tbh] \begin{center} \begin{tabular}{c} \\ \includegraphics[width=0.9\linewidth]{figs/2010_NRI_pot_NRI_and_B_.pdf} \\ (a) \\ \includegraphics[width=0.9\linewidth]{figs/redundancy_.pdf}\\ (b) \\ \end{tabular} \end{center} \caption{Novel information in a user's network in the 2010 Digg data set.
(a) The total amount of novel information that a user's friends ($NRI^{frds}$) and the user ($NRI$) are exposed to as a function of average friend activity (or channel bandwidth $B$). Solid symbols show smoothed data, and the line represents the total amount of information in the network (number of distinct stories in the data set). (b) Novel information rate as a function of friend activity.} \label{fig:activity} \end{figure} In addition to creating new social links, a user can choose to link to more active users in order to improve his access to information in a social network. Does having active friends, i.e., friends who recommend many stories, lead to greater access to novel information? \figref{fig:activity} shows the volume of novel information in a user's network as a function of average friend activity (referred to as channel bandwidth by Aral \& Van Alstyne). \figref{fig:activity} (a) shows the amount of novel information that a user's friends are exposed to ($NRI^{frds}_i$). The solid line represents the total amount of information in the network, i.e., distinct stories in the data set. The potential amount of novel information rises quickly as a function of friend activity, approaching the network-wide maximum. However, the amount of novel information to which the user is exposed is just a fraction of this maximum, as shown in \figref{fig:activity} (a). Interestingly, the amount of potential novel information and novel information available to the user both decrease as friend activity grows past 2,000. Our results indicate that while linking to more active users does initially improve access to novel information in a social network, after a certain point, higher friend activity no longer increases the amount of novel information available to the user, but may even slightly suppress it.
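The smoothed curves in plots like \figref{fig:activity} can be produced by grouping users into bins of friend activity and averaging within each bin. The fixed-width binning below is one illustrative choice for this kind of smoothing, not necessarily the exact procedure used for the figures.

```python
from collections import defaultdict

def binned_means(points, bin_width):
    """Smooth an (x, y) scatter by averaging y within fixed-width x bins.
    Returns (bin center, mean y) pairs sorted by bin center."""
    bins = defaultdict(list)
    for x, y in points:
        bins[int(x // bin_width)].append(y)
    return sorted((b * bin_width + bin_width / 2, sum(ys) / len(ys))
                  for b, ys in bins.items())
```

For per-user $(B_i, NRI_i)$ pairs, this yields a curve of mean novel information versus friend activity.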
\figref{fig:activity} (b) shows the rate at which users receive novel information, i.e., the fraction of novel information in their information stream, as a function of the average activity of their friends (channel bandwidth $B_i$). The figure clearly shows that as friends become more active, by voting for more stories, the fraction of novel information in the user's social stream drops precipitously. As we show later, this is due to the higher redundancy of incoming information. In online social networks, friends' activity is an important factor in determining the amount of novel information available to the user. \subsubsection{Effect of Network Structure: the ``Diversity--Bandwidth Trade-off''} \begin{figure}[] \begin{center} \begin{tabular}{c} \includegraphics[width=1.0\linewidth]{figs/dbtradeoffs_S_nei} \end{tabular} \end{center} \caption{Scatterplot showing network diversity vs average friend activity (channel bandwidth) for Digg users who are divided into three populations based on the number of friends in the 2010 Digg data set. The plot demonstrates the diversity-bandwidth trade-off.} \label{fig:dbtradeoffs} \end{figure} Next, we study the interplay between network structure and user activity and their impact on access to information. Aral \& Van Alstyne demonstrated that while structurally diverse networks provide greater access to information, their benefits are offset by a lower rate of communication along structurally diverse ties. Due to this ``diversity--bandwidth trade-off,'' people can increase their exposure to topically diverse and novel information either by placing themselves in structurally diverse network positions or by linking to people with higher bandwidth who will communicate with them more frequently. We examine whether the ``diversity--bandwidth trade-off'' exists on Digg. Digg users can increase their ``bandwidth'' by linking to friends who vote for more stories.
However, as friends' activity increases, so does the user's cognitive load, i.e., the volume of incoming information the user has to process. We divide users into different populations based on the total volume of incoming information, which is, on average, proportional to the number of active friends $S_i$ they have. Figure~\ref{fig:dbtradeoffs} shows the relationship between network diversity $ND$ and average friend activity (or channel bandwidth) $B$ for each user, where users are broken into three populations: those with more than 322 active friends, between 131 and 322 active friends, and 130 or fewer active friends. The thresholds were chosen to produce equal-size populations. The correlations between network diversity and average friend activity for the three populations are -0.54 (p<.01), -0.58 (p<.01) and -0.50 (p<.01), respectively. Overall (over all populations of users), there is still a strong negative relationship (-0.47, p<.01) between network diversity $ND_i$ and bandwidth $B_i$, confirming the ``diversity--bandwidth trade-off''~\cite{Aral11}: users who place themselves into positions of greater network diversity within the Digg follower graph on average receive fewer story recommendations from friends than users who place themselves into positions of smaller network diversity. For the 2009 data set, we also divided users into three populations: those with more than 87 friends, between 26 and 87 active friends, and 25 or fewer active friends. The corresponding correlations are -0.54 (p<.01), -0.59 (p<.01) and -0.03 (p<.01), respectively, and over all populations of users in the 2009 data set, the correlation is -0.13 (p<.01). The differences come mainly from the incomplete history of users' activities in the 2009 data set, which contains only a subset of users' behaviors on front-page stories, whereas the 2010 data set contains users' complete voting histories.
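The population split and within-population correlations described above can be reproduced with a short routine: sort users by number of active friends, cut the sorted list into three equal-size groups, and compute the Pearson correlation between $ND$ and $B$ inside each group. The sketch below uses hypothetical (num\_active\_friends, ND, B) triples and a hand-rolled Pearson coefficient to avoid external dependencies.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def population_correlations(users):
    """users: (num_active_friends, ND, B) triples. Split into three
    equal-size populations by number of active friends, then correlate
    ND with B within each population."""
    ranked = sorted(users)          # orders by num_active_friends first
    k = len(ranked) // 3
    groups = [ranked[:k], ranked[k:2 * k], ranked[2 * k:]]
    return [pearson([nd for _, nd, _ in g], [b for _, _, b in g])
            for g in groups]
```

Significance testing (the reported p-values) would additionally require, e.g., a t-test on the correlation coefficients, which is omitted here.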
In both the 2009 and 2010 Digg data sets, users in positions of greater network diversity within the follower graph on average receive fewer story recommendations from friends than users in positions of smaller network diversity. We observed that users connected by strong ties are more active, recommending more stories than those users who are connected by weak ties. Similarly, as users' networks become more diverse, friends' activities contract. \begin{figure}[tbh] \begin{center} \begin{tabular}{c} \includegraphics[width=0.9\linewidth]{figs/TD_.pdf}\\ (a)\\ \includegraphics[width=0.9\linewidth]{figs/NRI_.pdf}\\ (b) \end{tabular} \end{center} \caption{(a) Topical diversity ($TD$) and (b) novelty ($NRI$) of information to which Digg users are exposed as a function of their network diversity ($ND$) and average friend activity (or channel bandwidth $B$) in the 2010 Digg data set.} \label{fig:DB_TD_NRI} \end{figure} \begin{table} \center \begin{tabular}{|c|c|c|c|c|c|c|} \cline{1-3} \cline{5-7} \textbf{2009} &\textbf{ NRI } & \textbf{TD} &&\textbf{2010} &\textbf{ NRI } & \textbf{TD}\\ \cline{1-3} \cline{5-7} \textbf{B} & 0.04**& -0.11** & &\textbf{B}&0.69** & -0.83**\\ \textbf{ND} & -0.09** & 0.48** & &\textbf{ND}& -0.15**& 0.41**\\ \cline{1-3} \cline{5-7} \end{tabular} \center \caption{Pairwise correlations between variables in the 2009 and 2010 Digg data sets. Asterisks (**) denote statistically significant correlations with p<.01.} \label{tbl:pairwisecorr} \end{table} Figure~\ref{fig:DB_TD_NRI} shows how much (a) topically diverse information ($TD$) and (b) novel information ($NRI$) Digg users are exposed to as a function of their position in the network (network diversity $ND$) and friend activity ($B$). Users whose friends are more active can access more novel information ($NRI$), whereas users in positions of higher network diversity can access more topically diverse information ($TD$).
This is in contrast to the findings of Aral \& Van Alstyne, which demonstrated that users could increase both the topic diversity and amount of non-redundant (novel) information they are exposed to by increasing either their network diversity or channel bandwidth. On Digg, on the other hand, users who place themselves in positions of high structural diversity can access more topically diverse information (correlation between $ND_i$ and $TD_i$ was 0.41 (p<.01)), rather than more novel information. There was a strong negative relationship (Table~\ref{tbl:pairwisecorr}) between $B_i$ and $TD_i$, which shows that users in strongly tied networks have similar topical interests. On detailed investigation, this strong negative relationship arises not from friends' uniform preferences over a variety of topics but from the high similarity between friends' topic preferences in highly clustered networks. Intensifying friends' activity, on the other hand, led to more novel information (correlation between $B_i$ and $NRI_i$ was 0.69 (p<.01)), but less topically diverse information. These results demonstrate that activity is an important feature for accessing novel information in online social networks, while structural diversity can be used to get access to more topically diverse information. \begin{figure}[tbh] \begin{center} \begin{tabular}{c} \includegraphics[width=1.0\linewidth]{figs/O_s.png} \end{tabular} \end{center} \caption{Amount of new information injected into the network by users (colored symbols) in different positions of network diversity (ND) and friend activity (B). Symbol size represents the relative number of seeded stories. Seeding users are divided into classes based on the number of friends.} \label{fig:seeding} \end{figure} Lastly, we examine the characteristics of users who inject new information into their networks by voting for stories they found outside of their friends' recommendations, e.g., on the Web or on other sections of Digg.
Figure~\ref{fig:seeding} shows the network diversity vs friend activity plot with colored dots representing users who introduce, or seed, new stories in their network. We divide these users into two classes based on the number of friends they have. The size of the symbol represents the relative number of seeded stories (the difference between the total votes made by $u_i$ and those adopted through friends' recommendations). The x-axis is shown in log-scale to highlight the differences between classes of users. Users with many friends (blue symbols) whose friends are very active (high $B$) inject relatively more new stories into their network than users with many, but less active, friends. These users are also in less structurally diverse positions, i.e., they are more strongly tied to their friends. At first, it seems counterintuitive that these users, who already receive many recommendations, would take the time to look for new information. These could be the dedicated top users, who consider it their responsibility to look for new stories to post on Digg, or users who are so overwhelmed with the quantity and redundancy of their friends' recommendations that they choose to find new information on their own. Users with few friends (red symbols) also tend to have less active friends (lower $B$). These users inject more information into their network when their network diversity is high, or, as shown earlier, when their friends have diverse interests. Such users cannot rely on their network to expose them to interesting information; instead, they seek it out themselves by seeding new stories.
\subsection{Bottlenecks to Information Access} \begin{figure}[tbh] \begin{center} \begin{tabular}{c} \includegraphics[width=0.95\linewidth]{figs/PNRI_.pdf}\\ (a)\\ \includegraphics[width=0.95\linewidth]{figs/FNAR_.pdf}\\ (b) \end{tabular} \end{center} \caption{(a) Total amount of novel information that a user's friends are exposed to ($NRI^{frds}_i$) as a function of the user's network diversity ($ND_i$) and friend activity ($B_i$) in the 2010 Digg data set. (b) Fraction of novel information adopted ($FNAR_i$) by friends ($NRI_i/NRI^{frds}_i$) as a function of network diversity and friend activity in the 2010 Digg data set.} \label{fig:NRI_frd} \end{figure} Why do users in positions of high network diversity receive less novel information even though they are connected to more topically diverse friends? To answer this question, we measured $NRI^{frds}_i$, the total amount of novel information that friends of user $u_i$ are exposed to. \figref{fig:NRI_frd} (a) shows that this quantity depends on both network diversity ($ND$) and friend activity ($B$). In most cases, friends are collectively exposed to a large quantity of novel information (also demonstrated by \figref{fig:activity} (a)), almost all of the 36,883 distinct stories in the 2010 Digg data set. Although most of the users could potentially be exposed to nearly all of the information in the network, in fact, as shown in~\figref{fig:activity}, they receive far less novel information. We get some insight into this puzzle from \figref{fig:NRI_frd} (b), which shows the friends' novel information adoption rate, i.e., the fraction of stories in their stream friends voted for, as a function of the user's network position and friend activity rates. Friends of users in positions of high network diversity fail to adopt most of the novel information they are exposed to.
However, users with highly active friends (high channel bandwidth $B$ region) are exposed to more novel information because their friends adopt it at a higher rate. This could explain the difference between our study and the findings of Aral \& Van Alstyne. In their study, users could increase their access to diverse and non-redundant (novel) information by increasing their network diversity or channel bandwidth. In our study, however, users in positions of high friend activity (high channel bandwidth) increase their access to novel information since their friends adopt a large portion of the novel information that they themselves are exposed to. Users in positions of high network diversity are exposed to more diverse information, but since their friends have interests that are different from their own, they do not adopt much of the information they are exposed to. In addition, in Aral \& Van Alstyne's study, novelty and diversity were not independent variables: non-redundant information (novelty) was the product of topic diversity and channel bandwidth. Hence, it may not be surprising that both were highly correlated with network diversity and channel bandwidth. We treat them as independent variables: while topic diversity of a network neighborhood is measured based on the Digg topic assignments of the stories friends adopted, non-redundant, or novel, information is measured simply by the number of distinct stories friends adopted. Our study demonstrates that friends in structurally diverse positions and positions of high activity constrain a user's information exposure in different ways. Increasing friend activity affects novel information access, while increasing network diversity provides access to more topically diverse friends, but not the other way around. 
\begin{figure}[tbh] \begin{center} \begin{tabular}{c} \includegraphics[width=0.9\linewidth]{figs/S_Adoptions_2010.png} \end{tabular} \end{center} \caption{Number of stories adopted by the user as a function of the number of active friends ($S$) in the 2010 Digg data set.} \label{fig:uB_S} \end{figure} Other factors that affect access to information are the cognitive constraints that limit a user's ability to process incoming information. These cognitive constraints are similar to those that limit the number of stable social relationships a human can maintain~\cite{Dunbar}. In the social media domain, they have been shown to limit the number of conversations users engage in~\cite{Goncalves11} and limit the spread of information~\cite{Hodas12socialcom}. We believe that cognitive constraints also play a role in access to information on networks. As the volume of incoming information increases, users compensate by increasing their activity to cope with the greater amount of new information. This works, but only up to a point. Unfortunately, we cannot directly measure how much information users process, e.g., how many stories they read, but on average this quantity should be proportional to the number of recommended stories they vote for. We call this quantity user activity $uB_i$. Figure~\ref{fig:uB_S} shows user activity as a function of the number of active friends the user has. While user activity initially increases with the number of friends, it reaches a maximum around 200 friends and then decreases somewhat. Cognitive constraints prevent users from keeping up with the rising volume of new information their friends expose them to. This, in turn, limits the amount of new information to which they expose their followers. This effect is more dramatic for users whose friends have topically diverse interests. 
These friends tend to vote on fewer recommended stories, either because they lack interest in processing recommended information, or because they are less willing to devote the greater cognitive resources required to process this diverse information. Understanding the complex interplay between cognitive constraints and network structure is the subject of our ongoing research. \section{Related Work} Sociologists have long argued that network structure determines the intensity of interactions between people and affects information diffusion in the network. The theoretical arguments known as ``the strength of weak ties'' were proposed by Mark Granovetter~\cite{granovetter1973}, who showed that social tie strength can be estimated from local network structure. In the same paper, he linked tie strength to information access in a network. Specifically, he argued that weak ties provide users with access to novel information. The theory of weak ties has been verified by many empirical studies~\cite{uzzi1997social,allen2003managing,reagans2001networks,reagans2003network,onnela2007structure}, including studies of job search~\cite{granovetter1973}, business relations~\cite{coleman1988social,Aral11}, inter-firm relations~\cite{uzzi1996sources}, and social capital~\cite{coleman1988social}. Burt~\cite{Burt95,burt2005brokerage} argued that weak ties act as bridges between different communities and enable ``brokers'' to leverage diverse sources to access novel information. Empirical studies of mobile phone~\cite{onnela2007structure} and email communication~\cite{iribarren2011affinity,Aral11} have offered support for brokerage theory. Several studies have confirmed the role of weak ties as bridges between different communities~\cite{centola2010spread,journals11074009} though their impact on information diffusion in online networks is still debated~\cite{centola2007complex,journals10013181}. 
To the best of our knowledge, our study is the first to look at the relationship between network structure and access to novel information in online social networks. Aral \& Van Alstyne~\cite{Aral11} examined the relationship between weak ties, structural diversity and access to diverse and novel information. Their study of email communication within an organization demonstrated a trade-off between network diversity (brokerage positions that link different communities) and channel bandwidth (tie strength) in access to both diverse and non-redundant (novel) information. In a follow-up study they demonstrated the importance of network position in maximizing information diversity and novelty~\cite{aral2012anatomy}. In contrast, our study demonstrates that users with very active friends (high channel bandwidth) increase their access to novel information since their friends adopt a large portion of the novel information that they themselves are exposed to. Users in positions of high network diversity are exposed to more diverse information, but since their friends have interests that are different from their own, they do not adopt much of the information they are exposed to. Moreover, given the greater freedom of user activity in online social networks, we also studied the contribution of user activity to access to diverse and novel information, in addition to the ``diversity--bandwidth trade-off.'' Homophily, which refers to the tendency of similar individuals to link to each other, is a strong organizing principle of social networks. Numerous studies found that people tend to be friends with others who belong to the same socio-economic class~\cite{Feld1981,mcpherson2001birds}, and they tend to follow others in social media who have similar interests~\cite{Kang12aaai}. In the context of information exchange on networks, this means that content users are exposed to in social media depends on their position in the network. 
Users embedded within a community of strongly tied individuals are likely to share information on topics that other community members are interested in, while users in brokerage positions that bridge different communities receive information on more diverse topics. In this paper, we show that the diversity of topics to which a user is exposed by her friends is strongly related to the structural diversity of the network. Recently, researchers recognized that cognitive constraints are important to defining social interactions online and in the real world. The number of social relationships that people can maintain is limited to about 150~\cite{Dunbar03}. This is similar to the limit of the number of conversation partners that Twitter users have~\cite{Goncalves11}. Cognitive constraints, specifically divided attention, were also shown to limit the spread of information on Twitter~\cite{Hodas12socialcom}. We find a dependence of user activity on network structure that mirrors the limits imposed by cognitive constraints. We observe that a user's activity rate initially increases with the number of friends, until reaching a maximum around 200 friends, and then decreases somewhat. These cognitive limits constrain potential exposure to new information, as users receive more recommendations from many friends than they can process. Further, we argued that the limited activity in highly diverse networks stems either from a lack of interest in processing recommended information, or from an unwillingness to devote the greater cognitive resources required to process this diverse information. \section{Conclusion} We used data from the social news aggregator Digg to investigate the relationship between the structure of the follower graph, user activity, and access to information in social media. We showed that the amount of novel information a user is exposed to increases as she adds more friends, but saturates quickly. 
Similarly, linking to friends who are more active improves access to novel information, but as the redundancy increases, higher friend activity can no longer increase the amount of novel information accessible to the user. In addition, we validated the ``diversity--bandwidth trade-off'' in online social media. In two different data sets, users in positions of greater network diversity in the follower graph on average receive fewer story recommendations from friends than users who place themselves into positions of high friend activity (high bandwidth). Users in positions of higher network diversity can access more topically diverse information, while users whose friends are more active can access more novel information. Increasing friend activity affects novel information access, while increasing network diversity provides access to more topically diverse friends, but not the other way around. Thus, to access more novel information, the volume of communication along the tie is more important than the network structure. This is in contrast to the findings of Aral \& Van Alstyne, who demonstrated that users could increase both the topic diversity and the amount of non-redundant (novel) information they are exposed to by increasing either their network diversity or channel bandwidth in email communication. Cognitive constraints, e.g., limited attention, are important psychological factors that limit human activity. Our analysis of Digg suggests that a user's limited activity creates an ``information bottleneck,'' blocking her followers' potential access to novel information. Since a user's network diversity is highly related to topic diversity, the mechanisms by which friends adopt novel information become even more important in diverse networks. Understanding the complex interplay between cognitive constraints and network structure is the subject of our ongoing research. 
\section*{Acknowledgment} This material is based upon work supported by the Air Force Office of Scientific Research under Contract Nos. FA9550-10-1-0102 and FA9550-10-1-0569, and by DARPA under Contract No. W911NF-12-1-0034. \bibliographystyle{abbrv}
\section{\@startsection{section}{1}{\z@}{3.5ex plus 1ex minus .2ex}{2.3ex plus .2ex}{\large\bf}} \def\FERMIPUB{} \def\FERMILABPub#1{\def\FERMIPUB{#1}} \def\ps@headings{\def\@oddfoot{}\def\@evenfoot{} \def\@oddhead{\hbox{}\hfill \makebox[.5\textwidth]{\raggedright\ignorespaces --\thepage{}-- \hfill }} \def\@evenhead{\@oddhead} \def\subsectionmark##1{\markboth{##1}{}} } \ps@headings \catcode`\@=12 \begin{document} \begin{titlepage} \rightline{UTHET-09-1001} \begin{centering} \vspace{1cm} {\Large {\bf A New Class of Exact Hairy Black Hole Solutions }}\\ \vspace{1.5cm} {\bf Theodoros Kolyvaris $^{\dagger}$}, {\bf George Koutsoumbas $^{\sharp}$},\\ {\bf Eleftherios Papantonopoulos $^{*}$} \\ \vspace{.2in} Department of Physics, National Technical University of Athens, \\ Zografou Campus GR 157 73, Athens, Greece \\ \vspace{.2in} {\bf George Siopsis $^{\flat}$} \vspace{.2in} Department of Physics and Astronomy, The University of Tennessee,\\ Knoxville, TN 37996 - 1200, USA \\ \vspace{3mm} \end{centering} \vspace{1.5cm} \begin{abstract} We present a new class of black hole solutions with a minimally coupled scalar field in the presence of a negative cosmological constant. We consider a one-parameter family of self-interaction potentials parametrized by a dimensionless parameter $g$. When $g=0$, we recover the conformally invariant solution of the Martinez-Troncoso-Zanelli (MTZ) black hole. A non-vanishing $g$ signals the departure from conformal invariance. Thermodynamically, there is a critical temperature at vanishing black hole mass, where a higher-order phase transition occurs, as in the case of the MTZ black hole. Additionally, we obtain a branch of hairy solutions which undergo a first-order phase transition at a second critical temperature which depends on $g$ and is higher than the MTZ critical temperature. As $g\to 0$, this second critical temperature diverges. 
\end{abstract} \vspace{1.5cm} \begin{flushleft} $^{\dagger}~~$ e-mail address: teokolyv@central.ntua.gr \\ $^{\sharp}~~$ e-mail address: kutsubas@central.ntua.gr \\ $^{*} ~~$ e-mail address: lpapa@central.ntua.gr \\ $ ^{\flat}~~$ e-mail address: siopsis@tennessee.edu \end{flushleft} \end{titlepage} \section{Introduction} Four-dimensional black hole solutions of Einstein gravity coupled to a scalar field have been an avenue of intense research for many years. Questions pertaining to their existence, uniqueness and stability have been investigated over the years. For four-dimensional asymptotically flat black holes, in vacuum or coupled to an electromagnetic field, the uniqueness of the Kerr-Newman solutions imposed very stringent conditions on the existence of scalar hair, in the form of ``no-hair'' theorems. In the case of a minimally coupled scalar field in asymptotically flat spacetime, the no-hair theorems were proven by imposing conditions on the form of the self-interaction potential~\cite{nohairtheo}. These theorems were also generalized to non-minimally coupled scalar fields~\cite{Mayo}. For asymptotically flat spacetime, a four-dimensional black hole coupled to a scalar field with a zero self-interaction potential is known~\cite{BBMB}. However, the scalar field diverges on the event horizon and, furthermore, the solution is unstable \cite{bronnikov}, so there is no violation of the ``no-hair'' theorems. In the case of a positive cosmological constant with a minimally coupled scalar field with a self-interaction potential, black hole solutions were found in~\cite{Zloshchastiev:2004ny}; a numerical solution was also presented in~\cite{Torii:1998ir}, but it was unstable. If the scalar field is non-minimally coupled, a solution exists with a quartic self-interaction potential~\cite{martinez}, but it was shown to be unstable~\cite{phil,dotti}. 
In the case of a negative cosmological constant, stable solutions were found numerically for spherical geometries~\cite{Torii:2001pg, Winstanley:2002jt} and an exact solution in asymptotically AdS space with hyperbolic geometry was presented in~\cite{Martinez:2004nb} and generalized later to include charge~\cite{Martinez:2005di}. This solution is perturbatively stable for negative mass and may develop instabilities for positive mass~\cite{papa1}. The thermodynamics of this solution were studied in~\cite{Martinez:2004nb}, where it was shown that there is a second-order phase transition of the hairy black hole to a pure topological black hole without hair. The analytical and numerical calculation of the quasi-normal modes of scalar, electromagnetic and tensor perturbations of these black holes confirmed this behaviour~\cite{papa2}. Recently, a new exact solution of a charged C-metric with a conformally coupled scalar field was presented in~\cite{kolyvaris, anabalon}. A Schwarzschild-AdS black hole in five dimensions coupled to a scalar field was discussed in~\cite{Farakos:2009fx}, while dilatonic black hole solutions with a Gauss-Bonnet term in various dimensions were discussed in~\cite{Ohta:2009pe}. From a known black hole solution coupled to a scalar field, other solutions can be generated via conformal mappings~\cite{conformal}. In all black hole solutions in the Einstein frame, the scalar field is minimally coupled to gravity. Applying a conformal transformation to these solutions, other solutions can be obtained in the Jordan frame which are not physically equivalent to the untransformed ones~\cite{varofone}. The scalar field in the Jordan frame is coupled to gravity non-minimally and this coupling is characterized by a dimensionless parameter $\xi$. There are strong theoretical, astrophysical and cosmological arguments (for a review see~\cite{varofone}) which fix the value of this conformal coupling to $\xi=1/6$. 
If the scalar potential is zero or quartic in the scalar field, the theory is conformally invariant; otherwise a non-trivial scalar potential introduces a scale in the theory and the conformal invariance is broken. In this work we present a new class of black hole solutions of four-dimensional Einstein gravity coupled to a scalar field and to vacuum energy. We analyse the structure and study the properties of these solutions in the Einstein frame. In this frame, the scalar self-interaction potential is characterised by a dimensionless parameter $g$. If this parameter vanishes, then the known solutions of black holes minimally coupled to a scalar field in (A)dS space are obtained~\cite{martinez,Martinez:2004nb}. Transforming these solutions to the Jordan frame, the parameter $g$ can be interpreted as giving the measure of departure from conformal invariance. This breakdown of conformal invariance allows the back-scattering of waves of the scalar field off the background curvature of spacetime, and the creation of ``tails'' of radiation. This effect may have sizeable observational signatures in cosmology~\cite{waves}. Following~\cite{Hertog:2004bb}, we perform a perturbative stability analysis of the solutions. We find that the hairy black hole is stable near the conformal point if the mass is negative and may develop instabilities in the case of positive mass. We also study the thermodynamics of our solutions. Calculating the free energy, we find that there is a critical temperature above which the hairy black hole loses its hair to a black hole in vacuum. This critical temperature occurs at a point where the black hole mass flips sign, as in the case of the MTZ black hole \cite{Martinez:2004nb}. Interestingly, another phase transition occurs at a higher critical temperature; it is of first order and involves a different branch of our solution. This new critical temperature diverges as the coupling constant in the potential $g\to 0$. 
These exact hairy black hole solutions may have interesting applications to holographic superconductors~\cite{Hartnoll:2008vx,Hartnoll:2008kx}, where new types of holographic superconductors can be constructed~\cite{papa1,zeng}. Our discussion is organized as follows. In section \ref{sec2} we introduce the self-interaction potential and we present the hairy black hole solution. In section \ref{sec4} we discuss the thermodynamics of our solution. In section \ref{secst} we perform a stability analysis. Finally, in section \ref{sec6} we summarize our results. \section{Black Hole with Scalar Hair} \label{sec2} To obtain a black hole with scalar hair, we start with the four-dimensional action consisting of the Einstein-Hilbert action with a negative cosmological constant $\Lambda$, along with a scalar, \begin{equation} I = \int d^4x\sqrt{-g}\left[ \fr{R-2\Lambda}{16 \pi G} -\fr{1}{2} g^{\mu\nu}\pa_\mu\phi\pa_\nu\phi-V(\phi)\right],\label{action11} \end{equation} where $G$ is Newton's constant and $R$ is the Ricci scalar. The corresponding field equations are \begin{eqnarray} G_{\mu\nu} +\Lambda g_{\mu\nu}&=&8\pi G T_{\mu\nu}^{\mathrm{matter}}~, \nonumber\\ \Box \phi&=&\frac{d V}{d \phi}, \label{eqfe}\end{eqnarray} where the energy-momentum tensor is given by \begin{equation} \label{Tuvfield}T_{\mu\nu}^{\mathrm{matter}} =\pa_\mu \phi \pa_\nu\phi-\fr{1}{2} g_{\mu\nu} g^{\alpha\beta}\pa_\alpha \phi \pa_\beta \phi - g_{\mu\nu} V(\phi)~. \end{equation} The potential is chosen as \begin{eqnarray} V(\phi) &=& \fr{\Lambda}{4\pi G}\sinh^2\sqrt{\fr{4 \pi G}{3}}\phi\nonumber\\ &+& \frac{g\Lambda}{24\pi G} \left[2 \sqrt{3 \pi G} \phi \cosh \left(\sqrt{\fr{16 \pi G}{3}}\phi\right) - \frac{9}{8} \sinh \left(\sqrt{\fr{16 \pi G}{3}}\phi\right)- \frac{1}{8} \sinh \left(4 \sqrt{3 \pi G}\phi\right)\right]\nonumber\\\label{potentialnew} \end{eqnarray} and it is given in terms of a coupling constant $g$. 
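A quick symbolic sanity check on the potential (\ref{potentialnew}): its second derivative at $\phi=0$, which sets the scalar mass, is independent of the coupling $g$. The following sketch (assuming sympy is available; $\Lambda$ and $G$ are kept as formal symbols) verifies this:

```python
import sympy as sp

# Formal symbols; physically Lambda < 0, but the check is purely algebraic.
phi, G, g = sp.symbols('phi G g', positive=True)
Lam = sp.Symbol('Lambda', real=True)

# The potential of eq. (potentialnew).
a = sp.sqrt(4*sp.pi*G/3)
b = sp.sqrt(16*sp.pi*G/3)
V = (Lam/(4*sp.pi*G))*sp.sinh(a*phi)**2 \
    + (g*Lam/(24*sp.pi*G))*(2*sp.sqrt(3*sp.pi*G)*phi*sp.cosh(b*phi)
                            - sp.Rational(9, 8)*sp.sinh(b*phi)
                            - sp.Rational(1, 8)*sp.sinh(4*sp.sqrt(3*sp.pi*G)*phi))

# m^2 = V''(0): the g-dependent terms drop out at phi = 0.
m2 = sp.simplify(sp.diff(V, phi, 2).subs(phi, 0))
print(m2)  # 2*Lambda/3, i.e. m^2 = -2/l^2 for Lambda = -3/l^2
```

The $g$-dependent terms start at cubic order in $\phi$, so they leave the mass untouched, consistent with the statement below that $m^2=-2/l^2$ coincides with the MTZ value.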
Setting $g=0$ we recover the action that yields the MTZ black hole \cite{Martinez:2004nb}. This particular form of the potential is chosen so that the field equations can be solved analytically. The qualitative nature of our results does not depend on the detailed form of the potential. A similar potential was considered in a different context in \cite{Zloshchastiev:2004ny} (see also \cite{zeng} for the derivation of a potential that yields analytic solutions in the case of a spherical horizon). If one goes over to the Jordan frame, in which the scalar field obeys the Klein-Gordon equation \be \label{KleinGordon} \Box \phi -\xi R \phi -\frac{dV}{d\phi}=0 \; , \ee with $\xi=1/6$, the scalar potential has the form \bea V (\phi) = -\frac{2 \pi G \L}{9}~\phi^{4} &-& \frac{g\L}{16 \pi G} \left[ \sqrt{\frac{16\pi G}{3}}~\phi\left( 1-\frac{4 \pi G}{3}~\phi^{2} +\frac{\frac{16\pi G}{9}~\phi^{2}}{1-\frac{4 \pi G}{3}~\phi^{2}}\right) \right.\nonumber \\ &-& \left.\left( 1-\frac{4\pi G}{3}~\phi^{2}\right)\left( 1+\frac{4\pi G}{3}~\phi^{2}\right) \ln\frac{1+\sqrt{\frac{4\pi G}{3}}~\phi}{1-\sqrt{\frac{4 \pi G}{3}}~\phi} \right]~. \label{potentialnm} \eea Evidently, the scalar field is conformally coupled but the conformal invariance is broken by a non-zero value of $g$. The mass of the scalar field is given by \be m^2 = V''(0) = - \frac{2}{l^2} \ee where we defined $\Lambda = -3/l^2$. Notice that it is independent of $g$ and coincides with the scalar mass that yields the MTZ black hole \cite{Martinez:2004nb}. Asymptotically ($r\to\infty$), the scalar field behaves as $\phi \sim r^{-\Delta_\pm}$ where $\Delta_\pm = \frac{3}{2} \pm \sqrt{ \frac{9}{4} + m^2l^2}$. In our case $\Delta_+ = 2$ and $\Delta_- = 1$. Both boundary conditions are acceptable as both give normalizable modes. We shall adopt the mixed boundary conditions (as $r\to\infty$) \be \phi (r) = \frac{\alpha}{r} + \frac{c\alpha^2}{r^2} + \dots \ , \ \ \ \ c = -\sqrt{\frac{4\pi G}{3}} < 0~. 
\label{eqBCphi}\ee This choice of the parameter $c$ coincides with the MTZ solution \cite{Martinez:2004nb}. Solutions to the Einstein equations with the boundary conditions (\ref{eqBCphi}) have been found in the case of spherical horizons and shown to be unstable \cite{Hertog:2004bb}. In that case, for $\alpha >0$, it was shown that $c<0$ always and the hairy black hole had positive mass. On the other hand, MTZ black holes, which have hyperbolic horizons and obey the boundary conditions (\ref{eqBCphi}) with $c<0$, can be stable if they have negative mass \cite{papa1}. This is impossible with spherical horizons, because they always enclose black holes of positive mass. The numerical value of $c$ is not important (except for the fact that $c\ne 0$) and is chosen as in (\ref{eqBCphi}) for convenience. The field equations admit solutions which are black holes with topology $\mathbb{R}^{2}\times\Sigma$, where $\Sigma$ is a two-dimensional manifold of constant negative curvature. Black holes with constant negative curvature are known as topological black holes (TBHs - see, e.g., \cite{bibMann, bibBir}). The simplest solution for $\Lambda=-3/l^2$ reads \begin{equation}\label{linel} ds^{2}=-f_{TBH}(\rho)dt^{2}+\frac{1}{f_{TBH}(\rho)}d\rho^{2} +\rho^{2}d\sigma ^{2}\quad ,\quad f_{TBH}(\rho)=\frac{\rho^{2}}{l^2} -1-\frac{\rho_0 }{\rho}~, \end{equation} where $\rho_0$ is a constant which is proportional to the mass and is bounded from below ($\rho_0\geq-\frac{2}{3\sqrt{3}} l$), $d\sigma^{2}$ is the line element of the two-dimensional manifold $\Sigma$ which is locally isomorphic to the hyperbolic manifold $H^{2}$ and of the form \begin{equation}\label{eqSigma} \Sigma=H^{2}/\Gamma \quad \textrm{,\quad $\Gamma\subset O(2,1)$}~, \end{equation} with $\Gamma$ a freely acting discrete subgroup (i.e., without fixed points) of isometries. 
The line element $d\sigma^{2}$ of $\Sigma$ can be written as \begin{equation} d\sigma^{2}=d\theta^{2}+\sinh^{2}\theta d\varphi^{2}~, \end{equation} with $\theta\ge0$ and $0\le\varphi<2\pi$ being the coordinates of the hyperbolic space $H^{2}$ or pseudosphere, which is a non-compact two-dimensional space of constant negative curvature. This space becomes a compact space of constant negative curvature with genus $\gen\ge2$ by identifying, according to the connection rules of the discrete subgroup $\Gamma$, the opposite edges of a $4\gen$-sided polygon whose sides are geodesics and which is centered at the origin $\theta=\varphi=0$ of the pseudosphere. An octagon is the simplest such polygon, yielding a compact surface of genus $\gen=2$ under these identifications. Thus, the two-dimensional manifold $\Sigma$ is a compact Riemann 2-surface of genus $\gen\geq2$. The configuration (\ref{linel}) is an asymptotically locally AdS spacetime. The horizon structure of (\ref{linel}) is determined by the roots of the metric function $f_{TBH}(\rho)$, that is \begin{equation} f_{TBH}(\rho)=\frac{\rho^{2}}{l^2}-1-\frac{\rho_0}{\rho}=0~. \label{ftbh} \end{equation} For $-\frac{2}{3\sqrt{3}} l <\rho_0<0$, this equation has two distinct non-degenerate solutions, corresponding to an inner and to an outer horizon $\rho_{-}$ and $\rho_{+}$ respectively. For $\rho_0\geq0$, $f_{TBH}(\rho)$ has just one non-degenerate root and so the black hole (\ref{linel}) has one horizon $\rho_{+}$. The horizons for both cases of $\rho_0$ have the non-trivial topology of the manifold $\Sigma$. We note that for $\rho_0=-\frac{2}{3\sqrt{3}} l$, $f_{TBH}(\rho)$ has a degenerate root, but this horizon does not have an interpretation as a black hole horizon. The boundary has the metric \be\label{eqnew} ds_\partial^2 = -dt^2 + l^2 d\sigma^2~, \ee so spatially it is a hyperbolic manifold of radius $l$ (and of curvature $-1/l$). 
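The horizon counting for the topological black hole (\ref{ftbh}) is easy to verify numerically: multiplying $f_{TBH}(\rho)=0$ by $\rho$ gives the cubic $\rho^{3}/l^{2}-\rho-\rho_{0}=0$. A minimal sketch, assuming numpy and working in units $l=1$:

```python
import numpy as np

l = 1.0  # work in units l = 1

def horizons(rho0):
    """Real positive roots of rho^3/l^2 - rho - rho0 = 0, i.e. f_TBH = 0."""
    roots = np.roots([1.0 / l**2, 0.0, -1.0, -rho0])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)

print(horizons(-0.2))          # two horizons (inner and outer) for -2l/(3*sqrt(3)) < rho0 < 0
print(horizons(0.5))           # a single horizon for rho0 >= 0
print(-2 / (3 * np.sqrt(3)))   # ~ -0.385: the lower bound where the root degenerates
```

The double root sits at $\rho=l/\sqrt{3}$, where $f_{TBH}'=0$, which reproduces the bound $\rho_0\geq-\frac{2}{3\sqrt{3}}l$ quoted in the text.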
The action (\ref{action11}) with a potential as in (\ref{potentialnew}) has a static black hole solution with topology $\mathbb{R}^{2}\times \Sigma $ and with scalar hair, and it is given by \begin{equation}\label{MTZconf} ds^2=\frac{r (r+2r_0)}{(r+r_0)^2} \left[-F(r) dt^2+\frac{dr^2}{F(r)}+r^2 d \sigma^2\right]~, \end{equation} where \be\label{Psiconf} F(r) = \frac{r^2}{l^2} - g\frac{r_0}{l^2} r - 1 + g \frac{r_0^2}{l^2} - \left( 1 - 2g \frac{r_0^2}{l^2} \right) \frac{r_0}{r} \left( 2 + \frac{r_0}{r} \right) +g \frac{r^2}{2l^2} \ln\left( 1 + \frac{2r_0}{r}\right)\ , \ee and the scalar field is \be \phi(r) = \sqrt{\fr{3}{4 \pi G}}~ {\rm arctanh} \left(\fr{r_0}{r+r_0}\right)~, \label{scaField} \ee obeying the boundary conditions (\ref{eqBCphi}) by design. \section{Thermodynamics} \label{sec4} To study the thermodynamics of our black hole solutions we consider the Euclidean continuation ($t \rightarrow i\tau$) of the action in Hamiltonian form \begin{equation} I=\int \left[ \pi ^{ij}\dot{g}_{ij}+p\dot{\phi}-N\mathcal{H}-N^{i}\mathcal{H}_{i}\right] d^{\,3}xdt+B, \label{Hamiltaction} \end{equation} where $\pi ^{ij}$ and $p$ are the conjugate momenta of the metric and the field respectively; $B$ is a surface term. The solution reads: \begin{equation} ds^{2}=N^{2}(r)f^{\,2}(r)d\tau^{2}+f^{\,-2}(r)dr^{2}+R^{2}(r)d\s ^{2} \label{metricEucl} \end{equation} where \be N(r) = \fr{r(r+2r_0)}{(r+r_0)^2} \ , \ \ f^2(r) = \fr{(r+r_0)^2}{r(r+2r_0)}\;F(r) \ , \ \ R^2(r) = \fr{r^3(r+2r_0)}{(r+r_0)^2}~, \ee with a periodic $\tau$ whose period is the inverse temperature, $\beta = 1/T$. The Hamiltonian action becomes \begin{equation} I=-\b \,\s \int_{r_{+}}^{\infty }N(r)\mathcal{H}(r)dr+B, \label{redHamiltaction} \end{equation} where $\sigma $ is the area of $\Sigma $ and \be \mathcal{H} =NR^{2}\left[ \frac{1}{8\pi G}\left( \frac{(f^{\,2})^{\pr }R^{\,\pr}}{R}+\frac{2f^{\,2}R^{\,\pr \pr }}{R}+\frac{1}{R^{\,2}}(1 +f^{\,2}) +\L \right) +\frac{1}{2}f^{\,2}(\phi ^{\pr})^{2}+V(\phi )\right] . 
\ee The Euclidean solution is static and satisfies the equation $\mathcal{H}=0$. Thus, the value of the action in the classical limit is just the surface term $B$, which should maximize the action within the class of fields considered. We now compute the action when the field equations hold. The condition that the geometries which are permitted should not have conical singularities at the horizon imposes \be T = \fr{F^{\;\pr}(r_+)}{4\pi} \label{eqtemperature}\,. \ee Using the grand canonical ensemble (fixing the temperature), the variation of the surface term reads \[ \d B\equiv \d B_{\phi }+\d B_{G}\;, \] where \begin{equation} \d B_{G}=\frac{\b \s }{8\pi G} \Big{[} N \Big{(} RR^{\,\pr }\d f^{\,2}-(f^{\,2})^{\pr }R\d R\Big{)} +2f^{\,2}R\Big{(} N\d R^{\,\pr }-N^{\pr }\d R\Big{)} \Big{]} _{r_{+}}^{\infty }\;, \label{delG} \end{equation} and the contribution from the scalar field equals \begin{equation} \d B_{\phi }=\b \s NR^{\,2}f^{\,2}\phi ^{\pr }\d \phi \big|_{r_{+}}^{\infty }\;. \label{delphi} \end{equation} For the metric, the variation of fields at infinity yields \begin{eqnarray} \left. \d f^{\,2}\right| _{\infty } &=& \left(\fr{2}{l^2}r_0-\fr{2(3+(9-8g) r_0^2/l^2)}{3r}-\fr{4r_0(1-4 r_0^2/l^2)}{r^2} +\mathcal{O}(r^{-3})\right)\d r_0~,\nonumber\\ \d \phi \big| _{\infty } &=& \sqrt{\fr{3}{4\pi G}}\left(\fr{1}{r}-\fr{2r_0}{r^2}+\mathcal{O}(r^{-3})\right)\d r_0~,\nonumber\\ \d R\big| _{\infty } &=& \left(-\fr{r_0}{r}+\fr{3r_0^2}{r^2}+\mathcal{O}(r^{-3})\right)\d r_0~, \end{eqnarray} so \begin{eqnarray} \d B_{G}\big| _{\infty } &=& \frac{\b\s}{8\pi G}\left( \frac{6 r_0( r-4(1-2g/9)r_0)}{l^2} -2+\mathcal{O}(r^{-1})\right) \d r_0\;, \nonumber\\ \delta B_{\phi }\big| _{\infty } &=& \frac{\b\s}{8\pi G}\Big( -\frac{6 r_0(r-4r_0)}{l^2}+\mathcal{O}(r^{-1})\Big) \d r_0\;. \label{B1} \end{eqnarray} The surface term at infinity is \begin{equation} B\big| _{\infty }=-\frac{\b\s(3-8gr_0^2/l^2)}{12\pi G} r_0\;. 
\label{Binf} \end{equation} The variation of the surface term at the horizon may be found using the relations \begin{eqnarray*} \left. \d R\right| _{r_{+}} &=&\d R(r_{+})-\left. R^{\,\pr }\right| _{r_{+}}\d r_{+}\;, \\ \left. \d f^{\,2}\right| _{r_{+}} &=&-\left. (f^{\,2})^{\pr }\right| _{r_{+}}\d r_{+}\;. \end{eqnarray*} We observe that $\left. \delta B_{\phi }\right| _{r_{+}}$ vanishes, since $f^2(r_+)=0$, and \begin{eqnarray*} \left. \d B\right| _{r_{+}} &=&-\frac{\b \s}{16\pi G}N(r_{+})\left. (f^{\,2})^{\pr }\right| _{r_{+}}\d R^{\,2}(r_{+}) \\ &=&-\frac{\s}{4 G}\d R^{\,2}(r_{+})\;. \end{eqnarray*} Thus the surface term at the horizon is \begin{equation} \left. B\right| _{r_{+}}=-\frac{\s }{4 G}R^{\,2}(r_{+})\;. \label{Bhor} \end{equation} Therefore, provided the field equations hold, the Euclidean action reads \begin{equation} I=-\frac{\b\s(3-8gr_0^2/l^2)}{12\pi G} r_0 +\frac{\s}{4 G}R^{\,2}(r_{+})\;. \label{Ionshell} \end{equation} The Euclidean action is related to the free energy through $I=-\b F$. We deduce \begin{equation} I=S-\b M\;, \label{IMS} \end{equation} where $M$ and $S$ are the mass and entropy respectively, \be M = \fr{\s(3-8gr_0^2/l^2)}{12\pi G}\; r_0 \ , \ \ S= \fr{\s}{4 G}\;R^2(r_+)=\fr{A_H}{4 G} \label{massentropy} \ee It is easy to show that the first law of thermodynamics $dM=TdS$ holds. For $g = 0$, these expressions reduce to the corresponding quantities for MTZ black holes \cite{Martinez:2004nb}. Alternatively, the mass of the black hole can be found by the Ashtekar-Das method \cite{bibAshDas}. A straightforward calculation confirms the expression (\ref{massentropy}) for the mass. 
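The statement that the hair (\ref{scaField}) obeys the mixed boundary conditions (\ref{eqBCphi}) ``by design'' can likewise be verified symbolically: expanding $\phi(r)$ at large $r$ (small $u=1/r$) reproduces the coefficient $c=-\sqrt{4\pi G/3}$. A sketch assuming sympy:

```python
import sympy as sp

r0, G, u = sp.symbols('r_0 G u', positive=True)

# Scalar hair of eq. (scaField), written in the inverse radius u = 1/r.
phi = sp.sqrt(3/(4*sp.pi*G)) * sp.atanh(r0*u / (1 + r0*u))

# Large-r (small-u) expansion: phi = alpha*u + c*alpha^2*u^2 + ...
ser = sp.series(phi, u, 0, 3).removeO().expand()
alpha = ser.coeff(u, 1)                       # alpha = sqrt(3/(4 pi G)) * r_0
c = sp.simplify(ser.coeff(u, 2) / alpha**2)
print(c)  # equals -sqrt(4*pi*G/3), matching eq. (eqBCphi)
```

So the subleading coefficient is fixed to the MTZ value of $c$ for every $r_0$, independently of $g$.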
In the case of the topological black hole (\ref{ftbh}) the temperature, entropy and mass are given respectively by \be T=\frac{3}{4 \pi l} \left( \frac{\rho_{+}}{l} - \frac{l}{3\rho_+} \right)~, \quad S_{TBH}=\frac{\sigma \rho^{2}_{+}}{4G}~,\quad M_{TBH}=\frac{\sigma\rho_{+}}{8 \pi G} \left( \frac{\rho_+^2}{l^2} - 1 \right)~,\label{relations2} \ee and the first law of thermodynamics $dM=TdS$ is again obeyed. We note that, in the limit $r_0\to 0,$ $F(r) \to \frac{r^2}{l^2} - 1$ from eq.~(\ref{Psiconf}) and the corresponding temperature (\ref{eqtemperature}) reads $T=\frac{1}{2 \pi l},$ which equals the temperature of the topological black hole (\ref{relations2}) in the limit $\rho_0\to 0$ ($\rho_+\to l$). The common limit \begin{equation} ds_{\mathrm{AdS}}^{2}=-\left[ \frac{r^{2}}{l^2} -1\right] dt^{2}+\left[ \frac{r^{2}}{l^2}-1\right] ^{-1}dr ^{2}+r ^{2}d\sigma ^{2}\; \label{muzero}\end{equation} is a manifold of negative constant curvature possessing an event horizon at $r=l$. The TBH and our hairy black hole solution match continuously at the critical temperature \be\label{eqTcr} T_0 = \frac{1}{2\pi l}~, \ee which corresponds to $M_{TBH} = M = 0$, with (\ref{muzero}) a transient configuration. Evidently, at the critical point (\ref{eqTcr}) a scaling symmetry emerges owing to the fact that the metric becomes pure AdS. At the critical temperature (\ref{eqTcr}) a higher-order phase transition occurs as in the case of the MTZ black hole (with $g=0$). Introducing the terms with $g\ne 0$ in the potential does not alter this result qualitatively. Next we perform a detailed analysis of thermodynamics and examine several values of the coupling constant $g$. Henceforth we shall work with units in which $l=1$. We begin with a geometrical characteristic of the hairy black hole, the horizon radius $r_+$ (the root of $F(r)$, eq.~(\ref{Psiconf})).
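As an illustrative cross-check of (\ref{relations2}), the first law can be verified numerically with central differences; the normalization $\sigma=G=l=1$ below is an arbitrary choice, since the overall factors cancel in $dM=TdS$.

```python
import math

sigma = G = l = 1.0  # arbitrary units; overall factors cancel in dM = T dS

def T(rho):  # TBH temperature, eq. (relations2)
    return 3.0 / (4.0 * math.pi * l) * (rho / l - l / (3.0 * rho))

def S(rho):  # TBH entropy
    return sigma * rho**2 / (4.0 * G)

def M(rho):  # TBH mass
    return sigma * rho / (8.0 * math.pi * G) * (rho**2 / l**2 - 1.0)

h = 1e-6
for rho in (0.5, 1.0, 1.7, 3.0):
    dM = (M(rho + h) - M(rho - h)) / (2.0 * h)
    dS = (S(rho + h) - S(rho - h)) / (2.0 * h)
    assert abs(dM - T(rho) * dS) < 1e-8  # first law along the TBH branch
print("dM = T dS verified numerically")
```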
In figure \ref{horizons} we show the $r_0$ dependence of the horizon for representative values of the coupling constant, $g=3$ and $g=0.0005.$ We observe that, for $g=3,$ the horizon may correspond to more than one value of the parameter $r_0.$ For $g=0.0005$ we see that, additionally, there is a maximum value of the horizon radius. We note that one may express the radius of the horizon $r_{+}$ in terms of the dimensionless parameter \be \xi = \frac{r_0}{r_+} \ee as \be r_+ = \frac{1+\xi }{\sqrt{1+g \xi (1+\xi)(-1+2\xi + 2\xi^2) + \frac{1}{2} g \ln (1+2\xi)}} \label{rhm3}~. \ee The temperature reads \be T = \frac{1+\xi (1+\xi) (4-g(1+2\xi + 2\xi^2)) + \frac{1}{2} g (1+2\xi)^2 \ln (1+2\xi)}{2\pi (1+2\xi) \sqrt{1+g \xi (1+\xi)(-1+2\xi + 2\xi^2) + \frac{1}{2} g \ln (1+2\xi)}}~, \label{Tm3} \ee or equivalently \be T=\frac{(r_++r_0)(r_+^2+4 r_0 r_+ +4 r_0^2-8 g r_0^3 r_+-8 g r_0^4)}{2 \pi r_+^3}~,\ee a third order equation in $r_+,$ showing that, for a given temperature, there are in general three possible values of $\xi$. Thus we obtain up to three different branches of our hairy black hole solution. We start our analysis with a relatively large value of the coupling constant $g$, namely $g=3$, and calculate the horizon radius, temperature and Euclidean action for various values of $r_0$. In figure \ref{mass_action_q3}, left panel, we depict $r_0$ versus $T$ and it is clear that there is a $T$ interval for which there are indeed three corresponding values of $r_0.$ Outside this interval, there is just one solution. The corresponding graph for the Euclidean actions may be seen in the right panel of the same figure. The action for the topological black hole with the common temperature $T$ is represented by a continuous line, while the actions for the hairy black holes are shown in the form of points.
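The multi-branch structure can be made concrete by scanning (\ref{Tm3}) directly. The sketch below does this for $g=3$ at the reference temperature $T=0.16$ used later in the text; the scan window in $\xi$ (chosen here so that the square root in (\ref{rhm3}) stays real) is an assumption of this illustration.

```python
import math

def sqrt_arg(xi, g):
    # argument of the square root appearing in eqs. (rhm3) and (Tm3)
    return 1.0 + g * xi * (1 + xi) * (-1 + 2 * xi + 2 * xi**2) \
               + 0.5 * g * math.log(1 + 2 * xi)

def T(xi, g):
    # temperature as a function of xi = r_0 / r_+, eq. (Tm3)
    num = 1.0 + xi * (1 + xi) * (4 - g * (1 + 2 * xi + 2 * xi**2)) \
              + 0.5 * g * (1 + 2 * xi)**2 * math.log(1 + 2 * xi)
    return num / (2 * math.pi * (1 + 2 * xi) * math.sqrt(sqrt_arg(xi, g)))

g, T_ref = 3.0, 0.16
xis = [x * 1e-4 for x in range(-3300, 6001)]  # window where sqrt_arg > 0
vals = [T(xi, g) - T_ref for xi in xis]
branches = sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)
print(branches)  # 3: three values of xi (hence of r_0) share T = 0.16
```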
We note that equation (\ref{ftbh}) yields for the temperature of the topological black hole $T=\frac{1}{4 \pi}\left(3 \rho_+-\frac{1}{\rho_+}\right) \Rightarrow \rho_+=\frac{2 \pi T}{3}+\sqrt{\left(\frac{2 \pi T}{3}\right)^2+\frac{1}{3}}.$ The largest Euclidean action (smallest free energy) will dominate. There are three branches for the hairy black hole, corresponding to the three different values of $r_0.$ In particular, for some fixed temperature (e.g., $T=0.16$) the algebraically lowest $r_0$ corresponds to the algebraically lowest Euclidean action; similarly, the medium and largest $r_0$ parameters correspond to the medium and largest Euclidean actions. The medium Euclidean action for the hairy black hole is very close to the Euclidean action for the topological black hole. In fact, it is slightly smaller than the latter for $T<\frac{1}{2 \pi} \approx 0.159$ and slightly larger after that value. If it were the only branch present, one would thus conclude that the hairy black hole dominates for small temperatures, while for large temperatures the topological black hole would be preferred. This would be a situation similar to the one of the MTZ black hole. However, the two additional branches completely change our conclusions. The upper branch shows that the hairy black hole dominates up to $T \approx 0.20.$ When the coupling constant $g$ decreases, equation (\ref{Tm3}), along with the demand that the temperature should be positive, shows that the acceptable values of $r_0$ are two rather than three, as may be seen in figure \ref{mass_action_q00005}, left panel. The lowest branch of figure \ref{mass_action_q3} shrinks for decreasing $g$ and finally disappears. An interesting consequence of this is that the temperature has an upper limit. The graph for the Euclidean actions (figure \ref{mass_action_q00005}, right panel) is influenced accordingly.
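Inverting the TBH temperature of (\ref{relations2}) at $l=1$ amounts to taking the positive root of the quadratic $3\rho_+^2-4\pi T\rho_+-1=0$; a minimal round-trip check of this inversion:

```python
import math

def T_tbh(rho):
    # TBH temperature from (relations2) with l = 1
    return (3.0 * rho - 1.0 / rho) / (4.0 * math.pi)

def rho_plus(T):
    # positive root of 3*rho^2 - 4*pi*T*rho - 1 = 0
    x = 2.0 * math.pi * T / 3.0
    return x + math.sqrt(x * x + 1.0 / 3.0)

for T in (0.10, 1.0 / (2.0 * math.pi), 0.50):
    assert abs(T_tbh(rho_plus(T)) - T) < 1e-12
print(rho_plus(1.0 / (2.0 * math.pi)))  # ~1.0: massless TBH at T_0 = 1/(2*pi)
```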
There are just two branches for the hairy black hole, rather than three as in figure \ref{mass_action_q3}, and the figure ends on its right-hand side at $T \approx 1.25.$ The continuous line represents the Euclidean action for the topological black hole with the same temperature. Similar remarks hold as in the previous case, e.g., the phase transition moves to $T \approx 0.80.$ In addition, the largest value of $r_0$ corresponds to the upper branch of the hairy black hole. To understand the nature of this phase transition it is instructive to draw a kind of phase diagram, so that we can spot the dominant solution for a given pair of $g$ and $T.$ We depict our result in figure \ref{phd}. The hairy solution dominates below the curve, which shows the critical temperature as a function of the coupling constant $g$. The most striking feature of the graph is that the critical temperature diverges as $g\to 0$. Thus, it does not converge to the MTZ value $\fr{1}{2 \pi} \approx 0.159$ at $g=0$. For even the slightest nonzero values of $g$ the critical temperature attains extremely large values! This appears to put the conformal point (MTZ black hole) in a special status within the set of these hairy black holes. In other words, the restoration of conformal invariance is not a smooth process, and the MTZ black hole solution cannot be obtained in a continuous way as $g\rightarrow 0$. In fact, it seems that (even infinitesimally) away from the conformal point $g=0$ black holes are mostly hairy!
\begin{figure}[!t] \centering \includegraphics[scale=0.3,angle=-90]{horizon_p3.ps} \includegraphics[scale=0.3,angle=-90]{horizon_p00005.ps} \caption{Horizon versus parameter $r_0$ for $g=3$ (left panel) and $g=0.0005$ (right panel).} \label{horizons} \end{figure} \begin{figure}[!t] \centering \includegraphics[scale=0.3,angle=-90]{mass_p3.ps} \includegraphics[scale=0.3,angle=-90]{actions_p3.ps} \caption{Parameter $r_0$ (left panel) and Euclidean actions versus temperature for $g=3$ (right panel).} \label{mass_action_q3} \end{figure} \begin{figure}[!b] \centering \includegraphics[scale=0.3,angle=-90]{mass_p00005.ps} \includegraphics[scale=0.3,angle=-90]{actions_p00005.ps} \caption{Parameter $r_0$ (left panel) and Euclidean actions versus temperature for $g=0.0005$ (right panel).} \label{mass_action_q00005} \end{figure} \begin{figure}[!t] \centering \includegraphics[scale=0.3,angle=-90]{phasediagram.ps} \caption{Phase diagram. For points under the curve the hairy solution will be preferred.} \label{phd} \end{figure} \section{Stability Analysis} \label{secst} To perform the stability analysis of the hairy black hole it is more convenient to work in the Einstein frame. Henceforth, we shall work in units in which the radius of the boundary is $l=1$. We begin with the hairy black hole line element, \begin{equation}\label{mtzhat} ds_0^2=\frac{\hat{r}(\hat{r}+2 r_0)}{(\hat{r}+r_0)^2} \left[-F(\hat{r}) dt^2+\frac{d\hat{r}^2}{F(\hat{r})}+\hat{r}^2 d \sigma^2\right], \ee which can be written in the form \begin{equation} ds^{2}=-\frac{f_0}{h_0^{2}}dt^{2}+\frac{dr^{2}}{f_0}+r^{2}d\sigma^{2} \end{equation} using the definitions \bea f_0(r) = F(\hat{r}) \left( 1 + \frac{r_0^2}{(\hat{r} + 2r_0)(\hat{r}+r_0)} \right)^2~, \nonumber\\ h_0(r) = \left( 1 +\frac{r_0^2}{(\hat{r} + 2r_0)(\hat{r}+r_0)} \right) \frac{\hat{r}+r_0}{\sqrt{\hat{r}(\hat{r}+2 r_0)}}~, \eea under the change of coordinates \bea r=\fr{\hat{r}^{3/2}(\hat{r}+2r_0)^{1/2}}{\hat{r}+r_0}~. 
\eea The scalar field solution reads \be \phi_0 (r) = \sqrt{\frac{3}{4\pi G}} \tanh^{-1} \frac{r_0}{\hat{r}+r_0}~, \ee obeying the boundary conditions (\ref{eqBCphi}) with \be\label{eqxc} \alpha =\alpha_0 = \sqrt{\frac{3}{4\pi G}} |r_0|~. \ee We are interested in figuring out when the black hole is unstable (losing its hair to turn into a TBH) and discuss the results in the context of thermodynamic considerations. To this end, we apply the perturbation \begin{equation} f(r,t)=f_{0}(r)+f_{1}(r)e^{\omega t} ,\,\,\,\,\, h(r,t)=h_{0}(r)+h_{1}(r)e^{\omega t},\,\,\,\,\, \phi(r,t)=\phi_{0}(r)+\frac{\phi_{1}(r)}{r} e^{\omega t}~,\,\,\label{perts} \end{equation} which respects the boundary conditions (\ref{eqBCphi}), with $\omega >0$ for an instability to develop. The field equations read: \be -1-f-r f^\prime+r f \fr{h^\prime}{h}+8 \pi G r^2 V(\phi)=0~, \ee \be \dot{f}+r f \dot{\phi}\phi^\prime=0~,\ee \be 2 h^\prime+r h \left[\fr{h^2}{f^2}\dot{\phi}^2+\phi^{\prime 2}\right] =0~,\ee \be \partial_t\!\left(\fr{h}{f} \dot{\phi}\right)-\fr{1}{r^2}\left(r^2 \fr{f}{h} \phi^\prime \right)^\prime + \fr{1}{h} V^\prime(\phi)=0~.\ee The field equations give a Schr\"odinger-like wave equation for the scalar perturbation, \begin{equation} -\frac{d^{2}\phi_{1}}{dr^{2}_{*}}+\mathcal{V}\phi_{1}=-\omega^{2}\phi_{1}~, \label{scal1} \end{equation} where we defined the tortoise coordinate \be \frac{dr_{*}}{dr}=\frac{h_{0}}{f_{0}}~, \ee and the effective potential is given by \be \mathcal{V} = \frac{f_0}{h_0^2} \left[ - \frac{1}{2} (1+r^2 {\phi_0'}^2) {\phi_0'}^2 f_0 + (1-r^2 {\phi_0'}^2 ) \frac{f_0'}{r} + 2r\phi_0' V'(\phi_0) + V''(\phi_0) \right].
\ee The explicit form of the Schr\"odinger-like equation reads: \be -F(\hat{r}) \fr{d}{d \hat{r}}\left[F(\hat{r}) \fr{d\phi_1}{d \hat{r}}\right]+{\cal V} \phi_1 = -\omega^2 \phi_1, \ee where the functional form of the function $F$ has been given in equation (\ref{Psiconf}) and \footnote{We have set $8\pi G=1$.} \be {\cal V} = \fr{r_0^2 F(\hat{r})}{\hat{r}^2 \left(1+\fr{2 r_0}{\hat{r}}\right)^2 \left(1+\fr{3 r_0}{\hat{r}}+\fr{3 r_0^2}{\hat{r}^2}\right)^2}\left\{5 + \fr{2+(11 g+54)r_0^2}{r_0 \hat{r}} + \fr{29+(47 g+189) r_0^2}{\hat{r}^2} \right. \ee \be \left. +\fr{r_0 (150+(-3 g+270) r_0^2)}{\hat{r}^3}+ \fr{r_0^2 (396+(-351 g+135) r_0^2)}{\hat{r}^4}+ \fr{r_0^3 (612 -873 g r_0^2)}{\hat{r}^5} \right. \ee \be \left. +\fr{r_0^4 (582-1047 g r_0^2)}{\hat{r}^6} + \fr{324 r_0^5 (1 - 2 g r_0^2)}{\hat{r}^7} + \fr{81 r_0^6 (1 - 2 g r_0^2)}{\hat{r}^8} \right. \ee \be \left. +\fr{g}{2} \left(5+\fr{54 r_0}{\hat{r}} + \fr{189 r_0^2}{\hat{r}^2} +\fr{270 r_0^3}{\hat{r}^3} +\fr{135 r_0^4}{\hat{r}^4} \right) \ln\left(1+\fr{2 r_0}{\hat{r}}\right) \right\}~.\ee Near the horizon the Schr\"odinger-like equation simplifies to \be -[F^\prime(\hat{r}_+)]^2 \epsilon \fr{d}{d \epsilon} \left[\epsilon \fr{d \phi_1}{d \epsilon} \right] = -\omega^2 \phi_1, \ \ \epsilon = \hat{r}-\hat{r}_+ \ee and its acceptable solution reads \be \phi_1 \sim \epsilon^{\kappa \omega}, \ \ \ \kappa=\fr{1}{F^\prime(\hat{r}_+)}, \ \ \omega>0~. \ee Regularity of the scalar field at the horizon ($r\to r_+$) requires the boundary conditions \be \phi_1 = 0 \ \ , \ \ \ \ (r-r_+) \phi_1' = \kappa\omega \phi_1\ \ , \ \ \ \ \kappa > 0 ~. \ee For a given $\omega >0$, they uniquely determine the wavefunction. At the boundary $(\hat{r} \to \infty),$ the wave equation is approximated by \be - \frac{d^2\phi_1}{dr_*^2} + 5r_0^2 \phi_1 = -\omega^2 \phi_1~, \ee with solutions \be \phi_1 = e^{\pm E r_*} \ \ , \ \ \ E = \sqrt{\omega^2 + 5 r_0^2}~, \ee where $ r_* = \int \fr{d \hat{r}}{F(\hat{r})}= - \frac{1}{r} + \dots $~.
Therefore, for large $r$, \be \phi_1 = A + \frac{B}{r} + \dots \label{fitAB}\ee To match the boundary conditions (\ref{eqBCphi}), we need \be\label{eqx33} \frac{B}{A} = 2 c \alpha_0 = -2 r_0~. \ee Since the wavefunction has already been determined by the boundary conditions at the horizon and therefore also the ratio $B/A$, this is a constraint on $\omega.$ If (\ref{eqx33}) has a solution, then the black hole is unstable. If it does not, then there is no instability of this type (however, one should be careful with non-perturbative instabilities). In figure \ref{stable_q0_q00005} (left panel) we show the ratio $B/A$ for the standard MTZ black hole (corresponding to $g=0$) versus $\omega$ at a typical value of the mass parameter, namely $r_0=-0.10.$ It is obvious that the value of the ratio lies well below the value $-2 r_0=+0.20.$ It is clearly impossible to satisfy equation (\ref{eqx33}), so the solution is stable. In fact this value of the mass parameter lies in the interesting range for this black hole, since thermodynamics dictates that for negative values of $r_0$ MTZ black holes are favored against topological black holes. Thus the MTZ black holes turn out to be stable. Next we examine the case $g=0.0005,$ for which we have presented data before in figure \ref{mass_action_q00005}. As we have explained there, the most interesting branch of the graphs is the upper branch, on the right panel, which dominates the small $T$ part of the graph and corresponds to large values of $r_0$ (typically around $30$ on the left panel of the same figure). Thus we set $g=0.0005, \ \ r_0=+30$ and plot $B/A$ versus $\omega$ in the right panel of figure \ref{stable_q0_q00005}. It is clear that the curve lies systematically below the quantity $-2 r_0 = -60$ and no solution is possible, so the hairy black hole with these parameters is stable. Finally we come to the case with $g=+3,$ which has three branches.
In the left panel of figure \ref{stable_q3} we show the results for $r_0=+0.40,$ which corresponds to the upper branch of figure \ref{mass_action_q3}; the curve again lies below the quantity $-2 r_0 = -0.80$ and no solution is possible, so this hairy black hole is stable. In the right panel of figure \ref{stable_q3} we show the results for $r_0=-0.30,$ which corresponds to the lowest branch of figure \ref{mass_action_q3}, which disappears for decreasing $g.$ In this case we find something qualitatively different: the curve cuts the line $-2 r_0 = +0.60$ around $\omega \approx 0.40$ and a solution is possible, signaling instability. Thus, for $g=3$ the hairy black hole may be stable or unstable, depending on the value of $r_0.$ \begin{figure}[!t] \centering \includegraphics[scale=0.3,angle=-90]{qq0mm010.ps} \includegraphics[scale=0.3,angle=-90]{qq00005mp30.ps} \caption{Stability of the standard MTZ black hole (left panel) for $r_0=-0.10;$ similarly for the hairy black hole at $g=0.0005, \ r_0=+30$ (right panel).} \label{stable_q0_q00005} \end{figure} \begin{figure}[!t] \centering \includegraphics[scale=0.3,angle=-90]{qq3mp040.ps} \includegraphics[scale=0.3,angle=-90]{qq3mm030.ps} \caption{Stability of the hairy black hole for $g=3$ and $r_0=+0.40$ (left panel); similarly for the hairy black hole at $r_0=-0.30$ (right panel).} \label{stable_q3} \end{figure} \section{Conclusions} \label{sec6} We presented a new class of hairy black hole solutions in asymptotically AdS space. The scalar field is minimally coupled to gravity with a non-trivial self-interaction potential. A coupling constant $g$ in the potential parametrizes our solutions. If $g=0$, the conformally invariant MTZ black hole solution, with a conformally coupled scalar field, is obtained. If $g\neq 0$ a whole new class of hairy black hole solutions is generated. The scalar field is conformally coupled but the solutions are not conformally invariant.
These solutions are perturbatively stable near the conformal point for negative mass and they may develop instabilities for positive mass. We studied the thermodynamical properties of the solutions. Calculating the free energy, we showed that for general $g$, apart from the phase transition of the MTZ black hole at the critical temperature $T=1/(2\pi l)$, there is another critical temperature, higher than the MTZ critical temperature, which depends on $g$ and at which a first-order phase transition of the vacuum black hole towards a hairy one occurs. The existence of a second critical temperature is a manifestation of the breaking of conformal invariance. As $g\rightarrow 0$ the second critical temperature diverges, indicating that there is no smooth limit to the MTZ solution. The solutions presented and discussed in this work have hyperbolic horizons. There are also hairy black hole solutions with flat or spherical horizons of similar form. However, these solutions are pathological. In the solutions with flat horizons, the scalar field diverges at the horizon, in accordance with the ``no-hair" theorems. In the case of spherical horizons, calculating the free energy we find that the vacuum solution is always preferred over the hairy configuration. Moreover, studying the asymptotic behaviour of the solutions, we found that they are unstable for any value of the mass. \section*{Acknowledgments} G.~S.~was supported in part by the US Department of Energy under grant DE-FG05-91ER40627.
\section{INTRODUCTION} \label{intro} One of the current issues in black hole (BH) research is to measure the spin parameter, $a$. This is possible if we can estimate the radius of the innermost stable orbit $R_{\rm in}$, which decreases from $6 R_{\rm g}$ ($R_{\rm g}$ being the gravitational radius) down to $1.235\,R_{\rm g}$ as $a$ increases from 0 to 1. A traditional method of measuring $R_{\rm in}$ of black-hole binaries (BHBs) is to parameterize the optically-thick disk emission component in their X-ray spectra in terms of the disk-blackbody ({\tt diskBB}) model (Mitsuda et al. 1984). First applied successfully to GX~339$-$4 (Makishima et al. 1986) and LMC X-3 (Ebisawa et al. 1993), this method has been calibrated using BH masses estimated from companion star kinematics, and confirmed to give reliable (e.g., within $\sim 30$\%) estimates of $R_{\rm in}$ (Makishima et al. 2000) if distance uncertainties can be neglected. A more modern way to estimate $a$, applicable also to active galactic nuclei, is to utilize iron line profiles, which become broadened and skewed due to stronger relativistic effects as $a$ increases (e.g., Fabian et al. 1989). First found from the Seyfert galaxy MCG--6-30-15 by {\it ASCA} (Tanaka et al. 1995), the broad Fe-K lines were later reported in a number of Seyferts using, e.g., {\it XMM-Newton} (Nandra et al. 2007). The latest {\it Suzaku} results on MCG--6-30-15 indicate a high spin parameter of $a > 0.917$ (Miniutti et al. 2007). Similar high values of $a$ were derived from BHBs (Miller 2007, Miller et al. 2009), including GX~339$-$4 observed with {\it XMM-Newton} and {\it Chandra} (Miller et al. 2004a, 2004b). Analyzing the {\it Suzaku} data of this BHB acquired in 2007 February, Miller et al. (2008), hereafter MEA08, confirmed the broad Fe-K feature, and argued that the object is an extreme Kerr BH with $R_{\rm in} \sim R_{\rm g}$.
The wide-band {\it Suzaku} spectra of Cyg X-1 in the Low/Hard state (LHS), in contrast, gave $R_{\rm in} \sim 15 R_{\rm g} $ (Makishima et al. 2008), via both the disk emission analysis and the Fe-K line modeling. Likewise, $R_{\rm in} \sim 8 R_{\rm g}$ was obtained from the {\it Suzaku} data of GRO~J1655$-$40 (Takahashi et al. 2008). Also, a 2--20 keV {\it Tenma} observation of GX 339$-$4 (Makishima et al. 1986) in the High/Soft state (HSS), with dominant disk emission, gave $R_{\rm in}= 57\,d_{8}$ km, or $5.6\,Q$ times $R_{\rm g}$; here $Q \equiv d_8/m_7$, $d_8$ is the distance in units of 8 kpc (Zdziarski et al. 2004), $m_7$ is the BH mass in units of a typical value of $7~M_\odot$ (Hynes et al. 2004), and $R_{\rm in}$ was recalculated applying a correction factor of 1.18 (Kubota et al. 1998; Makishima et al. 2000) and assuming an inclination of $i=30^\circ$ (Gallo et al. 2004). These results imply $a \sim 0$. To examine these apparent discrepancies on $a$, we reanalyzed the same {\it Suzaku} data of GX~339$-$4 as MEA08, and found that the X-ray Imaging Spectrometer (XIS; Koyama et al. 2008) events suffer heavy pileup and telemetry saturation. These effects, neglected by MEA08, distort the continuum, and {\it indirectly} affect the Fe-K line shape. \section{OBSERVATION} \label{obs} The present {\it Suzaku} data of GX 339$-$4, as used by MEA08, were obtained on 2007 February 12, during an outburst which started in late 2006 (Swank et al. 2006). The XIS employed the 1/4 window option and a burst option (0.3 sec for XIS0/XIS1, and 0.5 sec for XIS3), to achieve a time resolution of 2 sec and a duty cycle of 15\% or 25\%. The Hard X-ray Detector (HXD; Kokubun et al. 2008) was operated in the standard mode. The data processing and reduction were performed in the same way as MEA08, using the {\it Suzaku} pipeline processing ver.\,2.0.6.13.
In spite of the 1/4 window and burst options, the XIS events piled up significantly (\S~\ref{instrumental}) as the source was very bright ($\sim 0.6$ Crab in the 2--10 keV {\it Suzaku} band). In fact, the unscreened XIS event file contains an unusually high fraction of Grade 1 events, which are produced when pileup occurs.\footnote{http://www.astro.isas.ac.jp/suzaku/analysis/xis/pileup/ \\ HowToCheckPileup\_v1.pdf.} In addition, a variable fraction of the CCD frame was often lost due to telemetry saturation; this affects the absolute XIS flux. Out of the 10.3 ks of XIS0 exposure, only 2.84 ks was free from this problem. The HXD-PIN and HXD-GSO spectra, acquired for a net exposure of 87.9 ks, were corrected for small dead times, but no other correction due to the source brightness was necessary. We subtracted modeled non X-ray backgrounds (NXBs; Fukazawa et al. 2009). The cosmic X-ray background, $<1\%$ of the total counts, was ignored. By analyzing the HXD data acquired during Earth occultations, we confirmed the NXB models to reproduce the PIN and GSO data to within 1\%, a typical accuracy level (Fukazawa et al. 2009). Since the signal becomes comparable to this uncertainty at $\sim$ 300 keV, we quote the source detection up to $\sim 300 $ keV. In the present {\em Letter}, we use data from XIS0, HXD-PIN, and HXD-GSO. We do not use data from XIS1 or XIS3, which are affected more by pileup than XIS0. Unless otherwise stated, errors refer to 90\% confidence limits. \section{DATA ANALYSIS AND RESULTS} \subsection{Effects of event pileup and telemetry saturation} \label{instrumental} To examine in detail the XIS0 data for pileup effects, we produced a radial count-rate profile from a 0.5--10.0 keV XIS0 image (discarding telemetry-saturated frames), and compared it with that of a pileup-free point source, i.e., MCG--6-30-15 observed on 2005 August 17. 
The profile ratio between GX 339$-$4 and MCG--6-30-15 decreases significantly within a radius of $r=2'$ of the image centroid, reaching at the center $\sim 25\%$ of the ratio at $r>2'$. Therefore, the XIS0 data are affected by pileup at least within $r< 2'$, and possibly up to $3'$. Figure~\ref{fig:avespec} shows XIS0 spectra of GX 339$-$4, accumulated over different annuli around the source, and divided by that outside $3'$ to visualize shape changes. Pileup thus affects the 1--10 keV continuum shape, and produces line-like features at 1.8 and 2.2 keV. Although MEA08 attributed the line features to response uncertainties, this interpretation fails to explain the fact that the line strength increases inwards. These features, appearing at the Si-K edge in the XIS and the Au M-edge in the X-ray Telescope (Serlemitsos et al. 2007), are due to rapid changes of the instrumental response coupled with pileup. \begin{figure}[bth] \begin{center} \vbox{ \includegraphics[scale=0.48]{figure1.eps}} \caption{ The XIS0 spectra of GX 339$-$4 taken from different annuli around the source, $0'-7.5'$ (black), $1'-7.5'$ (cyan), and $2'-7.5'$ (blue), all divided by that over $3'-7.5'$. The ratios are renormalized to unity at 5.0 keV. The XIS background, though inclusive, is negligible.} \label{fig:avespec} \end{center} \end{figure} Figure~\ref{fig:burst_spectra} shows raw XIS0 spectra accumulated from different annular regions of the image, including telemetry-saturated frames, and corrected for neither pileup nor live-time fraction. They are shown divided by a powerlaw (PL) prediction, which is calculated using respective ARFs (Ishisaki et al. 2007) that properly consider the fractional photons falling therein. In all cases, the PL was chosen to have $\Gamma=2.6$ (which approximates the average 2--50 keV spectral slope), and a common normalization of 5 ph cm$^{-2}$ keV$^{-1}$ s$^{-1}$, absorbed by a column of $N_{\rm H} = 5.0 \times 10^{21}$ cm$^{-2}$.
Towards the image centroid, the continua are severely distorted by event pileup. In addition, the spectral ratio decreases towards outer regions, which suffer increasingly from the telemetry saturation. The live-time fractions of XIS0 are 85\%, 79\%, 60\%, and 43\%, from the inner to outer annuli. In Figure~\ref{fig:burst_spectra}, the NXB-subtracted HXD-PIN spectrum is shown in black. It is further repeated four times, but scaled by the XIS live-time fractions so that a pair of XIS and HXD-PIN spectra with the same color should match. MEA08 used the grey XIS0 spectrum and the black HXD-PIN data, but even putting aside the pileup distortions, this is an incorrect combination. In MEA08, the XIS and HXD-PIN data points match rather accidentally, because pileup (which increases the highest XIS spectral end) and the telemetry saturation (which reduces the XIS normalization) have opposite effects. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.33]{figure2.eps} \caption{ The XIS spectra from annuli with different radii, divided by predictions of a common PL model with $\Gamma$ = 2.6; $0'-7'.5$ (grey), $2'.0-7'.5$ (blue), $3'.0-7'.5$ (red), and $4'.0-7'.5$ (green). They are corrected for neither pileup nor telemetry saturation. The 13--60 keV HXD-PIN spectrum is shown in black. It is reproduced after scaling by live-time fractions of the XIS spectra of the corresponding colors: 85\%, 79\%, 60\%, and 43\% for grey, blue, red, and green, respectively. The two values of $\Gamma$, used in Figure 3, are illustrated. } \label{fig:burst_spectra} \end{center} \end{figure} \subsection{Analysis of the $r=0'-4'$ spectra} \label{corespec} \begin{figure}[bth] \begin{center} \vbox{ \includegraphics[scale=0.2]{figure3.eps}} \caption{ (a) The XIS0 spectrum of GX 339$-$4 in $\nu F \nu$ form, taken from the $0'-4'$ region around the source. It is fitted with two different sets of PL+{\tt diskBB} models (see text), ignoring the 4-7 keV range.
(b) The same spectrum, divided by the two models given in (a). } \label{fig:pileupspec} \end{figure} We tentatively fitted the raw XIS0 spectrum from $0'-4'$ (the grey one in Figure~\ref{fig:burst_spectra}) with a PL plus {\tt diskBB} model, but ignoring the 4--7 keV range as MEA08 did. When using the PL photon index of $\Gamma=2.2$ which MEA08 (supplemented by Miller 2009) found for XIS data, the {\tt diskBB} temperature was $T_{\rm in}=0.8$ keV (shown in Figure 3 (a) in red). The ratio plot (shown in Figure 3 (b) also in red) reveals the broad Fe-K line feature reported by MEA08. However, the feature becomes much narrower and weaker (blue crosses) if we employ $\Gamma=2.44$, which is close to the value of $\Gamma=2.4$ used by Miller (2009) for the HXD-PIN data. In this case, $T_{\rm in}=0.7$ keV. Thus, the large Fe-K line width claimed by MEA08 depends on the employed continuum slopes, which are also indicated in Figure~\ref{fig:burst_spectra}. In fact, the slope should not differ by $\Delta \Gamma \sim 0.2$ between the XIS and PIN (Suzaku memo 2008-06). \subsection{Analysis of the $r>2'$ spectra} \label{bluespec} Discarding the $r<2'$ region where the pileup effects are severe, we jointly fit the ``blue" pair of XIS0 and HXD-PIN spectra in Figure~\ref{fig:burst_spectra}, in the 2.4--9.0 keV and 13.0--60.0 keV ranges, respectively. Though still piled up, this XIS0 spectrum ($r=2'-7'.5$) retains high signal statistics. The spectra in Figure~\ref{fig:burst_spectra} exhibit a soft ($\lesssim 8$ keV) and a hard (20--40 keV) hump, and an Fe-K emission line, as found before (Zdziarski et al. 1998; Ueda et al. 1993). We hence employed a model consisting of a {\tt diskBB}, a PL, the associated ionized reflection {\tt pexriv} (Magdziarz \& Zdziarski 1994), and a {\tt laor} (Laor 1991) model for the Fe-K line with the emissivity index fixed at $q=3.0$ and the outer disk radius at $400 R_{\rm g}$.
The metal abundances in {\tt pexriv} were fixed to solar values except that of iron ($Z_{\rm Fe}$), while the disk inclination was allowed to vary over $i=25^\circ - 45^\circ$. The ``constant'' parameter was set at $C_{\rm XIS}=0.79$ (the live-time fraction), for XIS0 with a tolerance of $\pm 0.05$, while at 1.07 (Suzaku memo 2007-11) for HXD-PIN. We left free all the other model parameters, including the absorbing column $N_{\rm H}$. The fit was acceptable, and behaved as shown by filled blue circles in Figure~\ref{fig:chisq} as a function of the {\tt laor} inner radius, $R_{\rm Fe}$. We thus find $R_{\rm Fe}<3.6 R_{\rm g}$, which is consistent with the conclusion of MEA08. The other parameters are: $C_{\rm XIS} = 0.78 \pm 0.02$, $\Gamma=2.68 \pm 0.02$, $T_{\rm in}=0.73^{+0.03}_{-0.02}$ keV, the {\tt pexriv} solid angle fraction $\Omega/2\pi =0.64^{+0.02}_{-0.03}$, its ionization parameter $\log \xi = 2.53^{+0.38}_{-0.74}$, and $Z_{\rm Fe} = 1.77 ^{+0.13}_{-0.15}$. Fixing $Z_{\rm Fe} = 1.0$ worsened the fit by $\Delta \chi^2 \sim 30$. The {\tt laor} rest-frame energy was $E_{\rm c}=6.71^{+0.09}_{-0.53}$ keV, and its inclination $i = 43^\circ~^{(+2)}_{-4}$. However, the Fe-K equivalent width (EW), $230^{+80}_{-40}$ eV, much exceeds the value of $\sim 110$ eV (George \& Fabian 1991) predicted by the derived $\Gamma$, $\Omega/2\pi$, and $Z_{\rm Fe}$. The steep slope of $\Gamma \sim 2.7$ indicates that the source was in the Very High state (VHS; Miyamoto et al. 1993), wherein the disk emission must be Comptonized (Kubota \& Makishima 2004). We hence replaced {\tt diskbb} with a Comptonized blackbody, {\tt compbb}; the photon source it assumes, a single blackbody, may be reconciled with the multi-color {\tt diskbb} formalism by the free $N_{\rm H}$. The results of this analysis are given in Figure~\ref{fig:chisq} by open blue squares. 
Thus, the local $\chi^2$ minimum at $R_{\rm Fe} \sim 10 R_{\rm g}$, which existed in the {\tt diskbb} modeling, became as good as the small-$R_{\rm Fe}$ solution. This large-$R_{\rm Fe}$ solution is characterized by a blackbody temperature of $0.54 \pm 0.01$ keV, a Compton optical depth of $\tau = 1.09^{+0.06}_{-0.12}$, an electron temperature of $9.5^{+0.9}_{-1.6}$ keV, $E_{\rm c} = 6.29^{+0.22}_{-0.18}$ keV and $i = 42^\circ~^{+3}_{-11}$. Again, we find that the Fe-line solutions are degenerate, depending on the continuum choice. \begin{figure} \begin{center} \vbox{ \includegraphics[scale=0.42]{figure4.eps}} \caption{Chi-squares of the joint fits to the XIS0 and HXD-PIN spectra, shown as a function of the {\tt laor} inner radius $R_{\rm Fe}$. Blue and red data points use the spectra of the same color as in Figure~\ref{fig:burst_spectra}. Filled circles show the absorbed {\tt diskbb+pexriv+laor} fit, with the degree of freedom $\nu$ given on the ordinate. Open squares are derived when {\tt diskbb} is replaced with {\tt compbb}, with $\nu$ decreasing by 2. } \label{fig:chisq} \end{center} \end{figure} \subsection{Analysis of the $r>3'$ spectra} \label{redspec} We finally analyze the ``red" spectra in Figure~\ref{fig:burst_spectra} in the same way as in \S~\ref{bluespec}, under a constraint of $C_{\rm XIS}= 0.60\pm 0.06$. Although this XIS0 spectrum ($r=3'-7'.5)$ is estimated to be still weakly ($\lesssim 10 \%$) piled up, we do not correct for this, because no established method is available yet. The HXD data are the same as in \S~\ref{bluespec}. In advance, the red XIS0 spectrum and that of XIS3 from $r=3.5'-7.5'$, where the count rates are comparable, were confirmed to have a constant ratio within $\sim 10\%$. We also confirmed with the MCG--6-30-15 data that XIS spectra from these outer regions have a constant (within $\sim 10\%$) ratios to those from the image core. 
As shown in Figure~\ref{fig:chisq} in red, the results from the {\tt diskbb} and {\tt compbb} modeling agree better than for $r>2'$, with $\tau = 0.16^{+0.55}_{-0.16}$ for {\tt compbb}. Below, we examine the {\tt diskbb}+{\tt pexriv}+{\tt laor} fit (filled red circles). In contrast to the ``blue'' spectra (\S~\ref{bluespec}), the data now prefer the large-$R_{\rm Fe}$ solution, with $5.0< R_{\rm Fe}/R_{\rm g} <14$ (minimum at 8.2) at 68\% confidence, although $R_{\rm Fe}$ is unconstrained at the 90\% level due to poor statistics. The other parameters are: $T_{\rm in}=0.72 ^{+0.04}_{-0.03}$ keV, $\Gamma=2.67 ^{+0.06}_{-0.02}$, $\Omega/2\pi =0.60 \pm 0.02$, $Z_{\rm Fe} = 1.71 ^{+0.17}_{-0.13}$, $N_{\rm H} = 0.55^{+0.27}_{-0.13}\times 10^{22} $ cm$^{-2}$, and $i \sim 33 ^\circ$ (the entire range of $25^\circ$--$45^\circ$ is allowed at 90\% confidence). The line energy $E_{\rm c}=6.68 ^{+0.41}_{-0.47}$ keV is consistent with the derived range of $\log \xi = 3.65^{+\infty}_{-1.78}$. The Fe-line EW, $104^{+67}_{-53}$ eV, is also now in agreement with the prediction ($\sim 100$ eV) from $\Gamma$, $\Omega/2\pi$, and $Z_{\rm Fe}$. Employing $i=30^\circ$ and the correction factor 1.18 (\S~\ref{intro}), the {\tt diskbb} normalization yields $R_{\rm{in}} = (57^{+11}_{-15}) \;d_{8}$ km, or $R_{\rm in}/R_{\rm g}= (5.6^{+1.0}_{-1.5})\; Q$. The results remain largely unchanged when fixing $Z_{\rm Fe}$ at 1.0. As shown in Figure~\ref{fig:specfit}(a), we extended the best-fit model to incorporate the 70--300 keV HXD-GSO data, resulting in an acceptable ($\chi^2/\nu = 108.7/102$) simultaneous fit; the model parameters remained unchanged within their statistical errors. Panels (b) and (c) therein show the fit residuals. We tried a simple continuum blurring and found no significant change, but we defer detailed analysis to later work.
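The one-parameter confidence ranges quoted above come from scanning the frozen parameter and reading $\Delta\chi^2 = 1.00$ (68\%) and $\Delta\chi^2 = 2.71$ (90\%) off curves like those in Figure~\ref{fig:chisq}. A minimal sketch of that procedure (the $\chi^2$ curve below is a made-up stand-in for the actual fit results, not the published data):

```python
import numpy as np

# Toy chi-square curve from scanning a frozen parameter, standing in for the
# R_Fe grid of Figure 4 (minimum placed at R_Fe = 8.2 R_g as in the text).
r_fe = np.linspace(1.0, 20.0, 200)                 # R_Fe in units of R_g
chi2 = 100.0 + 12.0 * np.log(r_fe / 8.2)**2        # hypothetical curve

chi2_min = chi2.min()

def conf_interval(delta):
    """Range of the scanned parameter with chi^2 < chi2_min + delta
    (single parameter of interest)."""
    ok = r_fe[chi2 < chi2_min + delta]
    return ok.min(), ok.max()

lo68, hi68 = conf_interval(1.00)   # Delta chi^2 = 1.00 -> 68% for one parameter
lo90, hi90 = conf_interval(2.71)   # Delta chi^2 = 2.71 -> 90% for one parameter
```

The 90\% interval is by construction wider than the 68\% one; with shallow curves (as for the red spectra) it can run off the scanned grid, which is what "unconstrained at the 90\% level" means operationally.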
\begin{figure} \begin{center} \vbox{ \includegraphics[scale=0.34]{figure5.eps}} \caption{ (a) The XIS0 ($r=3'-7.5'$), HXD-PIN, and HXD-GSO spectra (black crosses), simultaneously fitted with the absorbed {\tt diskbb+pexriv+laor} model (red) and presented in a deconvolved $\nu F \nu$ form. The PL (blue), reflection (green), {\tt diskbb} (orange), and {\tt laor} (purple) components are also shown. (b) The fit residuals. (c) The same as panel (b), but the {\tt laor} component is set to 0. } \label{fig:specfit} \end{center} \end{figure} \section{DISCUSSION} \label{discussion} We reanalyzed the {\it Suzaku} XIS0 and HXD data of GX~339$-$4 acquired in the 2007 VHS. When the disk emission was modeled with {\tt diskbb}, the ``blue'' XIS0 spectrum, using $r>2'$, gave a small Fe-K inner radius, $R_{\rm Fe}< 3.6 R_{\rm g}$ (\S~\ref{bluespec}). Although this apparently reconfirms the high BH spin of MEA08, the Fe line EW is too large (\S 3.3). In addition, a large-$R_{\rm Fe}$ ($\sim 10 R_{\rm g}$) solution is also allowed when disk Comptonization, which usually operates in the VHS, is considered. Thus, the Fe-line solutions are degenerate, depending on the continuum modeling. To further suppress the XIS pileup effects, we utilized the ``red'' XIS0 spectrum from $r>3'$ (\S 3.4). Then, the {\tt diskbb} and {\tt compbb} results became more consistent. Importantly, the large-$R_{\rm Fe}$ solution (e.g., $5.0< R_{\rm Fe}/R_{\rm g} <14$ at $1\sigma$) is preferred, although the small-$R_{\rm Fe}$ solution still remains valid at the 90\% confidence level. The implied Fe-K line EW and center energy are consistent with the reflection parameters (\S 3.4). Assuming isotropic emission, the 0.5--200 keV luminosity is $3.8 \times 10^{38}\,d_8^2$ erg s$^{-1}$, or $\sim 0.4\,d_8^2m_7^{-1}$ times the Eddington limit ($L_{\rm Ed}$).
Since the VHS has so far been observed in other sources over a typical luminosity range of $ (0.2-1) L_{\rm Ed}$ (Kubota \& Makishima 2004), this indicates $ 0.5 < d_8^2/m_7 < 2.5$, or $ 0.7/\sqrt{m_7} < Q < 1.3/\sqrt{m_7}$. Considering extreme BH masses of $3\;M_\odot$ ($m_7= 0.43$) and $15\;M_\odot$ ($m_7= 2.1$), we then obtain $0.5 < Q < 2$. The {\tt diskbb} radius $R_{\rm in}/R_{\rm g}=(5.6^{+1.0}_{-1.5}) \; Q$, found in \S~3.4, is in fact a lower limit, because a significant fraction of disk photons will be Comptonized into the PL by hot electron clouds (Kubota \& Makishima 2004). Then, the true disk area must be the sum of the {\tt diskbb} area and that of the seed photon source. Since the 0.5--200 keV photon number in the PL component is $\sim5$ times larger than that contained in the 0.5--10 keV {\tt diskbb} emission, the estimated radius will increase by a factor of $\sqrt{1+5}$, to $R_{\rm in}/R_{\rm g} =(10-16)\; Q$. The above estimated uncertainty in $Q$ then yields $R_{\rm in}/R_{\rm g} =(5-32)$. Since we derived this range assuming $i = 30^{\circ}$ (or $\sqrt{\cos i} = 0.93$), uncertainties in $i$ can increase the upper bound, but would not affect the lower bound by more than $\sim$ 7\%. The present Fe-line and {\tt diskbb} analyses consistently suggest $R_{\rm in}/R_{\rm g} \gtrsim 5$, and hence $a<0.4$, in contrast to the value of $a = 0.89 \pm 0.04$ of MEA08. If our interpretation is correct, GX~339$-$4 is inferred to be spinning only mildly (if at all), like Cyg X-1 (\S~\ref{intro}; Miller et al. 2009). In addition, the consistency between the two methods reconfirms that $Q$ is relatively close to unity. We must admit that the assumption of a single PL continuum down to $\sim 1$ keV might not be warranted.
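The arithmetic behind the $\sqrt{1+5}$ seed-photon correction and the final radius range can be reproduced directly (a sketch; the photon-number ratio of 5, the measured radius, and the $Q$ bounds are taken from the text above):

```python
import numpy as np

# Measured diskbb inner radius (Sec. 3.4): lower, best, upper, in units of R_g * Q.
r_diskbb = np.array([5.6 - 1.5, 5.6, 5.6 + 1.0])

# The PL carries ~5x the photon number of the 0.5-10 keV diskbb seed photons;
# the true emitting area is therefore ~(1+5)x the apparent one, and the
# radius grows by sqrt(1 + 5).
r_true = r_diskbb * np.sqrt(1.0 + 5.0)          # approximately (10-16) Q

# Folding in the distance/mass constraint 0.5 < Q < 2 derived from the VHS
# luminosity range gives the quoted overall range.
r_lo, r_hi = r_true[0] * 0.5, r_true[-1] * 2.0  # approximately 5 to 32
```

Running this reproduces the numbers quoted in the text: $(10\mbox{--}16)\,Q$ after the Comptonization correction, and $R_{\rm in}/R_{\rm g} \approx 5\mbox{--}32$ after folding in $Q$.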
In that sense, the value of $R_{\rm in}/R_{\rm g}= 5.6\,Q$ (\S~\ref{intro}) measured with {\it Tenma} is more reliable, since it was obtained in the HSS, where the spectral modeling is much less ambiguous, and by a non-CCD instrument that is free from any pileup. This, together with our estimates on $Q$, further argues against the large spin parameters. So far, large values of $a$ have been reported for GX~339$-$4 based on {\it Chandra} data, and from {\it XMM-Newton} plus {\it RXTE} (Miller et al. 2004a,b; Reis et al. 2008). However, these measurements have not been examined for systematic effects due to the continuum choice employed therein. In addition, the former may be limited by the continuum bandwidth, and the latter could still be subject to CCD pileup. \smallskip The authors would like to thank the Suzaku team members and an anonymous referee for valuable comments. The present work was supported by a Grant-in-Aid for JSPS Fellows.
\section{Introduction} In a previous paper [1], we showed that a new SO(3) Wu-Yang-like [2] seven-dimensional Kaluza-Klein (KK) dyon solution satisfies the Einstein equation, by calculating the Christoffel symbols and the Ricci tensor. In this note, we present an alternative approach [3]: using an orthonormal frame and Cartan's structure equations, we compute the affine spin-connection one-form, the curvature tensor, and the Ricci tensor. Now consider Kaluza-Klein theory [4] on a $(4+N)$-manifold $M$ with a metric $\bar{g}_{AB}(x)$ in local coordinates $\bar{x}^{A}$. The line element is \begin{eqnarray} d \bar{s}^2 &=&\bar{g}_{AB}d\bar{x}^A d\bar{x}^B \\ &=&g_{\mu \nu} (x) dx^{\mu} dx^{\nu} + \gamma_{mn}(y) (dy^m + B_{\mu}^m dx^{\mu} )(dy^n + B_{\nu}^n dx^{\nu} ), \end{eqnarray} where $x$ parametrizes the four-dimensional spacetime and $y$ parametrizes the extra dimensions. We use indices $A,B,C,\ldots$ for the total spacetime; $\mu ,\nu ,\rho ,\ldots$ for the four-dimensional spacetime; and $m,n,l,\ldots$ for the extra dimensions. $g_{\mu \nu}$ is a function of $x$ only, and $\gamma_{mn}$ is a function of $y$ only. The ansatz [1] for the Kaluza-Klein dyonic metric admitting $SO(3)$ Killing vectors is \begin{eqnarray} d\bar{s}^2 =& -& e^{2\Psi} dt^{2} \nonumber \\ &+& e^{2\Lambda} dr^{2} +r^2 d\theta^2 +r^2 \sin^{2}{\theta} d\phi^2 \nonumber \\ &+& (dR + B_{\mu}^{5} dx^{\mu} )^2 + R^2 (d\Theta + B_{\mu}^{6} dx^{\mu} )^2 + R^2 \sin^{2}\Theta (d\Phi + B_{\mu}^{7} dx^{\mu} )^2 . \end{eqnarray} Here $r,\theta , \phi $ are the ordinary three-dimensional spherical coordinates, $(r,\theta ,\phi )=(\bar{x}^1 ,\bar{x}^2 , \bar{x}^3 )=(x^1 ,x^2 , x^3 )$, and $R, \Theta , \Phi$ are spherical coordinates in the extra dimensions, $(R, \Theta ,\Phi ) =(\bar{x}^5 ,\bar{x}^6 , \bar{x}^7 )=(y^{5} ,y^6 ,y^7 )$. $\Psi$ and $\Lambda$ are two functions of $r$.
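The $SO(3)$ Killing vectors admitted by this ansatz are written out below (eqs. (4)--(13)); that they indeed close the $SO(3)$ algebra can be checked symbolically. A minimal sympy sketch (not part of the original derivation), using the components $\zeta_{a}^{6}=\hat{\Phi}_{a}$ and $\zeta_{a}^{7}=-\hat{\Theta}_{a}/\sin\Theta$ and a concrete test function on the extra-dimensional sphere:

```python
import sympy as sp

Theta, Phi = sp.symbols('Theta Phi')

# Cartesian components of the unit vectors on the extra-dimensional sphere.
That = [sp.cos(Theta)*sp.cos(Phi), sp.cos(Theta)*sp.sin(Phi), -sp.sin(Theta)]
Phat = [-sp.sin(Phi), sp.cos(Phi), sp.Integer(0)]

def L(a, g):
    """Killing operator L_a = -i zeta_a^m d_m, with zeta_a^5 = 0,
    zeta_a^6 = Phat_a, zeta_a^7 = -That_a/sin(Theta)."""
    return -sp.I*(Phat[a]*sp.diff(g, Theta) - That[a]/sp.sin(Theta)*sp.diff(g, Phi))

# Check [L_1, L_2] = i L_3 on a concrete test function on the sphere
# (the same computation goes through for the cyclic permutations).
g = sp.cos(Theta)*sp.exp(sp.I*Phi)
residual = sp.simplify(L(0, L(1, g)) - L(1, L(0, g)) - sp.I*L(2, g))
```

The residual vanishes, i.e. the three operators are the standard angular-momentum generators on the $(\Theta ,\Phi )$ sphere.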
We have $g_{00}=-e^{2\Psi}$, $g_{11}=e^{2\Lambda}$, $g_{22} =r^2$, $g_{33}=r^2 \sin^2 \theta$, $\gamma_{55}=1$, $\gamma_{66}=R^2$, $\gamma_{77}=R^2 \sin^2 \Theta$. $B_{\mu}^m$ cannot be identified as the Yang-Mills field. To extract the true Yang-Mills field, one has to introduce the Killing vectors \begin{equation} L_a \equiv - i \zeta_{a}^{m} \partial_{m}, \end{equation} which generate an $SO(3)$ algebra, \begin{equation} [L_{a} , L_{b} ]=i \epsilon_{abc} L_{c}. \end{equation} Inserting $L_{a}$ of (4) into equation (5), one gets the Killing equation \begin{equation} \zeta_{a}^m \partial_{m} \zeta_{b}^n - \zeta_{b}^m \partial_{m} \zeta_{a}^n =- \epsilon_{abc} \zeta_{c}^n . \end{equation} With these Killing vectors, one can define \begin{equation} B_{\mu}^m =\zeta_{a}^{m} A_{\mu}^{a}, \end{equation} where $A_{\mu}^{a}$ is the true Yang-Mills field and $\zeta_{a}^{m}$ is only a function of $y$. Defining \begin{eqnarray} \widetilde{\mathcal{F}}_{\mu \nu}^{m} &\equiv& \partial_{\mu} B_{\nu}^{m} + B_{\nu}^{l} \partial_{l} B_{\mu}^{m} -(\mu \leftrightarrow \nu ) \\ &=& \zeta_{a}^{m} F_{\mu \nu}^{a}, \end{eqnarray} where $F_{\mu \nu}^{a}$ is the true field strength tensor of the Yang-Mills field, \begin{equation} F_{\mu \nu}^{a} = \partial_{\mu} A_{\nu}^{a}- \partial_{\nu} A_{\mu}^{a} +\epsilon_{abc} A_{\mu}^{b} A_{\nu}^{c}. \end{equation} It can be checked that the components of $\zeta_{a}^{m}$, \begin{eqnarray} \zeta_{a}^{5} &=& 0, \\ \zeta_{a}^{6} &=& \hat{\Phi}_{a}, \\ \zeta_{a}^{7} &=& -{1\over \sin{\Theta} } \; \hat{\Theta}_{a}, \end{eqnarray} satisfy the Killing equation (6). The gauge-field components of the Wu-Yang-like KK dyon are \begin{eqnarray} A_{0}^{a} &=& {1\over r} \hat{r}^{a}, \;\; \; \;\;\; \; \; A_{1}^{a} = 0, \\ A_{2}^{a} &=& - \hat{\phi}^{a}, \; \;\; \; \; \;\; A_{3}^{a} = \sin\theta \; \hat{\theta}^{a} .
\end{eqnarray} $A_{1}^{a}$,$A_{2}^{a}$,$A_{3}^{a}$ are just the spherical coordinate representation of the Wu-Yang monopole field in the ordinary gauge theory of four-dimensional spacetimes. The electric field of the KK dyon is \begin{equation} F_{01}^{a} = {1\over r^2} \; \hat{r}^{a} , \end{equation} while the magnetic field is \begin{equation} F_{23}^{a} = - \sin \theta \; \hat{r}^{a}. \end{equation} The fields $B_{\mu}^{m}$ in (7) can be rewritten as \begin{eqnarray} B_{\mu}^{5} &=& \; 0, \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; B_{1}^{m} =\; \; 0,\; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \\ B_{0}^{6}&=& {1\over r} \; \hat{r} \cdot \hat{\Phi}, \; \; \; \; \; \; \; \; \; \; \; \; \; \; B_{0}^{7} = - { 1\over {r\sin\Theta}} \; \hat{r} \cdot \hat{\Theta}, \\ B_{2}^{6}&=& - \hat{\phi} \cdot \hat{\Phi}, \; \; \; \; \; \; \; \; \; \; \; \;\; \; B_{2}^{7} = \; \;\; { 1\over {\sin\Theta}} \; \hat{\phi} \cdot \hat{\Theta}, \\ B_{3}^{6}&=& \; \sin\theta \; \hat{\theta} \cdot \hat{\Phi}, \; \; \; \; \; \; \; \; \; B_{3}^{7} = - {\sin\theta \over {\sin\Theta}} \; \hat{\theta} \cdot \hat{\Theta}. \end{eqnarray} The nonzero components of $\widetilde{\mathcal{F}}_{\mu \nu}^{m}$ are \begin{eqnarray} \widetilde{\mathcal{F}}_{01}^{6} &=& \; \; {1\over r^2}\; \hat{r} \cdot \hat{\Phi}, \;\; \; \; \; \; \; \; \; \; \; \; \; \; \; \widetilde{\mathcal{F}}_{01}^{7} = {-1 \over r^2\sin\Theta}\; \hat{r} \cdot \hat{\Theta}, \; \; \; \; \; \; \; \\ \widetilde{\mathcal{F}}_{23}^{6} &=& {- \sin\theta}\; \hat{r} \cdot \hat{\Phi}, \;\; \; \; \; \; \; \; \; \; \; \widetilde{\mathcal{F}}_{23}^{7} = {\sin\theta \over \sin\Theta}\; \hat{r} \cdot \hat{\Theta}. 
\end{eqnarray} \section{Orthonormal Frame} We now decompose the metric into vielbeins as \begin{equation} \bar{g}_{AB} =\eta_{\bar{a} \bar{b}}\; e^{\bar{a}}_{\; A} e^{\bar{b}}_{\; B}, \; \; \; \; \; \; \; \; \bar{a},\bar{b}= 0,1,2,3,5,6,7, \end{equation} where $\eta_{\bar{a} \bar{b}}$ is the seven-dimensional flat Minkowski metric, \begin{equation} \eta_{\bar{a} \bar{b}} = {\rm diag}(-1,1,1,1,1,1,1). \end{equation} The inverse of $e^{\bar{a}}_{\; A}$ is defined by \begin{equation} E_{\bar{a}}^{\; A} =\eta_{\bar{a} \bar{b}}\; \bar{g}^{AB}\; e^{\bar{b}}_{\; B}, \end{equation} which obeys \begin{equation} E_{\bar{a}}^{\; A}\; e^{\bar{b}}_{\; A}= \delta_{\bar{a}}^{\bar{b}} , \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \eta^{\bar{a} \bar{b}}\; E_{\bar{a}}^{\; A}\; E_{\bar{b}}^{\; B}=\bar{g}^{AB}. \end{equation} $e^{\bar{a}}_{\; A}$ is the matrix which transforms the coordinate basis $d\bar{x}^{A}$ of the cotangent space $T^{*}_{x} (M)$ to an orthonormal basis of the same space, \begin{equation} e^{\bar{a}} =\; e^{\bar{a}}_{\; A} \; d{\bar{x}}^{A}. \end{equation} The vielbein basis of the SO(3) KK dyonic metric can be written as \begin{equation} e^0 =e^{\Psi} dt , \; \; \; \; \; \; e^{1} =e^{\Lambda}\; dr, \; \; \; \; \; \; e^2 = r \; d\theta , \; \; \; \; \; \; e^3 = r \sin\theta \; d\phi, \end{equation} \begin{equation} e^5 =dR , \; \; \; \; \; \; \; \; \; e^{6} =R(d\Theta +B^{6}_{\mu} dx^{\mu} ), \; \; \; \; \; \; \; \; \; e^7 = R \sin\Theta (d\Phi +B^{7}_{\mu} dx^{\mu} ). \end{equation} The affine spin connection one-form $\omega^{\bar{a}}_{\; \; \bar{b}}$ is introduced by \begin{equation} de^{\bar{a}} +\omega^{\bar{a}}_{\; \; \bar{b}} \wedge e^{\bar{b}} =\; 0 \end{equation} and the metricity condition \begin{equation} \omega_{\bar{a} \bar{b}} = -\omega_{\bar{b} \bar{a}}.
\end{equation} The curvature 2-form is defined as \begin{equation} {{R}}^{\bar{a}}_{\; \; \bar{ b}} =d{\omega}^{\bar{a}}_{\; \; \bar{b}} +{\omega}^{\bar{a}}_{\; \; \bar{c}}\wedge {\omega}^{\bar{c}}_{\; \; \bar{b}} ={\bar{R}}^{\bar{a}}_{\; \; {\bar{b}}{ \bar{c}}{ \bar{d}}}\; e^{\bar{c}} \wedge e^{\bar{d}}. \end{equation} Equations (31) and (33) are called Cartan's structure equations. The components of the curvature tensor have the relations \begin{equation} {\bar{R}}_{\bar{a} \bar{b} \bar{c} \bar{d}} =-{\bar{R}}_{\bar{b} \bar{a} \bar{c} \bar{d}} =-{\bar{R}}_{\bar{a} \bar{b} \bar{d} \bar{c}}= {\bar{R}}_{\bar{c} \bar{d} \bar{a} \bar{b}}, \end{equation} and satisfy the Bianchi identity, \begin{equation} {\bar{R}}_{\bar{a} \bar{b} \bar{c} \bar{d}} + {\bar{R}}_{\bar{a} \bar{c} \bar{d} \bar{b}} + {\bar{R}}_{\bar{a} \bar{d} \bar{b} \bar{c}}= 0. \end{equation} The components of the affine spin connection one-form $\omega^{\bar{a}}_{\; \; \bar{b}}$ of the $SO(3)$ KK dyon can be written more explicitly as \begin{eqnarray} \omega^0_{\; \; 1}& = &\Psi^{'} e^{-\Lambda} \; e^0 + {R\over 2r^2} e^{-\Psi -\Lambda} (\hat{r} \cdot \hat{\Phi})\; e^6 -{R\over 2r^2} e^{-\Psi -\Lambda} (\hat{r} \cdot \hat{\Theta})\; e^7 , \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \\ \omega^0_{\; \; 2} &=& 0, \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \omega^0_{\; \; 3} =0, \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \omega^0_{\; \; 5} =0, \; \;\; \; \; \; \; \; \; \; \; \; \; \; \; \; \\ \omega^0_{\; \; 6}& = & {R\over 2r^2} e^{-\Psi -\Lambda} (\hat{r} \cdot \hat{\Phi})\; e^1, \; \; \; \; \; \; \; \\ \omega^0_{\; \; 7} &= &- {R\over 2r^2} e^{-\Psi -\Lambda} (\hat{r} \cdot \hat{\Theta})\; e^1, \\ \omega^1_{\; \; 2} &= &- {1\over r} e^{ -\Lambda}\; e^2, \\ \omega^1_{\; \; 3} &= &- {1\over r} e^{ -\Lambda} \; e^3, \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \omega^1_{\; \; 5} = 0, \\
\omega^1_{\; \; 6}& = & \; \; {R\over 2r^2} e^{-\Psi -\Lambda} (\hat{r} \cdot \hat{\Phi})\; e^0 \\ \omega^1_{\; \; 7}& = & -{R\over 2r^2} e^{-\Psi -\Lambda} (\hat{r} \cdot \hat{\Theta})\; e^0 \\ \omega^2_{\; \; 3}& = & -{1\over r} \cot\theta \; e^3 + {R\over 2r^2} \; (\hat{r} \cdot \hat{\Phi})\; e^6 -{R\over 2r^2} \; (\hat{r} \cdot \hat{\Theta})\; e^7, \; \; \; \; \; \; \; \; \; \; \;\omega^2_{\; \; 5} = 0, \\ \omega^2_{\; \; 6}& = & \; \; {R\over 2r^2} (\hat{r} \cdot \hat{\Phi})\; e^3 ,\\ \omega^2_{\; \; 7}& = & -{R\over 2r^2} (\hat{r} \cdot \hat{\Theta})\; e^3 , \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \omega^3_{\; \; 5} = \; \; 0,\\ \omega^3_{\; \; 6}& = & -{R\over 2r^2} (\hat{r} \cdot \hat{\Phi})\; e^2 , \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \omega^3_{\; \; 7} = \; \; {R\over 2r^2} \; (\hat{r} \cdot \hat{\Theta})\; e^2 ,\\ \omega^5_{\; \; 6}& = & -{1\over R}\; e^6, \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \omega^5_{\; \; 7} = -{1\over R}\; e^7, \\ \omega^6_{\; \; 7}& = & -{1\over R} \cot\Theta \; e^7 -{1\over r} \; e^{-\Psi } \; \cot\Theta \; (\hat{r} \cdot \hat{\Theta}) \; e^0 -{1\over r} e^{-\Psi } \; (\hat{r} \cdot \hat{R}) \; e^0 \nonumber \\ & & +{1\over r} \cot\Theta \; (\hat{\Phi} \cdot \hat{\Theta}) \; e^2 +{1\over r} (\hat{\Phi} \cdot \hat{R}) \; e^2 -{1\over r} \cot\Theta \; (\hat{\theta} \cdot \hat{\Theta}) \; e^3 -{1\over r} (\hat{\theta} \cdot \hat{R}) \; e^3.
\end{eqnarray} The nonzero components of the curvature tensor are \begin{eqnarray} \bar{R}_{0101} &=& e^{-2\Lambda} ( \Psi^{''} -\Psi^{'} \Lambda^{'} + (\Psi^{'} )^2 ) -{3\over 4}{R^2 \over r^4}\; e^{-2\Psi -2\Lambda} \{ (\hat{r} \cdot \hat{\Theta})^2 + (\hat{r} \cdot \hat{\Phi})^2 \}, \\ \bar{R}_{0116} &=& \; \; {R\over 2r^2}\; e^{-\Psi -2\Lambda} (\Psi^{'} + \Lambda^{'} + {2\over r}) (\hat{r} \cdot \hat{\Phi}), \\ \bar{R}_{0117} &=& - {R\over 2r^2} \; e^{-\Psi -2\Lambda} (\Psi^{'} + \Lambda^{'} + {2\over r}) (\hat{r} \cdot \hat{\Theta}),\\ \bar{R}_{0123} &=& \; \; {R^2 \over 2r^4}\; e^{-\Psi -\Lambda} \{ (\hat{r} \cdot \hat{\Theta})^2 + (\hat{r} \cdot \hat{\Phi})^2 \},\\ \bar{R}_{0156} &=& - {1\over r^2} e^{-\Psi -\Lambda} (\hat{r} \cdot \hat{\Phi}), \; \; \; \; \; \; \; \; \; \; \; \; \bar{R}_{0157} = \; \;{1\over r^2} e^{-\Psi -\Lambda} (\hat{r} \cdot \hat{\Theta}),\\ \end{eqnarray} \begin{eqnarray} \bar{R}_{0167} &=& - {1\over r^2} e^{-\Psi -\Lambda} (\hat{r} \cdot \hat{R}), \; \; \; \; \; \;\; \; \; \; \; \;\; \; \; \; \; \bar{R}_{0202} = \; \; {1\over r}\; \Psi^{'} e^{-2\Lambda},\\ \bar{R}_{0213} &= & \; \; {R^2 \over 4r^4}\; e^{-\Psi -\Lambda} \{ (\hat{r} \cdot \hat{\Theta})^2 + (\hat{r} \cdot \hat{\Phi})^2 \}, \\ \bar{R}_{0226} &=& - {R \over 2r^3}\; e^{-\Psi -2\Lambda} (\hat{r} \cdot \hat{\Phi}) , \; \; \; \; \; \;\; \;\; \; \; \; \; \bar{R}_{0227} =\; \; {R \over 2r^3}\; e^{-\Psi -2\Lambda} (\hat{r} \cdot \hat{\Theta}) ,\\ \bar{R}_{0303}& =&\; \;{1\over r}\; \Psi^{'} e^{-2\Lambda},\\ \bar{R}_{0312} &=& - {R^2 \over 4r^4}\; e^{-\Psi -\Lambda} \{ (\hat{r} \cdot \hat{\Theta})^2 + (\hat{r} \cdot \hat{\Phi})^2 \},\\ \bar{R}_{0336} &=& -{R \over 2r^3}\; e^{-\Psi -2\Lambda} (\hat{r} \cdot \hat{\Phi}), \; \; \; \; \; \;\; \; \; \; \; \; \; \; \bar{R}_{0337} =\; \; {R \over 2r^3}\; e^{-\Psi -2\Lambda} (\hat{r} \cdot \hat{\Theta}) ,\\ \bar{R}_{0516} &=& - {1 \over 2r^2}\; e^{-\Psi -\Lambda} (\hat{r} \cdot \hat{\Phi}) , \; \; \; \; \; \;\; \; \; \; \; \;\; \; \; \; 
\bar{R}_{0517} =\; \; {1 \over 2r^2}\; e^{-\Psi -\Lambda} (\hat{r} \cdot \hat{\Theta}) ,\\ \bar{R}_{0606} &=& \; \; {R^2 \over 4r^4}\; e^{-2\Psi -2\Lambda} (\hat{r} \cdot \hat{\Phi})^2 , \; \; \; \; \; \;\; \;\; \; \; \; \; \bar{R}_{0607} = - {R^2 \over 4r^4}\; e^{-2\Psi -2\Lambda} (\hat{r} \cdot \hat{\Theta}) (\hat{r} \cdot \hat{\Phi}),\\ \bar{R}_{0615} &=& \; \; {1 \over 2r^2}\; e^{-\Psi -\Lambda} (\hat{r} \cdot \hat{\Phi}) , \; \; \; \; \; \;\; \; \; \; \; \;\;\; \; \; \; \bar{R}_{0617} = - {1 \over 2r^2}\; e^{-\Psi -\Lambda} (\hat{r} \cdot \hat{R}) ,\\ \bar{R}_{0707} &=& \; \; {R^2 \over 4r^4}\; e^{-2\Psi -2\Lambda} (\hat{r} \cdot \hat{\Theta})^2 , \; \; \; \; \; \;\; \; \; \; \; \; \; \bar{R}_{0715} = - {1 \over 2r^2}\; e^{-\Psi -\Lambda} (\hat{r} \cdot \hat{\Theta}) ,\\ \bar{R}_{0716} &=& \; \; {1 \over 2r^2}\; e^{-\Psi -\Lambda} (\hat{r} \cdot \hat{R}) , \; \; \; \; \; \;\; \; \; \; \; \; \;\; \; \; \; \bar{R}_{1212} = \; \;{1\over r}\; \Lambda^{'} e^{-2\Lambda},\\ \bar{R}_{1236} &=&\; \; {R \over 2r^3}\; e^{-\Lambda} (\hat{r} \cdot \hat{\Phi}) , \; \; \; \; \; \;\; \; \; \; \; \; \; \; \; \; \;\; \; \; \; \bar{R}_{1237} = - {R \over 2r^3}\; e^{-\Lambda} (\hat{r} \cdot \hat{\Theta}) ,\\ \bar{R}_{1313} &=& \; \;{1\over r}\; \Lambda^{'} e^{-2\Lambda}, \; \; \; \; \; \;\; \; \; \; \; \;\; \; \; \; \; \;\; \; \; \; \; \; \; \; \; \; \; \bar{R}_{1326} = - {R \over 2r^3}\; e^{-\Lambda} (\hat{r} \cdot \hat{\Phi}) ,\\ \bar{R}_{1327} &=& \; \; {R \over 2r^3}\; e^{-\Lambda} (\hat{r} \cdot \hat{\Theta}) , \; \; \; \; \; \;\; \; \; \; \; \;\; \; \; \; \; \; \; \; \; \bar{R}_{1616} = - {R^2 \over 4r^4}\; e^{-2\Psi -2\Lambda} (\hat{r} \cdot \hat{\Phi})^2 ,\\ \bar{R}_{1617} &=& \; \; {R^2 \over 4r^4}\; e^{-2\Psi -2\Lambda} (\hat{r} \cdot \hat{\Theta}) (\hat{r} \cdot \hat{\Phi}), \; \; \; \; \; \bar{R}_{1623} = - {R \over r^3}\; e^{ -\Lambda} (\hat{r} \cdot \hat{\Phi}) ,\\ \bar{R}_{1717} &=& - {R^2 \over 4r^4}\; e^{-2\Psi -2\Lambda} (\hat{r} \cdot \hat{\Theta})^2 , \; \; \; 
\; \; \; \; \; \; \; \; \; \, \bar{R}_{1723} = {R \over r^3}\; e^{ -\Lambda} (\hat{r} \cdot \hat{\Theta}) ,\\ \bar{R}_{2323} &=& \; \; {1\over r^2}(1-e^{-2\Lambda} ) -{3\over 4}{R^2 \over r^4}\; \{ (\hat{r} \cdot \hat{\Theta})^2 + (\hat{r} \cdot \hat{\Phi})^2 \},\\ \bar{R}_{2356} &=& \; \; {1 \over r^2}\; (\hat{r} \cdot \hat{\Phi}) , \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \bar{R}_{2367} = \; \; {1 \over r^2}\; (\hat{r} \cdot \hat{R}) ,\\ \bar{R}_{2357} &=& - {1 \over r^2}\; (\hat{r} \cdot \hat{\Theta}) , \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \bar{R}_{2536} = \; \; {1 \over 2r^2} \; (\hat{r} \cdot \hat{\Phi}) ,\\ \bar{R}_{2537} &=& - {1 \over 2r^2} \; (\hat{r} \cdot \hat{\Theta}) , \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \, \bar{R}_{2626} = \; \; {R^2\over 4r^4}\; (\hat{r} \cdot \hat{\Phi})^2 \\ \bar{R}_{2627} &=& - {R^2 \over 4r^4}\; (\hat{r} \cdot \hat{\Theta}) (\hat{r} \cdot \hat{\Phi}), \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \bar{R}_{2635} = - {1 \over 2r^2}\; (\hat{r} \cdot \hat{\Phi}) \\ \bar{R}_{2637} &=& \; \; {1 \over 2r^2}\; (\hat{r} \cdot \hat{R}), \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \, \bar{R}_{2727} = \; \; {R^2\over 4r^4}\; (\hat{r} \cdot \hat{\Theta})^2 \\ \bar{R}_{2735} &=& \; \; {1 \over 2r^2}\; (\hat{r} \cdot \hat{\Theta}), \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \, \bar{R}_{2736} = - {1 \over 2r^2}\; (\hat{r} \cdot \hat{R})\\ \bar{R}_{3636} &=& \; \; {R^2\over 4r^4}\; (\hat{r} \cdot \hat{\Phi})^2 , \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \; \, \bar{R}_{3637} = - {R^2 \over 4r^4}\; (\hat{r} \cdot \hat{\Theta}) (\hat{r} \cdot \hat{\Phi}),\\ \bar{R}_{3737} &=& \; \; {R^2\over 4r^4}\; (\hat{r} \cdot \hat{\Theta})^2 . 
\end{eqnarray} For brevity, we have omitted roughly one half of the nonzero components of the curvature tensor; they follow from the symmetry relations in (34). \section{Einstein Equation} Contracting the curvature tensor, one obtains the Ricci tensor \begin{equation} \bar{R}_{\bar{a} \bar{b}} = \eta^{\bar{c} \bar{d}} \bar{R}_{\bar{c}\bar{a}\bar{d}\bar{b}}. \end{equation} Substituting the components of the dyonic metric, \begin{eqnarray} e^{2\Psi} &=& 1-{r_{s} \over r} , \\ e^{2\Lambda} &=& (1-{r_{s} \over r })^{-1}, \end{eqnarray} where $r_s$ is the Schwarzschild radius, we find that all components of $\bar{R}_{\bar{a} \bar{b}}$ vanish except \begin{eqnarray} \bar{R}_{00}& =& -{1\over 2} {R^2 \over r^4} \{ ( \hat{r} \cdot \hat{\Theta} )^2 +( \hat{r} \cdot \hat{\Phi} )^2 \}, \\ \bar{R}_{11}& =& +{1\over 2} {R^2 \over r^4} \{ ( \hat{r} \cdot \hat{\Theta} )^2 +( \hat{r} \cdot \hat{\Phi} )^2 \} ,\\ \bar{R}_{22}& =& - {1\over 2}{R^2 \over { r^4}} \{ ( \hat{r} \cdot \hat{\Theta} )^2 +( \hat{r} \cdot \hat{\Phi} )^2 \}, \\ \bar{R}_{33}& =& -{1\over 2} {R^2 \over { r^4}} \{ ( \hat{r} \cdot \hat{\Theta} )^2 +( \hat{r} \cdot \hat{\Phi} )^2 \}. \end{eqnarray} The Ricci scalar curvature then follows as \begin{eqnarray} \bar{R} &=& \eta^{\bar{a} \bar{b}} \bar{R}_{\bar{a} \bar{b}} \\ &=& - \bar{R}_{00} + \bar{R}_{11} + \bar{R}_{22} + \bar{R}_{33} \\ &=& 0. \end{eqnarray} From the fields $\widetilde{\mathcal{F}}_{\mu \nu}^{m}$ in (22) and (23), the components of the Ricci tensor, (84)--(87), can be recast into the form \begin{equation} \bar{R}_{\bar{a} \bar{b}} = -{1\over 2}E_{\bar{a}}^{\;\; \mu} \; E_{\bar{b}}^{\;\; \nu}\; \bar{g}^{\alpha \beta} \gamma_{mn} \widetilde{\mathcal{F}}_{\mu \alpha}^{m} \widetilde{\mathcal{F}}_{\nu \beta}^{n}, \end{equation} or \begin{equation} \bar{R}_{\mu \nu} = -{1\over 2}\bar{g}^{\alpha \beta} \gamma_{mn} \widetilde{\mathcal{F}}_{\mu \alpha}^{m} \widetilde{\mathcal{F}}_{\nu \beta}^{n}.
\end{equation} Since the identity $ \gamma_{mn} \widetilde{\mathcal{F}}_{\mu \nu}^{m} \widetilde{\mathcal{F}}^{\mu \nu n} =0$ holds, the right-hand side of equation (92) can be identified as $8\pi$ times the stress-energy tensor of the Yang-Mills field, \begin{equation} \bar{R}_{\mu \nu} = 8\pi \bar{T}_{\mu \nu}, \; \; \; \; \bar{T}_{\mu \nu} ={-1\over 16 \pi}\bar{g}^{\alpha \beta} \gamma_{mn} \widetilde{\mathcal{F}}_{\mu \alpha}^{m} \widetilde{\mathcal{F}}_{\nu \beta}^{n}. \end{equation} The Einstein equation, $\bar{R}_{AB}-{1\over 2} \bar{g}_{AB} \bar{R} = 8\pi \bar{T}_{AB}$, is then satisfied, with the remaining components of $\bar{T}_{AB}$ vanishing: $\bar{T}_{\mu m}=0$ and $\bar{T}_{m n}=0$. The $SO(3)$ KK dyonic metric thus satisfies the Einstein equation in seven dimensions. The results of the two different methods coincide.
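With the Schwarzschild choice (82)--(83) and the gauge fields switched off, the four-dimensional block of the ansatz reduces to the Schwarzschild metric, whose Ricci tensor must vanish; all of the curvature in (84)--(87) is then sourced by the $B_{\mu}^{m}$ terms. A sympy sketch of that sanity check (a generic Christoffel/Ricci routine, not part of the original derivation):

```python
import sympy as sp

# Coordinates and the Schwarzschild choice e^{2Psi} = 1 - r_s/r,
# e^{2Lambda} = (1 - r_s/r)^{-1} of eqs. (82)-(83), with B_mu^m = 0.
t, r, th, ph = sp.symbols('t r theta phi')
rs = sp.symbols('r_s', positive=True)
x = [t, r, th, ph]
g = sp.diag(-(1 - rs/r), 1/(1 - rs/r), r**2, r**2*sp.sin(th)**2)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad}(d_c g_{db} + d_b g_{dc} - d_d g_{bc})
Gam = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
         - sp.diff(g[b, c], x[d])) for d in range(4))/2)
         for c in range(4)] for b in range(4)] for a in range(4)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ab}
#                      + Gamma^a_{ae} Gamma^e_{bc} - Gamma^a_{ce} Gamma^e_{ab}
Ric = sp.zeros(4, 4)
for b in range(4):
    for c in range(4):
        expr = sum(sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][b][a], x[c])
                   for a in range(4))
        expr += sum(Gam[a][a][e]*Gam[e][b][c] - Gam[a][c][e]*Gam[e][b][a]
                    for a in range(4) for e in range(4))
        Ric[b, c] = sp.simplify(expr)
```

All sixteen components simplify to zero, confirming that the pure-gravity sector is the vacuum Schwarzschild solution.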
\section{Introduction} Relativistic heavy-ion collisions at RHIC have created a hot and dense medium that exhibits properties of a strongly coupled Quark-Gluon Plasma (sQGP)~\cite{wps}. Approximate chiral symmetry may be restored in the sQGP. It was recently suggested that metastable domains may form in such an sQGP state, in which the parity and time-reversal symmetries are locally violated~\cite{PV,PVquench}. Via the chiral magnetic effect, such violation would lead to a separation of positive and negative particles along the system's orbital angular momentum, into the two hemispheres separated by the reaction plane~\cite{PV,PVquench}. The most direct consequence of this charge separation is a negative correlation between the multiplicity asymmetry of positive particles across the hemispheres separated by the reaction plane and that of negative particles~\cite{PV}. Such a negative correlation is in addition to any (background) correlations that may exist due to other dynamics of the collision, whose magnitude may be assessed by the correlation of multiplicity asymmetries between the hemispheres separated by the plane normal to the reaction plane. Another consequence of the charge separation is a positive correlator $\mean{\cabp}$ for unlike-sign (US) particle pairs and a negative $\mean{\cabp}$ for like-sign (LS) particle pairs, where $\alpha$ and $\beta$ are the azimuthal angles of the two particles and $\psi$ is the reaction plane azimuth~\cite{Voloshin}. The reaction plane azimuthal angle is, however, not fixed but varies randomly from event to event in heavy-ion collisions. In order to estimate the reaction plane angle, a third particle, $c$, may be used to correlate with $\alpha$ and $\beta$, correcting for the resolution effect ($\vc$, the elliptic flow of particle $c$)~\cite{Voloshin}. Namely, \be \mean{\cabp}\approx\mean{\cabc}/\vc. \label{eq1} \ee This assumes that genuine three-particle correlations are negligible.
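Equation (\ref{eq1}) can be illustrated with a toy Monte Carlo (a sketch with made-up parameter values, not a simulation of the STAR data): particles $\alpha$ and $\beta$ carry elliptic flow plus an out-of-plane charge-separation term $2a_1\sin(\phi-\psi)$ of the same sign, so that $\mean{\cabp}=-a_1^2$, while particle $c$ carries flow only; dividing the three-particle correlator by $\vc$ then recovers the reaction-plane correlator.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample(n, v2, a1, psi):
    """Draw n angles from dN/dphi ~ 1 + 2*v2*cos(2(phi-psi)) + 2*a1*sin(phi-psi)
    (elliptic flow plus a charge-separation term) by rejection sampling."""
    out = np.empty(0)
    m = 1.0 + 2.0*v2 + 2.0*a1                  # envelope of the distribution
    while out.size < n:
        phi = rng.uniform(0.0, 2.0*np.pi, 4*n)
        w = 1.0 + 2.0*v2*np.cos(2.0*(phi - psi)) + 2.0*a1*np.sin(phi - psi)
        out = np.concatenate([out, phi[rng.uniform(0.0, m, phi.size) < w]])
    return out[:n]

v2, a1, v2c, n_ev, mult = 0.2, 0.2, 0.3, 4000, 50   # illustrative values
s_true, s_est = 0.0, 0.0
for _ in range(n_ev):
    psi = rng.uniform(0.0, 2.0*np.pi)          # true (unobserved) reaction plane
    a = sample(mult, v2, a1, psi)              # "alpha" particles
    b = sample(mult, v2, a1, psi)              # "beta" particles (same-sign signal)
    c = sample(mult, v2c, 0.0, psi)            # reference particles: flow only
    qa, qb = np.exp(1j*a).mean(), np.exp(1j*b).mean()
    q2c = np.exp(2j*c).mean()
    s_true += (qa*qb*np.exp(-2j*psi)).real     # <cos(alpha+beta-2psi)>
    s_est += (qa*qb*np.conj(q2c)).real         # <cos(alpha+beta-2c)>
true_corr = s_true/n_ev                        # should be close to -a1**2
est_corr = (s_est/n_ev)/v2c                    # Eq. (1): divide by v2 of particle c
```

Both estimates converge to $-a_1^2$; the agreement holds because $c$ correlates with $\alpha$ and $\beta$ only through its flow, i.e. exactly the assumption of negligible genuine three-particle correlations.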
The three-particle azimuthal correlator $\mean{\cabc}$ has been measured and is used to deduce the two-particle azimuthal correlator $\mean{\cabp}$ by the STAR experiment~\cite{STAR}. The measurements show a negative correlator $\mean{\cabp}$ for LS pairs and a small, close to zero, correlator for US pairs~\cite{STAR}. The LS pair result is qualitatively consistent with the expectation from local strong parity violation~\cite{PV,PVquench}. The US pair result, however, is inconsistent with the initial expectation where the US and LS pair correlations should be equal in magnitude and opposite in sign~\cite{PV}. To explain the preliminary version of the STAR data~\cite{Selyuzhenkov}, it was suggested that at least one of the particles from a back-to-back US pair from local parity violation would have to traverse and interact with the medium and its angular correlation with the other particle of the pair would be significantly reduced~\cite{PVquench}. In fact, in this medium interaction scenario, correlations between particle pairs from local parity violation domains formed in the interior of the collision medium would be lost, and only those from pairs emitted from the surface could survive. In other words, the LS pair correlation is due to those pairs from local parity violation domains on the surface, and the back-to-back US pair correlation is lost~\cite{PVquench}. The three-particle correlator observable is parity-even and is subject to background correlations that are reaction-plane-dependent, some of which are discussed in detail in Ref.~\cite{STAR}. This is easy to see in the following extreme: a small opening angle pair perpendicular to the reaction plane is indistinguishable from a back-to-back pair parallel to the reaction plane in the correlator variable $\cabp$, and vice versa. More modestly, particle correlations from clusters which themselves possess anisotropy can give rise to observable signals in $\mean{\cabp}$. 
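The extreme case mentioned above can be checked directly: in the correlator variable $\cabp$, a zero-opening-angle pair perpendicular to the reaction plane and a back-to-back pair along it are exactly degenerate (and likewise for the conjugate configurations). A two-line numerical check:

```python
import numpy as np

psi = 0.3                                      # arbitrary reaction-plane angle

def correlator(alpha, beta, psi):
    """The pair correlator cos(alpha + beta - 2*psi)."""
    return np.cos(alpha + beta - 2.0*psi)

# Zero-opening-angle pair perpendicular to the reaction plane:
perp_pair = correlator(psi + np.pi/2, psi + np.pi/2, psi)    # cos(pi) = -1
# Back-to-back pair parallel to the reaction plane:
bb_pair = correlator(psi, psi + np.pi, psi)                  # cos(pi) = -1

# Conjugate case: in-plane small-angle pair vs. out-of-plane back-to-back pair:
in_pair = correlator(psi, psi, psi)                          # cos(0) = +1
bb_perp = correlator(psi + np.pi/2, psi + 3*np.pi/2, psi)    # cos(2pi) = +1
```

Both pairs in each case give the same correlator value, which is why reaction-plane-dependent cluster correlations can mimic the charge-separation signal.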
Cluster correlations can have different effects on LS and US pairs, because the LS and US contributions from clusters are likely different, as in jet correlations~\cite{Back2back}. In this paper, we investigate quantitatively the effects of cluster particle correlations on the azimuthal correlator observables $\mean{\cabc}$ and $\mean{\cabp}$. We first present analytical results. We then use experimental measurements of two-particle angular correlations~\cite{Daugherity,STAR} as input to estimate the effects of cluster particle correlations on the correlator observables. Since we do not have experimental information on cluster anisotropy, we calculate how much cluster anisotropy is needed to fully account for the correlator measurements~\cite{STAR}. We then judge the plausibility of the needed cluster anisotropies to either confirm or disprove cluster correlations as a possible explanation for the correlator measurements. \section{The Cluster Model and Results} Assume events are composed of hydrodynamic-like particles plus small-angle (SA, $|\dphi|<\pi/2$) clusters and back-to-back (BB, $|\dphi|>\pi/2$) clusters. We operationally define an SA-cluster to be composed of one or more small-angle particle pairs, and a BB-cluster to be composed of one or more back-to-back particle pairs. Note, with these operational definitions, that a conventional back-to-back cluster of $a$ particles on one side and $b$ particles on the other side is made of two SA-clusters (with $a^2$ and $b^2$ particle pairs) and one BB-cluster (with $2ab$ particle pairs). Here we have taken $a$ and $b$ to be large for simplicity, and we will assume Poisson statistics for particle multiplicity in clusters. Also note that not all SA-clusters have a back-side partner. The anisotropies of SA- and BB-clusters can therefore be different. Now consider US and LS particle pairs from clusters. They can come from either SA-clusters (i.e. SA particle pairs) or BB-clusters (i.e.
BB particle pairs). The relative fractions of particle pairs from SA-clusters and BB-clusters can be different for US-pairs and LS-pairs. Let $\xUS$ be the fraction of US-pairs from SA-clusters (and $1-\xUS$ the fraction from BB-clusters), and $\xLS$ be the fraction of LS-pairs from SA-clusters (and $1-\xLS$ the fraction from BB-clusters). We shall first obtain $\xUS$ and $\xLS$ from two-particle angular correlation measurements. The following two-particle correlators are measured for US and LS pairs~\cite{STAR}: \bea \mean{\cab}_{_{US}}&=&\xUS\mean{\cab}_{_{SA}}+(1-\xUS)\mean{\cab}_{_{BB}}=\xUS\wSA-(1-\xUS)\wBB,\label{eq2a}\\ \mean{\cab}_{_{LS}}&=&\xLS\mean{\cab}_{_{SA}}+(1-\xLS)\mean{\cab}_{_{BB}}=\xLS\wSA-(1-\xLS)\wBB,\label{eq2b} \eea where \bea \wSA&\equiv&\mean{\cab}_{_{SA}},\label{eq3a}\\ \wBB&\equiv&-\mean{\cab}_{_{BB}}\label{eq3b} \eea are the average angular spreads of particle pairs from SA- and BB-clusters, respectively. In Eqs.~(\ref{eq2a}) and (\ref{eq2b}) we have taken the two-particle back-to-back correlations to be the same for US and LS pairs, and we have assumed that the SA two-particle azimuthal correlations have the same shape for US and LS pairs. The latter is a reasonable assumption because the same-side correlations of US and LS pairs have similar shapes although their magnitudes are different, e.g. in jet-like correlations~\cite{Back2back}. Information about cluster particle pairs can be obtained from two-particle azimuthal correlations. STAR has measured two-particle correlations integrated over transverse momentum ($\pT$), in $(\etad,\phid)$, the two-particle pseudorapidity and azimuth differences~\cite{Daugherity}. The correlation functions are parameterized by the sum of a near-side Gaussian, a negative dipole, and a quadrupole corresponding to elliptic flow~\cite{Daugherity}. The sum of the former two terms is considered to be correlations due to clusters.
It is given by~\cite{Daugherity}: \be \frac{d^2N}{d\phid d\etad}=\frac{V_0}{\sqrt{2\pi}\sigma}\exp\left(-\frac{\phid^2}{2\sigma^2}\right)G(\etad)-A_{\phid}\cos\phid. \label{eq4} \ee Here $G(\etad)$ is a Gaussian in $\etad$, normalized to unity, that is of no interest in our study. The first term on the r.h.s.~of Eq.~(\ref{eq4}) is the near-side Gaussian and the second term is the negative dipole. We can obtain the SA pair azimuthal spread as \be \wSA\equiv\mean{\cos\phid}_{_{SA}} =\frac{\int_{-\pi/2}^{\pi/2}\left[\frac{V_0}{\sqrt{2\pi}\sigma}\exp\left(-\frac{\phid^2}{2\sigma^2}\right)G(\etad)-A_{\phid}\cos\phid\right]\cos\phid d\phid\acc d\etad}{\int_{-\pi/2}^{\pi/2}\left[\frac{V_0}{\sqrt{2\pi}\sigma}\exp\left(-\frac{\phid^2}{2\sigma^2}\right)G(\etad)-A_{\phid}\cos\phid\right]d\phid\acc d\etad} =\frac{Ve^{-\sigma^2/2}-\pi A_{\phid}}{V-4A_{\phid}}, \label{eq5} \ee where $\acc$ is the two-particle $\etad$ acceptance of the STAR detector, and $V$ is the integrated volume within the acceptance (which is not equal to $V_0$)~\cite{Daugherity,Wang}. The extracted $\wSA$ ranges from 0.58 in peripheral Au+Au collisions to 0.85 in central collisions. For $\wBB$ we take the width of the away-side dipole, where the away-side correlation is $-A_{\phid}\cos\phid$: \be \wBB\equiv -\mean{\cos\phid}_{_{BB}}=-\frac{\int_{\pi/2}^{3\pi/2}\left(-A_{\phid}\cos\phid\right)\cos\phid d\phid}{\int_{\pi/2}^{3\pi/2}\left(-A_{\phid}\cos\phid\right) d\phid}=\frac{\pi}{4}. \label{eq6} \ee The $\wBB$ is a fixed value because the away-side correlation shape can be satisfactorily described by the same functional form, a negative dipole. It is worthwhile to note that the cluster shape quantities are extracted from the measured angular correlations; they are therefore immune to the underlying physics mechanisms generating those correlations. Besides cluster correlations, a negative dipole can also be generated, for example, by statistical global momentum conservation.
It was estimated, however, that the global momentum conservation effect is significantly smaller than the measured dipole strength. On the other hand, in order to estimate the cluster size that will be needed for our study below, some production mechanisms for the clusters have to be assumed. We will discuss those assumptions later. The pair quantities in Eqs.~(\ref{eq2a}) and (\ref{eq2b}) measured in experiment are diluted by hydro-like particles (those pair quantities are all zero for hydro-like particle pairs as well as cross-pairs of hydro-like and cluster particles): \bea \mean{\cab}_{_{US}}^{\meas}&=&f_{_{2,US}}\mean{\cab}_{_{US}},\label{eq7a}\\ \mean{\cab}_{_{LS}}^{\meas}&=&f_{_{2,LS}}\mean{\cab}_{_{LS}}.\label{eq7b} \eea The dilution factors are \bea f_{_{2,US}}&=&N_{_{US}}/(N^2/2),\label{eq8a}\\ f_{_{2,LS}}&=&N_{_{LS}}/(N^2/2),\label{eq8b} \eea for US and LS pairs (numbers of pairs $N_{_{US}}$ and $N_{_{LS}}$), respectively, where we have assumed the total numbers of US and LS pairs are equal in the event ($N$ is total particle multiplicity). The total number of cluster particle pairs is \be N_{_{US}}+N_{_{LS}}=\Ncl\Mcl^2,\label{eq9} \ee where $\Ncl$ is the number of clusters and $\Mcl$ is the particle multiplicity per cluster (cluster size). The numbers of US and LS pairs from {\em clusters} are not necessarily equal. Since there is no charge-sign difference in the back-to-back particle pair correlations: \be (1-\xUS)N_{_{US}}=(1-\xLS)N_{_{LS}},\label{eq10} \ee we obtain the dilution factors: \bea f_{_{2,US}}&=&\frac{1-\xLS}{1-(\xUS+\xLS)/2}f_2,\label{eq11a}\\ f_{_{2,LS}}&=&\frac{1-\xUS}{1-(\xUS+\xLS)/2}f_2,\label{eq11b} \eea with \be f_2=\Ncl\Mcl^2/N^2\label{eq11c}. 
\ee Note $\Ncl\Mcl^2/N$, the number of correlated pairs per charged particle, is a measured quantity in the two-particle correlation function; it is obtained by integrating the correlation function over the measured acceptance (this is the $V$ appearing in Eq.~(\ref{eq5}), $V=\Ncl\Mcl^2/N$)~\cite{Daugherity,Wang}. The measured $V$ ranges from 0.3 in peripheral Au+Au collisions to 1.9 in central collisions. The measured $\Ncl\Mcl^2$ does not depend on the individual values of the cluster size $\Mcl$ or the number of clusters $\Ncl$, which are not separately measured in experiment. We can now obtain $\xUS$ and $\xLS$ from the measured $\mean{\cab}_{_{US}}^{\meas}$ and $\mean{\cab}_{_{LS}}^{\meas}$. With the shorthand notations \bea t_{_{US}}&=&\mean{\cab}_{_{US}}^{\meas}/f_2,\label{eq12a}\\ t_{_{LS}}&=&\mean{\cab}_{_{LS}}^{\meas}/f_2,\label{eq12b} \eea Eqs.~(\ref{eq2a}) and (\ref{eq2b}) become \bea t_{_{US}}[1-(\xUS+\xLS)/2]&=&\xUS(1-\xLS)\wSA-(1-\xUS)(1-\xLS)\wBB,\label{eq13a}\\ t_{_{LS}}[1-(\xUS+\xLS)/2]&=&\xLS(1-\xUS)\wSA-(1-\xUS)(1-\xLS)\wBB.\label{eq13b} \eea By simple algebra, we have \be \xUS^2-\frac{\wSA^2+\wSA(3t_{_{US}}-t_{_{LS}})/2+2\wSA\wBB+\wBB(t_{_{US}}-t_{_{LS}})}{(\wSA+\wBB)(\wSA+(t_{_{US}}-t_{_{LS}})/2)}\xUS+\frac{\wSA\wBB+\wSA t_{_{US}}+\wBB(t_{_{US}}-t_{_{LS}})/2}{(\wSA+\wBB)(\wSA+(t_{_{US}}-t_{_{LS}})/2)}=0,\label{eq14a} \ee \be \xLS^2-\frac{\wSA^2+\wSA(3t_{_{LS}}-t_{_{US}})/2+2\wSA\wBB+\wBB(t_{_{LS}}-t_{_{US}})}{(\wSA+\wBB)(\wSA+(t_{_{LS}}-t_{_{US}})/2)}\xLS+\frac{\wSA\wBB+\wSA t_{_{LS}}+\wBB(t_{_{LS}}-t_{_{US}})/2}{(\wSA+\wBB)(\wSA+(t_{_{LS}}-t_{_{US}})/2)}=0.\label{eq14b} \ee From Eqs.~(\ref{eq14a}) and (\ref{eq14b}) we can solve for $\xUS$ and $\xLS$. Table~\ref{tab} shows the obtained fractions $\xUS$ and $\xLS$, given cluster particle correlation inputs from two-particle angular correlations~\cite{Daugherity} and the two-particle correlator quantities $\mean{\cab}_{_{US}}^{\meas}$ and $\mean{\cab}_{_{LS}}^{\meas}$ measured by STAR~\cite{STAR}.
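As a numerical illustration of Eqs.~(\ref{eq5}), (\ref{eq6}), (\ref{eq14a}) and (\ref{eq14b}), the following sketch computes the cluster shape quantities and solves the quadratic for the SA-cluster pair fraction. The fit parameters ($V$, $\sigma$, $A_{\phid}$) and the pair fractions below are made-up inputs for demonstration only, not the measured values; the quadratic is validated by a round trip through Eqs.~(\ref{eq13a}) and (\ref{eq13b}). One root of each quadratic is the spurious $x=1$, so the physical solution is the smaller root.

```python
import math

def w_sa(V, sigma, A):
    # Eq. (5): near-side (SA) pair spread <cos(dphi)>_SA
    return (V * math.exp(-sigma**2 / 2) - math.pi * A) / (V - 4 * A)

W_BB = math.pi / 4  # Eq. (6): the away-side dipole gives a fixed pi/4

def solve_fraction(wsa, wbb, t_this, t_other):
    # Quadratic of Eqs. (14a)/(14b); one root is the spurious x = 1,
    # so return the smaller root.
    d = (t_this - t_other) / 2
    denom = (wsa + wbb) * (wsa + d)
    b = (wsa**2 + wsa*(3*t_this - t_other)/2 + 2*wsa*wbb + 2*wbb*d) / denom
    c = (wsa*wbb + wsa*t_this + wbb*d) / denom
    return (b - math.sqrt(max(b*b - 4*c, 0.0))) / 2

# Round trip: choose fractions, build t_US and t_LS via Eqs. (13a)/(13b),
# then recover the fractions from the quadratic.
x_us, x_ls = 0.7, 0.4         # hypothetical SA-cluster pair fractions
wsa = w_sa(1.9, 0.6, 0.1)     # hypothetical fit parameters, illustration only
norm = 1 - (x_us + x_ls) / 2
t_us = (x_us*(1-x_ls)*wsa - (1-x_us)*(1-x_ls)*W_BB) / norm
t_ls = (x_ls*(1-x_us)*wsa - (1-x_us)*(1-x_ls)*W_BB) / norm
print(solve_fraction(wsa, W_BB, t_us, t_ls))  # recovers ~0.7
print(solve_fraction(wsa, W_BB, t_ls, t_us))  # recovers ~0.4
```

The round trip recovers the input fractions, confirming that the smaller root of Eqs.~(\ref{eq14a}) and (\ref{eq14b}) is the physical one.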
\subsection{Effect of three-particle correlation from clusters\label{sec2A}} STAR has measured the three-particle correlators for US and LS pairs of particles $\alpha$ and $\beta$, with a third particle $c$ taken regardless of its charge sign. It is assumed that three-particle correlation is negligible, so that particle $c$ can be used as a single-particle estimator of the reaction plane to obtain the two-particle correlators from the three-particle correlator measurements by Eq.~(\ref{eq1}). One piece of supporting evidence for this assumption comes from the consistent results of $\mean{\cabp}\approx\mean{\cabc}/\vc$ using particle $c$ from the main Time Projection Chamber (TPC) or from the forward TPC's, while the particle pairs ($\alpha$ and $\beta$, US and LS) come from the main TPC~\cite{STAR}. However, it is possible that the probability for a triplet to be correlated may drop with the pseudorapidity gap between particle $c$ and the other two particles in the main TPC, in a way similar to the $\vc$ dependence on pseudorapidity~\cite{PHOBOS}. If large clusters exist, as suggested by low-$\pT$ two-particle angular correlation measurements~\cite{Daugherity}, then finite three-particle correlation should exist. We shall estimate the effect of three-particle correlation using two-particle correlations~\cite{Daugherity}. Consider particle triplets from the same cluster, where SA and BB still refer to the pair of $\alpha$ and $\beta$, while the third particle $c$ can be on either side: \bea \mean{\cabc}_{_{SA}}&=&\mean{\cos(\dphi_\alpha+\dphi_\beta+2\phicl-2c)}_{_{SA}}=\mean{\cos(\dphi_\alpha+\dphi_\beta+2\dphi_c)}_{_{SA}},\label{eq15a}\\ \mean{\cabc}_{_{BB}}&=&\mean{\cos(\dphi_\alpha+\dphi_\beta+2\phicl-2c)}_{_{BB}}=\mean{\cos(\dphi_\alpha+\dphi_\beta+2\dphi_c)}_{_{BB}}.\label{eq15b} \eea Here $\dphi=\phi-\phicl$ is the particle azimuth relative to the cluster axis $\phicl$, and $c$ in the cosine denotes the azimuth of particle $c$. In Eq.~(\ref{eq15a}), which side particle $c$ is on does not matter, because its azimuth enters as $2c$ (a shift of $c$ by $\pi$ leaves the cosine unchanged).
In Eq.~(\ref{eq15b}), the particle $c$ is on the same side of either $\alpha$ or $\beta$, and thereby either $\dphi_\alpha$ or $\dphi_\beta$ is larger than $\pi/2$. Assuming that emission of daughter particles in clusters is independent of each other, we can obtain \bea \mean{\cabc}_{_{SA}}&\approx&\mean{\cos\dphi_\alpha}_{_{SA}}^2\mean{\cos2\dphi_c}_{_{SA}}\approx\mean{\cos(\dphi_\alpha-\dphi_\beta)}_{_{SA}}\mean{\cos2\dphi_c}_{_{SA}}=\wSA\wwSA,\label{eq15c}\\ \mean{\cabc}_{_{BB}}&\approx&\mean{\cos\dphi_\alpha}_{_{BB}}^2\mean{\cos2\dphi_c}_{_{SA}}\approx\mean{\cos(\dphi_\alpha-\dphi_\beta)}_{_{BB}}\mean{\cos2\dphi_c}_{_{SA}}=-\wBB\wwSA.\label{eq15d} \eea Here $\dphi_\alpha-\dphi_\beta=\phid$ is the azimuth difference used in two-particle correlation measurement, and $\wwSA=\mean{\cos2\dphi}_{_{SA}}$ is the average azimuthal spread of SA-clusters: \bea \wwSA&\equiv&\mean{\cos2\dphi}_{_{SA}}\approx\mean{\cos2\phid}^{1/2}\nonumber\\ &=&\left(\frac{\int_{-\pi/2}^{\pi/2}\left[\frac{V_0}{\sqrt{2\pi}\sigma}\exp\left(-\frac{\phid^2}{2\sigma^2}\right)G(\etad)-A_{\phid}\cos\phid\right]\cos2\phid d\phid\acc d\etad}{\int_{-\pi/2}^{\pi/2}\left[\frac{V_0}{\sqrt{2\pi}\sigma}\exp\left(-\frac{\phid^2}{2\sigma^2}\right)G(\etad)-A_{\phid}\cos\phid\right]d\phid\acc d\etad}\right)^{1/2} =\left(\frac{Ve^{-2\sigma^2}-4A_{\phid}/3}{V-4A_{\phid}}\right)^{1/2}. \label{eq16} \eea Note in Eq.~(\ref{eq15d}), it is the azimuthal spread of SA (not BB) clusters as well because particle $c$ is always on the same side of either particle $\alpha$ or $\beta$. 
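The factorization used in Eqs.~(\ref{eq15c}) and (\ref{eq15d}) rests only on independent daughter emission. A quick Monte Carlo sketch, using Gaussian emission about the cluster axis with a made-up width (not the measured cluster shape), confirms that $\mean{\cos(\dphi_\alpha+\dphi_\beta+2\dphi_c)}$ factorizes into $\mean{\cos(\dphi_\alpha-\dphi_\beta)}\mean{\cos2\dphi_c}$:

```python
import math, random

random.seed(2)
sigma = 0.5      # hypothetical cluster emission width (radians)
n = 400_000

# independent emission of three daughters about the cluster axis
trip = [(random.gauss(0, sigma), random.gauss(0, sigma), random.gauss(0, sigma))
        for _ in range(n)]

lhs = sum(math.cos(a + b + 2*c) for a, b, c in trip) / n
pair = sum(math.cos(a - b) for a, b, _ in trip) / n    # plays the role of w_SA
single2 = sum(math.cos(2*c) for _, _, c in trip) / n   # plays the role of ww_SA
print(lhs, pair * single2)  # the two agree within statistical error
```

For independent Gaussian emission both sides equal $e^{-3\sigma^2}$ analytically, so the agreement is exact up to statistics; any correlated emission within the cluster would break this factorization.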
We can estimate effects of three-particle correlations in US and LS pairs of particles $\alpha$ and $\beta$ by \bea \mean{\cabc}_{_{US}}&=&\xUS\mean{\cabc}_{_{SA}}+(1-\xUS)\mean{\cabc}_{_{BB}}\approx\xUS\wSA\wwSA-(1-\xUS)\wBB\wwSA,\label{eq17a}\\ \mean{\cabc}_{_{LS}}&=&\xLS\mean{\cabc}_{_{SA}}+(1-\xLS)\mean{\cabc}_{_{BB}}\approx\xLS\wSA\wwSA-(1-\xLS)\wBB\wwSA.\label{eq17b} \eea Comparing Eqs.~(\ref{eq17a}) and (\ref{eq17b}) to Eqs.~(\ref{eq2a}) and (\ref{eq2b}), we see that \bea \mean{\cabc}_{_{US}}&=&\mean{\cab}_{_{US}}\cdot\wwSA,\label{eq18a}\\ \mean{\cabc}_{_{LS}}&=&\mean{\cab}_{_{LS}}\cdot\wwSA.\label{eq18b} \eea The only assumption in arriving at Eqs.~(\ref{eq18a}) and (\ref{eq18b}) is that {\em particle emission azimuths within a cluster are independent of each other}. This is a reasonable assumption when the clusters consist of a relatively large number of particles. Under this assumption, three-particle correlation is completely determined by two-particle correlation. These three-particle correlation effects are diluted by hydro-like particles, \bea \mean{\cabc}_{_{US}}^{\meas}&=&f_{3,US}\mean{\cabc}_{_{US}},\label{eq19a}\\ \mean{\cabc}_{_{LS}}^{\meas}&=&f_{3,LS}\mean{\cabc}_{_{LS}},\label{eq19b} \eea by a factor of \bea f_{3,US}&=&\frac{1-\xLS}{1-(\xUS+\xLS)/2}f_3,\label{eq20a}\\ f_{3,LS}&=&\frac{1-\xUS}{1-(\xUS+\xLS)/2}f_3,\label{eq20b} \eea with \be f_3=\Ncl\Mcl^3/N^3\label{eq20c}. \ee Again, the measured three-particle correlation is determined by the measured two-particle correlation: \bea \mean{\cabc}_{_{US}}^{\meas}&=&\mean{\cab}_{_{US}}^{\meas}\cdot\wwSA\cdot\Mcl/N,\label{eq21a}\\ \mean{\cabc}_{_{LS}}^{\meas}&=&\mean{\cab}_{_{LS}}^{\meas}\cdot\wwSA\cdot\Mcl/N.\label{eq21b} \eea The only inputs for this determination are the near-side angular spread $\wwSA$ by Eq.~(\ref{eq16}) which is well measured and the cluster size $\Mcl$ which can be estimated. 
We estimate $\Mcl$ from the measured $\Ncl\Mcl^2$ assuming {\em binary scaling for the number of clusters, $\Ncl$}. Table~\ref{tab} shows the effect on the three-particle azimuthal correlator estimated from the measured two-particle azimuthal correlator by Eqs.~(\ref{eq21a}) and (\ref{eq21b}), using cluster inputs from two-particle correlation measurements~\cite{Daugherity}. Figure~\ref{fig2} shows the estimated three-particle correlation effects as thin lines, together with the measured three-particle correlator data as open points~\cite{STAR}. The estimated three-particle correlation effects from clusters are significantly larger than the US measurement $\mean{\cabc}_{_{US}}^{\meas}$~\cite{STAR}. This implies that the US cluster correlation must be partially cancelled in the data by other effects (one candidate is the two-particle correlation effect we discuss below). The effects estimated for LS pairs are smaller than the measurement $\mean{\cabc}_{_{LS}}^{\meas}$~\cite{STAR} in most centralities, by a factor of a few. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{plot_data.eps} \includegraphics[width=0.45\textwidth]{plot_data_Npart.eps} \end{center} \caption{(Color online) The measured three-particle correlators for US and LS particle pairs (open data points)~\cite{STAR}, the estimated three-particle correlation effects (thin lines) using inputs from measurements of two-particle angular correlations~\cite{Daugherity} and two-particle correlators~\cite{STAR}, and the remaining three-particle correlator magnitudes (solid data points), i.e. the difference between the open points and the thin lines. The left panel shows the three-particle correlators themselves versus centrality bin, and the right panel shows the number-of-participants ($\Npart$) scaled three-particle correlators versus $\Npart$.} \label{fig2} \end{figure} These three-particle correlation effects should first be removed from the three-particle correlator measurements.
After removing the estimated three-particle effects, the three-particle correlators (from physics other than three-particle correlation) become the solid data points in Fig.~\ref{fig2}. As seen, both the LS and the US three-particle correlators are negative and seem to follow a regular trend. The remaining effects can be due to two-particle correlation from clusters together with cluster anisotropies, as well as any other physics. \subsection{Effect of two-particle correlation from clusters} We now discuss the effect of two-particle correlation. Our approach is to assume that the only remaining effects in the correlator measurements are the combined effects of two-particle correlations and cluster anisotropies, and to see whether the cluster flow parameters extracted under this assumption are reasonable. If they are unreasonable, then there may be new physics, such as local strong parity violation. After removing three-particle correlation, the remaining three-particle correlator can be factorized by Eq.~(\ref{eq1}), because particle $c$ is no longer correlated with the particle pair $\alpha$ and $\beta$ through clusters. However, particle $c$ (a hydro-like particle) can still be correlated with the cluster if clusters are anisotropic: two-particle correlation from clusters, together with cluster anisotropy, can give a non-zero contribution to the measured three-particle correlator. We subtract the estimated three-particle correlation effects from the measured US and LS three-particle correlators, $\mean{\cabc}_{_{US}}^{\meas}$ and $\mean{\cabc}_{_{LS}}^{\meas}$, respectively, as shown in Fig.~\ref{fig2}. What is left in the three-particle correlator is two-particle correlation from clusters. Two-particle correlations can exist between $\alpha$ and $\beta$ (with $c$ not from the cluster), or similarly between $\alpha$ and $c$, or between $\beta$ and $c$.
The former is simply \be \mean{\cabc}_{\alpha\beta}=\mean{\cos(\dphi_\alpha+\dphi_\beta+2(\phicl-\psi)-2(c-\psi))}_{\alpha\beta}=\mean{\cos(\dphi_\alpha+\dphi_\beta)}_{\alpha\beta}\cdot\vcl\cdot\vc, \label{eq21c} \ee while the latter, e.g. between $\alpha$ and $c$, is given by \be \mean{\cabc}_{\alpha c}=\mean{\cos(\dphi_\alpha-2\dphi_c-(\phicl-\psi)+(\beta-\psi))}_{\alpha c}=\mean{\cos(\dphi_\alpha-2\dphi_c)}_{\alpha c}\cdot v_{1,clust}\cdot v_{1,\beta}. \label{eq21d} \ee Since directed flow $v_1$ is generally much smaller than elliptic flow $v_2$ at mid-rapidity, the latter correlations can be neglected, and we shall focus only on the two-particle correlation effect between $\alpha$ and $\beta$. We divide the three-particle-correlation corrected results by the $\vc$ used in Ref.~\cite{STAR} to obtain $\mean{\cabp}_{_{US}}$ and $\mean{\cabp}_{_{LS}}$. The dilution factors are properly taken into account. Assuming the only remaining correlation is from clusters (no new physics), then \bea \mean{\cabp}_{_{US}}&=&\xUS\mean{\cabp}_{_{SA}}+(1-\xUS)\mean{\cabp}_{_{BB}},\label{eq22a}\\ \mean{\cabp}_{_{LS}}&=&\xLS\mean{\cabp}_{_{SA}}+(1-\xLS)\mean{\cabp}_{_{BB}},\label{eq22b} \eea where \bea \mean{\cabp}_{_{SA}}&=&\mean{\cos(\dphi_\alpha+\dphi_\beta+2\phicl-2\psi)}_{_{SA}}=\mean{\cab}_{_{SA}}\vSA=\wSA\vSA,\label{eq23a}\\ \mean{\cabp}_{_{BB}}&=&\mean{\cos(\dphi_\alpha+\dphi_\beta+2\phicl-2\psi)}_{_{BB}}=\mean{\cab}_{_{BB}}\vBB=-\wBB\vBB.\label{eq23b} \eea Here $\vSA$ and $\vBB$ are the elliptic flow parameters of SA- and BB-clusters weighted by the number of particle pairs per cluster (i.e. the anisotropy of the cluster population with each cluster counted by the number of pairs it contains). In Eqs.~(\ref{eq23a}) and (\ref{eq23b}), we have assumed that the cluster two-particle azimuthal correlation shape is independent of the cluster orientation with respect to the reaction plane, so that we can factorize the cluster azimuthal spread and the cluster anisotropy.
If the particle azimuthal distribution within clusters depends on the cluster orientation, then the cluster anisotropy should be taken as an effective average anisotropy. From Eqs.~(\ref{eq22a}), (\ref{eq22b}), (\ref{eq23a}) and (\ref{eq23b}), we have \bea \mean{\cabp}_{_{US}}&=&\xUS\wSA\vSA-(1-\xUS)\wBB\vBB,\label{eq24a}\\ \mean{\cabp}_{_{LS}}&=&\xLS\wSA\vSA-(1-\xLS)\wBB\vBB.\label{eq24b} \eea From Eqs.~(\ref{eq24a}) and (\ref{eq24b}) we solve for $\vSA$ and $\vBB$: \bea \vSA&=&\frac{1}{\wSA}\frac{(1-\xLS)\mean{\cabp}_{_{US}}-(1-\xUS)\mean{\cabp}_{_{LS}}}{\xUS(1-\xLS)-\xLS(1-\xUS)},\label{eq25a}\\ \vBB&=&\frac{1}{\wBB}\frac{\xLS\mean{\cabp}_{_{US}}-\xUS\mean{\cabp}_{_{LS}}}{\xUS(1-\xLS)-\xLS(1-\xUS)}.\label{eq25b} \eea Our strategy now is to determine what values of $\vSA$ and $\vBB$ are needed to reproduce the correlator measurements $\mean{\cabp}_{_{US}}$ and $\mean{\cabp}_{_{LS}}$ from STAR~\cite{STAR}, and to judge whether the required cluster anisotropies are reasonable. Table~\ref{tab} shows the obtained flow parameters $\vSA$ and $\vBB$ for SA- and BB-cluster particle pairs, respectively. The errors are propagated from the statistical errors on the three-particle correlator measurements and from assumed 5\% errors on the low-$\pT$ two-particle angular correlation measurements (same-side Gaussian amplitude, same-side Gaussian $\sigma$, and dipole amplitude). All errors are treated as uncorrelated. \begin{table} \caption{Results from the cluster model calculation. The $\xUS$ is the fraction of US pairs from SA-clusters; the $\xLS$ is that of LS pairs from SA-clusters; the $\xUS$ and $\xLS$ are obtained from $\mean{\cab}$ measurements by Eqs.~(\ref{eq14a}) and (\ref{eq14b}) using inputs from two-particle angular correlation measurements~\cite{Daugherity}, assuming no charge difference in BB-clusters (Eq.~(\ref{eq10})).
Cluster $\mean{\cabc}_{\rm clust}$ is the calculated three-particle correlation effect assuming independent particle emission in clusters and binary collision scaling for the number of clusters. The $\vSA$ and $\vBB$ are the elliptic flow parameters by Eqs.~(\ref{eq25a}) and (\ref{eq25b}) to reproduce the three-particle correlator results measured by STAR~\cite{STAR}, after cluster three-particle effect removed; errors are propagated from statistical errors on the correlator measurements~\cite{STAR} and the assumed 5\% error on the two-particle angular correlation measurements~\cite{Daugherity}. The last two columns list the $\vSA$ and $\vBB$ parameters needed to reproduce the three-particle correlator results~\cite{STAR}, assuming vanishing three-particle correlation from clusters.} \label{tab} \begin{tabular}{c|cc|cccc|cc} \hline & & & \multicolumn{4}{c|}{Binary scaled clusters} & \multicolumn{2}{c}{Cluster three-particle}\\ & & & \multicolumn{2}{c}{$\mean{\cabc}_{\rm clust}$} & & & \multicolumn{2}{c}{correlation set to zero}\\ centrality & \hspace{2mm}$\xUS$\hspace{2mm} & \hspace{2mm}$\xLS$\hspace{2mm} & \hspace{8mm}US\hspace{8mm} & \hspace{8mm}LS\hspace{8mm} & \hspace{6mm}$\vSA$\hspace{6mm} & \hspace{6mm}$\vBB$\hspace{6mm} & \hspace{6mm}$\vSA$\hspace{6mm} & \hspace{6mm}$\vBB$\hspace{6mm} \\\hline 70-60\% & 0.85 & 0.36 & $8.0 \times 10^{-5}$ & $-1.2 \times 10^{-5}$ & $-0.20 \pm 0.09$ & $0.36 \pm 0.12$ & $0.23 \pm 0.02$ & $0.79 \pm 0.07$ \\ 60-50\% & 0.78 & 0.42 & $4.4 \times 10^{-5}$ & $-8.2 \times 10^{-6}$ & $-0.10 \pm 0.05$ & $0.30 \pm 0.07$ & $0.22 \pm 0.01$ & $0.61 \pm 0.04$ \\ 50-40\% & 0.70 & 0.44 & $2.9 \times 10^{-5}$ & $-6.7 \times 10^{-6}$ & $-0.10 \pm 0.04$ & $0.11 \pm 0.04$ & $0.19 \pm 0.01$ & $0.40 \pm 0.02$ \\ 40-30\% & 0.66 & 0.43 & $1.5 \times 10^{-5}$ & $-3.9 \times 10^{-6}$ & $-0.05 \pm 0.02$ & $0.11 \pm 0.03$ & $0.16 \pm 0.01$ & $0.32 \pm 0.02$ \\ 30-20\% & 0.62 & 0.43 & $8.7 \times 10^{-6}$ & $-2.6 \times 10^{-6}$ & $-0.05 \pm 0.02$ & $0.05 
\pm 0.02$ & $0.14 \pm 0.01$ & $0.24 \pm 0.01$ \\ 20-10\% & 0.61 & 0.42 & $4.7 \times 10^{-6}$ & $-1.8 \times 10^{-6}$ & $-0.07 \pm 0.01$ & $0.00 \pm 0.02$ & $0.10 \pm 0.01$ & $0.18 \pm 0.01$ \\ 10-5\% & 0.61 & 0.41 & $2.8 \times 10^{-6}$ & $-1.2 \times 10^{-6}$ & $-0.13 \pm 0.02$ & $-0.08 \pm 0.02$ & $0.07 \pm 0.01$ & $0.12 \pm 0.01$ \\ 5-0\% & 0.61 & 0.38 & $1.8 \times 10^{-6}$ & $-8.8 \times 10^{-7}$ & $-0.21 \pm 0.03$ & $-0.17 \pm 0.03$ & $0.03 \pm 0.01$ & $0.07 \pm 0.01$ \\ \hline \end{tabular} \end{table} Figure~\ref{fig3} (upper left panel) depicts the obtained $\vSA$ and $\vBB$. The SA-cluster particle-pair $v_2$ is somewhat negative. A negative SA-cluster particle-pair $v_2$ is not impossible, and is perhaps even natural in the jet-quenching picture: high-$\pT$ particles are suppressed more in the out-of-plane direction, generating more low-$\pT$ particles, and thereby more SA-cluster particle pairs out-of-plane than in-plane. Positive anisotropy for BB-cluster particle pairs implies a larger survival probability for BB-pairs in-plane than out-of-plane, again consistent with the jet-quenching picture. The magnitudes of the obtained $\vSA$ and $\vBB$ seem reasonable; however, the trends towards the most central collisions do not. More discussion of the extracted flow parameters can be found in Section~\ref{sec3}.
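The inversion of Eqs.~(\ref{eq24a}) and (\ref{eq24b}) into Eqs.~(\ref{eq25a}) and (\ref{eq25b}) is a two-by-two linear solve. The sketch below uses made-up fractions, spreads, and anisotropies (not the measured values of Table~\ref{tab}) and verifies the inversion by a round trip:

```python
import math

def cluster_v2(x_us, x_ls, c_us, c_ls, w_sa, w_bb):
    # Eqs. (25a)/(25b): invert the linear system of Eqs. (24a)/(24b)
    det = x_us * (1 - x_ls) - x_ls * (1 - x_us)   # = x_us - x_ls
    v_sa = ((1 - x_ls) * c_us - (1 - x_us) * c_ls) / (w_sa * det)
    v_bb = (x_ls * c_us - x_us * c_ls) / (w_bb * det)
    return v_sa, v_bb

# Round trip with hypothetical inputs
x_us, x_ls, w_sa, w_bb = 0.7, 0.4, 0.8, math.pi / 4
v_sa_in, v_bb_in = 0.05, 0.10
c_us = x_us * w_sa * v_sa_in - (1 - x_us) * w_bb * v_bb_in  # Eq. (24a)
c_ls = x_ls * w_sa * v_sa_in - (1 - x_ls) * w_bb * v_bb_in  # Eq. (24b)
print(cluster_v2(x_us, x_ls, c_us, c_ls, w_sa, w_bb))  # ~(0.05, 0.10)
```

The determinant reduces to $\xUS-\xLS$, so the extraction becomes ill-conditioned when the US and LS SA-cluster fractions approach each other.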
\begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{plot_v2.eps} \includegraphics[width=0.45\textwidth]{plot_v2_fixedClust=3.eps}\\ \includegraphics[width=0.45\textwidth]{plot_v2_smallerClustX3.eps} \includegraphics[width=0.45\textwidth]{plot_v2_no3part.eps} \end{center} \caption{(Color online) The $\vSA$ and $\vBB$ parameters (anisotropies of cluster population weighted by the number of particle pairs per cluster) required to reproduce the STAR three-particle correlator measurements~\cite{STAR} after removing cluster three-particle correlation effects calculated from two-particle angular correlation measurements~\cite{Daugherity} and two-particle correlator measurements~\cite{STAR}. The upper left panel assumes binary scaling for the number of clusters; the upper right panel assumes fixed cluster size of 3 particles; the lower left panel assumes that cluster size is a factor of 3 smaller than those in the upper left panel; and the lower right panel assumes a vanishing three-particle correlation from clusters. The particle elliptic flow measured by the event-plane method~\cite{flow} is shown in the dashed curve, and that from fit to two-particle correlation measurement~\cite{Daugherity,Kettler} is shown in the solid curve.} \label{fig3} \end{figure} \subsection{Dependence of results on model assumptions} We have made two major assumptions in our study as we have italicized in Section~\ref{sec2A}: \begin{itemize} \item[(i)] {\em The number of clusters scales with binary collisions so we may estimate the cluster size;} and \item[(ii)] {\em Particle emission azimuths within a cluster are independent of each other so we may factorize three-particle correlation as the product of two-particle correlations.} \end{itemize} Assumption (i) does not affect the two-particle dilution factor of Eqs.~(\ref{eq11a}) and (\ref{eq11b}) because the number of correlated particle pairs $\Ncl\Mcl^2$ is one of the measured quantities, as we noted already. 
However, the assumption does have an effect on the estimation of the number of correlated triplets $\Ncl\Mcl^3$, thereby on the three-particle dilution factor of Eqs.~(\ref{eq20a}) and (\ref{eq20b}). A larger $\Ncl$ than the binary scaling estimate would result in a smaller cluster size $\Mcl$, hence smaller three-particle correlation effect; a smaller $\Ncl$ would result in a larger three-particle effect. With binary scaling estimation of $\Ncl$, the cluster size varies in the range of $\Mcl\approx$~5-10 from peripheral to central Au+Au collisions, and the fraction of particles from clusters varies from 5-20\% of all particles measured in the final state~\cite{Daugherity,Wang}. Other studies, from multiplicity correlations, indicate that cluster size is only $\sim3$ whereas more particles originate from clusters~\cite{PHOBOSclust}. If cluster size is $\Mcl=3$ independent of centrality, then the fraction of particles from clusters would be 8-70\% of all particles from peripheral to central collisions and the number of clusters would scale more strongly than binary collisions. Assumption (ii) affects the factorization approximation in deriving the cluster three-particle correlation by Eqs.~(\ref{eq15a}) and (\ref{eq15b}). If independent emission of particles from clusters does not hold, then the three-particle cluster correlation result may not be valid. The factorization approximation also affects the connection between the two-particle correlations $\mean{\cos(\dphi_\alpha+\dphi_\beta)}=\mean{\cos(\dphi_\alpha-\dphi_\beta)}$ in Eqs.~(\ref{eq23a}) and (\ref{eq23b}). To get a ``feeling'' about the effects of three-particle correlations from smaller clusters, we repeat our analysis by fixing the cluster size to be $\Mcl=3$, independent of centrality, while keeping the measured number of cluster particle pairs $\Ncl\Mcl^2$ the same. This changes the three-particle correlation effects from clusters. 
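The arithmetic connecting the measured pair density $V=\Ncl\Mcl^2/N$ to the cluster size and the cluster-particle fraction can be sketched as follows. Note that the fraction of particles from clusters is simply $f=\Ncl\Mcl/N=V/\Mcl$, which reproduces the ranges quoted above for a fixed cluster size; the total multiplicity and cluster count in `cluster_size` are left as inputs because they are assumptions, not measured quantities.

```python
def particle_fraction(V, m_cl):
    # f = Ncl*Mcl/N = (Ncl*Mcl^2/N)/Mcl = V/Mcl
    return V / m_cl

# Fixed cluster size Mcl = 3, using the measured range of V:
print(particle_fraction(0.3, 3))   # ~0.10 in peripheral collisions
print(particle_fraction(1.9, 3))   # ~0.63 in central collisions

def cluster_size(V, n_total, n_cl):
    # Mcl from V = Ncl*Mcl^2/N for an assumed number of clusters Ncl
    # (e.g. binary-collision scaling); n_total and n_cl are placeholder
    # inputs, not values given in the text.
    return (V * n_total / n_cl) ** 0.5
```

For $\Mcl=3$ the fraction runs from roughly 10\% to 63\% over the measured $V$ range, consistent with the 8-70\% span quoted above.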
The cluster $\vSA$ and $\vBB$ that are needed to reproduce the measured three-particle correlators change accordingly. Figure~\ref{fig3} (upper right panel) shows the obtained $\vSA$ and $\vBB$. Both the $\vSA$ and $\vBB$ increase from the case in the upper left panel where binary collision scaling is assumed for the number of clusters resulting in large clusters. Furthermore, the values of $\vSA$ become mostly positive. The magnitude of $\vSA$ appears reasonable. The magnitude of $\vBB$ seems reasonable for most centralities except peripheral collisions where it is too large. Besides the fixed cluster size, we have also tried reducing the cluster size by a constant factor of 3 for all centralities. The obtained $\vSA$ and $\vBB$ (which are needed to explain the three-particle correlator measurements) are shown in Fig.~\ref{fig3} (lower left panel). The results are similar to the previous results where cluster size is fixed to 3. Reducing the cluster size reduces the effect of cluster three-particle correlation. The extreme would be to assume a vanishing three-particle correlation from clusters, and the only contribution to the measured three-particle correlator is the combined effect of two-particle correlation from clusters and cluster anisotropy. To test this extreme, we set the cluster three-particle correlation to zero, and repeat our analysis to extract the values of $\vSA$ and $\vBB$ that are needed to fully account for the measured three-particle correlators. The extracted $\vSA$ and $\vBB$ are tabulated in the last two columns of Table~\ref{tab}, and are shown in the lower right panel of Fig.~\ref{fig3}. Both the $\vSA$ and $\vBB$ are larger than the other cases, as expected from their trend with reducing effect from cluster three-particle correlation. Both $\vSA$ and $\vBB$ are positive, and $\vBB$ is larger than $\vSA$. The magnitude of $\vSA$ seems reasonable (see more discussion on this in Section~\ref{sec3}). 
The magnitude of $\vBB$ seems reasonable in central collisions, but is too large in peripheral collisions. The signs and relative magnitudes of $\vSA$ and $\vBB$ in Fig.~\ref{fig3} (lower right panel) can be understood as follows. The US three-particle correlator is nearly zero, so the three-particle correlators due to SA- and BB-cluster correlations should cancel each other. Because particle pairs from SA- and BB-clusters at the same location relative to the reaction plane give opposite-sign three-particle correlators, the anisotropies of the SA- and BB-clusters have to be of the same sign (either both positive or both negative). Because more US pairs come from SA-clusters than from BB-clusters, the anisotropy of BB-cluster particle pairs has to be larger than that of SA-clusters in order for the averaged three-particle correlator to be nearly zero. The sign of the LS three-particle correlator is determined by BB-clusters, because more LS pairs come from BB-clusters and $\vBB$ is larger than $\vSA$ in absolute magnitude. BB-cluster particle pairs in-plane give negative $\mean{\cabp}$ and those out-of-plane give positive $\mean{\cabp}$. In order for the final averaged LS pair correlator $\mean{\cabp}_{_{LS}}$ to be negative, the $\vBB$ has to be positive. \section{Discussion on Cluster Particle Pair Anisotropy\label{sec3}} The cluster anisotropies $\vSA$ and $\vBB$ obtained above are the modulation magnitudes of the number of clusters weighted by the average number of particle pairs per cluster. One way to gauge whether the obtained $\vSA$ and $\vBB$ are reasonable is to see how large the hydro-like particle $\vbg$ has to be in order, together with the cluster $v_2$, to reproduce the observed final-state particle $v_2^{\meas}$. Since our clusters are large, having on average 5-10 particles, the particles from back-to-back pairs are part of those from SA-clusters. We can therefore consider only the SA-cluster pair $\vSA$.
Note this may not be accurate in peripheral collisions where a BB-cluster of a single back-to-back particle pair is not part of a SA-cluster pair, thus the BB $\vBB$ has to be also considered. But for our purpose, it is sufficient to use only SA $\vSA$ to get a ``feeling''. The hydro-like particle $\vbg$ can be obtained from the following two scenarios. \begin{itemize} \item Assuming cluster size does not vary with respect to the reaction plane, then cluster $\vSA$ translates directly into cluster particle $v_2$ by \be \vSA^{\rm particle}=\wwSA\vSA,\label{eq26} \ee where $\wwSA$ is the angular spread of cluster particles relative to the cluster axis by Eq.~(\ref{eq16}). The hydro-like particle $\vbg$ is then given by \be (1-f)\vbg+f\wwSA\vSA=v_2^{\meas},\label{eq27} \ee where $f$ is the fraction of particles from clusters. \item Assuming cluster orientation is isotropic (e.g. initial hard scattering products), but cluster size varies from in-plane to out-of-plane, then approximately half of the cluster $\vSA$ translates into cluster particle $v_2$ by \be \vSA^{\rm particle}=\wwSA\vSA/2.\label{eq28} \ee The hydro-like particle $\vbg$ is then given by \be (1-f)\vbg+f\wwSA\vSA/2=v_2^{\meas}.\label{eq29} \ee \end{itemize} We use the $v_2$\{{\sc 2d}\} fit to the two-particle angular correlation data~\cite{Daugherity} as the measured $v_2^{\meas}$, because the non-flow same-side correlation peak (part of the cluster pair correlation) should be excluded. In fact, if the fit model~\cite{Daugherity} used to separate elliptic flow and non-flow correlations is accurate, then the fit $v_2$\{{\sc 2d}\} should be the true elliptic flow (correlation related to the reaction plane)~\cite{Wang}. Note that the true elliptic flow may not be necessarily equal to the hydro-like elliptic flow, but is the net sum of the hydro-like elliptic flow and the product of cluster correlation and cluster elliptic flow~\cite{Wang}, as shown in Eqs.~(\ref{eq27}) and (\ref{eq29}). 
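Solving Eqs.~(\ref{eq27}) and (\ref{eq29}) for the hydro-like $\vbg$ is a one-line rearrangement. The numbers below are placeholders chosen only to illustrate the direction of the effect (a negative cluster $\vSA$ pushes $\vbg$ above the measured $v_2$); they are not the measured inputs:

```python
def hydro_v2(v2_meas, f, ww_sa, v_sa, isotropic_clusters=False):
    # Eq. (27): (1-f)*v2_bg + f*ww_sa*v_sa = v2_meas; Eq. (29) carries
    # an extra factor 1/2 for isotropic cluster orientation.
    factor = 0.5 if isotropic_clusters else 1.0
    return (v2_meas - f * ww_sa * v_sa * factor) / (1 - f)

# Hypothetical inputs: measured v2 = 0.06, 15% cluster particles,
# ww_SA = 0.7, negative cluster pair anisotropy v_SA = -0.10
print(hydro_v2(0.06, 0.15, 0.7, -0.10))                           # > 0.06
print(hydro_v2(0.06, 0.15, 0.7, -0.10, isotropic_clusters=True))  # smaller shift
```

With these placeholder values the required $\vbg$ exceeds the measured $v_2$ by only a few tenths of a percentage point, illustrating why a moderately negative $\vSA$ still yields a reasonable $\vbg$.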
Figure~\ref{fig4} shows the obtained hydro-like particle $\vbg$ from Eqs.~(\ref{eq27}) and (\ref{eq29}) together with the $v_2$\{{\sc 2d}\}. The $v_2$\{{\sc ep}\} measured by the event-plane method, which contains a large contribution from non-flow, is also shown. Its difference from $v_2$\{{\sc 2d}\} gives a good estimate of the uncertainty of the available elliptic flow measurements. The left panel of Fig.~\ref{fig4} shows the case where the number of clusters scales with binary collisions, corresponding to the upper left panel of Fig.~\ref{fig3}. The obtained hydro-like particle $v_2$ is larger than the measured particle $v_2$ to account for the negative cluster $\vSA$, but not much larger. The $\vbg$ values seem reasonable, suggesting that the $\vSA$ and $\vBB$ may be reasonable too. The right panel of Fig.~\ref{fig4} shows the case where the three-particle correlation from clusters is set to zero, corresponding to the lower right panel of Fig.~\ref{fig3}. Although the fraction of particles from clusters does not matter for this case because the three-particle correlation from clusters is set to zero, we need the fraction to obtain the hydro-like particle $\vbg$ by Eqs.~(\ref{eq27}) and (\ref{eq29}). We use the same fraction of particles from clusters as in the above case (assuming binary collision scaling for the number of clusters). The calculated hydro-like particle $\vbg$ is smaller than the measured particle $v_2$ for cluster size independent of the reaction plane, to offset the larger cluster particle $\vSA^{\rm particle}$ (by Eq.~(\ref{eq26})). For isotropic clusters, the hydro-like particle $\vbg$ is not much different from the measured particle $v_2$ because the cluster particle $\vSA^{\rm particle}$ due to cluster anisotropy (by Eq.~(\ref{eq28})) is approximately equal to the measured particle $v_2$.
The hydro-like particle $\vbg$ is reasonable for this case too, again suggesting that the $\vSA$ and $\vBB$ required to explain the three-particle correlator measurements are reasonable. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{plot_v2_particle_use_v2_2D.eps} \includegraphics[width=0.45\textwidth]{plot_v2_particle_use_v2_2D_no3part.eps} \end{center} \caption{(Color online) Hydro-like particle $\vbg$ that is needed, together with the estimated $\vSA$, to reproduce the fit $v_2$\{{\sc 2d}\}. The left panel corresponds to $\vSA$ in the upper left panel of Fig.~\ref{fig3}, where the number of clusters is assumed to scale with binary collisions. The right panel corresponds to $\vSA$ in the lower right panel of Fig.~\ref{fig3}, where the three-particle correlation from clusters is set to zero. The fraction of cluster particles is kept the same as that for the left panel.} \label{fig4} \end{figure} \section{Conclusions} Cluster model parameters are calculated by Eqs.~(\ref{eq5}), (\ref{eq6}), and (\ref{eq16}) using two-particle angular correlation data~\cite{Daugherity} and two-particle azimuth correlator $\mean{\cab}$ measurements from STAR~\cite{STAR}. Fractions of unlike-sign (US) and like-sign (LS) particle pairs from small-angle (SA) and back-to-back (BB) clusters are obtained by Eqs.~(\ref{eq14a}) and (\ref{eq14b}) assuming no charge difference in the BB two-particle correlation. Three-particle correlation effects are estimated by Eqs.~(\ref{eq21a}) and (\ref{eq21b}) assuming independent emission of particles in clusters, and are positive for SA-clusters and negative for BB-clusters. The estimated three-particle effects are removed from the three-particle azimuth correlator $\mean{\cabc}$ measurements~\cite{STAR}. The remaining correlator magnitudes of $\mean{\cabp}=\mean{\cabc}/\vc$ are assumed to come entirely from cluster two-particle correlations (i.e.
no new physics), and are used to determine elliptic flow parameters of SA- and BB-clusters by Eqs.~(\ref{eq25a}) and (\ref{eq25b}). Cluster size is not measured. A wide range of assumptions is made, from binary collision scaling of the cluster abundance, which results in a large cluster three-particle correlation, to zero cluster size, which yields a vanishing three-particle correlation effect. These assumptions do not affect the cluster two-particle correlation, which is constrained by the two-particle angular correlation measurements~\cite{Daugherity}. The magnitudes of the obtained cluster anisotropy (azimuthal modulation in the number of clusters weighted by the number of particle pairs per cluster) needed to fully account for the three-particle correlator measurements~\cite{STAR} seem reasonable, except in peripheral collisions. The hydro-like particle flow magnitude needed, together with the cluster particle flow, to make up the measured inclusive particle flow appears reasonable too. Cluster particle correlations may originate from (semi-)hard scatterings. It is therefore natural to expect that the cluster effect would increase with $\pT$, although the $\pT$-dependence is not studied in this paper due to the lack of $\pT$-dependent measurements of two-particle angular correlations. It is worth noting that the measured azimuth correlators indeed increase with $\pT$~\cite{STAR}, an observation that cannot be explained by local strong parity violation but may be expected from cluster particle correlations. In conclusion, our results from {\em conventional} physics of cluster particle correlations suggest that no new physics is {\em required} to explain the two- and three-particle azimuth correlator measurements by STAR. Our conclusion is complementary to the experimental finding in Ref.~\cite{STAR} that the azimuth correlator data by themselves do not allow one to draw conclusions about local strong parity violation.
\section*{Acknowledgments} I thank my STAR collaborators for useful discussions. This work is supported by the U.S. Department of Energy under Grant DE-FG02-88ER40412.
\section{Introduction}\label{intro} Let $\gamma$ be a curve in ${\mathbb {R}}^d$ given by \begin{equation}\label{ineq0} \gamma (t) =\Big(t,\frac{t^2}{2} ,\dots ,\frac{t^{d-1}}{(d-1)!},\phi (t)\Big) \end{equation} \noindent where $\phi\in C^{(d)}(a,b)$, where $\phi ^{(j)}(t)> 0$ for $t\in (a,b)$ and $j=0,1,2,\dots ,d$, and where $\phi^{(d)}$ is nondecreasing. Such curves are termed {\it simple} in \cite{DM1}. We are interested in the possibility of proving $L^p \rightarrow L^q$ convolution estimates for the affine arclength measure $\lambda$ on \eqref{ineq0}, given by $d\lambda =\phi^{(d)} (t)^{2/(d^2+d)}dt$. We begin by recalling a theorem from \cite{O1}. (In this note, $|E|$ will stand for the Lebesgue measure of $E$.) \begin{theorem} \label{theorem1} Suppose $d=2$. The inequality \begin{equation*} \|\lambda\ast\chi_E \|_3 \leq (12)^{1/3} \, |E|^{2/3} \end{equation*} \noindent holds for all measurable $E\subset{\mathbb {R}}^2$. \end{theorem} \noindent Theorem \ref{theorem1} is equivalent to a weak-type $(3/2,3)$ estimate for the operator given by convolution with $\lambda$, an estimate which is uniform over the class of measures $\lambda$ described above. Here are two questions which are raised by Theorem \ref{theorem1}: \noindent (i) is there an analogous strong-type estimate, and \noindent (ii) are there analogs of Theorem \ref{theorem1} if $d>2$? \noindent Having no idea how to attack these interesting questions in their natural generality, we follow the usual practice of asking what can be said along such lines by imposing additional hypotheses on $\phi$. The requirement \begin{equation}\label{ineq1} \Big(\prod_{j=1}^n \phi^{(d)}(s_j)\Big)^{1/n} \le A \, \phi^{(d)} \big(\tfrac{s_1+\dots+s_n}{n}\big), \end{equation} to hold for $s_j \in (a,b)$, was used with $n=d$ in \cite{BOS2} to obtain Fourier restriction estimates for curves \eqref{ineq0}. 
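As a quick numerical sanity check of condition \eqref{ineq1}, the sketch below verifies it with $A=1$ for the power case $\phi(t)=t^\beta$, $\beta\ge d$, on sampled points of $(0,\infty)$; here $\phi^{(d)}(t)=\beta(\beta-1)\cdots(\beta-d+1)\,t^{\beta-d}$ has a concave logarithm, so the geometric mean never exceeds the value at the average. The particular $\beta$, $d$, and sample points are arbitrary choices for illustration.

```python
import math
import random

# Check the geometric-mean condition (ineq1) with A = 1 for phi(t) = t**beta,
# beta >= d, whose d-th derivative is beta*(beta-1)*...*(beta-d+1) * t**(beta-d).
# beta, d, and the random sample points below are arbitrary illustrative choices.

def phi_d(t, beta, d):
    """d-th derivative of t**beta."""
    coef = 1.0
    for k in range(d):
        coef *= beta - k
    return coef * t ** (beta - d)

random.seed(0)
beta, d = 5.5, 4
for _ in range(1000):
    s = [random.uniform(0.01, 10.0) for _ in range(d)]
    geo_mean = math.prod(phi_d(t, beta, d) for t in s) ** (1.0 / d)
    at_average = phi_d(sum(s) / d, beta, d)
    assert geo_mean <= at_average * (1.0 + 1e-9)   # holds with A = 1
print("geometric-mean condition verified with A = 1 on the sampled points")
```

The same check applied to the flat functions $\phi_j$ defined below would require numerical differentiation, but for pure powers the inequality reduces to the AM--GM inequality for the $s_j$.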
It is obvious that if $\beta\ge d$ then condition \eqref{ineq1} holds with $A=1$ for $\phi(t)=t^\beta$ on the interval $(0,\infty)$. Moreover, as was observed in \cite{BOS2}, if we define $\phi_0(t)=t^{\beta}$ for some $\beta >d$ and then define $$ \phi_j(t)= \int_0^t {(t-u)^{d-1}} \exp\Big(-\tfrac{1}{\phi_{j-1}^{(d)}(u)}\Big) du $$ for $j\ge1$, each of the functions $\phi_j$ satisfies \eqref{ineq1} with $A=1$ on $(0,\infty)$. This yields a sequence of functions which are progressively flatter at the origin. (See \S 4 of \cite{BOS2} for other examples of flat functions satisfying \eqref{ineq1}.) In this note we will assume the $n=2$ version of \eqref{ineq1} which, with $\omega\doteq (\phi^{(d)})^{2/(d^2+d)}$, we write as \begin{equation}\label{ineq2} \big( \omega (s_1)\,\omega (s_2)\big)^{1/2} \leq A \, \omega \big(\tfrac{s_1+s_2}{2}\big). \end{equation} We will obtain convolution estimates only in dimensions $d=2$, $3$, and $4$: \begin{theorem}\label{d=2} Suppose $d=2$ and assume \eqref{ineq2}. Then there is the Lorentz space estimate \begin{equation*} \|\lambda\ast f\|_{L^{3}}\leq C(A)\,\|f\|_{L^{3/2,3}}. \end{equation*} \end{theorem} \begin{theorem}\label{d=3} Suppose $d=3$ and assume \eqref{ineq2}. Then, for any $\epsilon >0$, there is the Lorentz space estimate \begin{equation*} \|\lambda\ast f\|_{L^{2}}\leq C(A)\,\|f\|_{L^{3/2,2-\epsilon}}. \end{equation*} \end{theorem} \begin{theorem}\label{d=4} Suppose $d=4$ and assume \eqref{ineq2}. If \begin{equation}\label{ineq5.15} \frac{1}{p}-\frac{1}{q}=\frac{1}{10}\ \text{and}\ \frac{4}{10}<\frac{1}{p}<\frac{7}{10} \end{equation} then there is the Lebesgue space estimate \begin{equation*} \|\lambda\ast f\|_{L^{q}}\leq C(A,p)\,\|f\|_{L^{p}}. \end{equation*} \end{theorem} \noindent Here are some comments: \noindent(a) Theorem \ref{d=2} is the best possible Lorentz space estimate, even in the nondegenerate case $\phi(t)=t^2 /2$.
It implies the sharp $L^p \rightarrow L^q$ mapping property, an $L^{3/2}\rightarrow L^3$ estimate. \noindent(b) Theorem \ref{d=3} is analogous to a result from \cite{DLW} for polynomial curves (whose proof we will follow). Theorem \ref{d=3} implies the sharp $L^p \rightarrow L^q$ estimates, which hold for \begin{equation}\label{ineq4.2} \frac{1}{p}-\frac{1}{q}=\frac{1}{6}\ \text{and}\ \frac{1}{2}\leq\frac{1}{p}\leq\frac{2}{3}. \end{equation} But there are sharper Lorentz space estimates for the nondegenerate case $\phi (t)=t^3 /6$ in \cite{BS} and for polynomial curves (for all dimensions $d$) in \cite{Stov2}. \noindent(c) Theorem \ref{d=4} is much less satisfactory. One would like, for example, at least the sharp $L^p \rightarrow L^q$ mapping properties, which correspond to the endpoints in \eqref{ineq5.15}. \noindent(d) An analog of Theorem \ref{d=4} for all dimensions $d$, as well as analogs of the endpoint results of \cite{Stov1} and \cite{Stov2}, might follow from an analog of the band structure construction of \cite{C} for the curves and measures considered in this note. But, in view of the complicated nature of a Jacobian determinant associated with our operators, it is not clear how to obtain such a band structure. \noindent(e) The papers \cite{D}, \cite{Choi1}, and \cite{Choi2} contain some earlier results for convolution with affine arclength measures in dimensions $2$ and $3$. Section \ref{proofs} contains the proofs of Theorems 1.2--1.4 and \S 3 contains proofs for the lemmas required in \S 2. \section{Proofs of Theorems}\label{proofs} \noindent{\bf{Proof of Theorem \ref{d=2}.}} \begin{proof} According to the proof of Theorem 5 in \cite{O2}, which abstracts an argument from \cite{BOS1}, it is enough to establish the estimate \begin{equation*} \int_a^b \Big(\int_a^b \chi_E \big(\gamma (t_2)-\gamma (t_1)\big)\, \omega(t_2)\, dt_2\Big)^2 \omega (t_1)\, dt_1 \leq C(A)\, |E| \end{equation*} for measurable $E\subset {\mathbb {R}}^2$.
The inequality \begin{equation*} \int_a^b \Big(\int_{t_1}^b \chi_E \big(\gamma (t_2)-\gamma (t_1)\big)\, \omega(t_2)\, dt_2\Big)^2 \omega (t_1)\, dt_1 \leq 4\, |E| \end{equation*} from \cite{O1}, which is true without any additional hypothesis like \eqref{ineq2}, shows that it suffices to establish the estimate \begin{equation}\label{ineq5} \int_a^b \Big(\int_a^{t_1} \chi_E \big(\gamma (t_2)-\gamma (t_1)\big)\, \omega(t_2)\, dt_2\Big)^2 \omega (t_1)\, dt_1\leq C(A)\, |E|. \end{equation} The mapping $$ (t_1 ,t_2 )\mapsto \gamma (t_2 )-\gamma (t_1 ) $$ is one-to-one by the convexity of the curve $\gamma$. If $J(t_1 ,t_2 )$ is the absolute value of the Jacobian determinant of this mapping, then \eqref{ineq5} is equivalent to \begin{equation}\label{ineq6} \int \Big(\int \chi_\Omega (t_1 ,t_2 )\, \omega(t_2)\, dt_2\Big)^2 \omega (t_1)\, dt_1\leq C(A)\, \int\int \chi_\Omega (t_1 ,t_2 )\, J(t_1 ,t_2 )\, dt_2 \, dt_1 \end{equation} if $\Omega\subset\{ (t_1 ,t_2 ):a<t_2 <t_1 <b\}$. We will need the following estimate, a consequence of Lemma \ref{lemma0} below, \begin{equation}\label{ineq7.5} J(t_1 ,t_2 )\geq c(A)\,|t_1 -t_2 |\,\omega(t_2 )\,\omega (t_1)^2. \end{equation} \begin{lemma}\label{lemma0} Suppose $\gamma$ is as in \eqref{ineq0} and let $J(t_1 ,\dots ,t_d )$ be the absolute value of the Jacobian determinant of one of the mappings $$ (t_1 ,\dots ,t_d )\mapsto \gamma (t_1 )\pm \gamma (t_2 )\pm\cdots\pm\gamma (t_d). $$ Suppose that \eqref{ineq2} holds and that $n_1 \leq \cdots\leq n_d$ are positive numbers satisfying $n_1 +\cdots +n_d =d(d+1)/2$. Suppose that $\{i_1 ,\dots ,i_d\}=\{1,\dots ,d\}$ and that $a<t_{i_1} <\cdots <t_{i_d} <b$. Then \begin{equation}\label{ineq3.0} J(t_1 ,\dots ,t_d )\ge c\, \Big(\prod_{j=1}^d \omega (t_{i_j} )^{n_j}\Big) V(t_1 ,\dots ,t_d ), \end{equation} where $$ V(t_1 ,\dots ,t_d )=\big| \prod_{1\le i<j\le d}(t_j -t_i ) \big| $$ and where $c$ depends only on $A$ from \eqref{ineq2} and on $n_1 ,\dots , n_d$. 
\end{lemma} \noindent Given \eqref{ineq7.5}, inequality \eqref{ineq6} will follow from \begin{equation}\label{ineq8} \int \Big(\int \chi_\Omega (t_1 ,t_2 )\, \omega(t_2)\, dt_2\Big)^2 \omega (t_1)\, dt_1\leq \end{equation} \begin{equation*} C\, \int\Big(\int \chi_\Omega (t_1 ,t_2 )\, \omega (t_1 )\,|t_1 -t_2 |\,\omega (t_2) \, dt_2 \Big)\omega (t_1 )\, dt_1 . \end{equation*} To see \eqref{ineq8}, we will use the following lemma. \begin{lemma}\label{lemma1} Suppose $\omega$ is nonnegative and nondecreasing on some interval $(c,d]$. Suppose that $c_1 ,\dots ,c_K \in{\mathbb {R}}$. For $\rho>0$ let $$ E_\rho =\{t\in(c,d]: \omega (t)^{K-1} \,\omega (d)\prod_{l =1}^K |t-c_l |\leq \rho ^K\}. $$ Then $$ \int_{E_\rho}\omega (t)\,dt\leq C(K)\,\rho . $$ \end{lemma} \noindent Indeed, fix $t_1$ and define $\rho$ by $$ \rho=\frac{1}{2\,C(1)}\int \chi_\Omega (t_1 ,t_2 )\, \omega(t_2)\, dt_2 , $$ where $C(1)$ is the constant in Lemma \ref{lemma1} corresponding to $K=1$. It follows from Lemma \ref{lemma1} (with $d=t_1$) that $$ \int \chi_\Omega (t_1 ,t_2 )\, \omega (t_1 )\,|t_1 -t_2 |\,\omega (t_2)\, dt_2 \geq \frac{1}{4\,C(1)} \Big(\int \chi_\Omega (t_1 ,t_2 )\, \omega(t_2)\, dt_2\Big)^2 . $$ Now integrating with respect to $t_1$ gives \eqref{ineq8}. \end{proof} \noindent{\bf{Proof of Theorem \ref{d=4}.}} \begin{proof} We will apply the iterated $TT^*$ method introduced by Christ in \cite{C} and (see, e.g., the discussion and references in \cite{Stov2}) employed by many others since then.
Thus, assuming some familiarity with Christ's method, Theorem \ref{d=4} will follow if we establish the inequality \eqref{ineq9} below, where $E$, $\alpha$, and $\beta$ are as follows: let $\Omega\subset (a,b)^4$ be a set of the form \begin{equation*} \Omega =\{(t_1 ,t_2 ,t_3 ,t_4):t_1 \in\Omega_0 ,\,t_2 \in \Omega (t_1 ),\, t_3 \in \Omega (t_1 ,t_2 ),\, t_4 \in \Omega (t_1 ,t_2 ,t_3 )\} \end{equation*} where \begin{equation}\label{ineq8.5} \lambda (\Omega_0 )\geq \beta >0,\ \lambda \big(\Omega (t_1 ) \big)\geq \alpha >0 \ \text{for each}\ t_1\in\Omega_0 , \ \end{equation} \begin{equation}\label{ineq8.6} \lambda \big(\Omega (t_1 ,t_2) \big)\geq \beta \ \text{whenever}\ t_1\in\Omega_0 ,\,t_2\in\Omega (t_1),\ \text{and}\ \end{equation} \begin{equation}\label{ineq8.7} \lambda \big(\Omega (t_1 ,t_2 ,t_3) \big)\geq \alpha \ \text{whenever}\ t_1\in\Omega_0 ,\,t_2\in\Omega (t_1),\ t_3 \in \Omega (t_1 ,t_2 ). \end{equation} (Here we are writing $\lambda$ for the measure $$ d\lambda (t)=\omega (t)\, dt=\phi ^{(4)} (t)^{1/10}dt $$ on $(a,b)$ as well as for its image on $\gamma$.) The set $E$ is defined by $$ E=\{\gamma (t_1 )-\gamma (t_2 )+\gamma (t_3 )-\gamma (t_4 ):(t_1 ,t_2 ,t_3 ,t_4 )\in \Omega \}, $$ and the desired inequality is \begin{equation}\label{ineq9} |E|\geq c(A)\,\alpha^7 \beta^3 . \end{equation} By passing to a subset of $\Omega$ and replacing $\alpha$ and $\beta$ by $\alpha /24$ and $\beta /24$, we can assume that there is some permutation $\{i_1 ,i_2 ,i_3 ,i_4\}$ of $\{1,2,3,4\}$ such that if $(t_1 ,t_2 ,t_3 ,t_4)\in\Omega$ then \begin{equation*} t_{i_1}<t_{i_2}<t_{i_3}<t_{i_4}.
\end{equation*} If $J(t_1 ,t_2 ,t_3 ,t_4)$ is the absolute value of the Jacobian determinant of the mapping $$ (t_1 ,t_2 ,t_3 ,t_4)\mapsto \gamma (t_1 )-\gamma (t_2 )+\gamma (t_3 )-\gamma (t_4 ), $$ we will use the following inequality, a consequence of Lemma \ref{lemma0}, \begin{equation}\label{ineq12} J(t_1 ,t_2 ,t_3 ,t_4 )\geq c(A)\, \omega (t_{i_4})^9 \Big(\prod_{j=1}^3 \omega (t_{i_j})\Big)^{1/3}\,V(t_1 ,t_2 ,t_3 ,t_4 ). \end{equation} We will also need the following lemma. \begin{lemma}\label{lemma2} Suppose $\omega$ is nonnegative and nondecreasing on an interval $[c,d)$. Suppose $\eta >0$ and $r>1$ satisfy $$ \eta<\frac{1}{r'}\doteq 1-\frac{1}{r}. $$ Suppose $E\subset [c,d)$ and let $$ \rho =\int_E \omega (t)\,dt . $$ Then, for $t_0 \in{\mathbb {R}}$, \begin{equation*} \rho^{1+r\eta}\ \omega (c)^{r-(1+r\eta )}\leq C(\eta ,r) \, \int_E \omega (t)^r \, |t-t_0 |^{r\eta}\,dt. \end{equation*} If $t_1 ,t_2 \in{\mathbb {R}}$ then also \begin{equation*} \rho^{1+r\eta}\,|t_2 -t_1 |^{r\eta }\,\omega (c)^{r-(1+r\eta )}\leq C(\eta ,r) \, \int_E \omega (t)^r \, (|t-t_1 |\cdot|t-t_2 |)^{r\eta}\,dt. \end{equation*} \end{lemma} Now define $I$ by \begin{equation*} I= \int_{\Omega_0} \int_{\Omega (t_1 )}\int_{\Omega (t_1 ,t_2 )}\int_{\Omega (t_1 ,t_2 ,t_3)} \omega (t_{i_4})^9\, \Big(\prod_{j=1}^3 \omega (t_{i_j})\Big)^{1/3}\,V(t_1 ,t_2 ,t_3 ,t_4 ) \,dt_4 \, dt_3 \, dt_2 \,dt_1 \end{equation*} so that, because of \eqref{ineq12}, we have \begin{equation}\label{ineq15.5} |E|\geq c(A)\,I. \end{equation} (The change of variables needed for the estimate \eqref{ineq15.5} is justified as in \cite{DM2}, p. 549.) In view of \eqref{ineq15.5}, \eqref{ineq9} will follow if we show that \begin{equation}\label{15.7} I\gtrsim \alpha^7 \beta^3. \end{equation} (The constants implied by $\lesssim$ and $\gtrsim$ will not depend on any parameters.) We will, unfortunately, need to consider several cases.
To begin, if $4=i_4$, we will use Lemma \ref{lemma2} with $r=5$ and $\eta =3/5$ to estimate \begin{equation*} \int_{\Omega (t_1 ,t_2 ,t_3)}\omega (t_4 )^5 \,\prod_{j=1}^3 |t_4 -t_j |\,dt_4 \geq \int_{\Omega (t_1 ,t_2 ,t_3)}\omega (t_4 )^5 \, |t_4 -t_{i_3} |^3 \,dt_4 \gtrsim \end{equation*} \begin{equation*} \,\Big(\int_{\Omega (t_1 ,t_2 ,t_3)}\omega (t_4 )\, dt_4 \Big)^4\omega (t_{i_3}). \end{equation*} With the inequality $\omega (t_{i_4})\ge\omega (t_{i_3})$ this gives \begin{equation}\label{ineq17} I\gtrsim \int_{\Omega_0} \int_{\Omega (t_1 )}\int_{\Omega (t_1 ,t_2 )} \Big(\int_{\Omega (t_1 ,t_2 ,t_3)}\omega (t_4 )\,dt_4 \Big)^4 \cdot \end{equation} \begin{equation*} \omega (t_{i_3})^{16/3}\,\omega (t_{i_2})^{1/3}\,\omega (t_{i_1})^{1/3}\,V(t_1 ,t_2 ,t_3 )\,dt_3 \,dt_2 \,dt_1 . \end{equation*} If $4 =i_{k_0}$ for some $k_0 =1,2,3$, then $$ \omega (t_4 )^{1/3}\,\omega (t_{i_4})^{11/3}\ge \omega (t_4)^3 \,\omega (t_{i_{k_0 +1}}) $$ by the monotonicity of $\omega$. Thus \begin{equation*} \int_{\Omega (t_1 ,t_2 ,t_3)}\omega (t_4 )^{1/3}\,\omega (t_{i_4})^{11/3}\prod_{j=1}^3 |t_4 -t_j |\,dt_4 \geq \int_{\Omega (t_1 ,t_2 ,t_3)}\omega (t_4)^3\,\omega (t_{i_{k_0 +1}}) \prod_{j=1}^3 |t_4 -t_j |\,dt_4 \gtrsim \end{equation*} \begin{equation*} \Big(\int_{\Omega (t_1 ,t_2 ,t_3)}\omega (t_4 )\,dt_4 \Big)^4 , \end{equation*} where the last inequality follows from an application of Lemma \ref{lemma1} as at the end of the proof of Theorem \ref{d=2} but with $K=3$ instead of $K=1$. Therefore \begin{equation}\label{ineq18} I\gtrsim \int_{\Omega_0} \int_{\Omega (t_1 )}\int_{\Omega (t_1 ,t_2 )} \Big(\int_{\Omega (t_1 ,t_2 ,t_3)}\omega (t_4 )\, dt_4 \Big)^4 \times \end{equation} \begin{equation*} \omega (t_{i_4})^{16/3}\, \prod_{k=1,k\not=k_0}^3 \omega (t_{i_k})^{1/3}\, V(t_1 ,t_2 ,t_3 )\,dt_3 \,dt_2 \,dt_1 .
\end{equation*} Now if $\{j_1 ,j_2 ,j_3 \}$ is the permutation of $\{1,2,3\}$ such that $t_{j_1}<t_{j_2}<t_{j_3}$ whenever $t_1 \in\Omega_0 ,\,t_2 \in \Omega (t_1),\, t_3 \in\Omega (t_1 ,t_2 )$, then \eqref{ineq17}, \eqref{ineq18}, and \eqref{ineq8.7} imply that \begin{equation}\label{ineq19} I\gtrsim \alpha^4 \int_{\Omega_0} \int_{\Omega (t_1 )}\int_{\Omega (t_1 ,t_2 )} \omega (t_{j_3})^{16/3}\,\omega (t_{j_2})^{1/3}\,\omega (t_{j_1})^{1/3}\,V(t_1 ,t_2 ,t_3 )\,dt_3 \,dt_2 \,dt_1 . \end{equation} If $3=j_3$, \begin{equation*} \int_{\Omega (t_1 ,t_2 )} \omega (t_3 )^3 \,|t_3 -t_2 |\cdot |t_3 -t_1 |\,dt_3 \geq |t_1 -t_2 |\int_{\Omega (t_1 ,t_2 )} \omega (t_3 )^3 \,|t_3 -t_{j_2} |\,dt_3 \gtrsim \end{equation*} \begin{equation*} |t_1 -t_2 |\Big(\int_{\Omega (t_1 ,t_2 )} \omega (t_3 )\, dt_3 \Big)^2 \omega (t_{j_2} ), \end{equation*} where the $\gtrsim$ results from an application of Lemma \ref{lemma2} with $r=3$ and $\eta =1/3$. With \eqref{ineq19}, \eqref{ineq8.6}, and the monotonicity of $\omega$ this gives \begin{equation}\label{ineq21} I\gtrsim \alpha^4 \beta^2 \int_{\Omega_0} \int_{\Omega (t_1 )} \omega (t_{j_2})^{7/2}\,\omega (t_{j_1})^{1/2}\,|t_1 -t_2 |^2 \,dt_2 \,dt_1 . \end{equation} If $3=j_2$, the second conclusion of Lemma \ref{lemma2}, with $r=13/6$ and $\eta =6/13$, gives $$ \int_{\Omega (t_1 ,t_2 )} \omega (t_3 )^{13/6}|t_3 -t_{2} |\cdot |t_3 -t_{1} |\,dt_3 \gtrsim |t_{1}-t_{2}|\Big(\int_{\Omega (t_1 ,t_2 )} \omega (t_3 )\, dt_3 \Big)^2 \omega (t_{j_1})^{1/6}. $$ From \eqref{ineq19} it then follows that \begin{equation}\label{ineq22} I\gtrsim \alpha^4 \beta^2 \int_{\Omega_0} \int_{\Omega (t_1 )} \omega (t_{j_3})^{7/2}\,\omega (t_{j_1})^{1/2}\,|t_1 -t_2 |^2 \,dt_2 \,dt_1 .
\end{equation} And if $3=j_1$ then $$ \int_{\Omega (t_1 ,t_2 )} \omega (t_3 )\omega (t_{j_2})\,|t_3 -t_{2} |\cdot |t_3 -t_{1} |\,dt_3 \geq |t_{1}-t_{2}|\int_{\Omega (t_1 ,t_2 )} \omega (t_3 )\, \omega (t_{j_2})\,|t_3 -t_{j_2} |dt_3 \gtrsim $$ $$ |t_{1}-t_{2}|\Big(\int_{\Omega (t_1 ,t_2 )} \omega (t_3 )\, dt_3 \Big)^2 $$ by Lemma \ref{lemma1} with $K=1$, and so \eqref{ineq19} gives \begin{equation}\label{ineq23} I\gtrsim \alpha^4 \beta^2 \int_{\Omega_0} \int_{\Omega (t_1 )} \omega (t_{j_3})^{7/2}\,\omega (t_{j_2})^{1/2}\,|t_1 -t_2 |^2 \,dt_2 \,dt_1 . \end{equation} Thus if $\{k_1 ,k_2 \}$ is the permutation of $\{1,2\}$ such that $t_{k_1}<t_{k_2}$ whenever $t_1 \in\Omega_0 ,\,t_2 \in \Omega (t_1)$, then \eqref{15.7} will follow from \eqref{ineq21}, \eqref{ineq22}, and \eqref{ineq23} if we establish that \begin{equation}\label{ineq24} \int_{\Omega_0} \int_{\Omega (t_1 )}\omega (t_{k_2})^{7/2}\,\omega (t_{k_1})^{1/2}\,|t_1 -t_2 |^2 \,dt_2 \, dt_1\gtrsim \alpha^3 \beta. \end{equation} If $2=k_2$, then $$ \int_{\Omega (t_1 )}\omega (t_{2})^{7/2}\,|t_1 -t_2 |^2 \,dt_2 \gtrsim \Big( \int_{\Omega (t_1 )}\omega (t_{2}) \,dt_2 \Big)^3 \omega (t_1 )^{1/2} $$ by Lemma \ref{lemma2} with $r=7/2$, $\eta =4/7$, and \eqref{ineq24} follows from \eqref{ineq8.5}. If $2=k_1$, then $$ \omega (t_{k_2})^{7/2}\,\omega (t_{k_1})^{1/2}\ge \omega (t_1 )^2 \omega (t_2 )^2 $$ and $$ \int_{\Omega (t_1 )}\omega (t_{2})^{2}\,\omega (t_1 )\,|t_1 -t_2 |^2 \,dt_2 \gtrsim \Big( \int_{\Omega (t_1 )}\omega (t_{2}) \,dt_2 \Big)^3 $$ by Lemma \ref{lemma1} with $K=2$. Again, \eqref{ineq24} follows from \eqref{ineq8.5}, and the proof of Theorem \ref{d=4} is complete. \end{proof} \noindent{\bf{Proof of Theorem \ref{d=3}.}} \begin{proof} The sharp $L^p \rightarrow L^q$ estimates (for the indices in \eqref{ineq4.2}) can be obtained by the method of \cite{O}. 
But to obtain the Lorentz space estimates in Theorem \ref{d=3}, we will follow the proof of the $d=3$ case in \cite{DLW}, again using the method of Christ. Thus we will begin by establishing the following claim (which, by itself, implies the almost sharp Lebesgue space estimates corresponding to strict inequality in \eqref{ineq4.2}): suppose that $\Omega\subset (a,b)^3$ is a set of the form \begin{equation*} \Omega =\{(t_1 ,t_2 ,t_3 ):t_1 \in\Omega_0 ,\,t_2 \in \Omega (t_1 ),\, t_3 \in \Omega (t_1 ,t_2 )\} \end{equation*} where \begin{equation}\label{ineq3.002} \lambda (\Omega_0 )\geq \alpha >0,\ \lambda (\Omega (t_1 ) )\geq \beta >0 \ \text{for each}\ t_1\in\Omega_0 \ \text{and} \ \end{equation} \begin{equation*} \lambda (\Omega (t_1 ,t_2) )\geq \alpha \ \text{whenever}\ t_1\in\Omega_0 ,\,t_2\in\Omega (t_1). \end{equation*} If $$ E=\{\gamma (t_1 )-\gamma (t_2 )+\gamma (t_3 ):(t_1 ,t_2 ,t_3 )\in \Omega \}, $$ then we have \begin{equation}\label{ineq3.003} |E|\geq c(A)\,\alpha^4 \beta^2 . \end{equation} As before, we can assume that there is some permutation $\{i_1 ,i_2 ,i_3 \}$ of $\{1,2,3\}$ such that if $(t_1 ,t_2 ,t_3 )\in\Omega$ then \begin{equation*} t_{i_1}<t_{i_2}<t_{i_3}. \end{equation*} With $J(t_1 ,t_2 ,t_3 )$ the absolute value of the Jacobian determinant of the mapping $$ (t_1 ,t_2 ,t_3 )\mapsto \gamma (t_1 )-\gamma (t_2 )+\gamma (t_3 ), $$ we will need the following consequence of Lemma \ref{lemma0}: \begin{equation}\label{ineq3.005} J(t_1 ,t_2 ,t_3 )\geq c(A)\, \omega (t_{i_3})^5 \Big(\prod_{j=1}^2 \omega (t_{i_j})\Big)^{1/2}\,V(t_1 ,t_2 ,t_3 ). \end{equation} Define $I$ by \begin{equation*} I= \int_{\Omega_0} \int_{\Omega (t_1 )}\int_{\Omega (t_1 ,t_2 )} \omega (t_{i_3})^5\, \Big(\prod_{j=1}^2 \omega (t_{i_j})\Big)^{1/2}\,V(t_1 ,t_2 ,t_3 ) \, dt_3 \, dt_2 \,dt_1 \end{equation*} so that, because of \eqref{ineq3.005}, we have \begin{equation*} |E|\geq c(A)\,I. \end{equation*} (Again, the change of variables here is justified as in \cite{DM2}.)
Then \eqref{ineq3.003} will follow from \begin{equation}\label{ineq3.008} I\gtrsim \alpha^4\, \beta^2. \end{equation} Since the proof of \eqref{ineq3.008} is very similar to the proof of \eqref{15.7}, we will only sketch the argument. The first step is to obtain the inequality \begin{equation*} I\gtrsim \alpha^3 \int_{\Omega_0}\int_{\Omega (t_1 )} \omega (t_{j_2})^{5/2}\omega (t_{j_1})^{1/2}\,|t_1 -t_2 |\,dt_2 \, dt_1, \end{equation*} where $\{t_1 ,t_2 \}=\{ t_{j_1},t_{j_2}\}$ and $ t_{j_1}<t_{j_2}$. Recalling \eqref{ineq3.002}, this is done by using Lemma \ref{lemma2}, with $r=7/2$ and $\eta =4/7$, if $3=i_3$ and by using Lemma \ref{lemma1}, with $K=2$, if $3=i_2$ or $3=i_1$. The proof of \eqref{ineq3.008} is then concluded by showing that \begin{equation*} \int_{\Omega_0}\int_{\Omega (t_1 )} \omega (t_{j_2})^{5/2}\omega (t_{j_1})^{1/2}\,|t_1 -t_2 |\,dt_2 \, dt_1 \gtrsim \beta^2 \,\alpha \end{equation*} by using Lemma \ref{lemma2} with $r=5/2$, $\eta =2/5$ if $t_1 <t_2$ and Lemma \ref{lemma1} with $K=1$ if $t_2 <t_1$. This proves \eqref{ineq3.008} and thus, as mentioned above, establishes the almost-sharp Lebesgue space bounds by the method of \cite{C}. To obtain the Lorentz space bounds claimed in Theorem \ref{d=3}, we follow the proof of the analogous result in \cite{DLW} (itself based on a further argument of Christ \cite{C2}). Thus it is enough to establish an analogue of Lemma 1 in \cite{DLW} for our curves $\gamma$.
The crux of the matter is to show the following: if $\Omega\subset (a,b)^3$ is a set of the form \begin{equation*} \Omega =\{(t_1 ,t_2 ,t_3 ):t_1 \in\Omega_0 ,\,t_2 \in \Omega (t_1 ),\, t_3 \in \Omega (t_1 ,t_2 )\}, \end{equation*} where \begin{equation*} \lambda (\Omega_0 )\geq \beta >0,\ \lambda (\Omega (t_1 ) )\geq \beta |G|/|E| >0 \ \text{for each}\ t_1\in\Omega_0 \ \text{and} \ \end{equation*} \begin{equation*} \lambda (\Omega (t_1 ,t_2) )\geq \delta >0 \ \text{whenever}\ t_1\in\Omega_0 ,\,t_2\in\Omega (t_1), \end{equation*} and if $$ E'=\{\gamma (t_1 )-\gamma (t_2 )+\gamma (t_3 ):(t_1 ,t_2 ,t_3 )\in \Omega \}, $$ then we have \begin{equation*} |E'|\geq c(A)\,\delta^3 \Big(\frac{\beta |G|}{|E|}\Big)^2\, \beta . \end{equation*} This can be established by exactly the argument given above for \eqref{ineq3.003}. \end{proof} \section{Proofs of lemmas}\label{lemmas} \noindent{\bf{Proof of Lemma \ref{lemma0}.}} \begin{proof} Assume without loss of generality that $a<t_1 <\dots <t_d <b$. It is enough to prove the lemma in the special case when each $n_j$ can be written \begin{equation}\label{ineq3.05} n_j =\frac{d(d+1)}{2}\cdot\frac{l_j}{2^n} \end{equation} for some large integer $n$ and positive integers $l_j$. (To see this, find $n$ and $l_1 \leq \cdots \leq l_d$ such that $$ \sum_{j=1}^d l_j =2^n \ \text{and}\ n_j \le\frac{d(d+1)}{2}\cdot\frac{l_j}{2^n},\,j=2,\dots ,d. $$ Then note that $$ \prod_{j=1}^d \omega (t_j )^{n_j}\leq \prod_{j=1}^d \omega (t_j )^{\tfrac{d(d+1)l_j}{2^{n+1}}} $$ by the monotonicity of $\omega$.) It is shown in \cite{BOS2} that there exists a nonnegative function $\psi =\psi (u;t_1 ,\dots ,t_d )$ supported in $[t_1 ,t_d ]$ such that \begin{equation}\label{ineq3.1} J(t_1 ,\dots ,t_d )=\int_{t_1}^{t_d}\omega (u)^{d(d+1)/2}\,\psi (u)\,du . \end{equation} The choice $\phi (t)=t^d /d!$ in \eqref{ineq0} shows that \begin{equation*} \int_{t_1}^{t_d}\psi (u)\,du =c(d)\, V(t_1 ,\dots ,t_d ).
\end{equation*} For $\delta >0$, define $$ t_\delta =\big(1-(d-1)\delta\big)t_d +\delta (t_1 +t_2 +\cdots +t_{d-1}). $$ The inequality \eqref{ineq3.0} will follow from \eqref{ineq3.1}, the monotonicity of $\omega$, the inequality \begin{equation}\label{ineq3.3} \int_{t_\delta}^{t_d}\psi (u; t_1 , \dots ,t_d )\,du \ge c(\delta )\, V(t_1 ,\dots ,t_d ), \end{equation} and the fact that there is a $\delta =\delta (n_1 ,\dots ,n_d )>0$ such that \begin{equation}\label{ineq3.4} \omega (t_\delta )^{d(d+1)/2}\geq c(A;n_1 ,\dots ,n_d )\,\prod_{j=1}^d \omega (t_j )^{n_j}. \end{equation} The proof of \eqref{ineq3.3} is by induction on $d$. Since $$ \psi (u;t_1 ,t_2 )=\chi_{[t_1 ,t_2 ]}(u), $$ the case $d=2$ is clear. The inductive step requires an identity from \cite{BOS2}: \begin{equation*} \psi (u;t_1 ,\dots ,t_d )= \int_{t_1}^{t_2}\cdots \int_{t_{d-1}}^{t_d}\psi (u;s_1 ,\dots ,s_{d-1})\,ds_1 \cdots ds_{d-1}. \end{equation*} Thus \begin{equation}\label{ineq3.6} \int_{t_\delta}^{t_d}\psi (u)\,du = \int_{t_1}^{t_2}\cdots \int_{t_{d-1}}^{t_d} \int_{\{u\ge t_\delta\}}\psi (u;s_1 ,\dots ,s_{d-1})\,du\,ds_1 \cdots ds_{d-1}. \end{equation} We need the following additional fact from \cite{BOS2}: suppose $\lambda_j \in (0,1)$ for $j=1 ,\dots ,d-1$ and let $$ t'_j= \lambda_{j}t_{j}+(1-\lambda_j )t_{j+1} $$ for $j=1 ,\dots ,d-1$. Then \begin{equation}\label{ineq3.7} \int_{t'_1}^{t_2}\cdots \int_{t'_{d-1}}^{t_d}V(s_1 ,\dots ,s_{d-1})\,ds_1 \cdots ds_{d-1} \ge c(\lambda_1 ,\dots ,\lambda_{d-1})\, V(t_1 ,\dots ,t_d ). \end{equation} Now choose $\lambda_1 ,\dots ,\lambda_{d-1}\in(0,1)$ and $\delta' >0$ such that if $ t'_j \leq s_j \leq t_{j+1}$ for $j=1,\dots,d-1$ then \begin{equation}\label{ineq3.75} s_{\delta'} \doteq \big(1-(d-2)\delta' \big)s_{d-1} +\delta' (s_1 +s_2 +\cdots +s_{d-2})\ge \end{equation} $$ t_\delta =\big(1-(d-1)\delta\big)t_d +\delta (t_1 +t_2 +\cdots +t_{d-1}). $$ (Here is how to make this choice: we can assume that $t_d =1$.
If $$1-(d-2)\delta ' >0,$$ then \eqref{ineq3.75} holds for all $s_j \in [ t'_j ,t_{j+1}]$ if and only if it holds for $s_j =t'_j$. So fix $s_j =t'_j$ for $j=1,\dots ,d-1$. Then, with $\lambda =(\lambda _1 ,\dots ,\lambda_{d-1})$, $$ s_{\delta '}=(1-\lambda_{d-1})\big( 1-(d-2)\delta' \big)+\sum_{j=1}^{d-1} c_j (\delta ' ,\lambda )t_j $$ where $$ |c_j (\delta ' ,\lambda )|=O(\delta ' +\|\lambda \|). $$ Assume that $\delta '$ and $\lambda$ are chosen so that $$ (1-\lambda_{d-1})\big( 1-(d-2)\delta' \big)\ge \big( 1-(d-1)\delta \big)\ \text{and}\ |c_j (\delta ' ,\lambda )|\le \delta . $$ Since $$ t_\delta =\big( 1-(d-1)\delta )+\delta (t_1 +\cdots +t_{d-1} ), $$ it then follows from the fact that $s_{\delta '}=t_\delta =1$ when $t_1 =\cdots =t_{d-1}=1$ that $s_{\delta '}\ge t_\delta$ if $0\le t_j \le1$.) \noindent Now \begin{equation*} \eqref{ineq3.6} \ge \int_{t'_1}^{t_2}\cdots \int_{t'_{d-1}}^{t_d} \int_{\{u\ge t_\delta\}}\psi (u;s_1 ,\dots ,s_{d-1})\,du\,ds_1 \cdots ds_{d-1}\ge \end{equation*} \begin{equation*} \int_{t'_1}^{t_2}\cdots \int_{t'_{d-1}}^{t_d} \int_{\{u\ge s_{\delta'}\}}\psi (u;s_1 ,\dots ,s_{d-1})\,du\,ds_1 \cdots ds_{d-1}\ge \end{equation*} \begin{equation*} c (\delta' )\int_{t'_1}^{t_2}\cdots \int_{t'_{d-1}}^{t_d} V(s_1 ,\dots ,s_{d-1}) \,ds_1 \cdots ds_{d-1}\ge \end{equation*} \begin{equation*} c(\delta' ;\lambda_1 ,\dots ,\lambda_{d-1})\,V(t_1 ,\dots ,t_d ), \end{equation*} where the second inequality is due to \eqref{ineq3.75} and the fact that $\psi (u;s_1 ,\dots ,s_d )$ is nonnegative, the third to the induction hypothesis, and the fourth to \eqref{ineq3.7}. This completes the proof by induction on $d$ of \eqref{ineq3.3}. To see \eqref{ineq3.4}, recall from \eqref{ineq3.05} that \begin{equation*} n_j =\frac{d(d+1)}{2}\cdot\frac{l_j}{2^n} \end{equation*} for some large integer $n$ and positive integers $l_j$ satisfying $$ \sum_{j=1}^d \frac{l_j}{2^n}=1. $$ Choose $\delta >0$ so small that $$ \delta< \frac{l_j}{2^{n}} $$ for $j=1,\dots ,d-1$. 
Note that, since $t_j <t_d $ if $j<d$, $$ t_\delta =\big(1-(d-1)\delta \big)t_d +\delta (t_1 +\cdots +t_{d-1}) \ge \sum_{j=1}^d \frac{l_j}{2^{n}}\,t_j . $$ Now the inequality \begin{equation*} A^n \, \omega \big(\tfrac{s_1+\dots+s_{2^n}}{2^n}\big)\ge \Big(\prod_{j=1}^{2^n} \omega(s_j)\Big)^{1/2^n} \end{equation*} (which follows from iterating \eqref{ineq2}) and the monotonicity of $\omega$ imply that $$ \omega (t_\delta )\ge \omega \Big(\frac{1}{2^n}\sum_{j=1}^d {l_j}\,t_j \Big) \ge A^{-n}\,\prod_{j=1}^d \omega (t_j )^{\frac{l_j}{2^{n}}}. $$ This gives \eqref{ineq3.4}. \end{proof} \noindent{\bf{Proof of Lemma \ref{lemma1}.}} \begin{proof} By scaling we can assume that $\rho =1$. Partition $(c,d]$ into disjoint intervals $I_j =(a_j ,a_{j+1}]$ such that $2^{j}\le \omega \le 2^{j+1}$ on $I_j$. Assume $d\in I_{j_0}$. We will need the inequality \begin{equation}\label{ineq3.9} |\{t\in{\mathbb {R}} :\prod_{l=1}^K |t-c_l |\leq \tau \}|\leq C(K)\, \tau ^{1/K}, \ \tau >0. \end{equation} (To see \eqref{ineq3.9}, observe that ${\mathbb {R}}$ can be partitioned into at most $2K$ intervals $J_p$ with the property that $$ \prod_{l=1}^K |t-c_l |\geq |t-c_{l(p)}|^K ,\ t\in J_p . $$) From \eqref{ineq3.9} it follows that if $$ E_j =\{t\in I_j :\omega (t)^{K-1}\,\omega (d)\, \prod_{l=1}^K |t-c_l |\leq 1\}, $$ then \begin{equation*} |E_j | \leq \frac{C(K)} {2^{\big( j(K-1)+j_0 \big)/K}}. \end{equation*} Thus $$ \int_{E_j} \omega (t)\, dt\le C(K)\, 2^{(j-j_0 )/K}, $$ and the conclusion of Lemma \ref{lemma1} follows by summing a geometric series.
\end{proof} \noindent{\bf{Proof of Lemma \ref{lemma2}.}} \begin{proof} We begin by observing that \begin{equation*} \rho =\int_E \omega (t)\,|t-t_0 |^{\eta }\,|t-t_0 |^{-\eta }\,dt \leq \end{equation*} \begin{equation*} \Big(\int_E \omega (t)^r |t-t_0 |^{r \eta}\,dt\Big)^{1/r} \Big(\int_E |t-t_0 |^{-r' \eta}\,dt\Big)^{1/r'}\le \end{equation*} \begin{equation*} C(r,\eta )\,\Big(\int_E \omega (t)^r |t-t_0 |^{r \eta}\,dt\Big)^{1/r} \, |E|^{1-\eta -1/r}. \end{equation*} Thus, by the monotonicity of $\omega$, \begin{equation*} \rho^{1+r\eta }\big(\omega (c)\,|E|\big)^{r-1-r\eta}\leq \rho^{1+r\eta }\,\rho^{r-1-r\eta}=\rho ^r \leq \end{equation*} \begin{equation*} C(r,\eta )\,\int_E \omega (t)^r |t-t_0 |^{r \eta}\,dt \cdot |E|^{r-1-r\eta}. \end{equation*} This gives the first conclusion of Lemma \ref{lemma2}. Using the estimate $$ \int_E \big(|t-t_1 |\cdot |t-t_2 |\big)^{-r' \eta }\,dt\leq C(r,\eta )\,|E|^{1-r' \eta}\,|t_1 -t_2 |^{-r' \eta}, $$ the second conclusion follows similarly. \end{proof}
\section{Introduction} The issue of the kaon interaction in the nucleus has attracted much attention in recent years. Although from the study of kaon atoms one knows that the $K^-$-nucleus potential is attractive \cite{friedman-gal}, the discussion centers on how attractive the potential is and whether it can accommodate deeply bound kaon atoms (kaonic nuclei), which could be observed in direct reactions. All modern potentials based on the underlying chiral dynamics of the $KN$ interaction \cite{lutz,angelsself,schaffner,galself,Tolos:2006ny} lead to moderate potentials of the order of 60 MeV attraction at normal nuclear density. They also have a large imaginary part, making the width of the bound states much larger than the energy separation between the levels, which would rule out the experimental observation of these states. Deep $K^-$-N optical potentials are preferred by the phenomenological fits to kaon atom data. One of the best-known extreme cases of this type is a highly attractive phenomenological potential, with a strength of about 600 MeV in the center of the nucleus, introduced in \cite{akaishi:2002bg,akainew}. In this picture, such an attractive $K^-$, inserted inside the nucleus, would lead to a shrinkage of the nucleus, generating a new, very compact object (a kaonic nucleus) with a central density that can be 10 times larger than normal nuclear density. Such super-deep potentials were criticized in \cite{toki,Hyodo:2007jq,Oset:2007vu,npangels}. It is important to keep in mind that in kaon atoms the $K^-$ is primarily bound by the Coulomb force. These are extended systems, and therefore their properties cannot directly tell us about the $K^-$-N potential at short distances. From the experimental side, the search for bound $K^-$ states with nucleons is the most direct and clear way to answer whether the $K^-$-nucleon potential is deep or shallow, because only a deep potential may generate states sufficiently narrow to be observed experimentally.
Experimental attempts to resolve this situation have been made since 2004, but the situation is still very unclear. Several claims of observed deeply bound $K^-$ states have been made. However, the first one, about a $K^-pnn$ state bound by 195 MeV from the experiment at KEK \cite{Suzuki:2004ep}, has now been withdrawn after a new, more precise experiment \cite{Sato:2007sb}. The peaks seen by FINUDA and originally interpreted in terms of deeply bound $K^-pp$ \cite{Agnello:2005qj} and $K^-ppn$ \cite{:2007ph} clusters have now been called into question, because in Refs. \cite{Magas:2006fn,Ramos:2007zz,Crimea,Magas:2008bp} these peaks found explanations based on conventional reactions that unavoidably occur in the process of kaon absorption. There are also claims (with very low statistical significance) of $K^-pp$ and $K^-ppn$ bound states from $\bar{p}$ annihilation in $ ^4He$ at rest measured by OBELIX@CERN \cite{Obelix}, and a recent claim of a $K^-pp$ bound state, seen in the $pp\rightarrow K^+ X$ reaction, from the DISTO experiment \cite{DISTO}. These experimental claims are under investigation now. Before calling in new physics, one has to make sure that these data cannot be explained with conventional mechanisms. There is, however, one more experiment where the authors claim evidence for a strong kaon-nucleon potential, with a depth of the order of 200 MeV \cite{Kishimoto:2007zz}. The experiment looks for fast protons emitted from the absorption of in-flight kaons by $^{12}C$ in coincidence with at least one charged particle in the decay counters sandwiching the target. The data analysis in \cite{Kishimoto:2007zz} is based on the assumption that the coincidence requirement does not change the shape of the final spectra. We shall see that this assumption does not hold and that the interpretation of the data requires a more thorough approach than the one used in that work.
One of the shortcomings of Ref.~\cite{Kishimoto:2007zz} stems from employing the Green's function method \cite{Morimatsu:1994sx} to analyze the data. The only mechanism considered in Ref.~\cite{Kishimoto:2007zz} for the emission of fast protons is the $\bar{K} p \to \bar{K} p$ process, taking into account the optical potential for the slow kaon in the final state. We shall show that there are other mechanisms that contribute to generating fast protons, namely multi-scattering reactions and kaon absorption by one nucleon, $K^- N \to \pi \Sigma$ or $K^- N \to \pi \Lambda$, or by a pair of nucleons, $\bar{K} N N\to \Sigma N$ and $\bar{K} N N\to \Lambda N$, followed by the decay of the $\Sigma$ or $\Lambda$ into $\pi N$. The contributions from these processes were also suggested in Ref. \cite{YH}. In the present work, we take into account all the above-mentioned reactions by means of a Monte Carlo simulation \cite{simulation}. The selection of which reaction occurs at a certain point in the nucleus is done as usual. One chooses a step size $\delta l$ and calculates the probability $\sigma_i \rho\, \delta l$ that each of the possible reactions happens, with $i=$ {\it quasi-elastic, 1N absorption, 2N absorption}, where $\rho$ is the nucleon density. The size of $\delta l$ is small enough that the sum of the probabilities that any reaction occurs is reasonably smaller than unity. A random number from 0 to 1 is generated, and a reaction occurs if the number falls within the corresponding segment of length given by its probability. If the random number falls outside the sum of all segments, then no reaction has taken place and the kaon is allowed to proceed one further step $\delta l$. The simulation of one event is over when all the produced particles leave the nucleus. To adapt the calculations to the experiment of \cite{Kishimoto:2007zz}, we select ``good events'' with fast protons that emerge within an angle of 4.1 degrees in the nuclear rest frame (lab frame).
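The step-by-step reaction selection described above can be sketched in a few lines of Python. This is an illustrative fragment, not the code of Ref.~\cite{simulation}: the channel names follow the text, but the cross sections, density, and step size below are placeholder values.

```python
def select_reaction(sigma, rho, dl, xi):
    """One propagation step of length dl through matter of nucleon density rho.

    sigma : dict mapping channel name -> cross section (fm^2)
    xi    : a uniform random number in [0, 1)
    Returns the channel that occurs, or None if the kaon just advances."""
    probs = [(ch, s * rho * dl) for ch, s in sigma.items()]
    total = sum(p for _, p in probs)
    # dl must be small enough that the total reaction probability stays well below 1
    assert total < 1.0
    cum = 0.0
    for ch, p in probs:
        cum += p
        if xi < cum:
            return ch          # xi fell inside this channel's segment
    return None                # no reaction in this step

# placeholder cross sections (fm^2), not the experimental values
sigma = {"quasi-elastic": 3.0, "1N absorption": 1.0, "2N absorption": 0.6}
rho, dl = 0.17, 0.1            # fm^-3, fm
```

In an actual propagation loop, `xi` would be drawn afresh at every step and the kaon advanced by `dl` whenever `None` is returned.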
As in \cite{Kishimoto:2007zz}, we plot our obtained $^{12}$C$(K^-,p)$ spectrum as a function of the binding energy of the kaon, $E_B$, should the process correspond to the trapping of a kaon in a bound state and the emission of a fast proton. If there is a quasi-elastic collision at a certain point, then the momentum of the $K^-$ and that of the nucleon, which is randomly chosen within the Fermi sea, are boosted to their CM frame. The direction of the scattered momenta is determined according to the experimental cross section. A boost to the lab frame determines the final kaon and nucleon momenta. The event is kept as long as the size of the nucleon momentum is larger than the local Fermi momentum. Since we take into account secondary collisions, we also consider the reactions $K^- p \to K^0 n$ and $K^- n \to K^- n$ with their corresponding cross sections. Once primary nucleons are produced, they are also followed through the nucleus, taking into account the probability that they collide with other nucleons, losing energy and changing their direction; see \cite{Magas:2006fn,Ramos:2007zz,Crimea,Magas:2008bp} for more details. We also follow the rescattered kaon on its way through the nucleus. In the subsequent interaction process we let the kaon experience whichever of the three reactions that we consider (quasi-elastic, one-body absorption, two-body absorption) according to their probabilities. This procedure continues until the kaon is absorbed or leaves the nucleus. Apart from following the kaons and nucleons, our calculations also need to consider the quasi-elastic scattering of $\Lambda$'s and $\Sigma$'s (produced in the kaon absorption reactions) on their way through the residual nucleus. Given the uncertainties in the hyperon-nucleon cross sections, we may use for $\Sigma N$ scattering the relation $\sigma_{\Sigma N} = 2\sigma_{NN}/3$, based on a simple non-strange quark counting rule.
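Returning to the quasi-elastic step described above, the boost-scatter-boost sequence can be sketched as follows. This is only an illustration: the CM scattering direction is passed in by hand (the simulation draws it from the experimental cross section), and the Pauli-blocking test against the local Fermi momentum is left out.

```python
import numpy as np

def boost(p, E, beta):
    """Lorentz-boost a (momentum, energy) pair into a frame moving with velocity beta."""
    gamma = 1.0 / np.sqrt(1.0 - np.dot(beta, beta))
    bp = np.dot(beta, p)
    p_out = p + beta * (gamma**2 / (gamma + 1.0) * bp - gamma * E)
    E_out = gamma * (E - bp)
    return p_out, E_out

def quasi_elastic(p_k, p_n, n_hat, m_k=0.494, m_n=0.938):
    """Elastic K N scattering: p_k, p_n are lab three-momenta (GeV),
    n_hat is the chosen CM scattering direction (unit vector)."""
    E_k = np.sqrt(m_k**2 + p_k @ p_k)
    E_n = np.sqrt(m_n**2 + p_n @ p_n)
    E_tot, P_tot = E_k + E_n, p_k + p_n
    beta = P_tot / E_tot                     # CM velocity seen from the lab
    s = E_tot**2 - P_tot @ P_tot             # invariant mass squared
    # CM momentum of either particle
    p_cm = np.sqrt((s - (m_k + m_n)**2) * (s - (m_k - m_n)**2)) / (2.0*np.sqrt(s))
    Ek_cm = np.sqrt(m_k**2 + p_cm**2)
    En_cm = np.sqrt(m_n**2 + p_cm**2)
    # redirect in the CM, then boost back to the lab (velocity -beta)
    pk_lab, _ = boost(p_cm * n_hat, Ek_cm, -beta)
    pn_lab, _ = boost(-p_cm * n_hat, En_cm, -beta)
    return pk_lab, pn_lab
```

By construction the total energy and momentum of the pair are conserved, which provides a simple consistency check on the kinematics.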
In the case of $\Lambda N$ scattering, we use the refined parameterization of Ref.~\cite{manolo}, as was also done in Ref.~\cite{Crimea}. One-nucleon $K^-$ absorption leads to $K^- N \to \pi \Lambda$ or $K^- N \to \pi \Sigma$, with all the possible charge combinations. The elastic and inelastic two-body ${\bar K}N$ cross sections for kaons are taken from the Particle Data Group \cite{PDG}. Kaon absorption by two nucleons is a bit trickier. Here we take into account the following processes: $K^- NN \to \Lambda N$ or $K^- NN \to \Sigma N$, with all possible charge combinations. In these reactions an energetic nucleon is produced, as well as a $\Lambda$ or a $\Sigma$. Both the nucleon and the hyperon are followed through the nucleus as discussed above. Once out of the nucleus, the hyperons are allowed to decay weakly into $\pi N$ pairs. Therefore, the two-body absorption process provides a double source of fast protons: those directly produced in the two-nucleon absorption reaction and those coming from hyperon decays. We assume the total two-body absorption rate to be 20\% of that of one-body absorption at about nuclear matter density, something that one can infer from data on $K^-$ absorption in $^4$He \cite{Katz:1970ng}. In practice, this is implemented in the following way. The probability per unit length for two-nucleon absorption is proportional to the square of the nucleon density: $\mu_{K^-NN}(\rho) = C_{\rm abs} \rho^2\,.$ We assume that $ \langle \mu_{K^-NN}\rangle = C_{\rm abs} \langle \rho^2 \rangle = 0.2 \langle \mu_{K^-N} \rangle = 0.2 \sigma^{\rm tot}_{K^-N} \langle \rho \rangle \,,$ where $\sigma^{\rm tot}_{K^-N}$ accounts for the total one-nucleon absorption cross section and, in symmetric nuclear matter, it is given by: $\sigma^{\rm tot}_{K^-N}=(\sigma^{\rm tot}_{K^-p} + \sigma^{\rm tot}_{K^-n} - \sigma_{K^-p\rightarrow K^-p} - \sigma_{K^-n\rightarrow K^-n})/2\,.
$ Taking $\langle \rho \rangle=\rho_0/2$, where $\rho_0=0.17$ fm$^{-3}$ is the normal nuclear matter density, we obtain $C_{\rm abs}\approx 6\ {\rm fm}^{5} \,.$ The different partial processes that can take place in a two-nucleon absorption reaction are: $K^- p p \to p \Lambda,\ p \Sigma^0,\ n \Sigma^+ $; $K^- p n \to n \Lambda,\ n \Sigma^0,\ p \Sigma^- $; $K^- n n \to n \Sigma^- \ .$ Ideally, their corresponding branching ratios should be obtained from the relevant microscopic mechanisms; however, in the present exploratory work, we will consider a much simpler approach consisting of assigning equal probability to each of the above reactions. Noting that the chance for the kaon to find a $pn$ pair is twice as large as that for $pp$ or $nn$ pairs, we finally assign a probability of 3/10 for having a $p\Sigma$ pair in the final state, 4/10 for $n \Sigma$, 1/10 for $p \Lambda$ and 2/10 for $n \Lambda$. \begin{figure}[htb] \vspace{-0.25cm} \includegraphics[width=.5\textwidth]{ds_dEB_nocoin.eps} \vspace{-0.75cm} \caption{Calculated $ ^{12}C(K^-,p)$ spectra with $V_{\rm opt}=(-60,-60)\rho/\rho_0$ MeV, taking into account only quasi-elastic processes (dash-dotted line), and including all the contributing processes (full line).} \label{fig1} \end{figure} We also take into account a kaon optical potential $V_{\rm opt}={\rm Re}\, V_{\rm opt} + {\rm i}~ {\rm Im}\, V_{\rm opt} $, which will influence the kaon propagation through the nucleus, especially when it acquires a relatively low momentum after a high-momentum-transfer quasi-elastic collision. In the present study we take the strength of the potential as predicted by chiral models: ${\rm Re}\, V_{\rm opt}= -60\, \rho/\rho_0$ MeV \cite{lutz,angelsself,schaffner,galself,Tolos:2006ny}; ${\rm Im}\, V_{\rm opt} \approx -60\, \rho/\rho_0$ MeV, as in the experimental paper \cite{Kishimoto:2007zz} and the theoretical study of \cite{angelsself}.
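Both the $C_{\rm abs}$ normalization and the two-nucleon branching fractions above reduce to short arithmetic. In the sketch below, the value of $\sigma^{\rm tot}_{K^-N}$ is an illustrative assumption (the text does not quote it) and $\langle\rho^2\rangle$ is approximated by $\langle\rho\rangle^2$; the branching count assigns equal weight to each of the seven partial reactions, with $pn$-initiated channels counted twice.

```python
from fractions import Fraction

# --- normalization of the two-nucleon absorption strength -------------------
rho0 = 0.17                # fm^-3, normal nuclear matter density
avg_rho = rho0 / 2.0       # <rho> used in the text
sigma_tot = 2.5            # fm^2 (~25 mb): assumed value, NOT quoted in the text

# <mu_KNN> = C_abs <rho^2> = 0.2 sigma_tot <rho>, approximating <rho^2> ~ <rho>^2
C_abs = 0.2 * sigma_tot * avg_rho / avg_rho**2     # close to 6 fm^5

# --- branching fractions of the seven partial reactions ---------------------
channels = [("pp", "p Lambda"), ("pp", "p Sigma0"), ("pp", "n Sigma+"),
            ("pn", "n Lambda"), ("pn", "n Sigma0"), ("pn", "p Sigma-"),
            ("nn", "n Sigma-")]
weight = {"pp": 1, "pn": 2, "nn": 1}   # a pn pair is twice as likely to be found

total = sum(weight[pair] for pair, _ in channels)                 # = 10
prob = {final: Fraction(weight[pair], total) for pair, final in channels}

p_sigma = prob["p Sigma0"] + prob["p Sigma-"]                     # 3/10
n_sigma = prob["n Sigma+"] + prob["n Sigma0"] + prob["n Sigma-"]  # 4/10
```

The counting reproduces exactly the 3/10, 4/10, 1/10, 2/10 assignment quoted in the text.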
In the Monte Carlo simulation we implement the in-medium kaon mass distribution implied by this optical potential by generating a random kaon mass $\tilde{M}_K$ around a central value, $M_K + {\rm Re}\,V_{\rm opt}$, within a certain extension determined by the width of the distribution $\Gamma_K = -2 {\rm Im}\, V_{\rm opt}$. The probability assigned to each value of $\tilde{M}_K$ follows the Breit-Wigner distribution given by the kaon spectral function: \begin{center} $ S(\tilde{M}_K )=\frac{1}{\pi} \frac{-2M_K {\rm Im}\,V_{\rm opt}} {(\tilde{M}^2_K -M_K^2-2 M_K {\rm Re}\,V_{\rm opt})^2 + (2M_K {\rm Im}\,V_{\rm opt})^2} \,. $ \end{center} In Fig.~\ref{fig1} we show the results of the Monte Carlo simulation obtained with an optical potential $V_{\rm opt}=(-60,-60)\rho/\rho_0$ MeV: first, taking into account only quasi-elastic processes, and then taking into account all the discussed mechanisms. We can see that there is some strength gained in the region of ``bound kaons'' due to the new mechanisms. Although not shown separately in the figure, we have observed that one-nucleon absorption and several rescatterings contribute to the region $-E_B > -50$ MeV. To some extent, this strength can be simulated by the parametric background used in \cite{Kishimoto:2007zz}. However, this is not true anymore for the two-nucleon absorption process, which contributes to all values of $-E_B$, starting from almost as low as $-300$ MeV. It is very important to keep in mind that in the spectrum of \cite{Kishimoto:2007zz} the outgoing forward protons were measured in coincidence with at least one charged particle in the decay counters sandwiching the target. Obviously, the real simulation of such a coincidence experiment is tremendously difficult, practically impossible with high accuracy, because it would require tracing out all the charged particles coming out from all possible scatterings and decays.
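The mass sampling can be sketched as follows; the distribution peaks at $\tilde{M}^2_K = M_K^2 + 2M_K\,{\rm Re}\,V_{\rm opt}$, and the numerical values for the kaon mass and the potential at $\rho=\rho_0$ follow the text (the sampling window and the accept-reject scheme are illustrative choices, not taken from the paper).

```python
import numpy as np

M_K = 493.7                  # MeV, kaon mass
re_v, im_v = -60.0, -60.0    # Re/Im V_opt at rho = rho0, in MeV

def spectral(m):
    """Breit-Wigner kaon spectral function S(m) from the text."""
    num = -2.0 * M_K * im_v
    den = (m**2 - M_K**2 - 2.0*M_K*re_v)**2 + (2.0*M_K*im_v)**2
    return num / (np.pi * den)

m_peak = np.sqrt(M_K**2 + 2.0*M_K*re_v)   # most probable in-medium mass

def sample_mass(rng, lo=300.0, hi=700.0):
    """Accept-reject sampling of the shifted kaon mass in [lo, hi] MeV."""
    s_max = spectral(m_peak)
    while True:
        m = rng.uniform(lo, hi)
        if rng.uniform(0.0, s_max) < spectral(m):
            return m
```

With the values above the peak sits near 430 MeV, i.e. roughly $M_K + {\rm Re}\,V_{\rm opt}$, as stated in the text.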
Although we are studying many processes and following many particles in our Monte Carlo simulation, which is not the case in the Green's function method used in the data analysis \cite{Kishimoto:2007zz}, we cannot simulate precisely the real coincidence effect. \begin{figure}[htb] \vspace{-0.25cm} \includegraphics[width=.5\textwidth]{ds_dEB_2.eps} \vspace{-0.75cm} \caption{Calculated $ ^{12}C(K^-,p)$ spectra with $V_{\rm opt}=(-60,-60)\rho/\rho_0$ MeV taking into account all contributing processes (dash-dotted line). Then we impose the minimal coincidence requirement (full line). Data points are from \cite{Kishimoto:2007zz}.} \label{fig2} \end{figure} The best we can do is to eliminate the processes which, for sure, will not produce a coincidence; this can be called a minimal coincidence requirement. If the kaon in the first quasi-elastic scattering produces an energetic proton falling into the peaked region of the spectrum, then the emerging kaon will be scattered backwards. In our Monte Carlo simulations we can select events where neither the proton nor the kaon has any further reaction after such a scattering. In these cases, although there is a ``good'' outgoing proton, there are no charged particles going out at the right angle with respect to the beam axis to hit a decay counter, since the $K^-$ escapes undetected in the backward direction. Therefore, this type of event must be eliminated for comparison with the experimental spectra. It is clear from Fig.~\ref{fig1} that the main source of energetic protons for the $ ^{12}C(K^-,p)$ spectrum is $K^-p$ quasi-elastic scattering; however, many of these events will not pass the coincidence condition. Implementing the minimal coincidence requirement, as discussed above, cuts off a substantial part of the potentially ``good'' events and drastically changes the form of the final spectrum, as illustrated in Fig. \ref{fig2}.
\begin{figure}[htb] \vspace{-0.25cm} \includegraphics[width=.5\textwidth]{ds_dEB_zoom_2.eps} \vspace{-0.75cm} \caption{Calculated $ ^{12}C(K^-,p)$ spectrum with $V_{\rm opt}=(-60,-60)\rho/\rho_0$ MeV with the minimal coincidence requirement (solid line), and with additional suppression factors (dash-dotted and dotted lines). Experimental points are from \cite{Kishimoto:2007zz}.} \label{fig3} \end{figure} To further simulate the coincidence requirement we introduce additional constant suppression factors to the obtained spectrum (see Fig. \ref{fig3}). Comparing our results with the experimental data, we can conclude that in the ``bound'' region, $-E_B < 0$ MeV, this additional suppression is about 0.7 and more or less homogeneous, while in the continuum the suppression weakens, and for $-E_B > 50$ MeV it is negligible. This picture is natural from the physical point of view, because the right-hand side of the spectrum in Fig. \ref{fig3}, with relatively low-momentum protons, is mostly populated by many-particle final states, which have a good chance of satisfying the coincidence. To conclude, the main point of our analysis is not to state that the data of Ref.~\cite{Kishimoto:2007zz} support ${\rm Re}\, V_{\rm opt}=-60\rho/\rho_0$ MeV rather than $-200\rho/\rho_0$. We want to make it clear that trying to simulate these data one necessarily introduces large uncertainties due to the experimental set-up. Thus, this experiment is not appropriate for extracting information on the kaon optical potential. Contrary to what is assumed in Ref.~\cite{Kishimoto:2007zz}, we clearly see in Fig. \ref{fig2} that the spectrum shape is affected by the required coincidence. The experimental data without the coincidence requirement would be a more useful observable.
{\bf Acknowledgments.}\ \ \ This work is partly supported by the contracts FIS2006-03438, FIS2008-01661 from MICINN (Spain), by CSIC and JSPS under the Spain-Japan research Cooperative program, and by the Ge\-ne\-ra\-li\-tat de Catalunya contract 2009SGR-1289. We acknowledge the support of the European Community-Research Infrastructure Integrating Activity ``Study of Strongly Interacting Matter'' (HadronPhysics2, Grant Agreement n. 227431) under the Seventh Framework Programme of EU. J.Y. is a Yukawa Fellow and this work is partially supported by the Yukawa Memorial Foundation.
\section{INTRODUCTION} \subsection{Disc-Disc Mergers} In a $\Lambda$ Cold Dark Matter Universe ($\Lambda$CDM), the formation of structures generally involves the merging of smaller structures. Mass is built up predominantly by the merger of objects with masses between $3-30\%$ of the galaxy's total mass at redshift $z=0$ \citep{stewartetal08}. Since $z=2$, approximately 60\% of galaxies have experienced a major merger, which we will take in this paper to mean one involving a mass ratio of 3:1 or closer to unity. Also, an important fraction of the baryonic mass (30 - 50\%) of $z=0$ galaxies is provided by major mergers \citep{stewartetal09}. Recent observations show that nearly 20\% of the massive disc galaxies have undergone a major merger since $z\sim1.5$ \citep{lopezetal09}. This process is virtually scale-free, meaning that low-mass dark matter halos, in which disc galaxies presumably reside, have undergone merging histories similar to those of the high-mass dark matter halos in which, predominantly, ellipticals reside. This is at first puzzling, as mergers have long been associated with forming spheroidal structures, i.e. early-type galaxies \citep{toomre77}. Following this line of reasoning, every galaxy in the nearby universe should be an elliptical or an irregular, which is clearly not the case. Furthermore, observed elliptical galaxies can be described by simple relations relating colors, luminosities, masses, etc. These simple relations are difficult to reproduce with a scenario involving collisions \citep{bs98,bs99}. The resolution of this, of course, lies with the baryonic physics. Studies using two stable discs as initial conditions (\citealt{sh05,robertsonetal06}; \citealt[][hereafter B07]{brooketal07}; \citealt{rb08}) as well as fully cosmological simulations \citep{gv08} have shown that binary gas-rich mergers can result in disc morphologies.
Hence, major mergers between gas-rich disc galaxies can possibly play an important role in the formation of a large number of the spiral galaxies we observe today. These facts suggest an addition to the original scenario for the formation of disc galaxies, which involved the assembly of a halo of dark matter and ionized gas by hierarchical clustering and mergers of sub-halos, followed by the collapse and virialization of the dark matter halo, and finally the dissipative collapse of the baryonic component, resulting in a rotationally-supported disc that will later fragment to form stars \citep{wr78,fe80,blumenthaletal84}. One must also consider a formation scenario involving major collisions between spiral galaxies. The first numerical simulations of major collisions between disc galaxies were performed under the assumption that dissipation by gasdynamical effects was negligible. The results showed that the remnant galaxy had several properties attributed to elliptical galaxies, including luminosity profiles \citep{nb03,barnes02,bh92,hernquist92,hernquist93,bs98,bs99,nbh99}. Even with the quite recent inclusion of feedback processes, the results remained mostly unchanged \citep{bh96,mh96,springel00,naabetal06}. Minor collisions have also been studied in the past, and the results showed that this kind of collision tends to leave galaxies whose discs are destroyed or strongly perturbed \citep{hm95,bekki98}. All the aforementioned studies assumed a gas-to-star ratio more representative of present-day, low-redshift mergers. Of course, stars have not always been present inside galaxies. Going back in time, we can expect to find fewer and fewer stars, and therefore a more important gaseous component \citep{stewartetal09}. \subsection{Thick Discs} The Milky Way's thick disc is made predominantly of old stars, with ages in the range of $8-12\,{\rm Gyr}$, significantly older than the typical star found in the thin disc \citep{gwj95}.
The thick and thin discs differ in their structural, chemical, and kinematical properties: (1) The thick disc has a scale height of about 900 pc, compared to 300 pc for the thin disc \citep{juric08}. (2) The stars in the thick disc have a low metallicity, $\rm[Fe/H]\sim-0.6$ on average, and a ratio $\rm[\alpha/Fe]$ larger than the stars in the thin disc. (3) Stars in the thick disc tend to rotate more slowly than stars in the thin disc, by about $40\,{\rm km\,s^{-1}}$ \citep{gwk89}, and have an average vertical velocity dispersion of $45\,{\rm km\,s^{-1}}$ compared to $\sim10-20\,{\rm km\,s^{-1}}$ for the stars in the thin disc \citep{delhaye65,cb00}. The formation of the Milky Way's thick disc remains an open question, with popular theories including the heating of a thin disc by minor mergers \citep{quinnetal93,kazantzidisetal08,kzb08}, the direct accretion of stars, which are tidally torqued into the plane of the existing thin disc \citep{abadietal03}, or dynamical heating by the destruction of very massive star clusters \citep{kroupa02,ee06}. We favor a scenario in which the Milky Way's thick disc is formed during the high-redshift epoch in which mergers are most common in $\Lambda$CDM, and accretion rates are high, with the disc which formed during this turbulent period being born thick; the thin disc subsequently grows during the relatively quiescent period which follows, which in the case of the Milky Way is approximately the last 10 Gyrs \citep{brooketal04b,brooketal05,brooketal06}. Recent observations suggest that all spiral galaxies are surrounded by a red, flattened envelope of stars, and these structures have been interpreted as showing that thick discs are ubiquitous in spiral galaxies \citep{db02}. Hence, understanding the formation of the thick disc will provide major insights into the formation processes of spiral galaxies.
If we make the hypothesis that thick discs are formed during gas-rich major mergers, it would mean that a significant number of spiral galaxies have experienced major mergers. Indeed, simulations have shown that gas-rich mergers tend to form two disc components (\citealt{robertsonetal06}; B07). These simulations included the effects of star formation and supernovae feedback, which allowed the kinematical and structural properties of galaxy merger remnants to be studied in detail. However, they did not include a detailed treatment of chemical enrichment and, therefore, could not predict the abundances of metals in stars and the interstellar medium \citep[but see also][who studied the formation of elliptical galaxies due to a more violent major merger]{bs98,bs99}. It is well-established that the thick disc of the Milky Way possesses a unique chemical signature, which is believed to be directly related to the formation process, and it is important to analyze whether the merger scenario for the formation of the thick disc component is compatible with these chemical signatures. This is the basis of our study here: we believe that the remnants of major mergers may be associated with a significant fraction of the ``red envelopes'' observed in \citet{db02}, i.e. with extragalactic thick discs. Since we expect the conditions for merging galaxies during hierarchical clustering at high redshift to vary, we need to consider a wide range of parameters, such as the mass ratio, orbit, and gas fraction of the merger progenitors. In this paper, we present a series of 8 simulations of mergers of gas-rich disc galaxies, with different mass ratios and orbital parameters. We analyze the kinematic and structural properties of the merger remnants. The results for one of the simulations have already been presented in a previous paper (B07). We now complete this work by presenting the entire suite of simulations.
For kinematic properties, we determine the relative importance of rotation and velocity dispersion in providing support. We also investigate whether a counter-rotating component arises naturally in a gas-rich merger, as such components are observed in nature \citep{yd06}. For structural properties, we determine whether the luminosity profiles of these thick discs formed in major mergers follow an exponential law, and if so, what their respective scale lengths are. We determine whether the thick and thin discs in merger remnants are coplanar, or whether there is a significant angle between them, something that was not included in our previous work. However, the main goal of the present paper is to study the viability of the merger mechanism for the formation of the thick disc from the point of view of chemical evolution, without neglecting the kinematical and dynamical properties. With this in mind, we choose initial conditions likely to produce a discy remnant. In this paper, we use the numerical algorithm GCD+, which includes a detailed treatment of chemical enrichment, to investigate whether key abundance ratios such as $\rm[\alpha/Fe]$ and $\rm[Fe/H]$ in simulated remnants of gas-rich disc galaxy mergers have values and distributions similar to the ones found in observed disc galaxies, including the Milky Way. The remainder of this paper is organized as follows: in \S2, we briefly describe the numerical simulations employed, including the N-body/SPH code adopted (GCD+), the initial conditions software (GalactICS), and the basic parameters of our 8 realizations. The kinematical and structural properties of the merger remnants are presented in \S3 and discussed in \S4. Conclusions are presented in \S5. \section{THE NUMERICAL SIMULATIONS} \subsection{The Algorithm} All simulations were performed using GCD+ \citep{kg03a,kg03b}, self-consistently modeling the effects of gravity, gas dynamics, radiative cooling, and star formation.
GCD+ is a Tree/SPH algorithm that includes Type~Ia and Type~II SNe feedback, and traces the lifetimes of individual stars, enabling us to monitor the chemical enrichment history of our simulated galaxies. Star formation occurs in a convergent gas velocity field where the gas density is greater than a critical density, $\rho_{\rm crit}= 2 \times 10^{-25}{\rm g\,cm^{-3}}$. The star formation rate (SFR) of eligible gas particles is then $d\rho_*/dt=-d\rho_g/dt=c_*\rho_g/t_g$, where $c_*=0.05$ is a dimensionless star formation efficiency, and $t_g$ is the dynamical time. This formula corresponds to a Schmidt law: SFR $\propto\rho^{1.5}$. The mass, energy, and heavy elements are smoothed over the neighboring gas particles using the SPH smoothing kernel. The code separately tracks the abundances of 9 elements: H, He, C, N, O, Ne, Mg, Si, and Fe. Gas within the SPH smoothing kernel of Type~II SNe explosions is prevented from cooling, creating an adiabatic phase for gas heated by such SNe. This adiabatic phase is assumed to last for the lifetime of the lowest-mass star that ends as a Type~II SN, i.e., the lifetime of an $8M_\odot$ star (100 Myr). This is similar to a model presented in \cite{tc00}. Stellar yields depend on the progenitor mass and metallicity. For Type~II SNe, we use the metallicity-dependent stellar yields of \cite{ww95} for stars with mass over $11M_\odot$, while for low- and intermediate-mass stars, we use the stellar yields of \cite{vdhg97}. For Type~Ia SNe, we adopt the model of \cite{ktn00}, in which Type~Ia SNe occur in binary systems with a primary star and a companion of a definite mass and metallicity range. The yields are from \cite{iwamotoetal99}. Note that GCD+ relaxes the instantaneous recycling approximation and takes into account the lifetimes of progenitor stars, and chemical enrichment from intermediate-mass stars.
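The correspondence between the adopted star formation rate and a Schmidt law follows from $t_g \propto \rho_g^{-1/2}$; a minimal scaling check (with the $O(1)$ constants and units in the dynamical time suppressed):

```python
def sfr(rho_g, c_star=0.05):
    """d rho_* / dt = c_* rho_g / t_g, with t_g ~ rho_g**-0.5 (dynamical time,
    constants of order unity and G omitted for this scaling check)."""
    t_g = rho_g ** -0.5
    return c_star * rho_g / t_g        # = c_star * rho_g**1.5

# doubling the gas density raises the SFR by 2**1.5 ~ 2.83
```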
The code calculates, within each timestep, the amount of energy and heavy elements released by every star particle and distributes them over the neighbours using the SPH kernel, for both Type II and Ia SNe. We did not include black hole feedback in the present paper. Several studies have demonstrated that this can be very important for the final morphology and the termination of star formation in the merger remnant \citep{sdh05,oka05,jnb09}. However, the implementation of black hole feedback in numerical simulations is still highly ambiguous, and we prefer here, for simplicity, to include only SN feedback. For more details about GCD+, we refer the reader to \citet{kg03a,kg03b, brooketal04a}. \subsection{Setting Up Initial Conditions} \begin{figure*} \includegraphics[width=6in]{figure01.eps} \caption{Geometry of the initial conditions. The $Y$-axis is pointing away from the viewer. Gal1 is initially at rest, while Gal2 is initially moving away from the viewer, in the $+Y$ direction, as the thick arrow shows. The dotted lines indicate the rotation axes of the galaxies. Gal1 is in the $X-Y$ plane, with the rotation axis in the $Z$-direction. The rotation axis of Gal2 is in the $X-Z$ plane, at an angle $\theta$ relative to the $Z$-axis. Both galaxies are rotating clockwise when seen from above, hence the left edges are moving away from the viewer, and the right edges are moving toward the viewer. The particular case shown on this figure corresponds to simulation M12, with the edges of the discs located at two scale-lengths. \label{fig-initial}} \end{figure*} In each simulation, the initial conditions consist of two galaxies with exponential gas discs embedded in dark matter halos. These galaxies are generated using the GalactICS package \citep{kd95}, producing discs that are stable for a large number of galactic rotations. For the dark matter halos, GalactICS uses the lowered Evans model \citep{evans93}, which leads to a constant-density core.
This model differs from models with the steeper central density profile suggested by $\Lambda$CDM cosmological simulations \citep{nfw96,nfw97,mooreetal99,ghignaetal00,js00,klypinetal01}. For our purposes this difference is not critical, as the central regions contain only a small fraction of the total mass. Star formation is shut down for the first $\sim180{\rm Myr}$, enabling very gas-rich mergers. In our simulations, the gas fraction immediately before the merger is more relevant than the initial value. A pure gas disc would induce an unrealistic initial starburst; by turning off star formation initially, however, we can control the gas fraction at the merger epoch. For each simulation, the most massive galaxy (Gal1) has a total mass of $5\times10^{11}M_\odot$ (except for simulation M11) and a scale length of 4.5~kpc. The mass and scale length of the second galaxy (Gal2) depend on the specific mass ratio chosen for the simulation. Using the observed gas fractions of galaxies at high \citep{erbetal06} and low \citep{mcgaugh05} redshift, \citet{stewartetal09} provide a relation between gas fraction and redshift for galaxies of various masses. Using this relation (their equation~[1]), we estimate that mergers occurring between $z=2.37$ and $z=3.7$ would involve galaxies with gas fractions as high as those used in our study. Note that this epoch is an upper bound, since the high-redshift observations of \citet{erbetal06} are considered lower bounds on the gas content. Also, these redshifts coincide with the epoch of formation of the stars in the Milky Way thick disc. The initial configuration of the system is illustrated in Fig.~\ref{fig-initial}. Gal1 is initially located at ${\bf R}_1=(50,0,0)\,{\rm kpc}$, with no initial velocity, and its rotation axis pointing in the $Z$ direction.
Gal2 is located at ${\bf R}_2=(0,0,15){\rm kpc}$ (except for simulations M12C and M12orb), with initial velocity ${\bf V}_2=(0,100,0){\rm km\,s^{-1}}$ (except for simulation M12z), and its rotation axis in the $X-Z$ plane at an angle $\theta$ relative to the $Z$-axis. We shall refer to this angle as the {\it interaction angle}. All encounters are prograde-prograde, except for simulation rM12, for which the rotation of Gal2 is retrograde. \subsection{A Series of 8 Gas-Rich Mergers} \begin{table*} \centering \begin{minipage}{220mm} \caption{Initial Conditions for All Simulations\label{table-initial}} \begin{tabular}{@{}lccccccccc@{}} \hline Run & $M_{\rm Gal1} (M_\odot)$ & mass ratio & $f_{\rm gas}$ & $\theta$[degree] & $V_{2z}\;{\rm (km\,s^{-1})}$ & ${\bf R}_2\;{\rm(kpc)}$ & $e$ & $\omega$[degree] & $\iota$[degree] \\ \hline M12 & $5.0\ee{11}$ & 2:1 & 0.94 & 30.0 & 0 & 0, 0, 15 & 0.79 & 90 & $-13$ \\ M12orb & $5.0\ee{11}$ & 2:1 & 0.67 & 30.0 & 0 & 0, 50, 15 & 0.86 & 51 & $-13$ \\ M12z & $5.0\ee{11}$ & 2:1 & 0.83 & 30.0 & 100 & 0, 0, 15 & 0.66 & 29 & $-20$ \\ M1290 & $5.0\ee{11}$ & 2:1 & 0.96 & 90.0 & 0 & 0, 0, 15 & 0.91 & 90 & $73$\\ rM12 & $5.0\ee{11}$ & 2:1 & 0.91 & 210.0 & 0 & 0, 0, 15 & 0.79 & 90 & $167$\\ M11 & $2.5\ee{11}$ & 1:1 & 0.88 & 30.0 & 0 & 0, 0, 15 & 0.89 & 90 & $-13$\\ M13 & $5.0\ee{11}$ & 3:1 & 0.92 & 30.0 & 0 & 0, 0, 15 & 0.70 & 90 & $-13$\\ M110 & $5.0\ee{11}$ & 10:1 & 0.85 & 30.0 & 0 & 0, 0, 15 & 0.37 & 90 & $-14$\\ \hline \end{tabular} \end{minipage} \end{table*} We have performed a total of 8 simulations with different initial conditions, in which each galaxy is modeled using 100,000 dark matter halo particles and 40,000 gas particles, for a total of 280,000 particles. This resolution is a factor of 2 lower than that used in \cite{robertsonetal06}. Since the primary goal of this work is to investigate chemical enrichment following a major merger, we had to make some compromises regarding the numerical resolution.
The primary concern was the relatively high computational cost of distributing enriched gas to the neighbouring particles. The baryon/DM mass fraction is 17\%, which is equal to the universal ratio $\Omega_b/\Omega_0$ according to recent estimates \citep{wmap3,komatsuetal09}. Every simulation starts with an initial metallicity $\log(Z/Z_{\odot})=-4$ and $\alpha$-element abundance $\rm[\alpha/Fe]=0.30$. Table~\ref{table-initial} lists the parameters used for generating the initial conditions for each simulation. The five simulations within the M12 ``family'' involve a major merger of two galaxies with a mass ratio of 2:1. We start with M12, which has already been discussed in detail in B07. The galaxies collide after 320~Myr, and by that time 6\% of the gas has already been converted to stars, leaving a gas fraction $f_{\rm gas}=0.94$. The next 4 simulations within that family are variations of the ``parent'' simulation M12. In simulation M12orb, we increase the initial separation between the galaxies, and in simulation M12z, we add an additional velocity component in the $z$-direction. In both cases, the merger is delayed, and this results in lower gas fractions at the time of the merger. In simulation M1290, we changed the interaction angle to 90 degrees. Simulation rM12 is similar to simulation M12, except that the direction of rotation of Gal2 is reversed, making it retrograde relative to the orbit. With the last 3 simulations, we consider different mass ratios. In simulation M11, we used equal-mass galaxies. Also, this simulation is the only one for which we use a smaller mass for Gal1: $2.5\times10^{11}M_\odot$ instead of $5\times10^{11}M_\odot$. In simulation M13, we use a mass ratio of 3:1. This is still considered to be a major merger. In simulation M110, we use a mass ratio of 10:1, which corresponds to a minor merger. Column 8 of Table~\ref{table-initial} gives the initial eccentricities of the orbits.
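As a rough cross-check of these eccentricities, the two-body value can be computed from the initial conditions via the eccentricity vector. This is our own sketch, not the procedure used for the table; treating the galaxies as point masses ignores their extended mass distributions, so the result only approximates the tabulated value:

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def orbital_eccentricity(m1, m2, r1, r2, v1, v2):
    """Point-mass two-body eccentricity from the eccentricity vector
    e = ((v^2 - mu/r) r_vec - (r_vec . v_vec) v_vec) / mu."""
    mu = G * (m1 + m2)
    r_vec = np.asarray(r2, float) - np.asarray(r1, float)
    v_vec = np.asarray(v2, float) - np.asarray(v1, float)
    r = np.linalg.norm(r_vec)
    e_vec = ((v_vec @ v_vec - mu / r) * r_vec - (r_vec @ v_vec) * v_vec) / mu
    return np.linalg.norm(e_vec)

# Simulation M12: Gal1 (5e11 Msun) at rest at (50,0,0) kpc,
# Gal2 (2.5e11 Msun) at (0,0,15) kpc moving at (0,100,0) km/s.
e = orbital_eccentricity(5e11, 2.5e11,
                         (50, 0, 0), (0, 0, 15),
                         (0, 0, 0), (0, 100, 0))
print(round(e, 2))   # -> 0.84 for point masses (the table lists 0.79)
```

The point-mass estimate is close to, but not identical with, the tabulated 0.79, as expected for extended halos.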
Except for simulation M110 (the minor merger), the eccentricities are quite high, corresponding to elongated elliptical orbits. As a result, the pericenters are smaller than the actual size of the galaxies, and the collision takes place before a full orbit is completed. Column~9 gives the angle between the position of the pericenter and the line of nodes (intersection between the orbital plane and the plane of Gal1; see p.~632 of \citealt{tt72}). Column~10 gives the inclination $\iota$ of Gal2 relative to the plane of the orbit. For the simulation M11, the softening length of the algorithm is $0.65\,\rm kpc$ for the dark matter and $0.48\,\rm kpc$ for the baryons. For the other simulations, the softening length is $0.82\,\rm kpc$ for the dark matter and $0.61\,\rm kpc$ for the baryons. \begin{table} \centering \caption{Lookback Times and Final Gas Fraction\label{table-times}} \begin{tabular}{@{}lccc@{}} \hline Run & $t_f$ [Gyrs] & $t_{\rm coll}$ [Gyrs] & $f_{\rm gas}^{\rm final}$\\ \hline M12 & 2.50 & 0.27 & 0.14 \\ M12orb & 3.40 & 0.83 & 0.18 \\ M12z & 2.50 & 0.50 & 0.20 \\ M1290 & 2.50 & 0.25 & 0.10 \\ rM12 & 2.47 & 0.32 & 0.18 \\ M11 & 2.50 & 0.40 & 0.06 \\ M13 & 2.50 & 0.32 & 0.18 \\ M110 & 2.50 & 0.42 & 0.30 \\ \hline \end{tabular} \end{table} Table~\ref{table-times} lists the time $t_f$ elapsed between the beginning and the end of the simulation, the time $t_{\rm coll}$ elapsed between the beginning of the simulation and the collision, and the final gas fraction $f_{\rm gas}^{\rm final}$. In each simulation, the collision leaves a very complex and chaotic system, which then relaxes to a ``quiescent state'' where the final structure and kinematics are well-established, and the star formation rate has dropped significantly. We stopped the simulations after $2.5\,\rm Gyrs$, when the quiescent state had been reached. One simulation (M12orb) was extended to $t_f=3.40\,\rm Gyrs$, which enabled us to check that the final state was indeed quiescent. 
Comparing the values of $f_{\rm gas}^{\rm final}$ with the values of $f_{\rm gas}$ listed in Table~\ref{table-initial} reveals the efficiency of the starburst in converting gas to stars. The gas fraction typically dropped from $\sim90\%$ to $\sim15\%$. It is interesting to note that simulation M110, the only case of a minor merger, was less efficient in converting gas to stars than the other simulations, with $f_{\rm gas}^{\rm final}=0.297$. \section{RESULTS} Following the approach of \citet{robertsonetal06}, we have computed the SFR for all the simulations, and used these results to identify stellar populations. Some examples are shown in the top left panels of Figs.~\ref{M12_kin}--\ref{M13_kin}. Notice that the simulations start at $t=0$. The results are quite similar for all simulations. A major starburst invariably occurs during the merger. This starburst peaks between $\sim60M_\odot\,{\rm yr}^{-1}$ and $500M_\odot\,{\rm yr}^{-1}$, comparable to that seen in high-redshift Lyman-break galaxies \citep{erbetal06}. The results are qualitatively similar to recent work involving a similar gas fraction but different feedback assumptions \citep{jnb09}, or a different gas fraction and different feedback \citep{coxetal08}. The main quantitative difference comes from the amount of gas available during the merger. As stated in \citet[][see their Figs.~1 and 3]{jnb09}, there is a huge difference in the maximum value of the SFR depending on the gas mass fraction. The SFR also depends on the numerical methods used for feedback in the case of major mergers \citep{coxetal06}. After the starburst, the star formation steadily drops, as the system relaxes and a disc forms. We identify the beginning and the end of the merger with the beginning and the end of the starburst, respectively.
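A simple way to automate this identification is to threshold the SFR history at a fraction of its peak. The following Python sketch applies that criterion to a toy SFR history; the paper's boundaries were set by eye, so the threshold fraction here is our own illustrative choice:

```python
import numpy as np

def starburst_interval(t, sfr, frac=0.2):
    """Bracket the starburst as the contiguous interval where the SFR
    exceeds a fraction `frac` of its peak value (a stand-in for the
    by-eye identification used in the text)."""
    above = np.flatnonzero(sfr >= frac * sfr.max())
    return t[above[0]], t[above[-1]]

# Toy SFR history (Gyr, Msun/yr): quiescent, burst near t=0.3, decline.
t = np.linspace(0.0, 2.5, 251)
sfr = 5 + 300 * np.exp(-0.5 * ((t - 0.3) / 0.05) ** 2)
t0, t1 = starburst_interval(t, sfr)
print(round(t0, 2), round(t1, 2))   # -> 0.21 0.39
```

For this toy burst, the bracketed interval lasts about 0.2 Gyr, similar to the starburst durations quoted below for most runs.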
We then define two populations of stars: {\it old stars}, which include stars already present in the galaxies before the merger and stars formed during the merger by the starburst, and {\it young stars}, which include all stars formed after the merger, once the starburst is completed. These definitions of young and old stars remain the same throughout the paper. The dashed vertical lines in the top-left panels of Figs.~\ref{M12_kin}--\ref{M13_kin} indicate the beginning and end of the starburst, identified by eye, since the precise merger boundaries are not crucial for the subsequent analysis. The time when the starburst begins is listed in Table~\ref{table-times}. For most simulations, the starburst lasts 0.2--0.3~Gyr. The simulation M12z (Fig.~\ref{M12z_kin}) is a notable exception, with a starburst that is weaker and lasts for 0.5~Gyr. We use the term {\it remnant} to designate the final state of each simulation. Our simulations are meant to represent the interactions occurring at high redshift, when the gas content is very high. In this sense, the merger remnants represent objects at $z\sim2-3$, and the galaxies still have time to form more stars until $z=0$. The subsequent infall of extragalactic gas may also produce more stars by $z=0$ and help to reform a disc \citep{brooksetal09}. \subsection{Structure and Density Profiles} \begin{figure*} \begin{center} \includegraphics[width=5.5in]{figure02.eps} \caption{V-band image of M12 seen face-on (top panels) and edge-on (bottom panels), for young stars (left panels), old stars (middle panels), and all stars (right panels). $X$, $Y$, $Z$ represent Cartesian coordinates, with $Z$ along the axis of rotation. } \label{M12} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=5.5in]{figure03.eps} \caption{V-band image of M12z seen face-on (top panels) and edge-on (bottom panels), for young stars (left panels), old stars (middle panels), and all stars (right panels).
The gas shows a complex structure that lasts for several hundred million years. } \label{M12z} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=5.5in]{figure04.eps} \caption{V-band image of M1290 seen face-on (top panels) and edge-on (bottom panels), for young stars (left panels), old stars (middle panels), and all stars (right panels). For this simulation, the stellar ring and the disc are not coplanar. } \label{M1290} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=5.5in]{figure05.eps} \caption{Radial luminosity profile in the V-band, for old stars (squares) and young stars (circles). The straight lines show exponential fits for the old stars (dashed) and young stars (dotted). The dotted curves for simulations M12z and M11 are fits to a de~Vaucouleurs profile for the young stars. } \label{allprof} \end{center} \end{figure*} At the end of each simulation, we calculated the V-band luminosity of the merger remnant, using the stellar synthesis model of \citet{ka97}. From this, we produced mock V-band luminosity maps (modulo the lack of dust extinction). Figs.~\ref{M12}--\ref{M1290} show the luminosity maps for 3 remnants. On each figure, the left panels show the stars born after the merger (young stars), the middle panels show the stars born before or during the merger (old stars), and the right panels show all stars. We immediately see that old and young stars have very different distributions. In simulation M12 (Fig.~\ref{M12}), young and old stars form discs that have comparable radii, but the edge-on views (bottom panels) clearly show that the young disc is thin, while the old disc is thick. The presence of two distinct discs, a thin one and a thick one, is in remarkable agreement with observations \citep{yd06}, and this motivated us to publish the results of that particular simulation in an earlier paper (B07).
We found that all the simulations except M12z and M11 result in the formation of a thin disc made of young stars, and a thick disc made of old stars that were either already present before the merger or formed during the merger. The old stars end up either in the thick disc or in the halo. Simulations M12z and M11 produce more elliptical-like remnants. Simulation M12z (Fig.~\ref{M12z}) produced a remnant that has a very complex structure. Even though we find a small disc made of young stars, the overall structure resembles an elliptical galaxy more than a disc galaxy. Simulation M11 also shows a small young disc. Compared with the other simulations, the initial galaxies Gal1 and Gal2 are less massive in simulation M11, but the intensity of the starburst taking place during the merger is comparable. As a result, there is less gas available after the merger to form young stars and build up an extended thin disc. In most simulations, the merger resulted in the formation of a ring made of young stars. This implies that prograde gas-rich major mergers have a tendency to form rings. These rings seem to be signatures of a gas substructure, since most of the rings are coplanar with the thin disc. The large impact parameter used here is quite different from the usual scenarios that form ring remnants \citep{hernquistandweil93,mapelli08}. Rings may thus be a signature of gas-rich mergers, though more work is needed to confirm this assertion. Simulation M12z formed a very complex structure, with a non-coplanar ring connected to a very small thin disc by spiral arms, embedded in an ellipsoidal distribution of old stars (Fig.~\ref{M12z}). Also, the face-on views of the mergers for simulations M1290 and rM12 (see the top panels of Fig.~\ref{M1290} for an example) clearly reveal the presence of a central bar made of both old and young stars. No bars are seen in the other simulations. We can use the surface brightness to trace the radial luminosity profiles of the discs.
The radial luminosity profiles are shown in Fig.~\ref{allprof}, for all simulations. We calculated these profiles separately for young stars and old stars. The dotted and dashed lines are fits to the profiles of young and old stars, respectively. In cases where a ring was clearly visible, we excluded it from the fit. All profiles are well-fitted by an exponential profile, except for simulations M12z and M11, where the young stars are better fitted by a de~Vaucouleurs profile. We can use the exponential profiles to calculate the scale-lengths of the discs. In Table~\ref{tablescales}, we list the scale-length $h_o$ of the old population, the scale-length $h_y$ of the young population, and their ratio $h_o/h_y$, except for the simulations M12z and M11, for which an exponential profile does not fit the distribution of young stars. The scale-length is usually larger for the old population than for the young one, with ratios varying from 0.98 to 1.60. This is in agreement with the recent observational work of \citet{yd06} if we make a direct correspondence between old stars and thick-disc stars. {\it By definition\/}, by the time the merger is completed, old stars have all formed, while the matter destined to form young stars is still in the form of gas. Hydrodynamical processes, such as oblique shock waves, can potentially exert a torque on the gas, modifying its angular momentum. Such processes would not affect the old population directly, only indirectly through the gravitational interaction between the old stars and the remaining gas. This might result in a misalignment between the young and old discs. To investigate this issue, we performed a least-squares fit of a plane to the final distribution of young stars, and to the final distribution of old stars. The second column of Table~\ref{angles} shows the angle $\theta_{oy}$ in degrees between the two planes, that is, the angle between the young and old discs.
The angles are $6^\circ$ or less for all the runs, indicating that the discs tend to be coplanar. To estimate whether these angles are significant, we fitted the distribution of stars in the old disc with an oblate ellipsoid. We then calculated a {\it relevance angle} $\theta_o$, defined as the arctangent of the short-to-long axis ratio of the ellipsoid. If we were to fit the old disc inside a flat square box, $\theta_o$ would be the angle between the edge and the corner, as seen from the center, so it is a measure of how thick the old disc is. If $\theta_{oy}<\theta_o$, then the young disc is entirely embedded inside the old disc, in spite of the angle between them. But if $\theta_{oy}>\theta_o$, then the young disc ``sticks out'' of the old disc. The third column of Table~\ref{angles} shows the relevance angles. Interestingly, the simulation rM12, the only one for which one galaxy is spinning in the retrograde direction, produces an old disc that is over an order of magnitude thinner than the ones produced by the other simulations. It is well known that a retrograde disc will suffer very little tidal disruption prior to merger compared to a prograde disc \citep{tt72}. This could explain why the old disc in simulation rM12 ends up being quite thin, though a more detailed investigation is needed to confirm this hypothesis. The last column of Table~\ref{angles} shows the ratio $\theta_{oy}/\theta_o$. We find that all cases except rM12 have $\theta_{oy}/\theta_o<1$. Hence, young discs tend to be embedded inside old discs.
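Both angles can be estimated from particle positions alone. The following numpy sketch is our own implementation, not the paper's code: the disc normal is taken as the smallest-eigenvalue eigenvector of the position covariance, and the relevance angle as the arctangent of the short-to-long axis ratio:

```python
import numpy as np

def disc_plane_normal(pos):
    """Least-squares disc plane through the origin: the normal is the
    eigenvector of pos^T pos with the smallest eigenvalue."""
    vals, vecs = np.linalg.eigh(pos.T @ pos)   # ascending eigenvalues
    return vecs[:, 0]

def angle_between_discs(pos_a, pos_b):
    """Angle theta_oy (degrees) between the two best-fit disc planes."""
    na, nb = disc_plane_normal(pos_a), disc_plane_normal(pos_b)
    return np.degrees(np.arccos(np.clip(abs(na @ nb), -1.0, 1.0)))

def relevance_angle(pos):
    """theta_o (degrees): arctangent of the short-to-long axis ratio of
    the best-fit ellipsoid, from the covariance eigenvalues."""
    vals = np.linalg.eigvalsh(pos.T @ pos)
    return np.degrees(np.arctan(np.sqrt(vals[0] / vals[-1])))

# Toy example: a thin young disc embedded in a thicker old disc.
rng = np.random.default_rng(1)
thin = rng.normal(scale=(5.0, 5.0, 0.3), size=(4000, 3))   # young disc
thick = rng.normal(scale=(6.0, 6.0, 2.0), size=(4000, 3))  # old disc
print(angle_between_discs(thin, thick) < relevance_angle(thick))  # -> True
```

For these toy discs $\theta_{oy}/\theta_o<1$, i.e. the thin disc is embedded inside the thick one, as found for all runs except rM12.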
\begin{table} \caption{Scale-length ratio for the young and old populations} \begin{tabular}{@{}lccc@{}} \hline Run & ${h_o}$[kpc] & ${h_y}$[kpc] & $h_o/h_y$\\ \hline M12 & 5.56 & 5.27 & 1.06 \\ M1290 & 6.22 & 6.05 & 1.03 \\ M12orb & 8.97 & 5.59 & 1.60 \\ M12z & 6.55 & $\ldots$ & $\ldots$ \\ rM12 & 7.54 & 5.21 & 1.45 \\ M11 & 5.94 & $\ldots$ & $\ldots$ \\ M13 & 5.37 & 4.47 & 1.20 \\ M110 & 7.71 & 7.87 & 0.98 \\ \hline \label{tablescales} \end{tabular} \end{table} \begin{table} \caption{Angle between the plane of the disc for by the young and old populations} \begin{tabular}{@{}lrrr@{}} \hline Run & $\theta_{oy}$[degrees] & $\theta_o$[degree] & $\theta_{oy}/\theta_o$\\ \hline M12 & 3.28 & 17.62 & 0.19 \\ M1290 & 0.70 & 11.19 & 0.06 \\ M12orb & 3.26 & 20.64 & 0.16 \\ M12z & 6.00 & 41.17 & 0.78 \\ rM12 & 2.21 & 1.27 & 1.74 \\ M11 & 2.31 & 26.68 & 0.08 \\ M13 & 5.63 & 17.82 & 0.32 \\ M110 & 0.25 & 13.80 & 0.02 \\ \hline \end{tabular} \label{angles} \end{table} \subsection{Kinematics} \begin{figure} \begin{center} \includegraphics[width=3.3in]{figure06.eps} \caption{Star formation rate and kinematics for simulation M12. On each panel, the red curves represent old stars, the blue curves and dots represent young stars, and the black curves represents all stars. The vertical dashed lines indicate the beginning and the end of the merger. Top left: star formation rate vs. formation epoch. $t=0$ corresponds the the beginning of the simulation. Top right: rotation velocity vs. radius for stars located near the plane of the disc ($|Z|\leq1\,{\rm kpc}$). Middle left: Rotational support versus radius. The dashed line separates rotationally-supported stars (above) from stars supported by velocity dispersion (below). Middle right: velocity dispersion vs formation epoch. Bottom left: histogram of stellar mass vs. rotation velocity, withe negative values indicating counter-rotating stars. Bottom right: Toomre diagram. 
The black, green, and blue dots indicate stars formed before, during, and after the merger. } \label{M12_kin} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=3.3in]{figure07.eps} \caption{Same as Fig.~\ref{M12_kin}, for simulation M12z.} \label{M12z_kin} \end{center} \end{figure} We have analyzed the kinematic properties of all 8 remnants. We computed the rotation curves of the old and young populations, for all the remnants. The top right panels of Figs.~\ref{M12_kin}--\ref{M13_kin} show some examples of these curves. For simulations M12, M12orb, M1290, rM12, M13, and M110, the old populations have a lower mean rotational velocity than their younger counterparts. The difference is about $200-300\,{\rm km\,s^{-1}}$ for M12, M12orb, and M1290 (mass ratio 2:1), $150\,{\rm km\,s^{-1}}$ for M13 (mass ratio 3:1), and $50\,{\rm km\,s^{-1}}$ for M110 (mass ratio 10:1). Hence, the difference decreases with increasing mass ratio of the progenitors. Comparing M12 with rM12, we also find that reversing the rotation of Gal2 (so that it is retrograde relative to the orbit of the galaxies) also reduces the difference between the two populations (about $300\,{\rm km\,s^{-1}}$ for M12 versus $150\,{\rm km\,s^{-1}}$ for rM12). The rotation curves of M12z and M11 are drastically different: the young population of the outer ring is counter-rotating with respect to the young population of the inner disc. We also found some counter-rotating stars in other simulations, but (i) they belong to the old population, and (ii) these stars were in the minority, so the overall rotation curve showed no counter-rotation.
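The rotation curves and the rotational-support statistic used below can be sketched as follows (Python; the function names are ours), taking stars near the disc plane and binning the azimuthal velocity in cylindrical radius:

```python
import numpy as np

def rotation_profile(pos, vel, nbins=20, zmax=1.0):
    """Mean rotation velocity V and dispersion sigma_V in radial bins,
    for stars near the disc plane (|z| <= zmax, in kpc)."""
    near = np.abs(pos[:, 2]) <= zmax
    x, y = pos[near, 0], pos[near, 1]
    R = np.hypot(x, y)
    vphi = (x * vel[near, 1] - y * vel[near, 0]) / R   # azimuthal velocity
    edges = np.linspace(0.0, R.max(), nbins + 1)
    idx = np.clip(np.digitize(R, edges) - 1, 0, nbins - 1)
    bins = [vphi[idx == i] for i in range(nbins)]
    return [(edges[i], b.mean(), b.std())
            for i, b in enumerate(bins) if len(b) > 10]

def rotational_support(profile):
    """sqrt(<(V/sigma_V)^2>), averaged over the radial bins."""
    return np.sqrt(np.mean([(V / s) ** 2 for _, V, s in profile]))

# Toy cold disc: 200 km/s rotation with 20 km/s random motions.
rng = np.random.default_rng(0)
R, phi = rng.uniform(1, 10, 20000), rng.uniform(0, 2 * np.pi, 20000)
pos = np.c_[R * np.cos(phi), R * np.sin(phi), rng.normal(0, 0.3, 20000)]
vel = np.c_[-200 * np.sin(phi), 200 * np.cos(phi), np.zeros(20000)]
vel[:, :2] += rng.normal(0, 20, (20000, 2))
profile = rotation_profile(pos, vel)
print(rotational_support(profile) > 1)   # -> True: rotationally supported
```

A value well above unity, as here, marks a rotationally supported (cold) disc; values below unity mark support by velocity dispersion.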
\begin{figure} \begin{center} \includegraphics[width=3.3in]{figure08.eps} \caption{Same as Figs.~\ref{M12_kin}--\ref{M12z_kin}, for simulation rM12.} \label{rM12_kin} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=3.3in]{figure09.eps} \caption{Same as Figs.~\ref{M12_kin}--\ref{rM12_kin}, for simulation M13.} \label{M13_kin} \end{center} \end{figure} To estimate the relative importance of rotation and velocity dispersion in providing support against gravity, we calculated for each population the average rotational velocity $V$ and its dispersion $\sigma_V^{\phantom2}$ in radial bins. The middle left panels of Figs.~\ref{M12_kin}--\ref{M13_kin} show the quantity $\langle(V/\sigma^{\phantom2}_V)^2\rangle^{1/2}$ versus radius, for some of the remnants. In all simulations, we find $\langle(V/\sigma^{\phantom2}_V)^2\rangle^{1/2}>1$ for the young population, indicating that this population forms a massive, rotationally supported disc. The only exception is simulation M12z, where $\langle(V/\sigma^{\phantom2}_V)^2\rangle^{1/2}<1$ between radii 3 and $6\,\rm kpc$. As the top panel of Fig.~\ref{M12z_kin} shows, this region corresponds to the discontinuity between the inner disc and the counter-rotating outer ring. For the old population, simulations M12, M12orb, M12z, and M11 all have $\langle(V/\sigma^{\phantom2}_V)^2\rangle^{1/2}<1$, indicating the presence of a ``hot disc'' supported by internal motions rather than rotation. Simulation M13 has $\langle(V/\sigma^{\phantom2}_V)^2\rangle^{1/2}<1$ at radii $r<3\,{\rm kpc}$ and $\langle(V/\sigma^{\phantom2}_V)^2\rangle^{1/2}>1$ at larger radii. Simulations M1290, rM12, and M110 all have $\langle(V/\sigma^{\phantom2}_V)^2\rangle^{1/2}>1$, indicating that the old population is also rotationally supported, but in all three cases the rotational support of the old population is significantly smaller than that of the young one, typically by a factor of order 5.
In the case of simulation rM12, this result is consistent with the fact that retrograde orbits are less destructive than prograde orbits. The middle right panels of Figs.~\ref{M12_kin}--\ref{M13_kin} show the velocity dispersion versus formation epoch, with the dashed vertical lines indicating the starburst. In 5 of the 8 simulations, the velocity dispersion is larger for stars formed prior to the collision, and smaller for stars born after the collision. This is consistent with the statement that old stars have a smaller rotational support than young stars. The exceptions are the simulations with high mass ratios, M13 and M110, for which $\sigma^{\phantom2}_V$ peaks during the starburst, and M12z, for which $\sigma^{\phantom2}_V$ peaks after the starburst (Fig.~\ref{M12z_kin}). The bottom left panels of Figs.~\ref{M12_kin}--\ref{M13_kin} show histograms of the stellar mass versus rotation velocity. For simulations M12, M12orb, M1290, rM12, and M13, the young population is concentrated in a narrow region of the histogram, while the old population is spread in velocity, from the positive to the negative side (though it is concentrated mostly on the positive side for rM12). The young component is counter-rotating for simulation M11. For simulation M12z, parts of the young component are counter-rotating, and the total distribution is centered near zero. Simulation M110 (the minor merger) differs from all others in that the old population is concentrated at positive velocities. Notice that this is the only case for which the old population is rotationally supported. One way to discriminate between kinematically different stellar populations is the use of the Toomre diagram \citep{sf87}. By comparing $V$ (the rotational velocity) with $(U^2+W^2)^{1/2}$ (where $U$ and $W$ are the radial and perpendicular velocities, respectively), we get a clearer picture of the kinematic distribution of the young and old populations.
A thin disc will be mostly formed of stars with low $(U^2+W^2)^{1/2}$ and a large $V$ component, since all its stars will be on nearly coplanar orbits. The bottom right panels of Figs.~\ref{M12_kin}--\ref{M13_kin} show the Toomre diagram for each simulation. For M12, the young stars are concentrated in a small region of the diagram, centered around $V=400\,{\rm km\,s^{-1}}$ and $(U^2+W^2)^{1/2}=0\,{\rm km\,s^{-1}}$, which is characteristic of a thin disc, since these stars are mainly on fast-rotating planar orbits. The old stars are distributed throughout the diagram, indicating that these stars follow orbits with significant radial and orthogonal velocities, characteristic of systems supported by velocity dispersion. These stars are located mostly in the thick disc and halo. The Toomre diagrams for the other simulations show similarities, and also interesting differences, with the diagram for simulation M12. Some simulations present a discontinuity between the positive and negative regions of the diagram. For example, simulation rM12 produces two different populations of old stars, one forming a thick disc and another one forming a counter-rotating thick disc/halo. Simulations M12z and M11 have Toomre diagrams that differ from those of the other simulations. For M12z (Fig.~\ref{M12z_kin}), there is no clear disc formed in the remnant, and the young stars are dispersed throughout the diagram. As we saw in the previous section, young stars in this simulation are mostly located in a massive ring around the galaxy. For simulation M11, the young stars in the outer ring are counter-rotating while all stars in the inner disc are co-rotating. For simulation M110, the young population has a significant $(U^2+W^2)^{1/2}$ component ($\sim150\,{\rm km\,s^{-1}}$), almost as large as the $V$ component.
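The Toomre-diagram coordinates are simple to compute. The following Python sketch uses mock velocity distributions (the population parameters are illustrative, loosely inspired by the M12 values quoted above, and the function name is ours):

```python
import numpy as np

def toomre_coordinates(vel_cyl):
    """Toomre diagram coordinates: V (rotational velocity) against
    (U^2 + W^2)^(1/2), where U and W are the radial and perpendicular
    velocity components. Input columns are (U, V, W)."""
    U, V, W = vel_cyl.T
    return V, np.hypot(U, W)

# Mock populations: a thin-disc-like young population (fast, cold) and
# a dispersion-supported old population (slow, hot).
rng = np.random.default_rng(2)
young = np.c_[rng.normal(0, 20, 5000),
              rng.normal(400, 20, 5000),
              rng.normal(0, 15, 5000)]
old = np.c_[rng.normal(0, 120, 5000),
            rng.normal(100, 150, 5000),
            rng.normal(0, 100, 5000)]
Vy, Py = toomre_coordinates(young)
Vo, Po = toomre_coordinates(old)
print(Py.mean() < Po.mean())   # -> True: young stars hug the V axis
```

The young population clusters near large $V$ and small $(U^2+W^2)^{1/2}$, the thin-disc signature described above, while the old population spreads across the diagram.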
We have also performed an analysis of the circularity of the stellar orbits, defined as $e_{\rm j}=j_{\rm z}/j_{\rm circ}$, where $j_{\rm z}$ is the $z$-component of the specific angular momentum of each star, and $j_{\rm circ}$ is the angular momentum expected for a circular orbit, following the method of \cite{pat09}, which is an approximation of a method used by \cite{abadietal03}. These results are shown in Fig.~\ref{e_j}. Simulations M12, M13, M110, M1290, rM12, and M12orb all present a young stellar population with circular orbits ($e_j\sim1.0$) and an old population with a large dispersion in circularity. One exception is M110, which presents near-circular orbits for both the young and old populations, as expected in a minor merger. Simulations M11 and M12z show non-circular orbits, in good agreement with the other kinematical results. The small thin disc of M11 is also well represented at $e_j\sim-1.0$. \begin{figure} \begin{center} \includegraphics[width=3.3in]{figure10.ps} \caption{Circularity of the orbits for all simulations. $e_j$ represents the angular momentum of a star particle normalized to that of a particle on a circular orbit at the same radius. On each panel, the red curves represent old stars, the blue curves represent young stars, and the black curves represent all stars. } \label{e_j} \end{center} \end{figure} The main conclusion we can draw from the analysis and comparison of the kinematical properties of the remnants is that these kinematical properties, like the structural ones, are strongly dependent on the initial conditions, supporting previous results by \citet{robertsonetal06}. \subsection{Chemical Abundances} As mentioned in the Introduction, one of the most important open questions concerning gas-rich major mergers is their chemical signature. Here we present some of our first results. The chemical properties of all remnants are presented in Figs.~\ref{M12_chem}--\ref{M110_chem}.
The top panels show the radial profiles of the $\alpha$-element abundance $\rm[\alpha/Fe]$ (calculated by averaging the abundances of O, Mg, and Si) and metallicity $\rm[Fe/H]$, calculated by averaging over radial bins. The old population has a higher $\rm[\alpha/Fe]$ ratio and lower metallicity $\rm[Fe/H]$ than the young population, at all radii. The gradients of $\rm[\alpha/Fe]$ in the old population are all very flat. The distributions of $\rm[\alpha/Fe]$ in the young population are much more complex. In particular, simulations M12, M12z, M1290, and M13 show several local minima and maxima at various radii, with variations as large as 0.5 dex. The gradients of $\rm[Fe/H]$ are much smoother. In all simulations except M11, we find a gradient in $\rm[Fe/H]$, ranging from $-0.01$ to $-0.1$~dex per kpc. The third row of panels in Figs.~\ref{M12_chem}--\ref{M110_chem} shows the $\rm[\alpha/Fe]$ and $\rm[Fe/H]$ ratios versus formation epoch. The $\rm[\alpha/Fe]$ ratio remains fairly constant over time during the merger, and starts decreasing after the merger. The $\rm[Fe/H]$ ratio shows a completely opposite behavior: it increases strongly before and during the merger, and keeps increasing, but more slowly, after the merger. This would suggest that the galaxy grows outside-in, with the older, metal-poor stars being located in the outer regions of the disc. To investigate this question, we plot in Fig.~\ref{ageradM12} the formation epoch of old and young stars versus their final location in the disc, for all simulations. The dashed lines indicate the beginning and the end of the merger. For old stars, the formation epoch either decreases with increasing radius (as in M12) or remains constant (as in M1290). For instance, in the case of M12, stars that formed before the merger are dominant at radii $R>3$~kpc, while stars that formed during the merger and the associated starburst are more centrally concentrated.
The young stars show the opposite trend: the formation epoch increases with radius up to $R=7.5\,{\rm kpc}$ in most simulations. Star formation after the merger therefore proceeded inside-out. It is the efficiency of chemical enrichment, and not the epoch of star formation, that explains the gradient in $\rm[Fe/H]$. Chemical enrichment is more efficient in the center of the remnant, where the stellar density is higher. \begin{figure} \begin{center} \includegraphics[width=3.3in]{figure11.eps} \caption{Chemical abundances for simulation M12. On the top four panels and bottom left panel, solid curves represent old stars, and dot-dashed curves represent young stars. The first three rows show the $\alpha$-element abundance (left) and metallicity (right) versus radius (first row), height (second row), and formation epoch (third row). Bottom left panel: $\alpha$-element abundance versus metallicity. Bottom right panel: stellar mass versus metallicity, for stars born before or during the merger (medium line), after the merger (thin line), and for all stars (thick line).
} \label{M12_chem} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=3.3in]{figure12.eps} \caption{Same as Fig.~\ref{M12_chem}, for simulation M12orb.} \label{M12orb_chem} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=3.3in]{figure13.eps} \caption{Same as Figs.~\ref{M12_chem}--\ref{M12orb_chem}, for simulation M12z.} \label{M12z_chem} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=3.3in]{figure14.eps} \caption{Same as Figs.~\ref{M12_chem}--\ref{M12z_chem}, for simulation M1290.} \label{M1290_chem} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=3.3in]{figure15.eps} \caption{Same as Figs.~\ref{M12_chem}--\ref{M1290_chem}, for simulation rM12.} \label{rM12_chem} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=3.3in]{figure16.eps} \caption{Same as Figs.~\ref{M12_chem}--\ref{rM12_chem}, for simulation M11. We could not calculate vertical profiles for the young stars, because the thin disc is too thin. Instead we indicate the mean values of $\rm[\alpha/Fe]$ and $\rm[Fe/H]$ with a single symbol ($\times$) in the second-row panels.} \label{M11_chem} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=3.3in]{figure17.eps} \caption{Same as Figs.~\ref{M12_chem}--\ref{M11_chem}, for simulation M13.} \label{M13_chem} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=3.3in]{figure18.eps} \caption{Same as Figs.~\ref{M12_chem}--\ref{M13_chem}, for simulation M110.} \label{M110_chem} \end{center} \end{figure} The second row of panels in Figs.~\ref{M12_chem}--\ref{M110_chem} shows $\rm[\alpha/Fe]$ and $\rm[Fe/H]$ versus height above the plane. The different extents of the two curves on each panel reflect the different thicknesses of the two discs. Notice that the thin disc is {\it very thin\/} in simulation M11, less than 800~pc.
There is essentially no vertical gradient in $\rm[\alpha/Fe]$ for the old population. For the young population, we find a gradient in $\rm[\alpha/Fe]$ for M12 (0.1 dex per kpc), and possibly another one for M12orb, while there is no evidence for gradients in the other cases. The $\rm[Fe/H]$ ratios are either constant, as in simulations M12z and M110, or drop by a factor of order 2 or 3, as in simulations M12 and M1290, for both the old and the young populations. Since the stellar density, and the resulting chemical enrichment, is a much stronger function of radius than of height, we naturally expect the vertical gradient of the metallicity to be much weaker than the radial one \citep{brooketal05}. The most important plots in Figs.~\ref{M12_chem}--\ref{M110_chem} are the $\rm[\alpha/Fe]$ versus $\rm[Fe/H]$ plots (bottom-left panels). Old stars have a larger $\alpha$-element abundance than young stars, up to a relatively high metallicity ($\rm[Fe/H]\simeq-0.5$). We can explain this result by considering the relations between $\rm[\alpha/Fe]$, $\rm[Fe/H]$, and the formation epoch of stars (third-row panels in Figs.~\ref{M12_chem}--\ref{M110_chem}). The metallicity $\rm[Fe/H]$ increases with time during the starburst, and levels off after the starburst is completed. The ratio $\rm[\alpha/Fe]$ then decreases with time as the metallicity increases, until it reaches the values found in the thin disc. This decrease of $\rm[\alpha/Fe]$ is slowed near the beginning of the collision, as the starburst leads to a large number of Type~II SNe, which enrich the gas in $\alpha$-elements. After the starburst, Type~Ia SNe become effective, and enrich the gas in iron. This explains why the ratio $\rm[\alpha/Fe]$ gradually decreases after the collision. We complete this study by calculating the stellar mass in metallicity bins, for both populations. The results are shown in the bottom right panels of Figs.~\ref{M12_chem}--\ref{M110_chem}.
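Computing the stellar mass in metallicity bins is a mass-weighted histogram. A short sketch with synthetic catalogs (the means, scatters, and particle masses are hypothetical, chosen only to illustrate the two peaks):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic catalogs: [Fe/H] and mass for old and young star particles
# (illustrative values only, not simulation output).
feh_old = rng.normal(-0.8, 0.2, 5000)
feh_young = rng.normal(0.0, 0.2, 5000)
mass = np.full(5000, 1.0e4)            # equal-mass star particles

edges = np.arange(-2.0, 1.01, 0.2)     # metallicity bins in dex
m_old, _ = np.histogram(feh_old, bins=edges, weights=mass)
m_young, _ = np.histogram(feh_young, bins=edges, weights=mass)

# The two populations peak in different metallicity ranges.
peak_old = 0.5 * (edges[np.argmax(m_old)] + edges[np.argmax(m_old) + 1])
peak_young = 0.5 * (edges[np.argmax(m_young)] + edges[np.argmax(m_young) + 1])
print(peak_old, peak_young)
```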
Old and young stars have different metallicity distributions. The old population peaks in the range $\rm[Fe/H]=[-1.0,-0.6]$, while the young population peaks in the range $\rm[Fe/H]=[-0.4,0.4]$. This implies that old stars are formed, on average, from gas that has not yet been enriched by a large number of nearby Type~Ia SNe. We can gain more insight into this process by plotting Toomre diagrams for different metallicity bins. The results for simulation M12 are shown in Fig.~\ref{toomreM12}. We recall that the concentration of stars in the region $V\simeq400\rm\,km/s$, $(U^2+W^2)^{1/2}\simeq0\rm\,km/s$ represents the thin disc, and the remainder of the diagram represents the thick disc and the halo. Low-metallicity stars ($\rm[Fe/H]<-3$) are located in the halo, high-metallicity stars ($\rm[Fe/H]>0$) are located in the thin disc, and intermediate-metallicity stars are found in both locations. The main result we draw from this analysis of the chemical abundances is that the merger leaves a clear signature: the old stars, located in the thick disc and the halo, have a ratio $\rm[\alpha/Fe]$ that remains constant with increasing $\rm[Fe/H]$ up to high metallicities ($\rm[Fe/H]=-0.5$), while the young stars, located in the thin disc, have a lower ratio $\rm[\alpha/Fe]$ which decreases with increasing metallicity. This result is very robust: we found it in each of the 8 simulations in our study, including the minor merger (simulation M110). Early gas-rich mergers in our simulations occur, by construction, prior to the timescale for significant Type~Ia SNe pollution. Ultimately, this is responsible for the robustness of our results. \section{DISCUSSION} The results presented in this paper support the gas-rich merger scenario as a way to explain the formation of large disc galaxies in the Universe. Our results agree very well with the kinematical results of \citet{robertsonetal06}.
That said, the gas-rich process may be necessary to form a disc, but it is not a sufficient condition. We can see this in our two simulations which formed bulge-dominated galaxies, namely M11 and M12z. Explaining this difference is not simple because of the chaotic nature of the merger process. There is likely a complex relationship between the initial conditions and the behavior of the gas during the merger. The duration of the merger and the violence of the collision seem to favor stronger dynamical heating, and this favors the formation of a small rotationally supported disc, embedded within a large spheroidal component. In the case of simulation M11, the two progenitors have a lower mass than in the other simulations, but the starburst has the same intensity. As a result, there is not enough gas left after the merger to form a substantial thin disc. This agrees with the results of \citet{robertsonetal06}, who found that low-mass progenitors are more likely to form a prominent spheroidal component than high-mass ones. When a disc-like remnant is formed, it invariably consists of two disc components: a thick disc made of stars formed before or during the merger, and a thin disc made of stars born after the merger. The thick disc has a lower rotation velocity than the thin disc, in agreement with the findings of \citet{yd06}. The scale-length ratio between the two components is also in agreement with this work, which suggests that gas-rich disc mergers may be a common way to form disc galaxies. However, we have to be careful since subsequent gas infall from satellites or the intergalactic medium is not included in our simulations. Such gas infall could increase the scale-length of the thin disc and affect our results (see, e.g., \citealt{brooksetal09}). Stars belonging to the thick and thin discs differ by their abundances in various metals. Old stars have an excess in $\alpha$-elements relative to young stars at metallicities $\rm[Fe/H]$ below $-0.5$.
This result is in good agreement with observational studies of the chemical abundances of the two disc components of the Milky Way, if the old and young stars in our simulations are analogous to the thick- and thin-disc stars, respectively. \citet{rlap06} published a spectroscopic survey of 176 stars with a high probability of belonging to the Galactic thick disc, selected from the {\sl Hipparcos} catalog. These authors analyze the relationship between the abundances of 22 chemical elements and the $\rm[Fe/H]$ ratio. It is clear from their Fig.~12 that the $\rm[\alpha/Fe]$ ratio is higher for the thick disc than for the thin disc by almost 0.15 until the metallicity reaches $\rm[Fe/H]=-0.3$. At larger metallicities, the $\rm[\alpha/Fe]$ ratios of the two discs become similar. The authors explain this fact by invoking an extended period of enrichment in iron by Type Ia SNe, an explanation that is supported by our work. Fig.~16 in the same article shows that chemical elements other than the $\alpha$-elements do not follow the same pattern, which is further evidence of the important role of Type II SNe in the formation of the Milky Way, and supports the scenario that a major collision took place in the history of the Milky Way. Another similar study was done by \citet{bensbyetal05}. That study presents abundances for 102 dwarf F and G stars, and uses their kinematics to determine if they belong to the thin or thick disc. Their Fig.~8 shows the abundances of oxygen, magnesium, and silicon as functions of metallicity, which indicates that stars in the thick and thin disc have different abundances for a metallicity $\rm[Fe/H]<-0.5$. Their Fig.~10 is particularly interesting and shows that the abundance of oxygen decreases with increasing metallicity in the interval $\rm-1.0<[Fe/H]<0.0$. This is all in agreement with the results presented here.
These authors suggest the existence of several observational constraints related to the formation and chemical evolution of the thick disc: \begin{itemize} \item Thick-disc stars and thin-disc stars have different chemical abundances. \item For a $\rm[Fe/H]$ smaller than a certain value, thick-disc stars have a larger abundance of $\alpha$-elements than thin-disc stars. \item The ratio $\rm[\alpha/Fe]$ decreases with increasing $\rm[Fe/H]$, which shows the important contribution of Type~Ia SNe. \item Stars in the thick disc are on average older than stars in the thin disc. \end{itemize} Our results mostly agree with these constraints, which supports the gas-rich, major-collision scenario for the formation of the thick disc of the Milky Way, a scenario in which a collision leads to an intense starburst accompanied by a very large number of Type~II SNe. These SNe enrich the gas supply in $\alpha$-elements, allowing the formation of stars with a high $\rm[\alpha/Fe]$ ratio. The starburst also leads to a progressive enrichment in iron caused by Type~Ia SNe. Stars formed after the collision will therefore have a smaller $\rm[\alpha/Fe]$ at the same metallicity. These stars will form a thin disc with a smaller velocity dispersion than the stars formed during the merger. Another interesting result is the drop in $\rm[Fe/H]$ with increasing radius. Finally, we must point out that the initial conditions used in our study are greatly simplified. First, the initial conditions do not include a spheroidal component made of stars. We assume that the merger progenitors are pure discs, for simplicity. We might expect that these stars, if included, would end up in the halo of the final galaxy, which might explain why most simulations produced a low-mass halo. We can argue that our results are valid in the limit where the initial spheroidal components are too small to affect the dynamics of the collision. We have also not included a bulge component.
Although the morphologies of high-redshift galaxies are unknown, this assumption may be too simplistic and our results may be biased by it. However, a recent work showed that the presence of a bulge can still produce a disc remnant with a gas ratio as low as 12\% \citep{barnes02}. These facts are evidence that a merger between bulgeless disc galaxies is not a sufficient condition to form a disc remnant. In fact, bulgeless mergers generally produce elliptical remnants in dissipationless simulations \citep{GB05}. The presence of a massive gaseous component is therefore probably the main condition for reforming a disc after a major merger. We also have to add that the softening length used in this study is slightly larger than what is used in recent state-of-the-art simulations. However, the morphological aspects of our study show good agreement with previous higher-resolution simulations, showing that the resolution is sufficient for the analysis presented in this paper. The hierarchical model of structure formation normally includes a significant amount of collisions and accretions of low-mass galaxies. It is presumptuous to assume that the formation of the thick disc can be explained entirely by one single collision. It is possible that several of these collisions take place in the initial phases of the formation of spiral galaxies \citep{brooketal05,conselice06}. Nevertheless, our study can explain several observations through the intense starburst that takes place during a major, gas-rich collision. \section{SUMMARY AND CONCLUSION} Using GCD+, we have performed 8 simulations of major mergers between gas-rich spiral galaxies. We have analyzed the kinematic, structural, and chemical properties of stars formed before and during the collision (the old population) and stars formed after the collision (the young population). We used the star formation rate to define these two populations.
A fraction of the old stars end up in the halo of the merger remnant, while the remaining stars form a thick disc which is partly supported by velocity dispersion, partly supported by rotation, and sometimes includes a significant counter-rotating component. The young stars form a thin disc that is supported by rotation, and, in many cases, a ring that might or might not be coplanar with the thin disc. The discs themselves tend to be coplanar, the angle between them being $\sim6^\circ$ or less. With rare exceptions, both discs are well fitted by an exponential profile, and the scale-length of the thick disc exceeds that of the thin disc by anywhere from a few percent up to a factor of 1.60. The starburst occurring during the collision rapidly enriches the gas in various metals. Explosions of Type~II SNe rapidly follow the start of the collision, owing to the short lifetime of their progenitors, and enrich the interstellar medium in $\alpha$-elements. Enrichment by Type~Ia SNe is spread over a long period of time, enabling the enrichment in iron of both stellar populations, which results in an old population having a $\rm[\alpha/Fe]$ ratio higher than the young population, even at relatively large metallicities ($\hbox{$\rm[Fe/H]$ }=-0.5$). This could explain the high $\rm[\alpha/Fe]$ ratio observed in stars in the thick disc and the halo of the Milky Way. This result does not depend strongly upon the initial conditions, since it was found in all simulations. \begin{figure*} \begin{center} \includegraphics[width=4.8in]{figure19.eps} \caption{Star formation epoch versus final radius for old stars (solid line) and young stars (dot-dashed line), for the 8 simulations.
The horizontal dashed lines indicate the beginning and the end of the starburst.} \label{ageradM12} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=4.8in]{figure20.eps} \caption{Toomre diagram for stars located near the disc plane ($Z<1\,{\rm kpc}$), for simulation M12, divided according to metallicity. The metallicity range is indicated on top of each panel.} \label{toomreM12} \end{center} \end{figure*} Our main conclusion is that the morphological, kinematical, and chemical properties of the thick and thin discs can be reproduced in a scenario where the thick disc formed in a gas-rich merger between two disc galaxies. Furthermore, while the structural and kinematical properties of the merger remnants are strongly dependent on the initial conditions --features such as the ratio of the disc scale-lengths, the extent of the discs, the presence of rings or of a counter-rotating component, vary from simulation to simulation-- the chemical abundances show a remarkable consistency among the various simulations. The key result of this study is that the ratio $\rm[\alpha/Fe]$ remains constant with increasing $\rm[Fe/H]$ for old stars, up to $\rm[Fe/H]=-0.5$, while it decreases with increasing metallicity for young stars. This is true in each of the 8 simulations we considered. The observed chemical signatures in our merger remnants require that mergers happen before the onset of the majority of Type~Ia SNe, regardless of the orbits or mass ratios of the progenitors. Furthermore, the mergers need to be gas-rich to lead to a remnant with a rotationally supported disc. These characteristics are consistent with the mergers we expect to find at high redshift and, therefore, our results support this scenario for the formation of the different disc components in the Milky Way.
\section*{Acknowledgments} The simulations were performed at the Laboratoire d'Astrophysique Num\'erique, Universit\'e Laval, the University of Central Lancashire's High Performance Computing Facility, and at the Center for Computational Astrophysics of the National Astronomical Observatory of Japan. The code used for generating the entries displayed in Table~\ref{angles} was written by Keven Roy. SR \& HM acknowledge the support of the Canada Research Chair program and NSERC. PSB acknowledges the support of a Marie Curie Intra-European Fellowship within the $6^{\rm th}$ European Community Framework Programme. BKG and CBB acknowledge the support of the UK's Science \& Technology Facilities Council (ST/F002432/1) and the Commonwealth Cosmology Initiative.
\section*{Introduction} The $6j$-symbols are tensors describing the associativity of the tensor product in a tensor category. Formulas exist for the classical and quantum $6j$-symbols associated to the defining representations of $\ensuremath{\mathfrak{sl}(2)}$ and its powers (see \cite{KR,MV}). At a root of unity ${\xi}$, new representations appear for the quantum group $\ensuremath{U_{{{\xi}}}(\slt) } $. There are essentially two new families: the nilpotent and the cyclic representations. Unlike the cyclic family, the nilpotent representations can be enriched to form a ribbon category. The theory of modified dimensions developed by the authors with V. Turaev in \cite{GPT,GPT2} produces a family of modified $6j$-symbols that share properties similar to the usual $6j$-symbols. Nevertheless, this family has a very different nature from previously defined $6j$-symbols. Indeed, the whole family of nilpotent representations can be thought of as a unique module with parameters. This module is then a non-trivial one-parameter deformation of the so-called Kashaev module. For this reason, the $6j$-symbols can be described by a finite set of parametrized functions, and more precisely by a family of $3$-variable Laurent polynomials $\J_{i,j,k}$ with coefficients in $\ensuremath{\mathbb{Z}} [{\xi}]$. These Laurent polynomials have remarkable properties. They have some symmetries (see \eqref{eq:Jsym}), satisfy a Biedenharn-Elliott type identity (see \eqref{eq:JBE}) and an orthonormality relation (see \eqref{eq:Jon}). These three identities imply that, from a triangulation of a $3$-manifold $M$, one can compute a state sum, that is, a weighted sum of products of these $\J$ polynomials associated with the tetrahedra of the triangulation, which is a topological invariant of $M$. Furthermore, F. Costantino and J. Murakami \cite{CM} show that the asymptotic behavior of these polynomial $6j$-symbols is related to the volume of truncated tetrahedra.
The main substance of this paper is the careful computation of the $6j$-symbols associated with nilpotent representations of $\ensuremath{U_{{{\xi}}}(\slt) } $. This is done in the third section. But once the $6j$-symbols are identified with certain values of the $\J$ polynomials, all the machinery of tensor categories can be forgotten. This is what we want to highlight by the structure of this document. Hence the first part only defines the $\J$ polynomials and announces their properties. Here the tensor categories do not appear, except in the fact that we do not have, without them, a direct proof of the identities. The second part is a short exposition of how the polynomials can be used to construct a Turaev-Viro type invariant, following and refining the ideas of \cite{TV,GPT2}. \section{Polynomial $6j$-symbols}\label{S:J} Fix a positive integer ${{r'}}$. In this section, we define a set of formal $6j$-symbols $\J^{{r'}}_{i_1,i_2,i_3}$ for $i_1,i_2,i_3\in \ensuremath{\mathbb{Z}} $ and give some of their properties. Since ${{r'}}$ is fixed, we write $\J_{i_1,i_2,i_3}$ for $\J^{{r'}}_{i_1,i_2,i_3}$. Let $\ensuremath{\mathbb{N}} $ be the set of non-negative integers. Let ${{r}}=2{{{r'}}}+1$ and ${{\xi}}=\e^{im\pi/{{r}}}$ for $m$ coprime with $2{{r}}$. Let ${\mathfrak L}=\ensuremath{\mathbb{Z}} [{{\xi}}][q_1^{\pm1},q_2^{\pm1},q_3^{\pm1}]$ be the ring of Laurent polynomials in three variables, with coefficients in $\ensuremath{\mathbb{Z}} [{{\xi}}]$. We denote with a bar the involutive ring automorphism of ${\mathfrak L}$ defined by $\b {{\xi}}={{\xi}}^{-1},\b q_1=q_1^{-1},\b q_2=q_2^{-1}$ and $\b q_3=q_3^{-1}$. For any invertible element $X$ of a ring and $N\in\ensuremath{\mathbb{N}} $, let $\Qn{X}$ and $\Fn N X$ be analogues of the quantum integer and quantum factorial, given by: $$ \Qn{X}=X-X^{-1},\quad \Fn N X=\prod_{i=0}^{N-1}\Qn{{{\xi}}^iX}=\Qn X\Qn{{{\xi}}X}\cdots\Qn{{{\xi}}^{N-1}X}.
$$ Also, for $n, N \in\ensuremath{\mathbb{N}} $ and $i_1,i_2,i_3\in\{-{{r'}},-{{r'}}+1,\cdots,{{r'}}\}$ such that $n\leq N$ we set $$\qn{n}=\Qn{{{\xi}}^n}={{\xi}}^n-{{\xi}}^{-n}\qquad\text{ and }\qquad \qn N !=\qn1\qn2\cdots\qn N$$ $$ \qb N n=\frac{\qn N!}{\qn n! \qn {N-n}!}\quad\text{ and }\quad \qn{i_1,i_2,i_3}=\frac{\qn{2{{r'}}}!}{\qn {{{r'}}-i_1}! \qn {{{r'}}-i_2}!\qn {{{r'}}-i_3}!} $$ Remark that $\Fn k{X^{-1}}=(-1)^k\Fn kX$, also $\qn{2{{r'}}}!=(-1)^{{r'}} r$ and $\Fn {{r}} X=\prod_{i=0}^{{{r}}-1}{{\xi}}^{i}X^{-1}(X^2-{{\xi}}^{-2i})=(-1)^{{{r'}}}\Qn{X^r}$. \\ Let us consider the finite set $$ \H_{{r'}}=\big\{(i_1,i_2,i_3)\in\ensuremath{\mathbb{Z}} ^3 : -{{r'}}\leq i_1,\,i_2,\,i_3,\,i_1+i_2+i_3\leq{{r'}}\big\}. $$ One can easily show that $\operatorname{card}(\H_{{r'}})=\frac13{{r}}(2{{r}}^2+1)$. It can be useful to have in mind the action of the tetrahedral group $\ensuremath{\mathfrak{S}}_4$ on $\H_{{r'}}$ by permuting $i_1,i_2,i_3$ and $i_4=-(i_1+i_2+i_3)$. For all $(i_1,i_2,i_3)\in\H_{{r'}}$, we define a Laurent polynomial $$\J_{i_1,i_2,i_3}(q_1,q_2,q_3)\in {\mathfrak L}$$ as follows.\\ $\bullet$ If $i_1,i_3\leq i_1+i_2+i_3$ then let $N={{r'}}-i_1-i_2-i_3$ and define \begin{equation} \label{eq:J} \begin{array}{r} \J_{i_1,i_2,i_3}(q_1,q_2,q_3)=\qn{i_1,i_2,i_3}\Fn{i_2+i_3}{q_1 {\xi}^{-i_3-{{r'}}}}\Fn{i_1+i_2}{q_3 {{\xi}}^{-i_2-{{r'}}}}\times \\[2ex] \Bigg(\sum_{n=0}^N\qb Nn\Fn{N-n}{q_2\b q_1{{\xi}}^{i_3+{{r'}}+1}}\Fn{N-n}{q_2\b q_3{{\xi}}^{i_3+i_2-i_1-{{r'}}}} \times \\ \Fn{n}{q_1\b q_2 {{\xi}}^{-2i_3-N}} \Fn{n}{q_3\b q_2{{\xi}}^{i_1+{{r'}}+1}} \Fn{{{{r'}}}-i_2}{q_2 {{\xi}}^{-i_1-{{r'}}-n}}\Bigg).
\end{array} \end{equation} $\bullet$ If $i_2,i_3\geq i_1+i_2+i_3$ then let $N={{r'}}+i_1+i_2+i_3$ and define \begin{equation} \label{eq:Ja} \begin{array}{r} \displaystyle{\J_{i_1,i_2,i_3}(q_1,q_2,q_3)=\frac{\Fn{{{{r'}}}+i_2-N} {q_3\b q_1{{\xi}}^{N-2i_2+1}}}{\qn{N}!}\Bigg(\sum_{n=0}^N\qb Nn \Fn{n}{q_2{{\xi}}^{-i_1+{{r'}}+1}} \times} \\[2ex] \displaystyle{\Fn{N-n}{q_2{{\xi}}^{-i_1-i_2+n+1}} \Fn{{{{r'}}}+i_3}{q_1\b q_2{{\xi}}^{N-n-2i_3+1}} \Fn{{{{r'}}}+i_1}{q_2\b q_3{{\xi}}^{n-2i_1+1}}\Bigg)}. \end{array} \end{equation} $\bullet$ For other $(i_1,i_2,i_3)\in\H_{{r'}}$, the polynomial $\J_{i_1,i_2,i_3}$ is obtained from Equation \eqref{eq:Jsym}, below. \\[2ex] The definition of these symbols comes from the $6j$-symbols associated to nilpotent representations of $\ensuremath{U_{{{\xi}}}(\slt) } $ (see Definition \ref{D:6j} and Theorem \ref{Th:formula}). Theorem \ref{Th:formula} shows that Equations \eqref{eq:J} and \eqref{eq:Ja} agree when both conditions are satisfied. We also extend the definition for $(i_1,i_2,i_3)\in\ensuremath{\mathbb{Z}} ^3\setminus\H_{{r'}}$ by $\J_{i_1,i_2,i_3}=0$. Theorem \ref{T:Laurent} implies that $\J_{i_1,i_2,i_3}(q_1,q_2,q_3)$ is an element of ${\mathfrak L}$. It is well known that the $6j$-symbols satisfy certain relations. We use Equation \eqref{E:J-sjv} to show that the family of polynomials defined above satisfies equivalent relations. Indeed, the theory of modified $6j$-symbols developed in \cite{GPT2} shows that these identities are satisfied as functions over some open dense subset of $\ensuremath{\mathbb{C}} ^n$. Therefore, since the elements $\J_{*,*,*}$ are Laurent polynomials, they satisfy the identities formally. Let us now discuss these relations. Since the $6j$-symbols have tetrahedral symmetry, we have \begin{equation} \label{eq:Jsym} \J_{i_1,i_2,i_3}(q_1,q_2,q_3)= \J_{i_2,i_1,i_3}(\b q_2,\b q_1,\b q_3) =\J_{i_2,i_3,i_4}(q_1\b q_2{{\xi}}^{-2i_3},q_1\b q_3{{\xi}}^{2i_2},q_1) \end{equation} where $i_4=-i_1-i_2-i_3$.
These two equalities generate the 24 symmetries of the tetrahedral group. In particular, if $\sigma$ is a permutation of the set $\{1,2,3\}$ then $\J_{i_1,i_2,i_3}(q_1,q_2,q_3)=\J_{i_{\sigma(1)},i_{\sigma(2)},i_{\sigma(3)}} (q_{\sigma(1)}^{\epsilon}, q_{\sigma(2)}^{\epsilon}, q_{\sigma(3)}^{\epsilon})$ where $\epsilon=\epsilon(\sigma)$ is the signature of $\sigma$. The other relations involve a function called a modified dimension (see \cite{GPT2}). We introduce the following polynomial in $q_1$, which is a formal analog of the inverse of this function: \begin{equation} \label{eq:qD} \operatorname{\mathsf{D}}(q_1)=\Fn{2{{r'}}}{q_1{\xi}}= (-1)^{{r'}}\dfrac{{q_1^{{r}}}-{q_1^{-{{r}}}}} {{q_1}-{q_1}^{-1}}. \end{equation} The $\J$ polynomials satisfy the Biedenharn-Elliott identity: for $x\in \ensuremath{\mathbb{Z}} $, let $\b{x}$ be the element of $\{-{{r'}},-{{r'}}+1,\cdots,{{r'}}\}$ congruent to $x$ modulo ${{r}}$. For any $i_1,i_2,i_3,i_4,i_5,i_6\in\ensuremath{\mathbb{Z}} $, \begin{equation} \label{eq:JBE} \begin{array}{c} \J_{i_1,i_2,i_3}(q_1,q_2,q_3) \J_{i'_1,i'_2,i'_3} (q_0{\xi}^{2i_4}/q_1,q_0{\xi}^{2i_5}/q_2,q_0{\xi}^{2i_6}/q_3)= \\%[1ex]
\displaystyle{ \sum_{n=-{{r'}} }^{{{r'}}}}\frac{ \J_{i_1,{i'_6},-i'_5}(q_0{{\xi}}^{2n},q_2,q_3) \J_{i_2,{i'_4},-i'_6}(q_0{{\xi}}^{2n},q_3,q_1) \J_{i_3,{i'_5},-i'_4}(q_0{{\xi}}^{2n},q_1,q_2)}{\operatorname{\mathsf{D}}({q_0{\xi}^{2n}})} \end{array} \end{equation} where $\ms{\left\{ \begin{array}{l} i'_1=\b{-i_1+i_5-i_6}\\ i'_2=\b{-i_2+i_6-i_4}\\ i'_3=\b{-i_3+i_4-i_5} \end{array}\right.}$, $\ms{\left\{\begin{array}{l} i'_4=\b{i_4-n}\\ i'_5=\b{i_5-n}\\ i'_6=\b{i_6-n} \end{array}\right.}$, and $q_0$ is an independent variable.
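The quantum brackets, the remarks $\qn{2{r'}}!=(-1)^{r'} r$ and $\Fn{r}X=(-1)^{r'}\Qn{X^r}$, the closed form \eqref{eq:qD} for $\operatorname{\mathsf{D}}$, and the count of $\H_{r'}$ can all be checked numerically at generic complex points. A minimal Python sketch (the names `qn`, `fn`, `qfact` are ours, and the sample points are arbitrary):

```python
import cmath

rp = 2                                   # r', a positive integer (example value)
r = 2 * rp + 1                           # r = 2r' + 1
m = 1                                    # m coprime with 2r
xi = cmath.exp(1j * m * cmath.pi / r)    # xi = e^{i m pi / r}

def qn(x):
    """{X} = X - X^{-1}."""
    return x - 1 / x

def fn(n, x):
    """{X}_N = {X}{xi X}...{xi^{N-1} X}."""
    out = 1 + 0j
    for i in range(n):
        out *= qn(xi ** i * x)
    return out

def qfact(n):
    """[N]! = [1][2]...[N], with [k] = {xi^k}."""
    out = 1 + 0j
    for k in range(1, n + 1):
        out *= qn(xi ** k)
    return out

# [2r']! = (-1)^{r'} r
assert abs(qfact(2 * rp) - (-1) ** rp * r) < 1e-9

# {X}_r = (-1)^{r'} {X^r}, tested at a generic complex point
x = 1.3 + 0.4j
assert abs(fn(r, x) - (-1) ** rp * qn(x ** r)) < 1e-9

# D(q) = {q xi}_{2r'} = (-1)^{r'} (q^r - q^{-r}) / (q - q^{-1})
q = 0.7 - 0.2j
assert abs(fn(2 * rp, xi * q) - (-1) ** rp * qn(q ** r) / qn(q)) < 1e-9

# card(H_{r'}) = r (2 r^2 + 1) / 3, by brute-force enumeration
H = [(a, b, c)
     for a in range(-rp, rp + 1)
     for b in range(-rp, rp + 1)
     for c in range(-rp, rp + 1)
     if -rp <= a + b + c <= rp]
assert len(H) == r * (2 * r * r + 1) // 3
print("identities verified for r' =", rp)
```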
For any $(i_1,i_2,i_3)\in\H_{{r'}}$ and any $i'_1\in\ensuremath{\mathbb{Z}} $, the orthonormality relation is expressed as \begin{equation} \label{eq:Jon} \begin{array}{r} \displaystyle{ \sum_{n=-{{r'}}}^{{{r'}}}\frac{ \J_{i_1,\b{i_2-n},\b{i_3+n}}(q_1{{\xi}}^{2n},q_2,q_3) \J_{-i'_1,\b{n-i_2},\b{-i_3-n}}(\b q_1{{\xi}}^{-2n},\b q_2,\b q_3)} {\operatorname{\mathsf{D}}({q_2\b q_3{{\xi}}^{-2i_1}})\operatorname{\mathsf{D}}({q_1{{\xi}}^{2n}})} =\delta_{i_1,i_1'}} \end{array} \end{equation} where $\delta_{i_1,i'_1}$ is the Kronecker symbol. \section{$3$-manifold invariant} In this section we derive a set of topological invariants of links in closed $3$-manifolds $M$ from the family $\J_{***}(q_1,q_2,q_3)$. These invariants are indexed by elements of $H_1(M,\ensuremath{\mathbb{Z}} )$ and they refine the invariant constructed in \cite[Section 10.4]{GPT2}. Let $M$ be a closed $3$-manifold and $L$ a link in $M$. Here we follow the exposition of \cite{GPT2}, inspired by \cite{BB}. A \emph{quasi-regular triangulation of $M$} is a decomposition of $M$ as a union of embedded tetrahedra such that the intersection of any two tetrahedra is a union (possibly, empty) of several of their vertices, edges, and (2-dimensional) faces. Quasi-regular triangulations differ from usual triangulations in that they may have tetrahedra meeting along several vertices, edges, and faces. Nevertheless, the edges of a quasi-regular triangulation have distinct ends. A \emph{Hamiltonian link} in a quasi-regular triangulation $\mathcal{T}$ is a set $\mathcal{L}$ of unoriented edges of $\mathcal{T}$ such that every vertex of $\mathcal{T}$ belongs to exactly two edges of $\mathcal{L}$. Then the union of the edges of $\mathcal{T}$ belonging to $\mathcal{L}$ is a link $L$ in $M$. We call the pair $(\mathcal{T},\mathcal{L})$ an \emph{$H$-triangulation} of $(M,L)$.
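The Hamiltonian-link condition (every vertex lies on exactly two edges of $\mathcal{L}$) amounts to a degree-two check on the link's edge set. A toy Python sketch on a made-up vertex/edge list, standing in for a triangulation's combinatorics:

```python
from collections import Counter

def is_hamiltonian(vertices, link_edges):
    """Check that every vertex belongs to exactly two edges of the link.

    Edges are unordered vertex pairs; a loop at a vertex would count twice."""
    count = Counter()
    for a, b in link_edges:
        count[a] += 1
        count[b] += 1
    return all(count[v] == 2 for v in vertices)

# A 4-cycle through four vertices satisfies the condition ...
assert is_hamiltonian({0, 1, 2, 3}, [(0, 1), (1, 2), (2, 3), (3, 0)])
# ... but an open path fails it at its two endpoints.
assert not is_hamiltonian({0, 1, 2, 3}, [(0, 1), (1, 2), (2, 3)])
```

A degree-two edge set is exactly a disjoint union of cycles covering all vertices, i.e. a link visiting every vertex.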
\begin{proposition}[\cite{BB}, Proposition 4.20]\label{L:Toplemma-} Any pair (a closed connected $3$-manifold $M$, a non-empty link $L\subset M$) admits an $H$-triangulation. \end{proposition} The languages of triangulations and skeletons are both useful here. In particular, it is convenient to use triangulations to give the notion of a Hamiltonian link, and skeletons to define the state sum. A skeleton of $M$ is a 2-dimensional polyhedron $\P$ in $M$ such that $M\setminus \P$ is a disjoint union of open 3-balls and locally $\P$ looks like a plane, or a union of 3 half-planes with common boundary line in $\ensuremath{\mathbb{R}} ^3$, or a cone over the 1-skeleton of a tetrahedron (see, for instance, \cite{BB,Tu}). A typical skeleton of $M$ is constructed from a triangulation $\mathcal{T}$ of $M$ by taking the union $\P_\mathcal{T}$ of the 2-cells dual to its edges. This construction establishes a bijective correspondence $\mathcal{T}\leftrightarrow \P_\mathcal{T}$ between the quasi-regular triangulations $\mathcal{T}$ of $M$ and the skeletons $\P$ of $M$ such that every 2-face of $\P$ is a disk adjacent to two distinct components of $M\setminus \P$ and no connected component of the 1-dimensional strata of $\P$ is a circle. To specify a Hamiltonian link $\mathcal{L}$ in a triangulation $\mathcal{T}$, we provide some faces of $\P_\mathcal{T}$ with dots such that each component of $M\setminus \P_\mathcal{T}$ is adjacent to precisely two (distinct) dotted faces. These dots correspond to the intersections of $\mathcal{L}$ with the 2-faces. Let $R$ be a commutative ring with a morphism $\ensuremath{\mathbb{Z}} [{\xi}]\to R$. We still denote by ${\xi}$ the image of ${\xi}$ in $R$, and we assume that the group of $2{{r}}$\textsuperscript{th} roots of $1$ in $R$ is of order $2{{r}}$, generated by ${\xi}$.
Let $R^\times$ be the group of units of $R$ and consider any subgroup $G$ of $\{x^{{r}} : x\in R^\times\}$, for example $(R,G)=(\ensuremath{\mathbb{C}} ,\ensuremath{\mathbb{C}} ^*)$. Clearly any element $x\in G$ has exactly ${{r}}$ ${{r}}$\textsuperscript{th} roots in $R$. They form a set $\sq{x}=\{y{\xi}^{-2{{r'}}},\ldots,y,y{\xi}^2,\ldots,y{\xi}^{2{{r'}}}\}$ for some $y\in R$ such that $y^{{r}}=x$. We call $(R,G)$ a \emph{coloring pair}. Let $(\mathcal{T},\mathcal{L})$ be an $H$-triangulation of $(M,L)$. Let $\P_\mathcal{T}$ be a skeleton dual to $\mathcal{T}$. The skeleton $\P_\mathcal{T}$ gives $M$ a cell decomposition $M_\P$. So an $n$-chain of cellular homology with coefficients in $G$ can be represented by a map from the oriented $n$-cells of $M_\P$ to $G$. By a \emph{$G$-coloring} of $\mathcal{T}$ (or of $\P_\mathcal{T}$), we mean a $G$-valued $2$-cycle $\wp$ on $\P_\mathcal{T}$, that is, a map from the set of oriented faces of $\P_\mathcal{T}$ to $G$ such that \begin{enumerate} \item the product of the values of $\wp$ on the three oriented faces adjacent to any oriented edge of $\P_\mathcal{T}$ is $1$, \item $\wp(-f)=\wp(f)^{-1}$ for any oriented face $f$ of $\P_\mathcal{T}$, where $-f$ is $f$ with opposite orientation. \end{enumerate} Each $G$-coloring $\wp$ of $\mathcal{T}$ represents a homology class $[\wp]\in H_2(M,G)$. When $M$ is oriented, a $G$-coloring of $\mathcal{T}$ can be seen as a $1$-cocycle (a map on the set of oriented edges of $\mathcal{T}$, see \cite{GPT2}). In general, it can also be interpreted as a map on the set of co-oriented edges of $\mathcal{T}$, but we prefer to adopt the point of view of $\P_\mathcal{T}$. A {\it state} $\varphi$ of a $G$-coloring $\wp$ is a map assigning to every oriented face $f$ of $\P_\mathcal{T}$ an element $\varphi (f)$ of $\sq{\wp(f)}$ such that $\varphi(-f)=\varphi (f)^{-1}$ for all $f$. The set of all states of $\wp$ is denoted $\operatorname{St}(\wp)$.
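Over the coloring pair $(\ensuremath{\mathbb{C}} ,\ensuremath{\mathbb{C}} ^*)$ the set $\sq{x}$ is concrete: since ${\xi}^2$ is a primitive ${r}$\textsuperscript{th} root of unity, the ${r}$ roots of $x$ are $y{\xi}^{2k}$ for $k=-{r'},\ldots,{r'}$. A small numerical sketch (variable and function names are ours, values are examples):

```python
import cmath

rp, m = 2, 1
r = 2 * rp + 1
xi = cmath.exp(1j * m * cmath.pi / r)

def sq(x):
    """All r-th roots of x in C*, written as y * xi^{2k}, k = -r', ..., r'."""
    y = x ** (1.0 / r)                      # one r-th root of x
    return [y * xi ** (2 * k) for k in range(-rp, rp + 1)]

x = 2.0 + 1.0j
roots = sq(x)
assert len(roots) == r
# every element really is an r-th root of x ...
assert all(abs(z ** r - x) < 1e-9 for z in roots)
# ... and they are pairwise distinct.
assert all(abs(roots[i] - roots[j]) > 1e-9
           for i in range(r) for j in range(i + 1, r))
print(r, "distinct r-th roots recovered")
```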
A state $\varphi$ can also be seen as a $2$-chain on $\P_\mathcal{T}$ with values in $R^\times$, but its boundary, the $1$-chain $\delta\varphi$, might not be trivial. Nevertheless, as $\varphi^{{r}}=\wp$, we have $(\delta\varphi)^{{r}}=1$. We call the \emph{height} of $\varphi$ the unique map $h_\varphi$ assigning to every oriented edge $e$ of $\P_\mathcal{T}$ an element of $\{-{{r'}},-{{r'}}+1,\ldots,{{r'}}\}$ such that $(\delta\varphi)(e)={\xi}^{2h_\varphi(e)}$. It follows that modulo ${{r}}$, $h_\varphi$ is a $1$-cycle on $\P_\mathcal{T}$. In the case when $h_\varphi$ is also a $1$-cycle with integer coefficients, let us denote its homology class by $[h_\varphi]\in H_1(M,\ensuremath{\mathbb{Z}} )$. For $h\in H_1(M,\ensuremath{\mathbb{Z}} )$, set $\operatorname{St}_{h}(\wp)=\{\varphi\in\operatorname{St}(\wp):\,\delta h_\varphi=0\text{ and }[h_\varphi]=h\}$. Given a $G$-coloring $\wp$ of $(\mathcal{T},\mathcal{L})$, we define a certain partition function (state sum) as follows: For each vertex $x$ of $\P_\mathcal{T}$, fix a little $3$-ball $B$ centered at $x$ whose intersection with $\P_\mathcal{T}$ is homeomorphic to the cone on the $1$-skeleton of a tetrahedron. The trace of $\P_\mathcal{T}$ on $\partial B$ gives a triangulation of this sphere whose $1$-skeleton is a tetrahedron with four vertices $v_1,v_2,v_3,v_4$. Let $f_1,f_2,f_3$ be the regions of $\P_\mathcal{T}$ contained in the triangles $xv_2v_3$, $xv_3v_1$, $xv_1v_2$, respectively. Also, let $e_1,e_2,e_3,e_4$ be the segments of $\P_\mathcal{T}$ contained in $xv_1$, $xv_2$, $xv_3$, $xv_4$, respectively. The segment $xv_i$ is oriented from $x$ to $v_i$ and induces an orientation on $e_i$. Similarly, the triangles above induce orientations on $f_1,f_2,f_3$. For each $\varphi\in \operatorname{St}(\wp)$, if $h_\varphi$ does not satisfy the cycle condition at $x$ (i.e.
if $\sum_ih_\varphi(e_i)\neq0$) we set $\J(\varphi,x)=0$, otherwise define $$ \J(\varphi,x)=\J_{h_\varphi(e_1),h_\varphi(e_2),h_\varphi(e_3)} \bigg(\varphi(f_1),\varphi(f_2),\varphi(f_3)\bigg)\in R. $$ Equation \eqref{eq:Jsym} implies that $\J(\varphi,x)$ does not depend on the choice of ordering of the vertices $v_1,v_2,v_3,v_4$. For example, if one chooses the ordering $v_2,v_1,v_3,v_4$ then $e_1$ and $e_2$ are exchanged, $xv_2v_3$ becomes $xv_1v_3$ and so $f_1$ becomes $-f_2$, etc., and the first equality of \eqref{eq:Jsym} implies that the two expressions for $\J(\varphi,x)$ are equal. We say that $g\in G$ is \emph{admissible} if $\Qn g=g-g^{-1}$ is invertible in $R$. We call a $G$-coloring $\wp$ \emph{admissible} if it takes admissible values. If $\varphi$ is a state of an admissible coloring $\wp$, and $f$ is an unoriented face of $\P_\mathcal{T}$, then we define $$\operatorname{\mathsf{d}}(\varphi,f)=\operatorname{\mathsf{D}}({g})^{-1}=\dfrac{(-1)^{{r'}}\Qn{g}}{\Qn{g^{{r}}}}\in R$$ where $g$ is $\varphi(\vec f)$ for any orientation $\vec f$ of $f$. Note that $\operatorname{\mathsf{d}}(\varphi,f)$ does not depend on the orientation of $f$, as $\operatorname{\mathsf{D}}({g})=\operatorname{\mathsf{D}}({g}^{-1})$. \begin{lemma}\label{L:admisColoring} Let $(R,G)$ be a coloring pair with the following property: \begin{enumerate} \item \label{eq:Gadm} for all $g_1,\ldots,g_n\in G$ there exists $x\in G$ such that $xg_1,\ldots,xg_n$ are all admissible. \end{enumerate} Then for any {$H$-triangulation} $(\mathcal{T},\mathcal{L})$ of $(M,L)$ and for any homology class $h_2\in H_2(M,G)$, there exists an admissible $G$-coloring $\wp$ on $\mathcal{T}$ representing $h_2$. \end{lemma} \begin{proof} Take any $G$-coloring $\wp$ of $\mathcal{T}$ representing $h_2$. As mentioned above, $M\setminus \P_\mathcal{T}$ is the disjoint union of open 3-balls.
We say that such a 3-ball $b$ is {\it bad} for $\wp$ if there is an oriented face $f$ of $\P_\mathcal{T}$ which is oriented away from $b$ such that $\wp(f)$ is not admissible. It is clear that $\wp$ is admissible if and only if $\wp$ has no bad 3-balls. We show how to modify $\wp$ in its homology class to reduce the number of bad 3-balls. Let $b$ be a bad 3-ball for $\wp$ and let $E_b$ be the set of all oriented faces of $\P_\mathcal{T}$ which are oriented away from $b$. From Property \eqref{eq:Gadm} of the lemma, there exists $x\in G$ such that $x\wp(f)$ is admissible for all $f\in E_b$. Let $c$ be the $G$-valued 3-chain on $M_\P$ assigning $x$ to $b$ and $1$ to all other 3-balls (recall $M_\P$ is the cell decomposition of $M$ coming from $\P_\mathcal{T}$). Taking the boundary of this 3-chain we obtain a $G$-valued 2-chain $\delta c$ on $M_\P$. The 2-cycle $(\delta c)\wp$ on $\P_\mathcal{T}$ takes values which are admissible on all faces of $\P_\mathcal{T}$ incident to $b$ and takes the same values as $\wp$ on all other faces of $\P_\mathcal{T}$. Here we use the fact that every 2-face of $\P_\mathcal{T}$ is a disk adjacent to two distinct components of $M\setminus \P_\mathcal{T}$. The transformation $\wp \mapsto (\delta c)\wp$ decreases the number of bad 3-balls. Repeating this argument, we find a 2-cycle without bad 3-balls. \end{proof} Let $\wp$ be an admissible $G$-coloring of $\mathcal{T}$ and $h_1\in H_1(M,\ensuremath{\mathbb{Z}} )$.
Then we define \begin{equation*} \operatorname{\mathsf{\tau}}(\mathcal{T},\mathcal{L},h_1,\wp)=r^{-2N}\sum_{\varphi\in \operatorname{St}_{h_1} ( \wp)}\,\, \prod_{f\in \P_2\setminus \mathcal{L}} \operatorname{\mathsf{d}}(\varphi,f)\, \prod_{x\in\P_0}\J(\varphi,x)\in R \end{equation*} where $\P_2\setminus \mathcal{L}$ is the set of unoriented faces of $\P_\mathcal{T}$ without dots, $\P_0$ is the set of vertices of $\P_\mathcal{T}$ and $N$ is the number of connected components of $M\setminus\P_\mathcal{T}$ (that is, the number of vertices in $\mathcal{T}$). When the coloring pair does not satisfy Property \eqref{eq:Gadm} of Lemma \ref{L:admisColoring}, we explain how to perturb a non-admissible $G$-coloring $\wp$: Consider the set $S$ of elements of $R[X^{\pm1}]$ that are monic polynomials in $X$ (i.e., Laurent polynomials whose leading coefficient is $1$). Then $S$ is a multiplicative set that does not contain zero divisors, hence we can form $R'=S^{-1}R[X^{\pm1}]\supset R$. Let $G'$ be the multiplicative group generated by $G$ and $X^{{r}}$. Then $(R',G')$ is a coloring pair with Property \eqref{eq:Gadm}, as any $\Qn{X^{k{{r}}}h}$ with $h\in G$ and $k\in\ensuremath{\mathbb{Z}} ^*$ is invertible in $R'$. Using the inclusion above we can view $\wp$ and $[\wp]$ as taking values in $R'$. Then Lemma \ref{L:admisColoring} implies that there exists a $2$-boundary $x$ with values in $G'$ such that $\wp'=\wp+x$ is an admissible $G'$-coloring. We say that $\wp'$ is a perturbation of $\wp$. \begin{theorem} Let $L$ be a link in a $3$-manifold $M$, $(R,G)$ be a coloring pair and $(h_1,h_2)\in H_1(M,\ensuremath{\mathbb{Z}} )\times H_2(M,G)$. Choose any \emph{$H$-triangulation} $(\mathcal{T},\mathcal{L})$ of $(M,L)$ and let $\wp$ be any admissible $G$-coloring (or perturbation of a $G$-coloring) representing $h_2$.
Then $\operatorname{\mathsf{\tau}}_R(M,L,h_1,h_2)=\operatorname{\mathsf{\tau}}(\mathcal{T},\mathcal{L},h_1,\wp)$ belongs to $R$ and is an invariant of the $4$-tuple $(M,L,h_1\in H_1(M,\ensuremath{\mathbb{Z}} ),h_2\in H_2(M,G))$ up to diffeomorphism. \end{theorem} \begin{proof} First, let us assume that $(R,G)$ satisfies \eqref{eq:Gadm} of Lemma \ref{L:admisColoring}, so there exists an admissible $\wp$ representing $h_2$. In this case the proof is essentially the same as the proof of Theorem 22 in \cite{GPT2}. Here we sketch the main steps: \begin{enumerate}[(I)] \item \label{I:Step1} In \cite{BB} it is shown that any two $H$-triangulations are related by a finite sequence of so-called elementary $H$-moves. One can then color this sequence, making it a sequence of ``colored $H$-moves.'' \item One shows that the state sum $\operatorname{\mathsf{\tau}}(\mathcal{T},\mathcal{L},h_1,\wp)$ is invariant under an elementary ``admissible colored $H$-move,'' i.e. an elementary $H$-move where the colors of the $H$-triangulation on both sides of the move are admissible. The main point here is that Equation \eqref{eq:JBE} implies that if one performs a so-called Pachner $2$--$3$ move (which consists in replacing in $\mathcal{T}$ two tetrahedra glued along a face with $3$ tetrahedra having a common edge) the state sum is unchanged. Similarly, Equation \eqref{eq:Jon} implies the invariance of the state sum under the lune move, which consists in removing two tetrahedra which have $2$ common faces and then gluing by pairs the orphan faces. Here the following observation makes the refinement with $h_1$ possible: if two states $\varphi,\varphi'$ of $\wp$ differ only on a set of faces, then $h_\varphi,h_{\varphi'}$ differ only on the set $E$ of edges adjacent to these faces. Assume that the set $E$ is included in a simply connected part of $\P_\mathcal{T}$.
Then if $\varphi$ and $\varphi'$ have nontrivial contributions in the state sum (which implies $\delta h_\varphi=\delta h_{\varphi'}=0$), we have $[h_\varphi]=[h_{\varphi'}]$ since $h_\varphi$ and $h_{\varphi'}$ are equal outside the simply connected space containing $E$. Hence the colored $H$-moves do not modify the partial state sum associated to any homology class $h_1$. \item The $2$-cycles representing $h_2$ in the sequence of colored $H$-moves of Step \ref{I:Step1} are not necessarily always admissible $G$-colorings. However, using Property \eqref{eq:Gadm} one can prove that $\operatorname{\mathsf{\tau}}(\mathcal{T},\mathcal{L},h_1,\wp)$ depends only on the homology class of the admissible $G$-coloring $\wp$ (see Lemmas 27 and 28 of \cite{GPT2}). This then allows us to modify a sequence of colored $H$-moves into a sequence of admissible colored $H$-moves such that the state sum is the same at each step, thus proving the theorem when $(R,G)$ satisfies \eqref{eq:Gadm} of Lemma \ref{L:admisColoring}. \end{enumerate} Let us now consider the case where the coloring pair does not satisfy Property \eqref{eq:Gadm}. We will prove that the perturbed state sum belongs to $R$. Let $\wp'=\wp+x$ be a perturbation of any $G$-coloring $\wp$ representing $h_2$. The idea is that the only component of $\wp'$ which depends on $X\in R'$ is the boundary $x$, and since the state sum depends on the coloring only up to a boundary, it does not depend on $X$. To be more precise, let $\rho:R'\to R'$ be the ring morphism which is the identity on $R$ and sends $X$ to $X^2$. Then $\wp''=\rho(\wp')$ is also an admissible $G'$-coloring of $\mathcal{T}$. Moreover, $\rho(x)$ is a boundary and $\rho(\wp)=\wp$, hence $\wp'-\wp''$ is a boundary. But from above we know that two admissible colorings representing the same homology class give equal state sums.
Thus, $\operatorname{\mathsf{\tau}}(\mathcal{T},\mathcal{L},h_1,\wp')=\operatorname{\mathsf{\tau}}(\mathcal{T},\mathcal{L},h_1,\wp'')=\rho(\operatorname{\mathsf{\tau}}(\mathcal{T},\mathcal{L},h_1,\wp'))$ which implies that $\operatorname{\mathsf{\tau}}(\mathcal{T},\mathcal{L},h_1,\wp')\in R$. \end{proof} We also define \begin{equation*} \operatorname{\mathsf{\tau}}_R(M,L,h_2)=\sum_{h_1\in H_1(M,\ensuremath{\mathbb{Z}} )} \operatorname{\mathsf{\tau}}_R(M,L,h_1,h_2)\in R. \end{equation*} The first fundamental example is obtained when $(R,G)=(\ensuremath{\mathbb{C}} ,\ensuremath{\mathbb{C}} ^*)$. It is easy to see that this coloring pair satisfies Property \eqref{eq:Gadm} of Lemma~\ref{L:admisColoring} since $1,-1\in\ensuremath{\mathbb{C}} ^*$ are the only non-admissible elements. In this case, if $M$ is oriented we denote the Poincar\'e dual of $h_2\in H_2(M,\ensuremath{\mathbb{C}} ^*)$ by $h_2^*\in H^1(M,\ensuremath{\mathbb{C}} ^*)$. Then $$\operatorname{\mathsf{\tau}}_\ensuremath{\mathbb{C}} (M,L,h_2)=TV(M,L,h_2^*)$$ where $TV(M,L,h_2^*)$ is the invariant defined in \cite[Section 10.4]{GPT2}. We now consider a universal example: Let $H=H_1(M,\ensuremath{\mathbb{Z}} )$, and assume that $M$ is oriented so that for any abelian group $G$ we have $H_2(M,G)\simeq H^1(M,G)\simeq \operatorname{Hom}(H,G)$. Then $H_2(M,H)$ has a particular universal element $\eta$ whose image in $\operatorname{Hom}(H,H)$ is the identity. We will assume that the order of the torsion of $H$ is coprime with $r$. Then multiplication by $r$ is an injective morphism $m_r:H\to H$. Denote the image of $m_r$ by $rH$ and consider $\operatorname{\mathsf{\tau}}(M,L,r\eta)\in\ensuremath{\mathbb{Z}} [{\xi}][H]$.
\begin{proposition} The invariant $\operatorname{\mathsf{\tau}}(M,L,r\eta)$ takes values in $\ensuremath{\mathbb{Z}} [{\xi}][rH]$, making it possible to define $$\operatorname{\mathsf{\tau}}(M,L)=m_r^*(\operatorname{\mathsf{\tau}}(M,L,r\eta))\in\ensuremath{\mathbb{Z}} [{\xi}][H].$$ Then for any pair $(R,G)$ as above and any $\psi\in \operatorname{Hom}(H,G)$ we have $$\operatorname{\mathsf{\tau}}(M,L,\bar\psi)=\psi_*(\operatorname{\mathsf{\tau}}(M,L))$$ where $\bar\psi$ is the image of $\psi$ in $H_2(M,G)$. \end{proposition} \begin{proof} First, let us show that for each $h_1\in H_1(M,\ensuremath{\mathbb{Z}} )$ we have $\operatorname{\mathsf{\tau}}(M,L,h_1,r\eta)\in\ensuremath{\mathbb{Z}} [{\xi}][rH]$. We choose a basis of the free part of $H$; that is, we write $H=\operatorname{Tor}(H)\oplus \ensuremath{\mathbb{Z}} x_1\oplus \cdots\oplus \ensuremath{\mathbb{Z}} x_k$. Then define the ring morphism $\rho_i:\ensuremath{\mathbb{Z}} [{\xi}][H]\to \ensuremath{\mathbb{Z}} [{\xi}][H]$ as the identity on this basis except that $\rho_i(\e^{x_i})={\xi} \e^{x_i}$. Clearly, the set of states, and thus the state sum, is invariant under $\rho_i$ for any $i$; hence the state sum belongs to $\ensuremath{\mathbb{Z}} [{\xi}][rH]$. The last point follows from the fact that $\psi_*(\eta)=\psi$. \end{proof} \begin{remark} Suppose that $\mathcal{T}$ is not a quasi-regular triangulation but a generalized triangulation where some edges might be loops. Then not all homology classes in $H_{2}(M,G)$ can be represented on $\mathcal{T}$ by admissible colorings. Nevertheless, suppose that an admissible coloring $\wp$ is given on $\mathcal{T}$. Then one can prove that the state sum $\operatorname{\mathsf{\tau}}(\mathcal{T},\mathcal{L},h_1,\wp)$ as above is still equal to the invariant $TV(M,L,h_1,[\wp])$. This might be useful for effective computations.
This can be proven using the fact that up to perturbing the coloring, the triangulation $\mathcal{T}$ can be transformed into a quasi-regular one by a sequence of elementary moves such that at each step, the locally modified coloring is admissible. \end{remark} \section{Skein calculus} \subsection{The category ${\mathcal{C}^H}$ of $\ensuremath{U^H_{{{\xi}}}(\slt) } $ weight modules} For $x\in \ensuremath{\mathbb{C}} $ we extend the notation ${{\xi}}^x$ by setting ${{\xi}}^x=\e^{i\pi x/{{r}}}$. Also, if $(\alpha,k)\in\ensuremath{\mathbb{C}} \times\ensuremath{\mathbb{N}} $, $$\qn{\alpha}={{\xi}}^\alpha-{{\xi}}^{-\alpha}\quad\text{ and }\quad \qn {\alpha;k}!=\Fn k{{{\xi}}^\alpha}=\qn\alpha\qn{\alpha+1}\cdots\qn{\alpha+k-1}. $$ Many computations in this section use the identity $$\qn{x+z}\qn{y+z}-\qn{x}\qn{y}=\qn{x+y+z}\qn{z}.$$ Let $\ensuremath{U^H_{{{\xi}}}(\slt) } $ be the ``unrolled'' quantization of $\ensuremath{\mathfrak{sl}(2)}$, i.e. the $\ensuremath{\mathbb{C}} $-algebra with generators $E, F, K, K^{-1},H$ and the following defining relations: \begin{equation*} KK^{-1} =K^{-1}K=1, \, KEK^{-1} ={\xi}^2E, \, KFK^{-1}={\xi}^{-2}F, \end{equation*} \begin{equation*} HK=KH,\, [H,E]=2E,\, [H,F]=-2F,\, [E,F] =\frac{K-K^{-1}}{{\xi}-{\xi}^{-1}}. \end{equation*} This algebra is a Hopf algebra with coproduct $\Delta$, counit $\varepsilon$, and antipode $S$ defined by the formulas \begin{align*} \Delta(E)&= 1\otimes E + E\otimes K, &\varepsilon(E)&= 0, &S(E)&=-EK^{-1}, \\ \Delta(F)&=K^{-1} \otimes F + F\otimes 1, &\varepsilon(F)&=0,& S(F)&=-KF, \\ \Delta(K)&=K\otimes K, &\varepsilon(K)&=1, & S(K)&=K^{-1}, \\ \Delta(H)&=H\otimes 1 + 1 \otimes H, & \varepsilon(H)&=0, &S(H)&=-H. \end{align*} Following \cite{GPT}, we define $\bar{U}^H_{{{\xi}}}(\slt)$ to be the quotient of $\ensuremath{U^H_{{{\xi}}}(\slt) } $ by the relations $E^{{{r}}}=F^{{{r}}}=0$. It is easy to check that the operations above turn $\bar{U}^H_{{{\xi}}}(\slt)$ into a Hopf algebra.
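The $\qn\cdot$-identity displayed above can be checked by direct expansion:
\begin{align*}
\qn{x+z}\qn{y+z}-\qn{x}\qn{y}
&=({{\xi}}^{x+z}-{{\xi}}^{-x-z})({{\xi}}^{y+z}-{{\xi}}^{-y-z})-({{\xi}}^{x}-{{\xi}}^{-x})({{\xi}}^{y}-{{\xi}}^{-y})\\
&={{\xi}}^{x+y+2z}+{{\xi}}^{-x-y-2z}-{{\xi}}^{x+y}-{{\xi}}^{-x-y}\\
&=({{\xi}}^{x+y+z}-{{\xi}}^{-x-y-z})({{\xi}}^{z}-{{\xi}}^{-z})=\qn{x+y+z}\qn{z},
\end{align*}
the cross terms ${{\xi}}^{x-y}+{{\xi}}^{y-x}$ cancelling between the two products.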
Let $V$ be a $\bar{U}^H_{{{\xi}}}(\slt)$-module. An eigenvalue $\lambda\in \ensuremath{\mathbb{C}} $ of the operator $H:V\to V$ is called a \emph{weight} of $V$ and the associated eigenspace $E_\lambda(V)$ is called a \emph{weight space}. We call $V$ a \emph{weight module} if $V$ is finite-dimensional, splits as a direct sum of weight spaces, and ${\xi}^H=K$ as operators on $V$. Let ${\mathcal{C}^H}$ be the tensor category of weight $\bar{U}^H_{{{\xi}}}(\slt)$-modules. By Section 6.2 of \cite{GPT}, ${\mathcal{C}^H}$ is a ribbon Ab-category with ground ring $\ensuremath{\mathbb{C}} $. The braiding $c_{V,W}:V\otimes W \rightarrow W \otimes V$ on ${\mathcal{C}^H}$ is defined by $v\otimes w \mapsto \tau(R(v\otimes w))$ where $\tau$ is the permutation $x\otimes y\mapsto y\otimes x$ and $R$ is the operator on $V\otimes W$ defined by \begin{equation} \label{eq:R} R={\xi}^{H\otimes H/2} \sum_{n=0}^{{{r}}-1} \frac{\{1\}^{2n}}{\{n\}!}{\xi}^{n(n-1)/2} E^n\otimes F^n. \end{equation} The inverse of the twist on a weight module $V$ is given by the operator \begin{equation} \label{eq:twist} \theta_V^{-1}=K^{{{r}}-1}{\xi}^{-H^2/2}\sum_{n=0}^{{{r}}-1} \frac{\{1\}^{2n}}{\{n\}!}{\xi}^{n(n-1)/2} S(F^n)E^n. \end{equation} For an isomorphism classification of simple weight modules over the usual quantum $\ensuremath{\mathfrak{sl}(2)}$, see for example \cite{Kas}, Chapter~VI. This classification implies that simple weight $\bar{U}^H_{{{\xi}}}(\slt)$-modules are classified up to isomorphism by highest weights. For $\alpha\in \ensuremath{\mathbb{C}} $, we denote by $V_{\alpha}$ the simple weight $\bar{U}^H_{{{\xi}}}(\slt)$-module of highest weight $\alpha+{{{r}}}-1$. This notation differs from the standard labeling of highest weight modules. Note that $V_{-{{{r}}}+1}=\ensuremath{\mathbb{C}} $ is the trivial module and $V_0$ is the so-called Kashaev module.
The well-known Reshetikhin-Turaev construction defines a $\ensuremath{\mathbb{C}} $-linear functor $F$ from the category of ${\mathcal{C}^H}$-colored ribbon graphs with coupons to ${\mathcal{C}^H}$. Let $B=(\ensuremath{\mathbb{C}} \setminus\ensuremath{\mathbb{Z}} )\cup {{r}}\ensuremath{\mathbb{Z}} $. The modules $\{V_\alpha\}_{\alpha\in B}$ are called typical and all have dimension ${{r}}=2{{r'}}+1$. Note that $F$ is trivial on all closed ${\mathcal{C}^H}$-colored ribbon graphs that have at least one color in $B$. In \cite{GPT}, the definition of $F$ is extended to a non-trivial map $F'$ defined on closed ${\mathcal{C}^H}$-colored ribbon graphs with at least one edge colored by a typical module. Let us recall how one can compute $F'$. If $T\subset\ensuremath{\mathbb{R}} \times[0,1]$ is a ${\mathcal{C}^H}$-colored (1-1)-tangle with the two ends colored by the same typical module $V_\alpha$, we can form its ``braid closure'' $\hat T$. Then we say that $T$ is a \emph{cutting presentation} of the closed ${\mathcal{C}^H}$-colored ribbon graph $\hat T$. In this situation, $F(T)$ is an endomorphism of the simple module $V_\alpha$ and is therefore a scalar. Then $F'(\hat T)$ is this scalar multiplied by the modified dimension of $V_\alpha$, which is given by $$ \operatorname{\mathsf{d}}(V_\alpha)=(-1)^{{{r'}}}\frac{\qn{\alpha}}{\qn{{{r}}\alpha}} =\prod_{k=1}^{2{{{r'}}}}\frac1{\qn{\alpha+k}}. $$ It can be shown that $F'(\hat T)$ does not depend on the cutting presentation $T$ of $\hat T$ (see \cite{GPT}). For $\alpha\in B$ let us consider the basis of $V_\alpha$ given by $(v_i=F^iv_0)_{i=0,\ldots,2{{{r'}}}}$ where $v_0$ is a highest weight vector of $V_\alpha$. Then the $\ensuremath{U^H_{{{\xi}}}(\slt) } $-module structure of $V_\alpha$ is given by: $$ H.v_i=(\alpha+2({{{r'}}}-i)) v_i,\quad E.v_i= \frac{\qn i\qn{i-\alpha}}{\qn1^2} v_{i-1} ,\quad F.v_i=v_{i+1}.
$$ \begin{remark} The family of modules indexed by $B$ can be seen as a vector bundle $\mathcal E\twoheadrightarrow B$ on which elements of $\ensuremath{U^H_{{{\xi}}}(\slt) } $ act by continuous linear transformations. Then the $v_i$ are sections of this vector bundle that form a trivialization $\mathcal E\simeq B\times\ensuremath{\mathbb{C}} ^{{r}}$. In fact one can extend $\mathcal E$ to a unique vector bundle $\mathcal E'$ over $\ensuremath{\mathbb{C}} \supset B$ with an action of $\ensuremath{U^H_{{{\xi}}}(\slt) } $, but the fiber over $k\in\ensuremath{\mathbb{Z}} \setminus {{r}}\ensuremath{\mathbb{Z}} $ is not an irreducible module. \end{remark} Let $\mathtt v$ be the $2$-dimensional simple weight $\ensuremath{U^H_{{{\xi}}}(\slt) } $-module of highest weight $1$ and basis $(v_0,v_1)$ with $E.v_1=v_0$ and $v_1=F.v_0$. The categorical dimension of $V_\alpha$ is zero, but that of $\mathtt v$ is non-trivial: $\qdim(\mathtt v) =\qN{-2}=-{{\xi}}-{{\xi}}^{-1}$. \subsection{Duality in ${\mathcal{C}^H}$}\ \\ As in \cite{GPT2}, the ribbon structure of ${\mathcal{C}^H}$ induces functorial left and right dualities given by $V^*=\operatorname{Hom}_\ensuremath{\mathbb{C}} (V,\ensuremath{\mathbb{C}} )$ and the morphisms \begin{align*}\label{E:DualityForCat} b_{V} :\, & \ensuremath{\mathbb{C}} \rightarrow V\otimes V^{*} \text{ is given by } 1 \mapsto \sum v_j\otimes v_j^*,\notag\\ d_{V}:\, & V^*\otimes V\rightarrow \ensuremath{\mathbb{C}} \text{ is given by } f\otimes w \mapsto f(w),\notag\\ d_{V}':\, & V\otimes V^{*}\rightarrow \ensuremath{\mathbb{C}} \text{ is given by } v\otimes f \mapsto f(K^{1-{{{r}}}}v),\notag \\ b_V':\, & \ensuremath{\mathbb{C}} \rightarrow V^*\otimes V \text{ is given by } 1 \mapsto \sum v_j^*\otimes K^{{{{r}}}-1}v_j. \end{align*} For $\alpha\in B$, the classification of simple modules implies that $V_{-\alpha}^*$ is isomorphic to $V_\alpha$.
We consider the isomorphism $w_\alpha:V_\alpha\to V_{-\alpha}^*$ given by $$v_i\mapsto -{{\xi}}^{i^2-1-i\alpha}v_{2{{{r'}}}-i}^*.$$ The isomorphism $w_\alpha$ is, up to a scalar, the unique map that sends $v_0$ to $-\frac1{{\xi}}v_{2{{{r'}}}}^*$ and $v_i=F^iv_0$ to $-\frac1{{\xi}}v_{2{{{r'}}}}^*\circ (-KF)^i=-\frac1{{\xi}}v_{2{{{r'}}}}^*\circ ((-K)^i{{\xi}}^{i(i-1)}F^i)= -(-1)^i{{\xi}}^{i^2-i-1+i(-\alpha-2{{{r'}}})}v_{2{{{r'}}}-i}^*= -{{\xi}}^{i^2-1-i\alpha}v_{2{{{r'}}}-i}^*$. Let $w_\mathtt v=w_{1-2{{{r'}}}}:\mathtt v\stackrel{\sim}{\to}\mathtt v^*$ be the isomorphism given by $v_0\mapsto -{{\xi}}v_1^*$ and $v_1\mapsto v_0^*$. \begin{lemma} For $\alpha\in B$, one has \begin{equation}\label{E:d-andw} d_{V_{\alpha}}(w_{{-\alpha}} \otimes \operatorname{Id}_{V_{\alpha}})=d'_{V_{{-\alpha}}}(\operatorname{Id}_{V_{{-\alpha}}}\otimes w_{\alpha}) \end{equation} and similarly $d_\mathtt v(w_{1-2{{{r'}}}}\otimes\operatorname{Id}_\mathtt v)=d'_\mathtt v(\operatorname{Id}_{\mathtt v}\otimes w_{1-2{{{r'}}}})$. \end{lemma} \begin{proof} Let us denote by $f$ the left hand side of \eqref{E:d-andw} and by $g$ the right hand side of \eqref{E:d-andw}. By a direct computation on $v_i\otimes v_{2{{{r'}}}-i}\in V_{{-\alpha}}\otimes V_{\alpha}$, $$ f(v_i\otimes v_{2{{{r'}}}-i})=d_{V_\alpha}(-{{\xi}}^{i^2-1+i\alpha}v_{2{{{r'}}}-i}^*\otimes v_{2{{{r'}}}-i})=-{{\xi}}^{i^2-1+i\alpha} $$ and \begin{align*} g(v_i\otimes v_{2{{{r'}}}-i})=& d'_{V_{-\alpha}}(-{{\xi}}^{(2{{{r'}}}-i)^2-1-(2{{{r'}}}-i)\alpha}v_{i}\otimes v_{i}^*) \\ =&-{{\xi}}^{4{{{r'}}}^2-4{{{r'}}}i+i^2-1-(2{{{r'}}}-i)\alpha}v_i^*(K^{-2{{{r'}}}}v_i) \\ =&-{{\xi}}^{4{{{r'}}}^2-4{{{r'}}}i+i^2-1-(2{{{r'}}}-i)\alpha}{{\xi}}^{(-2{{{r'}}})(-\alpha+2({{{r'}}}-i))} \\ =& -{{\xi}}^{i^2-1+i\alpha}. \end{align*} The analogous equation for $\mathtt v$ follows similarly: $$ f(v_0\otimes v_1)=-{{\xi}}v_1^*(v_1)=-{{\xi}}= {{\xi}}^{-2{{{r'}}}}=v_0^*(K^{-2{{{r'}}}}v_0)=g(v_0\otimes v_1).
$$ \end{proof} For $\alpha\in B$, we define $d^\alpha$ and $b^\alpha$ to be the following morphisms: \begin{equation}\label{eq:def_duality} d^\alpha=d_{V_{\alpha}}\circ(w_{{-\alpha}} \otimes \operatorname{Id}_{V_{\alpha}}) :V_{-\alpha}\otimes V_\alpha\to\ensuremath{\mathbb{C}} \end{equation} \begin{equation}\label{eq:def_dualityy} b^\alpha= (\operatorname{Id}_{V_{\alpha}}\otimes (w_{-\alpha})^{-1} )\circ b_{V_{\alpha}} :\ensuremath{\mathbb{C}} \to V_\alpha\otimes V_{-\alpha} \end{equation} Similarly, $$ d^\mathtt v=d_\mathtt v\circ(w_\mathtt v\otimes\operatorname{Id}):\mathtt v\otimes\mathtt v\to\ensuremath{\mathbb{C}} \quad\text{ and }\quad b^\mathtt v=(\operatorname{Id}\otimes w_\mathtt v)\circ b_\mathtt v:\ensuremath{\mathbb{C}} \to\mathtt v\otimes\mathtt v. $$ We use the isomorphism $w_{-\alpha}$ to identify $V_\alpha^*$ with $V_{-\alpha}$. Under this identification, we get $d_{{V_{\alpha}}}\cong d'_{{V_{-\alpha}}}\cong d^\alpha$ and $b_{{V_{\alpha}}}\cong b'_{{V_{-\alpha}}}\cong b^\alpha$. Similarly, $d_{\mathtt v}\cong d'_{\mathtt v^*}\cong d^\mathtt v$ and $b_{\mathtt v}\cong b'_{\mathtt v^*}\cong b^\mathtt v$. Graphically, for a ${\mathcal{C}^H}$-colored ribbon graph $\Gamma$, this means that one can reverse the orientation of an edge colored by $V_\alpha$ and simultaneously replace its coloring by $V_{-\alpha}$. Also, if $\Gamma$ has an oriented edge colored by $\mathtt v$, one can forget its orientation. We will represent edges colored by $\mathtt v$ with dashed unoriented edges (see for example Figure \ref{F:X}).
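In particular, the values of $d^\mathtt v$ on the basis vectors of $\mathtt v\otimes\mathtt v$, which are used in several computations below, follow directly from the definition of $w_\mathtt v$:
$$d^\mathtt v(v_0\otimes v_1)=d_\mathtt v(-{{\xi}}v_1^*\otimes v_1)=-{{\xi}},\qquad d^\mathtt v(v_1\otimes v_0)=d_\mathtt v(v_0^*\otimes v_0)=1,$$
while $d^\mathtt v(v_0\otimes v_0)=d^\mathtt v(v_1\otimes v_1)=0$.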
\subsection{Multiplicity modules in $V_\alpha\otimes V_{-\alpha\pm1}\otimes \mathtt v$} \ \\ We consider the following spaces of morphisms of ${\mathcal{C}^H}$ using the notation $$ H_{U,V}^{W}=\operatorname{Hom}_{\mathcal{C}^H}(U\otimes V,W),\quad \quad H^{U,V}_{W}= \operatorname{Hom}_{\mathcal{C}^H}(W,U\otimes V), $$ $$ H_{U,V,W}=\operatorname{Hom}_{\mathcal{C}^H}(U\otimes V\otimes W,\ensuremath{\mathbb{I}}),\quad \quad H^{U,V,W}= \operatorname{Hom}_{\mathcal{C}^H}(\ensuremath{\mathbb{I}},U\otimes V\otimes W), $$ where $U,V,W$ are weight modules. If there is no ambiguity, for $\alpha\in B$ we replace $V_\alpha$ with $\alpha$ in this notation, e.g. $H_{V_\beta,V_\gamma}^{V_\alpha}=H_{\beta,\gamma}^{\alpha}$. Also, since $V_\alpha^*$ and $V_{-\alpha}$ are identified we can replace $V_\alpha^*$ with ${-\alpha}$, e.g. $H_{V_\beta,V_\gamma}^{V_\alpha^*}=H_{\beta,\gamma}^{-\alpha}$. We define the symmetric multiplicity module of $U,V,W$ to be the space $H(U,V,W)$ obtained by identifying the following $12$ isomorphic spaces \begin{align}\label{E:12iso} H^{U,V,W}\simeq H^{W,U,V}\simeq H^{V,W,U}\simeq H^{U}_{W^*,V^*}\simeq H^{W}_{V^*,U^*}\simeq H^{V}_{U^*,W^*}\simeq \notag\\ H^{U,V}_{W^*}\simeq H^{W,U}_{V^*}\simeq H^{V,W}_{U^*}\simeq H_{W^*,V^*,U^*}\simeq H_{V^*,U^*,W^*}\simeq H_{U^*,W^*,V^*} \end{align} where each of these isomorphisms comes from certain duality morphisms (see \cite{Tu}). For $\alpha\in\ensuremath{\mathbb{C}} \setminus\ensuremath{\mathbb{Z}} $ the character formula implies \begin{equation} \label{eq:vst.Va} \mathtt v\otimes V_{\alpha}\simeq V_{\alpha-1}\oplus V_{\alpha+1}. \end{equation} Therefore, for $\alpha\in\ensuremath{\mathbb{C}} \setminus\ensuremath{\mathbb{Z}} $, the space $H^{\mathtt v,\alpha}_{\beta}$ is zero if $\beta\neq\alpha\pm1$ and has dimension $1$ if $\beta=\alpha\pm1$.
Consider the morphism $\Yv-{\alpha+1}{\mathtt v}{\alpha}: V_{\alpha+1}\rightarrow \mathtt v\otimes V_{\alpha}$ given by $$\quad v_0\mapsto v_0\otimes v_0\quad\text{ and }\quad v_i\mapsto {{\xi}}^{-i}v_0\otimes v_i+\qN iv_1\otimes v_{i-1}.$$ This morphism forms a basis of $H^{\mathtt v,\alpha}_{\alpha+1}$. Thus, this morphism and the cyclic isomorphisms $$H^{\alpha,\mathtt v}_{\alpha+1}\simeq H^{\mathtt v,-\alpha-1}_{-\alpha},\quad H_{\alpha+1,\mathtt v}^{\alpha}\simeq H^{\mathtt v,-\alpha-1}_{-\alpha},\quad H_{\mathtt v,\alpha+1}^{\alpha}\simeq H^{\mathtt v,\alpha}_{\alpha+1}$$ induce bases on $H^{\alpha,\mathtt v}_{\alpha+1}$, $H_{\alpha+1,\mathtt v}^{\alpha}$, and $H_{\mathtt v,\alpha+1}^{\alpha}$. Each of these bases consists of a single morphism, which we denote by $$\Yv-{\alpha+1}{\alpha}{\mathtt v} ,\quad \Zv-{\alpha}{\alpha+1}{\mathtt v},\quad \Zv-{\alpha}{\mathtt v}{\alpha+1},$$ respectively. Moreover, the morphism $\Yv-{\alpha+1}{\mathtt v}{\alpha}$ and the isomorphisms represented in Equation \eqref{E:12iso} define a basis vector $\omega^-(\alpha)$ for the symmetric module $H(\mathtt v,\alpha,-\alpha-1)$. Similarly, consider the basis of $H^{\mathtt v,\alpha+1}_\alpha$ given by the morphism $$\Yv+{\alpha}{\mathtt v}{\alpha+1}:\quad v_{2{{{r'}}}}\mapsto {{\xi}}^{-1}\qn{\alpha-2{{{r'}}}}v_1\otimes v_{2{{{r'}}}}\quad\text{ and }$$ $$v_i\mapsto -{{\xi}}^{\alpha-i-1}\qn{1}v_0\otimes v_{i+1} + {{\xi}}^{-1}\qn{\alpha- i}v_1\otimes v_{i}.$$ As above, this morphism and the isomorphisms in \eqref{E:12iso} induce bases of $H^{\alpha+1,\mathtt v}_\alpha$, $H_{\alpha,\mathtt v}^{\alpha+1}$, $H_{\mathtt v,\alpha}^{\alpha+1}$, and $H(\mathtt v,\alpha+1,-\alpha)$, each consisting of a single morphism; these are denoted by $$\Yv+{\alpha}{\alpha+1}{\mathtt v} ,\quad \Zv+{\alpha+1}{\alpha}{\mathtt v},\quad \Zv+{\alpha+1}{\mathtt v}{\alpha}, \quad \omega^+(\alpha),$$ respectively.
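As a consistency check on weights, the vector $\Yv-{\alpha+1}{\mathtt v}{\alpha}(v_0)=v_0\otimes v_0$ is indeed a highest weight vector of weight $(\alpha+1)+{{r}}-1$ in $\mathtt v\otimes V_\alpha$: using $\Delta(H)=H\otimes 1+1\otimes H$ and $\Delta(E)=1\otimes E+E\otimes K$,
$$H.(v_0\otimes v_0)=\big(1+(\alpha+2{{{r'}}})\big)\,v_0\otimes v_0=(\alpha+1+2{{{r'}}})\,v_0\otimes v_0 \quad\text{ and }\quad E.(v_0\otimes v_0)=0,$$
since $E$ annihilates the highest weight vectors of both factors and $2{{{r'}}}={{r}}-1$.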
\begin{figure}[tb] \centering \hspace{10pt} $\epsh{fig09}{16ex}$ \put(2,5){\ms{\alpha+1}}\put(-7,35){\ms{\alpha}} \hspace{28pt} $=\hspace{3pt}\qn{\alpha+1}\epsh{fig10}{16ex}$ \put(-7,5){\ms{\alpha}} \hspace{10pt} \caption{The duality for $H(V_\alpha,V_{\alpha\pm1},\mathtt v)$}\label{F:Mixte_dual} \end{figure} The next proposition is illustrated by Figure \ref{F:Mixte_dual}. It computes the pairing of some of the families of morphisms defined above. \begin{proposition}\label{P:Pairing} \begin{equation} \label{eq:dual_mixte} \Zv- {\alpha}{\mathtt v}{\alpha+1} \circ \Yv+{\alpha}{\mathtt v}{\alpha+1}=\qn{\alpha+1}\operatorname{Id}_{V_{\alpha}} \end{equation} The evaluation of $F'$ on the colored $\Theta$-graph $\epsh{fig14}{6ex}$ induces the pairing\\ $H(\mathtt v,\alpha,-\alpha-1)\otimes H(\mathtt v,\alpha+1,-\alpha)\to\ensuremath{\mathbb{C}} $ determined by $$\left\langle\omega^-(\alpha),\omega^+(\alpha)\right\rangle= (-1)^{{{r'}}}\frac{\qn{\alpha}\qn{\alpha+1}}{\qn{{{r}}\alpha}} =\prod_{k=2}^{2{{{r'}}}}\frac1{\qn{\alpha+k}}.$$ \end{proposition} \begin{proof} The pairing formula follows from the value of $\operatorname{\mathsf{d}}(V_\alpha)$ and from the first statement, which is the result of the following computation $$\Zv- {\alpha}{\mathtt v}{\alpha+1}\circ \Yv+{\alpha}{\mathtt v}{\alpha+1}(v_0)=$$ $$ (d_\mathtt v\otimes{\operatorname{Id}_{V_\alpha}})\circ (w_\mathtt v\otimes{\operatorname{Id}_{\mathtt v}}\otimes{\operatorname{Id}_{V_\alpha}})\circ ({\operatorname{Id}_{\mathtt v}}\otimes\Yv-{\alpha+1}{\mathtt v}{\alpha})\circ \Yv+{\alpha}{\mathtt v}{\alpha+1}(v_0)= $$ $$ (d_\mathtt v\otimes{\operatorname{Id}})\circ (w_\mathtt v\otimes{\operatorname{Id}}\otimes{\operatorname{Id}})\circ ({\operatorname{Id}}\otimes\Yv-{\alpha+1}{\mathtt v}{\alpha})( -{{\xi}}^{\alpha-1}\qn{1}v_0\otimes v_{1} + {{\xi}}^{-1}\qn{\alpha}v_1\otimes v_{0} )= $$ $$ (d_\mathtt v\otimes{\operatorname{Id}})\circ (w_\mathtt v\otimes{\operatorname{Id}}\otimes{\operatorname{Id}})(
-{{\xi}}^{\alpha-1}\qn{1}v_0\otimes v_1\otimes v_{0} + {{\xi}}^{-1}\qn{\alpha}v_1\otimes v_{0}\otimes v_{0} )= $$ $$ (d_\mathtt v\otimes{\operatorname{Id}})( {{\xi}}^{\alpha}\qn{1}v_1^*\otimes v_1\otimes v_{0} + {{\xi}}^{-1}\qn{\alpha}v_0^*\otimes v_{0}\otimes v_{0} )= $$ $$ {{\xi}}^{\alpha}\qn{1} v_{0} + {{\xi}}^{-1}\qn{\alpha}v_{0} = \qn{\alpha+1}v_0. $$ \end{proof} \begin{remark}\label{R:zeroproj} If $\alpha\neq\beta$ are in $\ensuremath{\mathbb{C}} \setminus\ensuremath{\mathbb{Z}} $ then $V_\alpha$ and $V_\beta$ are non-isomorphic simple modules and we have $\operatorname{Hom}(V_\alpha,V_\beta)=0$. Thus, $$\Zv-**\mathtt v\circ\Yv-**\mathtt v=\Zv+**\mathtt v\circ\Yv+**\mathtt v= \Zv-*\mathtt v*\circ\Yv-*\mathtt v*=\Zv+*\mathtt v*\circ\Yv+*\mathtt v*=0.$$ Here and in what follows, the stars $*$ are to be replaced by any elements of $\ensuremath{\mathbb{C}} \setminus\ensuremath{\mathbb{Z}} $ such that the morphisms are defined. \end{remark} \begin{corollary}\label{coro:fusion}(Fusion rule): For any $\alpha\in\ensuremath{\mathbb{C}} \setminus\ensuremath{\mathbb{Z}} $, \begin{equation} \label{eq:fusion} \qn\alpha\operatorname{Id}_{\mathtt v\otimes V_\alpha}= \Yv+{\alpha-1}\mathtt v{\alpha}\circ\Zv-{\alpha-1}\mathtt v{\alpha} -\Yv-{\alpha+1}\mathtt v{\alpha}\circ\Zv+{\alpha+1}\mathtt v{\alpha}. \end{equation} \end{corollary} \begin{proof} This is a direct consequence of Proposition \ref{P:Pairing} and the fact that $\mathtt v\otimes V_\alpha$ splits into a direct sum of simple modules as in \eqref{eq:vst.Va}. (Also see Remark \ref{R:zeroproj}.)
\end{proof} \begin{lemma}\label{lem:com} For all $ \alpha\in\ensuremath{\mathbb{C}} \setminus\ensuremath{\mathbb{Z}} $, one has $$ \left(\operatorname{Id}\otimes\Yv-{\alpha+1}{\alpha}{\mathtt v}\right)\circ \Yv-{\alpha+2}{\mathtt v}{\alpha+1}= \left(\Yv-{\alpha+1}{\mathtt v}{\alpha}\otimes\operatorname{Id}\right)\circ \Yv-{\alpha+2}{\alpha+1}{\mathtt v} $$ and similarly $$ \left(\operatorname{Id}\otimes\Yv+{\alpha-1}{\alpha}{\mathtt v}\right)\circ \Yv+{\alpha-2}{\mathtt v}{\alpha-1}= \left(\Yv+{\alpha-1}{\mathtt v}{\alpha}\otimes\operatorname{Id}\right)\circ \Yv+{\alpha-2}{\alpha-1}{\mathtt v}. $$ \end{lemma} \begin{proof} The first equality is true because both sides are maps $V_{\alpha+2}\rightarrow \mathtt v\otimes V_\alpha\otimes \mathtt v$ determined by $v_0\mapsto v_0\otimes v_0\mapsto v_0\otimes v_0\otimes v_0$. Similarly, an easy computation gives that the other two maps $V_{\alpha-2}\rightarrow \mathtt v\otimes V_\alpha\otimes \mathtt v$ are determined by $ v_{2{{{r'}}}}\mapsto {{\xi}}^{-2}\qn{\alpha}\qn{\alpha-1}v_1\otimes v_{2{{{r'}}}} \otimes v_1$. \end{proof} \begin{figure}[t,b] \centering \hspace{10pt} ${X}=\dfrac1{\qn1}\epsh{fig11}{16ex}$ \put(2,35){\ms{\beta+1}}\put(2,-25){\ms{\beta}}\put(-52,35){\ms{\alpha+1}} \put(-52,-25){\ms{\alpha}} \hspace{10pt} \caption{The family of maps ${X}$}\label{F:X} \end{figure} If $\alpha,\beta\in \ensuremath{\mathbb{C}} \setminus\ensuremath{\mathbb{Z}} $, we will use the following family of operators $${X}:V_\alpha\otimes V_\beta\to V_{\alpha+1}\otimes V_{\beta+1}$$ given by ${X}=\dfrac1{\qn1}(\operatorname{Id}\otimes d^\mathtt v\otimes\operatorname{Id})\circ \left(\Yv+\alpha{\alpha+1}\mathtt v\otimes\Yv+\beta\mathtt v{\beta+1}\right)$. The following lemma shows that the denominator of ${X}$ disappears.
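Computations such as the proof of Proposition \ref{P:Pairing} above, and the proof of the next lemma, reduce to elementary identities among quantum numbers. For instance, assuming the convention $\qn{x}={{\xi}}^x-{{\xi}}^{-x}$ (which is consistent with the formulas of this section), the last step in the proof of Proposition \ref{P:Pairing} is the identity
$$
{{\xi}}^{\alpha}\qn{1}+{{\xi}}^{-1}\qn{\alpha}
=\left({{\xi}}^{\alpha+1}-{{\xi}}^{\alpha-1}\right)
+\left({{\xi}}^{\alpha-1}-{{\xi}}^{-\alpha-1}\right)
=\qn{\alpha+1}.
$$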
\begin{lemma}\label{lem:X} For $\alpha,\beta\in \ensuremath{\mathbb{C}} \setminus\ensuremath{\mathbb{Z}} $ the map ${X}:V_\alpha\otimes V_\beta\to V_{\alpha+1}\otimes V_{\beta+1}$ is given by $$ {X}:v_i\otimes v_j\mapsto {{\xi}}^{\beta+i-j-1}\qn{\alpha-i}v_i\otimes v_{j+1} +{{\xi}}^{-1}\qn{\beta-j}v_{i+1}\otimes v_{j} $$ where $v_{2{{{r'}}}+1}$ should be understood as $0$. \end{lemma} \begin{proof} First, a direct computation shows that \begin{equation} \label{eq:cgc-d} d^\alpha(v_i\otimes v_{2{{{r'}}}-j})=-\delta^i_j{{\xi}}^{i\alpha+i^2-1}, \end{equation} \begin{equation} \label{eq:cgc-b} b^\alpha(1)=\sum_{i=0}^{2{{{r'}}}}-{{\xi}}^{-i\alpha+1-i^2}v_{2{{{r'}}}-i}\otimes v_i. \end{equation} Then using $\Yv+\alpha{\alpha+1}\mathtt v=(\operatorname{Id}\otimes d^{\alpha})\circ (\operatorname{Id}\otimes\Yv+{-\alpha-1}\mathtt v{-\alpha}\otimes\operatorname{Id})\circ (b^{\alpha+1}\otimes\operatorname{Id})$, we have $$ \Yv+\alpha{\alpha+1}\mathtt v(v_i)= -{{\xi}}^i\qn{\alpha-i}v_i\otimes v_1 -{{\xi}}^{-1}\qn{1}v_{i+1}\otimes v_{0}. $$ On the other hand, by definition we have $$ \Yv+\beta\mathtt v{\beta+1}(v_j)=-{{\xi}}^{\beta-j-1}\qn{1}v_0\otimes v_{j+1} + {{\xi}}^{-1}\qn{\beta- j}v_1\otimes v_{j}. $$ Combining these equalities with $d^\mathtt v(v_0\otimes v_1)=-{{\xi}}$ and $d^\mathtt v(v_1\otimes v_0)=1$, the result follows. \end{proof} \subsection{Multiplicity modules in $V_\alpha\otimes V_\beta\otimes V_\gamma$} \label{S:mult} It is well-known that, in the quantum plane $\ensuremath{\mathbb{Z}} \langle x,y\rangle_{/yx={{\xi}}^2xy}$, one has $$\quad(x+y)^i=\sum_{k=0}^i{{\xi}}^{k(i-k)}\qb i kx^ky^{i-k}$$ for all $i\in\ensuremath{\mathbb{N}} .$ Applying this to $y=K^{-1}\otimes F$ and $x=F\otimes1$, we get \begin{equation}\label{E:DeltaF} (\Delta F)^i=(x+y)^i=\sum_{k=0}^i{{\xi}}^{k(i-k)}\qb i k F^kK^{k-i}\otimes F^{i-k}.
\end{equation} The character formula for typical modules (see \cite{GPT}) also implies that for all $\alpha,\beta\in B$ with $\alpha+\beta\notin\ensuremath{\mathbb{Z}} $, \begin{equation} \label{eq:Va.Vb} V_\alpha\otimes V_\beta=\sum_{k=-{{r'}}}^{{r'}} V_{\alpha+\beta+2k} \end{equation} Hence, for $\alpha,\beta,\gamma\in\ensuremath{\mathbb{C}} \setminus\ensuremath{\mathbb{Z}} $, $$ \dim(H_\alpha^{\beta\,\gamma})=\left\{ \begin{array}{l} 1\text{ if }\beta+\gamma-\alpha\in\{-2{{{r'}}},-2{{{r'}}}+2,\ldots,2{{{r'}}}\}\\ 0 \text{ else.} \end{array}\right. $$ Now for $\alpha,\beta,\gamma\in\ensuremath{\mathbb{C}} \setminus\ensuremath{\mathbb{Z}} $ with $\beta+\gamma-\alpha=2k\in\{-2{{{r'}}},-2{{{r'}}}+2,\ldots,2{{{r'}}}\}$ we define a map $\Yn{2k}\alpha\beta\gamma$ which will form a basis for the 1-dimensional space $H_\alpha^{\beta\,\gamma}$. First, suppose $\beta+\gamma-\alpha=-2{{{r'}}}$ then $$ \begin{array}{rcl} \Yn{-2{{{r'}}}}\alpha\beta\gamma:V_\alpha&\to& V_\beta\otimes V_\gamma\\ v_0&\mapsto& v_0\otimes v_0 \\ v_n&\mapsto&(\Delta F)^nv_0\otimes v_0 =\sum_{k=0}^n{{\xi}}^{(n-k)(k-\alpha-2{{{r'}}})}\qb n kv_k\otimes v_{n-k} \end{array} $$ where the last equality follows from Equation \eqref{E:DeltaF}. Now, let $n={{{r'}}}+k$ and define $$ \Yn{2k}\alpha\beta\gamma={X}^{\circ n}\circ \Yn{\hspace*{-6ex}-2{{{r'}}}}\alpha{\beta-n}{\gamma-n}\quad :V_\alpha\to V_\beta\otimes V_\gamma. $$ We now show that these bases are compatible with the cyclic isomorphisms defining the symmetric multiplicity modules. 
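Before turning to the cyclic isomorphisms, let us check the quantum binomial formula used in Equation \eqref{E:DeltaF} in the smallest nontrivial case (assuming that $\qb ik$ denotes the quantum binomial coefficient, so that $\qb21=\qn{2}/\qn{1}={{\xi}}+{{\xi}}^{-1}$). For $i=2$, expanding directly in the quantum plane gives
$$
(x+y)^2=x^2+xy+yx+y^2=x^2+(1+{{\xi}}^2)xy+y^2,
$$
while the formula gives $x^2+{{\xi}}\qb21\,xy+y^2$; these agree since ${{\xi}}({{\xi}}+{{\xi}}^{-1})=1+{{\xi}}^2$.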
Let ${\mathcal R}$ be the cyclic isomorphism \begin{equation} \label{eq:rot} \begin{array}{rcl} {{\mathcal R}}:H_\alpha^{\beta,\gamma}&\to& H_{-\beta}^{\gamma,-\alpha}\\ f&\mapsto& (d^{\beta}\otimes\operatorname{Id}\otimes\operatorname{Id})\circ f\circ(\operatorname{Id}\otimes\operatorname{Id}\otimes b^{\alpha}) \end{array} \end{equation} \begin{remark} The family of maps $\Yn{-2{{{r'}}}}{*}{*}{*}\,$ can be seen as a section of the vector bundle $\mathcal E_{-2{{{r'}}}}$ which is the restriction of $\mathcal E\otimes \mathcal E\otimes \mathcal E$ to the subset of $B^3$ defined by the equation $\beta+\gamma-\alpha=-2{{{r'}}}$. The cyclic isomorphism ${{\mathcal R}}$ is a lift to this vector bundle of the permutation on the base: $(\alpha,\beta,\gamma)\mapsto(-\beta,\gamma,-\alpha)$. The following proposition says that the section $\Yn{-2{{{r'}}}}\alpha\beta\gamma$ is a fixed point of the cyclic isomorphism ${{\mathcal R}}:\mathcal E_{-2{{{r'}}}}\to \mathcal E_{-2{{{r'}}}}$. \end{remark} \begin{proposition}\label{prop:cycl-2M} For all $ (\alpha,\beta,\gamma)\in B^3$ with $\beta+\gamma-\alpha=-2{{{r'}}}$, we have $$ {{\mathcal R}}\left(\Yn{-2{{{r'}}}}\alpha\beta\gamma\right) =\Yn{-2{{{r'}}}}{-\beta}\gamma{-\alpha}. $$ \end{proposition} \begin{proof} Let $f_1=\Yn{-2{{{r'}}}}{-\beta}\gamma{-\alpha}$ and $f_2={{\mathcal R}}\left(\Yn{-2{{{r'}}}}\alpha\beta\gamma\right)$. Since $f_1(v_0)=v_0\otimes v_0$, it suffices to show that $f_2(v_0)=v_0\otimes v_0$. Now $f_2$ is determined by its value on $v_0\in V_{-\beta}$, which must be a multiple of the unique weight vector $v_0\otimes v_0\in V_\gamma\otimes V_{-\alpha}$. Because of this, we do not need to compute all the terms to see that $f_2(v_0)=v_0\otimes v_0$.
In particular, from the facts: \begin{itemize} \item $b_{V_\alpha}:1\mapsto v_{2{{{r'}}}}\otimes v_{2{{{r'}}}}^*+\cdots$ \item $w_{-\alpha}^{-1}(v_{2{{{r'}}}}^*)=-{{\xi}} v_0$ \item $w_{-\beta}(v_0)=-{{\xi}}^{-1}v_{2{{{r'}}}}^*$ \item $\Yn{-2{{{r'}}}}\alpha\beta\gamma(v_{2{{{r'}}}})=(\Delta F)^{2{{{r'}}}}(v_0\otimes v_0)=v_{2{{{r'}}}}\otimes v_0+\cdots$\\ (because $(\Delta F)^{2{{{r'}}}}=F^{2{{{r'}}}}\otimes1+\cdots$) \item $d_{V_\beta}(v_{2{{{r'}}}}^*\otimes v_{2{{{r'}}}})=1$ \end{itemize} one can see that $$f_2(v_0)=(d_{V_\beta}\otimes\operatorname{Id}\otimes\operatorname{Id}) \circ(w_{-\beta}\otimes\Yn{-2{{{r'}}}}\alpha\beta\gamma\otimes w_{-\alpha}^{-1}) \circ (\operatorname{Id}\otimes b_{V_\alpha})(v_0)$$ is equal to $v_0\otimes v_0$. Thus, $f_1=f_2$. \end{proof} To establish the same statement for the maps $\Yn{k}\alpha\beta\gamma$ we will need the following two lemmas. \begin{lemma} \label{lem:turn} Let $\alpha,\beta,\gamma\in\ensuremath{\mathbb{C}} \setminus\ensuremath{\mathbb{Z}} $ such that $\alpha+\beta+\gamma=2-2{{{r'}}}$ then $$ ({X}\otimes\operatorname{Id}) \circ \left(\operatorname{Id}\otimes\Yn{\hspace*{-3ex}-2{{{r'}}}}{1-\alpha}{\beta-1}\gamma\right) \circ b^{\alpha-1}=(\operatorname{Id}\otimes {X}) \circ \left(\operatorname{Id}\otimes\Yn{\hspace*{-6ex}-2{{{r'}}}}{-\alpha}{\beta-1}{\gamma-1}\right) \circ b^{\alpha} $$ \end{lemma} \begin{proof} Both sides of this equality are invariant maps $\ensuremath{\mathbb{C}} \to V_{\alpha}\otimes V_{\beta} \otimes V_{\gamma}$. Let $Z_l$ and $Z_r$ be the maps on the left and right hand sides, respectively. The space $H^{\alpha, \beta, \gamma}$ has dimension $1$ so the maps $Z_l$ and $Z_r$ are proportional. Thus, to show they are equal it is enough to show the functions $(d^\alpha\otimes\operatorname{Id}\otimes d^\gamma)(v_0\otimes Z_i(1)\otimes v_{2{{{r'}}}})$, for $i=l,r$, are equal. First, let us work on the left hand side.
By considering the formulas for $d^\alpha$ and $X$ the only terms of $b^{\alpha-1}(1)$ that contribute nontrivially to the function are $-{{\xi}} v_{2{{{r'}}}}\otimes v_0$ and $-{{\xi}^{1-\alpha}} v_{2{{{r'}}}-1}\otimes v_1$. Therefore, we only need to consider $$\Yn{-2{{{r'}}}}***(v_{0})=v_0\otimes v_0 \text{ and } \Yn{-2{{{r'}}}}***(v_{1})=v_1\otimes v_0+\cdots$$ where the other term(s) contained in the $\cdots$ can be disregarded since $d^\gamma(v_i\otimes v_{2{{{r'}}}})$ is non-zero if and only if $i=0$. So $Z_l(1)$ is equal to \begin{eqnarray*} &-{{\xi}}({X}\otimes\operatorname{Id})(v_{2{{{r'}}}}\otimes v_0\otimes v_0)-{{\xi}^{1-\alpha}}({X}\otimes\operatorname{Id})(v_{2{{{r'}}}-1}\otimes v_1\otimes v_0)+\cdots \\ & =-{\xi}^{\beta-2}\qn{\alpha}(v_{2{{{r'}}}}\otimes v_1\otimes v_0)-{\xi}^{-\alpha}\qn{\beta-2}(v_{2{{{r'}}}}\otimes v_1\otimes v_0)+\cdots \end{eqnarray*} where as above the term(s) contained in the $\cdots$ can be disregarded since they do not contribute non-trivially to the function. So, we have \begin{align*} (d^\alpha\otimes\operatorname{Id}\otimes d^\gamma)(v_0\otimes Z_l(1)\otimes v_{2{{{r'}}}}) &=-({\xi}^{\beta-4}\qn{\alpha} +{\xi}^{-\alpha-2}\qn{\beta-2})v_1\\ &=-{\xi}^{-2}\qn{\alpha+\beta-2}v_1 \end{align*} since $d^x(v_0\otimes v_{2{{{r'}}}})=-{\xi}^{-1}$. Similarly, \begin{align*}(d^\alpha\otimes\operatorname{Id}\otimes d^\gamma)(v_0\otimes Z_r(1)\otimes v_{2{{{r'}}}}) = -{\xi}^{-2}\qn{\gamma-1}v_1 =-{\xi}^{-2}\qn{\alpha+\beta-2}v_1. 
\end{align*} \end{proof} \begin{lemma}\label{lem:comX} Let $\alpha,\beta,\gamma\in\ensuremath{\mathbb{C}} \setminus\ensuremath{\mathbb{Z}} $, then ${X}\otimes\operatorname{Id}$ and $\operatorname{Id}\otimes {X}$ commute: $$({X}\otimes\operatorname{Id})\circ(\operatorname{Id}\otimes {X})=(\operatorname{Id}\otimes {X})\circ({X}\otimes\operatorname{Id}):V_{\beta-1}\otimes V_{\alpha-2}\otimes V_{\gamma-1}\to V_{\beta}\otimes V_{\alpha}\otimes V_{\gamma}.$$ \end{lemma} \begin{proof} The proof follows from composing both sides of the second equality of Lemma \ref{lem:com} with $$\Zv+{\beta}{\beta-1}\mathtt v \otimes \operatorname{Id}_{V_\alpha}\otimes \Zv+{\gamma}\mathtt v{\gamma-1}.$$ \end{proof} \begin{proposition}\label{P:rot} For all $ (\alpha,\beta,\gamma)\in (\ensuremath{\mathbb{C}} \setminus\ensuremath{\mathbb{Z}} )^3$ with $\beta+\gamma-\alpha=k\in\{-2{{{r'}}},-2{{{r'}}}+2,\ldots,2{{{r'}}}\}$ we have $${\mathcal R}\left(\Yn{k}\alpha\beta\gamma\right)= \Yn{\hspace*{-2ex}k}{-\beta}\gamma{-\alpha}$$ where ${\mathcal R}:H_\alpha^{\beta,\gamma}\to H_{-\beta}^{\gamma,-\alpha}$ is given in Equation \eqref{eq:rot}. \end{proposition} \begin{proof} Let $\alpha,\beta,\gamma\in\ensuremath{\mathbb{C}} \setminus\ensuremath{\mathbb{Z}} $ such that $\alpha+\beta+\gamma=2-2{{{r'}}}$; then from Lemmas \ref{lem:turn} and \ref{lem:comX} we have that for $p,q\in\ensuremath{\mathbb{N}} $, $$ ({X}^{\circ p+1}\otimes\operatorname{Id}) \circ (\operatorname{Id}\otimes {X}^{\circ q}) \circ \left(\operatorname{Id}\otimes\Yn{\hspace*{-3ex}-2{{{r'}}}}{1-\alpha}{\beta-1}\gamma\right) \circ b^{\alpha-1}$$ $$=({X}^{\circ p}\otimes\operatorname{Id}) \circ (\operatorname{Id}\otimes {X}^{\circ q+1}) \circ \left(\operatorname{Id}\otimes\Yn{\hspace*{-6ex}-2{{{r'}}}}{-\alpha}{\beta-1}{\gamma-1}\right) \circ b^{\alpha}.
$$ Moreover, for any $n\in\ensuremath{\mathbb{N}} $, $$ ({X}^{\circ n}\otimes\operatorname{Id}) \circ \left(\operatorname{Id}\otimes\Yn{\hspace*{-3ex}-2{{{r'}}}}{n-\alpha}{\beta-n}\gamma\right) \circ b^{\alpha-n}=(\operatorname{Id}\otimes {X}^{\circ n}) \circ \left(\operatorname{Id}\otimes\Yn{\hspace*{-6ex}-2{{{r'}}}}{-\alpha}{\beta-n}{\gamma-n}\right) \circ b^{\alpha}. $$ Therefore, for $n=k/2+{{{r'}}}\in\ensuremath{\mathbb{N}} $ we have $$ {\mathcal R}\left(\Yn{k}\alpha\beta\gamma\right)=(d^{\beta}\otimes\operatorname{Id}\otimes\operatorname{Id}) \circ(\operatorname{Id}\otimes \left({X}^{\circ n}\circ\Yn{\hspace*{-6ex}-2{{{r'}}}}\alpha{\beta-n}{\gamma-n} \right) \otimes\operatorname{Id}) \circ(\operatorname{Id}\otimes b^{\alpha}) $$ $$ =(d^{\beta}\otimes\operatorname{Id}\otimes\operatorname{Id})\circ (\operatorname{Id}\otimes\operatorname{Id}\otimes {X}^{\circ n}) \circ \left(\operatorname{Id}\otimes\Yn{\hspace*{-3ex}-2{{{r'}}}}{\alpha-n}{\beta}{\gamma-n} \otimes\operatorname{Id}\right) \circ(\operatorname{Id}\otimes b^{\alpha-n}) $$ $$ ={X}^{\circ n}\circ (d^{\beta}\otimes\operatorname{Id}\otimes\operatorname{Id}) \circ \left(\operatorname{Id}\otimes\Yn{\hspace*{-3ex}-2{{{r'}}}}{\alpha-n}{\beta}{\gamma-n} \otimes\operatorname{Id}\right) \circ(\operatorname{Id}\otimes b^{\alpha-n}) $$ $$ ={X}^{\circ n}\circ \Yn{\hspace*{-6ex}-2{{{r'}}}}{-\beta}{\gamma-n}{n-\alpha}= \Yn{\hspace*{-2ex}k}{-\beta}{\gamma}{-\alpha} $$ where the second to last equality is given by Proposition \ref{prop:cycl-2M}. \end{proof} The cyclic isomorphisms allow us to define the basis $\Zn{k}\alpha\beta\gamma$ of $H^\alpha_{\beta,\gamma}$ in two equivalent ways: if $\alpha-\beta-\gamma=k$, let $$ \Zn{k}\alpha\beta\gamma=(\operatorname{Id}\otimes d^\gamma)\circ (\Yn{\hspace*{-2ex}k}{\beta}{\alpha}{-\gamma}\otimes\operatorname{Id}) =(d^{-\beta}\otimes\operatorname{Id})\circ (\operatorname{Id}\otimes\Yn{\hspace*{-2ex}k}{\gamma}{-\beta}{\alpha}).
$$ Similarly, if $\alpha+\beta+\gamma=k$, we get a vector $\omega^k(\alpha,\beta,\gamma)$, which forms a canonical basis of the symmetric multiplicity module $H(\alpha,\beta,\gamma)$. In what follows we consider ribbon graphs with coupons colored by the elements $\omega^k(\alpha,\beta,\gamma)$. For such a coupon $c$, Proposition \ref{P:rot} implies that we do not need to know which edges are attached to the bottom of $c$ and which edges are attached to its top. Only the information of the cyclic ordering of these edges is needed to compute $F$ or $F'$. The choice of a half twist $\theta'$ (a family of endomorphisms whose squares are given by the twist) produces isomorphisms \begin{align*} H_{\gamma}^{\alpha,\beta} \rightarrow H_{\gamma}^{\beta,\alpha},\text{ given by } f \mapsto {\theta'_{\alpha}\theta'_{\beta}}{\theta'_{\gamma}}^{-1} C_{V_\alpha,V_\beta}\circ f \end{align*} (see \cite{GPT2} for details). These isomorphisms produce isomorphisms $H(\alpha,\beta,\gamma)\rightarrow H(\beta,\alpha,\gamma)$. The following lemma shows that the bases we have defined above are compatible with these isomorphisms.
\begin{lemma} We can define a half twist on the set of typical modules $\{V_\alpha\}_{\alpha\in B}$ by the formula $$\theta'_\alpha={\xi}^{(\alpha/2)^2-{{r'}}^2}\operatorname{Id}_{V_\alpha}.$$ Let $\alpha,\beta,\alpha+\beta\in\ensuremath{\mathbb{C}} \setminus\ensuremath{\mathbb{Z}} $, then \begin{equation} \label{eq:miror0} C_{V_\alpha,V_\beta}\circ\Yn{\hspace*{0ex}-2{{{r'}}}}{*}{\alpha}{\beta} ={\xi}^{\frac12(\alpha+2{{r'}})(\beta+2{{r'}})} \Yn{\hspace*{0ex}-2{{{r'}}}}{*}{\beta}{\alpha}, \end{equation} \begin{equation} \label{E:br2***} C_{V_\alpha,\mathtt v}\circ\Yv+{\alpha-1}\alpha\mathtt v ={\xi}^{-\frac12\alpha}{\xi}^{{r'}}\Yv+{\alpha-1}\mathtt v\alpha, \end{equation} \begin{equation} \label{E:br3***} C_{\mathtt v,V_\alpha}\circ\Yv+{\alpha-1}\mathtt v\alpha ={\xi}^{-\frac12\alpha}{\xi}^{{r'}}\Yv+{\alpha-1}\alpha\mathtt v, \end{equation} \begin{equation} \label{E:br4***} C_{V_{\alpha},V_{\beta}}\circ {X} ={\xi}^{(-\alpha-\beta+1)/2}{X}\circ C_{V_{\alpha-1},V_{\beta-1}}, \end{equation} and for $n={{r'}}+k$, \begin{equation} \label{eq:miror} C_{V_\alpha,V_\beta}\circ \Yn{\hspace*{-5ex}2{k}}{\alpha+\beta-2k}{\alpha}{\beta} =\dfrac{\theta'_{\alpha+\beta-2k}}{\theta'_{\alpha}\theta'_{\beta}} \Yn{\hspace*{-5ex}2{k}}{\alpha+\beta-2k}{\beta}{\alpha}. \end{equation} \end{lemma} \begin{proof} From Formula \eqref{eq:twist}, we have that $\theta_{V_\alpha}$ acts on the highest weight vector $v_0\in V_\alpha$ as $K^{-2{{r'}}}{\xi}^{H^2/2}v_0={\xi}^{-2{{r'}}(\alpha+2{{r'}})+(\alpha+2{{r'}})^2/2}v_0 ={\xi}^{\alpha^2/2-2{{r'}}^2}v_0$. Hence $\theta'$ is a half twist. Only the ``Cartan'' part ${\xi}^{H\otimes H}$ of the R-matrix \eqref{eq:R} acts non-trivially on the tensor product of two highest weight vectors. Hence $$C_{V_\alpha,V_\beta}(v_0\otimes v_0)={\xi}^{\frac12(\alpha+2{{r'}})(\beta+2{{r'}})}v_0\otimes v_0\in V_\beta\otimes V_\alpha.$$ But $v_0\otimes v_0=\Yn{\hspace*{0ex}-2{{{r'}}}}{*}{\alpha}{\beta}(v_0)$ and this gives Equation \eqref{eq:miror0}.
Similarly, $C_{V_\alpha,\mathtt v}\circ\Yv-{\alpha+1}\alpha\mathtt v ={\xi}^{\frac12\alpha}{\xi}^{{r'}}\Yv-{\alpha+1}\mathtt v\alpha$ and Equation \eqref{E:br2***} follows from the duality of \eqref{eq:dual_mixte}. Equation \eqref{E:br3***} is proved with analogous techniques. To prove Equation \eqref{E:br4***}, we use the fact that $\qn1C_{V_{\alpha},V_{\beta}}\circ {X}$ is equal to \begin{equation}\label{E:erte} \left(\operatorname{Id}\otimes \left(\Zv+\alpha{\alpha-1}\mathtt v\circ C^{-1}_{V_{\alpha-1},\mathtt v}\right)\right) \circ \left(\left(C_{\mathtt v,V_{\beta}}\circ\Yv+{\beta-1}\mathtt v\beta\right) \otimes\operatorname{Id}\right)\circ C_{V_{\alpha-1},V_{\beta-1}}. \end{equation} Here Equation \eqref{E:br3***} can be used to remove the braiding $C_{\mathtt v,V_{\beta}}$ in \eqref{E:erte}. Now Equation \eqref{E:br2***} implies that $\Zv+\alpha{\alpha-1}\mathtt v\circ C^{-1}_{V_{\alpha-1},\mathtt v} ={\xi}^{-\alpha/2}{\xi}^{{r'}}\Zv+\alpha\mathtt v{\alpha-1}$ and Equation \eqref{E:br4***} follows. Finally, Equation \eqref{eq:miror} follows from \\ $ \begin{array}{rl} C_{V_{\alpha},V_{\beta}}\circ {X}^n=&{\xi}^{-\frac12 \big((\alpha+\beta-1)+(\alpha+\beta-3)+\cdots+(\alpha+\beta-2n+1)\big)} {X}^n\circ C_{V_{\alpha-n},V_{\beta-n}} \\ =&{\xi}^{-\frac12n(\alpha+\beta-n)}{X}^n\circ C_{V_{\alpha-n},V_{\beta-n}}. \end{array}$\\ Composing this equation with $\Yn{\hspace*{-5ex}-2{{{r'}}}}{\alpha+\beta-2k}{\alpha-n}{\beta-n}$ and applying Equation \eqref{eq:miror0}, the result follows. \end{proof} \subsection{A Laurent polynomial invariant of planar trivalent graphs} In this section we discuss how the previously defined maps lead to invariants of planar graphs that are, in some sense, Laurent polynomials.
Let $\Gamma\subset \ensuremath{\mathbb{R}} \times[0,1]$ be a planar uni-tri-valent framed graph with trivalent vertices marked by heights, which are integers in $\{-2{{{r'}}},-2{{{r'}}}+2,\ldots,2{{{r'}}}-2,2{{{r'}}}\}$, and whose set $\Gamma_u$ of univalent vertices is included in $\ensuremath{\mathbb{R}} \times\{0,1\}$. The heights can be seen as a $0$-chain $h$ on the CW-complex $\Gamma$ relative to $\Gamma_u$. A coloring of $\Gamma$ is a complex $1$-chain $c\in C_1(\Gamma,\Gamma_u;\ensuremath{\mathbb{C}} )$ such that its boundary is $\delta c=h$. Let $\operatorname{Col}(\Gamma)$ be the affine space of colorings of $\Gamma$ and $\operatorname{Col}_0(\Gamma)$ be the subset of colorings that take no values (no coefficients) in $\ensuremath{\mathbb{Z}} $. Since a coloring is a realisation of $h$ as a boundary, the set $\operatorname{Col}(\Gamma)$ is nonempty if and only if $[h]=0\in H_0(\Gamma,\Gamma_u;\ensuremath{\mathbb{Z}} )$. This means that the sum of the heights of any connected component of $\Gamma$ that does not meet $\Gamma_u$ is zero. Let us assume that this is true and let $n=\dim H_1(\Gamma,\Gamma_u;\ensuremath{\mathbb{C}} )$. Then $\operatorname{Col}(\Gamma)$ is an affine space over $H_1(\Gamma,\Gamma_u;\ensuremath{\mathbb{C}} )$. We then choose a family of $n$ edges $e_1,\ldots ,e_n$ of $\Gamma$. We assume that the union of the interiors of these edges has a complement in $\Gamma/\Gamma_u$ which is simply connected. Then the map $$\operatorname{Col}(\Gamma)\to\ensuremath{\mathbb{C}} ^n, \text{ given by } c\mapsto(c(e_1),\ldots,c(e_n))$$ is bijective. We will also suppose that every edge of $\Gamma$ is in the support of a relative cycle. Hence, any coloring that takes an integer value on an edge can be infinitesimally modified to a coloring of $\operatorname{Col}_0(\Gamma)$. Then $\operatorname{Col}_0(\Gamma)$ is an open dense subset of $\operatorname{Col}(\Gamma)$.
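For example, take for $\Gamma$ the $\Theta$-graph: two trivalent vertices joined by three edges, with heights $k$ and $-k$ (so that $[h]=0$). Orienting the three edges from the first vertex to the second, a coloring is a triple $(\alpha,\beta,\gamma)$ of colors on the edges subject to the single relation $\alpha+\beta+\gamma=k$ (up to the sign conventions chosen for $\delta$). Here $\Gamma_u=\emptyset$, $n=\dim H_1(\Gamma;\ensuremath{\mathbb{C}} )=2$, and choosing $e_1,e_2$ to be two of the three edges, the complement of the union of their interiors is the remaining edge together with the vertices, which is simply connected; the map $c\mapsto(c(e_1),c(e_2))$ then identifies $\operatorname{Col}(\Gamma)$ with $\ensuremath{\mathbb{C}} ^2$.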
If $c\in\operatorname{Col}_0(\Gamma)$, we can form a $\mathcal{C}$-colored ribbon graph $c(\Gamma)$ as follows. First, we choose an orientation of the edges of $\Gamma$. Color each oriented edge $e$ of $\Gamma$ with $V_{c(e)}$. Any trivalent vertex of $\Gamma$ with height $k$ is replaced with a trivalent coupon containing the morphism $\omega^k$ previously defined. Positioning the edges around the coupon involves some choice, but the value under $F$ (or $F'$ if $\Gamma$ is closed, i.e.\ has no univalent vertices) of the resulting ribbon graph does not depend on these choices. \begin{theorem}\label{T:Laurent} Let $\Gamma$ be a planar uni-tri-valent framed graph with height $h$ as above. Also, as above choose $n$ edges $e_1,\ldots ,e_n$ of $\Gamma$. Suppose that $\Gamma$ is not a circle. Then for any coloring $c\in \operatorname{Col}_0(\Gamma)$ define $x(c)$ as follows: \begin{enumerate} \item if $\Gamma$ has univalent vertices, then let $x(c)$ be any fixed coefficient of the matrix in the canonical bases of $F(c(\Gamma))$, \item otherwise, $\Gamma$ is closed and we let $x(c)$ be $F'(c(\Gamma))$. \end{enumerate} Then there exists a unique Laurent polynomial $$P(q_1,\ldots,q_n)\in\ensuremath{\mathbb{Z}} [{{\xi}}][q_1^{\pm1},\ldots,q_n^{\pm1}]$$ such that for any coloring $c\in \operatorname{Col}_0(\Gamma)$, $x(c)=P({{\xi}}^{c(e_1)},\ldots,{{\xi}}^{c(e_n)})$. \end{theorem} \begin{proof} First consider the case $\Gamma_u\neq\emptyset$. For the existence of the Laurent polynomials, it is sufficient to remark that it is true for the elementary morphisms $\Yn{}{*}{*}{*}$, $\Zn{}{*}{*}{*}$ and $b^*$, $d^*$ from \eqref{eq:def_duality}, \eqref{eq:def_dualityy}. Now the uniqueness follows from the general fact that a Laurent polynomial in $n$ variables with complex coefficients which vanishes on an open dense subset of $(\ensuremath{\mathbb{C}} ^*)^n$ must be $0$. In the other case, $\Gamma$ is a closed graph and $x(c)=F'(c(\Gamma))$.
To compute $F'(c(\Gamma))$ we open $c(\Gamma)$ on an edge $e$ to get a cutting presentation of $c(\Gamma)$. The invariant of this cutting presentation is then a scalar times the identity of $V_ {c(e)}$. By the previous argument, this scalar is given by a Laurent polynomial $P_e$. $F'(c(\Gamma))$ is by definition this scalar times $\operatorname{\mathsf{d}}(V_{c(e)})=\operatorname{\mathsf{D}}({{\xi}^{c(e)}})^{-1}= \dfrac{\Qn{{\xi}^{c(e)}}}{\Qn{({\xi}^{c(e)})^{{r}}}}$. This denominator seems to be a problem but in fact it must cancel. Indeed, as $\Gamma$ is not a circle, we have $n\geq2$. But $F'(c(\Gamma))$ does not depend on where we cut and open $c(\Gamma)$. Hence, cutting alternately on the edges $e_1$ and $e_2$, we get that there exist polynomials $P_1,P_2\in\ensuremath{\mathbb{Z}} [{{\xi}}][q_1^{\pm1},\ldots,q_n^{\pm1}]$ such that $F'(c(\Gamma))= \dfrac{P_1({{\xi}}^{c(e_1)},\ldots,{{\xi}}^{c(e_n)})} {{({\xi}^{c(e_1)})^{{r}}}-{({\xi}^{c(e_1)})^{-{{r}}}}}$ with $\dfrac{P_1}{q_1^{{r}}-q_1^{-{{r}}}}=\dfrac{P_2}{q_2^{{r}}-q_2^{-{{r}}}}$. Even if $\ensuremath{\mathbb{Z}} [{\xi}]$ is not a unique factorization domain, one easily sees that this last equality implies that $\frac{1}{q_1^{{r}}-q_1^{-{{r}}}}P_1\in\ensuremath{\mathbb{Z}} [{{\xi}}][q_1^{\pm1},\ldots,q_n^{\pm1}]$. \end{proof} We now use this theorem applied to the tetrahedron graph to give an alternative definition of the polynomials $\J$. This will prove in particular that their coefficients are in $\ensuremath{\mathbb{Z}} [{\xi}]$. As we will see, Theorem \ref{Th:formula} implies that this definition coincides with the formulas given in Section \ref{S:J}. For $(i,j,k)\in\H_{{r'}}$ we consider the planar $1$-skeleton $\Gamma$ of the tetrahedron with heights as follows. Let $v_1, v_2, v_3, v_4$ be the vertices of $\Gamma$: $\Gamma= \epsh{fig03}{8ex} \put(-30,2){$\ms v_1$}\put(-44,13){$\ms v_2$}\put(-43,-13){$\ms v_3$}\put(-12,7){$\ms v_4$}$.
Assign $v_1, v_2, v_3, v_4$ the heights $2i, 2j, 2k, -2i-2j-2k$, respectively. \begin{definition}\label{D:6j} Let $\J_{i,j,k}\in\ensuremath{\mathbb{Z}} [{\xi}][q_1^{\pm1},q_2^{\pm1},q_3^{\pm1}]={\mathfrak L}$ be the Laurent polynomial of Theorem \ref{T:Laurent} associated to $\Gamma$ and the edges $(e_1,e_2,e_3)=(v_2v_3,v_3v_1,v_1v_2)$. Thus, $\J_{i,j,k}$ is the unique Laurent polynomial such that for all $\alpha,\beta,\gamma\in\ensuremath{\mathbb{C}} $ with $\alpha,\beta,\gamma,\alpha-\beta,\beta-\gamma,\gamma-\alpha\notin\ensuremath{\mathbb{Z}} $, \begin{equation} \label{E:J-sjv} \sjv\alpha\beta\gamma{2i}{2j}{2k} =\J_{i,j,k}({\xi}^\alpha,{\xi}^\beta,{\xi}^\gamma) \end{equation} where $\sjv\alpha\beta\gamma{2i}{2j}{2k}=F'(c(\Gamma))$ is the invariant of the graph $\Gamma= \epsh{fig03}{8ex} \put(-27,0){$\ms v_1$}\put(-41,13){$\ms v_2$}\put(-43,-13){$\ms v_3$}\put(-9,5){$\ms v_4$}$ colored with $$ \begin{array}{lll} c(v_2v_3)=\alpha&c(v_3v_1)=\beta&c(v_1v_2)=\gamma\\ c(v_1v_4)=\beta-\gamma-2i\quad&c(v_2v_4)=\gamma-\alpha-2j\quad& c(v_3v_4)=\alpha-\beta-2k \end{array} $$ \begin{figure}[t,b] \framebox{\begin{minipage}[c]{1.0\linewidth} $$ \sjv\alpha\beta\gamma{2i}{2j}{2k} =\epsh{fig03}{18ex}\put(-85,2){$\alpha$} \put(-55,-22){$\beta$}\put(-52,20){$\gamma$} \put(-60,0){$2i$}\put(-90,30){$2j$}\put(-90,-30){$2k$} =\sjtop {j_1}{j_2}{j_3}{j_4}{j_5}{j_6} =\,\epsh{fig02}{20ex}\put(-70,15){$j_1$} \put(-55,0){$j_2$}\put(-60,-25){$j_3$} \put(-35,-15){$j_4$}\put(-48,23){$j_6$}\put(-13,3){$j_5$} $$ $$ \text{ with }\quad \begin{array}{lll} j_1=\alpha&j_2=-\beta&j_6=-\gamma\\ j_4=\beta-\gamma-2i\quad&j_5=\alpha-\gamma+2j\quad&j_3=\alpha-\beta-2k \end{array} $$ \end{minipage}} \caption{The two notations for the $6j$-symbols $\J_{i,j,k}({\xi}^\alpha,{\xi}^\beta,{\xi}^\gamma)$}\label{F:6j} \end{figure} If $|i|,|j|,|k|$ or $|i+j+k|$ is $>{{{r'}}}$, then by convention, set $\mathsmall{\sjv\alpha\beta\gamma{2i}{2j}{2k}}=0$. 
\end{definition} Here we change from the usual notation $\sjtop {j_1}{j_2}{j_3}{j_4}{j_5}{j_6}$ of \cite{GPT2} to the notation $\sjv\alpha\beta\gamma{2i}{2j}{2k}$. We use the new notation because it is closely related to the polynomials $J$ and easily adapts to the computations below. The correspondence between the two notations is given in Figure \ref{F:6j}. \subsection{Computations of the $6j$-symbols} The next proposition establishes the unexpected fact that the family of bases of the multiplicity modules constructed in Section \ref{S:mult} is self-dual. Proposition \ref{prop:dual} is illustrated by Figure \ref{F:typic_dual}, where the left hand side may be seen as a cutting presentation of the $\Theta$-graph. \begin{figure}[t,b] \centering \hspace{10pt} $\epsh{fig12}{16ex}$ \put(-10,37){\ms{\alpha}}\put(-7,24){\ms{k}}\put(-9,-24){\ms{-k}} \hspace{28pt} $=\hspace{3pt}\operatorname{\mathsf{d}}(\alpha)^{-1}\epsh{fig10}{16ex}$ \put(-7,5){\ms{\alpha}} \hspace{10pt} \caption{The duality for $H(V_\alpha,V_{\beta},V_{\gamma})$}\label{F:typic_dual} \end{figure} \begin{proposition}\label{prop:dual} Let $\alpha=\beta+\gamma-2{{{r'}}}$; then $$ \Zn{-2{{{r'}}}}\alpha\beta\gamma\hspace*{2ex}\circ {X}^{\circ 2{{{r'}}}}\circ \Yn{\hspace*{-10ex}-2{{{r'}}}}\alpha{\beta-2{{{r'}}}}{\gamma-2{{{r'}}}} =\operatorname{\mathsf{d}}({\alpha})^{-1}\operatorname{Id}_{V_\alpha} $$ and consequently, if $\alpha+\beta+\gamma=k$ then $$ \left\langle\omega^k(\alpha,\beta,\gamma), \omega^{-k}(-\gamma,-\beta,-\alpha)\right\rangle=1 $$ where the duality $H(\alpha,\beta,\gamma)\otimes H(-\gamma,-\beta,-\alpha)\to\ensuremath{\mathbb{C}} $ is obtained by the evaluation of $F'$ on the colored $\Theta$-graph $\epsh{fig13}{6ex}$ \put(-20,-9){\ms{\alpha}}\put(-20,5){\ms{\beta}}\put(-20,21){\ms{\gamma}}.
\end{proposition} \begin{proof} Let us denote by $\Theta=\epsh{fig13}{6ex} \put(-20,-9){\ms{\alpha}}\put(-20,5){\ms{\beta}}\put(-20,21){\ms{\gamma}}$ the $\Theta$-graph where the coupons are filled with the morphisms $\omega^k(\alpha,\beta,\gamma)$ and $\omega^{-k}(-\gamma,-\beta,-\alpha)$. We use properties of $F'$ to compute $F'(\Theta)$ as follows. We have $$ F'( \Theta )=\operatorname{\mathsf{d}}(V_\alpha)\left\langle \Zn{-2{{{r'}}}}\alpha\beta\gamma\hspace*{2ex}\circ {X}^{\circ 2{{{r'}}}}\circ \Yn{\hspace*{-11.5ex}-2{{{r'}}}}\alpha{\beta-2{{{r'}}}}{\gamma-2{{{r'}}}}\right\rangle $$ $ =\operatorname{\mathsf{d}}(V_{\gamma-2{{{r'}}}}) \left\langle \Zn{-2{{{r'}}}}{\gamma-2{{{r'}}}}\alpha\beta\gamma\hspace*{2ex} \circ \left( \operatorname{Id}\otimes \Zn{-2{{{r'}}}}\alpha\beta\gamma\hspace*{2ex}\right) \right. \qquad\, $ \\ $\hspace*{\fill} \circ \left(\operatorname{Id}\otimes {X}^{\circ 2{{{r'}}}}\right) \circ \left(b^{2{{{r'}}}-\beta}(1)\otimes\operatorname{Id}_{V_{\gamma-2{{{r'}}}}}\right)\Big\rangle . $ \\ We compute the bracket of the right hand side of the last equality by evaluating the morphisms on the lowest weight vector $v_{2{{{r'}}}}\in V_{\gamma-2{{{r'}}}}$. First remark that according to Lemma \ref{lem:X}, ${X}$ sends $$ V_{\delta+i}\otimes V_{\epsilon+i}\ni v_i\otimes v_{2{{{r'}}}}\mapsto {{\xi}}^{-1}\qn{\epsilon+i-2{{{r'}}}} v_{i+1}\otimes v_{2{{{r'}}}} \in V_{\delta+i+1}\otimes V_{\epsilon+i+1}. $$ Therefore, ${X}^{\circ 2{{{r'}}}}(v_i\otimes v_{2{{{r'}}}})=0$ if $i\geq1$ and $$ {X}^{\circ 2{{{r'}}}}(v_0\otimes v_{2{{{r'}}}})=-{{\xi}}\left(\prod_{i=1}^{2{{{r'}}}}\qn{\epsilon+i}\right) v_{2{{{r'}}}}\otimes v_{2{{{r'}}}} $$ where here we use the equalities ${{\xi}}^{-2{{{r'}}}}=-{{\xi}}$ and $\qn{\epsilon+i-2{{{r'}}}}=-\qn{\epsilon+i+1}$.
Applying this to $b^{2{{{r'}}}-\beta}(1)\otimes v_{2{{{r'}}}}\in V_{2{{{r'}}}-\beta}\otimes V_{\beta-2{{{r'}}}}\otimes V_{\gamma-2{{{r'}}}}$ we get $$\operatorname{Id}\otimes {X}^{\circ 2{{{r'}}}}\left(b^{2{{{r'}}}-\beta}(1)\otimes v_{2{{{r'}}}}\right)=\operatorname{Id}\otimes {X}^{\circ 2{{{r'}}}}\left(-{{\xi}}\,v_{2{{{r'}}}}\otimes v_0\otimes v_{2{{{r'}}}}\right) $$ $$ = {{\xi}}^2 \left(\prod_{i=1}^{2{{{r'}}}}\qn{\gamma-2{{{r'}}}+i}\right) v_{2{{{r'}}}}\otimes v_{2{{{r'}}}}\otimes v_{2{{{r'}}}}\in V_{2{{{r'}}}-\beta}\otimes V_{\beta}\otimes V_\gamma . $$ Now using the fact that $\Yn{-2{{{r'}}}}***(v_{2{{{r'}}}})=v_{2{{{r'}}}}\otimes v_0+\cdots$ and $d^*(v_0\otimes v_{2{{{r'}}}})=-{{\xi}}^{-1}$ we have $$\Zn{-2{{{r'}}}}***(v_{2{{{r'}}}}\otimes v_{2{{{r'}}}})=(\operatorname{Id}\otimes d^*)\circ\left(\Yn{-2{{{r'}}}}***\otimes\operatorname{Id}\right)(v_{2{{{r'}}}}\otimes v_{2{{{r'}}}})=-{{\xi}}^{-1}v_{2{{{r'}}}}$$ and we see that the above bracket is equal to $\operatorname{\mathsf{d}}(V_{\gamma-2{{{r'}}}})^{-1}$. \end{proof} Remark that with Theorem \ref{T:Laurent}, the previous result can be restated as saying: the Laurent polynomial associated to the $\Theta$-graph with heights $k,-k$ is constant, equal to $1$. \begin{proposition} \begin{equation} \label{eq:assoc} \left(\operatorname{Id}\otimes\Yn{-2{{{r'}}}}***\right)\circ\Yn{-2{{{r'}}}}***= \left(\Yn{-2{{{r'}}}}***\otimes\operatorname{Id}\right)\circ\Yn{-2{{{r'}}}}*** \end{equation} \begin{equation} \label{eq:assocmixte} \left(\operatorname{Id}\otimes\Yv-*\mathtt v *\right)\circ\Yn{-2{{{r'}}}}***= \left(\Yv-**\mathtt v\otimes\operatorname{Id}\right)\circ\Yn{-2{{{r'}}}}***. \end{equation} \end{proposition} \begin{proof} All these maps send the highest weight vector $v_0$ of the bottom irreducible module to $v_0\otimes v_0\otimes v_0$.
\end{proof} Define the following operators: \begin{itemize} \item $\Xl:V_\alpha\otimes V_\beta\to V_{\alpha-1}\otimes V_{\beta +1}$ by $$ \Xl=(\operatorname{Id}\otimes d^\mathtt v\otimes\operatorname{Id})\circ \left(\Yv-\alpha{\alpha-1}\mathtt v\otimes\Yv+\beta\mathtt v{\beta+1}\right), $$ \item $\Xr:V_\alpha\otimes V_\beta\to V_{\alpha+1}\otimes V_{\beta-1}$ by $$ \Xr=(\operatorname{Id}\otimes d^\mathtt v\otimes\operatorname{Id})\circ \left(\Yv+\alpha{\alpha+1}\mathtt v\otimes\Yv-\beta\mathtt v{\beta-1}\right), $$ \item $\Xlr:V_\alpha\otimes V_\beta\to V_{\alpha-1}\otimes V_{\beta-1}$ by $$ \Xlr=(\operatorname{Id}\otimes d^\mathtt v\otimes\operatorname{Id})\circ \left(\Yv-\alpha{\alpha-1}\mathtt v\otimes\Yv-\beta\mathtt v{\beta-1}\right). $$ \end{itemize} From Corollary \ref{coro:fusion} we have a commutation rule for these operators: \begin{lemma}\label{L:Xcom}We have the following equalities of maps from $V_\alpha \otimes V_\beta$, $$\Xl\circ {X}={X}\circ\Xl\text{ and }\Xr\circ {X}={X}\circ\Xr$$ $$\Xl\circ \Xlr=\Xlr\circ\Xl\text{ and }\Xr\circ \Xlr=\Xlr\circ\Xr$$ $$\Xl\circ \Xr-\qn1\Xlr\circ{X}= \qn{\alpha+1}\qn{\beta}\operatorname{Id}_{V_\alpha\otimes V_\beta}$$ \end{lemma} \begin{proof} Consider the map $\End(\mathtt v\otimes V_\gamma) \rightarrow \operatorname{Hom}(V_*\otimes V_\gamma, V_*\otimes V_\gamma)$ given by $$y \mapsto \left(\Zv\pm**\mathtt v\otimes\operatorname{Id}\right)\circ\left(\operatorname{Id}\otimes y \right)\circ\left(\Yv\pm**\mathtt v\otimes\operatorname{Id}\right).$$ The identities of the lemma are obtained by composing this map with both sides of Equation \eqref{eq:fusion}. \end{proof} The following proposition describes how these operators act on multiplicity modules.
\begin{proposition} For any $\alpha,\beta,\gamma\in\ensuremath{\mathbb{C}} \setminus\ensuremath{\mathbb{Z}} $ and any $k\in\{-{{r'}},\ldots,{{r'}}\}$, $$ \Xl\circ\Yn{\hspace{-3ex}2k}**{\beta-1}= \qn{\beta+{{{r'}}}-k}\Yn{\hspace{-3ex}2k}*{*-1}{\beta} $$ $$ \Xr\circ\Yn{\hspace{-3ex}2k}*{\alpha-1}*= \qn{\alpha+{{{r'}}}-k}\Yn{\hspace{-3ex}2k}*{\alpha}{*-1} $$ $$ \Xlr\circ\Yn{\hspace{0ex}2k\!+\!2}{\gamma}**= \frac{\qn{{{{r'}}}-k}}{\qn1}\qn{\gamma+k+{{{r'}}}+1} \Yn{\hspace{-6ex}2k}{\gamma}{*-1}{*-1} $$ \end{proposition} \begin{proof} Let us start with the first equality. If $k=-{{r'}}$ it is obtained by composing \eqref{eq:assocmixte} with $\operatorname{Id}\otimes\Zv+*\mathtt v*$ where the factor $\qn{1-\beta}=\qn{\beta+2{{r'}}}$ arises from the duality of \eqref{eq:dual_mixte}. Now, for any $k=n-{{r'}}\in\{-{{r'}},\ldots,{{r'}}\}$, we have \begin{eqnarray*} \Xl\circ\Yn{\hspace{-3ex}2k}**{\beta-1}= \Xl\circ X^{n}\circ \Yn{\hspace{-10ex}-2{{r'}}}*{*-n}{\beta-1-n} =X^{n}\circ\Xl\circ\Yn{\hspace{-10ex}-2{{r'}}}*{*-n}{\beta-1-n}\\ =\qn{\beta+2{{r'}}-n}X^{n}\circ\Yn{\hspace{-10ex}-2{{r'}}}*{*-n-1}{\beta-n} = \qn{\beta+{{r'}}-k}\Yn{\hspace{-3ex}2k}*{*-1}{\beta} \end{eqnarray*} which proves the first equality. The proof of the second identity is similar. For the third, Lemma \ref{L:Xcom} implies $$\qn1 \Xlr\circ{X}= \Xl\circ\Xr-\qn{\alpha}\qn{\beta-1}\operatorname{Id}_{V_{\alpha-1}\otimes V_{\beta-1}}.$$ Then since $\Yn{\hspace{0ex}2k\!+\!2}{\gamma}\alpha\beta ={X}\circ\Yn{\hspace{-6ex}2k}{\gamma}{\alpha-1}{\beta-1}$, the identity comes from the equality $$\qn{\alpha+{{r'}}-k}\qn{\beta+{{r'}}-k-1} -\qn{\alpha}\qn{\beta-1} =\qn{\alpha+\beta-k+{{r'}}-1}\qn{{{r'}}-k}$$ with $\gamma=\alpha+\beta-2k-2$.
\end{proof} \begin{proposition}\label{P:sym} $$ \sjv\alpha\beta\gamma{2i}{2j}{2k} =\sjv\beta\gamma\alpha{2j}{2k}{2i} =\sjv\gamma\alpha\beta{2k}{2i}{2j} =\sjv{-\gamma}{-\beta}{-\alpha}{2k}{2j}{2i} $$ $$ =\sjv{-\gamma}{\beta-\gamma-2i}{\alpha-\gamma+2j}{-2(i+j+k)}{2j}{2i} $$ $$ =\sjv{-\beta}{\alpha-\beta-2k}{\gamma-\beta+2i}{-2(i+j+k)}{2i}{2k} =\sjv{-\alpha}{\gamma-\alpha-2j}{\beta-\alpha+2k}{-2(i+j+k)}{2k}{2j} $$ \end{proposition} \begin{proof} These identities are exactly the usual symmetries of $6j$-symbols. \end{proof} \begin{lemma}\label{L:rec} If ($i\leq{{r'}}$ and) $j\geq-{{r'}}$ then \begin{eqnarray} \mathsmall{\qn{i+{{{r'}}}}\qn{\beta-\gamma-i+{{{r'}}}+2}\sjv\alpha\beta\gamma {2i-2}{2j+2}{2k}} =\mathsmall{\qn{\gamma+i+{{{r'}}}-1}\qn{\alpha+j+{{{r'}}}+1} \sjv{\alpha}{\beta}{\gamma-2}{2i}{2j}{2k}}\nonumber\\ \mathsmall{+\qn{\gamma-1}\qn{\alpha-k-{{{r'}}}} \sjv{\alpha+1}{\beta+1}{\gamma-1}{2i}{2j}{2k}}\label{eq:rec} \end{eqnarray} \end{lemma} \begin{proof} We give a graphical proof:\\ $\qn{\gamma}\qn{\alpha-{{r'}}-k}\epsh{fig08}{16ex} \put(-37,14){\ms{\alpha}}\put(-18,1){\ms{\beta}}\put(-9,-5){\ms{\gamma}} \put(-26,25){\ms{2j}}\put(-53,-22){\ms{2k}}\put(-20,-22){\ms{2i}} \quad=\quad-\qn{\gamma}\epsh{fig04}{16ex}\quad =$ \\ \hspace*{10ex}$\qn{-\gamma}\epsh{fig05}{16ex} \put(-50,11){\ms{\alpha\!-\!1}}\put(-30,-1){\ms{\beta\!-\!1}} \quad\stackrel{\eqref{eq:fusion}}{=}\quad \epsh{fig07}{16ex}\quad-\quad\epsh{fig06}{16ex}\quad=$ \\ $\ms{\qn{i+{{r'}}}\qn{\gamma-\beta+i+{{r'}}-1}}\epsh{fig08}{16ex} \put(-47,14){\ms{\alpha\!-\!1}}\put(-30,-2){\ms{\beta\!-\!1}} \put(-1,-6){\ms{\gamma\!+\!1}} \put(-37,28){\ms{2j\!+\!2}}\put(-53,-22){\ms{2k}}\put(-31,-22){\ms{2i\!-\!2}} $ \hspace*{4ex} $ -\ms{\qn{\gamma+i+{{r'}}}\qn{\alpha+j+{{r'}}}}\epsh{fig08}{16ex} \put(-47,14){\ms{\alpha\!-\!1}}\put(-30,-2){\ms{\beta\!-\!1}} \put(-1,-6){\ms{\gamma\!-\!1}} \put(-26,25){\ms{2j}}\put(-53,-22){\ms{2k}}\put(-20,-22){\ms{2i}}$ \\ then substitute $\gamma$ with $\gamma-1$.
\end{proof} \begin{proposition}\label{th:bound6j}\ \\ $\bullet$ If $i+j+k={{{r'}}}$ then \begin{equation} \label{eq:low_formula} \sjv\alpha\beta\gamma{2i}{2j}{2k}= \qn{i,j,k}\qn{\alpha\!-\!{{{r'}}}\!-\!k;{{{r'}}}\!-\!i}! \qn{\beta\!-\!{{{r'}}}\!-\!i;{{{r'}}}\!-\!j}! \qn{\gamma\!-\!{{{r'}}}\!-\!j;{{{r'}}}\!-\!k}! \end{equation} $\bullet$ If $i+j+k=\!-\!{{{r'}}}$ then \begin{equation} \label{eq:high_formula} \sjv\alpha\beta\gamma{2i}{2j}{2k} =\qn{\delta+1;{{{r'}}}+i}!\qn{\epsilon+1;{{{r'}}}+j}!\qn{\phi+1;{{{r'}}}+k}! \end{equation} where we use the notation $\left\{ \begin{array}[c]{l} \delta=\beta-\gamma-2i\\ \epsilon=\gamma-\alpha-2j\\ \phi=\alpha-\beta-2k. \end{array}\right. $ \end{proposition} \begin{proof} Remark first that these formulas are symmetric under the action of $\ensuremath{\mathfrak{S}}_3$ which permutes simultaneously $\{i,j,k\}$ and $\{\alpha,\beta,\gamma\}$ and multiplies the last three variables by the signature of the permutation (see Proposition \ref{P:sym}). We will prove these identities by induction on the natural number $n=2{{r'}}-\max(i,j,k)+\min(i,j,k)$. First, we prove Equation \eqref{eq:low_formula} when $n=0$. In this case, up to a permutation, we have $(i,j,k)=(-{{r'}},{{{r'}}},{{{r'}}})$. Hence, we must prove that \begin{equation}\label{eq:extr_low_formula} \sjv\alpha\beta\gamma{-2{{r'}}}{2{{r'}}}{2{{r'}}}=\qn{\alpha-2{{r'}};2{{r'}}}!=\operatorname{\mathsf{d}}(\alpha)^{-1}. \end{equation} To do this, recall Equation \eqref{eq:assoc}: $$ \left(\Yn{\hspace{-9ex}-2{{r'}}}**{\alpha-\beta-2{{r'}}}\otimes\operatorname{Id}\right)\circ \Yn{\hspace{0ex}-2{{r'}}}\gamma*\beta =\left(\operatorname{Id}\otimes\Yn{\hspace{0ex}-2{{r'}}}\alpha*\beta\right)\circ \Yn{\hspace{0ex}-2{{r'}}}\gamma*\alpha.
$$ Composing this identity with $\Zn{\hspace{0ex}2{{r'}}}**\alpha\circ \left(\operatorname{Id}\otimes\Zn{\hspace{0ex}2{{r'}}}\alpha**\right)$, we get that \begin{equation}\label{E:6jwithd} \operatorname{\mathsf{d}}(\gamma)^{-1}\sjv\alpha\beta\gamma{-2{{r'}}}{2{{r'}}}{2{{r'}}} =\Zn{\hspace{0ex}2{{r'}}}\gamma*\alpha\circ \left(\operatorname{Id}\otimes\left(\Zn{\hspace{0ex}2{{r'}}}\alpha**\circ \Yn{\hspace{0ex}-2{{r'}}}\alpha**\right)\right)\circ \Yn{\hspace{0ex}-2{{r'}}}\gamma*\alpha. \end{equation} Proposition \ref{prop:dual} states that $\Zn{\hspace{0ex}2{{r'}}}x**\circ \Yn{\hspace{0ex}-2{{r'}}}x**=\operatorname{\mathsf{d}}(x)^{-1}\operatorname{Id}$. Applying this identity twice in Equation \eqref{E:6jwithd} we arrive at \eqref{eq:extr_low_formula}. Remark that using the symmetries of the $6j$-symbols, Equation \eqref{eq:extr_low_formula} also implies that \begin{equation}\label{eq:extr_high_formula} \sjv\alpha\beta\gamma{2{{r'}}}{-2{{r'}}}{-2{{r'}}}=\qn{\beta-\gamma-2{{r'}};2{{r'}}}! \end{equation} which is the case $n=0$ for Equation \eqref{eq:high_formula}. Now let $\mathsmall{\sjv\alpha\beta\gamma{2i}{2j}{2k}'}$ be the right-hand side of Equation \eqref{eq:low_formula} (respectively Equation \eqref{eq:high_formula}). For the induction step, it is enough to show that $\mathsmall{\sjv\alpha\beta\gamma{2i}{2j}{2k}'}$ satisfies the relation of Lemma \ref{L:rec}. Indeed, this relation applied to both sides of Equation \eqref{eq:low_formula} (respectively Equation \eqref{eq:high_formula}) after a well-chosen permutation of $(i,j,k)$ allows us to reduce $n$ by $1$ or $2$. \\ $\bullet$ Let us start with Equation \eqref{eq:low_formula}.
Here $i+j+k={{r'}}$ and direct computation shows \\ $ \begin{array}{r} \mathsmall{\qn{\gamma+i+{{{r'}}}-1}\qn{\alpha+j+{{{r'}}}+1} \sjv{\alpha}{\beta}{\gamma-2}{2i}{2j}{2k}}' \mathsmall{+\qn{\gamma-1}\qn{\alpha-k-{{{r'}}}} \sjv{\alpha+1}{\beta+1}{\gamma-1}{2i}{2j}{2k}}'= \\ \mathsmall{\qn{i,j,k}\qn{\alpha-{{r'}}-k;{{r'}}-i+1}!\qn{\beta-i-{{r'}}+1;{{r'}}-j-1}! \qn{\gamma-j-{{r'}}-1;{{r'}}-k}!\times} \\ \mathsmall{ (-\qn{\beta-i-{{r'}}}\qn{\gamma-j+{{r'}}-1}+\qn{\beta+k-{{r'}}}\qn{\gamma-1})} \end{array} $ \\ where we use that $\qn{x\pm(2{{r'}}+1)}=-\qn{x}$. Now this is equal to $\mathsmall{\qn{i+{{{r'}}}}\qn{\beta-\gamma-i+2+{{{r'}}}}\sjv\alpha\beta\gamma {2i-2}{2j+2}{2k}'}$ since $$\mathsmall{\qn{\beta+k-{{r'}}}\qn{\gamma-1}-\qn{\beta-i-{{r'}}}\qn{\gamma-j+{{r'}}-1} =\qn{{{r'}}-j}\qn{\beta-\gamma-i+{{r'}}+2}}.$$ Thus, $\mathsmall{\sjv\alpha\beta\gamma{2i}{2j}{2k}'}$ satisfies the relation of Lemma \ref{L:rec} when $i+j+k={{r'}}$. \\ $\bullet$ We now deal similarly with the case $i+j+k=-{{r'}}$. \\ $ \begin{array}{r} \mathsmall{\qn{\gamma+i+{{{r'}}}-1}\qn{\alpha+j+{{{r'}}}+1} \sjv{\alpha}{\beta}{\gamma-2}{2i}{2j}{2k}}' \mathsmall{+\qn{\gamma-1}\qn{\alpha-k-{{{r'}}}} \sjv{\alpha+1}{\beta+1}{\gamma-1}{2i}{2j}{2k}}'= \\ \mathsmall{\qn{\beta-\gamma-2i+3;i+{{r'}}}!\qn{\gamma-\alpha-2j-1;j+{{r'}}}! \qn{\alpha-\beta-2k+1;k+{{r'}}}!\times} \\ \mathsmall{\left(\qn{\gamma+i+{{r'}}-1}\qn{\alpha+j+{{r'}}+1} +\qn{\gamma-1}\qn{\alpha+i+j}\right)} \end{array} $ \\ But now this is equal to $\mathsmall{\qn{i+{{{r'}}}}\qn{\beta-\gamma-i+2+{{{r'}}}}\sjv\alpha\beta\gamma {2i-2}{2j+2}{2k}'}$ because $$\mathsmall{\left(\qn{\gamma+i+{{r'}}-1}\qn{\alpha+j+{{r'}}+1} +\qn{\gamma-1}\qn{\alpha+i+j}\right)=\qn{i+{{r'}}}\qn{\gamma-\alpha-j-1+{{r'}}}}.$$ Thus, $\mathsmall{\sjv\alpha\beta\gamma{2i}{2j}{2k}'}$ satisfies the relation of Lemma \ref{L:rec} when $i+j+k=-{{r'}}$ and this completes the proof.
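The two-factor quantum-number identities used in this proof are easy to misstate, so a numerical spot check of the first one is worthwhile. The sketch below is our own illustration, not part of the paper's machinery; it assumes the conventions $\qn{x}=q^x-q^{-x}$ with $q=e^{i\pi/(2r'+1)}$, so that $\qn{x\pm(2r'+1)}=-\qn{x}$ as used above:

```python
import cmath

def qn(x, q):
    # unnormalised quantum number {x} = q^x - q^{-x}
    return q**x - q**(-x)

def check_identity(rp, i, j, beta, gamma):
    """Check {b+k-r'}{g-1} - {b-i-r'}{g-j+r'-1} = {r'-j}{b-g-i+r'+2}
    with k = r'-i-j (so i+j+k = r') and q^(2r'+1) = -1."""
    q = cmath.exp(1j * cmath.pi / (2 * rp + 1))   # primitive 2(2r'+1)-th root of unity
    k = rp - i - j
    lhs = (qn(beta + k - rp, q) * qn(gamma - 1, q)
           - qn(beta - i - rp, q) * qn(gamma - j + rp - 1, q))
    rhs = qn(rp - j, q) * qn(beta - gamma - i + rp + 2, q)
    return abs(lhs - rhs) < 1e-9
```

The check passes for generic complex weights $\beta,\gamma$ and any integers $i,j$, which is what the root-of-unity relation $\qn{x\pm(2r'+1)}=-\qn{x}$ guarantees.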
\end{proof} Let us now rewrite and generalize the relation of Lemma \ref{L:rec}: \begin{lemma}\label{L:recud}Let $(i,j,k)\in\H_{{r'}}$ and let $l=-i-j-k$. Here, we again use the ``colors'': $\left\{ \begin{array}[c]{l} \delta=\beta-\gamma-2i\\ \epsilon=\gamma-\alpha-2j\\ \phi=\alpha-\beta-2k \end{array}\right.$ \\ $\bullet$ If $i+j+k=-l<{{r'}}$ and $k<{{r'}}$ then \begin{equation}\label{eq:recsym} \begin{array}[t]{r} \mathsmall{\sjv{\alpha}{\beta}{\gamma}{2i}{2j}{2k}= \frac1{\qn{k-{{r'}}}\qn{-\alpha+k-{{r'}}}}\Bigg(} \mathsmall{\qn{-\phi-k-{{r'}}}\qn{\delta-l-{{r'}}} \sjv{\alpha}{\beta}{\gamma}{2i}{2j}{2k+2}} \\ \hspace{10ex}\mathsmall{+\qn{\phi-1}\qn{-\delta-i-{{r'}}} \sjv{\alpha}{\beta-1}{\gamma}{2i}{2j}{2k+2}\Bigg)} \end{array} \end{equation} $\bullet$ If $N>0$, $i+j+k=-l\leq{{r'}}-N$ and $k\leq{{r'}}-N$ then \begin{equation}\label{eq:recgenup} \begin{array}[t]{r} \mathsmall{\sjv{\alpha}{\beta}{\gamma}{2i}{2j}{2k} =\frac{1}{\qn{k-{{r'}};N}! \qn{-\alpha+k-{{r'}};N}!} \displaystyle{\Bigg(\sum_{n=0}^N\qb Nn\qn{-\phi-k-{{r'}};N-n}!\times}} \\ \mathsmall{\qn{\delta-l-{{r'}};N-n}!\qn{\phi-N;n}!\qn{-\delta-i-{{r'}};n}! \sjv{\alpha}{\beta-n}{\gamma}{2i}{2j}{2k+2N}\Bigg)} \end{array} \end{equation} $\bullet$ If $N>0$, $i+j+k=-l\geq N-{{r'}}$ and $k\geq N-{{r'}}$ then \begin{equation}\label{eq:recgendown} \begin{array}[t]{r} \mathsmall{\sjv{\alpha}{\beta}{\gamma}{2i}{2j}{2k} =\frac{1}{\qn{l-{{r'}};N}! \qn{-\epsilon+l-{{r'}};N}!} \displaystyle{\Bigg(\sum_{n=0}^N\qb Nn\qn{\phi-l-{{r'}};N-n}!\qn{-\beta-k-{{r'}};N-n}!}} \\ \mathsmall{\qn{-\phi-N;n}!\qn{\beta-i-{{r'}};n}! \sjv{\alpha}{\beta+n}{\gamma}{2i}{2j}{2k-2N}\Bigg)} \end{array} \end{equation} \end{lemma} \begin{proof} The first relation \eqref{eq:recsym} is obtained from Equation \eqref{eq:rec} by using the symmetry $\sjv{\alpha}{\beta}{\gamma}{2i}{2j}{2k} =\sjv{\beta}{\beta-\gamma-2i}{\beta-\alpha+2k}{2l}{2k}{2i}$ and renaming the variables. The second relation is shown by induction on $N$.
For $N=1$ it is just Equation \eqref{eq:recsym}. The induction step also follows from Equation \eqref{eq:recsym}. The meticulous reader who wants to check this relation carefully will have to use the identity: $$\mathsmall{\qb Nn \qn{\phi+n-N-1}+\qb N{n-1}\qn{\phi+n-2N-2} =\qb{N+1}n\qn{\phi-N-1}}.$$ The third identity can be obtained from the second by using the symmetry $\sjv{\alpha}{\beta}{\gamma}{2i}{2j}{2k} =\sjv{\epsilon}{-\delta}{\gamma}{2i}{2j}{2l}$ and renaming the variables. \end{proof} \begin{theorem}\label{Th:formula}\ \\ $\bullet$ For any $(i,j,k)\in\H_{{r'}}$ with $i,k\leq i+j+k$, let $N={{r'}}-i-j-k$. Then \begin{equation}\label{eq:mastertop} \begin{array}[t]{r} \sjv{\alpha}{\beta}{\gamma}{2i}{2j}{2k}= \qn{i,j,k}\qn{\alpha-{{{r'}}}-k;j+k}!\qn{\gamma-{{{r'}}}-j;i+j}! \displaystyle{\left(\sum_{n=0}^N\qb Nn\times\right.} \hspace*{0ex}\\ \qn{\beta+k-\alpha+{{r'}}+1;N-n}!\qn{\beta+k-\gamma-i+j-{{r'}};N-n}! \\ \qn{\alpha-\beta-2k-N;n}!\qn{\gamma-\beta+i+{{r'}}+1;n}! \qn{\beta-{{{r'}}}-i-n;{{{r'}}}-j}!\Big) \end{array} \end{equation} \\[1ex] $\bullet$ For any $(i,j,k)\in\H_{{r'}}$ with $j,k\geq i+j+k$, let $N={{r'}}+i+j+k$. Then \begin{equation}\label{eq:masterbot} \begin{array}[t]{r} {\sjv{\alpha}{\beta}{\gamma}{2i}{2j}{2k} =\frac{\qn{\epsilon+N+1;{{{r'}}}+j-N}!}{\qn{N}! } \displaystyle{\Bigg(\sum_{n=0}^N\qb Nn\qn{\beta-i-j+n+1;N-n}!\times}} \\ {\qn{\beta-i+{{r'}}+1;n}!\qn{\phi+N-n+1;{{r'}}+k}! \qn{\delta+n+1;{{{r'}}}+i}!\Bigg)} \end{array} \end{equation} where we use the notation $\left\{ \begin{array}[c]{l} \delta=\beta-\gamma-2i\\ \epsilon=\gamma-\alpha-2j\\ \phi=\alpha-\beta-2k. \end{array}\right. $ \end{theorem} \begin{proof} The first formula is obtained by replacing $\sjv\alpha{\beta-n}\gamma{2i}{2j}{2k+2N}$ with $$\mathsmall{\qn{i,j,k}\qn{\alpha-{{{r'}}}-k+N;{{{r'}}}-i}! \qn{\beta-{{{r'}}}-i-n;{{{r'}}}-j}! \qn{\gamma-{{{r'}}}-j;{{{r'}}}-k-N}!}$$ in Equation \eqref{eq:recgenup} where $l=-i-j-k=N-{{r'}}$.
The second formula is obtained from Equation \eqref{eq:recgendown} by replacing $\sjv\alpha{\beta+n}\gamma{2i}{2j}{2k-2N}$ with $$\qn{\delta+n+1;{{{r'}}}+i}!\qn{\epsilon+1;{{{r'}}}+j}! \qn{\phi+2N-n+1;{{{r'}}}+k-N}!$$ where $l=-i-j-k={{r'}}-N.$ \end{proof} \linespread{1}
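The quantum-binomial identity invoked in the induction step of Lemma \ref{L:recud} holds for generic $q$ and can likewise be checked numerically. A sketch under the assumed conventions $\qn{x}=q^x-q^{-x}$, $\qn{n}!=\qn{1}\qn{2}\cdots\qn{n}$ and $\qb{N}{n}=\qn{N}!/(\qn{n}!\,\qn{N-n}!)$ (our reading of the notation, stated here explicitly):

```python
def qn(x, q):
    # unnormalised quantum number {x} = q^x - q^{-x}
    return q**x - q**(-x)

def qfact(n, q):
    # quantum factorial {n}! = {1}{2}...{n}, with {0}! = 1
    out = 1.0
    for i in range(1, n + 1):
        out *= qn(i, q)
    return out

def qbinom(N, n, q):
    # balanced quantum binomial {N}! / ({n}! {N-n}!)
    return qfact(N, q) / (qfact(n, q) * qfact(N - n, q))

def check_pascal(N, n, phi, q):
    """[N,n]{phi+n-N-1} + [N,n-1]{phi+n-2N-2} = [N+1,n]{phi-N-1}."""
    lhs = (qbinom(N, n, q) * qn(phi + n - N - 1, q)
           + qbinom(N, n - 1, q) * qn(phi + n - 2 * N - 2, q))
    rhs = qbinom(N + 1, n, q) * qn(phi - N - 1, q)
    return abs(lhs - rhs) < 1e-9
```

In the classical limit $q\to 1$ the identity degenerates to the Pascal-type relation $\binom Nn(\phi+n-N-1)+\binom N{n-1}(\phi+n-2N-2)=\binom{N+1}n(\phi-N-1)$, which is a quick sanity check on the signs.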
\section{Introduction} \label{sec:intro} Networks are an essential tool for modeling complex systems. The nodes of a network represent the components of the system and the links between nodes represent interactions between those components. Networks have been applied fruitfully to a wide variety of social \cite{guimer_self-similar_2003, newman_social_2003}, technological \cite{broder_graph_2000}, and biological \cite{jeong_large-scale_2000} systems. Many network properties have been studied to discover how different functional or generative constraints on the network influence the network's structure. In this paper we examine five properties of particular importance: the degree sequence \cite{newman_random_2001}, which counts the number of nodes in the network with $k$ links; the clustering coefficient \cite{newman_properties_2003}, which measures the tendency of connected triples of nodes to form triangles; the number of $q$-cliques, i.e. complete subgraphs with $q$ nodes; the assortativity \cite{newman_assortative_2002}, which measures the tendency of nodes to connect to other nodes of similar degree; and the modularity \cite{newman_finding_2004}, which measures the tendency of nodes in the network to form tightly interconnected communities. Their formal definitions are recalled in Sec.~2. Models of network {\it ensembles} are of interest because they formalize and guide our expectations about real-world networks and their properties \cite{foster_link_2007}. The most famous are the Erd\"os-R\'enyi model of random networks \cite{erdos_random_1959}, and the scale-free Barab\'asi-Albert model \cite{barabasi-albert_1999}. Comparison with an {\it a priori} realistic ``null" model can also indicate which features of a real network are expected based on the null model features, and which are surprising and thus of interest, as in motif search \cite{milo_network_2002}. 
In the latter context, the most popular ensemble is the {\it configuration model} \cite{molloy-reed_1995} and related variants \cite{maslov_specificity_2002}, in which all networks with a given number of nodes and a given degree sequence have the same weight. One problem of the configuration model is that it shows far too little clustering; this problem is especially important when the model is applied to motif search in, e.g., protein interaction networks \cite{baskerville_2007}. A model where clustering can be enhanced by means of a fugacity term in a network Hamiltonian was introduced by D. Strauss \cite{strauss_general_1986} and studied in detail in \cite{park_solution_2005}. In the Strauss model, the density of edges is also controlled by a second fugacity. Thus it is a generalization of the Erd\"os-R\'enyi model with fixed edge {\it probability}, not with fixed edge number. In the Strauss model there is a strong first order phase transition \cite{park_solution_2005} from a phase with weak clustering to a phase where nearly all edges condensate in a single densely connected cluster consisting of high degree nodes. This phase transition is often seen as a flaw, as it does not allow the intermediate clustering observed in most real networks \footnote{This transition can also be seen as first order {\it percolation} transition, since a giant percolating cluster is formed when $\beta$ is increased through the critical point. It is, however, very different from ``explosive percolation" in Achlioptas processes \cite{achlioptas_2009}, which is also a first order percolation transition. While the Strauss model is a genuine thermodynamic model with Hamiltonian structure, and the phase transition happens as a true control parameter is increased, explosive percolation is a strictly nonequilibrium process where the control is done via a {\it density} (of established bonds). 
We could also try to control the bond density in the Strauss model, but then we would get a continuous transition with phase coexistence, as in any system which undergoes a thermodynamic first order transition.}. In the present paper, we introduce and analyze the {\it Biased Rewiring Model} (BRM). As in the configuration model, we fix the exact degree sequence--accounting for quenched heterogeneity in node properties. But as in the Strauss model, we control the average number of closed triangles by a Hamiltonian \cite{park_statistical_2004} containing a conjugate fugacity $\beta$. By fixing the degree sequence we prevent the extreme condensation of edges typical of the Strauss model, and we might {\it a priori} hope to achieve a smooth control of the clustering. Indeed, a very similar model, but with a slightly different Hamiltonian, had been proposed in \cite{milo_network_2002}. To our surprise we found that this is not the case, and the clustering cannot be smoothly controlled. To search for phase transitions, we plotted several characteristics (number of triangles, number of $q$-cliques with $q=4$ and 5, assortativity, and modularity) against $\beta$. In all these plots and for all non-regular graphs (i.e. graphs with a non-trivial degree distribution) we found {\it several} jumps which look like first order phase transitions (or large Barkhausen jumps in ferromagnets \cite{Sears-Zemanski,Perkovic_1995,Zapperi_1997}). Associated with these jumps are important hysteresis effects. Further, we found that high degree nodes play a crucial role in generating these phase transitions. It is thus not surprising that a somewhat simpler scenario holds for regular graphs (same degree $k$ for all nodes), where we found a single phase transition for all $k>2$. The only case where we found no transition at all is that of regular graphs with $k=2$. Unfortunately it is only in the last, somewhat trivial, case that we can do exact analytic calculations.
In all other cases our results are based on simulations. In \cite{milo_network_2002}, the Hamiltonian was chosen to bias not towards a {\it larger} number of triangles, but towards a {\it specific} number. In order to achieve this reliably, one needs a fugacity which is larger than that in the BRM. In the limit of large fugacities this is similar to a model with a hard constraint. In general, statistical models with hard constraints show slower relaxation and worse ergodic behavior than models with soft constraints \cite{barkema_MC}. We expect thus that hysteresis effects might be even more pronounced in the model of \cite{milo_network_2002} and might render it less useful as a null model, even if the problem of phase transitions is hidden. For simplicity we shall in the following call the model of \cite{milo_network_2002} ``triangle conserving", although the name is not strictly correct. We find that for triangle conserving rewiring, important structures remain largely unchanged on extremely long time scales, requiring particular care when using the method. In general, phase transitions, strong hysteresis, and persistent structures of highly connected nodes together present substantial pitfalls for null-models of clustered networks. In the next section we shall collect some basic background information, including the precise definitions of the model with unbiased rewiring and the Strauss model. The definition of the BRM and our numerical procedure is given in Sec.~II.F. Our main results are found in Secs.~III.A to III.C, while some results for the model with hard constraints of Milo {\it et al.} \cite{milo_network_2002} are presented in Sec.~III.D Finally, Sec.~IV contains our conclusions. \section{Background} \label{sec:bg} \subsection{Degree sequences} \label{sec:deg} The degree of a node is the number of links in which the node participates. 
The network's degree sequence $\{n_k |\; k=0,1\ldots k_{\rm max}\}$ counts the number of nodes in the network with degree $k$. The networks studied in this paper are regular ($n_k = \delta_{k,k_0}$), Erd\"os-R\'enyi (poissonian $n_k$), and several real world networks with fat tails. Network properties often depend strongly on the degree sequence \cite{newman_random_2001}. Thus real networks are often compared with null models which preserve the degree sequence. \subsection{Clustering coefficient and $q$-cliques} \label{sec:c} Three nodes are {\it connected} if at least two of the three possible links between them exist. If all three links exist, they form a {\it triangle}. The clustering coefficient \cite{watts_collective_1998} measures the ``transitivity" of relationships in the network, i.e. the probability that three connected nodes also form a triangle. Denoting the number of triangles by $n_\Delta$ and the degree of node $i$ by $k_i$, one has \begin{equation} C = {3n_\Delta\over {1\over 2}\sum_{i=1}^N (k_i-1)k_i}. \label{clust} \end{equation} If every relationship in the network is transitive, $C = 1$; if no relationships are transitive, $C = 0$. Note that the denominator of Eq.~(\ref{clust}) depends only on the degree sequence, and thus $C \propto n_\Delta$ in any ensemble with fixed degrees. In addition to $C$, we can also define similar higher order clustering coefficients based on $q$-cliques, i.e. on complete subgraphs with $q$ nodes, as \begin{equation} C_q = {q\;n_{q-\rm clique} \over \sum_{i=1}^N {k_i \choose q-1}}, \end{equation} where $n_{q-\rm clique}$ is the number of $q-$cliques in the network. Notice that $C=C_3$. As we shall see, we can use any $C_q$ as an order parameter in the phase transitions discussed below. \subsection{Assortativity} \label{sec:r} The assortativity $r$ measures the tendency for nodes in the network to be linked to other nodes of a similar degree.
It is defined as the Pearson correlation coefficient between the degrees of nodes which are joined by a link \cite{newman_assortative_2002}. \begin{equation} r = \frac{L\sum_{i=1}^L j_i k_i - [\sum_{i=1}^L j_i]^2} {L\sum_{i=1}^L j_i^2 - [\sum_{i=1}^L j_i]^2} \label{assort} \end{equation} Here $L$ is the number of links in the network and $j_i$ and $k_i$ are the degrees of nodes at each end of link $i$. Thus, if high degree nodes are linked exclusively to other high degree nodes, $r \approx 1$. If high degree nodes are exclusively linked to low degree nodes, $r \approx -1$. \subsection{Modularity} \label{sec:m} There are many methods for identifying community structure in complex networks \cite{porter_communities}, each with its own strengths and drawbacks. We shall use a measure proposed by Newman and Girvan \cite{newman_finding_2004} called {\it modularity}. Assume one has a given partition of the network into $k$ non-overlapping communities. Define $e_{ij}$ as the fraction of all edges which connect a node in community $i$ to a node in community $j$. Thus $a_i = \sum_j e_{ij}$ is the fraction of all links which connect to community $i$. The modularity of the partition is then defined as: \begin{equation} Q = \sum_i (e_{ii} - a_i^2), \label{mod} \end{equation} and the modularity of the network is the maximum of $Q$ over all partitions. $Q$ measures the fraction of `internal' links, versus the fraction expected for a random network with the same degree sequence. It is large when communities are largely isolated with few cross links. The main problem in computing $Q$ for a network is the optimization over all partitions, which is usually done with some heuristics. The heuristics used in the present paper is a greedy algorithm introduced by Newman \cite{newman_fast_2004}. We start with each node in its own community (i.e., all communities are of size 1). Joining two communities $i$ and $j$ would produce a change $\delta Q_{ij}$. 
All pairs $(i,j)$ are checked, and the pair with the largest $\delta Q_{ij}$ is joined. This is repeated until all $\delta Q_{ij}$ are negative, i.e. until $Q$ is locally maximal. We follow the efficient implementation of this method described by Clauset et al \cite{clauset_finding_2004}. \subsection{Exponential Network Ensembles and Network Hamiltonians} Let us assume that ${\cal G}$ is a set of graphs (e.g. the set of all graphs with fixed number $N$ of nodes, or with fixed $N$ and fixed number $L$ of links, or with fixed $N$ and fixed degree sequence, ...), and $G\in {\cal G}$. Following \cite{park_statistical_2004}, a network {\it Hamiltonian} $H(G)$ is any function defined on ${\cal G}$, used to define an exponential ensemble (analogous to a canonical ensemble in statistical mechanics) by assigning a weight \begin{equation} P(G) \propto e^{-H(G)} \end{equation} to any graph, similar to the Boltzmann-Gibbs weight. Examples of exponential ensembles are the Erd\"os-R\'enyi model $G(N,p)$ where $H = -L\ln [p/(1-p)]$ and the Strauss model with \begin{equation} H_{\rm Strauss} = \theta L - \beta n_\Delta. \end{equation} Here, $p$ (which is not to be confused with $P(G)$) is the probability that a link exists between any two nodes, while $\theta$ and $\beta$ are ``fugacities" conjugate to $L$ and $n_\Delta$, respectively. In the configuration model, ${\cal G}$ is the set of all graphs with a fixed degree sequence and $H=0$. Thus all graphs have the same weight. In contrast, in the ``triangle conserving" biased model of Milo {\it et al.} \cite{milo_network_2002} ${\cal G}$ is again the set of graphs with fixed degree sequence, but \begin{equation} H_{\rm Milo} = \beta |n_\Delta - n_{\Delta,0}|. \end{equation} where $n_{\Delta,0}$ is some target number of triangles, usually the number found in an empirical network. Finally, in the BRM, ${\cal G}$ is again the same but \begin{equation} H_{\rm BRM} = - \beta n_\Delta. 
\end{equation} Thus, while large weights are given in the BRM (with $\beta >0$) to graphs with many triangles (high clustering), in the model of \cite{milo_network_2002} the largest weights are given to graphs with $n_\Delta = n_{\Delta,0}$. \subsection{Simulations: Rewiring} \label{sec:rewire} Simulations of these ensembles are most easily done by the Markov chain Metropolis-Hastings method \cite{barkema_MC}. This is particularly easy for models without fixed degree sequences, e.g. the Strauss model. There, new configurations are simply generated by randomly adding or removing links. This is not possible for the ensembles with fixed degree sequences, where the most natural method is {\it rewiring} \cite{maslov_specificity_2002}. We will first discuss the unbiased case (the configuration model), and then discuss the two biased cases $H_{\rm Milo}$ and $H_{\rm BRM}$. \subsubsection{Unbiased Rewiring} Starting from a current graph configuration $G$, a new graph $G'$ is proposed as follows: Two links which have no node in common are chosen at random, e.g. $X$---$Y$ and $W$---$Z$. Links are then swapped randomly either to $X$---$W$ and $Y$---$Z$, or to $X$---$Z$ and $Y$---$W$. If this leads to a double link (i.e. one or both of the proposed new links is already present), the new graph $G'$ is discarded and $G$ is kept. Otherwise, $G'$ is accepted. It is easily seen that this conserves the degree sequence, satisfies detailed balance, and is ergodic \cite{maslov_specificity_2002}. Thus it leads to equidistribution among all graphs with the degree sequence of the initial graph. Although there seem to exist no exact results on the speed of equilibration, previous experience \cite{maslov_specificity_2002,baskerville_2007} suggests that the above unbiased rewiring is very fast indeed, and can be used efficiently even for large networks. 
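In code, one unbiased rewiring step amounts to the following minimal sketch (the data layout and helper names are ours, not those of any particular implementation):

```python
import random

def unbiased_swap(edges, edge_set):
    """Attempt one degree-preserving rewiring move (configuration model).

    edges: list of (u, v) tuples; edge_set: set of frozensets of the same
    links, kept for O(1) double-link tests.  Returns True iff accepted."""
    i1, i2 = random.sample(range(len(edges)), 2)
    x, y = edges[i1]
    w, z = edges[i2]
    if len({x, y, w, z}) < 4:                      # the two links share a node
        return False
    if random.random() < 0.5:                      # pick one of the two pairings
        new1, new2 = (x, w), (y, z)
    else:
        new1, new2 = (x, z), (y, w)
    if frozenset(new1) in edge_set or frozenset(new2) in edge_set:
        return False                               # would create a double link
    edge_set.difference_update({frozenset((x, y)), frozenset((w, z))})
    edge_set.update({frozenset(new1), frozenset(new2)})
    edges[i1], edges[i2] = new1, new2
    return True
```

Every accepted move preserves each node's degree, and a rejected proposal keeps the current graph, as required for detailed balance.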
\subsubsection{Biased Rewiring} For biased rewiring with a Hamiltonian $H(G)$, the proposal stage is the same, and only the acceptance step has to be modified, according to the standard Metropolis-Hastings procedure \cite{hastings_monte_1970, barkema_MC}: If $H(G') \leq H(G)$, then $G'$ is accepted (unless it has a double link, of course). Otherwise the swap is accepted only with a probability \begin{equation} p = e^{H(G) - H(G')} \label{prob} \end{equation} which is less than 1. The detailed protocols for simulating the two biased models studied in this paper are different. For the BRM we start with the actual network $G_0$ whose degree sequence we want to use, and propose first $M_0$ {\it unbiased} swaps, with $M_0$ sufficiently large so that we end up in the typical region of the unbiased ensemble. After that we increase $\beta$ in small steps (typically $\delta\beta = 0.002$), starting with $\beta=0$. After each step in $\beta$ we propose $M_1$ swaps to equilibrate approximately, and then take at fixed $\beta$ an ensemble average (with further equilibration) by making $m$ measurements, each separated by $M_2$ additional proposed swaps. Thus the total number of proposed swaps at each fixed $\beta$ is $M_1 + (m-1)M_2$. Typically, $M_0 \approx 10^6, M_1 > 10^5, M_2 \approx 10^3 - 10^5$, and $m\approx 500 - 10,000$. Following the $m$ measurements we increase $\beta$ and repeat this procedure, until a preset maximal value $\beta_{\rm max}$ is reached. After that, we reverse the sign of $\delta\beta$ and continue with the same parameters $M_1, M_2,$ and $m$ until we reach again $\beta=0$, thereby forming a hysteresis loop. Fugacity values during the ascending part of the loop will in the following be denoted by $\beta^+$, those in the descending part by $\beta^-$. In cases where we start from a real world network with $n_{\Delta,0}$ triangles, we choose $\beta_{\rm max}$ sufficiently large so that $n_{\Delta}(\beta^+) > n_{\Delta,0}$, i.e.
the clustering covered by the hysteresis loop includes the clustering coefficient of the original network. For the biased model of Milo {\it et al.} \cite{milo_network_2002} we skip the first stage (i.e., we set $M_0=0$), and we jump immediately to a value of $\beta$ (estimated through preliminary runs) which must be larger than the smallest $\beta^+$ which gave rise to $n_{\Delta,0}$ triangles in the ascending part of the loop discussed above. We first make $M_1$ swaps to equilibrate, and then make $m$ measurements, each separated by $M_2$ further swaps (an alternative protocol using multiple annealing periods will be discussed in Sec.~\ref{sec:null}). Averages are taken only over configurations with exactly $n_{\Delta,0}$ triangles. If $\beta$ is too small, the bias will not be sufficient to keep $n_\Delta$ near $n_{\Delta,0}$, and $n_\Delta$ will drift to smaller values. Even if this is not the case and if $\beta$ is sufficiently large in principle, the algorithm will slow down if $\beta$ is near its lower limit, since then $n_\Delta$ will seldom hit its target value. On the other hand, if $\beta$ is too large then the algorithm resembles an algorithm with a rigid constraint, which usually leads to increased relaxation times. Thus choosing an optimal $\beta$ is somewhat delicate in this model. \section{Results} \label{sec:results} We explored the behavior of the BRM for three different classes of degree sequences: Fixed $k$ networks, in which every node of the network has degree $k$; Poisson degree distributions as in Erd\"os-R\'enyi networks; and typical fat-tailed distributions as in most empirical networks.
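Before turning to the results, note that the acceptance rule of Eq.~(\ref{prob}) specialized to $H_{\rm BRM}=-\beta n_\Delta$ only requires the change of the triangle count under a proposed swap, which reduces to four common-neighbour counts. A hedged sketch of this step (our own illustration, not the code used for the simulations):

```python
import math
import random

def delta_triangles(adj, old_pair, new_pair):
    """Triangle-count change when the two old links are replaced by the two
    new ones; adj maps each node to its neighbour set (old links present).
    Assumes the four nodes are distinct, as in a valid swap proposal."""
    (x, y), (w, z) = old_pair
    removed = len(adj[x] & adj[y]) + len(adj[w] & adj[z])
    # take the old links out before counting triangles through the new ones
    adj[x].discard(y); adj[y].discard(x)
    adj[w].discard(z); adj[z].discard(w)
    (a, b), (c, d) = new_pair
    added = len(adj[a] & adj[b]) + len(adj[c] & adj[d])
    adj[x].add(y); adj[y].add(x)                   # restore the graph
    adj[w].add(z); adj[z].add(w)
    return added - removed

def accept_swap(beta, dn):
    """Metropolis rule for H = -beta * n_triangles, cf. Eq. (prob)."""
    return dn >= 0 or random.random() < math.exp(beta * dn)
```

Because two disjoint links can never belong to a common triangle, the four common-neighbour counts above capture the full change of $n_\Delta$.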
Although we studied many more cases (Erd\"os-R\'enyi networks with different connectivities and sizes and several different protein-protein interaction networks), we present here only results for fixed $k$ with different $k$, for one Erd\"os-R\'enyi network, and for two empirical networks with fat-tailed degree distributions: A high energy physics collaboration network \cite{newman_structure_2001} and a protein-protein interaction network for yeast ({\it S. cerevisiae}) \cite{gavin_functional_2002}. In all but the fixed $k$ networks we found multiple discontinuous phase transitions, while we found a single phase transition in all fixed $k$ networks with $k>2$. \subsection{Fixed $k$ networks, analytic and simulation results} \label{sec:fixedk} \begin{figure} \includegraphics[scale=.32]{Figure1.pdf} \caption{\label{Figure1} (color online) Average number of triangles for networks with fixed $k=3$, plotted against $\beta$. All curves are obtained by full hysteresis cycles, with $M_1 = 200000$ initial swaps after each increase/decrease of $\beta$, and $M_2=5000$ additional swaps after each of $m = 40000$ measurements at the same value of $\beta$. Hysteresis loops are seen for $N\geq 200$, but not for $N\leq 100$. The straight line corresponds to the approximation Eq.~(\ref{n_approx}).} \end{figure} \subsubsection{Fixed $k$ simulations} For each $k$, the configuration with maximal $n_\Delta$ is a disjoint set of $(k+1)-$cliques, i.e. the graph decomposes into disjoint completely connected components of $k+1$ nodes. When $N$ is divisible by $k+1$, this gives \begin{equation} n_\Delta^{(k,\rm max)} = {N\over k+1}{k+1\choose 3}. \label{n_max} \end{equation} For $k=2$, this $n_\Delta^{(k,\rm max)}$ is reached in a smooth way. For each $k\geq 3$, in contrast, and for sufficiently large $N$, $n_\Delta$ first increases proportional to $\exp(\beta)$, but then the increase accelerates and finally it jumps in a discrete step to a value very close to $n_\Delta^{(k,\rm max)}$.
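For concreteness, Eq.~(\ref{n_max}) can be evaluated with a few lines of Python (a sketch; the function name is our own):

```python
from math import comb

def n_tri_max(N, k):
    """Maximal triangle number for a fixed-k degree sequence, Eq. (n_max):
    the graph breaks into N/(k+1) disjoint (k+1)-cliques, each holding
    C(k+1, 3) triangles.  Assumes N is divisible by k+1."""
    assert N % (k + 1) == 0
    return (N // (k + 1)) * comb(k + 1, 3)
```

For $k=3$ and $N=400$ (as in Fig.~\ref{Figure2}) this gives $100 \times 4 = 400$ triangles.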
This is illustrated for $k=3$ in Fig.~\ref{Figure1}, where we plot hysteresis curves for $n_\Delta$ against $\beta$. From this and from similar plots for different $k$ we observe the following features: \begin{itemize} \item For small $\beta$, all curves are roughly described by \begin{equation} n_\Delta \approx {(k-1)^3 \over 6} e^\beta \label{n_approx} \end{equation} (see the straight line in Fig.~\ref{Figure1}), and this approximation seems to become exact as $N\to\infty$. Notice that this implies that $n_\Delta$ is independent of $N$, and the clustering coefficient is proportional to $1/N$. \item While the curves are smooth and do not show hysteresis for small $N$, they show both jumps and hysteresis above a $k-$dependent value of $N$. This is our best indication that the phenomenon is basically a first order phase transition, similar to the one in the Strauss model. Above the jump, the curves saturate (within the resolution of the plot) the bound given in Eq.~(\ref{n_max}). \item The critical values of $\beta$ increase logarithmically with $N$, although a precise determination is difficult due to the hysteresis. Notice that size-dependent critical points are not very common, but there are some well known examples. Maybe the most important ones are models with long range or mean field type interactions, where the number of interaction terms increases faster than $N$. In the present case the reason for the logarithmic increase of $\beta_c$ is that networks with fixed $k$ become more and more sparse as $N$ increases. Thus the {\it density} of triangles (the clustering coefficient) also decreases, and in a Markov chain MC method, proposed moves which destroy triangles increasingly outnumber moves which create them. To compensate for this and make the numbers of accepted moves equal, $\exp(\beta_c)$ has to increase $\propto N$.
\end{itemize} \begin{figure} \includegraphics[scale=.22]{Figure2.pdf} \caption{\label{Figure2} Average number of triangles of fixed-$k$ degree sequence networks, with $k = 2,3,5,10,$ and $16$, versus the fugacity (bias) $\beta$. Network size is $N=400$ for all curves. In these simulations $\beta$ was slowly increased until a jump in $n_\Delta$ was seen (for $k\geq 3$). The straight line shows the theoretical prediction for $k = 2$: $n_\Delta = \frac{1}{6} e^{\beta}$. The inset shows $n_\Delta/e^{\beta}$ for $k=2$.} \end{figure} In Fig.~\ref{Figure2} we show the average number of triangles as a function of $\beta$ for fixed $k$ networks, $k = 2, 3, 5, 10,$ and $16$, with $N = 400$ nodes. For each curve we used $M_1=4000000$ initial swaps after each increase in $\beta$, and $M_2=200000$ additional swaps after each of $m\geq 5000$ measurements at the same value of $\beta$. For clarity we show only values for increasing $\beta$, although there is strong hysteresis for all $k\geq 3$ at $N=400$. For $k=2$ there is not only no hysteresis, but indeed no indication of any phase transition. As seen from the inset, the data for $k=2$ are very well described for all values of $\beta$ by Eq.~(\ref{n_approx}), up to the point where $n_\Delta$ reaches the bound of Eq.~(\ref{n_max}). Close to that point there is a tiny bump in the curve shown in the inset, which will be explained in the next subsubsection. \subsubsection{$k = 2$ analytic results} We now give an analytical derivation of Eq.~(\ref{n_approx}) for $k=2$, and we also show that this should become exact in the limit $N\to\infty$. In a fixed $k = 2$ network, there are $N$ nodes and $N$ links, all arranged in a set of disjoint simple loops. Triangles are the smallest possible loops, since self-links and double links are not allowed.
For large $N$ and small $\beta$ nearly all loops are large, thus the number of loops of length $<7$ is of order $1/N$ and can be neglected for $N\to\infty$ and finite $\beta$, except that we have to allow for a small fraction of loops to have length 3, in order to achieve equilibration of the rewiring procedure. Consider now a network of size $N$ with $n_\Delta$ triangles and a triangle bias $\beta$. The rewiring process will reach an equilibrium when the probability of destroying a triangle is equal to the probability of creating a new one. First we calculate the probability of randomly generating a swap which destroys a triangle. The total number of ways to choose a pair of links and perform a swap is ${\cal N} = \frac{N(N - 1)}{2} \times 2$, where $\frac{N(N - 1)}{2}$ gives the number of distinct pairs of links and the extra factor of 2 accounts for the two possible ways of swapping the links. To destroy a triangle, one of the links must be chosen from it, and the other from a larger loop (the chance that both links are chosen from triangles, which would lead to the destruction of both, can be neglected). There are $3n_\Delta$ possible links in triangles to choose from, and $(N- 3n_\Delta)$ links in larger loops. Thus the probability of choosing a swap which would destroy a triangle is \begin{equation} p_{\Delta-} = \frac{3n_\Delta(N-3n_\Delta)\times 2}{\cal{N}} = \frac{6n_\Delta}{N} \times [1+O(N^{-1})], \end{equation} where the factor of $2$ in the numerator corresponds to the fact that both possible swaps destroy a triangle, and the correction term also takes into account the neglected loops of lengths 4, 5, and 6. To add a triangle to the network, two links must be chosen from the same long loop. They must be separated by exactly two links. There are $\ell$ such pairs in a loop of length $\ell$, and thus the total number of such pairs in the network is $N$, neglecting terms of $O(1)$ corresponding to the triangles and loops shorter than $7$.
This leaves us with the probability of adding a triangle \begin{equation} p_{\Delta+} = \frac{N}{\cal{N}} = N^{-1} \times [1+O(N^{-1})], \end{equation} where there is no factor of $2$ in the numerator because only one of the two possible swaps will lead to triangle creation. Balance will be achieved when \begin{equation} p_{\Delta+} = e^{-\beta}p_{\Delta-}, \end{equation} giving \begin{equation} n_\Delta = \frac{e^\beta}{6} \label{balance2} \end{equation} up to correction terms of order $1/N$, which is just Eq.~(\ref{n_approx}) for $k=2$. The simple exponential behavior of $n_\Delta$ with $\beta$ occurs because swaps create/destroy triangles (except in the rare case of breaking up a loop of length 6) independently and one at a time. For networks with nodes of degree greater than $2$ this is still basically true when $\beta$ is small. But as $\beta$ increases, nodes cluster together more densely, allowing each link to participate in many triangles. For large values of $\beta$ these links, once formed, become difficult to remove from the network. This cooperativity -- in which the presence of triangles helps other triangles to form and makes it harder for them to be removed -- explains intuitively the existence of first order phase transitions for $k\geq3$ but not for $k=2$, where the cooperative effect is not possible. Indeed, for $n_\Delta$ very close to $n_\Delta^{(2,\rm max)}$ there is {\it some} cooperativity even for $k=2$. The configuration with $n_\Delta=n_\Delta^{(2,\rm max)}$ can be changed only by breaking up {\it two} triangles and joining their links in a loop of length 6. When $n_\Delta$ is close to $n_\Delta^{(2,\rm max)}$, link swaps which involve two triangles become increasingly prevalent. The tendency to form and destroy triangles two at a time introduces a very weak cooperativity, which is only strong enough to be effective when $n_\Delta^{(2,\rm max)}-n_\Delta=O(1)$.
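As an aside, the combinatorial counting behind these rates can be checked exactly by brute force on a small $k=2$ configuration. The sketch below (our own illustration: one triangle plus a single loop of length $N-3$) enumerates every valid double-edge swap and classifies it by its effect on $n_\Delta$; the totals reproduce the exact counts $3n_\Delta(N-3n_\Delta)\times 2$ for destroying swaps and $N-3n_\Delta$ (i.e. $N$ up to $O(1)$ terms) for creating ones:

```python
from itertools import combinations

def count_triangles(adj):
    # brute-force triangle count over all node triples
    return sum(1 for a, b, c in combinations(sorted(adj), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

def swap_outcomes(adj):
    """Enumerate every valid double-edge swap and classify it by the
    change it would make in the triangle number."""
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    n0 = count_triangles(adj)
    destroy = create = 0
    for (a, b), (c, d) in combinations(edges, 2):
        if len({a, b, c, d}) < 4:            # shared node: swap undefined
            continue
        # the two possible rewirings of the chosen link pair
        for (p, q), (r, s) in (((a, d), (c, b)), ((a, c), (b, d))):
            if q in adj[p] or s in adj[r]:   # double link: proposal invalid
                continue
            new = {u: set(nb) for u, nb in adj.items()}
            new[a].discard(b); new[b].discard(a)
            new[c].discard(d); new[d].discard(c)
            new[p].add(q); new[q].add(p)
            new[r].add(s); new[s].add(r)
            dn = count_triangles(new) - n0
            if dn < 0:
                destroy += 1
            elif dn > 0:
                create += 1
    return destroy, create
```

For $N=30$ with one triangle ($n_\Delta=1$) this yields exactly $162$ destroying and $27$ creating swaps, in agreement with the rates used above.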
It is thus not enough to give rise to a phase transition, but it explains the small bump seen in the inset of Fig.~\ref{Figure2}. \subsection{Networks with non-trivial degree sequences} \label{sec:ER} \begin{table} \begin{tabular}{ c c c c c c l } \multicolumn{7}{c}{Network properties} \\ \hline \hline Network & $N$ & $\langle k\rangle $ & $C$ & $r$ & $Q$ & Comment and Ref.\\ \hline ER & $800$ & $5.0$ & $.002$ & $-.0004$ & $0.196$ & Erd\"os-R\'enyi \\ HEP & $7610$ & $4.1$ & $.33$ & $.29$ & $0.397$ & scientific collab. \cite{newman_structure_2001}\\ Yeast & $1373$ & $10.0$ & $.58$ & $.58$ & $0.380$ & protein binding \cite{gavin_functional_2002}\\ \end{tabular} \caption{\label{table1} The number of nodes $N$, the average degree $\langle k\rangle$, the clustering coefficient $C$, the assortativity $r$, and the modularity $Q$ for each of the networks discussed in Sec.~\ref{sec:ER}.} \end{table} \begin{figure} \includegraphics[scale=.22]{Figure3.pdf} \caption{\label{Figure3} Average number of triangles in BRM networks with an ER degree sequence with 800 nodes and $\langle k\rangle = 5$, plotted against the bias $\beta$. The lower curve corresponds to slowly increasing $\beta$, the upper to decreasing $\beta$.} \end{figure} \begin{figure} \includegraphics[scale=.22]{Figure4.pdf} \caption{\label{Figure4} Similar to Fig.~\ref{Figure3}, but for the HEP network (see Table~\ref{table1}). The dotted line indicates the number of triangles in the real network.} \end{figure} \begin{figure} \includegraphics[scale=.22]{Figure5.pdf} \caption{\label{Figure5} Similar to Fig.~\ref{Figure3}, but for the Yeast network (see Table~\ref{table1}).} \end{figure} We explored the behavior of our biased rewiring model for various degree sequences. These included Erd\"os-R\'enyi graphs with different sizes and different connectivities, and several real-world networks. The latter typically show more or less fat tails.
In order to find any dependence on the fatness, we also changed some of the sequences manually in order to reduce or enhance the tails. We found no significant systematic effects beyond those visible already from the following three typical networks, and restrict our discussion in the following to these: an Erd\"os-R\'enyi graph \cite{erdos_random_1959} (henceforth ER), a high energy physics collaboration network (HEP) \cite{newman_structure_2001}, and a yeast protein binding network (Yeast) \cite{gavin_functional_2002}. Some of their properties are collected in Table~\ref{table1}. \begin{figure} \includegraphics[scale=.22]{Figure6.pdf} \caption{\label{Figure6} (color online) Four network characteristics (modularity ($Q$), clustering coefficient ($C_3$), 4-clique clustering coefficient ($C_4$), and assortativity ($r$)) for BRM networks with the Yeast degree sequence of Table~\ref{table1} versus $\beta$. These data are drawn from the same simulation as in Fig.~\ref{Figure5}, but for clarity only the results for increasing values of $\beta$ are shown.} \end{figure} Figs.~\ref{Figure3}, \ref{Figure4}, and \ref{Figure5} show $n_\Delta$ for these three networks. In each case $M_0 = 10^6, M_1 = 1.5\times 10^5, M_2 = 50000$, and $m = 500$. In each of them a full hysteresis cycle is shown, with the lower curves (labeled $\beta^+$) corresponding to increasing and the upper curves ($\beta^-$) corresponding to decreasing $\beta$. In Figs.~\ref{Figure4} and \ref{Figure5} the dotted line shows the number of triangles in the empirical networks. For small values of $\beta^+$ all three figures exhibit an exponential increase in the number of triangles similar to that observed in fixed $k$ networks. At some value of $\beta$, different for each network, there is then a sudden, dramatic increase in $n_\Delta$, which does {\it not}, however, lead to saturation as it did for fixed $k$.
This first phase transition is followed by a series of further transitions through which the network becomes more and more clustered. Many of them are comparable in absolute magnitude to the first jump. Although the rough positions of the jumps depend only on the degree sequences, their precise positions and heights change slightly with the random number sequences used and with the speed with which $\beta$ is increased. Thus the precise sequence of jumps has presumably no deeper significance, but their existence and general appearance seem to be a universal feature found in {\it all} cases. Associated with the jumps in $n_\Delta$ are jumps in all other network characteristics we looked at, see Fig.~\ref{Figure6}. Although the locations of the jumps in $n_\Delta$ depend slightly on the details of the simulation, the jumps in the other characteristics occur always at {\it exactly} the same positions as those in $n_\Delta$. Obviously, at each jump a significant re-structuring of the network occurs, which affects all measurable quantities. Speculations on how these reorganizations can best be described, and on their most ``natural'' driving mechanism, will be given in the next subsection. \begin{figure} \includegraphics[scale=.32]{Figure7.pdf} \caption{\label{Figure7} (color online) Values of the rescaled characteristics $n_\Delta/n_{\Delta,\rm max}$ and $(r-r_{\rm min})/(r_{\rm max}-r_{\rm min})$, measured at the same values of $\beta^\pm$ and plotted against each other. The points represent the values for the real HEP and Yeast networks.} \end{figure} In the downward branch of the hysteresis loop, as $\beta^-$ decreases toward zero, the number of triangles remains high for a long time, forming a significant hysteresis loop. This loop suggests that all jumps should be seen as discontinuous (first order) phase transitions. Since all studied systems are finite, these hysteresis loops would of course disappear for infinitely slow increase/decrease of the bias.
But the sampling shown involved $>25$ million attempted swaps at each value of $\beta$, and no systematic change in the hysteresis was seen when compared to sweeps twice as fast. In Fig.~\ref{Figure7}, we plotted $n_\Delta$ against the assortativity for the {\it same} values of $\beta^\pm$, normalizing both quantities to the unit interval. The hope was that in this way we would get universal curves which are the same for $\beta^+$ and $\beta^-$, and maybe even across different networks. Indeed we see a quite remarkable data collapse. It is certainly not perfect, but definitely better than pure chance. It suggests that biasing with the BRM leads to networks where the two characteristics $n_\Delta/n_{\Delta,\rm max}$ and $(r-r_{\rm min})/(r_{\rm max}-r_{\rm min})$ are strongly -- but non-linearly -- correlated. This indicates a potential scaling relationship between these network parameters in our model. For the two empirical networks, we also show the real values of these characteristics. They fall far from the common curve, indicating that these networks are not typical for the BRM with any value of $\beta$. \begin{figure*}[htp] \includegraphics[scale=.5]{Figure8.pdf} \caption{\label{Figure8} (color online) Relevant parts of 3-clique adjacency plots for the Yeast degree sequence. The color of each point indicates the number of $3$-cliques (or triangles) in which the link participates, as given by the scale on the right hand side. Each pair of plots shows (from top to bottom) the 3CAP for a typical member of the ensemble shortly before the first jump seen in Fig.~\ref{Figure5}, shortly after it, shortly after the second jump, and shortly after the third jump. The plots on the left hand side show the 3CAP with the nodes ranked in order of their degree. In the ``diagonalized'' plots we rearranged the ranking so that nodes which participate in the three clusters formed by each jump are ranked together, at the head of the list.
The rest of the nodes are ranked by degree.} \end{figure*} Among the three networks studied here, the ER network is closest to a fixed $k$ network, and it should thus show behavior closest to that studied in the last subsection. This is not very evident from Figs.~\ref{Figure3} to \ref{Figure5}. On the other hand, we see clearly from these figures that the position of the first transition -- in particular in $\beta^+$ -- decreases with the average degree. Also, hysteresis seems to be more closely tied to individual jumps for ER, while it is more global (and thus also more important overall) for HEP and Yeast. For the HEP and Yeast networks, we can compare the clustering of the BRM ensemble to that in the real empirical networks. The latter numbers are shown as dotted lines in Figs.~\ref{Figure4} and \ref{Figure5}. In both cases, the line intersects the hysteresis loop where it is very broad. This means that a large value of $\beta^+$ is required to reach the real network's level of clustering when the bias is increased, whereas a much lower value $\beta^-$ must be reached before these triangles can be rewired out of the network again. This gap between $\beta^+$ and $\beta^-$ at fixed $n_\Delta$ has important implications for the triangle ``conserving'' null model of Ref.~\cite{milo_network_2002}, as we will discuss later. \subsection{Clique adjacency plots and clustering cores} Up to now we have not given any intuitive arguments why clustering seems to increase in several jumps, and not in one single jump or in a continuous way. {\it A priori} one might suggest that each jump is related to the break-up of a connected component into disconnected subgraphs, just as the phase transition in regular graphs was associated with such a break-up.
By counting the numbers of disconnected components we found that this is not the case, except in special cases \footnote{If the degree sequence has, e.g., 20 hubs each of degree 19 and otherwise only nodes with degrees $<4$, then we would expect that the first jump leads to a clique with all 20 hubs, which would then be disconnected from the rest. But this is a very atypical situation.}. Instead, we will now argue that each jump is associated with the sudden formation of a highly connected cluster of high degree nodes. The first jump in a scan with increasing $\beta$ occurs when some of the strongest hubs link among themselves, forming a highly connected cluster. Subsequent jumps indicate the formation of other clusters with high intra- but low inter-connections. What distinguishes this picture from the standard modularity observed in many real-world networks is that it automatically leads to large assortativity: Since it is high degree nodes which form the first cluster(s), there is a strong tendency that clusters contain nodes with similar degrees (for previous discussions on how clustering of nodes depends on their degree, see e.g. \cite{ravasz_hierarchical_2003, soffer_network_2005}). Even though the modules formed are somewhat atypical, the BRM does demonstrate the ability of a bias for triangle formation to give rise to community structure {\it de novo}, whereas in other models community structure must be put in by hand \cite{newman_social_2003}. In the following, the clusters of tightly connected nodes created by the BRM are called {\it clustering cores}. To visualize them, we use what we call {\it $q$-clique adjacency plots} (qCAPs). A $q$-clique adjacency plot is based on an integer-valued $N\times N$ matrix $T^q_{ij}$ called the $q$-clique adjacency matrix. It is defined as $T^q_{ij}=0$ when there is no link between $i$ and $j$, and otherwise as the number of $q$-cliques which this link is part of.
In other words, if $q = 3$, $T^{q=3}_{ij}$ is non-zero only when $i$ and $j$ are connected, and it then counts the number of common neighbors of $i$ and $j$. $T^q_{ij}$ can be considered a proximity measure for nodes: linked nodes with many common neighbors are likely to belong to the same community. Similar proximity measures between nodes which depend on the similarity of their neighborhoods have been used in \cite{Ravasz_2002,leicht_2006,Ahn_2009}. To visualize $T^q_{ij}$, we first rank the nodes and then plot for each pair of ranks a pixel with corresponding color or gray scale. Possible ranking schemes are by degree, by the number of triangles attached to the node, or by achieving the simplest-looking, block-diagonalized, $q$-clique adjacency matrix. Examples for the Yeast degree sequence are given in Fig.~\ref{Figure8}. The four rows, descending from the top, show the 3CAP for typical members of the BRM ensemble before the first jump and after the first, second, and third jumps. The plots in the left column show the ranking done by the degrees of the nodes. The plots on the right show the same matrices after ``diagonalization'', with the nodes forming the first cluster placed in the top ranks, followed by the nodes forming the second cluster, and the nodes forming the third cluster. Only the relevant parts of the 3CAPs are shown: nodes with lower ranks do not play any substantial role except for very large values of $\beta$. We notice several features: \begin{itemize} \item Not all of the highest degree nodes participate in the first clustering cores. Obviously, the selection of participating nodes is to some degree random, and when sufficiently many links are established they are frozen and cannot be changed easily later. This agrees with our previous observation that the positions of the jumps change unsystematically with details like the random number sequence or the speed with which $\beta$ is increased.
\item Clustering cores that have been formed once are not modified when $\beta$ is further increased. Again this indicates that existing cores are essentially frozen. \item Clustering cores corresponding to different steps do not overlap. \end{itemize} All three points are in perfect agreement with our previous finding that hysteresis effects are strong and that structures which have been formed once are preserved when $\beta$ is increased further. From other examples (and from later jumps for the same Yeast sequence) we know that the last two items in the list are not strictly correct in general, although changes of cores and overlap with previous cores do not occur often. Thus the results in Fig.~\ref{Figure8} are too extreme to be typical. When a clustering core is formed, most of the links connected to these nodes will be saturated, and the few links left over will not have a big effect on the further evolution of the core. We find that as $\beta^-$ decreases, the clustering cores persist well below the value of $\beta^+$ at which they were created (not shown here). This shows again that once a link participates in a large number of triangles, it is very stable and unlikely to be removed again. $3$-clique adjacency plots are also useful for analyzing empirical networks, independent of any rewiring null model, to help visualize community structure. While nodes in different communities are often linked, these links between communities usually take part in fewer triangles than links within communities. Thus simply replacing the standard adjacency matrix by the $3$-clique adjacency matrix should help discover and highlight community structure \cite{Ravasz_2002,leicht_2006,Ahn_2009}. In the top left panels of Figs.~\ref{Figure9} and \ref{Figure10} we show parts of the 3CAPs for the yeast protein-protein interaction and HEP networks, respectively. In both cases, nodes are ranked by degree.
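Computing the $3$-clique adjacency matrix from an adjacency-set representation is straightforward; a minimal Python sketch (our own, storing only the non-zero entries with $i<j$):

```python
def clique_adjacency_3(adj):
    """3-clique adjacency matrix: for each link (i, j) with i < j, the
    number of triangles the link takes part in, i.e. the number of
    common neighbors of i and j.  Unlinked pairs (T_ij = 0) are omitted."""
    return {(i, j): len(adj[i] & adj[j])
            for i in adj for j in adj[i] if i < j}
```

For general $q$ one would instead count the $(q-2)$-cliques inside the common neighborhood of $i$ and $j$; for $q=3$ these are single nodes, which gives the expression above.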
We see that the triangles are mostly formed between strong hubs, as we should have expected. But clustering in the real networks does not strictly follow the degree pattern, in the sense that some of the strongest hubs are not members of prominent clusters. This shows again that real networks often have features which are not encoded in their degree sequence, and that a null model entirely based on the latter will probably fail to reproduce these features. We see also that links typically participate in {\it many} triangles, if they participate in at least one. This is in contrast to a recently proposed clustering model, which assumes that each link can only participate in a single triangle \cite{newman_random_2009}. \begin{figure} \includegraphics[scale=.25]{Figure9.pdf} \caption{\label{Figure9} (color online) Parts of 3CAPs for the real yeast protein-protein interaction network of \cite{gavin_functional_2002}, for a typical network of the ``triangle conserving'' ensemble with no annealing, for a network obtained after an ``annealing'' period with $\beta=0$ and a subsequent quench with $\beta\neq 0$ using ``triangle conserving'' rewirings \cite{milo_network_2002}, and for an ensemble obtained by 500 such annealing/quenching alternations.} \end{figure} \begin{figure} \includegraphics[scale=.25]{Figure10.pdf} \caption{\label{Figure10} (color online) Analogous to Fig.~\ref{Figure9}, but for the real high energy physics collaboration network and for the HEP degree sequence, respectively.} \end{figure} \subsection{Triangle conserving null models} \label{sec:null} In the previous subsection we considered the case where the bias is ``unidirectional''. In contrast to this, Milo {\it et al.} \cite{milo_network_2002} considered the case where the bias tends to increase the number of triangles when it is below a number $n_{\Delta,0}$, but pushes it {\it down} when it is above. In this way one neither encounters any of the jumps discussed above nor any hysteresis.
But that does not mean that the method is not plagued by the same basic problem, i.e. extremely sluggish dynamics and effectively broken ergodicity. In the most straightforward implementation of triangle conserving rewiring with the Hamiltonian $H_{\rm Milo}$ \cite{milo_network_2002}, one first estimates during preliminary runs a value of $\beta$ which is sufficiently large so that $n_\Delta$ fluctuates around $n_{\Delta,0}$. Then one starts with the original true network and rewires it using this $\beta$, {\it without first `annealing' it} to $\beta=0$. The effect of this is seen in the top right panels of Figs.~\ref{Figure9} and \ref{Figure10}. In both cases, the $3$CAPs shown were obtained after $>10^9$ attempted swaps. At $\beta=0$, this number would have been much more than enough to equilibrate the ensemble. But for the large values of $\beta$ needed for these plots ($\beta=1.5$ for Yeast, and $\beta= 2.4$ for HEP), few changes from the initial configurations are seen. This is particularly true for the strongest clusters existing in the real networks. Triangles not taking part in these clusters change more rapidly, but are also less important. Thus we see a pitfall inherent in triangle conserving rewiring: when the bias is strong enough to push the number of triangles in the network up to the desired target number, the bias will also be large enough that links between high degree nodes are hardly ever randomized. \begin{figure} \includegraphics[scale=.25]{Figure11.pdf} \caption{\label{Figure11} (color online) Values of the assortativity $r$ and of the number of 4-cliques in the real HEP network and in 400 members of the triangle-conserving biased ensemble. These 400 realizations were obtained approximately by 200 anneal/quench cycles with $\beta = 2.3$ and 200 cycles with $\beta = 2.5$, as described in the text.
Notice that the results for biased simulations should become more exact as $\beta$ decreases towards $\beta_c \leq 2$.} \end{figure} As a way out of this dilemma, we can alternate epochs where we use triangle conserving swaps with ``annealing periods'' where we use $\beta=0$. In this way we would guarantee that memory is wiped out during each annealing period (see the lower left panels in Figs.~\ref{Figure9} and \ref{Figure10}), and each ``quenching epoch'' would thus contribute essentially one independent configuration to the ensemble. After many such cycles we would obtain an ensemble which looks much more evenly sampled (lower right panels in Figs.~\ref{Figure9} and \ref{Figure10}), although even then we cannot be sure that it really represents the equilibrium ensemble for the Hamiltonian $H_{\rm Milo}$. Apart from the last caveat, the method would presumably be too slow for practical applications where high accuracy and precise variances of ensemble observables are needed, since one needs one entire cycle per data point. But it can be useful in cases where it is sufficient to estimate fluctuations roughly, and where high precision is not an issue. To illustrate this, we present in Fig.~\ref{Figure11} results for the HEP network where we made 200 anneal/quench cycles for each of two different values of $\beta$ ($\beta = 2.3$ and $\beta = 2.5$). In each cycle the quenching was stopped when the number of triangles reached the value of the real network, and the values of $r$ and of the number of 4-cliques were recorded. We see from Fig.~\ref{Figure11} that these values scatter considerably, but are in all cases far from the values for the real network. Thus the ensemble is a poor model for the real HEP network. We also see from Fig.~\ref{Figure11} that $r$ and $n_{\rm 4-clique}$ depend slightly on $\beta$ (as was expected), but not so much as to invalidate the above conclusion.
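The control flow of the alternating protocol can be sketched in a few lines of Python (our own high-level sketch: `swap(adj, beta)` stands for one proposed biased double-edge swap and `n_triangles(adj)` for a triangle count, both assumed to be supplied by the caller):

```python
def anneal_quench_cycle(adj, swap, n_triangles, n_target, beta,
                        n_anneal, max_quench):
    """One anneal/quench cycle: unbiased swaps (beta = 0) first wipe out
    the memory of the previous configuration, then biased swaps drive the
    triangle number up to the target n_target.  Returns adj once the
    target is hit, or None if the quench stalls before reaching it."""
    for _ in range(n_anneal):
        swap(adj, 0.0)                  # annealing period: randomize
    for _ in range(max_quench):
        swap(adj, beta)                 # quench: biased rewiring
        if n_triangles(adj) == n_target:
            return adj                  # one (nearly) independent sample
    return None
```

Repeating such cycles, with one measurement per cycle, yields roughly independent ensemble members of the kind used for Fig.~\ref{Figure11}.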
\section{Conclusion} \label{sec:conc} In highly clustered networks -- and that means for most real world networks -- most of the clustering is concentrated amongst the highest degree nodes. The Strauss model correctly pointed to an important feature: clustering tends to be cooperative. Once many triangles are formed in a certain part of the network, they help in forming even more. Thus, clustering cannot be smoothly and evenly introduced into a network; it is often driven by densely interconnected, high-degree regions of the network. In triangle-biased methods these high-degree regions can emerge quite suddenly and thereafter prove quite resistant to subsequent randomization. The biased rewiring model studied in the present paper is of exponential type, similar to the Strauss model, with the density of triangles controlled by a `fugacity' or inverse `temperature' $\beta$. However, we prevent the catastrophic increase of connectivity at the phase transition of the Strauss model by imposing a fixed degree sequence. Yet there is still a first order transition for homogeneous networks, i.e. those with fixed degree. In the phase with strong clustering (large fugacity / low temperature), the configuration is basically a collection of disjoint $(k+1)-$cliques. If the degree sequence is not trivial, the formation of {\it clustering cores} can no longer happen at the same $\beta$ for different parts of the network. Thus the single phase transition is replaced by a sequence of discrete and discontinuous jumps, which resemble both first order transitions and Barkhausen jumps. As in the real Barkhausen phenomenon, frozen randomness is crucial for the multiplicity of jumps. There, each jump corresponds to a {\it flip} of a spin cluster {\it already defined} by the randomness -- at least at zero temperature \cite{Perkovic_1995,Zapperi_1997}.
In the present case, however, each jump corresponds to the {\it creation} of a cluster whose detailed properties are not fixed by the quenched randomness (the degree sequence), but depend also on the `thermal' (non-quenched) noise. As in any first order phase transition, our model shows strong hysteresis. Clustering cores, once formed, are extremely stable and cannot be broken up easily later. This limits the model's usefulness as a null model, even if it is treated numerically such that the phase transition jumps do not appear explicitly, as in the version of \cite{milo_network_2002}. Because of the very slow time scales involved, Monte Carlo methods cannot sample evenly from these ensembles. Care should be taken to demonstrate that results found using them are broadly consistent across various sampling procedures. The spontaneous emergence of clustering cores in the BRM does suggest that triangle bias can give rise to community structure in networks, without the need to define communities {\it a priori}, thanks to the cooperativity of triangle formation. Together with jumps in the number of triangles (i.e. in the clustering coefficient), there are also jumps in all other network properties at the same control parameter positions. In particular, we found jumps in the number of $k-$cliques with $k>3$, in the modularity, and in the assortativity. This immediately raises the question of whether the model can be generalized so that a different fugacity is associated with each of these quantities. For assortativity, this was proposed some time ago by Newman \cite{newman_assortative_2002}.
With the present notation, biased rewiring models with and without target triangle number $n_{\Delta,0}$ and target assortativity $r_0$ are given by the Hamiltonians \begin{equation} H_{\rm Milo}(G;\beta,\gamma) = \beta |n_\Delta(G) - n_{\Delta,0} | + \gamma |r(G) - r_0 | \label{CandRHam1} \end{equation} and \begin{equation} H_{\rm BRM}(G;\beta,\gamma) = - \beta n_\Delta(G) -\gamma r(G), \label{CandRHam2} \end{equation} respectively, where $\gamma$ is the fugacity associated to the assortativity. It is an interesting open question whether such a model might lead to less extreme clustering and thus might be more realistic. First simulations \footnote{D. V. Foster {\it et al.}, in preparation} indicate that driving assortativity leads to smooth increases of all other quantities without jumps. The reason for that seems to be that the basic mechanism leading to increased assortativity -- the replacement of existing links by links between similar nodes -- is not cooperative, but further studies are needed. As Newman remarked in \cite{newman_random_2009}, clustering in networks ``has proved difficult to model mathematically." In that paper he introduced a model where each link can participate in one triangle at most. In this way, the phase transitions seen in the Strauss model and in the present model are avoided. However, in the real-world networks studied here we found that the number of triangles in which a link participates is broadly distributed, suggesting that the Newman model \cite{newman_random_2009} may not be realistic for networks with significant clustering. Indeed, specifying for each link the number of triangles in which it participates adds valuable information to the adjacency matrix (which just specifies whether the link exists or not). The resulting `$3$ clique adjacency plots' revealed structures which would not have been easy to visualize otherwise and are useful also in other contexts. 
Thus, in contrast to what is claimed in \cite{newman_random_2009}, the quest for realistic models for network clustering is not yet finished.
\section{Introduction} Let $\ensuremath{\mathbb{N}}=\{0,1,2,\ldots\}$ be the set of non-negative integers. For $n\in\ensuremath{\mathbb{N}}$, write $\ensuremath{\mathcal{S}_n}$ for the set of permutations of the set $[n]=\{1,\ldots,n\}$. In this article, we study properties of \textit{record-biased permutations}~\cite{auger2016analysis}, defined as follows. Let $n\in\ensuremath{\mathbb{N}}$ and $\theta\in[0,\infty)$. The record-biased distribution with parameters $n$ and $\theta$ is the probability measure $w_{n,\theta}$ on $\ensuremath{\mathcal{S}_n}$ defined by \begin{align*} w_{n,\theta}(\sigma)=\frac{\theta^{\ensuremath{\mathrm{record}}(\sigma)}}{W_{n,\theta}}\,, \end{align*} where $\ensuremath{\mathrm{record}}(\sigma)=|\{i\in[n]:\forall j\in[i-1],\,\sigma(i)>\sigma(j)\}|$ is the number of records of $\sigma$ and $W_{n,\theta}=\sum_{\sigma\in\ensuremath{\mathcal{S}_n}}\theta^{\ensuremath{\mathrm{record}}(\sigma)}$ is a normalizing constant. This model of random permutations was introduced by Auger, Bouvel, Nicaud, and Pivoteau~\cite{auger2016analysis}, who characterized some of its properties: the asymptotic distribution of the number of records, the number of descents, the first value $\sigma(1)$, and the number of inversions. This paper derives the first order asymptotic behaviour of the heights of the binary search trees associated to record-biased permutations. \medskip Let $T_\infty=\{\ensuremath{\varnothing}\}\cup\bigcup_{k\geq1}\{\ensuremath{\overline{0}},\ensuremath{\overline{1}}\}^k$ be the complete infinite rooted binary tree, where nodes $u$ at depth $|u|\geq1$ are indexed by strings written as $u=u_1\ldots u_{|u|}\in\{\ensuremath{\overline{0}},\ensuremath{\overline{1}}\}^{|u|}$. This means that $u$ has parent $u_1\ldots u_{|u|-1}$ and children $u\ensuremath{\overline{0}}$ and $u\ensuremath{\overline{1}}$. For a set $V\subset T_\infty$ and a node $u\in T_\infty$, let $uV=\{uv,v\in V\}$. 
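Before turning to trees, note that for small $n$ the measure $w_{n,\theta}$ can be computed exactly by enumerating $\ensuremath{\mathcal{S}_n}$. The following sketch (illustrative code, not from~\cite{auger2016analysis}) counts records and normalizes; it also exposes the classical identity $W_{n,\theta}=\prod_{0\leq i<n}(\theta+i)$, which follows from the fact that the number of permutations of $[n]$ with $k$ records is the unsigned Stirling number of the first kind.

```python
from itertools import permutations

def records(sigma):
    """Number of records of sigma: positions whose value exceeds all earlier values."""
    best, count = 0, 0
    for x in sigma:
        if x > best:
            best, count = x, count + 1
    return count

def record_biased_distribution(n, theta):
    """Exact record-biased measure w_{n,theta} on S_n, by brute-force enumeration.
    Returns (probabilities, normalizing constant W_{n,theta})."""
    weights = {s: theta ** records(s) for s in permutations(range(1, n + 1))}
    total = sum(weights.values())
    return {s: w / total for s, w in weights.items()}, total
```

For $\theta=1$ this is the uniform distribution on $\ensuremath{\mathcal{S}_n}$; larger $\theta$ puts more mass on permutations with many records (e.g.\ the identity).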
Call a \textit{subtree} of $T_\infty$ (or just ``tree'', for short) a set $T\subset T_\infty$ which is connected when viewed as a subgraph of $T_\infty$. For any subtree $T$ of $T_\infty$, its \textit{root} is defined to be the unique node of $T$ of minimum depth. Given a tree $T$ and any node $u\in T_{\infty}$, let $T(u)=(uT_\infty)\cap T$ be the subtree of $T$ rooted at $u$; note that if $\ensuremath{\varnothing}\in T$, then $T(u)=\emptyset$ if and only if $u\notin T$. Finally, for any subtree $T\subset T_\infty$, write $h(T)=\sup(|u|,u\in T)-\inf(|u|,u\in T)$ for its height, corresponding to the greatest distance between any node and the root of $T$. Call a \textit{labelled tree} any pair $(T,\tau)$ where $T$ is a subtree of $T_\infty$ and $\tau:T\rightarrow\ensuremath{\mathbb{N}}$ is an injective function. Furthermore say that $(T,\tau)$ is a \textit{binary search tree} if, for any $u\in T$, and for any $v\in T(u\ensuremath{\overline{0}})$ (respectively $v\in T(u\ensuremath{\overline{1}})$), we have $\tau(u)>\tau(v)$ (respectively $\tau(u)<\tau(v)$); in other words, a binary search tree is a labelled tree where the label of each node is larger than the labels of its left subtree and smaller than the labels of its right subtree. Given an injective function $f:[n]\rightarrow\ensuremath{\mathbb{N}}$, call \textit{binary search tree of $f$} and write $\big(\bst{f},\bstl{f}\big)$ for the unique binary search tree such that, for all $i\in[n]$, the set \begin{align*} \bstl{f}^{-1}\Big(\big\{f(1),\ldots,f(i)\big\}\Big) \end{align*} is a subtree of $T_\infty$ of size $i$ and rooted at $\ensuremath{\varnothing}$. This definition can also be rephrased inductively. 
Indeed, $\big(\bst{f|_{[i]}},\bstl{f|_{[i]}}\big)$ is obtained from $\big(\bst{f|_{[i-1]}},\bstl{f|_{[i-1]}}\big)$ by inserting the value $f(i)$ in the labelled tree $\big(\bst{f|_{[i-1]}},\bstl{f|_{[i-1]}}\big)$ at the unique location for which $\big(\bst{f|_{[i]}},\bstl{f|_{[i]}}\big)$ remains a binary search tree. An example of this construction can be found in Figure~\ref{fig:BST}. \begin{figure}[htb] \centering \begin{tikzpicture}[scale=0.4] \begin{scope} \draw[line width=0.02cm, lightblock!80!block] (0,0) grid (7,-7); \draw[line width=0.025cm, lightblock!30!block, ->] (0,0) -- (7.5,0); \draw[line width=0.025cm, lightblock!30!block, ->] (0,0) -- (0,-7.5); \node[lightblock!30!block, anchor=east, scale=0.8] at (0,-7){$i$}; \node[lightblock!30!block, anchor=south, scale=0.8] at (7,0){$f(i)$}; \node[draw, circle, theme, fill=lighttheme, line width=0.05cm, scale=1.3] at (2,-1){}; \node[white, scale=0.9](2) at (2,-1){\textbf{2}}; \node[anchor=north west, scale=0.8] at (0.2,-7.1){$f=(2)$}; \end{scope} \begin{scope}[xshift=10cm] \draw[line width=0.02cm, lightblock!80!block] (0,0) grid (7,-7); \draw[line width=0.025cm, lightblock!30!block, ->] (0,0) -- (7.5,0); \draw[line width=0.025cm, lightblock!30!block, ->] (0,0) -- (0,-7.5); \node[lightblock!30!block, anchor=east, scale=0.8] at (0,-7){$i$}; \node[lightblock!30!block, anchor=south, scale=0.8] at (7,0){$f(i)$}; \node(2) at (2,-1){}; \node(4) at (4,-2){}; \draw[theme, line width=0.05cm] (2.center) -- (4.center); \node[draw, circle, theme, fill=white, line width=0.05cm, scale=1.3] at (2,-1){}; \node[draw, circle, theme, fill=lighttheme, line width=0.05cm, scale=1.3] at (4,-2){}; \node[theme, scale=0.9](2) at (2,-1){\textbf{2}}; \node[white, scale=0.9](4) at (4,-2){\textbf{4}}; \node[anchor=north west, scale=0.8] at (0.2,-7.1){$f=(2,4)$}; \end{scope} \begin{scope}[xshift=20cm] \draw[line width=0.02cm, lightblock!80!block] (0,0) grid (7,-7); \draw[line width=0.025cm, lightblock!30!block, ->] (0,0) -- (7.5,0); \draw[line 
width=0.025cm, lightblock!30!block, ->] (0,0) -- (0,-7.5); \node[lightblock!30!block, anchor=east, scale=0.8] at (0,-7){$i$}; \node[lightblock!30!block, anchor=south, scale=0.8] at (7,0){$f(i)$}; \node(2) at (2,-1){}; \node(4) at (4,-2){}; \node(1) at (1,-3){}; \draw[theme, line width=0.05cm] (2.center) -- (1.center); \draw[theme, line width=0.05cm] (2.center) -- (4.center); \node[draw, circle, theme, fill=white, line width=0.05cm, scale=1.3] at (2,-1){}; \node[draw, circle, theme, fill=white, line width=0.05cm, scale=1.3] at (4,-2){}; \node[draw, circle, theme, fill=lighttheme, line width=0.05cm, scale=1.3] at (1,-3){}; \node[theme, scale=0.9](2) at (2,-1){\textbf{2}}; \node[theme, scale=0.9](4) at (4,-2){\textbf{4}}; \node[white, scale=0.9](1) at (1,-3){\textbf{1}}; \node[anchor=north west, scale=0.8] at (0.2,-7.1){$f=(2,4,1)$}; \end{scope} \begin{scope}[yshift=-10cm] \draw[line width=0.02cm, lightblock!80!block] (0,0) grid (7,-7); \draw[line width=0.025cm, lightblock!30!block, ->] (0,0) -- (7.5,0); \draw[line width=0.025cm, lightblock!30!block, ->] (0,0) -- (0,-7.5); \node[lightblock!30!block, anchor=east, scale=0.8] at (0,-7){$i$}; \node[lightblock!30!block, anchor=south, scale=0.8] at (7,0){$f(i)$}; \node(2) at (2,-1){}; \node(4) at (4,-2){}; \node(1) at (1,-3){}; \node(6) at (6,-4){}; \draw[theme, line width=0.05cm] (2.center) -- (1.center); \draw[theme, line width=0.05cm] (2.center) -- (4.center); \draw[theme, line width=0.05cm] (4.center) -- (6.center); \node[draw, circle, theme, fill=white, line width=0.05cm, scale=1.3] at (2,-1){}; \node[draw, circle, theme, fill=white, line width=0.05cm, scale=1.3] at (4,-2){}; \node[draw, circle, theme, fill=white, line width=0.05cm, scale=1.3] at (1,-3){}; \node[draw, circle, theme, fill=lighttheme, line width=0.05cm, scale=1.3] at (6,-4){}; \node[theme, scale=0.9](2) at (2,-1){\textbf{2}}; \node[theme, scale=0.9](4) at (4,-2){\textbf{4}}; \node[theme, scale=0.9](1) at (1,-3){\textbf{1}}; \node[white, scale=0.9](6) at 
(6,-4){\textbf{6}}; \node[anchor=north west, scale=0.8] at (0.2,-7.1){$f=(2,4,1,6)$}; \end{scope} \begin{scope}[xshift=10cm,yshift=-10cm] \draw[line width=0.02cm, lightblock!80!block] (0,0) grid (7,-7); \draw[line width=0.025cm, lightblock!30!block, ->] (0,0) -- (7.5,0); \draw[line width=0.025cm, lightblock!30!block, ->] (0,0) -- (0,-7.5); \node[lightblock!30!block, anchor=east, scale=0.8] at (0,-7){$i$}; \node[lightblock!30!block, anchor=south, scale=0.8] at (7,0){$f(i)$}; \node(2) at (2,-1){}; \node(4) at (4,-2){}; \node(1) at (1,-3){}; \node(6) at (6,-4){}; \node(3) at (3,-5){}; \draw[theme, line width=0.05cm] (2.center) -- (1.center); \draw[theme, line width=0.05cm] (2.center) -- (4.center); \draw[theme, line width=0.05cm] (4.center) -- (3.center); \draw[theme, line width=0.05cm] (4.center) -- (6.center); \node[draw, circle, theme, fill=white, line width=0.05cm, scale=1.3] at (2,-1){}; \node[draw, circle, theme, fill=white, line width=0.05cm, scale=1.3] at (4,-2){}; \node[draw, circle, theme, fill=white, line width=0.05cm, scale=1.3] at (1,-3){}; \node[draw, circle, theme, fill=white, line width=0.05cm, scale=1.3] at (6,-4){}; \node[draw, circle, theme, fill=lighttheme, line width=0.05cm, scale=1.3] at (3,-5){}; \node[theme, scale=0.9](2) at (2,-1){\textbf{2}}; \node[theme, scale=0.9](4) at (4,-2){\textbf{4}}; \node[theme, scale=0.9](1) at (1,-3){\textbf{1}}; \node[theme, scale=0.9](6) at (6,-4){\textbf{6}}; \node[white, scale=0.9](3) at (3,-5){\textbf{3}}; \node[anchor=north west, scale=0.8] at (0.2,-7.1){$f=(2,4,1,6,3)$}; \end{scope} \begin{scope}[xshift=20cm,yshift=-10cm] \draw[line width=0.02cm, lightblock!80!block] (0,0) grid (7,-7); \draw[line width=0.025cm, lightblock!30!block, ->] (0,0) -- (7.5,0); \draw[line width=0.025cm, lightblock!30!block, ->] (0,0) -- (0,-7.5); \node[lightblock!30!block, anchor=east, scale=0.8] at (0,-7){$i$}; \node[lightblock!30!block, anchor=south, scale=0.8] at (7,0){$f(i)$}; \node(2) at (2,-1){}; \node(4) at (4,-2){}; \node(1) 
at (1,-3){}; \node(6) at (6,-4){}; \node(3) at (3,-5){}; \node(5) at (5,-6){}; \draw[theme, line width=0.05cm] (2.center) -- (1.center); \draw[theme, line width=0.05cm] (2.center) -- (4.center); \draw[theme, line width=0.05cm] (4.center) -- (3.center); \draw[theme, line width=0.05cm] (4.center) -- (6.center); \draw[theme, line width=0.05cm] (6.center) -- (5.center); \node[draw, circle, theme, fill=white, line width=0.05cm, scale=1.3] at (2,-1){}; \node[draw, circle, theme, fill=white, line width=0.05cm, scale=1.3] at (4,-2){}; \node[draw, circle, theme, fill=white, line width=0.05cm, scale=1.3] at (1,-3){}; \node[draw, circle, theme, fill=white, line width=0.05cm, scale=1.3] at (6,-4){}; \node[draw, circle, theme, fill=white, line width=0.05cm, scale=1.3] at (3,-5){}; \node[draw, circle, theme, fill=lighttheme, line width=0.05cm, scale=1.3] at (5,-6){}; \node[theme, scale=0.9](2) at (2,-1){\textbf{2}}; \node[theme, scale=0.9](4) at (4,-2){\textbf{4}}; \node[theme, scale=0.9](1) at (1,-3){\textbf{1}}; \node[theme, scale=0.9](6) at (6,-4){\textbf{6}}; \node[theme, scale=0.9](3) at (3,-5){\textbf{3}}; \node[white, scale=0.9](5) at (5,-6){\textbf{5}}; \node[anchor=north west, scale=0.8] at (0.2,-7.1){$f=(2,4,1,6,3,5)$}; \end{scope} \end{tikzpicture} \caption{An example of construction of the binary search tree $(\bst{\sigma},\bstl{\sigma})$ for the permutation $\sigma=(2,4,1,6,3,5)$. The nodes highlighted in green correspond to the most recently inserted elements, and the values on the nodes correspond to the values of the labelling function $\bstl{\sigma}$. 
This figure represents the recursive construction of binary search trees, where values are inserted one after the other at the unique location such that the label of each node is larger than the labels of its left subtree and smaller than the labels of its right subtree.} \label{fig:BST} \end{figure} For the rest of the paper, write $T_{n,\theta}$ for a \textit{record-biased tree}, defined to be distributed as $\bst{\sigma}$ where $\sigma$ is $w_{n,\theta}$-distributed. Let $c^*=4.311\ldots$ be the unique solution of $c\log\left(\frac{2e}{c}\right)=1$ with $c\geq2$. The goal of this work is to prove the following result on the height of record-biased trees. \begin{thm}\label{thm:combined} Let $(\theta_n)_{n\geq0}$ be any sequence of non-negative numbers. Then, as $n$ tends to infinity, \begin{align*} \frac{h(T_{n,\theta_n})}{\max\left\{c^*\log n,\,\theta_n\log\left(1+\frac{n}{\theta_n}\right)\right\}}\longrightarrow1\,, \end{align*} in probability and in $L^p$ for any $p>0$. \end{thm} For any $n\in\ensuremath{\mathbb{N}}$ and $\theta\in[0,\infty)$, write \begin{align} \mu(n,\theta)=\sum_{0\leq i<n}\frac{\theta}{\theta+i}\,.\label{eq:mu} \end{align} By comparison to the integral, we can see that \begin{align*} \bigg|\mu(n,\theta)-\left[1+\theta\log\left(1+\frac{n}{\theta}\right)\right]\bigg|\leq\theta\log\left(1+\frac{1}{\theta}\right)\,, \end{align*} which implies that, as $n$ tends to infinity, we have $\frac{\mu(n,\theta_n)}{1+\theta_n\log(1+n/\theta_n)}\rightarrow1$. It follows that, for any sequence $(\theta_n)_{n\geq0}$ of non-negative numbers, \begin{align*} \max\Big\{c^*\log n,\,\mu(n,\theta_n)\Big\}=\big(1+o(1)\big)\max\left\{c^*\log n,\,\theta_n\log\left(1+\frac{n}{\theta_n}\right)\right\}\,. 
\end{align*} This implies that the convergence in Theorem~\ref{thm:combined} is equivalent to the statement that \begin{align*} \frac{h(T_{n,\theta_n})}{\max\big\{c^*\log n,\,\mu(n,\theta_n)\big\}}\longrightarrow1 \end{align*} in probability and in $L^p$ for any $p>0$. For the rest of the paper, we aim to prove this statement rather than that of Theorem~\ref{thm:combined}. By taking subsequences, to prove Theorem~\ref{thm:combined}, it suffices to consider two cases: either $\theta_n\equiv\theta\in[0,\infty)$ is constant, or $\theta_n\rightarrow\infty$ as $n\rightarrow\infty$. In the case where $\theta_n=\theta$ for all $n\geq0$, we have $\mu(n,\theta_n)\sim\theta\log n$. This case of Theorem~\ref{thm:combined} can be rewritten as follows. \begin{thm}\label{thm:height} For any non-negative number $\theta\in[0,\infty)$, as $n$ tends to infinity, \begin{align*} \frac{h(T_{n,\theta})}{\log n}\longrightarrow\max\big\{c^*,\theta\big\}\, \end{align*} in probability and in $L^p$ for any $p>0$. \end{thm} When $\theta=1$, the tree $T_{n,\theta}$ is a \textit{random binary search tree}, the binary search tree of a uniformly random permutation, and this case of Theorem~\ref{thm:height} corresponds to a well-known result of Devroye~\cite{devroye1986note}; this case will also be used as input for the proof of the general result. Theorem~\ref{thm:height} furthermore shows that there is a change of behaviour for the height of record-biased trees at the value $\theta=c^*$. Since previous results~\cite{auger2016analysis} on record-biased permutations proved that the number of records is of order $\theta\log n$, this change of behaviour corresponds to the moment where the first-order height of the tree becomes characterized by the length of its rightmost path. Note that Theorem~\ref{thm:height} also states that the asymptotic behaviour of the height does not change around the value $\theta=1$.
This result might be unexpected since, for $\theta<1$, the records of the permutations are penalized, whereas they are rewarded when $\theta>1$. This strong change of behaviour for the permutation does not affect the height of the tree since, in the case where $\theta<c^*$, the height of $T_{n,\theta}$ is actually mainly characterized by the height of the left subtree of the root. Theorem~\ref{thm:height} covers the case where $(\theta_n)_{n\geq0}$ is constant in Theorem~\ref{thm:combined}. For the case where $(\theta_n)_{n\geq0}$ diverges to infinity, first note that, for such sequences, $\mu(n,\theta_n)=\omega(\log n)$, where $a_n=\omega(b_n)$ means that $|a_n/b_n|\rightarrow\infty$. This case of Theorem~\ref{thm:combined} can thus be rewritten (and strengthened) as follows. \begin{thm}\label{thm:strongHeight} Let $(\theta_n)_{n\geq0}$ be a sequence of non-negative numbers such that $\theta_n\rightarrow\infty$. Then, as $n$ tends to infinity, \begin{align*} \frac{h(T_{n,\theta_n})}{\mu(n,\theta_n)}\longrightarrow1 \end{align*} in probability and in $L^p$ for any $p>0$. Moreover, for any $\epsilon>0$ and $\lambda>0$, \begin{align*} \mathbb{P}\left(\left|\frac{h(T_{n,\theta_n})}{\mu(n,\theta_n)}-1\right|>\epsilon\right)=O\left(\frac{1}{n^\lambda}\right)\,. \end{align*} \end{thm} By using the first Borel-Cantelli lemma, the second bound of this result implies that, whenever the random variables of the sequence $(T_{n,\theta_n})_{n\geq0}$ are defined on a common probability space, the first convergence also occurs almost surely. Together, Theorems~\ref{thm:height} and \ref{thm:strongHeight} establish Theorem~\ref{thm:combined}. \subsection{Overview of the proofs}\label{subsec:overview} The main strategy to prove Theorems~\ref{thm:height} and \ref{thm:strongHeight} is based on a method similar to that of~\cite{addario2021height}: controlling the size of the rightmost path and the behaviour of the left subtrees hanging from this rightmost path.
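Before going further, note that the normalization $\mu(n,\theta)$ of \eqref{eq:mu} and the integral comparison stated below Theorem~\ref{thm:combined} are easy to check numerically. The following is a small sketch (our own code).

```python
from math import log

def mu(n, theta):
    """mu(n, theta) = sum_{0 <= i < n} theta / (theta + i), as in eq. (eq:mu)."""
    return sum(theta / (theta + i) for i in range(n))

def integral_estimate(n, theta):
    """The comparison term 1 + theta * log(1 + n / theta)."""
    return 1.0 + theta * log(1.0 + n / theta)
```

For $\theta=1$, $\mu(n,1)$ is the harmonic number $H_n$, consistent with the $\log n$ order of the number of records of a uniformly random permutation.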
We now provide an overview of the proof of Theorem~\ref{thm:height}; a similar method is used to prove Theorem~\ref{thm:strongHeight}. In particular, Proposition~\ref{prop:inductiveTree}, Lemma~\ref{lem:boundsRecords}, and Proposition~\ref{prop:upperBoundL} below are all non-asymptotic results, and so can be applied in the setting of either theorem. The main result that we use to understand the overall structure of the tree is given by the following proposition. \begin{prop}\label{prop:inductiveTree} Let $n\in\ensuremath{\mathbb{N}}$ and $\theta\in[0,\infty)$. Let $T_{n,\theta}$ be a record-biased tree with parameters $n$ and $\theta$. Then, for any $k\in[n]$, \begin{align*} \mathbb{P}\Big(\big|T_{n,\theta}(\ensuremath{\overline{0}})\big|=k-1\Big)=\mathbb{P}\Big(\big|T_{n,\theta}(\ensuremath{\overline{1}})\big|=n-k\Big)=\frac{\theta}{\theta+n-k}\prod_{1\leq i<k}\left(1-\frac{\theta}{\theta+n-i}\right)\,. \end{align*} Moreover, conditionally given that $|T_{n,\theta}(\ensuremath{\overline{0}})|=k-1$, $T_{n,\theta}(\ensuremath{\overline{0}})$ is distributed as a random binary search tree of size $k-1$, $T_{n,\theta}(\ensuremath{\overline{1}})$ is distributed as a record-biased tree with parameters $n-k$ and $\theta$, and $T_{n,\theta}(\ensuremath{\overline{0}})$ and $T_{n,\theta}(\ensuremath{\overline{1}})$ are independent of each other. Conversely, the preceding properties completely characterize record-biased binary search trees. \end{prop} The proof of Proposition~\ref{prop:inductiveTree} can be found in Section~\ref{subsec:generating}. Note that this proposition implies that \begin{align*} \mathbb{P}\Big(\big|T_{n,\theta}(\ensuremath{\overline{0}})\big|\geq k\Big)=\prod_{1\leq i\leq k}\left(1-\frac{\theta}{\theta+n-i}\right)\,, \end{align*} a fact that will be used multiple times in what follows. This result implies that record-biased trees can be generated inductively starting from the split between the left and right subtrees at the root.
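The split probabilities of Proposition~\ref{prop:inductiveTree} are simple to tabulate, which in turn gives a direct sampler for record-biased trees: draw $|T_{n,\theta}(\ensuremath{\overline{0}})|$ from the law below, fill the left subtree with a uniformly random binary search tree, and recurse on the right. The sketch below (illustrative code) computes the law and lets one check that it is indeed a probability distribution.

```python
def left_subtree_law(n, theta):
    """P(|T_{n,theta}(0bar)| = k - 1) for k = 1, ..., n, following
    Proposition prop:inductiveTree."""
    probs, survive = [], 1.0   # survive = prod_{1 <= i < k} (1 - theta/(theta+n-i))
    for k in range(1, n + 1):
        probs.append(survive * theta / (theta + n - k))
        survive *= 1.0 - theta / (theta + n - k)
    return probs
```

For $\theta=1$ the law is uniform on $\{0,\ldots,n-1\}$, recovering the uniform root rank of a random binary search tree.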
It also states that only the right subtree keeps the $\theta$-record-biased distribution, since the left subtree is a random binary search tree. Since this work studies the binary search trees associated to record-biased permutations, it is important to understand the relation between the number of records of the permutation and the structure of the tree. Given a permutation $\sigma$, say that $\sigma(i)$ is a record of $\sigma$ if, for all $j<i$, $\sigma(j)<\sigma(i)$. With this definition, note that $\sigma(i)$ is a record of $\sigma$ if and only if the unique node $u\in\bst{\sigma}$ labelled $\sigma(i)$ lies on the rightmost path of $\bst{\sigma}$. Next, for any subtree $T$ of $T_\infty$, let \begin{align*} \ensuremath{\mathrm{record}}(T):=\big|\big\{k:\ensuremath{\overline{1}}^k\in T\big\}\big|=1+\max\big\{k:\ensuremath{\overline{1}}^k\in T\big\}-\min\big\{k:\ensuremath{\overline{1}}^k\in T\big\}\,; \end{align*} by the above remark, if $T=\bst{\sigma}$, then $\ensuremath{\mathrm{record}}(T)=\ensuremath{\mathrm{record}}(\sigma)$. From this definition, it is also useful to note that, for a subtree $T$ of $T_\infty$, we have $h(T)\geq\ensuremath{\mathrm{record}}(T)-1$. An important input to the proofs, which is a fairly straightforward consequence of~\cite[Theorem~3]{auger2016analysis}, is the following lemma, stating bounds on the asymptotic behaviour of the number of records of record-biased permutations and trees. \begin{lemma}\label{lem:boundsRecords} Let $\epsilon>0$. 
Then, there exists $\delta=\delta(\epsilon)>0$ such that, for all $n\in\ensuremath{\mathbb{N}}$ and $\theta\in[0,\infty)$, for any $w_{n,\theta}$-distributed permutation $\sigma$, we have \begin{align*} \mathbb{P}\left(\left|\frac{\ensuremath{\mathrm{record}}(\sigma)}{\mu(n,\theta)}-1\right|>\epsilon\right)=\mathbb{P}\left(\left|\frac{\ensuremath{\mathrm{record}}(T_{n,\theta})}{\mu(n,\theta)}-1\right|>\epsilon\right)\leq e^{-\delta\mu(n,\theta)}\,, \end{align*} where $\mu(n,\theta)$ is defined as in \eqref{eq:mu}. \end{lemma} The proof of Lemma~\ref{lem:boundsRecords}, whose first equality simply follows from the definition of $\ensuremath{\mathrm{record}}(T)$, can be found in Section~\ref{subsec:rightLength}. Combining this result with the asymptotic behaviour of $\mu$ gives tight bounds on the asymptotic behaviour of the number of records of $T_{n,\theta}$. In particular, if $\theta$ is fixed, then as $n$ tends to infinity, we have $\ensuremath{\mathrm{record}}(T_{n,\theta})=(\theta+o_\mathbb{P}(1))\log n$. The interest of bounding the number of records of record-biased trees is apparent when considering the following lower bound for their height: \begin{align*} h(T_{n,\theta})\geq\max\Big\{1+h\big(T_{n,\theta}(\ensuremath{\overline{0}})\big),\,\ensuremath{\mathrm{record}}(T_{n,\theta})-1\Big\}\,. \end{align*} Indeed, this bound, combined with Lemma~\ref{lem:boundsRecords}, implies that $h(T_{n,\theta})\geq(\theta+o_\mathbb{P}(1))\log n$. Moreover, from Proposition~\ref{prop:inductiveTree}, we can verify that \begin{align*} \log\big|T_{n,\theta}(\ensuremath{\overline{0}})\big|=\big(1+o_\mathbb{P}(1)\big)\log n\,.
\end{align*} Since, conditioned on its size, $T_{n,\theta}(\ensuremath{\overline{0}})$ is a random binary search tree, the case $\theta=1$ of Theorem~\ref{thm:height} (which was already established in~\cite{devroye1986note}) implies that \begin{align*} h\big(T_{n,\theta}(\ensuremath{\overline{0}})\big)=\big(c^*+o_\mathbb{P}(1)\big)\log n\,. \end{align*} These two results together show that \begin{align*} h(T_{n,\theta})\geq\max\big\{c^*,\,\theta\big\}\log n+o_\mathbb{P}(\log n)\,, \end{align*} corresponding to the lower bound of Theorem~\ref{thm:height}. For the upper bound, since the left subtrees in $T_{n,\theta}$ are all binary search trees, for which the heights are already known to be concentrated around their expected height~\cite{devroye1986note,drmota2003analytic,reed2003height}, we expect the height of the left subtrees to be well-behaved, conditioned on their sizes. This means that, in order to bound the height of record-biased trees from above, we essentially need to understand two quantities: the size of the rightmost path, and the sizes of the left subtrees hanging from that path. Let $T_{n,\theta}$ be a record-biased tree with parameters $n$ and $\theta$. Since Lemma~\ref{lem:boundsRecords} already gives us strong bounds on the length of the rightmost path of $T_{n,\theta}$, it only remains to better understand the properties of the sizes of the left subtrees hanging from that path $(|T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})|)_{j\geq0}$. 
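The decomposition along the rightmost path can be made concrete on a small example. The sketch below (illustrative code, using nested dictionaries for nodes) builds the binary search tree of a permutation, extracts the left subtrees hanging from the rightmost path, and checks both that $\ensuremath{\mathrm{record}}(T)+\sum_{0\leq j<\ensuremath{\mathrm{record}}(T)}|T(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})|=|T|$ and that the height is attained either on the rightmost path or at depth $j+1$ plus the height of some $T(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})$.

```python
def bst_of(seq):
    """Binary search tree of a sequence of distinct values, as nested dicts."""
    root = None
    for x in seq:
        parent, side, cur = None, None, root
        while cur is not None:
            parent, side = cur, ('l' if x < cur['v'] else 'r')
            cur = cur[side]
        leaf = {'v': x, 'l': None, 'r': None}
        if parent is None:
            root = leaf
        else:
            parent[side] = leaf
    return root

def height(t):
    """Edge-height; the empty tree has height -1 by convention."""
    return -1 if t is None else 1 + max(height(t['l']), height(t['r']))

def size(t):
    return 0 if t is None else 1 + size(t['l']) + size(t['r'])

def rightmost_decomposition(root):
    """Return (record(T), left subtrees hanging from the rightmost path)."""
    lefts, cur = [], root
    while cur is not None:
        lefts.append(cur['l'])
        cur = cur['r']
    return len(lefts), lefts
```

On $\sigma=(2,4,1,6,3,5)$ from Figure~\ref{fig:BST}, this gives $\ensuremath{\mathrm{record}}(T)=3$, left subtree sizes $(1,1,1)$, and height $3$.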
Using Proposition~\ref{prop:inductiveTree}, it is easy to verify that \begin{align*} \mathbb{P}\Big(\big|T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})\big|\geq k\,\Big|\,\big|T_{n,\theta}(\ensuremath{\overline{0}})\big|,\ldots,\big|T_{n,\theta}(\ensuremath{\overline{1}}^{j-1}\ensuremath{\overline{0}})\big|\Big)=\prod_{1\leq i\leq k}\left(1-\frac{\theta}{\theta+n-\sum_{0\leq\ell<j}\left(|T_{n,\theta}(\ensuremath{\overline{1}}^\ell\ensuremath{\overline{0}})|+1\right)-i}\right)\,; \end{align*} this equality coming from the fact that, given $|T_{n,\theta}(\ensuremath{\overline{0}})|,\ldots,|T_{n,\theta}(\ensuremath{\overline{1}}^{j-1}\ensuremath{\overline{0}})|$, the tree $T_{n,\theta}(\ensuremath{\overline{1}}^j)$ is a record-biased tree of size $n-\sum_{0\leq\ell<j}(|T_{n,\theta}(\ensuremath{\overline{1}}^\ell\ensuremath{\overline{0}})|+1)$. Given two random variables $X$ and $Y$, we say that $X$ is stochastically smaller than $Y$, and write $X\preceq Y$, if and only if, for all $t\in\mathbb{R}$, we have $\mathbb{P}(X\geq t)\leq\mathbb{P}(Y\geq t)$. Using the above mentioned properties, we can prove that the sizes of the left subtrees $(|T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})|)_{j\geq0}$ are stochastically bounded as follows. \begin{prop}\label{prop:upperBoundL} Let $\theta\in[0,\infty)$ and $(B_j)_{j\geq0}$ be a sequence of independent and identically distributed $\mathrm{Beta}(\theta+1,1)$ random variables. Then, for any $n\in\ensuremath{\mathbb{N}}$ and $j\in\ensuremath{\mathbb{N}}$, we have \begin{align*} \big|T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})\big|\preceq j+n\prod_{0\leq i<j}B_i \end{align*} \end{prop} The proof of Proposition~\ref{prop:upperBoundL} can be found in Section~\ref{subsec:boundsLeft}. 
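Proposition~\ref{prop:upperBoundL} is also easy to explore numerically. Since $\mathrm{Beta}(\theta+1,1)$ has distribution function $x^{\theta+1}$ on $[0,1]$, it can be sampled by inverse transform; the sketch below (our own code) generates one realization of the dominating sequence $j+n\prod_{0\leq i<j}B_i$.

```python
import random

def sample_beta(theta, rng):
    """Inverse-transform sample of Beta(theta + 1, 1), whose CDF is x^(theta + 1)."""
    return rng.random() ** (1.0 / (theta + 1.0))

def dominating_sequence(n, theta, j_max, rng):
    """One realization of the stochastic upper bounds j + n * prod_{i < j} B_i
    from Proposition prop:upperBoundL, for j = 0, ..., j_max."""
    bounds, prod_b = [], 1.0
    for j in range(j_max + 1):
        bounds.append(j + n * prod_b)
        prod_b *= sample_beta(theta, rng)
    return bounds
```

The product of the $B_i$ decays geometrically in $j$, which is the source of the exponential decay of the left subtree sizes used in the next step of the overview.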
Using the bound from Proposition~\ref{prop:upperBoundL}, by the law of large numbers and the fact that $\mathbb{E}[\log B_0]=-\frac{1}{\theta}$, we expect to have \begin{align*} \big|T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})\big|\leq j+ne^{-(1+o_\mathbb{P}(1))\frac{j}{\theta}}\,. \end{align*} Moreover, since $|T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})|=0$ whenever $j>\ensuremath{\mathrm{record}}(T_{n,\theta})$ and, by Lemma~\ref{lem:boundsRecords}, $\ensuremath{\mathrm{record}}(T_{n,\theta})=(\theta+o_\mathbb{P}(1))\log n$, the $j$ term in the upper bound does not contribute notably and the previous inequality can be strengthened as follows: \begin{align*} \big|T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})\big|\leq ne^{-(1+o_\mathbb{P}(1))\frac{j}{\theta}}\,. \end{align*} Every node of a tree either lies on the rightmost path or belongs to some $T(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})$, whose root is at depth $j+1$; hence, with the convention $h(\emptyset)=-1$, for any finite subtree $T$ of $T_\infty$ containing $\ensuremath{\varnothing}$ we have \begin{align*} h(T)\leq\max_{0\leq j\leq\ensuremath{\mathrm{record}}(T)}\Big\{j+1+h\big(T(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})\big)\Big\}\,. \end{align*} Replacing $T$ with $T_{n,\theta}$ in this bound and using the previous upper bound on the size of the left subtrees and the fact that, conditioned on their sizes, the left subtrees are independent random binary search trees, we obtain that \begin{align*} h(T_{n,\theta})&\leq1+\max_{0\leq j\leq\ensuremath{\mathrm{record}}(T_{n,\theta})}\left\{j+c^*\left(\log n-\big(1+o_\mathbb{P}(1)\big)\frac{j}{\theta}\right)\right\}\\ &=1+\max_{0\leq j\leq\ensuremath{\mathrm{record}}(T_{n,\theta})}\left\{c^*\log n+j\left(1-\frac{c^*}{\theta}+o_\mathbb{P}(1)\right)\right\}\,. \end{align*} The $o_\mathbb{P}(1)$ term comes from the law of large numbers for the heights of binary search trees, and these heights are sufficiently concentrated that this term can in fact be taken out of the maximum; the additive constant is negligible at the scale $\log n$.
This leads to the following upper bound for the height of record-biased trees \begin{align*} h(T_{n,\theta})\leq c^*\log n+\max\left\{0,\,1-\frac{c^*}{\theta}+o_\mathbb{P}(1)\right\}\ensuremath{\mathrm{record}}(T_{n,\theta})\,. \end{align*} Finally, by using that $\ensuremath{\mathrm{record}}(T_{n,\theta})=(\theta+o_\mathbb{P}(1))\log n$ from Lemma~\ref{lem:boundsRecords}, we obtain that \begin{align*} h(T_{n,\theta})\leq\max\big\{c^*,\,\theta\big\}\log n+o_\mathbb{P}(\log n)\,, \end{align*} which corresponds to the upper bound of Theorem~\ref{thm:height}. The rest of the paper is organized in two sections. The first one, Section~\ref{sec:RBTrees}, uses a generative model for record-biased permutations introduced in~\cite{auger2016analysis} to deduce properties of record-biased trees. The second one, Section~\ref{sec:height}, combines the previously described properties to prove the theorems. Before moving into the details of the proof, we define an important set of events for the study of record-biased trees. Given any $\ensuremath{\mathbb{N}}$-valued sequence $K=(k_j)_{j\geq0}$, let $E_{n,\theta}(K)$ be the event that the left subtree sizes $(|T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})|)_{j\geq0}$ are given by the entries of $K$: \begin{align} E_{n,\theta}(K):=\Big\{\big|T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})\big|=k_j,\forall j\in\ensuremath{\mathbb{N}}\Big\}\,.\label{eq:EK} \end{align} Naturally, we will only be interested in $K$ such that $\mathbb{P}(E_{n,\theta}(K))>0$. Note that, for any finite subtree $T$ of $T_\infty$, $\ensuremath{\mathrm{record}}(T)$ corresponds to the unique value $r\geq1$ such that \begin{align*} r+\sum_{0\leq j<r}\big|T(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})\big|=|T|\,. 
\end{align*} This implies that, for any vector $K$ with $\mathbb{P}(E_{n,\theta}(K))>0$, there is a unique non-negative integer $r=r(K)$ such that $r+\sum_{0\leq j<r}k_j=n$ and on the event $E_{n,\theta}(K)$, necessarily $\ensuremath{\mathrm{record}}(T_{n,\theta})=r(K)$. In particular, conditioning on $E_{n,\theta}(K)$ determines $\ensuremath{\mathrm{record}}(T_{n,\theta})$. \subsection{Related work} As mentioned before, record-biased permutations were introduced only recently~\cite{auger2016analysis}; prior to the current work and to the best of our knowledge, \cite{auger2016analysis} was the only paper studying the model. However, it is worth situating this model in a slightly larger literature on random permutations. The authors of~\cite{auger2016analysis} observed that record-biased permutations can be bijectively mapped to Ewens permutations~\cite{ewens1972sampling} using the Foata bijection~\cite{foata1968netto}. This connection leads to interesting properties of both record-biased permutations and Ewens permutations related to their numbers of records and to their cycle structures. Moreover, although the record-biased model was originally connected with Ewens permutations~\cite{ewens1972sampling}, it is worth noting its strong resemblance to the Mallows model of permutations~\cite{mallows1957non}; this resemblance was part of the inspiration for the current work. Contrary to record-biased permutations, the literature on binary search trees and their height is vast and we only provide a glimpse of it here. The first order asymptotic behaviour of the height of random binary search trees was proven by Devroye~\cite{devroye1986note}, who showed that their height is of order $(c^*+o_\mathbb{P}(1))\log n$; this result built on earlier work of Pittel~\cite{pittel1984growing}, which showed that the height is of order $(\alpha+o_\mathbb{P}(1))\log n$ for some constant $\alpha$, without identifying that constant.
Since then, these results have been extended to higher order asymptotic behaviour~\cite{drmota2003analytic,reed2003height}, and to other models of increasing trees~\cite{broutin2008height,drmota2009height}. More recently, similar results, on which the present work builds, established the first order asymptotic behaviour of the height of Mallows trees, as well as distributional limits under some assumptions on the parameters of the model~\cite{addario2021height}. Finally, it is worth mentioning that the heights of binary search trees are often closely related to properties of extremes in \textit{branching random walks}. An important example comes from the results of~\cite{broutin2008height,devroye1986note}, which strongly rely on the Hammersley--Kingman--Biggins theorem~\cite{biggins1976first,hammersley1974postulates,kingman1975first}, providing a law of large numbers for the minimum of a branching random walk. Related results on the minimal position in branching random walks can be found in~\cite{addario2009minima,aidekon2013convergence,dekking1991limit,hu2009minimal}. \section{Properties of record-biased trees}\label{sec:RBTrees} In this section we provide useful properties of record-biased trees and prove Proposition~\ref{prop:inductiveTree}, Lemma~\ref{lem:boundsRecords} and Proposition~\ref{prop:upperBoundL}. Moreover, we use Proposition~\ref{prop:upperBoundL} to obtain upper tail bounds on the sizes of the left subtrees $(|T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})|)_{j\geq0}$. Most results follow from the generative model of record-biased permutations developed in~\cite{auger2016analysis}. \subsection{Generating record-biased trees}\label{subsec:generating} The following proposition was proven in~\cite{auger2016analysis} and gives an easy way to understand and generate record-biased permutations. \begin{prop}\label{prop:permDistribution} Let $n\in\ensuremath{\mathbb{N}}$ and $\theta\in[0,\infty)$.
Let $\sigma$ be a $w_{n,\theta}$-distributed permutation. Then, for all $k\in[n]$, we have \begin{align*} \mathbb{P}\big(\sigma^{-1}(1)=k\big)=\left\{\begin{array}{ll} \frac{\theta}{\theta+n-1} & \textrm{if $k=1$}\\ \frac{1}{\theta+n-1} & \textrm{otherwise} \end{array}\right.\,. \end{align*} Moreover, by defining $A_i=[n]\setminus\{\sigma^{-1}(1),\ldots,\sigma^{-1}(i-1)\}$, we have \begin{align*} \mathbb{P}\big(\sigma^{-1}(i)=k\mid\sigma^{-1}(1),\ldots,\sigma^{-1}(i-1)\big)=\left\{\begin{array}{ll} \frac{\theta}{\theta+n-i} & \textrm{if $k=\min(A_i)$} \\ \frac{1}{\theta+n-i} & \textrm{otherwise} \end{array}\right.\,. \end{align*} \end{prop} The previous proposition fully describes the joint distribution of the values of $\sigma^{-1}(1),\ldots,\sigma^{-1}(n)$, so uniquely characterizes record-biased permutations. Note that $\sigma(k)$ is a record if and only if $k=\min(A_i)$, and that the probability for $\sigma(k)$ to be a record is independent of the previous values of $\sigma^{-1}(1),\ldots,\sigma^{-1}(i-1)$, which gives a useful way to compute the number of records of a record-biased permutation, as stated in the following corollary. \begin{cor}\label{cor:mgfRecord} Let $n\in\ensuremath{\mathbb{N}}$ and $\theta\in[0,\infty)$. Let $\sigma$ be a $w_{n,\theta}$-distributed permutation. Then, for any $t\in\mathbb{R}$, the moment generating function of $\ensuremath{\mathrm{record}}(\sigma)$ is given by \begin{align*} \mathbb{E}\left[e^{t\cdot\ensuremath{\mathrm{record}}(\sigma)}\right]=\prod_{1\leq i\leq n}\left(1+(e^t-1)\frac{\theta}{\theta+n-i}\right)\,. 
\end{align*} \end{cor} Given a permutation $\sigma\in\ensuremath{\mathcal{S}_n}$ with $k=\sigma(1)$, write $\ensuremath{\sigma_-}$ for the unique permutation of $[k-1]$ corresponding to the ordering of $\sigma$ on the set $\sigma^{-1}([k-1])$, that is \begin{align*} \ensuremath{\sigma_-}^{-1}(i)<\ensuremath{\sigma_-}^{-1}(j)\,\Longleftrightarrow\sigma^{-1}(i)<\sigma^{-1}(j) \end{align*} Similarly, write $\ensuremath{\sigma_+}$ for the unique permutation of $[n-k]$ corresponding to the ordering of $\sigma$ on the set $\sigma^{-1}([n]\setminus[k])$, that is \begin{align*} \ensuremath{\sigma_+}^{-1}(i)<\ensuremath{\sigma_+}^{-1}(j)\,\Longleftrightarrow\sigma^{-1}(k+i)<\sigma^{-1}(k+j) \end{align*} The following corollary gives an interesting characterization of $\sigma(1)$, $\ensuremath{\sigma_-}$, and $\ensuremath{\sigma_+}$ when $\sigma$ is a record-biased permutation. \begin{cor}\label{cor:inductivePermutation} Let $n\in\ensuremath{\mathbb{N}}$ and $\theta\in[0,\infty)$. Let $\sigma$ be a $w_{n,\theta}$-distributed permutation. Then, for any $k\in[n]$, we have \begin{align*} \mathbb{P}\big(\sigma(1)=k\big)=\frac{\theta}{\theta+n-k}\prod_{1\leq i<k}\left(1-\frac{\theta}{\theta+n-i}\right)\,. \end{align*} Moreover, given that $\sigma(1)=k$, $\ensuremath{\sigma_-}$ is a uniformly random permutation of $\mathcal{S}_{k-1}$, $\ensuremath{\sigma_+}$ is a record-biased permutation with parameters $n-k$ and $\theta$, and $\ensuremath{\sigma_-}$ and $\ensuremath{\sigma_+}$ are independent of each other. \end{cor} \begin{proof} For the distribution of $\sigma(1)$, note that we have \begin{align*} \mathbb{P}\big(\sigma(1)=k\mid\sigma(1)>k-1\big)=\mathbb{P}\big(\sigma^{-1}(k)=1\mid\sigma(1)>k-1\big)=\frac{\theta}{\theta+n-k}\,, \end{align*} where the second equality follows from Proposition~\ref{prop:permDistribution} and the fact that $1\in A_k\Leftrightarrow1=\min(A_k)$. By induction, this proves the desired distribution for $\sigma(1)$. 
For the distribution of $\ensuremath{\sigma_-}$, note that, if $\sigma(1)=k$, then $1\notin\{\sigma^{-1}(1),\ldots,\sigma^{-1}(k-1)\}$. Combining this with Proposition~\ref{prop:permDistribution}, we have that, for any $i<k$, for any $j_1,\ldots,j_{i-1}$ all distinct in $[n]\setminus\{1\}$, and for any $j\in[n]\setminus\{1,j_1,\ldots,j_{i-1}\}$ \begin{align*} &\mathbb{P}\Big(\sigma^{-1}(i)=j\,\Big|\,\sigma(1)=k,\sigma^{-1}(1)=j_1,\ldots,\sigma^{-1}(i-1)=j_{i-1}\Big)\\ &\hspace{0.5cm}=\mathbb{P}\Big(\sigma^{-1}(i)=j\,\Big|\,\sigma^{-1}(i)\neq1,\sigma^{-1}(1)=j_1,\ldots,\sigma^{-1}(i-1)=j_{i-1}\Big)\\ &\hspace{0.5cm}\propto\frac{1}{\theta+n-i}\,. \end{align*} Since the last value does not depend on $j$, $\sigma^{-1}(i)$ is uniformly distributed over $j\in[n]\setminus\{1,j_1,\ldots,j_{i-1}\}$. Moreover, the definition of $\ensuremath{\sigma_-}$ implies that, for any $\tau\in\mathcal{S}_{k-1}$, we have \begin{align*} \mathbb{P}\Big(\ensuremath{\sigma_-}=\tau\,\Big|\,\sigma(1)=k\Big)=\mathbb{P}\Big(\sigma^{-1}\big(\tau(1)\big)<\sigma^{-1}\big(\tau(2)\big)<\ldots<\sigma^{-1}\big(\tau(k-1)\big)\,\Big|\,\sigma(1)=k\Big)=\frac{1}{(k-1)!} \end{align*} which proves that, conditionally given $\sigma(1)=k$, the random permutation $\ensuremath{\sigma_-}$ is uniformly distributed. Finally, for the distribution of $\ensuremath{\sigma_+}$, by writing $A_{k+1}=[n]\setminus\{\sigma^{-1}(1),\ldots,\sigma^{-1}(k)\}=\{x_1<\ldots<x_{n-k}\}$, Proposition~\ref{prop:permDistribution} implies that, for any $\tau\in\mathcal{S}_{n-k}$, we have \begin{align*} &\mathbb{P}\Big(\sigma^{-1}(k+1)=x_{\tau(1)},\ldots,\sigma^{-1}(n)=x_{\tau(n-k)}\,\Big|\,\sigma^{-1}(1),\ldots,\sigma^{-1}(k-1),\sigma(1)=k\Big)\\ &\hspace{0.5cm}=\prod_{k<i\leq n}\mathbb{P}\Big(\sigma^{-1}(i)=x_{\tau(i-k)}\,\Big|\,\sigma^{-1}(1),\ldots,\sigma^{-1}(k-1),\sigma(1)=k,\sigma^{-1}(j)=x_{\tau(j-k)},\forall k<j<i\Big)\\ &\hspace{0.5cm}=w_{n-k,\theta}(\tau)\,. 
\end{align*} By the definition of $\ensuremath{\sigma_+}$, this proves that \begin{align*} \mathbb{P}\Big(\ensuremath{\sigma_+}=\tau\,\Big|\,\sigma^{-1}(1),\ldots,\sigma^{-1}(k-1),\sigma(1)=k\Big)=w_{n-k,\theta}(\tau)\,, \end{align*} implying that, conditionally given $\sigma(1)=k$, the random permutation $\ensuremath{\sigma_+}$ is $w_{n-k,\theta}$-distributed and independent of $\ensuremath{\sigma_-}$. \end{proof} Proposition~\ref{prop:inductiveTree} now almost directly follows from Corollary~\ref{cor:inductivePermutation}. \begin{proof}[Proof of Proposition~\ref{prop:inductiveTree}] The definition of binary search trees implies that, for any permutation $\sigma$, we have \begin{align*} \bst{\sigma}(\ensuremath{\overline{0}})=\ensuremath{\overline{0}}\bst{\ensuremath{\sigma_-}} \end{align*} and \begin{align*} \bst{\sigma}(\ensuremath{\overline{1}})=\ensuremath{\overline{1}}\bst{\ensuremath{\sigma_+}}\,. \end{align*} Combining this with Corollary~\ref{cor:inductivePermutation} proves that, conditionally given their sizes, $T_{n,\theta}(\ensuremath{\overline{0}})$ is a random binary search tree, $T_{n,\theta}(\ensuremath{\overline{1}})$ is a $\theta$-record-biased tree, and they are independent of each other. The direct statement of Proposition~\ref{prop:inductiveTree} now simply follows from Corollary~\ref{cor:inductivePermutation} and the fact that $|\bst{\sigma}(\ensuremath{\overline{0}})|=\sigma(1)-1$. For the converse, use that binary search trees are completely characterized by their left and right subtree distributions to see that these distributions completely characterize record-biased trees. \end{proof} \subsection{Length of the rightmost path}\label{subsec:rightLength} As explained in Section~\ref{subsec:overview}, to prove our results on the height of record-biased trees, we control the length of the rightmost path and bound the sizes of the left subtrees hanging from that path. 
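As an aside, Proposition~\ref{prop:permDistribution} also provides a direct way to simulate record-biased permutations. The following sketch is only an illustration (the function names are ours, not from~\cite{auger2016analysis}): it places the values $1,\ldots,n$ one at a time, sending value $i$ to the leftmost free position, and hence creating a record, with probability $\theta/(\theta+n-i)$.

```python
import random

def record_biased_perm(n, theta, rng=random):
    """Sample a w_{n,theta}-distributed permutation of [n].

    Following the generative description, value i is placed at the
    leftmost free position (creating a record) with probability
    theta/(theta+n-i), and otherwise at a uniformly random other
    free position."""
    free = list(range(1, n + 1))   # positions not yet occupied, kept sorted
    sigma = [0] * n                # sigma[k-1] = value sitting at position k
    for i in range(1, n + 1):
        if len(free) == 1 or rng.random() < theta / (theta + n - i):
            k = free.pop(0)        # k = min(A_i): sigma(k) is a record
        else:
            k = free.pop(rng.randrange(1, len(free)))
        sigma[k - 1] = i
    return sigma

def records(sigma):
    # Number of records (left-to-right maxima) of the permutation.
    best = r = 0
    for v in sigma:
        if v > best:
            best, r = v, r + 1
    return r
```

For instance, taking $\theta=0$ deterministically places the value $n$ at position $1$, so the sampled permutation has exactly one record, while large $\theta$ favours permutations close to the identity.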
We now prove Lemma~\ref{lem:boundsRecords}, bounding the number of records of a record-biased permutation, hence bounding the length of the rightmost path of the corresponding binary search tree. \begin{proof}[Proof of Lemma~\ref{lem:boundsRecords}] Using Corollary~\ref{cor:mgfRecord}, for a $w_{n,\theta}$-distributed permutation $\sigma$, for any $t\in\mathbb{R}$, we have \begin{align*} \mathbb{E}\left[e^{t\cdot\ensuremath{\mathrm{record}}(\sigma)}\right]=\prod_{1\leq i\leq n}\left(1+(e^t-1)\frac{\theta}{\theta+n-i}\right)\,. \end{align*} Now, using Chernoff's bound with $t>0$, it follows that \begin{align*} \mathbb{P}\Big(\ensuremath{\mathrm{record}}(\sigma)>(1+\epsilon)\mu(n,\theta)\Big)\leq e^{-t(1+\epsilon)\mu(n,\theta)}\prod_{1\leq i\leq n}\left(1+(e^t-1)\frac{\theta}{\theta+n-i}\right)\,. \end{align*} Since $1+x\leq e^x$, we have that \begin{align*} \mathbb{P}\Big(\ensuremath{\mathrm{record}}(\sigma)>(1+\epsilon)\mu(n,\theta)\Big)\leq\exp\Big(-t(1+\epsilon)\mu(n,\theta)+(e^t-1)\mu(n,\theta)\Big)\,, \end{align*} where $\mu(n,\theta)$ is defined in \eqref{eq:mu}. For $t>0$ small enough, $-t(1+\epsilon)+(e^t-1)<0$, proving the upper bound of the lemma. Similarly for the lower bound, for $t>0$, \begin{align*} \mathbb{P}\Big(\ensuremath{\mathrm{record}}(\sigma)<(1-\epsilon)\mu(n,\theta)\Big)&\leq e^{t(1-\epsilon)\mu(n,\theta)}\prod_{1\leq i\leq n}\left(1+(e^{-t}-1)\frac{\theta}{\theta+n-i}\right)\\ &\leq\exp\Big(t(1-\epsilon)\mu(n,\theta)+(e^{-t}-1)\mu(n,\theta)\Big)\,, \end{align*} and once again, for $t>0$ small enough, $t(1-\epsilon)+(e^{-t}-1)<0$. \end{proof} \subsection{Stochastic bound on the left subtrees}\label{subsec:boundsLeft} We conclude this section with results on the sizes of the left subtrees of $T_{n,\theta}$. We start with a lemma useful to bound the size of the right subtree at the root of a record-biased tree. \begin{lemma}\label{lem:boundFirstLeft} Let $\theta\in(0,\infty)$ and let $B$ be a $\mathrm{Beta}(\theta,1)$-distributed random variable.
Then, for any $n\in\ensuremath{\mathbb{N}}$, \begin{align*} \big|T_{n,\theta}(\ensuremath{\overline{1}})\big|\preceq nB+1\,. \end{align*} \end{lemma} \begin{proof} This statement holds when $n=0$ since $0\leq1$, so we can now assume that $n\geq1$. By Proposition~\ref{prop:inductiveTree}, for any $0\leq k\leq n-1$, we have \begin{align*} \mathbb{P}\Big(\big|T_{n,\theta}(\ensuremath{\overline{1}})\big|\leq n-k-1\Big)=\mathbb{P}\Big(\big|T_{n,\theta}(\ensuremath{\overline{0}})\big|\geq k\Big)=\prod_{1\leq i\leq k}\left(1-\frac{\theta}{\theta+n-i}\right)\,. \end{align*} By using that $\frac{1}{1+x}\geq e^{-x}$ along with the fact that $1-\frac{\theta}{\theta+n-i}=\frac{1}{1+\frac{\theta}{n-i}}$, this yields the lower bound \begin{align*} \mathbb{P}\Big(\big|T_{n,\theta}(\ensuremath{\overline{1}})\big|\leq n-k-1\Big)\geq\exp\left(-\theta\sum_{1\leq i\leq k}\frac{1}{n-i}\right)\,. \end{align*} By comparison with the integral, it follows that \begin{align*} \mathbb{P}\Big(\big|T_{n,\theta}(\ensuremath{\overline{1}})\big|\leq n-k-1\Big)\geq\exp\left(-\theta\int_{n-k-1}^{n-1}\frac{1}{t}dt\right)\geq\left(\frac{n-k-1}{n}\right)^\theta\,. \end{align*} Using this bound, for any $t\in[0,n-1]$, we have \begin{align*} \mathbb{P}\Big(\big|T_{n,\theta}(\ensuremath{\overline{1}})\big|\leq t\Big)=\mathbb{P}\Big(\big|T_{n,\theta}(\ensuremath{\overline{1}})\big|\leq \lfloor t\rfloor\Big)\geq\left(\frac{\lfloor t\rfloor}{n}\right)^\theta\,. \end{align*} Finally, use that \begin{align*} \mathbb{P}\big(nB+1\leq t\big)=\left(\left(\frac{t-1}{n}\right)_+\right)^\theta\leq\left(\frac{\lfloor t\rfloor}{n}\right)^\theta \end{align*} to conclude the proof of the lemma. \end{proof} Note that there is a natural stochastic lower bound for $|T_{n,\theta}(\ensuremath{\overline{1}})|$, given by $nB-1-\theta$, whose proof essentially follows the same line of argument as the upper bound. However, this lower bound is not useful for our argument, so we omit it.
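As a quick numerical sanity check, and purely as an illustration (the helper name below is ours), the exact product formula from Proposition~\ref{prop:inductiveTree} can be compared with the polynomial lower bound just derived.

```python
def left_size_tail(n, k, theta):
    # Exact value of P(|T_{n,theta}(0-bar)| >= k), which by Proposition
    # prop:inductiveTree equals P(|T_{n,theta}(1-bar)| <= n-k-1).
    p = 1.0
    for i in range(1, k + 1):
        p *= 1.0 - theta / (theta + n - i)
    return p

# The proof above shows P(|T_{n,theta}(1-bar)| <= n-k-1) >= ((n-k-1)/n)^theta;
# check this numerically for a few (n, theta) pairs and all 0 <= k < n.
for n, theta in [(30, 0.5), (30, 2.0), (100, 5.0)]:
    for k in range(n):
        assert left_size_tail(n, k, theta) >= ((n - k - 1) / n) ** theta - 1e-12
```

The inequality is loose for small $k$ and tightens as $k$ approaches $n-1$, matching the integral comparison in the proof.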
Using Lemma~\ref{lem:boundFirstLeft}, we can now prove Proposition~\ref{prop:upperBoundL}. \begin{proof}[Proof of Proposition~\ref{prop:upperBoundL}] By Lemma~\ref{lem:boundFirstLeft}, \begin{align*} \mathbb{P}\big(\big|T_{n,\theta}(\ensuremath{\overline{1}})\big|\leq k\big)\geq\mathbb{P}\big(1+nB_0\leq k\big)\,. \end{align*} Now, by using properties of subtrees of a record-biased tree from Proposition~\ref{prop:inductiveTree}, we have that \begin{align*} \mathbb{P}\Big(\big|T_{n,\theta}(\ensuremath{\overline{1}}^j)\big|\leq k\,\Big|\,\big|T_{n,\theta}(\ensuremath{\overline{1}}^{j-1})\big|=\ell\Big)=\mathbb{P}\Big(\big|T_{\ell,\theta}(\ensuremath{\overline{1}})\big|\leq k\Big)\geq\mathbb{P}\Big(1+\ell B_{j-1}\leq k\Big)\,. \end{align*} It follows that \begin{align*} \big|T_{n,\theta}(\ensuremath{\overline{1}}^j)\big|\preceq1+B_{j-1}\left(j-1+n\prod_{0\leq i<j-1}B_i\right)\leq j+n\prod_{0\leq i<j}B_i\,. \end{align*} To conclude this proof, simply use that $|T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})|\leq|T_{n,\theta}(\ensuremath{\overline{1}}^j)|$. \end{proof} We can now conclude this section with a rather sharp upper tail bound on the sizes of the left subtrees in $T_{n,\theta}$. \begin{prop}\label{prop:boundL} Let $n\in\ensuremath{\mathbb{N}}$ and $\theta\in(0,\infty)$. Fix $\epsilon>0$ such that $\epsilon\theta<1$. Then, for any $M\in[0,\infty)$ and $k\in\ensuremath{\mathbb{N}}$ such that $ke^{\left(\frac{1}{\theta}-\epsilon\right)k}<ne^M$, we have \begin{align*} \mathbb{P}\Big(\exists j\leq k,\big|T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})\big|>ne^{-\left(\frac{1}{\theta}-\epsilon\right)j+M}\Big)\leq Ce^{-\lambda M}\cdot\left(1-\frac{ke^{\left(\frac{1}{\theta}-\epsilon\right)k}}{ne^M}\right)^{-\lambda} \end{align*} where $C=\frac{1}{1-(1-\epsilon\theta)e^{\epsilon\theta}}$ and $\lambda=\frac{\epsilon\theta^2}{1-\epsilon\theta}$. \end{prop} \begin{proof} Fix $j\leq k$.
Using the bound from Proposition~\ref{prop:upperBoundL}, we have that \begin{align*} \mathbb{P}\Big(\big|T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})\big|>ne^{-\left(\frac{1}{\theta}-\epsilon\right)j+M}\Big)\leq\mathbb{P}\left(\prod_{0\leq i<j}B_i>e^{-\left(\frac{1}{\theta}-\epsilon\right)j+M}-\frac{j}{n}\right)\,. \end{align*} Now, let $c\in(0,\infty)$ and $t\in(0,\infty)$. Using Markov's inequality, we have \begin{align*} \mathbb{P}\left(\prod_{0\leq i<j}B_i>c\right)\leq\frac{1}{c^t}\mathbb{E}\left[\left(\prod_{0\leq i<j}B_i\right)^t\right]\leq\frac{1}{c^t}\mathbb{E}\left[B_0^t\right]^j=\frac{1}{c^t}\left(\frac{\theta}{\theta+t}\right)^j\,. \end{align*} Since $j\leq k$ and by assumption on the value of $k$, we know that $e^{-\left(\frac{1}{\theta}-\epsilon\right)j+M}-\frac{j}{n}\geq e^{-\left(\frac{1}{\theta}-\epsilon\right)k+M}-\frac{k}{n}>0$. This implies that we can apply the previous bound with $c=e^{-\left(\frac{1}{\theta}-\epsilon\right)j+M}-\frac{j}{n}$ and obtain \begin{align*} \mathbb{P}\Big(\big|T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})\big|>ne^{-\left(\frac{1}{\theta}-\epsilon\right)j+M}\Big)&\leq\left(e^{-\left(\frac{1}{\theta}-\epsilon\right)j+M}-\frac{j}{n}\right)^{-t}\left(\frac{\theta}{\theta+t}\right)^j\\ &=\exp\left(-t\log\left(e^{-\left(\frac{1}{\theta}-\epsilon\right)j+M}-\frac{j}{n}\right)+j\log\left(\frac{\theta}{\theta+t}\right)\right)\,. \end{align*} Write $\Xi_j=\frac{je^{\left(\frac{1}{\theta}-\epsilon\right)j}}{ne^M}<1$.
Using that \begin{align*} \log\left(e^{-\left(\frac{1}{\theta}-\epsilon\right)j+M}-\frac{j}{n}\right)=-\left(\frac{1}{\theta}-\epsilon\right)j+M+\log(1-\Xi_j) \end{align*} and that $\Xi_j\leq\Xi_k$, we eventually have that \begin{align*} \mathbb{P}\Big(\big|T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})\big|>ne^{-\left(\frac{1}{\theta}-\epsilon\right)j+M}\Big)\leq\exp\left(\left(\frac{1}{\theta}-\epsilon\right)tj-t\big(M+\log(1-\Xi_k)\big)+j\log\left(\frac{\theta}{\theta+t}\right)\right)\,. \end{align*} To conclude the proof, set $t=\lambda=\frac{\epsilon\theta^2}{1-\epsilon\theta}$ to obtain \begin{align*} \mathbb{P}\Big(\big|T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})\big|>ne^{-\left(\frac{1}{\theta}-\epsilon\right)j+M}\Big)\leq\exp\bigg(\Big(\epsilon\theta+\log(1-\epsilon\theta)\Big)j-\lambda\Big(M+\log(1-\Xi_k)\Big)\bigg)\,. \end{align*} The desired result follows from using a union bound and summing the right hand side over $j\geq0$. \end{proof} \section{The height of record-biased trees}\label{sec:height} In this section, the characteristics of record-biased trees developed in Section~\ref{sec:RBTrees} are used to obtain bounds on their height and eventually prove Theorems~\ref{thm:height} and~\ref{thm:strongHeight}. \subsection{Lower bounds on the height} We start by proving the following result, corresponding to the lower bound of Theorem~\ref{thm:height}. \begin{prop}\label{prop:lowerBound} Let $\theta\in[0,\infty)$ and $\epsilon>0$. Then, as $n$ tends to infinity, \begin{align*} \mathbb{P}\Big(h(T_{n,\theta})<\big(\max\{c^*,\theta\}-\epsilon\big)\log n\Big)\longrightarrow0\,.
\end{align*} \end{prop} \begin{proof} Using Lemma~\ref{lem:boundsRecords} and the fact that $h(T_{n,\theta})\geq\ensuremath{\mathrm{record}}(T_{n,\theta})-1$ along with the asymptotic behaviour of $\mu(n,\theta)$ from \eqref{eq:mu} when $\theta$ is fixed, we first have that \begin{align*} \mathbb{P}\Big(h(T_{n,\theta})<\big(\theta-\epsilon\big)\log n\Big)\longrightarrow0\,. \end{align*} Note that, in the case $\theta=0$, the previous result trivially holds since $h(T_{n,\theta})\geq0$. It now only remains to prove that \begin{align*} \mathbb{P}\Big(h(T_{n,\theta})<\big(c^*-\epsilon\big)\log n\Big)\longrightarrow0\,. \end{align*} By the definition of the height of trees, we know that \begin{align*} h(T_{n,\theta})\geq1+h\big(T_{n,\theta}(\ensuremath{\overline{0}})\big)\,. \end{align*} Hence, it suffices to show that \begin{align*} \mathbb{P}\Big(h\big(T_{n,\theta}(\ensuremath{\overline{0}})\big)<\big(c^*-\epsilon\big)\log n\Big)\longrightarrow0\,. \end{align*} Using Proposition~\ref{prop:inductiveTree}, we know that \begin{align*} \mathbb{P}\Big(\big|T_{n,\theta}(\ensuremath{\overline{0}})\big|\geq k\Big)=\prod_{1\leq i\leq k}\left(1-\frac{\theta}{\theta+n-i}\right)\,, \end{align*} from which it follows that \begin{align*} \mathbb{P}\left(\big|T_{n,\theta}(\ensuremath{\overline{0}})\big|\geq\left\lfloor\frac{n}{\log n}\right\rfloor\right)&=\prod_{1\leq i\leq\lfloor\frac{n}{\log n}\rfloor}\left(1-\frac{\theta}{\theta+n-i}\right)\geq\left(1-\frac{\theta}{\theta+n-\frac{n}{\log n}}\right)^\frac{n}{\log n}=1-o(1)\,. \end{align*} Recall now from Proposition~\ref{prop:inductiveTree} that, given $|T_{n,\theta}(\ensuremath{\overline{0}})|=k$, $T_{n,\theta}(\ensuremath{\overline{0}})$ is distributed as a random binary search tree of size $k$.
Using the previous lower bound on the size of $T_{n,\theta}(\ensuremath{\overline{0}})$ and the law of large numbers for the height of a random binary search tree~\cite{devroye1986note}, it follows that \begin{align*} \mathbb{P}\Big(h\big(T_{n,\theta}(\ensuremath{\overline{0}})\big)<(c^*-\epsilon)\log n\Big)\longrightarrow0\,. \end{align*} This establishes the second desired lower bound for the height of $T_{n,\theta}$ and concludes the proof. \end{proof} Proving the lower bound of Theorem~\ref{thm:strongHeight} is similar, but easier. \begin{prop}\label{prop:strongLowerBound} Let $(\theta_n)_{n\geq0}$ be a sequence of non-negative numbers, such that $\theta_n\rightarrow\infty$. Then, for all $\epsilon>0$ and $\lambda>0$, we have \begin{align*} \mathbb{P}\Big(h(T_{n,\theta_n})<(1-\epsilon)\mu(n,\theta_n)\Big)=O\left(\frac{1}{n^\lambda}\right)\,. \end{align*} \end{prop} \begin{proof} Using Lemma~\ref{lem:boundsRecords} along with the fact that $h(T_{n,\theta_n})\geq\ensuremath{\mathrm{record}}(T_{n,\theta_n})-1$, we have that \begin{align*} \mathbb{P}\Big(h(T_{n,\theta_n})\leq(1-\epsilon)\mu(n,\theta_n)\Big)\leq e^{-\delta(\epsilon)\mu(n,\theta_n)}\,. \end{align*} Moreover, since $\mu(n,\theta_n)=\omega(\log n)$ whenever $\theta_n\rightarrow\infty$, we know that $e^{-\delta(\epsilon)\mu(n,\theta_n)}=O(n^{-\lambda})$, which concludes the proof of the proposition. \end{proof} \subsection{Upper bounds on the height} The first result of this section proves a very useful upper tail bound on the height of a record-biased tree, conditionally given the sizes of its left subtrees. Before stating this result, recall the definition of $E_{n,\theta}(K)$ when $K=(k_j)_{j\geq0}$ from \eqref{eq:EK}: \begin{align*} E_{n,\theta}(K)=\Big\{\big|T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})\big|=k_j,\,\forall j\in\ensuremath{\mathbb{N}}\Big\}\,.
\end{align*} Moreover, recall from the end of Section~\ref{subsec:overview} that, given that $\mathbb{P}(E_{n,\theta}(K))>0$, there exists a unique $r=r(K)$ such that, conditionally given $E_{n,\theta}(K)$, $\ensuremath{\mathrm{record}}(T_{n,\theta})=r(K)$. \begin{prop}\label{prop:upperBound} Let $n\in\ensuremath{\mathbb{N}}$ and $\theta\in[0,\infty)$. Consider $K=(k_j)_{j\geq0}$ such that $\mathbb{P}(E_{n,\theta}(K))>0$, where $E_{n,\theta}(K)$ is defined in \eqref{eq:EK}, and let $r=r(K)$ be the unique value such that $r+\sum_{0\leq j<r}k_j=n$. Then, for any $\eta\in\ensuremath{\mathbb{N}}$ and $t\geq\log2$, \begin{align*} \mathbb{P}\Big(h(T_{n,\theta})\geq\eta\,\Big|\,E_{n,\theta}(K)\Big)\leq\sum_{0\leq j\leq r}(2e^{-t})^{\eta-j}\cdot(k_j+1)^{e^t-1}\,. \end{align*} \end{prop} \begin{proof} Thanks to Proposition~\ref{prop:inductiveTree}, for any vector $K=(k_j)_{j\geq0}$ such that $\mathbb{P}(E_{n,\theta}(K))>0$, conditionally given $E_{n,\theta}(K)$, the trees $(T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}}))_{j\geq0}$ are independent random binary search trees of respective sizes $(k_j)_{j\geq0}$. Using this fact together with the identity \begin{align*} h(T_{n,\theta})=\max_{0\leq j\leq\ensuremath{\mathrm{record}}(T_{n,\theta})}\Big\{j+h\big(T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})\big)\Big\}\,, \end{align*} it follows that \begin{align*} \mathbb{P}\Big(h(T_{n,\theta})\geq\eta\,\Big|\,E_{n,\theta}(K)\Big)&=\mathbb{P}\left(\max_{0\leq j\leq\ensuremath{\mathrm{record}}(T_{n,\theta})}\Big\{j+h\big(T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})\big)\Big\}\geq\eta\,\bigg|\,E_{n,\theta}(K)\right)\\ &=\mathbb{P}\left(\max_{0\leq j\leq r}\Big\{j+h\big(\mathrm{RBST}_j\big)\Big\}\geq\eta\,\Big|\,E_{n,\theta}(K)\right)\,, \end{align*} where $(\mathrm{RBST}_j)_{j\geq0}$ are independent random binary search trees of respective sizes $k_j$, also independent of $E_{n,\theta}(K)$.
Using a union bound, we obtain that \begin{align} \mathbb{P}\Big(h(T_{n,\theta})\geq\eta\,\Big|\,E_{n,\theta}(K)\Big)\leq\sum_{0\leq j\leq r}\mathbb{P}\Big(j+h\big(\mathrm{RBST}_j\big)\geq\eta\Big)\,.\label{eq:upperBound1} \end{align} To bound the right-hand side, use a union bound over the $2^{\eta-j}$ paths of length $\eta-j$ from the root of $T_\infty$ to obtain \begin{align} \mathbb{P}\Big(j+h\big(\mathrm{RBST}_j\big)\geq\eta\Big)\leq\sum_{v\in T_\infty:|v|=\eta-j}\mathbb{P}\big(v\in\mathrm{RBST}_j\big)=2^{\eta-j}\mathbb{P}\big(\ensuremath{\overline{1}}^{\eta-j}\in\mathrm{RBST}_j\big)\,.\label{eq:upperBound2} \end{align} Using the moment generating function from Corollary~\ref{cor:mgfRecord} and the fact that a binary search tree is simply a record-biased tree with parameter $\theta=1$, we have that \begin{align*} \mathbb{P}\big(\ensuremath{\overline{1}}^{\eta-j}\in\mathrm{RBST}_j\big)&=\mathbb{P}\Big(\ensuremath{\mathrm{record}}(\mathrm{RBST}_j)\geq\eta-j+1\Big)\leq e^{-t(\eta-j+1)}\prod_{1\leq\ell\leq k_j}\left(1+(e^t-1)\frac{1}{k_j-\ell+1}\right)\,. \end{align*} Now, using Bernoulli's inequality $1+\frac{a}{m}\leq\left(1+\frac{1}{m}\right)^a$, valid for $a=e^t-1\geq1$, that is for $t\geq\log2$ (as in all later applications of this bound), we have \begin{align*} \prod_{1\leq\ell\leq k_j}\left(1+(e^t-1)\frac{1}{k_j-\ell+1}\right)=\prod_{1\leq m\leq k_j}\left(1+\frac{e^t-1}{m}\right)\leq\prod_{1\leq m\leq k_j}\left(1+\frac{1}{m}\right)^{e^t-1}=(k_j+1)^{e^t-1}\,. \end{align*} Combining this with $e^{-t}\leq 1$, we obtain that \begin{align} \mathbb{P}\big(\ensuremath{\overline{1}}^{\eta-j}\in\mathrm{RBST}_j\big)\leq\exp\Big(-t(\eta-j)+(e^t-1)\log(k_j+1)\Big)=e^{-t(\eta-j)}\cdot(k_j+1)^{e^t-1}\,.\label{eq:upperBound3} \end{align} Combining \eqref{eq:upperBound1}, \eqref{eq:upperBound2}, and \eqref{eq:upperBound3}, we obtain the desired bound. \end{proof} We now use Proposition~\ref{prop:upperBound} to prove the following result, which allows us to extend our convergence in probability results to convergence in $L^p$. Recall the definition of $\mu(n,\theta)$ from \eqref{eq:mu}.
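The quantities appearing in these bounds are easy to experiment with. The following self-contained sketch (with helper names of our own) builds the binary search tree of a permutation and checks, on small examples, that the rightmost path carries exactly $\ensuremath{\mathrm{record}}(\sigma)$ nodes and that $h\geq\ensuremath{\mathrm{record}}(\sigma)-1$, where $h$ counts edges.

```python
def bst_insert(root, x):
    # Standard binary search tree insertion; nodes are [value, left, right].
    if root is None:
        return [x, None, None]
    node = root
    while True:
        i = 1 if x < node[0] else 2
        if node[i] is None:
            node[i] = [x, None, None]
            break
        node = node[i]
    return root

def bst(perm):
    # Binary search tree of a permutation, inserting values left to right.
    root = None
    for x in perm:
        root = bst_insert(root, x)
    return root

def height(t):
    # Edge-counting height; the empty tree has height -1.
    if t is None:
        return -1
    return 1 + max(height(t[1]), height(t[2]))

def rightmost_path_nodes(t):
    # Number of nodes on the rightmost path of the tree.
    r = 0
    while t is not None:
        r, t = r + 1, t[2]
    return r

def records(perm):
    # Number of records (left-to-right maxima) of the permutation.
    best = r = 0
    for v in perm:
        if v > best:
            best, r = v, r + 1
    return r
```

For example, the identity permutation of $[n]$ yields a path tree with $n$ nodes on the rightmost path and height $n-1$, the extreme case of the record lower bound on the height.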
\begin{prop}\label{prop:UI} For any sequence $(\theta_n)_{n\geq0}$ and any $p>0$, the family of random variables \begin{align*} \left(\left(\frac{h(T_{n,\theta_n})}{\max\{c^*\log n,\mu(n,\theta_n)\}}\right)^p\right)_{n\geq0} \end{align*} is uniformly integrable. \end{prop} \begin{proof} First, write $\nu_n=\max\{c^*\log n,\,\mu(n,\theta_n)\}$ and fix $a>0$. We have that \begin{align*} \mathbb{P}\left(\left(\frac{h(T_{n,\theta_n})}{\nu_n}\right)^p\geq a\right)=\mathbb{P}\big(h(T_{n,\theta_n})\geq a^\frac{1}{p}\nu_n\big)\,. \end{align*} Using Proposition~\ref{prop:upperBound} with $\eta=\lceil a^\frac{1}{p}\nu_n\rceil$ and the fact that $k_j\leq n$, it follows that \begin{align*} \mathbb{P}\Big(h(T_{n,\theta_n})\geq\eta\,\Big|\,E_{n,\theta_n}(K)\Big)\leq\sum_{0\leq j\leq r(K)}(2e^{-t})^{\eta-j}(n+1)^{e^t-1}\,. \end{align*} Moreover, by using Lemma~\ref{lem:boundsRecords}, we know that there exists a universal constant $\delta>0$ such that \begin{align*} \mathbb{P}\Big(\ensuremath{\mathrm{record}}(T_{n,\theta_n})>2\mu(n,\theta_n)\Big)\leq e^{-\delta\mu(n,\theta_n)}\,. \end{align*} In the case where $\mu(n,\theta_n)$ is bounded, the previous tail bound might not suffice for our purpose. However, in this case, we would have that $\mathbb{E}[\ensuremath{\mathrm{record}}(T_{n,\theta_n})]=\mu(n,\theta_n)=O(1)$, from which a simple application of Markov's inequality gives us that \begin{align*} \mathbb{P}\Big(\ensuremath{\mathrm{record}}(T_{n,\theta_n})>2\mu(n,\theta_n)+\log(n+1)\Big)=O\left(\frac{1}{\log n}\right)=o(1)\,. \end{align*} Combining both bounds, we have that \begin{align*} \mathbb{P}\Big(\ensuremath{\mathrm{record}}(T_{n,\theta_n})>2\mu(n,\theta_n)+\log(n+1)\Big)=o(1)\,. 
\end{align*} Combine the previous tail bounds to obtain that \begin{align*} \mathbb{P}\Big(h(T_{n,\theta_n})\geq\eta\Big)&\leq\sum_{K:r(K)\leq 2\mu(n,\theta_n)+\log(n+1)}\mathbb{P}\Big(h(T_{n,\theta_n})\geq\eta\,\Big|\,E_{n,\theta_n}(K)\Big)\mathbb{P}\Big(E_{n,\theta_n}(K)\Big)+o(1)\\ &\leq\sum_{0\leq j\leq2\mu(n,\theta_n)+\log(n+1)}(2e^{-t})^{\eta-j}(n+1)^{e^t-1}+o(1)\,. \end{align*} Let $t=\log(2e)$, so that $2e^{-t}=e^{-1}$ and $e^t-1=2e-1$. The previous equation can then be rewritten as \begin{align*} \mathbb{P}\Big(h(T_{n,\theta_n})\geq\eta\Big)&\leq\sum_{0\leq j\leq2\mu(n,\theta_n)+\log(n+1)}\frac{1}{e^{\eta-j}}(n+1)^{2e-1}+o(1)\\ &\leq\frac{e}{e-1}\exp\Big(2\mu(n,\theta_n)+2e\log(n+1)-\eta\Big)+o(1)\,. \end{align*} Recall now that $\eta=\lceil a^\frac{1}{p}\nu_n\rceil$ and that $\nu_n=\max\{c^*\log n,\,\mu(n,\theta_n)\}$. This implies that, whenever $a>\left(2+\frac{2e}{c^*}\right)^p$, we have \begin{align*} \eta-2\mu(n,\theta_n)-2e\log(n+1)>\left(a^\frac{1}{p}-2-\frac{2e}{c^*}+o(1)\right)\max\big\{c^*\log n,\,\mu(n,\theta_n)\big\}\longrightarrow\infty\,. \end{align*} It follows that, for such $a$, \begin{align*} \lim_{n\rightarrow\infty}\mathbb{P}\left(\left(\frac{h(T_{n,\theta_n})}{\nu_n}\right)^p\geq a\right)=0\,, \end{align*} establishing the claimed uniform integrability. \end{proof} \subsection{Convergence of the height} In this section, we build on the results of the previous sections to finally prove Theorems~\ref{thm:height} and~\ref{thm:strongHeight}. \begin{proof}[Proof of Theorem~\ref{thm:height}] Thanks to the uniform integrability proved in Proposition~\ref{prop:UI}, we only need to establish the convergence in probability in order to prove the theorem. Moreover, Proposition~\ref{prop:lowerBound} already proved the requisite lower bound, so it only remains to prove the upper bound. Assume first that $\theta=0$.
In this case, $\ensuremath{\mathrm{record}}(T_{n,\theta})=1$ and we have that \begin{align*} T_{n,\theta}=\{\varnothing\}\cup\big(\ensuremath{\overline{0}} T_{n,\theta}(\ensuremath{\overline{0}})\big)\,, \end{align*} which implies that \begin{align*} h(T_{n,\theta})=1+h\big(T_{n,\theta}(\ensuremath{\overline{0}})\big)\,. \end{align*} Using that, in this case, $T_{n,\theta}(\ensuremath{\overline{0}})$ is a random binary search tree of size $n-1$, along with the law of large numbers for the height of random binary search trees~\cite{devroye1986note}, it follows that \begin{align*} \mathbb{P}\Big(h(T_{n,\theta})>(c^*+\epsilon)\log n\Big)\longrightarrow0\,, \end{align*} which is the desired upper bound. Assume for the rest of the proof that $\theta>0$. For any $a>0$, by using the bound from Proposition~\ref{prop:upperBound}, we have \begin{align*} \mathbb{P}\Big(h(T_{n,\theta})\geq a\log n\,\Big|\,E_{n,\theta}(K)\Big)\leq\sum_{0\leq j\leq r(K)}(2e^{-t})^{\lfloor a\log n\rfloor-j}\cdot(k_j+1)^{e^t-1}\,. \end{align*} Moreover, from Lemma~\ref{lem:boundsRecords}, we know that, for any $\alpha>0$, as $n$ tends to infinity, \begin{align*} \mathbb{P}\Big(\ensuremath{\mathrm{record}}(T_{n,\theta})>(\theta+\alpha)\log n\Big)\longrightarrow0\,. \end{align*} Finally, Proposition~\ref{prop:boundL} tells us that, for any fixed $\beta>0$ such that $\beta\theta<1$, by setting $k=\left\lfloor\left(\frac{1}{\theta}-\beta\right)^{-1}\log n\right\rfloor$ and $M=2\log\log n$, we have \begin{align*} \mathbb{P}\Big(\exists 0\leq j\leq k,|T_{n,\theta}(\ensuremath{\overline{1}}^j\ensuremath{\overline{0}})|>ne^{-\left(\frac{1}{\theta}-\beta\right)j+M}\Big)\leq Ce^{-\lambda M}\cdot\left(1-\frac{k}{e^M}\right)^{-\lambda}\longrightarrow0\,. \end{align*} Now fix $\alpha,\beta>0$ such that $\left(\frac{1}{\theta}-\beta\right)^{-1}>(\theta+\alpha)$, for example we could take $0<\beta<\frac{1}{\theta}$ and set $\alpha=\frac{\beta\theta}{2}\left(\frac{1}{\theta}-\beta\right)^{-1}$.
Combining the previous results, we obtain \begin{align*} \mathbb{P}\Big(h(T_{n,\theta})\geq a\log n\Big)&\leq\sum_{K:\left\{\genfrac{}{}{0pt}{}{r(K)\leq(\theta+\alpha)\log n\hfill}{k_j\leq n\cdot\exp\left(-\left(\frac{1}{\theta}-\beta\right)j+M\right)}\right.}\mathbb{P}\Big(h(T_{n,\theta})\geq a\log n\,\Big|\,E_{n,\theta}(K)\Big)\mathbb{P}\Big(E_{n,\theta}(K)\Big)+o(1)\\ &\leq\sum_{0\leq j\leq(\theta+\alpha)\log n}(2e^{-t})^{\lfloor a\log n\rfloor-j}\cdot\left(ne^{-\left(\frac{1}{\theta}-\beta\right)j+M}+1\right)^{e^t-1}+o(1)\,, \end{align*} where the bounds on $k_j$ hold since we assumed that $\left(\frac{1}{\theta}-\beta\right)^{-1}>(\theta+\alpha)$, implying that $j\leq(\theta+\alpha)\log n\leq k=\left\lfloor\left(\frac{1}{\theta}-\beta\right)^{-1}\log n\right\rfloor$ for $n$ large enough. Moreover, since $\left(\frac{1}{\theta}-\beta\right)j\leq\left(\frac{1}{\theta}-\beta\right)(\theta+\alpha)\log n<\log n$, it follows that $ne^{-\left(\frac{1}{\theta}-\beta\right)j+M}+1\leq e^{\log n-\left(\frac{1}{\theta}-\beta\right)j+o(\log n)}$. Similarly, $(2e^{-t})^{\lfloor a\log n\rfloor-j}=e^{(\log 2-t)(a\log n-j)+o(\log n)}$. Combining these formulas with the previous upper tail bound on the height, we obtain that, for any $0<\beta<\frac{1}{\theta}$, and for any $0<\alpha<\beta\theta\left(\frac{1}{\theta}-\beta\right)^{-1}$, \begin{align*} \mathbb{P}\Big(h(T_{n,\theta})\geq a\log n\Big)&\leq\sum_{0\leq j\leq(\theta+\alpha)\log n}e^{(\log 2-t)(a\log n-j)+o(\log n)}\cdot\left(e^{\log n-\left(\frac{1}{\theta}-\beta\right)j+o(\log n)}\right)^{e^t-1}+o(1)\\ &=e^{[a(\log2-t)+(e^t-1)+o(1)]\log n}\sum_{0\leq j\leq(\theta+\alpha)\log n}\left(e^{-\log2+t-(e^t-1)\left(\frac{1}{\theta}-\beta\right)}\right)^j+o(1)\,. 
\end{align*} Set now $t=\log c^*$ and recall that $c^*\log\left(\frac{2e}{c^*}\right)=1$ to eventually rewrite the previous bound into \begin{align} &\mathbb{P}\Big(h(T_{n,\theta})\geq a\log n\Big)\label{eq:height1}\\ &\hspace{0.5cm}\leq\exp\left(-(c^*-1)\left[\frac{a}{c^*}-1+o(1)\right]\log n\right)\sum_{0\leq j\leq(\theta+\alpha)\log n}\exp\left((c^*-1)\left[\frac{1}{c^*}-\frac{1}{\theta}+\beta\right]j\right)+o(1)\,.\notag \end{align} We now bound the right-hand side from above in two cases, according to whether $\theta<c^*$ or $\theta\geq c^*$. \textbf{Case 1:} $\theta<c^*$. In this case, for $\beta$ small enough, the sum in $j$ in \eqref{eq:height1} can be bounded by a constant depending only on $c^*$ and $\theta$. Moreover, for any $\epsilon>0$, taking $a=c^*+\epsilon$, the term before the sum converges to $0$. This proves that \begin{align*} \mathbb{P}\Big(h(T_{n,\theta})\geq (c^*+\epsilon)\log n\Big)\longrightarrow0\,. \end{align*} Since we assumed that $\theta<c^*$, this corresponds to the desired upper bound in this case. \textbf{Case 2:} $\theta\geq c^*$. 
For any $\epsilon>0$, taking $a=\theta+\epsilon$ and using \eqref{eq:height1}, we have that \begin{align} &\mathbb{P}\Big(h(T_{n,\theta})\geq(\theta+\epsilon)\log n\Big)\label{eq:height2}\\ &\hspace{0cm}\leq\exp\left(-(c^*-1)\left[\frac{\theta+\epsilon}{c^*}-1+o(1)\right]\log n\right)\sum_{0\leq j\leq(\theta+\alpha)\log n}\exp\left((c^*-1)\left[\frac{1}{c^*}-\frac{1}{\theta}+\beta\right]j\right)+o(1)\notag\\ &\hspace{0cm}\leq\exp\left(-(c^*-1)\left[\frac{\theta+\epsilon}{c^*}-1+o(1)\right]\log n\right)\exp\left(-(c^*-1)\left[\frac{1}{c^*}-\frac{1}{\theta}+\beta\right](\theta+\alpha)\log n+O(1)\right)+o(1)\notag\\ &\hspace{0cm}=\exp\left(-(c^*-1)\left[\frac{\theta+\epsilon}{c^*}-1-\left(\frac{1}{c^*}-\frac{1}{\theta}+\beta\right)(\theta+\alpha)\right]\log n+o(\log n)\right)+o(1)\,,\notag \end{align} In order to conclude the proof in this case, note that \begin{align*} \frac{\theta+\epsilon}{c^*}-1-\left(\frac{1}{c^*}-\frac{1}{\theta}+\beta\right)(\theta+\alpha)&=\frac{\epsilon}{c^*}-\alpha\left(\frac{1}{c^*}-\frac{1}{\theta}\right)-\beta\theta-\alpha\beta\,. \end{align*} Moreover, the upper tail bound of \eqref{eq:height2} holds for any $\epsilon>0$, $0<\beta<\frac{1}{\theta}$, and $0<\alpha<\beta\theta\left(\frac{1}{\theta}-\beta\right)^{-1}$. Hence, for any fixed $\epsilon>0$, we can choose $\alpha$ and $\beta$ small enough, so that \begin{align*} \frac{\theta+\epsilon}{c^*}-1-\left(\frac{1}{c^*}-\frac{1}{\theta}+\beta\right)(\theta+\alpha)>0\,. \end{align*} This implies that \begin{align*} \mathbb{P}\Big(h(T_{n,\theta})\geq(\theta+\epsilon)\log n\Big)\longrightarrow0\,, \end{align*} which proves the desired upper bound in the case $\theta\geq c^*$. This concludes the proof of Theorem~\ref{thm:height}. \end{proof} We conclude this section with the proof of Theorem~\ref{thm:strongHeight}. \begin{proof}[Proof of Theorem~\ref{thm:strongHeight}] Let $(\theta_n)_{n\geq0}$ be such that $\theta_n\rightarrow\infty$ and fix $\epsilon>0$. 
We start by proving the desired upper tail bound. Using Proposition~\ref{prop:upperBound}, we know that \begin{align*} \mathbb{P}\Big(h(T_{n,\theta_n})\geq\eta\,\Big|\,E_{n,\theta_n}(K)\Big)\leq\sum_{0\leq j\leq r(K)}(2e^{-t})^{\eta-j}\cdot(k_j+1)^{e^t-1}\,. \end{align*} Moreover, we know that $k_j\leq n$. Using this upper bound for $k_j$ and letting $t=\log(2e)$ and $\eta=\lceil(1+2\epsilon)\mu(n,\theta_n)\rceil$, we obtain \begin{align*} \mathbb{P}\Big(h(T_{n,\theta_n})\geq(1+2\epsilon)\mu(n,\theta_n)\,\Big|\,E_{n,\theta_n}(K)\Big)&\leq\sum_{0\leq j\leq r(K)}e^{j-\eta}\cdot(n+1)^{e+1}\\ &\leq\frac{1}{e-1}\cdot(n+1)^{e+1}\cdot e^{r(K)-\eta}\,. \end{align*} Finally, using Lemma~\ref{lem:boundsRecords}, we know that \begin{align*} \mathbb{P}\Big(\ensuremath{\mathrm{record}}(T_{n,\theta_n})>(1+\epsilon)\mu(n,\theta_n)\Big)\leq e^{-\delta(\epsilon)\mu(n,\theta_n)}\,, \end{align*} from which we obtain that \begin{align*} &\mathbb{P}\Big(h(T_{n,\theta_n})\geq(1+2\epsilon)\mu(n,\theta_n)\Big)\\ &\hspace{0.5cm}\leq\sum_{K:r(K)\leq(1+\epsilon)\mu(n,\theta_n)}\mathbb{P}\Big(h(T_{n,\theta_n})\geq(1+2\epsilon)\mu(n,\theta_n)\,\Big|\,E_{n,\theta_n}(K)\Big)\mathbb{P}\Big(E_{n,\theta_n}(K)\Big)+e^{-\delta(\epsilon)\mu(n,\theta_n)}\\ &\hspace{0.5cm}\leq\frac{1}{e-1}\cdot(n+1)^{e+1}\cdot e^{(1+\epsilon)\mu(n,\theta_n)-\eta}+e^{-\delta(\epsilon)\mu(n,\theta_n)}\,. \end{align*} From the definition of $\eta$, we know that $(1+\epsilon)\mu(n,\theta_n)-\eta\leq-\epsilon\mu(n,\theta_n)$. Moreover, in the case where $\theta_n\rightarrow\infty$, we know that $\mu(n,\theta_n)=\omega(\log n)$, from which we obtain that \begin{align*} \mathbb{P}\Big(h(T_{n,\theta_n})\geq(1+2\epsilon)\mu(n,\theta_n)\Big)\leq e^{-\epsilon\mu(n,\theta_n)+o(\mu(n,\theta_n))}+e^{-\delta(\epsilon)\mu(n,\theta_n)}=O\left(\frac{1}{n^\lambda}\right)\,. \end{align*} This corresponds to the desired upper tail bound.
Combining this result with the lower tail bound from Proposition~\ref{prop:strongLowerBound}, we have proved both the tail bound of the theorem and the convergence in probability. To conclude this proof, use Proposition~\ref{prop:UI} to extend the convergence in probability to the convergence in $L^p$ for any $p>0$. \end{proof} \section{Future work and open questions} \begin{itemize} \item This work describes the first-order asymptotic behaviour of the height of record-biased trees, and it is natural to next consider lower-order fluctuations. In the regime where $\theta>c^*$, the height is principally controlled by the number of records, and the latter quantity satisfies a central limit theorem; although we do not prove this, it is fairly straightforward to do so using the methods developed in this paper. It is therefore natural to suspect that the height also satisfies a central limit theorem. On the other hand, when $\theta<c^*$, the height of record-biased permutations and the results of~\cite{drmota2003analytic,reed2003height} suggest that, in this case, $h(T_{n,\theta})$ has asymptotically bounded variance as $n\rightarrow\infty$. In the case that $\theta=c^*$, or more generally that $\theta_n\rightarrow c^*$, it is less clear to us what second-order behaviour to expect. \item The similarities between this work and~\cite{addario2021height} raise the question of more general frameworks under which these two models and results fall. Possible developments might come from using a more general model of random trees~\cite{evans2012trickle}, or a more general model of random permutations~\cite{pitman2019regenerative} upon which the binary search trees are built. Considering the height of these generalized models of trees might lead to a unifying framework for Mallows and record-biased trees, as well as other models.
\end{itemize} \subsection*{Acknowledgements} BC wishes to thank his supervisor, Louigi Addario-Berry, for his help with the general structure and presentation of this paper. During the preparation of this research, BC was supported by an ISM scholarship. \medskip \bibliographystyle{siam}
\section{Introduction} In the classical nonabelian Hodge theory \cite{Sim92}, one has the following Simpson correspondence: Let $X$ be a compact K\"{a}hler manifold. There is an equivalence of categories $$ C^{-1}_{X}: \HIG(X)\to \MIC(X), $$ where $\HIG(X)$ is the category of polystable Higgs bundles over $X$ with vanishing first two Chern classes and $\MIC(X)$ is the category of semisimple flat bundles over $X$. The equivalence is independent of the choice of a background K\"{a}hler metric, and the following functoriality holds: Let $f: Y\to X$ be a morphism of compact K\"{a}hler manifolds. Then for any $(E,\theta)\in \HIG(X)$, one has a natural isomorphism in $\MIC(Y)$ \begin{equation}\label{functoriality over C} C_{Y}^{-1}f^*(E,\theta)\cong f^*C^{-1}_{X}(E,\theta). \end{equation} In the nonabelian Hodge theory in positive characteristic \cite{OV}, Ogus-Vologodsky established an analogue of \ref{functoriality over C} for derived categories, with the $W_2$-lifting assumption on $f$ (see Theorem 3.22 \cite{OV}). In a recent preprint \cite{La2}, A. Langer proved the equality \ref{functoriality over C} under the assumption that the $W_2(k)$-lifting of $f$ is good (see Definition 5.1 and Theorem 5.3 in loc. cit.). However, such an assumption on the lifting of $f$ is quite restrictive. Let $k$ be a perfect field of characteristic $p>0$. Let $X$ be a smooth variety over $k$ and $D$ a reduced normal crossing divisor in $X$. One forms the log smooth variety $X_{\log}$ whose log structure is the one determined by $D$. Equip $k$ and $W_2(k)$ with the trivial log structure. Assume that the log morphism $X_{\log}\to k$ is liftable to $W_2(k)$. Choose and then fix such a lifting $\tilde X_{\log}$. Then one has the inverse Cartier transform \footnote{Theorem 6.1 \cite{LSYZ19} deals with only the case of SNCD. 
However, a simple \'{e}tale descent argument extends the construction to the reduced NCD case.} (which is in general not an equivalence of categories without further conditions on the singularities of modules along $D$) $$ C^{-1}_{X_{\log}\subset \tilde X_{\log}}: \HIG_{\leq p-1}(X_{\log}/k)\to \MIC_{\leq p-1}(X_{\log}/k). $$ Let $Y_{\log}=(Y,B)$ be a log smooth variety as above, together with a $W_2(k)$-lifting $\tilde Y_{\log}$, and let $f: Y_{\log}\to X_{\log}$ be a morphism over $k$. Our main result is the following analogue of \ref{functoriality over C} in positive characteristic: \begin{theorem}\label{main result} Notation as above. Then for any object $(E,\theta)\in \HIG_{\leq p-1}(X_{\log}/k)$, one has a natural isomorphism $$ C_{Y_{\log}\subset \tilde Y_{\log}}^{-1}f^{\circ}(E,\theta)\cong f^*C^{-1}_{X_{\log}\subset \tilde X_{\log}}(E,\theta), $$ where $f^{\circ}(E,\theta)$ is the twisted pullback of $(E,\theta)$. \end{theorem} The twisted pullback of $(E,\theta)$ refers to a certain deformation of $f^*(E,\theta)$ along the obstruction class of lifting $f$ over $W_2(k)$. When the obstruction class vanishes, the twisted pullback is just the usual pullback. See \S2 for details. Hence, one has the following immediate consequence. \begin{corollary}\label{main cor} Let $f: Y_{\log}\to X_{\log}$ be a morphism of log smooth varieties over $k$. Assume $f$ is liftable to $W_2(k)$. Then for any object $(E,\theta)\in \HIG_{\leq p-1}(X_{\log}/k)$, one has a natural isomorphism in $\MIC_{\leq p-1}(Y_{\log}/k)$ $$ C_{Y_{\log}\subset \tilde Y_{\log}}^{-1}f^{*}(E,\theta)\cong f^*C^{-1}_{X_{\log}\subset \tilde X_{\log}}(E,\theta). $$ \end{corollary} The notion of \emph{twisted pullback} and the corresponding \emph{twisted functoriality} as exhibited in Theorem \ref{main result} were inspired by the work of Faltings in the $p$-adic Simpson correspondence \cite{Fa05}. It is a remarkable fact that char $p$ and $p$-adic Simpson correspondences have many features in common. As an application, we obtain the following result.
\begin{theorem}\label{semistability} Let $k$ be an algebraically closed field and $f: (Y,B)\to (X,D)$ a morphism between smooth projective varieties equipped with normal crossing divisors over $k$. Let $(E,\theta)$ be a semistable logarithmic Higgs bundle with vanishing Chern classes over $(X,D)$. If either $\textrm{char}(k)=0$, or $\textrm{char}(k)=p>0$ with $f$ being $W_2(k)$-liftable and $\rk(E)\leq p$, then the logarithmic Higgs bundle $f^*(E,\theta)$ over $(Y,B)$ is also semistable with vanishing Chern classes. \end{theorem} For $\textrm{char}(k)=0$ and $D=\emptyset$, the result is due to C. Simpson by transcendental means \cite{Sim92}. Our approach is to deduce it from the char $p$ statement by mod $p$ reduction and hence is purely algebraic. \section{Twisted pullback} We assume our schemes are all noetherian. Let $(R,M)$ be an affine log scheme. Let $f: Y\to X$ be a morphism of log smooth schemes over $R$. Fix an $r\in \N$. Choose and then fix an element $\tau\in \Ext^1(f^*\Omega_{X/R},\sO_Y)$. The aim of this section is to define the twisted pullback along $\tau$ as a functor $$ \TP_{\tau}: \HIG_{\leq r}(X/R)\to \HIG_{\leq r}(Y/R), $$ under the following assumption on $r$: \begin{assumption}\label{basic assumption on r} $r!$ is invertible in $R$. \end{assumption} Let $\Omega_{X/R}$ be the sheaf of relative logarithmic K\"{a}hler differentials and $T_{X/R}$ be its $\sO_X$-dual. They are locally free of rank $\dim X-\dim R$ by log smoothness. The symmetric algebra $\Sym^{\bullet}T_{X/R}=\bigoplus_{k\geq 0} \Sym^kT_{X/R}$ on $T_{X/R}$ is an $\sO_X$-algebra, and one has the following morphisms of $\sO_X$-algebras whose composite is the identity: $$ \sO_X\to \Sym^{\bullet}T_{X/R}\to \sO_X. $$ This defines the zero section of the natural projection $\Omega_{X/R}\to X$, where we view $\Omega_{X/R}$ as a vector bundle over $X$ (see Ex 5.18, Ch. II \cite{Ha77}).
Set $$\sA_r:=\Sym^{\bullet}(T_{X/R})/\Sym^{\geq r+1}(T_{X/R}),$$ which is nothing but the structure sheaf of the closed subscheme $(r+1)X$ of $\Omega_{X/R}$ supported along the zero section. Below, we shall use the notations $\sA_r$ and $\sO_{(r+1)X}$ interchangeably. Note that, as an $\sO_X$-module, $\sA_{r}=\sO_X\oplus T_{X/R}\oplus \cdots \oplus \Sym^{r}T_{X/R}$. The following lemma is well-known. \begin{lemma}\label{correspondence} The category of nilpotent (quasi-)coherent Higgs modules over $X/R$ of exponent $\leq r$ is equivalent to the category of (quasi-)coherent $\sO_{(r+1)X}$-modules. \end{lemma} \begin{proof} The natural inclusion $\iota: X\to \Omega_{X/R}$ of the zero section induces an equivalence of categories between the category of sheaves of abelian groups over $X$ and the category of sheaves of abelian groups over $\Omega_{X/R}$ whose support is contained in the zero section. Let $E$ be a sheaf of abelian groups over $X$. It has a Higgs module structure if it has \begin{itemize} \item [(i)] a ring homomorphism $\theta^0:\sO_X\to \End(E)$; \item [(ii)] an $\sO_X$-linear homomorphism $\theta^1: T_{X/R}\to \End_{\sO_X}(E)$. \end{itemize} Since $\Sym^{\bullet}T_{X/R}$ is generated by $T_{X/R}$ as an $\sO_X$-algebra, $\theta^0$ and $\theta^1$ together extend to a ring homomorphism $$ \theta^{\bullet}: \Sym^{\bullet}T_{X/R}\to \End_{\sO_X}(E)\subset \End(E). $$ If $\theta^1$ is nilpotent of exponent $\leq r$, then $\Sym^{\geq r+1}(T_{X/R})\subset \mathrm{Ann}(E)$. Therefore, we obtain an $\sA_r$-module structure on $E$, that is, an $\sO_{(r+1)X}$-module. As $E$ is (quasi-)coherent as an $\sO_X$-module, it is (quasi-)coherent as an $\sO_{(r+1)X}$-module. Conversely, for a quasi-coherent $\sA_r$-module $E$, one obtains a ring homomorphism $$ \sA_r\to \End(E). $$ Restricting it to the degree zero part, one obtains the $\sO_X$-module structure on $E$.
While restricting to the degree one component, one obtains a morphism of sheaves of abelian groups $$ \theta: T_{X/R}\to \End(E), v\mapsto \theta_v:=\textrm{the multiplication by}\ v. $$ Since for any $v\in T_{X/R}$, $v^{r+1}=0$ in $\sA_r$, it follows that $\theta^{r+1}=0$, that is, the exponent of $\theta$ is $\leq r$. For any $f\in\sO_X, v\in T_{X/R}$ and any $e\in E$, one verifies that $$ \theta_v(fe)=\theta_{fv}(e)=f\theta_v(e), $$ which means that the image of $\theta$ is contained in $\End_{\sO_X}(E)$. The obtained $\sO_X$-module is nothing but the pushforward of $E$ along the composite $(r+1)X\hookrightarrow \Omega_{X/R}\to X$, which is finite. Therefore, $E$ is (quasi-)coherent as an $\sO_X$-module if it is (quasi-)coherent as an $\sO_{(r+1)X}$-module. \end{proof} \begin{remark}\label{equivalence between A-module and Higgs module} An $f^*\Omega_{X/R}$-Higgs module is a pair $(E,\theta)$ where $E$ is an $\sO_Y$-module and $\theta: E\to E\otimes f^*\Omega_{X/R}$ is an $\sO_Y$-linear morphism satisfying $\theta\wedge \theta=0$. A modification of the above argument shows that the category of nilpotent (quasi-)coherent $f^*\Omega_{X/R}$-Higgs modules is equivalent to the category of (quasi-)coherent $f^*\sA_r$-modules. \end{remark} {\bf $1^{st}$ construction:} For an $r$ satisfying Assumption \ref{basic assumption on r}, we have a natural morphism: $$ \exp: H^1(Y,f^*T_{X/R})\to H^1(Y, (f^*\sA_r)^*), \tau\mapsto \exp(\tau)=1+\tau+\cdots+\frac{\tau^r}{r!}, $$ where $(f^*\sA_r)^*$ is the unit group of $f^*\sA_r$. An element of $f^*\sA_r$ is invertible iff its image under $f^*\sA_r\to \sO_{Y}$ is invertible. So the class $\exp(\tau)$ gives rise to an $f^*\sA_r$-module $\sF^{r}_{\tau}$ of rank one. We introduce an intermediate category $\HIG_{\leq r}(f^*\Omega_{X/R})$, which is the category of nilpotent quasi-coherent $f^*\Omega_{X/R}$-Higgs modules of exponent $\leq r$.
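Let us record the elementary computation behind the map $\exp$ above. For local sections $a,b$ of $f^*T_{X/R}$, viewed as degree-one elements of $f^*\sA_r$, every product of total degree exceeding $r$ vanishes, so that
\begin{align*}
\exp(a)\exp(b)=\sum_{m+n\leq r}\frac{a^mb^n}{m!\,n!}=\sum_{k\leq r}\frac{(a+b)^k}{k!}=\exp(a+b)\,,
\end{align*}
by the binomial theorem and Assumption \ref{basic assumption on r}. In particular $\exp(a)\exp(-a)=1$, so $\exp$ indeed takes values in $(f^*\sA_r)^*$ and sends a Cech $1$-cocycle with values in $f^*T_{X/R}$ to a Cech $1$-cocycle with values in $(f^*\sA_r)^*$.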
We define the functor $$ \TP^{\sF}_{\tau}: \HIG_{\leq r}(X/R)\to \HIG_{\leq r}(f^*\Omega_{X/R}) $$ as follows: For an $E\in \HIG_{\leq r}(X/R)$, define $$ \TP^{\sF}_{\tau}(E):=\sF^r_{\tau}\otimes_{f^*\sA_r}f^{*}E $$ as an $f^*\sA_r$-module. Next, for a morphism $\phi: E_1\to E_2$ in $\HIG_{\leq r}(X/R)$, $$\TP^{\sF}_{\tau}(\phi):=id\otimes f^*\phi: \TP^{\sF}_{\tau}(E_1)\to \TP^{\sF}_{\tau}(E_2)$$ is a morphism of $f^*\sA_r$-modules. One has the natural functor from $\HIG_{\leq r}(f^*\Omega_{X/R})$ to $\HIG_{\leq r}(\Omega_{Y/R})$ induced by the differential morphism $f^*\Omega_{X/R}\to \Omega_{Y/R}$. We define the functor $\TP^1_{\tau}$ to be the composite of functors $$ \HIG_{\leq r}(X/R)\stackrel{\TP^{\sF}_{\tau}}{\longrightarrow}\HIG_{\leq r}(f^*\Omega_{X/R})\to \HIG_{\leq r}(Y/R). $$ This is how Faltings \cite{Fa05} defines the twisted pullback in the $p$-adic setting, at least for small $\tau$. \\ {\bf $2^{nd}$ construction:} This is based on the method of \emph{exponential twisting} \cite{LSZ15}, whose basic construction is given as follows: {\itshape Step 0}: Take an open affine covering $\{U_{\alpha}\}_{\alpha\in \Lambda}$ of $X$ as well as an open affine covering $\{V_{\alpha}\}_{\alpha\in \Lambda}$ of $Y$ such that $f: V_{\alpha}\to U_{\alpha}$. Let $\{\tau_{\alpha\beta}\}$ be a Cech representative of $\tau$. That is, $\tau_{\alpha\beta}\in \Gamma(V_{\alpha\beta}, f^*T_{X/R})$ satisfying the cocycle relation $$ \tau_{\alpha\gamma}=\tau_{\alpha\beta}+\tau_{\beta\gamma}. $$ {\itshape Step 1}: Let $(E,\theta)$ be a nilpotent Higgs module over $X$, whose exponent of nilpotency satisfies Assumption \ref{basic assumption on r}. For any $\alpha$, set $(E_{\alpha},\theta_{\alpha})=(E,\theta)|_{U_{\alpha}}$. Then one forms the various local Higgs modules $\{(f^*E_{\alpha},f^*\theta_{\alpha})\}$ via the usual pullback. \\ {\itshape Step 2}: Define $$ G_{\alpha\beta}=\exp(\tau_{\alpha\beta}\cdot f^*\theta)=\sum_{n\geq 0}\frac{(\tau_{\alpha\beta}\cdot f^*\theta)^n}{n!}.
$$ The expression makes sense since each term $\frac{(\tau_{\alpha\beta}\cdot f^*\theta)^n}{n!}$ is well defined by assumption. Obviously, $G_{\alpha\beta}\in \Aut_{\sO_{Y}}(f^*E|_{V_{\alpha\beta}})$. Because of the cocycle relation for $\{\tau_{\alpha\beta}\}$, and since the endomorphisms $\tau_{\alpha\beta}\cdot f^*\theta$ commute with each other, the collection $\{G_{\alpha\beta}\}$ satisfies the cocycle relation $$ G_{\alpha\gamma}=G_{\beta\gamma}G_{\alpha\beta}. $$ Then we use the set of local isomorphisms $\{G_{\alpha\beta}\}$ to glue the local $\Omega_{Y/R}$-Higgs modules $\{(f^*E_{\alpha},f^*\theta_{\alpha})\}$, to obtain a new Higgs module over $Y$. The verification details are analogous to \S2.2 \cite{LSZ15}. It is tedious and routine to verify that the glued Higgs module is, up to natural isomorphism, independent of the choice of affine coverings and Cech representatives of $\tau$. We denote it by $\TP^2_{\tau}(E)$. For a morphism $\phi: E_1\to E_2$ of Higgs modules, it is not difficult to see that $f^*\phi$ induces a morphism $\TP^2_{\tau}(\phi):\TP^2_{\tau}(E_1)\to \TP^2_{\tau}(E_2)$. \begin{proposition}\label{Faltings twisted pullback is exponential twisting} The two functors $\TP^1_{\tau}$ and $\TP^2_{\tau}$ are naturally isomorphic. \end{proposition} \begin{proof} One uses the equivalence in Remark \ref{equivalence between A-module and Higgs module}. It suffices to notice that the element $\exp(\tau_{\alpha\beta})\in f^*\sA_r$ has its image $G_{\alpha\beta}$ in $\Aut_{\sO_{Y}}(f^*E|_{V_{\alpha\beta}})$. \end{proof} By the above proposition, we set $\TP_{\tau}$ to be either of $\TP^i_{\tau}, i=1,2$. \begin{proposition} The functor $\TP_{\tau}$ has the following properties: \begin{itemize} \item [(i)] it preserves rank; \item [(ii)] it preserves direct sum; \item [(iii)] Let $E_i, i=1,2$ be two nilpotent Higgs modules over $X/R$ whose exponents of nilpotency $r_1, r_2$ satisfy that $(r_1+r_2)!$ is invertible in $R$. Then there is a canonical isomorphism of Higgs modules over $Y$: $$ \TP_{\tau}(E_1\otimes E_2) \cong \TP_{\tau}(E_1)\otimes\TP_{\tau}(E_2).
$$ \end{itemize} \end{proposition} \begin{proof} The first two properties are obvious. To approach (iii), one uses the second construction. Note that when $\tau=0$, it is nothing but the fact that $f^*(E_1\otimes E_2)=f^*E_1\otimes f^*E_2$. When the exponents $r_i, i=1,2$ satisfy the condition, one computes that $$ \exp(\tau\cdot (f^*\theta_1\otimes id+id\otimes f^*\theta_2))=\exp(\tau\cdot f^*\theta_1\otimes id)\exp(id\otimes\tau\cdot f^*\theta_2), $$ using the equality $$ (f^*\theta_1\otimes id)(id\otimes f^*\theta_2)=(id\otimes f^*\theta_2)(f^*\theta_1\otimes id)=f^*\theta_1\otimes f^*\theta_2. $$ \end{proof} To conclude this section, we shall point out that there is one closely related construction that works for all $r\in \N$. \\ {\bf $3^{rd}$ construction:} Note that the element $\tau\in \Ext^1(f^*\Omega_{X/R},\sO_Y)\cong \Ext^1(\sO_Y,f^*T_{X/R})$ corresponds to an extension of $\sO_Y$-modules $$ 0\to f^*T_{X/R}\to \sE_{\tau}\stackrel{pr}{\to} \sO_{Y} \to 0. $$ Notice that $\sE_{\tau}$ admits a natural $f^*\Sym^{\bullet}T_{X/R}$-module structure: In degree zero, this is the $\sO_{Y}$-module structure; in degree one, $$ f^*T_{X/R}\otimes_{\sO_{Y}}\sE_{\tau}\stackrel{id\otimes pr}{\longrightarrow} f^*T_{X/R}\otimes_{\sO_{Y}}\sO_{Y}=f^*T_{X/R}\subset \sE_{\tau}, $$ and therefore $\theta: f^*T_{X/R}\to \End_{\sO_{Y}}(\sE_{\tau})$. By construction, $\theta\neq 0$ but $\theta^2=0$. For any $r\in \N$, set $$ \sE^r_{\tau}:=\Sym^r\sE_{\tau}. $$ The proof of the next lemma is straightforward. \begin{lemma}\label{basic property of E^r} For any $r\in \N$, $\sE^r_{\tau}$ is a nilpotent $f^*\Omega_{X/R}$-Higgs bundle of exponent $r$. It admits a filtration $F^{\bullet}$ of $f^*\Omega_{X/R}$-Higgs subbundles: $$ \sE^r_{\tau}=F^0\supset F^1\supset\cdots \supset F^r\supset 0, $$ whose associated graded $Gr_{F^{\bullet}}\sE^r_{\tau}$ is naturally isomorphic to $f^*\sA_r$. When $\tau=0$, $\sE^r_{\tau}=f^*\sA_r$ as $f^*\Omega_{X/R}$-Higgs bundle.
\end{lemma} By the lemma, $\sE^r_{\tau}$ is an $f^*\sA_r$-module of rank one. Therefore, one may replace the tensor module in the definition of $\TP^{\sF}_{\tau}$ with $\sE^{r}_{\tau}$. This defines a new functor $\TP^{\sE}_{\tau}$ and hence the third twisted pullback functor $\TP^3_{\tau}$. \begin{remark} When one is interested only in coherent objects, one may drop the nilpotent condition in the construction. This is because by Cayley-Hamilton, there is an element of the form $v^r-a_1v^{r-1}+\cdots+(-1)^ra_r\in \Sym^{\bullet}T_{X/R}$ annihilating $E$, so that the $\Sym^{\bullet}T_{X/R}$-module structure on $E$ factors through $\Sym^{\bullet}T_{X/R}\to \sA_r$. \end{remark} We record the following statement for further study. \begin{proposition} Assume that $r\in \N$ satisfies Assumption \ref{basic assumption on r}. Then as $f^*\sA_r$-modules, \begin{itemize} \item [(i)] $\sF^r_{\tau}\cong \sE^r_{\tau}$ for $r\leq 1$; \item [(ii)] $\sF^r_{\tau}\ncong \sE^r_{\tau}$ for $r>1$. \end{itemize} \end{proposition} \begin{proof} Obviously, $\sF^0_{\tau}\cong \sE^0_{\tau}\cong \sO_Y$. Assume $r\geq 1$. We illustrate our proof by looking at the case of $X/R$ being a relative curve. We describe $\sE^r_{\tau}$ in terms of local data: Take $r=1$ first. Let $U_{\alpha}$ be an open subset of $X$ with $\partial_{\alpha}$ a local basis of $\Gamma(U_{\alpha}, T_{X/R})$. Assume that $V_{\alpha}$ is an open subset of $Y$ such that $f: V_{\alpha}\to U_{\alpha}$. We may assume the gluing functions between two different local bases are the identity. Let $\{\tau_{\alpha\beta}\}$ be a Cech representative of $\tau$. Write $\tau_{\alpha\beta}=a_{\alpha\beta}f^*\partial_{\alpha\beta}$.
Then $\sE_{\tau}$ is the $\sO_{Y}$-module obtained by gluing $\{\sO_{V_{\alpha}}\oplus f^*T_{U_{\alpha}/R}\}$ via the following gluing matrix: $$ \left( \begin{array}{c} 1 \\ f^*\partial_{\alpha} \\ \end{array} \right)=\left( \begin{array}{cc} 1 & a_{\alpha\beta} \\ 0 & 1 \\ \end{array} \right)\left( \begin{array}{c} 1 \\ f^*\partial_{\beta} \\ \end{array} \right). $$ Under the assumption for $r$, $\sE^r_{\tau}$ is obtained by gluing $$\{f^*\sA_{r}|_{U_{\alpha}}=\sO_{V_{\alpha}}\oplus f^*T_{U_{\alpha}/R}\oplus\cdots \oplus f^*T^{\otimes r}_{U_{\alpha}/R}\}$$ via the gluing matrix: $$ \left( \begin{array}{c} \frac{1}{r!} \\ \frac{f^*\partial_{\alpha}}{(r-1)!} \\ \vdots\\ f^*\partial^{r-1}_{\alpha} \\ f^*\partial^r_{\alpha} \\ \end{array} \right)=\left( \begin{array}{ccccc} 1& a_{\alpha\beta} & \frac{a^2_{\alpha\beta}}{2!} & \ldots & \frac{a^r_{\alpha\beta}}{r!} \\ 0& 1 & a_{\alpha\beta} & \ldots & \frac{a^{r-1}_{\alpha\beta}}{(r-1)!} \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \ldots & 1 & a_{\alpha\beta} \\ 0 & 0 & \ldots & \ldots & 1 \\ \end{array} \right)\left( \begin{array}{c} \frac{1}{r!} \\ \frac{f^*\partial_{\beta}}{(r-1)!} \\ \vdots\\ f^*\partial^{r-1}_{\beta} \\ f^*\partial^r_{\beta} \\ \end{array} \right). $$ As comparison, $\sF^r_{\tau}$ is obtained by gluing $\{f^*\sA_{r}|_{U_{\alpha}}\}$ via the following transition functions $$ \left( \begin{array}{c} 1 \\ f^*\partial_{\alpha} \\ \vdots\\ f^*\partial^{r-1}_{\alpha} \\ f^*\partial^r_{\alpha} \\ \end{array} \right)=\left( \begin{array}{ccccc} 1& a_{\alpha\beta} & \frac{a^2_{\alpha\beta}}{2!} & \ldots & \frac{a^r_{\alpha\beta}}{r!} \\ 0& 1 & a_{\alpha\beta} & \ldots & \frac{a^{r-1}_{\alpha\beta}}{(r-1)!} \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \ldots & 1 & a_{\alpha\beta} \\ 0 & 0 & \ldots & \ldots & 1 \\ \end{array} \right)\left( \begin{array}{c} 1 \\ f^*\partial_{\beta} \\ \vdots\\ f^*\partial^{r-1}_{\beta} \\ f^*\partial^r_{\beta} \\ \end{array} \right). 
$$ Therefore, $\sF^r_{\tau}$ and $\sE^{r}_{\tau}$ are isomorphic as $\sO_Y$-modules. However, when $r\geq 2$, the Higgs structures of these two bundles differ: For $\sF^r_{\tau}$, the Higgs field along $\partial_{\alpha}$ is given by $$ \theta_{\partial_\alpha}\left( \begin{array}{c} 1 \\ f^*\partial_{\alpha} \\ \vdots\\ f^*\partial^{r-1}_{\alpha} \\ f^*\partial^r_{\alpha} \\ \end{array} \right)=\left( \begin{array}{ccccc} 0& 1 & 0 & \ldots & 0 \\ 0& 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \ldots & 0 & 1 \\ 0 & 0 & \ldots & \ldots & 0 \\ \end{array} \right)\left( \begin{array}{c} 1 \\ f^*\partial_{\alpha} \\ \vdots\\ f^*\partial^{r-1}_{\alpha} \\ f^*\partial^r_{\alpha} \\ \end{array} \right), $$ while for Higgs field action for $\sE^r_{\tau}$ is given by $$ \theta_{\partial_\alpha}\left( \begin{array}{c} \frac{1}{r!} \\ \frac{f^*\partial_{\alpha}}{(r-1)!} \\ \vdots\\ f^*\partial^{r-1}_{\alpha} \\ f^*\partial^r_{\alpha} \\ \end{array} \right)=\left( \begin{array}{ccccc} 0& \frac{1}{r} & 0 & \ldots & 0 \\ 0& 0 & \frac{1}{r-1} & \ldots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \ldots & 0& 1 \\ 0 & 0 & \ldots & \ldots & 0 \\ \end{array} \right)\left( \begin{array}{c} \frac{1}{r!} \\ \frac{f^*\partial_{\alpha}}{(r-1)!} \\ \vdots\\ f^*\partial^{r-1}_{\alpha} \\ f^*\partial^r_{\alpha} \\ \end{array} \right). $$ \end{proof} \section{Twisted functoriality} Now we come back to the setting in \S1. First we make the following \begin{definition}\label{twisted pullback} Let $k$, $f: Y_{\log}\to X_{\log}$ and $\tilde X_{\log}, \tilde Y_{\log}$ be as in \S1. For a Higgs module $(E,\theta)\in \HIG_{\leq p-1}(X_{\log}/k)$. Then the twisted pullback $f^\circ(E,\theta)$ is defined to be $\TP_{ob(f)}(E,\theta)$, where $ob(f)$ is the obstruction class of lifting $f$ to a morphism $\tilde Y_{\log}\to \tilde X_{\log}$ over $W_2(k)$. \end{definition} Assume that $f$ admits a $W_2(k)$-lifting $\tilde f$. 
In Langer's proof of functoriality (Theorem 5.3 \cite{La2}), the existence of local logarithmic Frobenius liftings $F_{\tilde X_{\log}}$ and $F_{\tilde Y_{\log}}$ such that $F_{\tilde X_{\log}}\circ \tilde f=\tilde f\circ F_{\tilde Y_{\log}}$ is crucial; this is where the condition of $\tilde f$ being good enters. However, one notices that any local logarithmic Frobenius liftings on $\tilde X$ and $\tilde Y$ commute with $\tilde f$ \emph{up to homotopy}. Heuristic reasoning suggests that this homotopy should be intertwined with the homotopies caused by local logarithmic Frobenius liftings of both $\tilde X$ and $\tilde Y$, as well as the one caused by local liftings of the morphism (no $W_2$-lifting of $f$ is assumed any more). Turning this soft homotopy argument into exact differential calculus in positive characteristic yields the proof of the claimed twisted functoriality. To start the proof of Theorem \ref{main result}, we take an \'{e}tale covering $\mathcal X=\coprod_i X_i\to X$ with $X_i$ affine and the pullback of $D$ along each $X_i\to X$ simple normal crossing. Then we take an \'{e}tale covering $\pi: \mathcal Y=\coprod_i Y_i\to Y$ with similar properties, such that $f$ restricts to a local morphism $f_i: Y_{i,\log}\to X_{i,\log}$ for each $i$. For each $i$, we choose logarithmic Frobenius liftings over $W_2(k)$ $$ F_{\tilde X_{i,\log}}: \tilde X_{i,\log}\to \tilde X_{i,\log},\quad F_{\tilde Y_{i,\log}}: \tilde Y_{i,\log}\to \tilde Y_{i,\log}, $$ and also a $W_2(k)$-lift $\tilde f_i: \tilde Y_{i,\log}\to \tilde X_{i,\log}$. Such local lifts exist. Set $$ (V_1,\nabla_1)=C_{Y_{\log}\subset \tilde Y_{\log}}^{-1}f^{\circ}(E,\theta),\quad (V_2,\nabla_2)=f^*C^{-1}_{X_{\log}\subset \tilde X_{\log}}(E,\theta). $$ Below, we exhibit an isomorphism between $(V_1,\nabla_1)$ and $(V_2,\nabla_2)$ after pulling back to the \'{e}tale covering $\sY$, which satisfies the descent condition.
The whole proof is therefore divided into two steps.\\ {\itshape Step 1: Isomorphism over $\sY$}\\ As $\sY$ is a disjoint union of open affine log schemes $\{Y_{i,\log}\}$, it suffices to construct an isomorphism for each open affine. In the following argument, we drop the subscript $i$ everywhere. Notice first that the two morphisms $\tilde f^*\circ F_{\tilde X_{\log}}^*$ and $F_{\tilde Y_{\log}}^*\circ \tilde f^*$ coincide after reduction modulo $p$. Thus, their difference defines an element $$ \nu_{f}\in \Hom_{\sO_{Y}}(g^*\Omega_{X_{\log}/k},\sO_Y),\quad \text{where } g=F_X\circ f=f\circ F_Y, $$ such that $$ \nu_f\circ d=\frac{1}{p}(F_{\tilde Y_{\log}}^*\circ \tilde f^*-\tilde f^*\circ F_{\tilde X_{\log}}^*). $$ So we get $\nu_f\cdot \theta\in \Gamma(Y,\End_{\sO_Y}(g^*E))$. \begin{lemma} $\exp(\nu_f\cdot \theta)$ defines an isomorphism $(V_1,\nabla_1)\to (V_2,\nabla_2)$. That is, there is a commutative diagram: $$\CD V_1 @> \exp(\nu_f\cdot \theta) >> V_2 \\ @V \nabla_1 VV @VV \nabla_2V \\ V_1\otimes \Omega_{Y_{\log}/k} @>\exp(\nu_f\cdot \theta)\otimes id>> V_2\otimes \Omega_{Y_{\log}/k}. \endCD$$ \end{lemma} \begin{proof} Recall that over $Y$, $V_1=V_2=g^*E$. So $\exp(\nu_f\cdot \theta)$ defines an isomorphism from $V_1$ to $V_2$. Moreover, the connections are given by $$ \nabla_1=\nabla_{can}+(id\otimes \frac{dF_{\tilde Y_{\log}}}{p})(F_Y^*f^*\theta), $$ and respectively by $$ \nabla_2=f^*(\nabla_{can}+(id\otimes \frac{dF_{\tilde X_{\log}}}{p})(F_X^*\theta)). $$ Now we are going to check the commutativity of the above diagram. Take a local section $e\in E$. Then $$ \exp(\nu_f\cdot \theta)\otimes id\circ \nabla_1(g^*e)= \exp(\nu_f\cdot \theta)(id\otimes \frac{dF_{\tilde Y_{\log}}}{p})(F_Y^*f^*\theta(e)). $$ On the other hand, $\nabla_2\circ \exp(\nu_f\cdot \theta)(g^*e)$ equals $$ \exp(\nu_f\cdot \theta)d(\nu_f\cdot \theta)(g^*e)+\exp(\nu_f\cdot \theta)(f^*(id\otimes \frac{dF_{\tilde X_{\log}}}{p})(F_X^*\theta(e))).
$$ We take a system of local coordinates $\{x_i\}$ for $\tilde X$ and use the same notation for its reduction modulo $p$. Write $\theta=\sum_i\theta_idx_i$, and $\nu_f=\sum_iu_i\partial_{x_i}$ with $u_i\in \sO_Y$. Thus $$ d(\nu_f\cdot \theta)=d(\sum_ig^*\theta_i\cdot u_i). $$ Since each $g^*\theta_i$ is a Frobenius pullback and hence killed by $d$, this equals $$ \sum_ig^*\theta_i\cdot du_i=\sum_ig^*\theta_i\cdot d(\frac{(F_{\tilde Y_{\log}}^*\circ \tilde f^*-\tilde f^*\circ F_{\tilde X_{\log}}^*)(x_i)}{p}). $$ So $d(\nu_f\cdot \theta)(g^*e)=\sum_ig^*\theta_i(e)\cdot d(\frac{(F_{\tilde Y_{\log}}^*\circ \tilde f^*-\tilde f^*\circ F_{\tilde X_{\log}}^*)(x_i)}{p})$. On the other hand, \begin{eqnarray*} (id\otimes \frac{dF_{\tilde Y_{\log}}}{p})(F_Y^*f^*\theta(e))&=& \sum_i g^*\theta_i(e)\cdot (id\otimes \frac{dF_{\tilde Y_{\log}}}{p})(F_Y^*f^*(dx_i))\\ &=&\sum_ig^*\theta_i(e)\cdot \frac{d(F_{\tilde Y_{\log}}^*\tilde f^*(x_i))}{p}, \end{eqnarray*} and similarly, \begin{eqnarray*} f^*(id\otimes \frac{dF_{\tilde X_{\log}}}{p})(F_X^*\theta(e))&=& \sum_i g^*\theta_i(e)\cdot \frac{f^*d(F^*_{\tilde X_{\log}}(x_i))}{p}\\ &=&\sum_i g^*\theta_i(e)\cdot \frac{d(\tilde f^*F^*_{\tilde X_{\log}}(x_i))}{p}. \end{eqnarray*} Comparing the three displayed expressions completes the proof. \end{proof} {\itshape Step 2: Descent condition}\\ In Step 1, we have constructed an isomorphism $\exp(\nu_f\cdot \theta): \pi^*(V_1,\nabla_1)\to \pi^*(V_2,\nabla_2)$ whose restriction to $Y_{i,\log}$ is given by $\exp(\nu_{f_i}\cdot \theta)$. Let $p_i: \sY\times_Y \sY\to \sY, i=1,2$ be the two projections. Below, we show that $$ p_1^*(\exp(\nu_f\cdot \theta))=p_2^*(\exp(\nu_f\cdot \theta)). $$ The obstruction class $ob(F_X)$ (resp. $ob(F_Y)$ and $ob(f)$) of lifting $F_X$ (resp. $F_Y$ and $f$) over $W_2$ has its Cech representative landing in $\Gamma(X_{ij},F^*_{X}T_{X_{\log}/k})$ (resp. $\Gamma(Y_{ij},F^*_{Y}T_{Y_{\log}/k})$ and $\Gamma(Y_{ij},f^*T_{X_{\log}/k})$).
We have the following natural maps: $$ f^*: H^1(X,F_X^*T_{X_{\log}/k})\to H^1(Y,f^*F_X^*T_{X_{\log}/k})=H^1(Y,g^*T_{X_{\log}/k}), $$ $F_Y^*: H^1(Y,f^*T_{X_{\log}/k})\to H^1(Y,g^*T_{X_{\log}/k})$, and $$ f_*: H^1(Y,F_Y^*T_{Y_{\log}/k})\to H^1(Y,F_Y^*f^*T_{X_{\log}/k})=H^1(Y,g^*T_{X_{\log}/k}), $$ which is induced by $f_*: T_{Y_{\log}/k}\to f^*T_{X_{\log}/k}$. \begin{lemma}\label{cech relation} One has an equality in $\Gamma(Y_{ij},g^*T_{X_{\log}/k})$, where $Y_{ij}=Y_i\times_YY_j$: $$ \nu_{f_i}-\nu_{f_j}=ob(F_Y)_{ij}+ob(f)_{ij}-ob(F_X)_{ij}, $$ where we understand the obstruction classes as their images under the natural morphisms above. Consequently, there is an equality in $H^1(Y,g^*T_{X_{\log}/k})$: $$ [\nu_{f_i}-\nu_{f_j}]=ob(F_Y)+ob(f)-ob(F_X). $$ \end{lemma} \begin{proof} First, we observe the following identity: \begin{eqnarray*} \frac{1}{p}(F^*_{\tilde Y_{i,\log}}\circ \tilde f_i^*-F^*_{\tilde Y_{j,\log}}\circ \tilde f_j^*)&=& \frac{1}{p}[(F^*_{\tilde Y_{i,\log}}-F^*_{\tilde Y_{j,\log}})\circ \tilde f_i^*]+\frac{1}{p}[F^*_{\tilde Y_{j,\log}}\circ (\tilde f_i^*-\tilde f_j^*)]\\ &=& \frac{F^*_{\tilde Y_{i,\log}}-F^*_{\tilde Y_{j,\log}}}{p}\circ f_i^*+F_{Y_{j}}^*\circ \frac{\tilde f_i^*-\tilde f_j^*}{p}. \end{eqnarray*} It follows that \begin{eqnarray*} (\nu_{f_i}-\nu_{f_j})\circ d&=&\frac{1}{p}(F^*_{\tilde Y_{i,\log}}\circ \tilde f_i^*-F^*_{\tilde Y_{j,\log}}\circ \tilde f_j^*)-\frac{1}{p}(\tilde f_i^*\circ F^*_{\tilde X_{i,\log}} -\tilde f_j^*\circ F^*_{\tilde X_{j,\log}})\\ &=& \frac{F^*_{\tilde Y_{i,\log}}-F^*_{\tilde Y_{j,\log}}}{p}\circ f_i^*+F_{Y_{j}}^*\circ \frac{\tilde f_i^*-\tilde f_j^*}{p}-f_j^*\circ\frac{F^*_{\tilde X_{i,\log}}-F^*_{\tilde X_{j,\log}}}{p}-\frac{\tilde f_i^*-\tilde f_j^*}{p}\circ F_{X_i}^*\\ &=&ob(F_Y)_{ij}\circ f_i^*\circ d+F_{Y_{j}}^*\circ ob(f)_{ij}\circ d-f_j^*\circ ob(F_X)_{ij}\circ d. \end{eqnarray*} In the second equality, the last term vanishes because $d\circ F_{X_i}^*=0$ in characteristic $p$: $$ \frac{\tilde f_i^*-\tilde f_j^*}{p}\circ F_{X_i}^*=ob(f)_{ij}\circ (d\circ F_{X_i}^*)=0.
$$ \end{proof} Now we turn the above equality into the equality required by the descent condition. The transition function of $V_1$ is given by $$ a_{ij}:=\exp(ob(F_Y)_{ij}\cdot f_i^*\theta)\cdot\exp (F_{Y_j}^*(ob(f)_{ij}\cdot \theta)), $$ while the transition function of $V_2$ is given by $$ b_{ij}:=f_j^*\exp(ob(F_X)_{ij}\cdot \theta). $$ Then Lemma \ref{cech relation} implies the commutativity of the following diagram over $Y_{ij}$: $$ \CD V_1|_{Y_i} @>\exp(\nu_{f_i}\cdot \theta)>> V_2|_{Y_i} \\ @V a_{ij} VV @VVb_{ij}V \\ V_1|_{Y_j} @>>\exp(\nu_{f_j}\cdot \theta)> V_2|_{Y_j}. \endCD $$ The commutativity is nothing but the descent condition for the isomorphism $\exp(\nu_f\cdot\theta)$. So we are done. \section{Semistability under pullback} Semistability is not always preserved under pullback. After all, semistability refers to a given ample line bundle, and an ample line bundle does not necessarily pull back to an ample line bundle. Even worse, in the positive characteristic case, there are well-known examples of semistable vector bundles over curves which pull back to unstable bundles under the Frobenius morphism. For a polystable Higgs bundle with vanishing Chern classes in characteristic zero, this is handled by the existence of a Hermitian-Yang-Mills metric: in this case the Higgs bundle is a harmonic bundle, and harmonic bundles pull back to harmonic bundles. Consequently, the pullback of a polystable Higgs bundle with vanishing Chern classes is again polystable with vanishing Chern classes. For a semistable Higgs bundle with vanishing Chern classes, one takes a Jordan-H\"older filtration of the Higgs bundle, and the semistability of the pullback follows from that of the polystable case. In the following, we provide a purely algebraic approach to the semistable case. We proceed to the proof of Theorem \ref{semistability}. \begin{proof} We retain the notation of Theorem \ref{semistability}.
Since taking Chern classes commutes with pullback, the statement about the vanishing Chern classes of the pullback is trivial. We focus on the semistability below. In the following discussion, we choose and then fix an arbitrary ample line bundle $L$ (resp. $M$) over $X$ (resp. $Y$). We consider first the char $p$ setting. Fix a $W_2(k)$-lifting $\tilde f: \tilde Y_{\log}=(\tilde Y, \tilde B)\to \tilde X_{\log}=(\tilde X, \tilde D)$. First, we observe that the proof of \cite[Theorem A.4]{LSZ} works verbatim for a semistable logarithmic Higgs bundle, so that there exists a filtration $Fil_{-1}$ on $E$ such that $\Gr_{Fil_{-1}}(E,\theta)$ is semistable. Now applying \cite[Theorem A.1]{LSZ}, \cite[Theorem 5.12]{La1} to the \emph{nilpotent} semistable Higgs bundle $\Gr_{Fil_{-1}}(E,\theta)$, we obtain a flow of the following form: $$ \xymatrix{ (E,\theta)\ar[dr]^{Gr_{Fil_{-1}}} && (H_0,\nabla_0)\ar[dr]^{Gr_{Fil_0}} && (H_1,\nabla_1)\ar[dr]^{Gr_{Fil_1}} \\ &(E_0,\theta_0) \ar[ur]^{C_{X_{\log}\subset \tilde X_{\log}}^{-1}} & & (E_1,\theta_1) \ar[ur]^{C_{X_{\log}\subset \tilde X_{\log}}^{-1}}&& \cdots, } $$ in which each Higgs term on the bottom row is semistable. Next, by Corollary \ref{main cor}, we may obtain the pullback flow as follows: $$ \xymatrix{ f^*(E,\theta)\ar[dr]^{Gr_{f^*Fil_{-1}}} && f^*(H_0,\nabla_0)\ar[dr]^{Gr_{f^*Fil_0}} && f^*(H_1,\nabla_1)\ar[dr]^{Gr_{f^*Fil_1}} \\ &f^*(E_0,\theta_0) \ar[ur]^{C_{Y_{\log}\subset \tilde Y_{\log}}^{-1}} & & f^*(E_1,\theta_1) \ar[ur]^{C_{Y_{\log}\subset \tilde Y_{\log}}^{-1}}&& \cdots } $$ Now, as the Higgs terms $(E_i,\theta_i)$ in the first flow are semistable of the same rank and of vanishing Chern classes, the set $\{(E_i,\theta_i)\}_{i\geq 0}$ forms a bounded family. So the set $\{f^*(E_i,\theta_i)\}_{i\geq 0}$ also forms a bounded family. In particular, the degrees of subsheaves in $\{f^*E_i\}_{i\geq 0}$ have an upper bound $N$.
Suppose $f^*(E,\theta)$ is unstable, that is, there exists a saturated Higgs subsheaf $(F,\eta)$ of positive degree $d$ in $f^*(E,\theta)$. Then $\Gr_{f^*Fil_{-1}}(F,\eta)\subset f^*(E_0,\theta_0)$ is a Higgs subsheaf of degree $d$. This implies that $\Gr_{f^*Fil_0}\circ C^{-1}_{Y_{\log}\subset \tilde Y_{\log}}(F,\eta)\subset f^*(E_1,\theta_1)$ is of degree $pd$. Iterating this process, one obtains a subsheaf in $f^*(E_i,\theta_i)$ whose degree exceeds $N$, a contradiction. Therefore, $f^*(E,\theta)$ is semistable. Now we turn to the char zero case. By the standard spread-out technique, there is a regular scheme $S$ of finite type over $\Z$, an $S$-morphism $\mathfrak{f}: (\sY,\sB)\to (\sX,\sD)$, and an $S$-relative logarithmic Higgs bundle $(\sE,\Theta)$ over $(\sX,\sD)$, together with a $k$-rational point in $S$ such that $\{\mathfrak f: (\sY,\sB)\to (\sX,\sD), (\sE,\Theta)\}$ pull back to $\{f: (Y,B)\to (X,D), (E,\theta)\}$. In the above, we may assume that $\sX$ (resp. $\sY$) is smooth projective over $S$ and $\sD$ (resp. $\sB$) is an $S$-relative normal crossing divisor in $\sX$ (resp. $\sY$). For a closed point $s\in S$ and a $W_2(k(s))$-lifting $\tilde s\to S$, we obtain a family $\mathfrak{f}_s: (\sY,\sB)_s\to (\sX,\sD)_s$ over $k(s)$ which is $W_2$-liftable. Once we take an $s\in S$ such that $\textrm{char}(k(s))\geq \rk(\sE_s)=\rk(E)$, we are in the previous char $p$ setting. Hence it follows that $\mathfrak{f}_s^*(\sE,\Theta)_s$ is semistable. From this, it follows immediately that $f^*(E,\theta)$ is also semistable. \end{proof}
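The numerical core of the contradiction in the char $p$ part of the argument can be isolated as follows (our restatement of the step above, not an addition to the proof): each application of $\Gr\circ C^{-1}$ multiplies the degree of the transported Higgs subsheaf by $p$, so

```latex
% Starting from a saturated Higgs subsheaf of degree d >= 1 in f^*(E,\theta),
% after i iterations of the flow one finds a subsheaf of f^*E_i of degree
\[
  p^{i}d \;\ge\; 2^{i} \;\longrightarrow\; \infty \qquad (i\to\infty),
\]
% which eventually exceeds the uniform bound N supplied by the boundedness
% of the family \{f^*E_i\}_{i\ge 0}.
```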
\section{Introduction} \textcolor{black}{The number of drone- or UAV-based applications is increasing drastically. Based on the latest TechSci report, the overall revenue from the drone-application market is expected to grow from $69$ billion dollars in $2018$ to $141$ billion dollars in $2023$ \cite{techsci}. The first application of drones was seen in $1849$, when the Austrian army attacked Venice with unmanned balloons filled with explosives \cite{Austrian}. This was the point where the idea of drones and their related applications came into the picture and became a topic of exploration for researchers. During WWII, Reginald Denny invented the first remote-controlled aircraft, called the Radioplane OQ-2. It was the first mass-produced UAV product in the US, and was a breakthrough in manufacturing and supplying drones for the military \cite{firstdrone}. The use of drones in multiple domains has been rapidly increasing in the past few years. } \begin{figure}[!t] \centering \includegraphics[width=90mm]{Fig1.jpg} \caption{\textcolor{black}{Basic Process of Drone Communication \cite{fig1}.}} \label{droneWorking} \end{figure} \textcolor{black}{Drones work on a simple procedure that involves a data link from the ground controller to the drone and a data link from the drone to the satellite. The ground station controller is also linked with the satellite at every point of time. The basic functioning of drone communication is shown pictorially in Fig. \ref{droneWorking}. Communication between the drone and the other components takes place through radio waves. Drones can help in sending data from one point to another with low latency \cite{fig1}. Drones can provide on-the-fly communication facilities in areas where terrestrial infrastructure is poor or has been destroyed; to prevent any further destruction or harm, emergency services are required in disaster-struck areas \cite{drone_ml}.
UAVs can act as a communication bridge between ground users and network nodes. Furthermore, they can also be used in various monitoring or surveillance operations. A 3-D network can also be made to integrate drone base stations (droneBS) and cellular-connected drone users \cite{rev2}. Although these applications are highly promising in providing safety and comfort to all, they can also bring disastrous results if the drone communication links are hacked and misused. Being resource-constrained, drones are highly vulnerable to physical and cyber attacks/threats \cite{cyber}. The storage and battery capacity of drones is limited, and if proper care is not taken, it is easy to hack the chips and the sensors installed inside the drone's circuit to get all the stored information. Therefore, it is highly imperative to focus on the security standards for drone communication as their applications increase \cite{newly1, newly2}. The authors of \cite{rev1} propose a way to reduce the service-time of drones.
Drone path planning can be done for secure positioning and the verification of various components \cite{conti4}.} \begin{figure*}[!t] \centering \includegraphics[width=180mm]{Fig2.jpg} \caption{\textcolor{black}{Structure of our Survey.}} \label{organization} \end{figure*} \begin{table}[!t] \caption{List of Major Acronyms} \centering \resizebox{1\columnwidth}{!}{ \begin{tabular}{|l|l|} \hline \rowcolor[gray]{0.7} \textbf{{Notation}} & \textbf{{Meaning}}\\ \hline ADMM & Alternating Direction Method of Multipliers \\ \hline AODV & Ad hoc On-demand Distance Vector\\ \hline ApaaS & Authentication Proxy as a Service \\ \hline CPS & Cyber-Physical System \\ \hline FANET & Flying Ad-hoc Network \\ \hline FQ & Fair Queuing \\ \hline \textcolor{black}{GNSS} & \textcolor{black}{Global Navigation Satellite System Signals} \\ \hline HAE & Homomorphic Authenticated Encryption\\ \hline ILP & Integer Linear Program \\ \hline IMU & Inertial Measurement Unit \\ \hline \textcolor{black}{LOIC} & \textcolor{black}{Low Orbit Ion Cannon} \\ \hline \textcolor{black}{LiDAR} & \textcolor{black}{Light Detection And Ranging} \\ \hline NBTM & Neural-Blockchain Based Transport Model\\ \hline NIDS & Network Intrusion Detection System \\ \hline PKG & Private Key Generator \\ \hline PUF & Physical Unclonable Function \\ \hline \textcolor{black} {RSA} & \textcolor{black}{Rivest-Shamir-Adleman} \\ \hline TVA+ & Traffic Validation Architecture \\ \hline VIO & Visual Inertial Odometry \\ \hline \end{tabular} } \label{Acronymtable} \end{table} \begin{table*}[!t] \caption{Related Surveys on Drone Security} \centering \resizebox{1\textwidth}{!}{ \begin{tabular}{|l|l|l|} \hline \rowcolor[gray]{0.7} \textbf{Year} & \textbf{Author} & \textbf{Contributions} \\ \hline 2015 & Lav Gupta et al., \cite{rw3} & Discussion on security issues in swarm of drones or Flying Ad-hoc Network (FANET) \\ \hline 2016 & Riham Altawy, Amr M.
Youssef, \cite{civilian} & Survey on security, privacy, and safety aspects of civilian drones \\ \hline 2016 & Samira Hayat et al., \cite{rw4} & Discussion on requirements of UAV networks for upcoming drone applications \\ \hline 2017 & Mohammad Mozaffari, Walid Saad et al., \cite{rw5} & Issues that UAV faces due to wireless networks \\ \hline 2018 & Silvia Sekander, Hina Tabassum et al., \cite{rw6} & Issues due to wireless networks and the architecture of 5G for UAV \\ \hline 2019 & Azade Fotouhi et al., \cite{rw2} & \textcolor{black}{Challenges faced by UAV in cellular communication} \\ \hline 2019 & Saeed H. Alsamhi et al., \cite{rw1} & The challenges faced in collaboration of drones and IoT specifically for smart cities \\ \hline 2019 & Sun Xingming, Yueyan Zhi et al., \cite{tab1} & Survey on security and privacy issues of UAV \\ \hline \textcolor{black}{2021} & \textcolor{black}{This paper} & \textcolor{black}{Survey on existing and upcoming security challenges in drone communication and their solutions}\\ \hline \end{tabular} } \label{table_relatedworks} \end{table*} \textcolor{black}{Due to the increasing use of drones, the issues related to drone security, privacy, reliability, regulation, and ownership are also increasing at the same pace. There are various security-critical applications where drones fail to provide complete security of data, and that results in great loss and life-threatening risk. For example, on November $29$, $2018$, a drone was hacked in Las Vegas and it came into the path of a tour helicopter \cite{2}. Fortunately, the pilot could manage to avoid a crash, but this may not be the case in all such events. A crash might have resulted in the loss of life of many civilians. The incident was investigated by the Federal Aviation Administration (FAA) and some strict rules against drone usage were also brought into action.
Various such threats can be caused by the unrestricted use of drones in different applications without any standard security parameters. In this section, we present various important drone applications that are associated with critical security issues. Table \ref{Acronymtable} shows the list of major acronyms used throughout this survey. } \subsection{Related Surveys and Our Contributions} Although a few recent works focus on surveys of issues related to drone communications, the existing surveys generally consider a specific domain or utility of drones. For example, the authors of \cite{rw1} provide a detailed survey on the challenges faced in the collaboration of drones and IoT specifically for smart cities. Another work presented in \cite{civilian} discusses the security, privacy, and safety aspects specific to civilian drones. Furthermore, a significant number of surveys have been done earlier for discussing the privacy and security issues present specifically in UAVs or communication networks. The authors of \cite{rw2} focus on the use of UAVs for cellular communications. The authors discuss various standardization advancements, practical aspects, regulatory issues, and security challenges related to the use of UAVs in cellular communication. The authors of \cite{rw3} provide a full review of various security challenges faced in UAV communication networks. The authors specifically focus on the issues faced in a swarm of drones or Flying Ad-hoc Network (FANET). A comparative study of issues that differentiate FANET from other ad-hoc networks such as Mobile Ad-hoc Network (MANET) or Vehicular Ad-hoc Network (VANET) is also done in good detail. Furthermore, the authors of \cite{cong1} review the use of Game Theory-based approaches for UAVs. UAV path deviation attacks have been surveyed in \cite{conti2}. The authors of \cite{rw4} provide a review of the characteristics and requirements of UAV networks for upcoming drone applications. 
A generic review of all the network-related requirements, such as safety, scalability, privacy, connectivity, security, and adaptability, is provided. The authors of \cite{rw5} and \cite{rw6} also emphasize the issues related to the use of UAVs in wireless networks. The work in \cite{rw5} provides some key guidelines on analyzing, designing, and optimizing communication systems unique to UAVs. The authors also discuss the need for various security measures required for drones. A complete drone architecture for 5G has been presented in good detail. Moreover, a comprehensive survey discussing the security and privacy issues faced by UAVs is presented in \cite{tab1}. Hence, different from any of the previous works, this work is a comprehensive survey on \textcolor{black}{the most critical} existing and upcoming security challenges in drone communication and the related solutions. This paper aims to help the readers get an overview of the state-of-the-art security challenges in drone communication. The readers will also have a good overview of existing and emerging security solutions for drone communication. Table \ref{table_relatedworks} shows the major survey works done in the direction of drone security in the last few years. The main contributions of this work are as follows: \begin{enumerate} \item[{\bf 1.}] {A complete review of different existing and anticipated attacks in drone communication.} \item[{\bf 2.}] {Detailed and realistic recommendations to improve the drone application architecture for secure communication.} \item[{\bf 3.}] {Extensive analysis of the existing and upcoming solutions that empower the use of drone communication in multiple domains.} \item[{\bf 4.}] {An assessment of the future research areas, existing challenges, and open issues for developing secure drone applications.} \end{enumerate} \subsection{Organization} The rest of the paper is organized as follows.
In Section \ref{sec2}, we discuss various security issues and security-critical applications of UAVs in different domains. Section \ref{sec3} discusses the fundamentals of various emerging technologies for secure drone communication. Four major drone communication security approaches, i.e., Blockchain, Software Defined Networks (SDN), Machine Learning, and Fog/Edge computing, are presented in Sections \ref{sec5}, \ref{sec6}, \ref{sec7}, and \ref{sec8}, respectively. Section \ref{sec9} describes various future research areas, existing challenges, and open issues in drone security. Finally, we conclude the paper in Section \ref{sec10}. The organization of the survey is also shown in Fig. \ref{organization}. \section{\textcolor{black}{Security Issues in Drone Communication and Potential Vulnerabilities}} \label{sec2}Drone communication faces some specific security challenges along with the generic cyber-threats. One of the reasons for these specific issues is that drones are unmanned, and it is difficult to handle or prevent unanticipated issues dynamically and adaptively. Special attention needs to be given to drone security issues, as drones are different from traditional IoT devices (mobile phones, sensor-based alarms, smart trackers, etc.), and we need drones to adapt to several advanced security concepts, such as confidentiality, authentication, access control, and data protection, while being highly resource-constrained devices. Usage of drones needs to take care of vulnerability concerns arising from sensor networks, mobile communication networks, the internet, et cetera. Drones communicating via cellular data use radio signals to communicate with the controller. The controller sends the radio signals through its transmitter and these are received by the drone through its receiver. The radio signals in between can be jammed or tampered with \cite{radio}.
As stated by an IBM researcher, drones can be hijacked easily if they do not have encryption on their onboard chips \cite{10}. Because of the resource constraints of drones, encryption would not be the ideal solution. With the huge amount of data exchanged in drone communications, encryption and decryption using complex algorithms require considerable computational power. The security concerns become more severe when drones use Wi-Fi for communication \cite{norton}. \textcolor{black}{Table \ref{IoT_diff} summarizes how susceptible various wireless communication networks are in comparison with drone communication systems.} In this section, we present various specific security challenges faced by drones. Furthermore, we also discuss the specific security vulnerabilities for each attack in some drone applications. Various ideas and methodologies to overcome these security challenges are discussed in the upcoming sections of this paper. \begin{table*}[] \color{black} \centering \caption{Difference between security vulnerabilities of different wireless networks} \begin{tabular}{|l|l|l|l|l|l|} \hline \rowcolor[gray]{0.8} Security Issue & Ad-hoc Network & Sensor Network & Mesh Network & Vehicular Network & Drone Comm.
\\ \hline DoS & {\cmark} (Low) & {\cmark} (High) & {\cmark} (Low) & {\cmark} (Medium) & {\cmark} (High)\\ \hline Man-in-the-middle & {\cmark} (Medium) & {\cmark} (High) & {\cmark} (Low) & {\cmark} (Low) & {\cmark} (High)\\ \hline GPS spoofing & {\xmark} & {\xmark} & {\xmark} & {\cmark} (Medium) & {\cmark} (High)\\ \hline Radar & {\xmark} & {\xmark} & {\xmark} & {\xmark} & {\cmark} (High)\\ \hline Jamming & {\cmark} (Medium) & {\xmark} & {\cmark} (Low) & {\cmark} (Low) & {\cmark} (High)\\ \hline Wormhole & {\cmark} (High) & {\cmark} (High) & {\cmark} (Low) & {\cmark} (Low) & {\cmark} (High)\\ \hline \end{tabular} \label{IoT_diff} \end{table*} \subsection{Security Issues In Drones} A few of the security threats discussed below are more drone-specific (GPS spoofing, radars, jamming, and wormhole attacks), whereas the relatively generic issues are discussed based on how adversaries can exploit them to threaten the use of drones. \subsubsection{Denial of Service Attacks} The Denial of Service (DoS) attack is the most common and easiest type of attack that an adversary can use to stop the drone from functioning normally. It is the simplest way of entering the drone network and making it useless or sometimes even harmful \cite{newly14}. Fig. \ref{dos} shows the basic working of the DoS attack in the case of drone communication. Due to a large number of superfluous requests, legitimate users' access to shared resources is restricted. This overloads the system and might result in some or all legitimate requests going unfulfilled. In this process, the network connection between the ground controller and the drone is de-authenticated, as the adversary sends numerous data packets to the drone, which exhausts the drone's computational capacity \cite{DoS}. Data packets can be easily created by any packet-generator application and sent directly to the drone's network.
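A simple defensive countermeasure suggested by this flooding behavior is to monitor the per-source packet rate on the drone's network interface and flag sources that exceed a threshold. The sketch below is purely illustrative; the class name, window size, and threshold are our own choices and are not part of any cited drone platform:

```python
from collections import deque


class FloodDetector:
    """Naive sliding-window packet-rate detector (illustrative only).

    Flags a source as a suspected flood when it sends more than
    `max_packets` packets within any `window`-second interval.
    """

    def __init__(self, window=1.0, max_packets=100):
        self.window = window
        self.max_packets = max_packets
        self.history = {}  # source address -> deque of packet timestamps

    def observe(self, src, timestamp):
        q = self.history.setdefault(src, deque())
        q.append(timestamp)
        # Drop timestamps that have fallen out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_packets  # True => suspected DoS flood


detector = FloodDetector(window=1.0, max_packets=5)
# A legitimate controller sending one packet per 0.5 s is never flagged,
assert not any(detector.observe("controller", 0.5 * i) for i in range(10))
# while a burst of 20 packets within 0.1 s from an attacker is.
assert any(detector.observe("attacker", 0.005 * i) for i in range(20))
```

In practice, the threshold would be calibrated against the control link's normal command rate, and a flagged source would trigger fail-safe behavior (e.g., return-to-home) rather than a hard shutdown.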
ICMP (Internet Control Message Protocol) packets are sent at a very high rate, which overflows the drone's network and results in the loss of control over the drone by both the drone itself and the ground controller. It is also possible that malicious code is present in one of the sent data packets, which can be used to attack the drone. Such attacks can be used by the hijacker to crash the drone, causing harm to civilians and government agencies. \begin{figure}[!t] \centering \includegraphics[width=90mm]{Fig3.pdf} \caption{Denial of Service Attack on Wi-Fi Enabled Drone.} \label{dos} \end{figure} The authors of \cite{doscase} have demonstrated the effects of DoS attacks on two types of drones, namely the Augmented Reality aerial drone (AR Drone) $2.0$, a cheap quadcopter, and the $3$DR Solo, a costlier quadcopter. The authors experimentally evaluate three DoS attack tools, Netwox, Low Orbit Ion Cannon (LOIC), and Hping3, to analyze the drone's behavior. Both types of drones are widely available in the markets. Both drones were tested for the quality of the transmitted images and for how DoS attacks affect it. The study found that both the costlier and the cheap drone show a significant drop in frame rates, demonstrating clearly that even premium drone manufacturers are not paying enough attention to drone security. The increase in network latency shows the ease of DoS attacks even on such high-end drones. \textit{De-Authentication Attacks:} This is a type of attack that can make the use of drones difficult in various applications. It is a specific type of Denial of Service attack in which communication is disrupted between the client and the Wi-Fi access point. In this attack, control of the drone is lost by the pilot as the attacker de-authenticates the ground pilot.
An attacker can send a de-authentication frame to a wireless access point at any point in time, as encryption is not needed to send the frame, regardless of the privacy technique employed \cite{dos_encrypt}. The attacker only needs the MAC address of the drone, which is made available through tools like `Aircrack-ng' \cite{11}. In the de-authentication attack, the drone is hijacked by using this tool, which specifies the MAC address of the drone. As soon as the Aircrack-ng tool is activated, the connection between the drone and the ground controller is de-authenticated. The attacker can then use this tool to communicate with the drone and direct it maliciously. This attack makes the drone go out of control and can lead to heavy losses. De-authentication attacks have become one of the newest concerns in the industry as e-commerce giants, such as Amazon, look towards drone-based product delivery mechanisms. One of the most famous methods for carrying out this attack is SkyJack \cite{decase}, which uses an AR Drone $2.0$ along with a Raspberry Pi and wireless adapters to hack and control drones. It sends de-authentication requests through Aircrack-ng to disconnect the target drone from its user, and then uses the node-ar-drone library to communicate with the target drone. \begin{figure}[!b] \centering \includegraphics[width=90mm]{Fig4.jpg} \caption{Man-In-The-Middle Attack On Drone \cite{fig4}.} \label{manattack} \end{figure} \subsubsection{Man-In-The-Middle Attack} A man-in-the-middle attack places an adversary between the client and the drone. The adversary uses a device known as a Wi-Fi Pineapple \cite{11}. Fig. \ref{manattack} represents the implementation of the man-in-the-middle attack. In this attack, the flight planning software broadcasts the plan to the drone controller, which forwards it to the drone.
On successfully receiving the commands from the controller, the drone sends an acknowledgement, which is intercepted by the attacker sitting between the drone and the controller. The attacker uses the Wi-Fi Pineapple to send forged commands to the controller. Once the Pineapple is set up, it runs in recon mode, which traces out all the possible access points that the client software may be using. Once the access point (drone) is traced, it is added to the Pine-AP SSID (Service Set Identifier) pool. This command is forwarded to the drone, and the actions intended by the adversary are imperceptibly implemented by the drone. One example of a man-in-the-middle attack is active eavesdropping \cite{eavs}, in which the adversary connects to the drone controller. After gaining access to the drone through its SSID, the hacker sends fake commands to the controller, making them believe that they are communicating with the drone itself \cite{newly13}. Fig. \ref{manattack} depicts the man-in-the-middle attack. The authors of \cite{fig4} explored various vulnerabilities of UAVs; if a weak encryption scheme is used, the password becomes easy to crack, and the man-in-the-middle attack can be performed over the Wi-Fi link. A lack of secure encryption schemes throughout the chain of communication can enable such attacks. The authors of \cite{link2}, from IBM, have demonstrated the ease of stealing a police quadcopter worth a thousand dollars by performing a man-in-the-middle attack. The researchers revealed that hardware worth only $40$ dollars is sufficient to perform such an attack. This is a very clear example of an attack in which the controller is not even aware of the middle-layer hacker. \subsubsection{GPS Spoofing} For communication, drones need incoming signals from GPS satellites, a two-way link between the drone and the ground station, and signals notifying the drone's presence \cite{fig5}. Fig.
\ref{spoofing} shows the basic working of the GPS spoofing attack in drone communication. Spoofing can be done using multiple transmitting antennas \cite{spoof}, in which the attacker's transmitting antenna combines with the corresponding receiving antenna and transmits false signals. In the normal process of obtaining the GPS coordinates of the drone, the drone is located by the satellite using GPS and its coordinates are then sent to the ground controller. Drones that do not have any encryption on their onboard chip are easily tracked by the hacker, who can share a wrong location with the drone controller using a directional antenna with narrow beam-width aimed at the drone. GPS spoofing is mainly carried out on military drones, as they are deployed at critical places that can provide highly confidential information about other nations. It is relatively difficult to spoof military drones as they are highly equipped with encryption mechanisms. The spoofer can take the drone on any trajectory he/she wants without even giving the controller a hint, as fake coordinates are sent to the controller at regular intervals. This technique can also be used to reduce the velocity of the drone, making it less useful. \begin{figure}[!t] \centering \includegraphics[width=90mm]{Fig5.jpg} \caption{GPS spoofing attack on GPS Enabled Drone \cite{fig5}.} \label{spoofing} \end{figure} According to \cite{26}, on December $5$, $2011$, an American UAV was detected and brought down by Iranian forces near the city of Kashmar in northeastern Iran. According to American officials, the UAV was spoofed and forced to fly over Iran. The attackers hacked the UAV and injected it with wrong GPS coordinates. The incident resulted in a disturbance in the relationship between the two nations.
The military drone was said to be using an inertial navigation system, and not GPS navigation, because of the increasing number of spoofing and jamming attacks. Despite the measures taken to prevent any spoofing attack and to protect the classified information available on the drone, the Iranians claimed that they could access it, and they reverse-engineered the entire drone. The authors of \cite{spoof} used Software Defined Radio (SDR) platforms to simulate GPS and transmit false signals to a target drone. This methodology has long been used to hack drones or relay wrong information through them. Using this approach, they divert and take control of drones that depend on GPS for their flight paths. For generating fake GPS signals, the BladeRF x$40$ SDR is used, which is very versatile and costs around $420$ dollars. \subsubsection{Radar} Mono-static radar is the most traditional way of searching for important entities, and it can likewise be used to find drones. The radar sends electromagnetic signals that can travel a long distance. These signals travel in all directions, and wherever a drone is present, the signals reflect from its surface and are received back at the radar. By further studying the reflected signals, one can easily measure the velocity, direction, and altitude of the drone. A problem with this technique is that it sometimes mistakes obstacles like birds, airplanes, or kites for drones and transmits wrong information to the radar station, which in turn produces a loss. Radars operating in the millimeter-wave range of the electromagnetic spectrum can be used for surveillance of small drones, even under adverse weather conditions, with high accuracy and distance-independent resolution \cite{radar_em}.
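The range and velocity measurements mentioned above follow from two standard mono-static radar relations, sketched below with made-up numbers for illustration:

```python
# Back-of-the-envelope sketch of how a mono-static radar infers a drone's
# range (from echo delay) and radial velocity (from Doppler shift).
# Standard radar relations; the example numbers are invented.

C = 3.0e8  # speed of light, m/s

def range_from_delay(round_trip_delay_s):
    # The pulse travels to the target and back: R = c * t / 2
    return C * round_trip_delay_s / 2.0

def radial_velocity_from_doppler(doppler_shift_hz, carrier_hz):
    # For a mono-static radar, f_d = 2 * v / lambda, so v = f_d * c / (2 * f_c)
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# An echo arriving 2 microseconds after transmission puts the drone 300 m away:
print(range_from_delay(2e-6))                   # 300.0
# A 1 kHz Doppler shift at a 24 GHz carrier means ~6.25 m/s closing speed:
print(radial_velocity_from_doppler(1e3, 24e9))  # 6.25
```

The same relations underlie the altitude and direction estimates: with a scanning antenna, the beam angle at which the echo is strongest gives the bearing, and range plus elevation angle give the altitude.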
Moreover, to overcome these issues, hackers have tried to use various machine learning techniques, including SVM classifiers and binary decision trees, to distinguish real drones from other objects \cite{radar}. In this technique, the detector is trained on large datasets to distinguish between an obstacle and a drone. The authors of \cite{fraun} have discussed in detail the ways in which radars can be used to detect and identify drones. Various sensors, such as optic or infrared sensors, are also used to detect or identify drones. However, these sensors have various limitations in terms of range, and in their reliability at night and in rain or fog. The authors declare the use of radar to be superior to visual optic sensors and infrared sensors because of its range. Applications of drones such as package delivery and military operations make the drones very vulnerable to attacks that use radars. In all such areas, detection and identification of the drones can be a threat to the drone itself, and might also result in the loss of other task-associated resources. \subsubsection{Jammers} Jammers are electronic devices used by adversaries to block signals at the receiver's end. They are mainly used for the disruption of communication between several users. They work on a simple principle: a transmitter is tuned to the same frequency as that of the target. If the jammer has enough power, it overrides the target's signals, thereby blocking every type of signal to which the target can be configured. The jamming attack is analogous to the DoS attack; the only difference is that the DoS attack affects the network, service, and application layers, whereas the jamming attack uses radio signals to attack the drones, mainly affecting the physical layer. Wi-Fi and Bluetooth signals can be easily jammed, even using a low-power jammer \cite{12}. The ability of a jammer is judged by its range.
Jammers with a higher range can block the signals of devices present up to that range. Fig. \ref{jamming} shows the implementation of the jamming attack. The attacker sends the jamming signal to the serving base station from his end, with the help of a UAV, matching the frequency of the signal with that of the deployed drone. Thereafter, the signals between the drone and the backup serving base station are blocked. Hence, no data or commands are allowed to reach the server, and the deployed drone becomes non-responsive thereafter. Many drones have an auto-pilot mode as a fail-safe, which gets activated once the drone loses contact with the control station. The auto-pilot mode makes it easy for the attacker to launch a GPS spoofing attack and force a landing away from the original destination by spoofing the GPS signals \cite{jam_attack}. A technology introduced in Australia a few years back allows a new pilot to commandeer a drone mid-flight and land it in a defined exclusion zone \cite{jam_Aus}. \begin{figure}[!t] \centering \includegraphics[width=90mm]{Fig6.jpg} \caption{Jamming Attacks On Drone \cite{fig6}.} \label{jamming} \end{figure} The authors of \cite{jammerinc} mention an incident in which GPS jamming was used to bring down $46$ drones during a show in Hong Kong. The drones started falling with great velocity. According to the board's executive director, these professional drones were equipped with fail-safe technologies to direct them back to their take-off location, but because the jamming signals were so strong, the drones started dropping in mid-air. The hacker had to simply point the jamming device towards the drones, and as soon as the signal interruption was detected by the drones, they started falling, as confirmed by Rex Ngan, founder of the Hong Kong Professional Unmanned Aerial Vehicles Association.
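The overriding condition described above can be made concrete with a simple free-space link-budget sketch (our own illustrative numbers, not from the cited works): the receiver is effectively captured once the jammer-to-signal (J/S) power ratio at the receiver exceeds 0 dB.

```python
# Free-space sketch of when a jammer overrides the legitimate link.
# Friis path loss with unity antenna gains; all powers, distances, and
# the 2.4 GHz carrier are hypothetical example values.

import math

def received_power_dbm(tx_power_dbm, freq_hz, distance_m):
    # FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
    fspl_db = (20 * math.log10(distance_m)
               + 20 * math.log10(freq_hz)
               + 20 * math.log10(4 * math.pi / 3.0e8))
    return tx_power_dbm - fspl_db

def js_ratio_db(jammer_dbm, jammer_dist_m, signal_dbm, signal_dist_m, freq_hz):
    j = received_power_dbm(jammer_dbm, freq_hz, jammer_dist_m)
    s = received_power_dbm(signal_dbm, freq_hz, signal_dist_m)
    return j - s  # positive => jammer dominates at the receiver

# A 30 dBm controller 100 m away vs. a 20 dBm jammer only 10 m away:
# the weaker jammer still wins by ~10 dB, because path loss grows with
# 20*log10(distance).
print(js_ratio_db(jammer_dbm=20, jammer_dist_m=10,
                  signal_dbm=30, signal_dist_m=100, freq_hz=2.4e9))
```

This is why even low-power jammers are effective when flown close to the target, as in the UAV-assisted attack described above.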
\subsubsection{Wormhole Attack} UAV networks that utilize FANETs or MANETs are susceptible to routing-based attacks like the wormhole attack. The communication between UAVs relies not only on the information exchange between the UAVs and the ground control station, but also on that amongst the UAVs themselves. FANETs use a system of auto-configuration and self-healing to improve the reliability of the network, but this makes them vulnerable to wormhole attacks \cite{fanet_wormhole}. A wormhole attack is one of the most severe and grave attacks in MANETs. In a wormhole attack, two attackers place themselves strategically in the network in order to listen in on the communication within the drone network. One attacker records the communication at one point in the network and tunnels the information to the second attacker, where the recorded information is replayed. As the routing protocol looks for the nearest node to transfer the information, wormholes are placed so as to make distant nodes believe that they are each other's closest neighbors. This kind of re-routing compromises the confidentiality of sensitive information, and also enables the attacker to launch an attack from any point in the network, because the attacker practically controls all the routes discovered through the wormhole \cite{wormhole}. Moreover, wormhole attacks in a UAV Ad Hoc Network (UAANET), made of a swarm of UAVs and a ground control station, are a high-level risk, and special attention needs to be paid to this multi-node attack. Even without the knowledge of any of the cryptographic keys or hash functions, the attacker is able to affect the integrity of the network by transferring control packets, and can further capture the data traffic \cite{uaanet_wormhole}. \subsection{Potential Vulnerabilities In Different Drone Applications} This section discusses the potential vulnerabilities in different drone applications.
Although drone applications are vulnerable to every security issue mentioned above, we highlight the main security issue faced by each specific application. \subsubsection{\textit{Security Vulnerabilities in Mining Drones}} Drones can help a lot in surface and underground mining \cite{mining}. Mining is a very tedious task that involves risk to the lives of the miners, so drones can be employed to decrease the workload and the risk to human life. Drones equipped with infrared night-vision cameras can help in finding ores easily. They can also be equipped with a metal detector to directly detect ore even without a camera. Such applications may help in reducing the mining cost and can increase the overall efficiency. There are various reasons and motivations for which adversaries may be attracted towards hacking such drones, for instance by launching a DoS attack. Competitors may send unwanted requests to the mining drones, thereby preventing them from accurately identifying the ores and other valuables in the mines. \subsubsection{\textit{Security Vulnerabilities in Disaster Management Drones}} Drones can inform the respective emergency teams about a disaster situation in time and can help in preventing the loss of life and property. They can also help in delivering essential items to disaster victims. Drones can also be installed in advance in disaster-prone areas, to keep an eye on upcoming disasters. The disasters may either be natural, like earthquakes and floods, or man-made, like riots and terrorist attacks. Although there are multiple applications of drones in disaster anticipation, identification, prevention, recovery, and damage control, it is very important to safely deploy and use drones in such applications. These drones are highly vulnerable to de-authentication attacks. Both false positive and false negative messages from drones can result in major problems.
Some hackers may try to de-authenticate the drones deployed for disaster anticipation and may send false positive messages to the emergency teams using their own malicious drones. This ends up wasting the time and money of government bodies. False negatives may prevent the timely delivery of disaster-related messages to the emergency teams and thereby lead to loss of life and property. \subsubsection{\textit{Security Vulnerabilities in Agriculture Drones}} Various countries, like Sierra Leone, Somalia, and India, are dependent on agriculture for their living \cite{agri}. Farmers need techniques that make farming easier and increase productivity. Drones can be used for pollinating seeds, which is a very important task for farmers growing their crops. The drones can carry pollinating seeds and can hold a dataset of the field where they have to sprinkle the seeds in the required quantity. This can decrease the workload of farmers, and the mechanized sprinkling of seeds can also save seeds, as drones can be programmed to sprinkle only the required quantity of seeds at appropriate locations. Drones can also help the farmers by spraying medicines in the correct quantity to kill weeds and insects in the farms. In all the above processes carried out by the drone, the data should be accurate; if the drone gets hacked, the hacker can easily change the quantity of seeds or insecticides to be sprayed. Agricultural drones can be easily hijacked using the man-in-the-middle attack, as the adversary can place himself between the drone and the ground controller and manipulate the data already fed into the drone. Any unwanted change in the data can destroy the plants, resulting in a great loss to the farmers as well as to the nation.
\subsubsection{\textit{Security Vulnerabilities in Military Drones}} The initial drones were all very noisy, and therefore it was difficult to use them for most hidden military operations. Various new drones have now been invented that make very little sound, which makes them difficult to detect. The invention of such silent drones has increased their usability for various military purposes, as they can fly to very remote locations secretly. The cameras installed on the drones can be used to spot the enemy's location while carrying out any type of strike against the enemy. Although these drones have multiple advantages in military operations, if the drones are hacked or the communication link is spoofed, the results can be disastrous. The enemies can even hack and reprogram the drones to act against the army itself. Information leaked by spoofing the communication link can also end up revealing military plans to the enemies. Therefore, it is very important to meet all security standards before deploying drones for such critical applications. The most famous incident of this kind is the Iran-U.S. RQ-170 incident \cite{26}, in which Iran's military used GPS spoofing to land a US UAV almost undamaged, and reverse-engineered the design of the stealth drone to make their own Sa'egheh drones. This turned into an international incident. \subsubsection{\textit{Security Vulnerabilities in Delivery Drones}} Due to the increasing pace of e-commerce, a lot of manpower is required, especially for the last-mile delivery of products. The use of drones can be a promising solution, as drones can deliver products in less time and with high accuracy. Drones can be used to deliver food, medicine, newspapers, and other items of daily need. The FAA approved the first NASA drone to deliver medicines in July $2015$. The UAV successfully delivered medicines to a health clinic in rural southwest Virginia \cite{7}.
In $2016$, Amazon made its first successful drone delivery, delivering a package within $13$ minutes of it being ordered by a customer in Cambridge \cite{8}. Amazon also launched a delivery drone named Amazon Prime Air, which can fly up to a range of $10$ miles for product delivery. These drones can take off and land autonomously, guided by GPS. Although drones can help a lot in timely and accurate delivery, if these drones are hacked, it can end up in big chaos. The hacker may use radar to identify and capture the basic drones used for delivery and may guide the drones to deliver packages to different destinations or to himself. Therefore, adherence to security standards is important even for delivery drones. Since a huge number of people use e-commerce, any misstep could endanger the privacy of billions of people in the future. \subsubsection{\textit{Security Vulnerabilities in Drones for Urban Planning}} Urbanization refers to the heavy movement of people from rural areas, like villages, to the cities. Drones can be highly helpful for architects in taking major decisions regarding renovations and new constructions. Drones can also help in making and analyzing plans for water management in cities. A drone can be deployed with a GIS (Geographic Information System), by which it can easily capture, analyze, and manipulate geographical data for water supply management. It is important to have well-defined security measures for such drones as well. Various rules have to be followed to approve a safe construction. There have been various cases of buildings collapsing due to illegal construction, resulting in a loss of life. If the architects rely on the results submitted by drones, and the values calculated and submitted by the drones are not secured, then illegitimate people might try to hack and manipulate the drone's functioning to get their illegal constructions approved.
To conceal their identity, people with ill intent could crash such drones, which could lead to a loss of resources. Other owners of illegal or unauthorized construction sites may deploy jammers to prevent such drones from identifying illegal constructions. \subsection{\textcolor{black}{Classification of Drone Communication Systems}} \subsubsection{\textcolor{black}{Drone-to-Drone}} \textcolor{black}{Even though drone-to-drone (D2D) communication has not been standardized yet, it can be seen as a peer-to-peer (P2P) network \cite{d2d_security}. This makes D2D communication susceptible to various P2P attacks (DoS/DDoS, jamming attacks, etc.).} \subsubsection{\textcolor{black}{Drone-to-Infrastructure}} \textcolor{black}{Drone-to-infrastructure communication can be further classified into categories such as: \begin{enumerate} \item Drone-to-Satellite: This infrastructure is used for the drone to coordinate with GPS. Although it is expensive to set up and maintain, such communication systems are considered safe and secure. \item Drone-to-Network: This type of communication is useful for cellular networks (4G, 5G, etc.), and it is very important to ensure its security when used. \item Drone-to-Ground Station: This infrastructure is based on common wireless technologies like Bluetooth and Wi-Fi. These are public, and hence not secure, making them very susceptible to man-in-the-middle attacks and eavesdropping. \end{enumerate}} \par \textbf{ Summary:} This section discusses the security issues faced by existing drone applications. Attacks that tamper with the data in drones, such as DoS attacks, de-authentication attacks, and man-in-the-middle attacks, are covered in this section. Several other attacks that target the position of the drones, such as GPS spoofing, jamming attacks, and radar-based detection, are also discussed.
In the next section, we discuss the overview and fundamentals of various emerging technologies, such as blockchain, SDN, ML, and fog computing, that can help in preventing the above-mentioned attacks on drone applications. \textcolor{black}{Furthermore, we also gave a brief classification of drone communication systems.} \section{Overview and Fundamentals of Various Emerging Technologies for Secure Drone Communication} \label{sec3}In this section, we discuss the four main emerging technologies that are being widely used and explored for making drone communication fast, reliable, and secure. Specifically, we discuss the use of blockchain, ML, SDN, and fog computing for secure drone communication. \begin{figure*}[!t] \centering \includegraphics[width=180mm]{Fig7.jpg} \caption{Working process of blockchain.} \label{bcworking} \end{figure*} \subsection{Drone Communication Architecture using Blockchain} According to the FAA, $1.3$ million drones were registered with the FAA in $2019$, and the number was expected to increase to $7$ million by the end of $2020$ \cite{dronenumber}. With the rapid increase in the number of drones, the data generated by them is also increasing rapidly, which has raised security concerns about the data. Researchers state that blockchain can contribute another layer of security to drone communication, which would prevent data retrieval and tampering by unauthorized persons \cite{cov}. Furthermore, the data on the blockchain is distributed, such that it becomes very difficult for an adversary to hack a single system to get control over the complete data in the network. Figure \ref{bcworking} shows the basic working process of the blockchain technology. \subsubsection*{\textit{Motivation For Using Blockchain For Drone Communication Security}} A blockchain is a growing chain of blocks linked to each other using cryptographic hash functions.
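This hash-linking can be illustrated with a minimal sketch (data structure only; the consensus and networking layers that a real deployment needs are omitted, and the flight-log entries are invented examples):

```python
# Minimal sketch of why a hash-linked chain of blocks is tamper-evident:
# every block stores the hash of its predecessor, so rewriting any old
# record breaks all later links.

import hashlib
import json

def block_hash(block):
    # Hash a canonical serialization of the block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def is_valid(chain):
    # Every block must reference the hash of its predecessor.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, "drone-42: waypoint A reached")
append_block(chain, "drone-42: payload delivered")
append_block(chain, "drone-42: returned to base")
print(is_valid(chain))   # True

# An adversary rewriting an old record breaks every later link:
chain[1]["data"] = "drone-42: payload NOT delivered"
print(is_valid(chain))   # False
```

In a real blockchain, the distributed copies and the consensus protocol (PoW, PoS, etc.) are what prevent the attacker from simply recomputing all the downstream hashes on their own copy.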
Drone applications are becoming highly popular and are gradually being used in almost all domains and spheres of life. With the increasing number of drone applications, it is imperative to keep the transactions between the drones and other users secure, cost-effective, and privacy-preserving. Blockchain technology is a highly promising solution that can be used to deploy real-time drone applications. Once a transaction is recorded on the blockchain, it remains immutable, and no adversary can tamper with the records \cite{blocksolauth}. Furthermore, the use of smart contracts can help a lot in performing different transactions between different parties in a secure and cost-effective manner. Depending on the nature of the application, different kinds of blockchain networks can be created, such as public, private, consortium, and hybrid. Moreover, there is a vast variety of consensus algorithms used in blockchain networks, such as PoW, PoS, PoB, and DAG-based protocols. All these features of blockchain can be leveraged to make drone communication secure, reliable, and cost-effective. In Section \ref{sec5}, we discuss in detail the various blockchain-based models used to secure drone communication. \begin{figure}[!b] \centering \includegraphics[width=90mm]{Fig8.jpg} \caption{Basic SDN Architecture \cite{fig11}.} \label{sdn_arch} \end{figure} \subsection{Drone Communication Architecture Using SDN} Software-Defined Networking (SDN) is a networking architecture that can be used to control or program the network centrally using software applications. SDN helps in the consistent management of the network, as everything in the network is centrally programmed. The architecture of a typical SDN-based drone communication network is shown in Fig. \ref{sdn_arch}. The figure illustrates a simple use case of SDN in drone applications.
It shows the steps involved in transmitting data from the drones in the data plane/layer to the control plane for processing and then getting back the required output. In a typical SDN-based drone communication network, each drone in the network behaves as an individual switch. The application plane of the SDN sits on a centralized controller and is responsible for the implementation of any and all high-level functions to be performed by the network as a whole. The centralized controller also houses the control plane, which commands and controls the data flows between the drones. The data plane consists of the drones themselves, which respond to commands from the controller. There exist a variety of protocols and standards for performing various functions in the network. Because the SDN architecture decouples the control plane from the data plane, protocols in different planes can be implemented independently. This offers a considerable degree of freedom in the design of an SDN-based drone communication network. The authors of \cite{rev3} review various 5G techniques based on UAV platforms that use network-layer techniques like software-defined UAV networks. \subsubsection*{\textit{Motivation For Using SDN For Drone Communication Security}} As discussed above, SDN enables the network to be centrally controlled, which makes the network reliable. Moreover, SDN's decoupled data, control, and application layers make the network control directly programmable. Many emerging drone applications make use of real-time video streaming, which can be achieved using SDN, as it can provide better QoS because the traffic is automatically controlled by the SDN controller. Drones are highly resource-constrained, so many vulnerabilities can be prevented by the use of SDN, as the controller can keep a close check on the data traffic. These properties of SDN help in maintaining the overall security of the network.
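The control/data-plane split described above can be sketched as follows. The class and rule names are hypothetical, and a real network would speak a southbound protocol such as OpenFlow rather than direct method calls:

```python
# Toy sketch of SDN's control/data-plane split: a centralized controller
# installs flow rules, and each drone (acting as a switch) forwards only
# traffic that matches an installed rule. Illustrative names only.

class DroneSwitch:
    """Data plane: a drone acting as an individual switch."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # (src, dst) -> action

    def handle(self, src, dst):
        # Apply a matching rule, or punt the packet to the controller.
        return self.flow_table.get((src, dst), "send_to_controller")

class Controller:
    """Control plane: the single place where forwarding policy lives."""
    def __init__(self, drones):
        self.drones = drones

    def install_rule(self, drone_name, src, dst, action):
        self.drones[drone_name].flow_table[(src, dst)] = action

drones = {"uav1": DroneSwitch("uav1"), "uav2": DroneSwitch("uav2")}
ctrl = Controller(drones)

# Unknown traffic is punted to the controller, which can decide centrally
# whether to allow the flow or drop it as suspicious.
print(drones["uav1"].handle("ground", "uav2"))   # send_to_controller
ctrl.install_rule("uav1", "ground", "uav2", "forward_port_1")
print(drones["uav1"].handle("ground", "uav2"))   # forward_port_1
```

The security benefit mentioned in the text comes from this punt-to-controller default: any flow the controller has not explicitly authorized is visible to, and can be blocked at, one central point.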
Section \ref{sec6} discusses the SDN-based DCN models that can help in making drone communication secure from different types of attacks. \subsection{Drone Communication Architecture Using Machine-Learning} \begin{figure}[!t] \centering \includegraphics[width=90mm]{Fig9.jpg} \caption{Basic ML Techniques Architecture \cite{mlfigure}.} \label{mlfigure} \end{figure} Machine learning (ML) is a technique that provides a system with the ability to learn automatically and improve using past experience without being explicitly programmed. Once the data is fed in, an ML model learns and predicts the output automatically without much human intervention. ML algorithms need a large amount of training data to make more accurate predictions. ML algorithms are broadly divided into two categories: supervised learning (the training dataset is labeled) and unsupervised learning (the training data is not labeled). Figure \ref{mlfigure} shows the basic architecture of ML-based drone communication applications. The figure illustrates the various ways in which ML techniques can assist in making drone communication secure. Several ML algorithms, like CNNs (Convolutional Neural Networks), SVMs (Support-Vector Machines), ANNs (Artificial Neural Networks), and RNNs (Recurrent Neural Networks), can be used for making drone communication secure. ML algorithms such as LSTM (Long Short-Term Memory) can also be used for detecting faults in drone communication, with the recovery methods then sent to the drone for its safety. A classification algorithm can be applied to help detect DoS attacks and other attacks that make use of fake or tampered data packets to paralyze the network. The data packets can be classified as either benign or affected packets, which can prevent the network from getting hacked. These diverse applications of ML can help achieve highly secure drone communication.
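As a toy illustration of the benign-versus-affected packet classification mentioned above, the following sketch uses a nearest-centroid rule on two synthetic features (packet rate and mean packet size). The training values are invented; a real system would train one of the models named in the text (SVM, CNN, etc.) on captured traffic:

```python
# Minimal nearest-centroid classifier separating normal telemetry traffic
# from DoS-style packet floods. Features and all numbers are synthetic.

def centroid(samples):
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(len(samples[0])))

def classify(sample, centroids):
    # Assign to the class whose centroid is nearest (squared Euclidean).
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sqdist(sample, centroids[label]))

# (packets/sec, mean packet size in bytes) -- invented training data:
benign = [(50, 512), (60, 480), (55, 530)]      # normal telemetry
attack = [(900, 64), (1200, 60), (1000, 70)]    # DoS-style floods
centroids = {"benign": centroid(benign), "attack": centroid(attack)}

print(classify((58, 500), centroids))    # benign
print(classify((1100, 62), centroids))   # attack
```

Once a packet stream is flagged as an attack, the flow can be dropped or rate-limited before it paralyzes the network, which is the prevention step the text describes.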
\subsubsection*{\textit{Motivation For Using ML For Drone Communication Security}} ML algorithms learn from the training data and improve themselves to achieve better results and higher accuracy without any human intervention, which is a huge advantage. ML algorithms can be deployed for detecting the presence of malicious drones in the network and can help in preventing attacks such as the man-in-the-middle attack \cite{mitmml} and spoofing attacks \cite{mlspoof}. Such algorithms keep improving with increasing experience and provide better and more accurate results. The models can also be trained to automatically detect and recover from faults using neural networks and LSTM. Moreover, ML algorithms can handle multi-dimensional and diverse data. All these properties make ML algorithms highly suitable for use in drone applications. In Section \ref{sec7}, we discuss in detail the use of various ML techniques to secure drone communication. \subsection{Drone Communication Architecture Using Fog Computing} The concept of fog computing was first introduced by Cisco in $2014$ \cite{ff}. Fog computing is not a substitute for cloud computing; rather, it extends the use and capabilities of the cloud and complements it. The fog layer is a stratum between the edge devices and the cloud. Deploying cloud servers is costly and difficult, so fog computing was introduced to minimize the load on the cloud. The fog is a smaller version of the cloud that can be placed near the end devices. Fig. \ref{foglayer} shows the \textcolor{black}{layered architecture of cloud, edge, and fog computing combined}. In fog computing, whenever an end device requests to fetch or upload some data, the mobile network connects it to the nearest available fog node.
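The nearest-node selection step just described can be sketched as follows; all node names and latency values are made-up placeholders, with the distant cloud as the fallback when no fog node can serve the request:

```python
# Illustrative sketch of fog-node selection: attach the end device to the
# nearest *available* fog node, and fall back to the cloud over the WAN
# only when no fog node is usable. All values are hypothetical.

CLOUD = {"name": "cloud", "latency_ms": 120}  # reached over the WAN

fog_nodes = [
    {"name": "fog-gateway-1", "latency_ms": 5,  "available": True},
    {"name": "fog-router-2",  "latency_ms": 3,  "available": False},
    {"name": "fog-pc-3",      "latency_ms": 11, "available": True},
]

def pick_node(nodes):
    usable = [n for n in nodes if n["available"]]
    if not usable:
        return CLOUD  # no fog node can serve the request
    return min(usable, key=lambda n: n["latency_ms"])

best = pick_node(fog_nodes)
print(best["name"], best["latency_ms"], "ms")   # fog-gateway-1 5 ms
print(pick_node([])["name"])                    # cloud
```

The order-of-magnitude gap between the LAN-scale fog latencies and the WAN-scale cloud latency in this toy example is exactly the time and cost advantage the text attributes to fog computing.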
The data can then be easily fetched from and stored in the fog. The fog uses the LAN (local area network), whereas accessing cloud facilities requires reaching the internet through the WAN (wide area network), which takes more time and incurs more cost. Thus, fog computing is very helpful in many aspects, such as cost, time, and security. A fog domain is formed by combining multiple fog nodes, which can be switches, gateways, routers, smartphones, or PCs. \begin{figure}[!t] \centering \includegraphics[scale=0.9]{Fig10.JPG} \caption{\textcolor{black}{Layered Architecture of Cloud, Edge, and Fog Computing \cite{fig12}.}} \label{foglayer} \end{figure} \subsubsection*{\textit{Motivation For Using Fog Computing For Drone Communication Security}} Fog computing is a paradigm that can help in processing and accessing large amounts of data rapidly, efficiently, and with the least possible latency. It forms a layer between the end devices and the cloud servers. Fog computing helps in increasing the QoS and QoE, as the retrieval time of the data is much lower in the fog. Fog computing also reduces the data load on the cloud servers and makes data dissemination cost-effective as well as reliable. The fog is a decentralized paradigm in which the data is stored at multiple fog layers. This is advantageous compared to storing all data in one place, because no central entity handles the entire data in the fog, making it less vulnerable. This also prevents the cloud server from being affected, as a vulnerability can be detected at earlier stages. These aspects make fog computing a very important technology for making drone communication secure. The fog-based DCN methods and models that help make drone communication secure are discussed in Section \ref{sec8}. \begin{figure*}[!t] \centering \includegraphics[width=180mm]{Fig11.JPG} \caption{Different drone applications using various security techniques.} \label{drones} \end{figure*} Fig.
\ref{drones} shows various drone applications in different domains that have used \textcolor{black}{blockchain \cite{BDI4}, SDN \cite{intent}, ML \cite{ML1}, or fog computing \cite{fogres} for securing drone communication.} The rest of this paper discusses in detail the usage and benefits of these technologies in making drone communication more secure. \section{Applications of blockchain for Drone Communication Security} \label{sec5} Drone technology has been around for almost a century, but in the recent past, it has gained importance in fields such as agriculture, security, wildlife conservation, delivery, and so on. Blockchain technology is said to have the potential to improve data security and transparency across multiple domains \cite{newly3,newly4,newly5,newly7}. In this section, we elaborate on the models and mechanisms based on blockchain that can be used to make drone communication secure. Various non-blockchain technologies have also been proposed to increase drone security; however, such solutions have issues that can be resolved using the features of blockchain technology. A model proposed in \cite{non1} helps in maintaining data integrity by using a sensor Physical Unclonable Function (PUF). This method provides data integrity but fails in maintaining self-trust and data provenance. Another non-blockchain model, for preventing wormhole attacks, has been discussed in \cite{worm}. The authors use a label-based method for detecting the attack. The model only addresses wormhole attacks and remains vulnerable to other attacks that blockchain could prevent, such as DDoS and GPS spoofing attacks. We further discuss the list of specific security issues that can be resolved and prevented using blockchain as a solution. Fig. \ref{blockchainindroneapplications} shows the various security applications of blockchain in a network of drones.
\subsection{ \textit{\textbf{\textcolor{black}{Air Traffic Management}}}} UAVs have recently gained huge popularity. Thus, with a growing number of drones, their paths may cross one another and sudden collisions may occur. Therefore, it is necessary to devise a solution and a platform to maintain optimal paths for air traffic management \cite{newly10}. Such traffic management for drones is different from traditional road traffic management, as there is no well-defined path of travel for each drone, and the coordinates need to be maintained in $3$ dimensions. Blockchain and IoT have many advantages over traditional internet-based systems, as internet-based systems are more prone to cyber-attacks that would degrade or disrupt the functioning of the drones. Traditionally, GPS coordinates are used for UAV localization and for avoiding traffic violations. However, such approaches are difficult to apply to complex paths due to pilot errors and other intrusion attacks. The authors of \cite{NBTM} suggest that the neural-blockchain based transport model (NBTM) can significantly help in mitigating the problem of air traffic violations. This model uses $3$ different blockchain networks to form a master blockchain, taking its input parameters as a function of the reliability of connections and the reliability of flyby time. The model also generates feedback for the initial inputs while iterating towards an optimal solution. The forward propagation is done between Blockchains $A$ and $B$, and the backward propagation between Blockchains $C$ and $D$. The primary components of NBTM are blockchain and neural networks. The neural model is a 4-layer network having $B$ and $C$ as intermediate layers. The output of the neural network model is used to form the optimal path for the UAV to travel. This model does not employ any separate mechanism for security, but simply depends on the basic principles of the blockchain.
The simulation results demonstrate that the proposed neural-blockchain enhances the reliability (a statistical parameter for evaluating consistency) of the network with a lower failure rate. Due to the availability of a feedback mechanism, the model reduces the computation power demand, resulting in lower complexity and higher efficiency when compared with the model proposed in \cite{NBTM2}. In \cite{NBTM2}, the authors have proposed a model for reducing the number of transactions required for updating the ledger in the Internet of Vehicles (IoV), so that less latency is experienced in maintaining air traffic, whereas in \cite{NBTM}, the feedback mechanism gives better results. However, the dynamic partitioning between a centralized system and the blockchain-based system remains an open challenge. \textit{Preventing mid-air collisions: } Air Traffic Control (ATC) needs improvement in preventing mid-air collisions due to the increasing number of UAVs. The Las Vegas incident could have been avoided if proper precautions had been taken \cite{2}. Due to resource constraints and heavy traffic, the UAVs are subjected to transmission delays in communication with the ground station, unlike high-speed LAN serial communication; therefore, an innovative method to improve ATC and prevent mid-air collisions is required. The authors of \cite{BDI3} propose a blockchain-based solution for ATC management to prevent mid-air collisions. Similar to \cite{NBTM} and \cite{BD2}, the authors of \cite{BDI3} focus on the physical protection of drones in case of high air traffic. However, different from \cite{NBTM} and \cite{BD2}, the authors of \cite{BDI3} focus more on exploiting the peer-to-peer transaction feature of blockchain technology rather than the feature of tamper-less data storage.
If the path that the drone has to traverse is defined and stored in tamper-less data storage, then the drone can be made secure, as no adversary can change the path of the drone for its benefit. Because of drones' agility, collision avoidance algorithms can also be employed to prevent mid-air collisions. A fast obstacle collision avoidance algorithm for UAVs has been proposed by the authors of \cite{midair_algo}. Using this algorithm, the drone can avoid static and dynamic obstacles, while still being able to get back on its initial trajectory. Mid-air collisions are dangerous, as the drone could fall to the ground after the collision and cause harm to human life. Mid-air collisions also pose a threat to airplanes, as any collision with an airplane would lead to loss of lives as well. In the proposed model, blockchain is used to store UAVnet data that comprises the UAV-ID, flight route sheet, flying schedule, and sensor data. The computing UAVs are divided into $m$ groups, each containing $n$ UAVs. Out of those $m$ groups, one is used to store the information broadcasted from other UAVs and acts as the actual blockchain participant. The other computing UAVs simulate the possible paths for the idle UAV to reach its destination. The optimal path that would limit mid-air collisions is chosen by the Proof of Graph (PoG) consensus mechanism, which is based on the Simplified Memory Bounded A* (SMA*) algorithm \cite{bound}. The authors compare the SMA* algorithm with the A* and Dijkstra's algorithms. Although Dijkstra's algorithm can find the shortest path, it must explore all possible paths, resulting in high complexity. The A* algorithm uses exponential memory, whereas SMA* uses bounded memory: with exponential memory, the addition of data increases the computation time exponentially \cite{exp}, while with bounded memory, the computation time depends on the amount of memory the data needs \cite{bounded}.
This bounded-memory property makes the algorithm memory efficient and reduces the required computation time. \begin{figure*}[!t] \centering \includegraphics[width=180mm]{Fig12.pdf} \caption{\textcolor{black}{Security applications of blockchain in a network of UAVs.}} \label{blockchainindroneapplications} \end{figure*} \subsection{ \textit{\textbf{\textcolor{black}{Geo-fencing System}}}} Geo-fencing can be defined as the virtual fencing or boundary created to prevent UAVs from entering a sensitive area such as prisons, airports, and private properties \cite{geo}. It is similar to road networks where some vehicles are not allowed to enter certain zones. However, creating fences is more complicated in the IoD due to the lack of well-defined pathways and motion being in $3$ dimensions. Traditionally, DJI's GEO System was used to mark where it is safe for the drone to fly and where the authorities may raise concerns about the flight of the drone \cite{DJI}. However, such systems may not be suitable for drones, as there are certain prohibited zones where drones cannot fly, such as near airports \cite{airport}. These systems give an optimal path that cannot be used in the case of drones, as the proposed optimal path may lie in a prohibited zone. Blockchain can be effective in maintaining such restrictions based on the $3$-dimensional coordinate system in real-time. The pioneering work on blockchain-based flight space allocation in a decentralized fashion is \cite{BD2}. Unlike \cite{NBTM}, the authors of \cite{BD2} focus mainly on preventing the entry of drones into restricted areas rather than performing complete traffic management. In the proposed model, the UAV during its flight adds its request for air-space to the blockchain network. The trajectories are then added in such a way that they do not conflict with any restricted zone defined through virtual fencing.
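Before a trajectory request is accepted into the ledger, validating nodes could check each waypoint against the fenced zones. A minimal sketch follows, assuming restricted zones are modelled as axis-aligned $3$-D boxes; this geometry and the coordinate values are simplifying assumptions, as \cite{BD2} does not prescribe a specific zone representation:

```python
def in_zone(point, zone):
    """True if a 3-D point lies inside an axis-aligned restricted box."""
    lo, hi = zone
    return all(l <= c <= h for c, l, h in zip(point, lo, hi))

def trajectory_allowed(waypoints, restricted_zones):
    """A trajectory is admissible only if no waypoint enters any restricted zone."""
    return not any(in_zone(p, z) for p in waypoints for z in restricted_zones)

# A hypothetical fenced zone around an airport: ((xmin, ymin, zmin), (xmax, ymax, zmax))
airport = ((100.0, 100.0, 0.0), (200.0, 200.0, 150.0))

print(trajectory_allowed([(10, 20, 50), (60, 80, 60)], [airport]))    # True
print(trajectory_allowed([(10, 20, 50), (150, 150, 60)], [airport]))  # False
```

A rejected trajectory would simply never be appended to the chain, so no compliant drone would fly it.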
Blockchain can maintain the constraint that the optimal path should be chosen such that it does not lie in a prohibited flying zone. This also mandates that the air paths of multiple UAVs do not cross one another, which would otherwise lead to a crash. The proposed model is better than the baseline scheme in \cite{NBTM}, as the proposed model uses blockchain both for geo-fencing and for avoiding traffic violations. The benefits of using blockchain to maintain geo-fencing include its immutability and safety from cyber attacks. However, as transactions accumulate, records grow, and block sizes increase in a blockchain, eventually exceeding any limits set, each transaction will need more time to be processed. Thus, there is a need for blockchains with a higher TPS (Transactions Per Second) rate to avoid bulking. Some newer blockchain-based structures that offer higher transaction rates of $3500$ TPS have also been proposed in the literature \cite{hedera}. \subsection{ \textit{\textbf{\textcolor{black}{Maintaining data integrity}}}} The immense data, including the geographical addresses and the data captured by the sensors on the drones, can collectively be used to profile an individual and thus lead to privacy leakage \cite{IoD}. The Iranian government claimed to be able to access all the information from an American UAV and reverse-engineer the entire drone \cite{26}. As drones have limited computation resources, the data processing can be done in the cloud. In traditional cloud-based solutions, Zone Service Providers (ZSPs) provide for the navigation of the drones and the feedback systems between drones. However, ZSPs are vulnerable to attacks due to high latency and a high false rate \cite{IoD}. An efficient IoD architecture using blockchain technology is proposed in \cite{BDI4}.
Different from \cite{NBTM} and \cite{BD2}, the focus of the authors of \cite{BDI4} is more towards securing the important data being sent by drones, rather than physically preventing them from colliding or entering restricted areas. Tamper-proof and transparent storage of data are the main features of blockchain technology exploited by the authors of \cite{BDI4}. In the proposed algorithm, the drone first enrolls itself in the blockchain ledger for storing the data, and a unique ID is assigned to each drone. The data is then hashed to maintain its integrity and uploaded to the blockchain network via the controller. After the data is successfully uploaded, an acknowledgment is sent to the drone. The data records are transformed into a Merkle Hash tree \cite{merkle}. Furthermore, data auditing is done in the cloud, which is a crucial step as it helps to detect any anomaly in the data. The proposed model was analyzed for its response time with varying numbers of drones. The simulation results demonstrate that the response time increases linearly from about $400$ms for $100$ drones to about $550$ms for $1000$ drones, thus providing good scalability. The average response time latency is also fairly stable, varying from $350$ms to $1000$ms for a $100$-drone network. The proposed model also makes the network less vulnerable to attacks like DDoS attacks and data losses, while making it more accountable. One major challenge is the time delay in the drones due to proof of work. Mining a block requires considerable computation time, which results in latency \cite{mine}. Due to hardware constraints in drones, lightweight cryptography and DAG-chain based consensus algorithms can be developed \cite{my3}. \textit{\textbf{Secure Data Dissemination: }} Data dissemination is the process of distributing data/information to its end-users.
The authors of \cite{BDI2} propose a blockchain-based algorithm that helps in secure data dissemination in the IoD (Internet of Drones). Although the model presented in \cite{BDI2} is based on blockchain, it is not designed to keep the localization information as shown in \cite{local}. The authors of \cite{BDI2} make use of blockchain technology only to secure the data transfer between the drones and drone controllers. A combination of the approaches discussed in \cite{local} and \cite{BDI2} would be a promising solution for securing both localization and data dissemination. The proposed model in \cite{BDI2} is designed using three layers, namely, the user layer, the infrastructure layer, and the IoD layer. In the user layer, blockchain technology is used for the verification and security of each transaction made in the model. The second layer, the infrastructure layer, consists of all the base stations, which ensure the connectivity between the drone controller, the drone, and the end-users. The IoD layer consists of the drones that capture real-time data and communicate amongst themselves to make certain decisions. Two types of nodes are considered in this model: (i) forger nodes and (ii) normal nodes. The forger node is used for creating new blocks in the blockchain, whereas the normal nodes are used for the verification of the blocks in the blockchain. This model works in three stages. First, the forger node is selected, and the remaining nodes are declared as normal nodes. After the forger node is selected, the hash value is calculated by the forger node using the PoS (Proof of Stake) consensus algorithm \cite{PoS}. The other nodes validate the hash value broadcasted by the forger node by comparing it with the hash value generated using the Merkle Hash tree. If both hash values match, the block is validated and added to the main chain.
The forger node then encrypts the data packets and sends the request to the public distributed blockchain. When the request is accepted, the forger node computes the digital signature of the data packets with its private key and broadcasts it to the public blockchain. The data is stored in the blockchain and can be accessed only using the decryption key, so attacks like spoofing or DoS attacks can be prevented using this algorithm. The authors evaluate the security of the proposed model in terms of communication cost and time. The simulation results demonstrate that the proposed model provides data authentication, authorization, and accountability that are not offered by other state-of-the-art related works. Another work related to securing data dissemination in the IoD is \cite{IoD}. The authors of \cite{IoD} use identity-based encryption techniques for secure data dissemination. Such techniques can provide data integrity and identity anonymity only, and fail to provide authentication, authorization, and accountability of nodes in the network. Also, there is no proposal for data verification and validation in \cite{IoD} as compared to the blockchain-based approach used in \cite{BDI2}.
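The Merkle-tree integrity check that both \cite{BDI4} and \cite{BDI2} rely on can be sketched as follows. SHA-256 with duplication of the last leaf is one common construction rather than the exact scheme of either paper, and the record contents are hypothetical:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the Merkle root of a list of data records."""
    level = [h(record) for record in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last hash if odd
            level.append(level[-1])
        level = [h(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

# Hypothetical sensor records reported by one drone
records = [b"uav-42:lat=28.61,lon=77.20", b"uav-42:alt=120m",
           b"uav-42:temp=31C", b"uav-42:batt=76%"]
root = merkle_root(records)

# Validation succeeds only if the recomputed root matches the stored one
assert merkle_root(records) == root
# Tampering with any record changes the root, so the block is rejected
tampered = records[:2] + [b"uav-42:temp=99C"] + records[3:]
assert merkle_root(tampered) != root
```

Only the $32$-byte root needs to be agreed upon on-chain; any normal node can rebuild it from the broadcast records to validate a block.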
\begin{table*}[] \centering \caption{A summary of advantages and disadvantages of major applications of blockchain for drone communication security.} \begin{tabular}{|l|l|l|l|} \hline \rowcolor[gray]{0.8} \begin{tabular}[c]{@{}l@{}}Major\\ approaches\end{tabular} & Advantages & Disadvantages & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Benefits over}\\ \textcolor{black}{traditional approaches} \end{tabular}\\ \hline \cite{NBTM} & \begin{tabular}[c]{@{}l@{}}• Significantly helps in optimizing the \\problem of air traffic violation\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Does not support dynamic\\ partitioning of UAVs into groups\end{tabular} & \textcolor{black}{Reduced Latency}\\ \hline \cite{BD2} & \begin{tabular}[c]{@{}l@{}}• Prevents the UAVs from entering into\\ any restricted zone through virtual fencing\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Can support only a limited\\ number of transactions per minute\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Immutability, Safety}\\ \textcolor{black}{from cyber attacks} \end{tabular}\\ \hline \cite{BDI4} & \begin{tabular}[c]{@{}l@{}}• Supports high scalability with stable\\ response time latency\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Significant latency in the data \\ transmission\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textcolor{black}{Scalability, Data }\\ \textcolor{black}{integrity} \end{tabular}\\ \hline \cite{BDI3} & \begin{tabular}[c]{@{}l@{}}• Supports high computation speed and\\ memory efficiency with SMA*\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Does not support dynamic\\ partitioning of UAVs into groups \end{tabular} & \begin{tabular}[c]{@{}l@{}}\textcolor{black}{Cost-effective,}\\ \textcolor{black}{Scalability} \end{tabular}\\ \hline \cite{local} & \begin{tabular}[c]{@{}l@{}}• The chances of localization errors are\\ reduced to $1/4^{th}$ \end{tabular} & • Susceptible to $51$\% attack & \textcolor{black}{Localization} \\ \hline \cite{BDI2} & \begin{tabular}[c]{@{}l@{}}• 
Proof-of-stake consensus algorithm is\\ used to significantly reduce the\\ computation time and cost\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Regulatory control and\\ governance features are missing\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textcolor{black}{Authentication, Authoriz-}\\ \textcolor{black}{ation, Accountability} \end{tabular} \\ \hline \end{tabular} \label{bloadv} \end{table*} \begin{table*}[] \centering \caption{Applications of Blockchain for drone communication security. } \begin{tabular}{|p{0.033\linewidth}|p{0.11\linewidth}|p{0.24\linewidth}|p{0.11\linewidth}|p{0.21\linewidth}|p{0.15\linewidth}|} \hline \rowcolor[gray]{0.7} Ref. & Attack & Mechanism & \begin{tabular}[c]{@{}l@{}}Blockchain\\ Feature used\end{tabular} & Major achievement & Open issues \\ \hline \cite{NBTM} & \begin{tabular}[c]{@{}l@{}}GPS Spoofing,\\Jamming\\attack\end{tabular} & \begin{tabular}[c]{@{}l@{}}The optimal path generated\\by the neural network is\\stored in the blockchain\end{tabular} & \begin{tabular}[c]{@{}l@{}}Peer-to-Peer\\model \end{tabular} & \begin{tabular}[c]{@{}l@{}}The model gives the best\\ path with a maximum\\ failure rate of $25.8$\%\end{tabular} & \begin{tabular}[c]{@{}l@{}}Making the model\\ efficient for a large\\ number of UAVs\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{BD2} & \begin{tabular}[c]{@{}l@{}}GPS spoofing, \\ DoS attacks,\\ DDoS attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The blockchain assigns \\ trajectory to the UAVs such \\ that no route clashes with\\ the other UAVs route.\end{tabular} & \begin{tabular}[c]{@{}l@{}}Tamper-proof\\data,\\ Peer-to-Peer \\ network\end{tabular} & \begin{tabular}[c]{@{}l@{}}A collision-free trajectory\\ is proposed such that the\\ UAV does not enter the\\ geo-fenced zone\end{tabular} & \begin{tabular}[c]{@{}l@{}}Model has to be\\ trained for handling\\ large number of\\ UAVs\end{tabular} \\ \hline \cite{BDI4} & \begin{tabular}[c]{@{}l@{}}DoS attacks,\\ DDoS attacks\end{tabular} & 
\begin{tabular}[c]{@{}l@{}}The data generated from \\ the sensors is stored in the \\ Merkle Hash tree which\\ ensures data integrity\end{tabular} & \begin{tabular}[c]{@{}l@{}}Distributed-\\ database,\\ Public key\\ infrastructure\end{tabular} & \begin{tabular}[c]{@{}l@{}}The average response\\ time for data transmission\\ with $1000$ drones is\\ $550$ms\end{tabular} & \begin{tabular}[c]{@{}l@{}}Implementing\\ private blockchain\\ can make the\\ system more secure\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{BDI3} & \begin{tabular}[c]{@{}l@{}}Man-in-the-\\ middle attacks,\\ GPS spoofing,\\ DoS attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The details of the UAV stored \\ in the blockchain are used to \\ calculate the optimal path\\ using the SMA* algorithm\end{tabular} & \begin{tabular}[c]{@{}l@{}}Distributed\\storage,\\ Tamper-free \\ transactions\end{tabular} & \begin{tabular}[c]{@{}l@{}}SMA* gave the optimal\\ path in very little time\\ and it uses bounded\\ memory as well\end{tabular} & \begin{tabular}[c]{@{}l@{}}Efficient dynamic\\ partitioning of the\\ UAV groups is \\ needed\end{tabular} \\ \hline \cite{local} & \begin{tabular}[c]{@{}l@{}}DoS attacks,\\ Wormhole-\\ attack,\\ GPS spoofing\end{tabular} & \begin{tabular}[c]{@{}l@{}}The co-ordinates of the drones\\stored in the blockchain are\\made available to the other\\ drones after the verification\end{tabular} & \begin{tabular}[c]{@{}l@{}}Decentralized\\ network,\\ Distributed-\\ database\end{tabular} & \begin{tabular}[c]{@{}l@{}}The localization errors are \\ reduced by $75$\%\end{tabular} & \begin{tabular}[c]{@{}l@{}}Model is still\\ susceptible to\\ $51$\% attack\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{BDI2} & \begin{tabular}[c]{@{}l@{}}Eavesdropping\\ attacks,\\ GPS spoofing,\\ DoS attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}If the hash value generated by\\ the Merkle Hash tree and the\\ computed hash value by the\\ forger node is the same then\\ only the data is 
transmitted\end{tabular} & \begin{tabular}[c]{@{}l@{}}Data integrity,\\ Distributed- \\ database,\\ Decentralized- \\ network\end{tabular} & \begin{tabular}[c]{@{}l@{}}The total computation\\ time required for data \\ dissemination is computed\\ to be $0.046$ms.\end{tabular} & \begin{tabular}[c]{@{}l@{}}Implementing\\ private blockchain\\ can make the\\ system more secure\end{tabular} \\ \hline \cite{blotab1} & \begin{tabular}[c]{@{}l@{}}DoS attacks,\\ DDoS attacks,\\ GPS spoofing\end{tabular} & \begin{tabular}[c]{@{}l@{}}The data generated by the \\ drone is stored in the \\ blockchain and is transformed\\ into the Merkle Hash tree\\ to maintain the data integrity\end{tabular} & \begin{tabular}[c]{@{}l@{}}Decentralized-\\ network,\\ Peer-to-Peer\\ model,\\ Immutability\end{tabular} & \begin{tabular}[c]{@{}l@{}}The results show that only\\ the validated drone\\ were allowed to transfer\\ the data\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model can be\\ further enhanced\\ for multi UAV\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{blotab2} & \begin{tabular}[c]{@{}l@{}}Man-in-the-\\ middle attack,\\ DoS attack,\\ DDoS attack\end{tabular} & \begin{tabular}[c]{@{}l@{}}The swarms of drones needs to\\ register themselves on the \\ blockchain using their public\\ key and after the validation,\\ the data is added to the server\end{tabular} & \begin{tabular}[c]{@{}l@{}}Distributed-\\ Database,\\ Immutability,\\ Public Key-\\ infrastructure\end{tabular} & \begin{tabular}[c]{@{}l@{}}The data acquisition was\\ done successfully and with\\ high efficiency\\\end{tabular} & \begin{tabular}[c]{@{}l@{}}Blockchain with\\ higher TPS can be\\ incorporated in the\\ model for making\\ it more efficient\end{tabular} \\ \hline \cite{blotab3} & \begin{tabular}[c]{@{}l@{}}DoS attacks,\\ DDoS attacks,\\ Man-in-the-\\ middle attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The trust is lost from\\ the intruding UAV when\\several intruder events\\are detected\end{tabular} & 
\begin{tabular}[c]{@{}l@{}}Peer-to-Peer \\ model,\\ Decentralized-\\ network\end{tabular} & \begin{tabular}[c]{@{}l@{}}$90$\% of the UAVs were\\ able to support the data\\ about the event and none\\ detected the intruder event\end{tabular} & \begin{tabular}[c]{@{}l@{}}Work on detecting\\ the compromised\\ UAV successfully\\ is required\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{blotab4} & \begin{tabular}[c]{@{}l@{}}GPS spoofing,\\ Man-in-the-\\ middle attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}A consumer makes an order \\ according to which a smart\\ contract is generated. Any\\ free UAV can accept the order\\ and the client details are sent.\end{tabular} & \begin{tabular}[c]{@{}l@{}}Peer-to-Peer \\ model,\\ Decentralized-\\ network,\\ Smart Contract\end{tabular} & \begin{tabular}[c]{@{}l@{}}The blockchain and smart\\ contracts are proved to be \\ successful in organizing a\\ secure communication\\ between the UAVs\end{tabular} & \begin{tabular}[c]{@{}l@{}}Implementing\\ private blockchain\\ can make the\\ system more secure\end{tabular} \\ \hline \cite{blotab5} & \begin{tabular}[c]{@{}l@{}}DoS attacks,\\ DDoS attacks,\\ Hijacking\end{tabular} & \begin{tabular}[c]{@{}l@{}}The interest-key-content\\ binding (IKCB) is stored in the\\ blockchain which is compared\\ by the router and the poisoned\\ data is discarded\end{tabular} & \begin{tabular}[c]{@{}l@{}}Tamper-\\resistant\\ledger,\\ Consensus-\\ algorithm\end{tabular} & \begin{tabular}[c]{@{}l@{}}The proposed model gives\\ better results when\\ compared with the\\ Interest-key binding as it\\ has lower system overhead\\ and the latency is reduced\end{tabular} & \begin{tabular}[c]{@{}l@{}}The forwarding\\ technologies can\\ be optimized to\\ make the model\\ efficient\end{tabular} \\ \hline \end{tabular} \label{relblo} \end{table*} \subsection{ \textit{\textbf{\textcolor{black}{Secure Localization}}}} A swarm of drones are deployed that automatically take actions to achieve a specific goal 
cooperatively. In such scenarios, the exact location coordinates of the drones are critical information for the completion of the mission. However, generic localization algorithms are vulnerable to attacks, as an adversary can easily inject false location coordinates. A pioneering work on secure localization in the Internet of Drones using blockchain is presented in \cite{local}. The authors propose a blockchain-based localization algorithm for securing drones. The three major features of the algorithm are: (i) decentralization, i.e., no central entity is present to maintain the localization coordinates of the drones, (ii) peer-to-peer communication between the drones, and (iii) no need for a central trust node to manage the security of the data and the coordinates between the drones. Another non-blockchain-based approach for securing localization in the IoD is discussed in \cite{worm}. This method focuses only on preventing the wormhole attack. Being centralized in nature, the approach in \cite{worm} cannot be used to prevent other generic attacks such as DoS. In the proposed algorithm, the drones need to cooperate with various anchor drones that know their own exact coordinates. The coordinates of the anchors are sent to the requesting drone using the private key of the anchors. Next, the coordinates of the anchor drones are added to the distributed blockchain ledger after verification. The requesting drone first requests the coordinates of the anchor drones present at a $1$-hop distance. If the requesting drone receives the location from at least three anchor drones within $1$ hop, the distance between the requesting drone and each anchor drone is calculated using the Received Signal Strength Indicator (RSSI) method \cite{RSSI}.
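The RSSI-based ranging step can be sketched with the standard log-distance path-loss model; the reference power at $1$ m and the path-loss exponent below are illustrative assumptions, not values taken from \cite{local}:

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    """Estimate distance (m) from a received signal strength reading.

    Log-distance path-loss model: rssi = tx_power - 10 * n * log10(d),
    where tx_power_dbm is the RSSI measured at 1 m (an assumed calibration
    value) and path_loss_exp (n) depends on the environment.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# A reading equal to the 1 m reference power implies a distance of ~1 m
assert abs(rssi_to_distance(-40.0) - 1.0) < 1e-9
# With n = 2, every 20 dB of extra loss multiplies the distance by 10
assert abs(rssi_to_distance(-60.0) - 10.0) < 1e-9
```

With at least three such anchor distances, the requesting drone's position can be recovered by trilateration before being committed to the ledger.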
However, if the requesting drone does not receive a minimum of three responses from the neighboring anchor drones, the distance between the requesting drone and the anchor drones is calculated using the DV-Hop (Distance Vector Hop) method \cite{DVHOP}, which works by first computing the average hop distance and then multiplying it by the number of hops. The authors also calculate the change in localization accuracy as the number of malicious nodes in the network increases. The accuracy of the proposed algorithm is shown to be better than that of generic localization algorithms \cite{genloc}. Simulation results demonstrate that the localization errors are reduced to $1/4^{th}$ in the presence of $50$\% malicious nodes in the network. Moreover, due to the decentralized nature of blockchain, various other attacks such as the DoS attack and the wormhole attack can also be prevented. However, the model is still susceptible to the $51$\% attack and other attacks, as is the case with blockchain in general. A $51$\% attack happens when the number of malicious nodes in the network becomes more than half of the total nodes, and hence fair localization coordinates would not be revealed. \par \textbf{ Summary:} \textcolor{black}{The objective here is to minimize the possibilities of any kind of physical attack on drones or data losses in drone communication. A summary of advantages and disadvantages of major works is given in Table \ref{bloadv}, and a summary of the related works is given in Table \ref{relblo}. As seen, blockchain technology is mostly used to provide a peer-to-peer model to mitigate the various security issues related to generic centralized architectures. 
In various works, the smart contract and incentive model features of blockchain are also used to enhance data security and reliability in various scenarios related to drone communication.} \section{Applications of SDN for Drone communication Security} \label{sec6}Software-Defined Networking (SDN) may be defined as a networking paradigm centered around the separation of the data plane (or forwarding plane) of a computer network from its control plane and the application layer. SDN is an architecture that aims to make networks agile and flexible. It virtualizes the network by separating the control plane, which manages the network, from the data plane, where all the traffic flows. This decoupling allows the network to be controlled independently of the traffic flow, keeping the traffic and the network services abstracted from the network control. These properties of SDN help in making drone communication secure. In this section, we discuss SDN-based DCNs that help in resolving various security issues in drone communication. There are several existing solutions that try to resolve the security issues related to drone communication without using the SDN architecture. The pioneering work in this direction is presented in \cite{non2}. In this model, the UAV receives the request from the ground controller and sends the data back to the controller in the form of visuals. The method described in the model demands high bandwidth for its execution, which varies with the speed of the drone or the broadcasting channels. Such traditional solutions fail to provide high security in new-generation drones, and also fail in maintaining data integrity. Another model proposed in \cite{non3} uses heuristic algorithms to provide data integrity but fails to provide good efficiency and reliability.
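The control/data plane split at the heart of SDN can be illustrated with a minimal toy sketch. The class names, port labels, and miss-handling policy below are illustrative assumptions modelled loosely on OpenFlow-style behavior, not any specific controller implementation:

```python
class Controller:
    """Control plane: holds the global view and decides routes."""
    def __init__(self, routing):
        self.routing = routing          # dst -> out_port (global network view)

    def packet_in(self, switch, dst):
        """Handle a flow-table miss reported by a switch."""
        port = self.routing.get(dst, "drop")
        switch.install_rule(dst, port)  # push the decision down to the data plane
        return port

class Switch:
    """Data plane: only matches packets against its flow table and forwards."""
    def __init__(self, controller):
        self.controller = controller
        self.flow_table = {}

    def install_rule(self, dst, port):
        self.flow_table[dst] = port

    def forward(self, dst):
        if dst not in self.flow_table:  # table miss: ask the controller
            return self.controller.packet_in(self, dst)
        return self.flow_table[dst]     # table hit: forward locally

ctrl = Controller({"uav-7": "port-2", "ground-station": "port-1"})
sw = Switch(ctrl)
print(sw.forward("uav-7"))  # miss: controller installs the rule, returns port-2
print(sw.forward("uav-7"))  # hit: answered from the local flow table
```

Because forwarding policy lives only in the controller, security logic (rate limits, path changes, quarantining a compromised drone) can be updated in one place without touching the switches, which is what the SDN-based defenses below exploit.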
\begin{table*}[] \centering \caption{A summary of advantages and disadvantages of major applications of SDN for drone communication security.} \begin{tabular}{|l|l|l|l|} \hline \rowcolor[gray]{0.8} \begin{tabular}[c]{@{}l@{}}Major\\ approaches\end{tabular} & Advantages & Disadvantages & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Benefits over}\\ \textcolor{black}{traditional approaches} \end{tabular} \\ \hline \cite{16} & \begin{tabular}[c]{@{}l@{}}• Very short average file-transfer time\\ • Every sender gets fair share of\\ bandwidth\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Legitimate sender may have to\\ wait for a long time\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Scalability, Lightweight,}\\ \textcolor{black}{Self-reliant defense} \end{tabular} \\ \hline \cite{DS1} & \begin{tabular}[c]{@{}l@{}}• The drone from which the DDoS\\ attack is launched can be found in\\ a very less time\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Cannot work properly when the\\ number of flow table items in the SD-\\IoT is very high\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Detecting and Mitigating}\\ \textcolor{black}{DDoS attacks faster} \end{tabular} \\ \hline \cite{intent} & \begin{tabular}[c]{@{}l@{}}• The average end-to-end outage rate\\ in IoD is reduced by $18$\%\end{tabular} & • High average end-to-end delay & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Reduction in end-to-end}\\ \textcolor{black}{delay} \end{tabular} \\ \hline \cite{sdn1} & \begin{tabular}[c]{@{}l@{}}• The model has high fault-tolerance\\ • High performance due to the presence\\ of multiple controllers\end{tabular} & \begin{tabular}[c]{@{}l@{}}• The link between the data plane and\\ the control plane is still susceptible\\ to attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Scalability, Mobility} \end{tabular}\\ \hline \cite{SI1} & \begin{tabular}[c]{@{}l@{}}• The latency and the maximum load\\ experienced by a SDN switch is reduced\\ by 
$50$\%\end{tabular} & \begin{tabular}[c]{@{}l@{}}• The complexity of the algorithm is\\ quite high \end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Stability, Security,}\\ \textcolor{black}{Reduced network latency} \end{tabular}\\ \hline \end{tabular} \label{sdnadv} \end{table*} Considering the above issues, in this section we discuss the methods that involve SDN for maintaining security in drone communication. We further discuss the list of specific security issues that can be resolved and prevented using SDN as a solution. \subsection{\textit{\textbf{\textcolor{black}{DoS Attacks}}}} Due to resource constraints, we need a highly efficient protocol that is resistant to large-scale DoS attacks. The NetFence protocol in an SDN-based Drone Communication Network (DCN), as proposed in \cite{16}, can be used to create a scalable DoS-resistant network. The proposed model makes use of traffic policing inside the network. The packets in the network carry unforgeable congestion policing feedback that is attached to the packets by routers. For a drone to be a part of the network, it needs to first send a request packet to a NetFence-ready receiver. Once accepted, it receives the feedback along with the acknowledgement and can then send regular data packets. Non-NetFence senders can only send packets through the legacy channel, which is given the lowest packet-forwarding priority. Bottleneck routers act as congestion detectors that regularly check the link load and packet loss rate. The rate limiters reduce data congestion through the Additive Increase and Multiplicative Decrease (AIMD) algorithm \cite{AIMD}. NS-$2$ simulations were implemented in Linux, and the performance of NetFence under DoS attacks was compared with $3$ other mechanisms, namely Traffic Validation Architecture (TVA+) \cite{TVA}, StopIt \cite{stop}, and Fair Queuing (FQ) \cite{FQ}.
NetFence has the advantage of a short average file-transfer time that does not increase significantly with an increase in senders, whereas in FQ, the transfer time increases linearly with the number of senders. Even though mechanisms like TVA+ and StopIt tend to block large-scale DoS attacks, per-host queuing is implemented in these algorithms, as compared to per-Autonomous-System queuing in NetFence. This is advantageous as the number of autonomous systems is significantly less than the number of hosts. No matter how heavy the attack is, the NetFence protocol makes sure that senders get their fair share of bandwidth. This model has a drawback: a legitimate sender may need to wait longer in NetFence to transmit data than in TVA+ or StopIt. Additionally, the NetFence algorithm fails to distinguish between congestion caused by a DoS attack and congestion caused by any other issue in the network. \subsection{\textit{\textbf{\textcolor{black}{Distributed Denial of Service (DDoS) attacks}}}} A DDoS attack is a bit different from a normal DoS attack: in a DDoS attack, there are multiple compromised hosts, as compared to a single compromised host in a normal DoS attack. An intelligent and lightweight approach is required to prevent DDoS attacks in the IoD. A pioneering work on avoiding DDoS attacks in IoT using SDN is \cite{DS1}. Unlike \cite{16}, the authors of \cite{DS1} propose an algorithm to detect and mitigate DDoS attacks on drones. The cosine similarity of the vectors of the packet-in message rate at software-defined Internet of Things (SD-IoT) switch ports is used to determine the occurrence of a DDoS attack. Threshold values of the vector length and cosine similarity are used to precisely and accurately classify an attack situation. The simulation results demonstrate that the proposed algorithm is capable of detecting the device used to launch the DDoS attack in a short span of time.
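The thresholding step on packet-in rate vectors can be sketched as follows. The specific thresholds, the comparison against a baseline profile, and the sample rate values are illustrative assumptions rather than the exact detection criterion of \cite{DS1}:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two rate vectors (1.0 = same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 1.0

def looks_like_ddos(rates, baseline, sim_threshold=0.9, length_threshold=50.0):
    """Flag an attack when the packet-in rate vector grows long (traffic surge)
    and its direction diverges from the baseline (the surge is concentrated
    on a few ports). Both thresholds are assumed values for illustration."""
    length = math.sqrt(sum(r * r for r in rates))
    return length > length_threshold and cosine_similarity(rates, baseline) < sim_threshold

baseline = [5.0, 4.0, 6.0, 5.0]            # normal packet-in rates per switch port
normal   = [6.0, 5.0, 7.0, 6.0]
attack   = [6.0, 5.0, 90.0, 6.0]           # one port floods the controller

print(looks_like_ddos(normal, baseline))   # False
print(looks_like_ddos(attack, baseline))   # True
```

Because the flooded port dominates the attack vector, its direction diverges from the baseline profile even though the other ports behave normally, which is what lets the scheme point to the offending device.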
The results of the proposed work are compared with other state-of-the-art works that try to detect DDoS attacks using IP filtering \cite{DDoS2}. The simulation results demonstrate that, in case of a DDoS attack, the number of flow-table items of the SD-IoT switches and the number of data packets received by the SD-IoT controller are lower in \cite{DS1} than in \cite{DDoS2}. However, a proactive scheme to defend against and prevent DDoS attacks is missing: the proposed algorithm works only after the DDoS attack has been launched. The authors of \cite{conti3} provide a lightweight solution to counter DDoS attacks using SDN. \subsection{\textit{\textbf{\textcolor{black}{Avoiding intentional disruption}}}} Apart from DDoS attacks on a set of drones in the network, the network of drones, being resource-constrained, is also susceptible to intentional jamming and disruption attacks. Such attacks are more severe than DDoS attacks, as they can paralyze the entire network, leaving no room for detection and mitigation. Different from \cite{16} and \cite{DS1}, the authors of \cite{intent} have proposed an SDN-based framework for secure and robust data relaying in case of intentional jamming and disruptions. In the proposed model, the drones act as SDN switches that are controlled by a centralized SDN controller. A novel 3D spatial coverage metric is used to calculate diverse multiple paths among the drones, and the model directs each drone to use the best possible path, preventing intentional disruptions from affecting the functioning of the drone network. The simulation results demonstrate that the proposed algorithm outperforms the traditional shortest-path and shortest multi-path algorithms in terms of outage prevention \cite{intentional2}. The average end-to-end outage rate in IoD is reduced by $18$\% in the proposed model when compared to \cite{intentional2}.
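The idea of computing diverse multiple paths, so that a single jammed link cannot take down every route, can be illustrated with a simple edge-disjoint path search. This is a minimal sketch using repeated BFS over a link graph; it is not the 3D spatial coverage metric of \cite{intent}, whose path-diversity criterion is geometric rather than purely topological.

```python
from collections import deque


def bfs_path(adj, src, dst):
    """Shortest hop-count path from src to dst, or None if unreachable."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, ()):
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None


def diverse_paths(edges, src, dst, k=2):
    """Find up to k edge-disjoint paths between two drones.

    After each path is found, its links are removed from the graph so
    the next path shares no link with it -- a jammer disabling one link
    then disrupts at most one of the returned paths.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    paths = []
    for _ in range(k):
        p = bfs_path(adj, src, dst)
        if p is None:
            break
        paths.append(p)
        for a, b in zip(p, p[1:]):
            adj[a].discard(b)
            adj[b].discard(a)
    return paths
```

On a diamond-shaped topology `s-a-d` / `s-b-d`, the sketch returns two fully link-disjoint routes, mirroring how multipath relaying keeps the network alive under a localized disruption.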
The algorithms in \cite{intentional2} consider only the path that takes the least time, irrespective of the presence of jammers and intentional disruptions. Although the proposed model in \cite{intent} succeeds in preventing the complete outage of the drone network, the end-to-end delay is increased by $12$\% when compared to \cite{intentional2}. The proposed model in \cite{intent} also helps in preventing the frequent link disconnections between the devices linked to the network. Further work is required on algorithms that can prevent such intentional disruptions without increasing the average delay of the network. The drone-jamming incident in Hong Kong could have been prevented if any of the above-mentioned measures had been taken \cite{jammerinc}. \begin{table*}[] \centering \caption{Applications of SDN for drone communication security. } \begin{tabular}{|p{0.04\linewidth}|p{0.075\linewidth}|p{0.23\linewidth}|p{0.125\linewidth}|p{0.2\linewidth}|p{0.18\linewidth}|} \hline \rowcolor[gray]{0.7} Ref.
& Attack & Mechanism & \begin{tabular}[c]{@{}l@{}}SDN Feature \\ used\end{tabular} & Major achievement & Open issues \\ \hline \cite{16} & DoS attacks & \begin{tabular}[c]{@{}l@{}}The drone first registers\\ itself with the NetFence\\ ready receiver and then\\ only the drone is allowed\\ to transmit the data packets.\end{tabular} & \begin{tabular}[c]{@{}l@{}}Directly-\\ programmable,\\ Scalability\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model has a very\\ short average file-transfer\\ time\end{tabular} & \begin{tabular}[c]{@{}l@{}}Implementing the\\ model specifically\\ for UAVs is much\\ needed\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{DS1} & \begin{tabular}[c]{@{}l@{}}DDoS\\ attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The cosine similarity of the\\ vectors of the packet-in\\ message rate at the SD-IoT\\ switch port is used to\\ determine the attack\end{tabular} & \begin{tabular}[c]{@{}l@{}}Abstraction of\\ network devices,\\ Dynamic re-\\ configuration of\\ networks\end{tabular} & \begin{tabular}[c]{@{}l@{}}Can determine the device\\ using which the DDoS\\ attack is launched in a\\ very short time span\end{tabular} & \begin{tabular}[c]{@{}l@{}}Implementing the\\ model specifically\\ for UAVs is much\\ needed\end{tabular} \\ \hline \cite{intent} & \begin{tabular}[c]{@{}l@{}}Jamming\\ attacks,\\ Disruption\\ attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}Multiple paths are generated\\ using a $3$D spatial metric\\ which are directed to the\\ UAVs to avoid the disruption\end{tabular} & \begin{tabular}[c]{@{}l@{}}Decoupled Data-\\ plane and the\\ Control plane\end{tabular} & \begin{tabular}[c]{@{}l@{}}The average end-to-end\\ outage rate in the IoD is\\ reduced to a large extent\\\end{tabular} & \begin{tabular}[c]{@{}l@{}}End-to-end delay\\ increases significantly,\\ which is a major area\\ to be looked upon in\\ the future\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{sdn1} & \begin{tabular}[c]{@{}l@{}}DoS\\attacks,\\ GPS\\ 
Spoofing\end{tabular} & \begin{tabular}[c]{@{}l@{}}SDN controller authenticates\\ the network device and then\\ only the data is transmitted\\ by the controllers\end{tabular} & \begin{tabular}[c]{@{}l@{}}Decoupled Data-\\ plane and the\\ Control plane\end{tabular} & \begin{tabular}[c]{@{}l@{}}Multiple SDN controllers\\ are deployed to prevent\\ the malfunctioning of the\\ devices in the network\end{tabular} & \begin{tabular}[c]{@{}l@{}}The link between the\\ Control plane and Data\\ plane is still\\ susceptible to attacks\end{tabular} \\ \hline \cite{SI1} & \begin{tabular}[c]{@{}l@{}}DoS\\attacks,\\ Spoofing\\ attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The Middlebox-Guard (M-G)\\ is deployed at different\\ locations which manages the\\ dataflow\end{tabular} & \begin{tabular}[c]{@{}l@{}}Directly-\\ programmable,\\ Flexible network\\ architecture\end{tabular} & \begin{tabular}[c]{@{}l@{}}Latency and the\\ maximum load on the\\ device are reduced by\\ $50$\%\end{tabular} & \begin{tabular}[c]{@{}l@{}}The Integer Linear\\ Program (ILP) pruning\\ algorithm used in M-G\\ has a high complexity\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{sdnt1} & \begin{tabular}[c]{@{}l@{}}DoS\\attacks,\\ DDoS\\ attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The primary path forwards\\the common files whereas\\the backup path forwards\\the uncommon cases where\\the primary path is not\\reliable\end{tabular} & \begin{tabular}[c]{@{}l@{}}Decoupled Data-\\ plane and the \\ Control plane,\\ Scalability\end{tabular} & \begin{tabular}[c]{@{}l@{}}It can handle link\\ congestion with high\\ bandwidth\end{tabular} & \begin{tabular}[c]{@{}l@{}}It has a very high end-\\ to-end delay, which\\ has to be looked upon\\ in the future to make\\ the algorithm more\\ efficient\end{tabular} \\ \hline \cite{sdnt2} & \begin{tabular}[c]{@{}l@{}}DoS\\attacks,\\ DDoS\\ attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}SDN computes the optimal\\ flow for each multi-path TCP\\ and the Flow
Deviation\\ Method (FDM) algorithm\\ is used to re-allocate\\ the bandwidth\end{tabular} & \begin{tabular}[c]{@{}l@{}}Network-\\ programmability,\\ Decoupled Data-\\ plane and the\\ Control plane\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model achieves\\ fairer bandwidth\\ allocation that provides\\ better QoS and it makes\\ the network more reliable\end{tabular} & \begin{tabular}[c]{@{}l@{}}Cannot support a high\\ number of users and\\ the model is not fully\\ secure\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{fig11} & \begin{tabular}[c]{@{}l@{}}Grey hole\\attacks,\\Black hole\\ attacks,\\DDoS\\attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The UAV informs its\\ controller about the\\ neighboring drone while\\ establishing OpenFlow\\ connection and also informs\\ about its update\end{tabular} & \begin{tabular}[c]{@{}l@{}}Decoupled Data-\\ plane and the\\ Control plane,\\ Scalability\end{tabular} & \begin{tabular}[c]{@{}l@{}}The amount of data\\ exchange when compared\\ with the AODV routing\\ algorithm is increased by\\ $2$\%\end{tabular} & \begin{tabular}[c]{@{}l@{}}Further work on\\ increasing the security\\ of the model is needed\end{tabular} \\ \hline \cite{sdnt3} & \begin{tabular}[c]{@{}l@{}}GPS\\ Spoofing\end{tabular} & \begin{tabular}[c]{@{}l@{}}Cluster heads are assigned to\\different densely populated\\sectors and the data is\\transferred through the cluster\\head only when in the range\end{tabular} & \begin{tabular}[c]{@{}l@{}}Flexible network\\ architecture,\\ Scalability\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model provides a fast\\ and efficient coverage rate\\ of about $99$\% and a\\ latency of around $20$\%\end{tabular} & \begin{tabular}[c]{@{}l@{}}Need to decrease\\ the latency to make\\ the model more\\ efficient\end{tabular} \\ \hline \end{tabular} \label{relsdn} \end{table*} \subsection{ \textit{\textbf{\textcolor{black}{Malfunctioning devices}}}} Apart from DoS, DDoS, and intentional jamming, there are various other issues
in drone communication that are related to the different sensors deployed on the drones. Traditional internet systems use IP and firewalls, which cannot solve these issues, as it is not possible to fit all objects and protocols to a common and singular protocol. A lightweight model for avoiding malfunctioning devices in IoD is proposed in \cite{sdn1}. In the proposed model, the SDN controller first authenticates the network device requesting to be connected to the network. Only after successful authentication is the data disseminated to the connected devices through the controller, which ensures that no device malfunction is taking place. Traditional network protocols are not designed to support high levels of traffic, scalability, and mobility. Hence, the use of SDN in this work increases the functionality of the network by reducing the hardware complexity. SDN also has the ability to extend network security to the access end-point devices. Multiple SDN controllers have been used instead of a single one to improve fault tolerance and robustness. Unlike \cite{sdn1}, the authors of \cite{sdn2} have proposed a similar framework using only a single controller. If an attacker compromises the SDN controller, they gain full control over the network. Hardware and software failures may also occur, which pose a potential risk to the entire network. The work in \cite{sdn1} is superior as it uses multiple controllers, so if one goes down, another can take control to avoid a system failure in case of any malfunctioning device. The proposed work reports increased network performance with multiple controllers because each controller has a partial view of the network, and the controllers collaborate and exchange information with each other. However, the link between the Control Plane and Data Plane of the SDN is still vulnerable and susceptible to attacks, and these issues are yet to be resolved.
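The multi-controller failover behavior described above can be sketched with a simple heartbeat scheme: the active controller is the first one whose heartbeat is recent enough, so when the primary falls silent, a backup takes over automatically. The class name, timeout value, and controller names are illustrative assumptions, not details of \cite{sdn1}.

```python
import time


class ControllerPool:
    """Minimal failover sketch among multiple SDN controllers.

    Each controller reports periodic heartbeats; `active()` returns the
    first controller (in priority order) heard from within the timeout,
    or None if every controller has gone silent.
    """

    def __init__(self, names, timeout=3.0):
        self.timeout = timeout
        self.order = list(names)                     # priority order
        self.last_seen = {n: float("-inf") for n in names}

    def heartbeat(self, name, now=None):
        """Record a heartbeat; `now` is injectable for deterministic tests."""
        self.last_seen[name] = time.time() if now is None else now

    def active(self, now=None):
        """Return the highest-priority live controller, or None."""
        now = time.time() if now is None else now
        for n in self.order:
            if now - self.last_seen[n] <= self.timeout:
                return n
        return None
```

The partial-network-view collaboration of \cite{sdn1} is richer than this, but the sketch captures why a second controller removes the single point of failure.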
\subsection{ \textit{\textbf{\textcolor{black}{Data Integrity}}}} An SDN-based data-security model, Middlebox-Guard (M-G), is proposed in \cite{SI1}. Different from \cite{sdn1} and \cite{sdn2}, M-G manages the dataflow to ensure network efficiency and helps in minimizing the network latency. To reduce latency, a placement selection algorithm places the middleboxes at locations where the communication links are shortest, and an offline Integer Linear Program (ILP) pruning algorithm \cite{ILP} is deployed at each middlebox. The ILP helps solve otherwise hard optimization problems at every middlebox while respecting switch resource constraints, such as CPU and RAM usage, and it also provides the optimum routes to be used for the data transfer. In addition, an online ILP is used to minimize the maximum middlebox load across the network. M-G is compared to a model known as SIMPLE, proposed in \cite{SDNI2}, as both solve middlebox placement and route selection problems. M-G outperforms the latter in terms of security, latency, and load. In \cite{SI1}, POX was used as the controller, and OpenvSwitch was used as the SDN switch for carrying out the experiments. On running the entire system, latency and maximum loads were reduced by $50$\%. In terms of security, middlebox failures and overload conditions were analyzed, and the response times for these were calculated to be less than $0.15$ seconds, indicating a fast response. \par \textbf{ Summary: } \textcolor{black}{SDN can help in preventing many attacks that drones are susceptible to, including DoS and DDoS attacks, and can help in maintaining data integrity in drones. SDN technology can also help in avoiding the intentional disruption and jamming attacks that endanger drone communication.
A summary of the advantages and disadvantages of major works that use SDN as a solution to drone communication security is presented in Table \ref{sdnadv}. Furthermore, Table \ref{relsdn} summarizes the related works that use SDN in maintaining security in drone communication. To the best of our knowledge, the decoupling of the control plane, the data plane, and the network plane greatly helps in maintaining security standards in drone communication.} \section{Applications of Machine Learning for Drone Communication Security} \label{sec7}Machine learning is the study of algorithms that are capable of learning and improving automatically through experience, and can make accurate predictions based on the data with which they are fed \cite{ml}. They can provide generalized observations for unseen and unknown states and networks as well. Different machine learning algorithms are useful in different drone applications and domains; the choice of a specific ML algorithm depends on the domain and the type of data available. ML approaches have been extensively explored in the literature both for the physical security of drones and for drone communication security. The physical security approaches deal with using different ML algorithms to detect unauthorized drones or to prevent authorized drones from entering unauthorized zones. Both these types of security issues are intrinsically related to each other. For example, if the system fails to identify or detect an unauthorized drone and allows it to enter a network of authorized drones, it can easily allow all possible communication attacks on the network. Therefore, drone detection using ML can be considered a preliminary step that can prevent the possibility of drone communication issues to a great extent. The authors of \cite{jam_ml} study different ML frameworks and provide a model to prevent jamming attacks.
A distributed learning framework is essential to manage the various tasks in a swarm of drones \cite{mozaffari5}. Therefore, in this section, we review works that use various ML algorithms to detect drones or to identify and prevent generic security vulnerabilities. First, we discuss the issues with the traditional approaches that do not use ML algorithms, and then we move on to the challenges in drone communication and possible ML-based solutions. There are some traditional techniques that do not use ML algorithms for detecting drones. The most primitive technique is drone detection using radar. Detection using radar is highly expensive, and it can be used only for detecting large objects. Another approach to drone detection uses Light Detection And Ranging (LiDAR), as implemented in \cite{LiDAR}. LiDAR sends a laser beam towards the object and analyzes the beams returned after colliding with the object. However, LiDAR is also an extremely expensive method for detection, and is highly vulnerable to climatic conditions. Moreover, these techniques tend to give false positive results, thereby resulting in wastage of resources. We further discuss specific security issues that can be resolved and prevented using ML as a solution. \subsection{\textit{\textbf{\textcolor{black}{Drone detection using SVM and CNN}}}} ML algorithms can be used in radar detection to address various detection and classification problems associated with the traditional methods of radar detection \cite{radar}. The authors of \cite{radar} discuss different SVM models to classify detected objects as drones or birds, and to classify different kinds of drones depending on payload or number of rotors. These models showed high accuracy ($>90$\%) on test data. An efficient drone detection model using the Support Vector Machine (SVM) and Convolutional Neural Networks (CNN) for noise detection is discussed in \cite{SVM}.
Unlike \cite{ML1}, the authors of \cite{SVM} use SVM and CNN for drone detection, as compared to the LSTM approach used in \cite{ML1}. \textcolor{black}{The data was collected as audio from $6$ listening nodes deployed to listen to the UAVs flown \cite{SVM}.} Both types of ML algorithms have their own pros and cons. The SVM-based models are easier to implement than other deep learning algorithms such as LSTM. However, SVM-based models are only suitable for small datasets with limited outliers. In the proposed model, multiple listening nodes and a control center are used. The listening nodes are deployed on a circle surrounding the protected area. A microphone is installed on each listening node to detect the sound of the drone. After detection, the modules per frame are computed and sent to the control center for further evaluation. At the control center, SVM is deployed. SVM is a supervised classifier that distinguishes between the required entities by mapping the input vectors into a high-dimensional feature space \cite{SVM2}. The classifier analyzes the pattern of the frames and the sound sent to the control center, and decides whether a drone has been detected or not. \textcolor{black}{The simulation results in \cite{SVM} demonstrate that the SVM algorithm is more efficient than CNN in detecting drones.} However, the main limitation is that these algorithms have noise-related issues that make the results inconsistent. Moreover, the signals were not normalized in the proposed model, leading to many outliers.
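The SVM classification step can be illustrated with a minimal linear SVM trained by subgradient descent on the regularized hinge loss. The toy feature vectors, labels, and hyperparameters below are assumptions for demonstration; the actual model in \cite{SVM} operates on acoustic frame features from the listening nodes.

```python
import numpy as np


def train_linear_svm(X, y, lr=0.01, lam=0.01, epochs=200):
    """Train a linear SVM by subgradient descent on the hinge loss.

    X: (n, d) feature matrix (e.g. per-frame audio features);
    y: labels in {-1, +1} (no drone / drone). When a sample violates
    the margin, the weights move toward it; otherwise only the L2
    regularizer shrinks them.
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:
                w -= lr * (lam * w - y[i] * X[i])
                b += lr * y[i]
            else:
                w -= lr * lam * w
    return w, b


def predict(w, b, X):
    """Classify each row of X by the sign of the decision function."""
    return np.where(X @ w + b >= 0, 1, -1)
```

On a linearly separable toy set, the learned hyperplane recovers the drone/no-drone split, which is exactly the role the control-center classifier plays in the surveyed pipeline.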
\begin{table*}[] \centering \caption{A summary of advantages and disadvantages of major applications of ML for drone communication security.} \begin{tabular}{|l|l|l|l|} \hline \rowcolor[gray]{0.8} \begin{tabular}[c]{@{}l@{}}Major \\ Approaches\end{tabular} & Advantages & Disadvantages & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Benefits over}\\ \textcolor{black}{traditional approaches} \end{tabular} \\ \hline \cite{SVM}, \cite{CNN} & \begin{tabular}[c]{@{}l@{}}• Supports high computation speed\\ and is cost efficient\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Unwanted noises in the background\\ make the results inconsistent\end{tabular} & \textcolor{black}{Efficiency} \\ \hline \cite{RNN}, \cite{RNNIMAGE} & \begin{tabular}[c]{@{}l@{}}• Model training time is very low\\ and accuracy is high\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Generating such a large dataset\\ artificially is very difficult\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Identification, Classific-}\\ \textcolor{black}{ation of types of drones} \end{tabular} \\ \hline \cite{inter} & \begin{tabular}[c]{@{}l@{}}• Stores data sequentially, so the data\\ retrieval latency is very low\end{tabular} & \begin{tabular}[c]{@{}l@{}}• The latency in data transmission\\ increases when a very large data file\\ is transmitted\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Minimizing latency, }\\ \textcolor{black}{Data reliability} \end{tabular} \\ \hline \cite{neural} & \begin{tabular}[c]{@{}l@{}}• High reliability with very few\\ resource requirements\end{tabular} & \begin{tabular}[c]{@{}l@{}} • Fails in further classifying the type\\ of attack \end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Detecting and preventing}\\ \textcolor{black}{DoS attacks} \end{tabular} \\ \hline \cite{obs} & \begin{tabular}[c]{@{}l@{}}• Very lightweight model, can run\\ on Raspberry Pi $3$B\end{tabular} & \begin{tabular}[c]{@{}l@{}}• A lot of training and testing in Deep \\
Learning algorithms may be required\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Easy deployment,}\\ \textcolor{black}{Privacy} \end{tabular} \\ \hline \cite{GPSD} & \begin{tabular}[c]{@{}l@{}}• Can detect the adversary in GPS-\\ denied environment with great\\ accuracy\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Fails when the drone moves in an\\ irregular pattern and is sensitive to\\ lighting conditions\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Efficiency in GPS-}\\ \textcolor{black}{denied environment} \end{tabular}\\ \hline \end{tabular} \label{mladv} \end{table*} \subsection{\textit{\textbf{\textcolor{black}{Drone detection using RNN and CRNN}}}} An efficient model using deep learning techniques like Recurrent Neural Networks (RNN) and Convolutional Recurrent Neural Networks (CRNN) for drone detection is mentioned in \cite{RNN}. The authors of \cite{RNN} have acquired a large dataset of drone propeller audio data, and have overlapped the audio clips with a variety of background noises to mimic real-life scenarios. \textcolor{black}{Data labelling was done for the identification problem as unknown (random noises in the surrounding), Bebop (drone $1$), or Mambo (drone $2$), and for the detection problem as drone or not a drone.} The experiment has been divided into two categories: the first targets detection of drones, and the second their identification based on type. The detection problem has been evaluated and compared with existing literature, and the mentioned algorithms have been compared based on their accuracy, F1 score, precision, recall metrics, and computational time required to train and test the model. The \textcolor{black}{experimental} results of the model in \cite{RNN} show that deep learning methods using drone acoustics show great effectiveness for drone detection. CNN and CRNN algorithms remarkably outperform RNN in both detection and identification.
Although CNN showed better performance than CRNN, the difference in performance was negligible, and CRNN required significantly less time to train, making it the most practical choice. Another model, discussed in \cite{RNNIMAGE}, uses CNN for drone detection, but uses images instead of drone acoustics. Although the results are promising, the dataset for such a study could only be artificially created, which decreases its reliability, and identification of the specific drone type is not possible as it is in \cite{RNN}. The authors of \cite{RNNWIN} have noted that RNN achieves superior performance when compared to CNN. The discrepancy is attributed to differences in the model's architecture and design parameters, but a direct comparison of the results could not be performed by the authors of \cite{RNN}. \subsection{\textit{\textbf{\textcolor{black}{Fault detection and Recovery of UAV data using LSTM}}}} UAVs are used for certain critical applications like military operations and product delivery. Therefore, it is imperative to deploy mechanisms that make data transmission in UAVs ultra-reliable. Furthermore, the latency of data transmission should also be kept to a minimum. Being resource-constrained, the UAVs need to transmit real-time data to the cloud servers for storage. Pioneering works in the direction of minimizing the latency and increasing the data reliability using LSTM are \cite{ML1}, \cite{inter}. Unlike \cite{RNN}, the authors of \cite{ML1}, \cite{inter} use LSTM for drone communication security. LSTM networks are a special type of RNN; their main feature over plain RNNs is the presence of a 'memory cell' that can maintain information in memory for a long time. In the proposed model, firstly, a regression model using LSTM is built to extract spatial-temporal features of the drone data. This is done to get an estimate of the monitored parameters or features.
The authors use a set of $11$ distinct parameters or features, such as roll angle, altitude, and indicated airspeed, sensed through airborne sensors to capture the UAV's current attitude and position, as input to the proposed model. The output is used to train the fault detection model after normalization of the data. Next, various filters are used to reduce the difference between the actual data and the estimated values, thereby removing the effects of various noises. A threshold value is compared with the estimated values to detect faults. In case a fault in the data is discovered, the faulty data is replaced with the data estimated by the proposed model or with the recovery data. \textcolor{black}{The simulation results demonstrate that the proposed model is capable of providing a quick recovery of the data in a limited time. The experimentation shows that the Mean Square Error (MSE) was recorded to be less than $0.078$, whereas the Mean Absolute Error (MAE) was less than $0.205$.} However, further work on increasing real-time data recovery may be done to make the model more accurate and effective in fault detection and recovery. \subsection{\textit{\textbf{\textcolor{black}{DoS attacks}}}} The authors of \cite{dist} and \cite{msvm} have proposed ML-based models to detect Denial of Service attacks using Neural Networks and Modified Support Vector Machines, respectively. However, a pioneering model for detecting and preventing DoS attacks in IoD using machine learning is proposed in \cite{neural}. Unlike \cite{SVM,RNN,inter}, the authors of \cite{neural} focus on preventing DoS attacks on drone data, rather than physically detecting unwanted drones in the network. \textcolor{black}{The dataset consists of labeled data categorized as Benign for normal traffic, and attacks like brute force, DoS/DDoS, and web attacks.} The authors proposed and implemented the random forest algorithm \cite{RF} and multi-layer perceptron algorithm \cite{MLP} on the CIC IDS $2017$ dataset.
The CIC IDS $2017$ dataset consists of data on current attacks, such as DoS and DDoS, in pcap format. The incoming data traffic at the drone is classified by the deployed classification algorithms as benign or attack packets. Both models achieved an accuracy greater than $98$\%, with MLP achieving $98.87$\% using $30$\% training records and the RF algorithm achieving $99.95$\% using $50$\% training records. However, none of the previous works, including \cite{dist} and \cite{msvm}, could achieve such an accuracy level with a relatively low resource requirement, as desired by an IoD system. A further task is to test the system on the multi-class classification of DoS attacks; the model does not yet classify specific attacks such as Heartbleed, slowhttptest, and HTTP flood. Also, the resources required can be further reduced to make the system more efficient by reducing the number of features. \begin{table*}[] \centering \caption{Applications of ML for drone communication security.} \begin{tabular}{|p{0.035\linewidth}|p{0.065\linewidth}|p{0.2\linewidth}|p{0.21\linewidth}|p{0.171\linewidth}|p{0.176\linewidth}|} \hline \rowcolor[gray]{0.7} Ref.
& Attack & Mechanism & \begin{tabular}[c]{@{}l@{}}Machine Learning\\ Feature used\end{tabular} & Major achievement & Open issues \\ \hline \cite{SVM} & \begin{tabular}[c]{@{}l@{}}GPS\\ spoofing\end{tabular} & \begin{tabular}[c]{@{}l@{}}The sound of the drone is \\ used for classifying the\\ presence of drone using\\ SVM and CNN\end{tabular} & \begin{tabular}[c]{@{}l@{}}SVM and CNN \\ classify whether the\\ drone is present in the\\ specified area or not\end{tabular} & \begin{tabular}[c]{@{}l@{}}SVM shows better\\results in detecting\\the UAV as compared\\to CNN\end{tabular} & \begin{tabular}[c]{@{}l@{}}The background noises\\ of the wind and the\\ surroundings gave\\ inconsistent results\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{RNN} & \begin{tabular}[c]{@{}l@{}}GPS\\ spoofing\end{tabular} & \begin{tabular}[c]{@{}l@{}}The algorithms like RNN\\ are used to identify\\ the presence of UAV on\\ the basis of sound\end{tabular} & \begin{tabular}[c]{@{}l@{}}Algorithms like RNN and\\ CRNN are used to classify\\ the presence of the drone\end{tabular} & \begin{tabular}[c]{@{}l@{}}CRNN showed the\\best results in\\detecting the presence\\of the drone\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model can be\\trained to detect\\ a wider class of\\drones\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}} \cite{ML1} \end{tabular} & \begin{tabular}[c]{@{}l@{}}DoS\\ attack,\\ Worm-\\hole\\ attack\end{tabular} & \begin{tabular}[c]{@{}l@{}}The LSTM-based fault\\ detection model detects\\ the fault and the quick\\ recovery commands are\\ sent to the UAV\end{tabular} & \begin{tabular}[c]{@{}l@{}}LSTM is used to store the\\ previous data of the UAV\\ which helps in building\\ the model that efficiently\\ detects the fault\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model achieved\\very low MSE and\\MAE, which makes \\the model very\\efficient\end{tabular} & \begin{tabular}[c]{@{}l@{}}Work on increasing the\\ efficiency of the model\\ is much needed\end{tabular} \\ \hline
\rowcolor[gray]{0.9} \cite{neural} & \begin{tabular}[c]{@{}l@{}}DoS \\ attack\end{tabular} & \begin{tabular}[c]{@{}l@{}}The Random Forest and\\ Multi-Layer Perceptron\\ algorithms classify\\ the data packets received \\ as benign or the DoS\\ affected packets\end{tabular} & \begin{tabular}[c]{@{}l@{}}Random Forest and Multi-\\ Layer Perceptron algorithms\\ are used to classify between\\ the affected and the non-\\ affected packets received by\\ the drones\end{tabular} & \begin{tabular}[c]{@{}l@{}}The MLP algorithm\\ achieved an accuracy\\ of $98.87$\% whereas the\\ RF algorithm achieved\\ an accuracy of $99.95$\%\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model does not\\ classify the type of\\attack taking place and\\work on decreasing the\\latency is needed\end{tabular} \\ \hline \cite{obs} & \begin{tabular}[c]{@{}l@{}}DoS\\attack,\\ DDoS\\ attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The received data is made\\ obscure by adding some\\ noise and CNN is used\\ to reconstruct the\\ obscured image by\\ using different weights\end{tabular} & \begin{tabular}[c]{@{}l@{}}CNN algorithm is used to \\ reconstruct the obscured\\ data by using some\\ random weights, hence\\ making the data secure.\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model re-\\constructed the\\obscured data with\\an accuracy of $81.3$\%\\and it can run on R Pi\\ $3$B as well\end{tabular} & \begin{tabular}[c]{@{}l@{}}Research is open for\\ working on increasing\\the accuracy and\\the efficiency of\\the model\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{GPSD} & \begin{tabular}[c]{@{}l@{}}GPS\\ spoofing,\\ DoS\\attack\end{tabular} & \begin{tabular}[c]{@{}l@{}}The target drone and the\\ size of the drone is\\ detected using bounding\\ box object detection\\ algorithm\end{tabular} & \begin{tabular}[c]{@{}l@{}}The bounding box object\\ detection algorithm and the \\ YOLO detection algorithm\\ is used for the real-time\\ detection of the drone\end{tabular} & \begin{tabular}[c]{@{}l@{}}This model
achieved\\ $77$\% accuracy in\\ detecting the target \\ drone with an average\\ frame rate of $5.22$ fps\end{tabular} & \begin{tabular}[c]{@{}l@{}}The hunter drone\\is inefficient because\\of its heavy weight\end{tabular} \\ \hline \cite{mltab1} & \begin{tabular}[c]{@{}l@{}}Jamming,\\ Black\\hole\\attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}Whenever any event is\\ detected by the UAV, the\\ information is sent to the\\ controller and IDS\\ identifies the malicious\\ node\end{tabular} & \begin{tabular}[c]{@{}l@{}}A hierarchical intrusion\\ detection system is used\\ for detecting the malicious\\ nodes that are injecting\\ false data\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model achieved \\ a detection rate of\\more than $93$\% and a\\false positive rate of\\less than $3$\%\end{tabular} & \begin{tabular}[c]{@{}l@{}}Implementing the \\model on the swarm\\of drones is a much\\needed work\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{mltab2} & \begin{tabular}[c]{@{}l@{}}GPS\\ spoofing\end{tabular} & \begin{tabular}[c]{@{}l@{}}A machine learning-based\\ naive Bayes algorithm is\\ used to check for the\\ presence of the UAV\\ and the classification is\\ done using the k-nearest\\ neighbor algorithm\end{tabular} & \begin{tabular}[c]{@{}l@{}}The naive Bayes algorithm\\ is used for the detection of\\ the micro-UAV and for the\\ classification of the micro-\\ UAV, kNN is used\end{tabular} & \begin{tabular}[c]{@{}l@{}}The confusion matrix\\ obtained for the kNN \\ classifier in the model\\ achieved an accuracy\\ of $97.1$\%\end{tabular} & \begin{tabular}[c]{@{}l@{}}This model can make\\use of a $3$D feature\\cluster map that would\\help improve the real-\\time classification\end{tabular} \\ \hline \cite{mltab3} & \begin{tabular}[c]{@{}l@{}}Jamming\\ attacks,\\ DoS\\attack\end{tabular} & \begin{tabular}[c]{@{}l@{}}UAVs are used for data \\ transmission and intrusion\\ detection system is used\\ for detecting any anomaly\end{tabular} &
\begin{tabular}[c]{@{}l@{}}An intrusion detection\\ system is used for the\\ detection of the \\ anomaly in the network\end{tabular} & \begin{tabular}[c]{@{}l@{}}An efficient way\\of securing the\\ multi-level ad hoc\\ networks is presented\end{tabular} & \begin{tabular}[c]{@{}l@{}}Other networking \\ solutions can be used\\ to make the model\\more efficient\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{mltab4} & \begin{tabular}[c]{@{}l@{}}DoS\\attack,\\ DDoS\\ attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}Devices selected in the\\ white list using the\\ algorithm are only used\\ for data transmission\end{tabular} & \begin{tabular}[c]{@{}l@{}}Random Forest algorithm\\ is used for classifying the\\ connected devices as\\ legitimate devices or\\ malicious devices\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model showed an\\ accuracy of $99.49$\%\\ in detecting the un-\\ authorized device in\\the network\end{tabular} & \begin{tabular}[c]{@{}l@{}}The efficient detection\\of a variety of\\compromised drones\\can be worked\\upon in the future\end{tabular} \\ \hline \end{tabular} \label{relml} \end{table*} \subsection{\textit{\textbf{\textcolor{black}{Privacy Leakage}}}} In the case of drones, the authentication algorithms used to enable access to the network are generally cryptography-based. However, the use of machine learning algorithms to avoid privacy leakage in IoD networks has recently been explored \cite{IoD}. For avoiding privacy leakage in drones, a pioneering model is proposed in \cite{obs}. Different from \cite{neural}, the authors of \cite{obs} focus on using deep learning algorithms to proactively secure the data, rather than detecting attacks on the network. The authors use deep learning auto-encoders to secure sensor data and to store media on a server. The data collected from the sensors of the drone is first converted into digit images of size $28$ by $28$ pixels.
Further, some noise is added to the sensor data to make it obscure, and the data is then sent to a remote cloud server to be saved. A Convolutional Neural Network (CNN) is implemented in the reconstruction and classification components. The reconstruction component reconstructs the obscured data into the original data using trained model weights. As the model weights are not known, it becomes almost impossible for an adversary to retrieve the original data. Next, the classification component recognizes the data from the reconstructed data, and the digital data is further converted back to sensor data using a deep learning auto-encoder. The proposed model is lightweight enough to run on a Raspberry Pi $3$B. The model was tested on the MNIST dataset, and the results demonstrate an accuracy of $81.3$\% in identifying the reconstructed data. In general, data privacy is ensured by encrypting the data with a variety of cryptographic representations. However, further techniques are required, as privacy techniques using cryptographic keys can be broken once the key is obtained. Another technique that helps in preventing data leakage is Homomorphic Authenticated Encryption (HAE) \cite{HAE}. Unlike \cite{obs}, the \cite{HAE} model works without the use of a key. HAE allows users who do not have the key to perform computations on the ciphertext. The computed ciphertext decrypts to the correct function value. \subsection{\textit{\textbf{\textcolor{black}{Adversarial attacks}}}} Adversarial machine learning is mainly used to cause a malfunction in a machine learning model by supplying deceptive inputs. The IoD brings in a vast range of sensors, mobile network security issues, and privacy-protecting challenges that are different from traditional internet systems. A model for avoiding adversarial attacks is proposed in \cite{CNNADV}, which uses CNN and RNN for adversary detection.
Similar to \cite{RNN,CNN}, the authors of \cite{CNNADV} use RNN and CNN-based models. However, these models are used to detect and prevent adversarial attacks rather than detecting the presence of drones, as done in \cite{RNN,CNN}. A pioneering work in detecting and preventing adversarial attacks in IoD is \cite{GPSD}. The authors of \cite{GPSD} use a black and white version of the Tiny You Only Look Once (YOLO) detection system and visual servoing without motion capture systems. The proposed techniques are efficient even in a GPS-denied environment. The proposed model is presented using a drone hunting platform that self-localizes using visual inertial odometry (VIO) through a ZED stereo camera, and runs a visual tracking and identification algorithm on a Jetson TX$2$. The commands are sent by the algorithm to the PX-$4$ based flight controller. The simulation results demonstrate that the platform could effectively track and chase the adversary. The model achieved $77$\% accuracy with an average frame rate of $5.22$ fps. The proposed work runs significantly faster than other deep learning detection models, as mentioned in \cite{CNNADV} and \cite{GIS}, with comparable accuracy. Also, it works to detect the adversary in a GPS-denied environment, which is not done in other previous works in this direction. However, the fundamental drawback of the proposed model is that the detection algorithm is sensitive to poor lighting. \textbf{ Summary:} \textcolor{black}{The objective here is to enhance the possibilities of adversary drone detection using various machine learning and deep learning approaches. Apart from drone detection, various works have also focused on using such algorithms to prevent attacks in drone communication networks. A summary of advantages and disadvantages of major works is shown in Table \ref{mladv}, and a summary of the related works is given in Table \ref{relml}.
As seen, ML algorithms have a high capability in detecting unwanted drones and preventing drones from entering restricted areas. These algorithms are also being widely proposed for secure traffic management and prevention of mid-air collisions.} \section{Applications of Fog Computing for Drone Communication Security} \label{sec8} Fog computing is a powerful complement to cloud computing that can provide better QoS and can also help in decreasing the security issues present in cloud computing systems. It is difficult to connect such a large number of drones directly to the cloud due to high latency delays and unpredictable network connections. Connections between the drones and the fog layer can be easily established with low latency. The most important benefit of fog computing is that it performs all the computations and keeps the data near the drone, which makes the data safer and more secure. Fog computing also supports mobility, scalability, heterogeneity, and platform independence. \textcolor{black}{The concept of edge computing is closely related to fog computing, and the two are said to overlap to a great extent \cite{fog_edge_iot}. Edge computing moves resources from the cloud towards the edge of the network, and is more focused towards the 'things' side. However, fog computing concerns itself mainly with the infrastructure.} In this section, we first discuss the basic issues with traditional approaches that do not use fog computing, and then move on to the challenges in drone communication and possible fog computing based solutions. There are various traditional methods that help in securing drone communication without leveraging the benefits of fog computing. A traditional man-in-the-middle attack detection system has been proposed in \cite{non5}, which uses the precise timing of arrival of data packets to infer the possibility of the attack. If a packet arrives later than the expected threshold time, a possible attack is inferred.
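The timing check of \cite{non5} can be sketched as follows. The delay values and the mean-plus-$k$-standard-deviations threshold below are our own illustrative simplification of the idea, not the authors' exact procedure:

```python
from statistics import mean, stdev

def expected_threshold(baseline_delays, k=3.0):
    """Estimate an arrival-time threshold from benign delay samples:
    anything beyond mean + k standard deviations is considered late."""
    return mean(baseline_delays) + k * stdev(baseline_delays)

def is_suspicious(arrival_delay, threshold):
    """Infer a possible man-in-the-middle relay if the packet is late."""
    return arrival_delay > threshold

# Hypothetical benign delays (ms) on a clean link; a relay adds latency.
baseline = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]
thr = expected_threshold(baseline)
print(is_suspicious(12.1, thr))  # False: packet arrived on time
print(is_suspicious(25.0, thr))  # True: late packet, possible relay
```

As the source notes next, such a check inherits the noise of the transmission channel, which is exactly why the baseline statistics matter.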
This method can fail in several circumstances where heavy background noise is present, as the arrival of the data packet depends heavily on the transmission channel. Bamasag et al. \cite{non6} proposed a multicast authentication model for data transmission in the desired time interval. This model makes use of Shamir’s secret sharing technique \cite{non7}, in which the secret can be unlocked if the authenticator holds a sufficient number of shares. Although this method provides some reliability, storing such a large number of keys is not preferred considering the resource-constrained nature of drones. We further discuss the specific security issues that can be resolved or prevented using fog computing as a solution. \begin{table*}[] \centering \caption{A summary of advantages and disadvantages of major applications of Fog Computing for drone communication security.} \begin{tabular}{|l|l|l|l|} \hline \rowcolor[gray]{0.8} \begin{tabular}[c]{@{}l@{}}Major\\ Approaches\end{tabular} & Advantages & Disadvantages & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Benefits over}\\ \textcolor{black}{traditional approaches} \end{tabular}\\ \hline \cite{27} & \begin{tabular}[c]{@{}l@{}}• High performance and accuracy as the\\ spoofing attack can be detected from\\ $10$ meters and in $250$ milli-seconds\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Cannot efficiently avoid collisions\\ and also sometimes fails in\\ detecting the obstacles\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Easy deployment,}\\ \textcolor{black}{Confidentiality} \end{tabular} \\ \hline \cite{IDS} & \begin{tabular}[c]{@{}l@{}}• A very high efficiency and a low\\ resource-demanding model\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Highly dependent on network's\\ latency which demands more\\ optimization\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textcolor{black}{Security against man-}\\ \textcolor{black}{in-the-middle attacks} \end{tabular} \\ \hline \cite{FSS} &
\begin{tabular}[c]{@{}l@{}}• Identity-based authentication enhances\\ end-to-end security between the edge\\ layer and the fog layer\end{tabular} & \begin{tabular}[c]{@{}l@{}}• The data processing time highly\\ varies with the configuration of the\\ device used for detection\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Authentication, Data}\\ \textcolor{black}{integrity, Non-repudiation} \end{tabular} \\ \hline \cite{fogres} & \begin{tabular}[c]{@{}l@{}}• The architecture covers all the three\\ aspects i.e. minimizing the latency and\\ the energy consumption and maximizing\\ the reliability in the drones\end{tabular} & \begin{tabular}[c]{@{}l@{}}• The architecture is not very fast\\ and future work is needed for\\ increasing the efficiency of the\\ architecture\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Better performance than} \\ \textcolor{black}{ LRGA-MIE and LP-based} \\ \textcolor{black}{algorithms} \end{tabular}\\ \hline \cite{cache} & \begin{tabular}[c]{@{}l@{}}• The model decreases the latency\\ experienced by the drone and increases\\ the Quality-of-Experience (QoE)\end{tabular} &\begin{tabular}[c]{@{}l@{}} • Highly sensitive to the number\\ of drones \end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Low latency, High} \\ \textcolor{black}{QoE} \end{tabular}\\ \hline \end{tabular} \label{fogadv} \end{table*} \subsection{ \textit{\textbf{\textcolor{black}{GPS spoofing attacks}}}} UAVs in the fog environment are susceptible to many challenges that work against their benefits of mobility, scalability, and accurate location tracking. GPS spoofing is a notable security breach attack that sends incorrect GPS information to the receiver. UAVs need special attention since traditional internet systems such as cloud computing cause latency overloads and unforeseeable network issues. Various GPS spoofing detection methods have been adopted in the past.
The major ones are detection based on cryptographic algorithms and detection using auxiliary equipment, as mentioned in \cite{27}. The authors take the flight security and safety of drones, acting as fog nodes in an airborne fog computing system, into consideration. The model uses visual sensors combined with an IMU (Inertial Measurement Unit) for information fusion to solve GPS spoofing issues. Using a DJI Phantom $4$ with a frame rate of $30$ fps, it was observed that the spoofing attack can be detected from $10$ meters away and in $250$ milli-seconds. The authors of \cite{fogt1} propose a fog-to-cloud computing framework for a Dragnet based amateur drone surveillance system. GPS spoofing and jamming attacks can be detected using the framework, which is inspired by traditional military anti-drone technologies. The amateur drone surveillance system is empowered with a ``brain'' for high-level intelligence consisting of a fog-to-cloud model. It is a system of coordinated measures for sensing a spoofing attack on the system by global decision-making based on the actions of the amateur drones. The Kashmar incident mentioned in \cite{26} could have been prevented if the US had employed some of the fog computing frameworks mentioned above. \subsection{ \textit{\textbf{\textcolor{black}{Man-In-The-Middle attack}}}} The man-in-the-middle attack threatens one of the most demanding aspects of IoD, namely integrity, as mentioned in \cite{int}. A pioneering work in the direction of a low resource-demanding model with a high level of security to prevent man-in-the-middle attacks in IoD is proposed in \cite{IDS}. The authors propose an intrusion detection system (IDS) and intrusion prevention system (IPS) for preventing man-in-the-middle attacks at the fog layer. Although the model is proposed for IoT devices in general, it can be easily implemented on IoD as well. In the proposed network model, IDS nodes are deployed at a one-hop distance.
Whenever an IDS node finds a compromised node or an intruder, it simply indicates the nodes in its proximity to cut off connection with the compromised node. On deployment, IDS nodes acquire the key from the cloud and distribute it to fog nodes. To prevent intrusion, all the packets are encrypted using the Advanced Encryption Standard (AES), and Diffie-Hellman key exchange \cite{AES} is used for key exchange. IDS nodes periodically interrogate the fog nodes and observe the receiver's behavior. IDS nodes expect the receiver to decrypt the packet in some pre-defined time. If the round trip time of the interrogation exceeds the pre-defined time, the IDS concludes that the fog node is malicious. Additionally, even if an attacker knows of the existence of the IDS nodes and the protocol they are using, the attacker still does not know the nature of the interrogation, which is pre-programmed before the deployment of the nodes. This further reduces the chances of the attack. The proposed model, when implemented at the fog layer, could help in the identification and prevention of man-in-the-middle attacks so that manipulated information does not reach the cloud. The simulation of the model was done over OMNET++. The latency overhead for deploying the IDS and IPS was $40$ milli-seconds. The time taken to detect an attack was found to be between $2.48$ and $2.53$ seconds. Since $2$ seconds was the time between investigation sessions, the actual discovery time was approximately $0.5$ seconds. The energy overhead incurred on the fog nodes by the IDS nodes was negligible, which makes it a very low resource-demanding model for detecting man-in-the-middle attacks. However, in the proposed model, the investigation time is inversely proportional to the network's latency and the energy overhead of the IDS network model. Further work is required in the direction of optimization algorithms to improve the efficiency of the proposed model.
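The interrogation logic described above can be illustrated with a minimal sketch. The deadline value and the stand-in node callbacks below are hypothetical; in the actual model the IDS times the decryption of an AES-encrypted challenge packet rather than a plain function call:

```python
import time

DECRYPT_DEADLINE = 0.05  # seconds; hypothetical pre-defined decryption time

def interrogate(fog_node, deadline=DECRYPT_DEADLINE):
    """One interrogation round: time how long the node takes to answer
    the challenge; a reply past the deadline marks the node as malicious."""
    start = time.monotonic()
    fog_node()  # the node decrypts the challenge packet and replies
    rtt = time.monotonic() - start
    return rtt <= deadline  # True -> node behaves, False -> flag it

honest = lambda: None                  # replies immediately
compromised = lambda: time.sleep(0.1)  # relaying/tampering adds delay

print(interrogate(honest))       # True
print(interrogate(compromised))  # False
```

The point of the sketch is that the verdict rests purely on round-trip timing, which is why the investigation time is so sensitive to network latency.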
\begin{table*}[] \centering \caption{Applications of Fog computing for drone communication security.} \begin{tabular}{|p{0.033\linewidth}|p{0.11\linewidth}|p{0.235\linewidth}|p{0.114\linewidth}|p{0.189\linewidth}|p{0.17\linewidth}|} \hline \rowcolor[gray]{0.7} Ref. & Security Issues & Mechanism & \begin{tabular}[c]{@{}l@{}}Fog Computing\\ Feature used\end{tabular} & Major achievement & Open issues \\ \hline \cite{27} & \begin{tabular}[c]{@{}l@{}}GPS spoofing,\\ Eavesdropping\end{tabular} & \begin{tabular}[c]{@{}l@{}}The visual sensors combined\\ with IMU are deployed on\\ the drone and the data is\\ transmitted to the fog layer\\ detecting the attack\end{tabular} & \begin{tabular}[c]{@{}l@{}}Distributed\\ computing,\\ Scalability\end{tabular} & \begin{tabular}[c]{@{}l@{}}The spoofing attack can\\ be detected from $10$\\ meters away and in $250$\\ milli-seconds.\end{tabular} & \begin{tabular}[c]{@{}l@{}}Methods for avoiding \\ mid-air collisions\\can also be \\implemented\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{IDS} & \begin{tabular}[c]{@{}l@{}}Man-in-the-\\ middle attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The malicious data is\\ prevented from entering the\\ cloud layer as the attack is\\ detected in the fog layer by\\ using the cryptographic keys\end{tabular} & \begin{tabular}[c]{@{}l@{}}High\\computation\\ power\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model demands very\\ few resources\\ and can detect the attack\\ in less than $0.5$ secs\end{tabular} & \begin{tabular}[c]{@{}l@{}}Future work on\\increasing the\\efficiency of the\\model is much\\needed\end{tabular} \\ \hline \cite{FSS} & \begin{tabular}[c]{@{}l@{}}Eavesdropping,\\ Man-in-the-\\ middle attacks,\\ Hijacking\end{tabular} & \begin{tabular}[c]{@{}l@{}}The drone gets authenticated\\ from the fog layer that\\ contains the hashing\\ algorithm, and then only is\\ allowed to transmit data\end{tabular} & \begin{tabular}[c]{@{}l@{}}Low latency,\\ Mobility,\\
Heterogeneity\end{tabular} & \begin{tabular}[c]{@{}l@{}}The average end-to-end\\ processing time was\\ $2.59$ secs and the\\ average overall response\\ time was $3.17$ secs\end{tabular} & \begin{tabular}[c]{@{}l@{}}The average overall\\ response time highly\\ varied with the\\number of devices\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{fogres} & \begin{tabular}[c]{@{}l@{}}Latency,\\ Resource\\ constraints\end{tabular} & \begin{tabular}[c]{@{}l@{}}The task is divided into\\ several small task using\\ ADMM algorithm and is\\ transmitted to the nearby\\ ready drones that completes\\ the task and transmit back\\ the result acting as a fog node\end{tabular} & \begin{tabular}[c]{@{}l@{}}Low latency,\\ High\\ computation\\ power\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model showed\\ positive results in\\ minimizing the latency\\ and the energy\\ consumption and\\ maximizing the\\ reliability in the drones\end{tabular} & \begin{tabular}[c]{@{}l@{}}Future work is \\required in making\\the model more\\ reliable and safe\\ for the drones\end{tabular} \\ \hline \cite{fogt1} & GPS spoofing & \begin{tabular}[c]{@{}l@{}}The surveillance devices\\ acting as a fog layer sends\\ the data to the cloud layer\\ and gets the result back and\\ transmits to the amateur\\ drone for implementation\end{tabular} & \begin{tabular}[c]{@{}l@{}}High\\computation\\ power,\\ Distributed\\ computing\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model detects the\\ authorized drone with a\\ greater probability than\\ detecting the false or\\ unauthorized drone\end{tabular} & \begin{tabular}[c]{@{}l@{}}To efficiently detect \\the drone, a high\\detection delay is\\expected from the\\ model which should\\ be reduced\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{fogt2} & \begin{tabular}[c]{@{}l@{}}DoS attacks,\\ DDoS attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The serial number of each\\ device is stored in the fog\\ layer and whenever any\\ device wants to communicate\\ with the 
other device it needs\\ to verify its serial number\\ with the fog layer\end{tabular} & \begin{tabular}[c]{@{}l@{}}Low latency,\\ High \\computation\\ power \end{tabular} & \begin{tabular}[c]{@{}l@{}}The model uses very \\little bandwidth and\\increases the security\\ in the IoT devices\\as no device can\\communicate without\\authentication\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model can be\\ implemented on the\\drones specifically to\\increase their security\end{tabular} \\ \hline \end{tabular} \label{relfog} \end{table*} \subsection{ \textit{\textbf{\textcolor{black}{Eavesdropping}}}} Eavesdropping is an attack that affects the confidentiality of the data in drones \cite{int}. Classical security solutions such as the Secure Sockets Layer (SSL) \cite{SSL} exist, but they cannot be implemented on drones, as drones lack enough memory and CPU power to perform the required cryptographic operations. Therefore, offloading the additional security operations to a more resourceful entity such as fog nodes is a promising solution. A model that addresses this problem in drones is proposed in \cite{FSS}. A Fog Security Service (FSS) mechanism is proposed that uses public- and private-key cryptography schemes. It consists of a Verifier, a PKG (Private Key Generator), and a hashing algorithm at the fog layer. In the proposed model, input security parameters that include a unique identifier, a username, and a password for verification of the sender are assigned to every drone. The PKG is used for communication between the fog layer and the edge layer. After node authentication, asymmetric encryption is used for getting symmetric keys from the fog layer. For public-key encryption, the Rivest-Shamir-Adleman (RSA) algorithm \cite{RSA} is used. Nonce values are also used for preventing playback attacks. FSS provides identity-based authentication, as the private key is used for both encryption and decryption.
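The role of the nonces in preventing playback (replay) attacks can be sketched as follows. The HMAC-based tagging and the shared session key are illustrative stand-ins of our own; the source does not specify the FSS message format at this level of detail:

```python
import hmac, hashlib, secrets

SHARED_KEY = b"per-drone session key"  # hypothetical key obtained via the PKG
seen_nonces = set()                    # verifier state kept at the fog layer

def make_request(payload: bytes):
    """Drone side: tag each message with a fresh random nonce and a MAC."""
    nonce = secrets.token_bytes(16)
    tag = hmac.new(SHARED_KEY, nonce + payload, hashlib.sha256).digest()
    return nonce, payload, tag

def verify(nonce, payload, tag):
    """Fog side: reject bad MACs and any nonce seen before (a replay)."""
    expected = hmac.new(SHARED_KEY, nonce + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected) or nonce in seen_nonces:
        return False
    seen_nonces.add(nonce)
    return True

msg = make_request(b"telemetry")
print(verify(*msg))  # True: fresh, authentic message
print(verify(*msg))  # False: same nonce replayed
```

Because each nonce is accepted at most once, capturing and re-sending a valid packet buys the attacker nothing.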
This enhances the end-to-end security between the edge and the fog layer. For IoD networks, ground access points along with UAVs are present \cite{fogaccess}. Therefore, installing the proposed FSS layer along the data transmission paths could help identify and prevent eavesdropping problems in drones. An OPNET-based network simulator was used to evaluate the proposed method. In addition to different traffic loads, several devices representing different capacities and resources were used for experimentation. The performance of the model was evaluated based on the response time. The average end-to-end processing time was $2.59$ secs, while the overall response time was $3.17$ secs on average. The response time was measured against the state-of-the-art Authentication Proxy as a Service (ApaaS) \cite{ApaaS} and legacy methods. The processing time varied according to the different hardware used. However, the heterogeneity of drones created many dependencies related to processing time. Decreasing the variance of the processing time caused by this heterogeneity is open for research. \subsection{\textit{\textbf{\textcolor{black}{Resource constraint issues}}}} As discussed above, drones have several applications, such as delivering products and military applications, and therefore need high computation power. Latency-sensitive applications such as disaster management and path recognition are also at risk due to this issue. A Fog Computing aided Swarm of Drones (FCSD) architecture is proposed in \cite{fogres}, which helps in minimizing the latency in drone communication. As the drones are highly resource-constrained, a task is divided into several small tasks using the Proximal Jacobi Alternating Direction Method of Multipliers (ADMM) \cite{ADMM}. ADMM is an algorithm that distributes a task into several small tasks and assigns them to the devices connected in the network that are ready to perform the task.
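The benefit of dividing a task among ready drones can be illustrated with a much-simplified proportional split that equalizes the completion time of divisible work; this is not the Proximal Jacobi ADMM algorithm itself, which solves the allocation iteratively under latency, energy, and reliability constraints:

```python
def split_task(total_cycles, drone_rates):
    """Split a divisible task so all drones finish at the same time:
    share_i = total * rate_i / sum(rates), so each drone needs
    share_i / rate_i = total / sum(rates) seconds (the makespan)."""
    total_rate = sum(drone_rates)
    shares = [total_cycles * r / total_rate for r in drone_rates]
    makespan = total_cycles / total_rate
    return shares, makespan

# Hypothetical example: an initiator drone splits a 1e9-cycle task
# across three ready drones with different CPU rates (cycles/second).
shares, latency = split_task(1e9, [2e8, 3e8, 5e8])
print(latency)  # 1.0 second of parallel computation
```

Run sequentially on the fastest drone alone, the same task would take $2$ seconds; any parallel split bounded by the slowest finisher can only match or beat that, which is the latency gain the FCSD architecture targets.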
An initiator drone is used to assign the tasks to the nearby drones using the ADMM algorithm. The drones complete the specified tasks and transmit the computed results back to the initiator drone. The simulation results demonstrate a considerable improvement in terms of reduction in transmission latency and computation latency. Both the transmission and computation energy consumption are also considered for the FCSD to reduce the overall energy consumption of the drones. The ADMM algorithm performs better when compared with baseline pre-existing algorithms such as the latency and reliability constrained minimum energy consumption algorithm based on a genetic algorithm (LRGA-MIE) \cite{LRGA} and a newly developed Linear Programming (LP) based algorithm. The Proximal Jacobi ADMM based algorithm gave the optimal solution, and the algorithm converged after the $14^{th}$ iteration. Another model for minimizing the latency in a swarm of drones, which uses a decentralized algorithm for task allocation based on game theory, is discussed in \cite{ano}. However, this model fails to provide the level of reliability provided by the ADMM algorithm. Moreover, the model in \cite{ano} converges towards the optimal solution after a larger number of iterations compared to \cite{fogres}. \subsection{\textit{\textbf{\textcolor{black}{Minimum Latency in data dissemination}}}} The dissemination of data is required to have the least possible latency and error. A model known as edge caching is proposed in \cite{cache}, in which common files that the drone captures are cached and made available whenever needed. The data that the user demands is generated by merging the data files collected by the different sensors installed in the UAV. The common data files can be stored in the cache-enabled UAV, which will then transmit them directly to the demanding user.
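The caching behaviour can be sketched with a tiny LRU cache; the capacity, file names, and backhaul stand-in below are illustrative assumptions of ours, not details taken from \cite{cache}:

```python
from collections import OrderedDict

class EdgeCache:
    """Tiny LRU cache for the common data files a cache-enabled UAV keeps
    on board, so repeated requests skip the backhaul fetch entirely."""
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.files = OrderedDict()
        self.hits = self.misses = 0

    def fetch(self, name, load):
        if name in self.files:            # cache hit: serve locally
            self.files.move_to_end(name)  # mark as most recently used
            self.hits += 1
        else:                             # miss: fetch, evict LRU if full
            self.misses += 1
            self.files[name] = load(name)
            if len(self.files) > self.capacity:
                self.files.popitem(last=False)
        return self.files[name]

cache = EdgeCache(capacity=2)
backhaul = lambda name: f"<data:{name}>"  # stand-in for the sensor/cloud fetch
for req in ["map", "map", "weather", "map", "video", "weather"]:
    cache.fetch(req, backhaul)
print(cache.hits, cache.misses)  # 2 4
```

Every hit is a request served without touching the backhaul link, which is where the latency and QoE gains of the scheme come from.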
This model helps in decreasing the latency and increasing the Quality-of-Experience (QoE), as the common data that collectively helps in generating the demanded data is already cached. However, this model suffers from the drawback that as the number of drones increases, the data transmission power decreases. The simulation results demonstrate that the transmission power decreases by $86$\% when the number of UAVs is increased from $3$ to $7$. Hence, this model is highly sensitive to the number of UAVs. However, another similar model proposed in \cite{inter} performs significantly better than the mechanism proposed in \cite{cache}. The authors of \cite{inter} use ML algorithms such as CNN and RNN for classifying the already existing data and the data required for the generation of the demanded data. The use of these algorithms significantly improves the performance of the model even with an increased number of drones. \textbf{ Summary:} \textcolor{black}{This section portrays that fog computing can help in preventing various attacks like GPS spoofing, man-in-the-middle attacks, and eavesdropping attacks in drone communication. Fog computing mainly works towards minimizing the latency in drone communication, considering the resource-constrained nature of drones. A summary of the advantages and disadvantages of major works that use fog computing as a solution to drone communication security is presented in Table \ref{fogadv}, and a summary of all the related works in this direction is presented in Table \ref{relfog}.
As seen, fog computing minimizes the load on the cloud and helps the drone to offload its tasks to the fog layer, thereby minimizing the latency and maximizing the reliability in drone communication.} \section{\textcolor{black}{Lessons Learned,} Future Research Directions and Open Challenges} \label{sec9} \subsection{\textcolor{black}{Lessons Learned}} The UAV industry is growing rapidly, and as new applications come into use, we face various challenges that still need to be handled. Although the various technologies mentioned above are anticipated to help secure drone communications, there are various constraints in these technologies as well. These solution constraints need to be closely examined before the solutions are implemented in different drone applications. Blockchain is itself an emerging technology and has not been well implemented and tested in non-financial domains. \textcolor{black}{Blockchain technology has properties that can help secure drone communication across application areas like mining, delivery, surveillance, and disaster management effectively, because it improves data security (against DoS, jamming, GPS spoofing, eavesdropping, and wormhole attacks) and transparency, even in a swarm of drones.} \textcolor{black}{SDN aims at making the network more agile, flexible, and secure (against DoS, jamming, GPS spoofing, and black hole attacks) because of its infrastructure of separating the data plane and the control plane. These incentives make drone communication networks useful in the application areas of military, photography, and $5$G networks.} \textcolor{black}{The choice among different machine learning approaches depends on the application area and domain. ML can be used for securing drone communication networks (against DoS, GPS spoofing, jamming, and wormhole attacks), as well as for the physical security (drone detection) of the drones.
Characteristics of frameworks using ML make them suitable for drone applications like traffic management, fault detection, and navigation systems.} \textcolor{black}{Fog computing provides better QoS, scalability, flexibility, low latency, and platform independence, and improves the security of the network (against GPS spoofing, man-in-the-middle, eavesdropping, hijacking, and DoS attacks). All these advantages make such a framework suitable for drone application areas involving big data, smart vehicles, and energy conservation.} \subsection{\textcolor{black}{Future Research Directions and Open Challenges}} Some of the future research directions in this field are as follows: \begin{itemize} \item Drones are resource-constrained devices. Implementing security algorithms such as blockchain in a swarm of drones means adding more storage capacity and computation ability on the drones. This might end up reducing the flight time. Moreover, using blockchain for non-critical communication can be acceptable, but for critical data such as location coordinates, blockchain can currently cause high latency. Further research is required to enable these security algorithms while keeping the resource-constrained nature of drones in mind. \item The gateways between drones, ground controllers, and satellites in drone communication are also highly vulnerable to various security attacks. If the gateways are compromised, then the whole network is compromised, even though the end devices are highly secure. Further analysis is required on how to secure the gateways between different hops in drone communication. \item The current architecture of fog computing does not support inter-fog resource and task sharing. In drone communication, a few fog nodes or access points might be less loaded as compared to others. In such scenarios, the fog nodes could directly interact with each other and share the tasks among themselves.
This could further reduce the transfer of data from the fog to the cloud, thereby enhancing security. \item The current blockchain architecture is highly limited in terms of the number of nodes in permissioned networks and in terms of throughput in permissionless networks. Various consensus algorithms are being designed to support high throughput along with a large number of nodes or users. \item A concept of multiple, distributed controllers is being proposed in some works to overcome the problem of the controller being a single point of failure in SDN architectures. However, further work is required to ensure secure and near real-time communication between different controllers in SDN. \end{itemize} \section{Conclusion} \label{sec10}In this study, \textcolor{black}{we gave an overview of the various application areas of drones. Following that, security threats (drone-specific and IoT-generic) and potential vulnerabilities in specific applications of drone communications were explained. Furthermore, a brief overview of the fundamentals of the various technologies was given.} Existing and upcoming solutions to overcome the security threats in drone applications \textcolor{black}{using different concepts have been discussed in detail in the subsequent sections}. The major technologies covered in the solution architectures include software-defined networks, blockchain, fog/edge computing, and machine learning. Detailed benefits of these technologies in overcoming the security threats in specific drone applications have also been discussed. The state of the art in drone security has also been discussed, with some improvement suggestions, open issues, and future research directions. This survey is expected to serve as a valuable resource for security enhancement of upcoming and existing drone applications. \section{Introduction} \textcolor{black}{The number of drone or UAV based applications is drastically increasing.
Based on the latest TechSci report, the overall revenue from the drone application-related market is expected to grow drastically from $69$ billion dollars in $2018$ to $141$ billion dollars in $2023$ \cite{techsci}. The first application of drones was seen in $1849$, when the Austrian army attacked Venice with unmanned balloons filled with explosives \cite{Austrian}. This was the point where the idea of drones and their related applications came into the picture and became a topic of exploration for researchers. During WWII, Reginald Denny invented the first remote-controlled aircraft, called the Radioplane OQ-2. It was the first mass-produced UAV product in the US and was a breakthrough in manufacturing and supplying drones for the military \cite{firstdrone}. The use of drones in multiple domains has been rapidly increasing in the past few years. } \begin{figure}[!t] \centering \includegraphics[width=90mm]{Fig1.jpg} \caption{\textcolor{black}{Basic Process of Drone Communication \cite{fig1}.}} \label{droneWorking} \end{figure} \textcolor{black}{Drones work on a simple procedure that involves a data link from the ground controller to the drone and a data link from the drone to the satellite. The ground station controller is also in contact with the satellite at every point in time. The basic functioning of drone communication is pictorially shown in Fig. \ref{droneWorking}. Communication between the drone and the other components takes place through radio waves. Drones can help in sending data from one point to another with low latency \cite{fig1}. Drones can provide on-the-fly communication facilities in areas where terrestrial infrastructure is poor or has been destroyed, as well as in disaster-struck areas where emergency services are required to prevent further destruction or harm \cite{drone_ml}. UAVs can act as a communication bridge between ground users and network nodes. Furthermore, they can also be used in various monitoring or surveillance operations.
A 3-D network can also be made to integrate drone base stations (droneBS) and cellular-connected drone users \cite{rev2}. Although these applications are highly promising in providing safety and comfort to all, they can also bring disastrous results if the drone communication links are hacked and misused. Being resource-constrained, drones are highly vulnerable to physical and cyber attacks/threats \cite{cyber}. The storage and battery capacity of drones is limited, and if proper care is not taken, it is easy to hack the chips and the sensors installed inside the drone's circuit to get all the stored information. Therefore, it is highly imperative to focus on the security standards for drone communication as their applications increase \cite{newly1, newly2}. The authors of \cite{rev1} propose a way to reduce the service time of drones. Drone path planning can be done for secure positioning and verification of various components \cite{conti4}.} \begin{figure*}[!t] \centering \includegraphics[width=180mm]{Fig2.jpg} \caption{\textcolor{black}{Structure of our Survey.}} \label{organization} \end{figure*} \begin{table}[!t] \caption{List of Major Acronyms} \centering \resizebox{1\columnwidth}{!}{ \begin{tabular}{|l|l|} \hline \rowcolor[gray]{0.7} \textbf{{Notation}} & \textbf{{Meaning}}\\ \hline ADMM & Alternating Direction Method of Multipliers \\ \hline AODV & Ad hoc On-demand Distance Vector\\ \hline ApaaS & Authentication Proxy as a Service \\ \hline CPS & Cyber-Physical System \\ \hline FANET & Flying Ad-hoc Network \\ \hline FQ & Fair Queuing \\ \hline \textcolor{black}{GNSS} & \textcolor{black}{Global Navigation Satellite System Signals} \\ \hline HAE & Homomorphic Authenticated Encryption\\ \hline ILP & Integer Linear Program \\ \hline IMU & Inertial Measurement Unit \\ \hline \textcolor{black}{LOIC} & \textcolor{black}{Low Orbit Ion Cannon} \\ \hline \textcolor{black}{LiDAR} & \textcolor{black}{Light Detection And Ranging} \\ \hline NBTM & Neural-Blockchain
Based Transport Model\\ \hline NIDS & Network Intrusion Detection System \\ \hline PKG & Private Key Generator \\ \hline PUF & Physical Unclonable Function \\ \hline \textcolor{black}{RSA} & \textcolor{black}{Rivest-Shamir-Adleman} \\ \hline TVA+ & Traffic Validation Architecture \\ \hline VIO & Visual Inertial Odometry \\ \hline \end{tabular} } \label{Acronymtable} \end{table} \begin{table*}[!t] \caption{Related Surveys on Drone Security} \centering \resizebox{1\textwidth}{!}{ \begin{tabular}{|l|l|l|} \hline \rowcolor[gray]{0.7} \textbf{Year} & \textbf{Author} & \textbf{Contributions} \\ \hline 2015 & Lav Gupta et al., \cite{rw3} & Discussion on security issues in a swarm of drones or Flying Ad-hoc Network (FANET) \\ \hline 2016 & Riham Altawy, Amr M. Youssef, \cite{civilian} & Survey on security, privacy, and safety aspects of civilian drones \\ \hline 2016 & Samira Hayat et al., \cite{rw4} & Discussion on requirements of UAV networks for upcoming drone applications \\ \hline 2017 & Mohammad Mozaffari, Walid Saad et al., \cite{rw5} & Issues that UAVs face due to wireless networks \\ \hline 2018 & Silvia Sekander, Hina Tabassum et al., \cite{rw6} & Issues due to wireless networks and the architecture of 5G for UAV \\ \hline 2019 & Azade Fotouhi et al., \cite{rw2} & \textcolor{black}{Challenges faced by UAV in cellular communication} \\ \hline 2019 & Saeed H.
Alsamhi et al., \cite{rw1} & The challenges faced in collaboration of drones and IoT specifically for smart cities \\ \hline 2019 & Sun Xingming, Yueyan Zhi et al., \cite{tab1} & Survey on security and privacy issues of UAV \\ \hline \textcolor{black}{2021} & \textcolor{black}{This paper} & \textcolor{black}{Survey on existing and upcoming security challenges in drone communication and their solutions}\\ \hline \end{tabular} } \label{table_relatedworks} \end{table*} \textcolor{black}{Due to the increasing use of drones, the issues related to drone security, privacy, reliability, regulation, and ownership are also increasing at the same pace. There are various security-critical applications where drones fail to provide complete security of data, and that results in great losses and life-threatening risks. For example, on November $29$, $2018$, a drone was hacked in Las Vegas and came into the path of a tour helicopter \cite{2}. Fortunately, the pilot managed to avoid a crash, but this may not be the case in all such events. A crash might have resulted in the loss of life of many civilians. The incident was investigated by the Federal Aviation Administration (FAA), and some strict rules against drone usage were also brought into action. Various such threats can be caused by the unrestricted use of drones in different applications without any standard security parameters. In this section, we present various important drone applications that are associated with critical security issues. Table \ref{Acronymtable} shows the list of major acronyms used throughout this survey. } \subsection{Related Surveys and Our Contributions} Although a few recent works focus on surveys of issues related to drone communications, the existing surveys generally consider a specific domain or utility of drones. For example, the authors of \cite{rw1} provide a detailed survey on the challenges faced in the collaboration of drones and IoT specifically for smart cities.
Another work presented in \cite{civilian} discusses the security, privacy, and safety aspects specific to civilian drones. Furthermore, a significant number of surveys have been done earlier discussing the privacy and security issues present specifically in UAVs or communication networks. The authors of \cite{rw2} focus on the use of UAVs for cellular communications. The authors discuss various standardization advancements, practical aspects, regulatory issues, and security challenges related to the use of UAVs in cellular communication. The authors of \cite{rw3} provide a full review of various security challenges faced in UAV communication networks. The authors specifically focus on the issues faced in a swarm of drones or Flying Ad-hoc Network (FANET). A comparative study of the issues that differentiate FANET from other ad-hoc networks, such as Mobile Ad-hoc Network (MANET) or Vehicular Ad-hoc Network (VANET), is also done in good detail. Furthermore, the authors of \cite{cong1} review the use of Game Theory-based approaches for UAVs. UAV path deviation attacks have been surveyed in \cite{conti2}. The authors of \cite{rw4} provide a review of the characteristics and requirements of UAV networks for upcoming drone applications. A generic review of all the network-related requirements, such as safety, scalability, privacy, connectivity, security, and adaptability, is provided. The authors of \cite{rw5} and \cite{rw6} also emphasize the issues related to the use of UAVs in wireless networks. The work done in \cite{rw5} provides some key guidelines for analyzing, designing, and optimizing communication systems unique to UAVs. The authors also discuss the need for various security measures required for drones. A complete drone architecture for 5G has been presented in good detail. Moreover, a comprehensive survey discussing the security and privacy issues faced by UAVs is presented in \cite{tab1}.
Hence, different from any of the previous works, this work is a comprehensive survey on \textcolor{black}{the most critical} existing and upcoming security challenges in drone communication and the related solutions. This paper aims to help the readers get an overview of the state-of-the-art security challenges in drone communication. The readers will also get a good overview of existing and emerging security solutions for drone communication. Table \ref{table_relatedworks} shows the major survey works done in the direction of drone security in the last few years. The main contributions of this work are as follows: \begin{enumerate} \item[{\bf 1.}] {A complete review of different existing and anticipated attacks in drone communication.} \item[{\bf 2.}] {Detailed and realistic recommendations to improve the drone application architecture for secure communication.} \item[{\bf 3.}] {Extensive analysis of the existing and upcoming solutions that empower the use of drone communication in multiple domains.} \item[{\bf 4.}] {An assessment of the future research areas, existing challenges, and open issues for developing secure drone applications.} \end{enumerate} \subsection{Organization} The rest of the paper is organized as follows. In Section \ref{sec2}, we discuss various security issues and security-critical applications of UAVs in different domains. Section \ref{sec3} discusses the fundamentals of various emerging technologies for secure drone communication. Four major drone communication security approaches, i.e., Blockchain, Software Defined Networks (SDN), Machine Learning, and Fog/Edge Computing, are presented in Sections \ref{sec5}, \ref{sec6}, \ref{sec7}, and \ref{sec8}, respectively. Section \ref{sec9} describes various future research areas, existing challenges, and open issues in drone security. Finally, we conclude the paper in Section \ref{sec10}. The organization of the survey is also shown in Fig. \ref{organization}.
\section{\textcolor{black}{Security Issues in Drone Communication and Potential Vulnerabilities}} \label{sec2}Drone communication faces some specific security challenges along with the generic cyber-threats. One of the reasons for these specific issues is that drones are unmanned, and it is difficult to handle or prevent unanticipated issues dynamically and adaptively. Special attention needs to be given to drone security issues, as drones are different from traditional IoT devices (mobile phones, sensor-based alarms, smart trackers, etc.), and we need drones to support several advanced security concepts, such as confidentiality, authentication, access control, and data protection, while being highly resource-constrained devices. Drone deployments also inherit the vulnerability concerns of sensor networks, mobile communication networks, the Internet, etc. Drones communicating via cellular data use radio signals to communicate with the controller. The controller sends radio signals through its transmitter, and these are received by the drone through its receiver. The radio signals in between can be jammed or tampered with \cite{radio}. As stated by an IBM researcher, drones can be hijacked easily if they do not have encryption on their onboard chips \cite{10}. Because of the resource constraints of drones, encryption is not an ideal solution: with the huge amount of data exchanged in drone communications, encryption and decryption using complex algorithms require significant computational power. The security concerns become more severe when drones use Wi-Fi for communication \cite{norton}. \textcolor{black}{Table \ref{IoT_diff} summarizes how susceptible various wireless communication networks are in comparison with drone communication systems.} In this section, we present various specific security challenges faced by drones.
Furthermore, we also discuss the specific security vulnerabilities for each attack in some drone applications. Various ideas and methodologies to overcome these security challenges are discussed in the upcoming sections of this paper. \begin{table*}[] \color{black} \centering \caption{Difference between security vulnerabilities of different wireless networks} \begin{tabular}{|l|l|l|l|l|l|} \hline \rowcolor[gray]{0.8} Security Issue & Ad-hoc Network & Sensor Network & Mesh Network & Vehicular Network & Drone Comm. \\ \hline DoS & {\cmark} (Low) & {\cmark} (High) & {\cmark} (Low) & {\cmark} (Medium) & {\cmark} (High)\\ \hline Man-in-the-middle & {\cmark} (Medium) & {\cmark} (High) & {\cmark} (Low) & {\cmark} (Low) & {\cmark} (High)\\ \hline GPS spoofing & {\xmark} & {\xmark} & {\xmark} & {\cmark} (Medium) & {\cmark} (High)\\ \hline Radar & {\xmark} & {\xmark} & {\xmark} & {\xmark} & {\cmark} (High)\\ \hline Jamming & {\cmark} (Medium) & {\xmark} & {\cmark} (Low) & {\cmark} (Low) & {\cmark} (High)\\ \hline Wormhole & {\cmark} (High) & {\cmark} (High) & {\cmark} (Low) & {\cmark} (Low) & {\cmark} (High)\\ \hline \end{tabular} \label{IoT_diff} \end{table*} \subsection{Security Issues In Drones} A few of the security threats discussed below are more drone-specific (GPS spoofing, radars, jamming, and wormhole attacks), whereas the relatively generic issues mentioned are discussed based on how adversaries can exploit them to threaten the use of drones. \subsubsection{Denial of Service Attacks} Denial of Service (DoS) is the most common and easiest type of attack that an adversary can use to stop a drone from functioning normally. This is the simplest way of entering the drone network and making it useless or sometimes even harmful \cite{newly14}. Fig. \ref{dos} shows the basic working of the DoS attack in the case of drone communication. Due to a large number of superfluous requests, legitimate users' access to shared resources is restricted.
This overloads the system and might result in some or all legitimate requests being rejected. In this process, the network connection between the ground controller and the drone is de-authenticated, as the adversary sends numerous data packets to the drone, which exhausts the drone's computational resources \cite{DoS}. Data packets can be easily created by any packet generator application and can be sent directly to the drone's network. ICMP (Internet Control Message Protocol) packets sent at a very high rate will flood the drone's network, resulting in the ground controller losing control of the drone. It is also possible that malicious code is present in one of the sent data packets, which can be used to attack the drone. Such attacks can be used by the hijacker to crash the drone, causing harm to civilians and government agencies. \begin{figure}[!t] \centering \includegraphics[width=90mm]{Fig3.pdf} \caption{Denial of Service Attack on Wi-Fi Enabled Drone.} \label{dos} \end{figure} The authors of \cite{doscase} have demonstrated the effects of DoS attacks on two types of drones, namely the Augmented Reality aerial drone (AR Drone) $2.0$, which is a cheap quadcopter, and the $3$DR Solo, which is a costlier quadcopter. The authors experimentally evaluate three DoS attack tools, Netwox, Low Orbit Ion Cannon (LOIC), and Hping3, to analyze the drone's behavior. Both types of drones are widely available in the market. Both drones were tested for the transmitted image quality and how DoS attacks affect it. The study found that both the costlier and the cheap drone show a significant drop in frame rates, clearly demonstrating that even premium drone manufacturers are not paying enough attention to drone security. The increase in network latency shows the ease of DoS attacks even on such high-end drones.
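To make the flooding behaviour described above concrete, the sketch below flags a sender whose packet rate within a sliding time window exceeds a threshold. This is only a minimal illustration of rate-based DoS detection, not the method used in the cited studies; the class name, window size, and packet threshold are illustrative assumptions.

```python
import collections

class FloodDetector:
    """Sliding-window packet-rate monitor (illustrative sketch only).

    Flags a sender as a possible DoS source when the number of packets
    seen from it within `window` seconds exceeds `max_packets`.
    Both thresholds are made-up illustrative values.
    """

    def __init__(self, window=1.0, max_packets=100):
        self.window = window
        self.max_packets = max_packets
        self.seen = collections.defaultdict(collections.deque)

    def packet(self, sender, timestamp):
        """Record one packet; return True if a flood is suspected."""
        q = self.seen[sender]
        q.append(timestamp)
        # Drop timestamps that fell out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_packets

detector = FloodDetector(window=1.0, max_packets=5)
# Legitimate controller: sparse traffic never trips the threshold.
assert not any(detector.packet("controller", t * 0.5) for t in range(5))
# Attacker: a burst of packets inside one second trips it.
flags = [detector.packet("attacker", 0.01 * t) for t in range(10)]
assert flags[-1] is True
```

In practice such a monitor would run on the drone or ground station per source address; the point of the sketch is only that flood detection reduces to bounding the per-sender request rate.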
\textit{De-Authentication Attacks:} This type of attack can make the use of drones difficult in various applications. It is a specific type of Denial of Service attack in which communication is disrupted between the client and the Wi-Fi access point. In this attack, the pilot loses control of the drone, as the attacker de-authenticates the ground pilot. An attacker can send a de-authentication frame to a wireless access point at any point in time, as encryption is not needed to send the frame, regardless of the privacy technique employed \cite{dos_encrypt}. The attacker only needs the MAC address of the drone, which is made available through tools like `Aircrack-ng' \cite{11}. In the de-authentication attack, the drone is hijacked using this tool, which specifies the MAC address of the drone. As soon as the Aircrack-ng tool is activated, the connection between the drone and the ground controller is de-authenticated. The attacker can then communicate with the drone and direct it maliciously. This attack makes the drone go out of control and leads to heavy losses. De-authentication attacks have become one of the newest concerns in the industry as e-commerce giants, such as Amazon, look towards drone-based product delivery mechanisms. One of the most famous methods for carrying out this attack is SkyJack \cite{decase}, which uses an AR Drone $2.0$ along with a Raspberry Pi and wireless adapters to hack and control drones. It sends de-authentication requests through Aircrack-ng to disconnect the target drone from its user, and then uses the node-ar-drone library to communicate with the target drone. \begin{figure}[!b] \centering \includegraphics[width=90mm]{Fig4.jpg} \caption{Man-In-The-Middle Attack On Drone \cite{fig4}.} \label{manattack} \end{figure} \subsubsection{Man-In-The-Middle Attack} A man-in-the-middle attack places an adversary between the client and the drone.
The adversary uses a device known as a Wi-Fi Pineapple \cite{11}. Fig. \ref{manattack} represents the implementation of the man-in-the-middle attack. In this attack, the flight planning software broadcasts the plan to the drone controller, which forwards it to the drone. On successfully receiving the commands from the controller, the drone sends an acknowledgement, which is intercepted by the attacker between the drone and the controller. The attacker uses the Wi-Fi Pineapple to send forged commands to the controller. Once the Pineapple is set up, it runs its recon mode, which traces out all the possible access points that the client software may be using. Once the access point (drone) is traced, it is added to the Pine-AP SSID (Service Set Identifier) pool. The forged command is forwarded to the drone, and the actions intended by the adversary are imperceptibly carried out by the drone. One example of a man-in-the-middle attack is active eavesdropping \cite{eavs}, in which the adversary connects himself with the drone controller. After getting access to the drone through its SSID, the hacker sends fake commands to the controller, making it believe that it is communicating with the drone itself \cite{newly13}. The authors of \cite{fig4} explored various vulnerabilities of UAVs; if a weak encryption scheme is used, the password becomes easy to crack, and the man-in-the-middle attack can be performed over the Wi-Fi link. The lack of secure encryption schemes throughout the chain of communication enables such attacks. The authors of \cite{link2}, from IBM, have demonstrated the ease of stealing a police quadcopter worth a thousand dollars by performing the man-in-the-middle attack. The researchers revealed that hardware worth only $40$ dollars is sufficient to perform such an attack.
This is a very clear example of an attack in which the controller is not even aware of the hacker in the middle. \subsubsection{GPS Spoofing} For communication, drones need incoming signals from GPS satellites, a two-way link between the drone and the ground station, and signals notifying the drone's presence \cite{fig5}. Fig. \ref{spoofing} shows the basic working of the GPS spoofing attack in drone communication. Spoofing can be done using multiple transmitting antennas \cite{spoof}, in which the attacker's transmitting antenna combines with the corresponding receiving antenna and transmits false signals. Normally, the drone is located by the satellite using GPS, and its coordinates are then sent to the ground controller. Drones that do not have any encryption on their onboard chips are easily tracked by the hacker, who shares a wrong location with the drone controller using a directional antenna with a narrow beam-width aimed at the drone. GPS spoofing is mainly carried out on military drones, as they are deployed at certain critical places that can provide highly confidential information about other nations. However, it is relatively difficult to spoof military drones, as they are generally equipped with strong encryption mechanisms. The spoofer can take the drone on any trajectory he/she wants without even giving the controller a hint, as fake coordinates are sent to the controller at regular intervals. This technique can also be used to reduce the velocity of the drone, making it less useful.
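A common defensive idea against the spoofing described above is a plausibility check: the drone cross-checks each new GPS fix against its last known position and maximum achievable speed, since injected coordinates often imply a physically impossible jump. The sketch below is a minimal illustration under assumed values (a local planar coordinate frame and a made-up speed limit); a real autopilot would instead fuse IMU/inertial data with the GPS solution.

```python
import math

def spoof_suspect(prev_fix, new_fix, dt, max_speed_mps=30.0):
    """Flag a GPS fix that implies a physically implausible jump.

    prev_fix/new_fix are (x, y) positions in metres in a local frame;
    dt is the time in seconds between fixes.  max_speed_mps is an
    assumed airframe limit for this sketch, not a value from the
    cited works.
    """
    dist = math.hypot(new_fix[0] - prev_fix[0], new_fix[1] - prev_fix[1])
    return dist / dt > max_speed_mps

# A 10 m displacement in 1 s (10 m/s) is plausible for a drone...
assert not spoof_suspect((0.0, 0.0), (10.0, 0.0), 1.0)
# ...but a 500 m "teleport" in 1 s suggests injected coordinates.
assert spoof_suspect((0.0, 0.0), (500.0, 0.0), 1.0)
```

A slow, gradual spoof that stays under the speed bound would evade this simple check, which is precisely why inertial cross-validation is used in stronger defenses.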
\begin{figure}[!t] \centering \includegraphics[width=90mm]{Fig5.jpg} \caption{GPS Spoofing Attack on GPS-Enabled Drone \cite{fig5}.} \label{spoofing} \end{figure} According to \cite{26}, on December $5$, $2011$, an American UAV was detected and brought down by Iranian forces near the city of Kashmar in northeastern Iran. According to American officials, the UAV was spoofed and forced to fly over Iran. The attackers hacked the UAV and injected it with wrong GPS coordinates. The incident disturbed the relationship between the two nations. The military drone was said to be using an inertial navigation system rather than GPS navigation because of the increasing number of spoofing and jamming attacks. Despite the measures taken to prevent any spoofing attack and to protect the classified information available on the drone, the Iranians claimed that they could access it, and they reverse-engineered the entire drone. The authors of \cite{spoof} have used Software Defined Radio (SDR) platforms to simulate GPS and transmit false signals to a target drone. This methodology has been used for a long time to hack drones or relay wrong information through them. Using this approach, they divert and take control of drones that depend on GPS for their flight paths. For generating fake GPS signals, the BladeRF x$40$ SDR is used, which is very versatile and costs around $420$ dollars. \subsubsection{Radar} Mono-static radar is the most traditional way of searching for objects of interest, and it can likewise be used to find drones. Radars send electromagnetic signals that can travel a long distance. These signals travel in all directions, and wherever a drone is present, the signals reflect from its surface and are received at the desired end. By further studying the signals, one can easily measure the velocity, direction, and altitude of the drone.
A problem with this technique is that sometimes the electromagnetic signals mistake obstacles like birds, airplanes, or kites for drones and transmit wrong information to the radar station, which in turn leads to false detections and losses. Radars operating in the millimeter-wave range of the electromagnetic spectrum can be used for surveillance of small drones, even under adverse weather conditions, with high accuracy and with distance-independent resolution \cite{radar_em}. Moreover, to overcome these issues, hackers have tried to use various machine learning techniques, including the SVM classifier, binary decision tree, etc., to distinguish between real drones and other objects \cite{radar}. In this technique, the detector is trained with large data sets to distinguish between any obstacle and a drone. The authors of \cite{fraun} have discussed in detail the ways in which radars can be used to detect and identify drones. Various sensors, such as optic or infrared sensors, are also used to detect or identify drones. However, these sensors have various limitations in terms of range and their reliability at night and in rain and fog. The use of radar is declared by the authors to be superior to visual optic sensors and infrared sensors because of its range. Applications of drones, such as package delivery and military operations, make the drones very vulnerable to attacks that use radars. In all such areas, detecting and identifying the drones can be a threat to the drone itself, and might also result in the loss of other task-associated resources.
If the jammer has enough power, it overrides the frequency signals, thereby blocking every type of signal that the target can be configured to use. The jamming attack is analogous to the DoS attack; the difference is that a DoS attack affects the network, service, and application layers, whereas a jamming attack uses radio signals to attack the drones and mainly affects the physical layer. Wi-Fi and Bluetooth signals can be easily jammed, even using a low-power jammer \cite{12}. The ability of a jammer is judged by its range: jammers with a higher range can block the signals of devices present up to that range. Fig. \ref{jamming} shows the implementation of the jamming attack. The attacker sends the jamming signal to the serving base station from his end, with the help of a UAV, matching the frequency of the signal used by the deployed drone. Thereafter, the signals between the drone and the backup serving base station are blocked. Hence, no data or commands can reach the server, and the deployed drone becomes non-responsive. Many drones have an auto-pilot mode as a fail-safe that gets activated once the drone loses contact with the control station. The auto-pilot mode makes it easy for the attacker to launch a GPS spoofing attack and force a landing away from the original destination by spoofing the GPS signals \cite{jam_attack}. A technology introduced in Australia a few years back would allow a hacker to commandeer a drone mid-flight and land it in a defined exclusion zone \cite{jam_Aus}. \begin{figure}[!t] \centering \includegraphics[width=90mm]{Fig6.jpg} \caption{Jamming Attacks On Drone \cite{fig6}.} \label{jamming} \end{figure} The authors of \cite{jammerinc} mentioned an incident in which GPS jamming was used to bring down $46$ drones during a show in Hong Kong. The drones started falling with great velocity.
According to the board's executive director, these professional drones were equipped with fail-safe technologies to direct them back to their take-off location, but the jamming signals were so strong that the drones dropped in mid-air. The hacker just had to point the jamming device towards the drones, and as soon as the signal interruption was detected by the drones, they started falling, as confirmed by Rex Ngan, founder of the Hong Kong Professional Unmanned Aerial Vehicles Association. \subsubsection{Wormhole Attack} UAV networks that utilize FANET or MANET are susceptible to routing-based attacks like the wormhole attack. Communication between UAVs relies not only on the information exchange between the UAVs and the ground control station, but also on that amongst the UAVs themselves. FANET uses a system of auto-configuration and self-healing to improve the reliability of the system, but this makes it vulnerable to wormhole attacks \cite{fanet_wormhole}. A wormhole attack is one of the most severe and grave attacks in MANETs. In a wormhole attack, two attackers place themselves strategically in the network in order to listen in on the communication in the drone network. One attacker records the communication at one point in the drone network and tunnels the information to the second attacker, where the recorded information is replayed. As the routing protocol algorithm looks for the nearest node to transfer the information, wormholes are placed so as to make distant nodes believe that they are each other's closest neighbors. This kind of re-routing compromises the confidentiality of sensitive information, and also enables the attacker to launch an attack from any point in the network, because the attacker practically controls all the routes discovered through the wormhole \cite{wormhole}.
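A classical family of wormhole countermeasures bounds how far a frame could physically have traveled between claimed one-hop neighbors (the "packet leash" idea): a genuine neighbor within radio range implies a tight bound on the round-trip time, while a tunneled link inflates it. The sketch below is a minimal temporal check of this kind; the radio range and processing slack are assumed illustrative values, not figures from the cited works.

```python
def wormhole_suspect(neighbor_rtt_ms, radio_range_m=250.0):
    """Temporal 'packet leash'-style check (illustrative sketch).

    A true one-hop neighbor cannot be farther than the radio range,
    so its round-trip time is bounded by 2 * range / c plus some
    processing slack.  A wormhole tunnels frames over a long
    out-of-band path, inflating the measured RTT well past the bound.
    The slack value below is an assumption for this sketch.
    """
    c = 3.0e8        # speed of light, m/s
    slack_ms = 1.0   # assumed processing/queueing allowance
    bound_ms = (2 * radio_range_m / c) * 1000 + slack_ms
    return neighbor_rtt_ms > bound_ms

# An RTT of 0.5 ms is consistent with a genuine in-range neighbor...
assert not wormhole_suspect(0.5)
# ...while 12 ms far exceeds what a 250 m radio link can explain.
assert wormhole_suspect(12.0)
```

Real leash schemes additionally authenticate timestamps or GPS positions in each packet so that the attackers cannot forge the timing information; this sketch only conveys the distance-bounding intuition.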
Moreover, wormhole attacks in a UAV Ad Hoc Network (UAANET), made of a swarm of UAVs and a ground control station, are a high-level risk, and special attention needs to be paid to this multi-node attack. Even without the knowledge of any of the cryptographic keys or hash functions, the attacker is able to affect the integrity of the network by transferring control packets and, further, capture the data traffic \cite{uaanet_wormhole}. \subsection{Potential Vulnerabilities In Different Drone Applications} This section discusses the potential vulnerabilities in different drone applications. Although drone applications are vulnerable to every security issue mentioned above, we present the main security issue faced by each specific drone application. \subsubsection{\textit{Security Vulnerabilities in Mining Drones}} Drones can help a lot in surface and underground mining \cite{mining}. Mining is a very tedious task and involves a risk to the lives of the miners, so drones can be employed to decrease the workload and the risk to human life. Drones equipped with infrared night-vision cameras can help in finding ores easily. They can also be equipped with a metal detector device to directly detect the ore even without a camera. Such applications may help in reducing the mining cost and can increase the overall efficiency. Adversaries may have various reasons and motivations to hack such drones and launch a DoS attack. Competitors may send unwanted requests to the mining drones, thereby preventing them from accurately identifying the ores and other valuables in the mines. \subsubsection{\textit{Security Vulnerabilities in Disaster Management Drones}} Drones can inform the respective emergency teams about a disaster situation in a timely manner and can help in preventing loss of life and property. They can also help in delivering essential items to the disaster victims.
Drones can also be pre-deployed in disaster-prone areas to watch for impending disasters. These may be natural disasters, like earthquakes and floods, or man-made disasters, like riots and terrorist attacks. Although there are multiple applications of drones in disaster anticipation, identification, prevention, recovery, and damage control, it is very important to deploy and use drones safely in such applications. These drones are highly vulnerable to de-authentication attacks, and both false positive and false negative messages from drones can result in major problems. Hackers may de-authenticate the drones deployed for disaster anticipation and send false positive messages to the emergency teams using their own malicious drones, wasting the time and money of government bodies. False negatives may prevent the timely delivery of disaster-related messages to the emergency teams and thereby lead to loss of life and property. \subsubsection{\textit{Security Vulnerabilities in Agriculture Drones}} Various countries, like Sierra Leone, Somalia, and India, depend on agriculture for their living \cite{agri}. Farmers need techniques that make farming easier and increase productivity. Drones can be used for pollinating seeds, which is a very important task for farmers growing their crops. The drones can carry the seeds and, given a dataset of the field, sprinkle them in the required quantity. This can decrease the workload of farmers, and the mechanized sprinkling can also save seeds, as drones can be programmed to sprinkle only the required quantity at appropriate locations. Drones can also help farmers by spraying the correct quantity of agrochemicals to kill unwanted plants, like weeds, and insects in the farms.
In all the processes discussed above, the data used by the drone must be accurate; if the drone gets hacked, the hacker can easily change the quantity of seeds or insecticides to be sprayed. Agricultural drones can be easily hijacked using a man-in-the-middle attack, as the adversary can place himself between the drone and the ground controller and manipulate the data already fed into the drone. Any unwanted change in the data can destroy the crops, resulting in a great loss to the farmers as well as to the nation. \subsubsection{\textit{Security Vulnerabilities in Military Drones}} Early drones were very noisy, and therefore it was difficult to use them for most covert military operations. Various new drones have been developed that make very little sound, which makes them difficult to detect. The invention of such silent drones has increased their usability for military purposes, as they can secretly fly to very remote locations. The cameras installed on the drones can be used to spot the enemy's location while carrying out any type of strike against the enemy. Although these drones have multiple advantages in military operations, if the drones are hacked or the communication link is spoofed, the results can be disastrous. Enemies can even hack and reprogram the drones to act against the army itself. Information leaked by spoofing the communication link can also end up revealing military plans to the enemies. Therefore, it is very important to enforce all security standards before deploying drones for such critical applications. The most famous such case is the Iran--U.S. RQ-170 incident \cite{26}, in which Iran's military used GPS spoofing to land a U.S. UAV almost undamaged, reverse-engineered the entire design of the stealth drone, and built their own Sa'egheh drones. This turned into an international incident.
\subsubsection{\textit{Security Vulnerabilities in Delivery Drones}} Due to the increasing pace of e-commerce, a lot of manpower is required, especially for the last-mile delivery of products. The use of drones can be a promising solution, as drones can deliver products in less time and with high accuracy. Drones can be used to deliver food, medicine, newspapers, and other daily necessities. The FAA approved the first NASA drone to deliver medicines in July $2015$; the UAV successfully dropped medicines to a health clinic in rural southwest Virginia \cite{7}. In $2016$, Amazon made its first successful drone delivery, delivering the package $13$ minutes after it was ordered by a customer in Cambridge \cite{8}. Amazon also launched a delivery drone named Amazon Prime Air, which can fly up to a range of $10$ miles for product delivery. These drones can take off and land autonomously, guided by GPS. Although drones can help a lot in timely and accurate delivery, hacked delivery drones can cause major chaos. A hacker may use radar to identify and capture basic delivery drones, and may redirect them to deliver packages to different destinations or to himself. Therefore, adhering to security standards is important even for delivery drones. Since a huge number of people use e-commerce, any misstep could endanger the privacy of billions of people in the future. \subsubsection{\textit{Security Vulnerabilities in Drones for Urban Planning}} Urbanization refers to the heavy movement of people from rural areas, like villages, to the cities. Drones can be highly helpful to architects in taking major decisions regarding renovations and new constructions. Drones can also help in making and analyzing plans for water management in cities. A drone can be deployed with a GIS (Geographic Information System), with which it can easily capture, analyze, and manipulate geographical data for water supply management.
It is important to have well-defined security measures for such drones as well. Various rules have to be followed for a construction to be approved as safe, and there have been various cases of buildings collapsing due to illegal construction, resulting in loss of life. If architects rely on the results submitted by drones, and the values calculated and submitted by the drones are not secured, then illegitimate actors might try to hack and manipulate the drones' functioning to get their illegal constructions approved. To conceal their identity, people with ill intent could crash such drones, leading to a loss of resources. Owners of illegal or unauthorized construction sites may also deploy jammers to prevent such drones from identifying illegal constructions. \subsection{\textcolor{black}{Classification of Drone Communication Systems}} \subsubsection{\textcolor{black}{Drone-to-Drone}} \textcolor{black}{Even though drone-to-drone (D2D) communication has not been standardized yet, it can be seen as a peer-to-peer (P2P) network \cite{d2d_security}. This makes D2D communication susceptible to various P2P attacks (DoS/DDoS, jamming attacks, etc.).} \subsubsection{\textcolor{black}{Drone-to-Infrastructure}} \textcolor{black}{Drone-to-infrastructure communication can be further classified into categories such as: \begin{enumerate} \item Drone-to-Satellite: This infrastructure is used by the drone to coordinate with GPS. Although expensive to set up and maintain, such communication systems are considered safe and secure. \item Drone-to-Network: This type of communication is useful for cellular networks (4G, 5G, etc.), and it is very important to ensure its security when used. \item Drone-to-Ground Station: This infrastructure is based on common wireless technologies like Bluetooth and Wi-Fi. These are public, and hence not secure, making them very susceptible to man-in-the-middle attacks and eavesdropping.
\end{enumerate}} \par \textbf{Summary:} This section discusses the security issues faced by existing drone applications. Many attacks that involve tampering with the data in the drones, such as DoS attacks, de-authentication attacks, and man-in-the-middle attacks, are covered, and several attacks that target the position of the drones, such as GPS spoofing, jamming, and radar-based tracking, are also discussed. In the next section, we discuss the overview and fundamentals of various emerging technologies, such as blockchain, SDN, ML, and fog computing, that can help in preventing the above-mentioned attacks on drone applications. \textcolor{black}{Furthermore, we give a brief classification of drone communication systems.} \section{Overview and Fundamentals of Various Emerging technologies for Secure Drone Communication} \label{sec3}In this section, we discuss the four main emerging technologies that are being widely used and explored for making drone communication fast, reliable, and secure: blockchain, ML, SDN, and fog computing. \begin{figure*}[!t] \centering \includegraphics[width=180mm]{Fig7.jpg} \caption{Working process of blockchain.} \label{bcworking} \end{figure*} \subsection{Drone Communication Architecture using Blockchain} According to the FAA, $1.3$ million drones were registered with the FAA in $2019$, and the number was expected to increase to $7$ million by the end of $2020$ \cite{dronenumber}. With the rapid increase in the number of drones, the data generated by them is also increasing rapidly, which has raised security concerns about the data. Researchers state that blockchain can contribute another layer of security to drone communication, preventing data retrieval and tampering by unauthorized persons \cite{cov}.
Furthermore, the data on the blockchain is distributed, so it becomes very difficult for an adversary to hack a single system and get control over the complete data in the network. Figure \ref{bcworking} shows the basic working process of blockchain technology. \subsubsection*{\textit{Motivation For Using Blockchain For Drone Communication Security}} A blockchain is a growing chain of blocks linked to each other using cryptographic hash functions. Drone applications are becoming highly popular and are gradually being used in almost all domains and spheres of life. With the increasing number of drone applications, it is imperative to keep the transactions between the drones and other users secure, cost-effective, and privacy-preserving. Blockchain technology is a highly promising solution that can be used to deploy real-time drone applications. Once a transaction is recorded on the blockchain, it remains immutable, and no adversary can tamper with the records \cite{blocksolauth}. Furthermore, the use of smart contracts can greatly help in performing transactions between different parties in a secure and cost-effective manner. Depending on the nature of the application, different kinds of blockchain networks can be created, such as public, private, consortium, and hybrid. Moreover, there is a vast variety of consensus algorithms used in blockchain networks, including PoW, PoS, PoB, and DAG-based schemes. All these features of blockchain can be leveraged to make drone communication secure, reliable, and cost-effective. In Section \ref{sec5}, we discuss in detail the various blockchain-based models to secure drone communication.
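To make the immutability argument concrete, the following minimal sketch (plain Python, our own illustration rather than any cited implementation) chains blocks by hashing each block's contents together with the previous block's hash, so tampering with any record invalidates every later link:

```python
import hashlib
import json

def block_hash(body):
    # Deterministic SHA-256 over the block body; includes prev_hash,
    # which is the link that makes tampering evident downstream.
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "prev_hash": prev, "tx": transactions}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)
    return chain

def chain_valid(chain):
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False            # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False            # link to predecessor was broken
    return True
```

Editing a transaction in any earlier block changes that block's recomputed hash, so `chain_valid` fails; this is the property the models surveyed below build on, on top of distributed replication and consensus.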
\begin{figure}[!b] \centering \includegraphics[width=90mm]{Fig8.jpg} \caption{Basic SDN Architecture \cite{fig11}.} \label{sdn_arch} \end{figure} \subsection{Drone Communication Architecture Using SDN} Software-Defined Networking (SDN) is a networking architecture in which the network is controlled, or programmed, centrally using software applications. SDN helps in the consistent management of the network, as everything in the network is centrally programmed. The architecture of a typical SDN-based drone communication network is shown in Fig. \ref{sdn_arch}. The figure illustrates a simple use case of SDN in drone applications: it shows the steps involved in transmitting data from the drones in the data plane/layer to the control plane for processing and then getting back the required output. In a typical SDN-based drone communication network, each drone in the network behaves as an individual switch. The application plane of the SDN sits on a centralized controller and is responsible for implementing all high-level functions to be performed by the network as a whole. The centralized controller also houses the control plane, which commands and controls data flows between the drones. The data plane consists of the drones themselves, which respond to commands from the controller. A variety of protocols and standards exist for performing the various functions in the network. Because the SDN architecture decouples the control plane from the data plane, protocols in different planes can be implemented independently. This offers a considerable degree of freedom in the design of an SDN-based drone communication network. The authors of \cite{rev3} review various 5G techniques based on UAV platforms using network-layer techniques like software-defined UAV networks.
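The control/data-plane decoupling described above can be sketched in a few lines. This is a toy model of our own (class and rule names are illustrative, not from any SDN standard): drones act as switches that consult a central controller on a flow-table miss, and unknown flows default to drop, which is one simple way a controller can police traffic.

```python
class Controller:
    """Control plane: decides routes and answers drones' queries."""
    def __init__(self):
        self.policy = {}                      # (src, dst) -> next hop

    def set_route(self, src, dst, next_hop):
        self.policy[(src, dst)] = next_hop

    def query(self, src, dst):
        # Flows with no installed rule are dropped (default-deny).
        return self.policy.get((src, dst), "drop")

class DroneSwitch:
    """Data plane: caches flow rules, asks the controller on a miss."""
    def __init__(self, name, controller):
        self.name = name
        self.controller = controller
        self.flow_table = {}

    def forward(self, src, dst):
        if (src, dst) not in self.flow_table:        # table miss
            self.flow_table[(src, dst)] = self.controller.query(src, dst)
        return self.flow_table[(src, dst)]
```

Because forwarding decisions live only in the controller, changing network-wide policy means updating one component rather than reprogramming every drone, which is the central appeal of SDN here.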
\subsubsection*{\textit{Motivation For Using SDN For Drone Communication Security}} As discussed above, SDN enables the network to be centrally controlled, which makes the network reliable. Moreover, SDN's decoupled data, control, and application layers make the network control directly programmable. Many emerging drone applications rely on real-time video streaming, which SDN can support with better QoS, as traffic is automatically controlled by the SDN controller. Drones are highly resource-constrained, and many of the resulting vulnerabilities can be mitigated by SDN, as the controller can keep a close check on the data traffic. These properties of SDN help in maintaining the overall security of the network. Section \ref{sec6} discusses the SDN-based DCN models that can help in making drone communication secure from different types of attacks. \subsection{Drone Communication Architecture Using Machine-Learning} \begin{figure}[!t] \centering \includegraphics[width=90mm]{Fig9.jpg} \caption{Basic ML Techniques Architecture \cite{mlfigure}.} \label{mlfigure} \end{figure} Machine learning is a technique that provides a system with the ability to learn and improve automatically from past experience without being explicitly programmed. Once the data is fed in, ML learns and predicts the output automatically without much human intervention. ML algorithms need a large amount of training data to make accurate predictions. ML algorithms are broadly divided into two categories: supervised machine learning (the training dataset is labeled) and unsupervised machine learning (the training data is unlabeled). Figure \ref{mlfigure} shows the basic architecture of ML-based drone communication applications and illustrates the various ways in which ML techniques can assist in making drone communication secure.
Several ML algorithms, such as CNNs (Convolutional Neural Networks), SVMs (Support-Vector Machines), ANNs (Artificial Neural Networks), and RNNs (Recurrent Neural Networks), can be used for securing drone communication. ML algorithms such as LSTM (Long Short-Term Memory) can also be used for detecting faults in drone communication, after which recovery methods are sent to the drone for its safety. A classification algorithm can be applied to detect DoS attacks and other attacks that use fake or corrupted data packets to paralyze the network: the data packets are classified as either benign or affected, which can prevent the network from being hacked. These diverse applications of ML can help achieve highly secure drone communication. \subsubsection*{\textit{Motivation For Using ML For Drone Communication Security}} ML algorithms learn from the training data and improve themselves, achieving better results and higher accuracy without human intervention, which is a huge advantage. ML algorithms can be deployed for detecting the presence of malicious drones in the network and can help in preventing attacks such as man-in-the-middle attacks \cite{mitmml} and spoofing attacks \cite{mlspoof}. Such algorithms keep improving with increasing experience and provide better and more accurate results. The models can also be trained to automatically detect and recover from faults using neural networks and LSTMs. Moreover, ML algorithms can handle multi-dimensional and diverse data. All these properties make such algorithms highly suitable for use in drone applications. In Section \ref{sec7}, we discuss in detail the use of various ML techniques to secure drone communication. \subsection{Drone Communication Architecture Using Fog Computing} The concept of fog computing was first introduced by Cisco in $2014$ \cite{ff}. Fog computing is best viewed as a dimension that extends the use and capabilities of the cloud.
Fog computing is not a substitute for cloud computing; rather, it is a complement to it. The fog layer is a stratum between the edge devices and the cloud. Deploying cloud servers is difficult, as they are very costly to establish, so a new concept entered the market in $2014$ to minimize the load on the cloud. Fog is a smaller version of the cloud that can be placed near the end devices. Fig. \ref{foglayer} shows the \textcolor{black}{layered architecture of cloud, edge, and fog computing combined}. In fog computing, whenever an end device requests to fetch or upload data, the mobile network connects it to the nearest available fog node, where the data can be easily fetched and stored. The fog uses a LAN (local area network), whereas accessing cloud facilities requires reaching the internet through a WAN (wide area network), which incurs more time as well as more cost. Fog computing is therefore very helpful in many respects, including cost, time, and security. A fog domain is made by combining multiple fog nodes, which can be switches, gateways, routers, smartphones, or PCs, to form the fog stratum. \begin{figure}[!t] \centering \includegraphics[scale=0.9]{Fig10.JPG} \caption{\textcolor{black}{Layered Architecture of Cloud, Edge, and Fog Computing \cite{fig12}.}} \label{foglayer} \end{figure} \subsubsection*{\textit{Motivation For Using Fog Computing For Drone Communication Security}} Fog computing is a paradigm that can help in processing and accessing large amounts of data rapidly, efficiently, and with the least possible latency. It is a layer between the end devices and the cloud servers. Fog computing helps in increasing QoS and QoE, as the retrieval time of the data is much lower in the fog. Fog computing also reduces the data load on the cloud servers and makes data dissemination cost-effective as well as reliable.
The fog is a decentralized paradigm in which the data is stored across multiple fog nodes. This is advantageous compared to storing the data in one place: with no central entity handling the entire dataset, the fog is less vulnerable. It also protects the cloud server from being affected, as a vulnerability can be detected at an earlier stage. These aspects make fog computing a very important technology for securing drone communication. The fog-based DCN methods and models that help make drone communication secure are discussed in Section \ref{sec8}. \begin{figure*}[!t] \centering \includegraphics[width=180mm]{Fig11.JPG} \caption{Different drone applications using various security techniques.} \label{drones} \end{figure*} Fig. \ref{drones} shows various drone applications in different domains that have used \textcolor{black}{blockchain \cite{BDI4}, SDN \cite{intent}, ML \cite{ML1}, or fog computing \cite{fogres} for securing drone communication.} The rest of this paper discusses in detail the usage and benefits of these technologies in making drone communication more secure. \section{Applications of blockchain for Drone Communication Security} \label{sec5} Drone technology has existed for almost a century, but in the recent past it has gained importance in fields such as agriculture, security, wildlife conservation, and delivery. Blockchain technology is said to have the potential to improve data security and transparency across multiple domains \cite{newly3,newly4,newly5,newly7}. In this section, we elaborate on the blockchain-based models and mechanisms that can be used to make drone communication secure. Various non-blockchain technologies have also been proposed to increase drone security; however, such solutions have issues that can be resolved using the features of blockchain technology.
A model proposed in \cite{non1} helps in maintaining data integrity by using a sensor Physical Unclonable Function (PUF). This method provides data integrity but fails to maintain self-trust and data provenance. Another non-blockchain model, for preventing wormhole attacks, has been discussed in \cite{worm}; the authors use a label-based method for detecting the attack. That model only addresses wormhole attacks and remains vulnerable to other attacks that blockchain could prevent, such as DDoS attacks and GPS spoofing attacks. We further discuss the list of specific security issues that can be resolved and prevented using blockchain. Fig. \ref{blockchainindroneapplications} shows the various security applications of blockchain in a network of drones. \subsection{ \textit{\textbf{\textcolor{black}{Air Traffic Management}}}} UAVs have recently gained huge popularity. With a large number of drones, their paths may cross one another and sudden collisions may occur. Therefore, it is necessary to devise a solution and a platform to maintain optimal paths for air traffic management \cite{newly10}. Such traffic management for drones is different from traditional road traffic management, as there is no well-defined path of travel for each drone and the coordinates need to be maintained in $3$ dimensions. Blockchain and IoT have many advantages over traditional internet-based systems, because internet-based systems are more prone to cyber-attacks that would degrade or disrupt the functioning of the drones. Traditionally, GPS coordinates are used for UAV localization and avoiding traffic violations. However, such approaches are difficult to apply to complex paths due to pilot errors and intrusion attacks. The authors of \cite{NBTM} suggest that a neural-blockchain based transport model (NBTM) can significantly help with the problem of air traffic violations.
This model involves the use of $3$ different blockchain networks to form a master blockchain, taking as input parameters the reliability of connections and the reliability of flyby time. The model also generates feedback on the initial inputs while iterating towards an optimal solution. Forward propagation is done between Blockchains $A$ and $B$, and backward propagation between Blockchains $C$ and $D$. The primary components of NBTM are the blockchain and neural networks. The neural model is a $4$-layer network with $B$ and $C$ as intermediate layers. The output of the neural network model is used to form the optimal path for the UAV to travel. This model does not employ any separate mechanism for security, but simply depends on the basic principles of the blockchain. The simulation results demonstrate that the proposed neural-blockchain enhances the reliability (the statistical parameter for evaluating consistency) of the network with a lower failure rate. Due to the availability of a feedback mechanism, the model reduces the computation power demand, resulting in lower complexity, and yields higher efficiency when compared with the model proposed in \cite{NBTM2}. In \cite{NBTM2}, the authors propose a model for reducing the number of transactions required for updating the ledger in the Internet of Vehicles (IoV), so that less latency is experienced in maintaining the air traffic, whereas in \cite{NBTM}, the feedback mechanism gives better results. However, the dynamic partitioning between a centralized system and the blockchain-based system is still a challenge to be worked upon. \textit{Preventing mid-air collisions: } Air Traffic Control (ATC) needs improvement in preventing mid-air collisions due to the increasing number of UAVs. The Las Vegas incident would not have happened had proper precautions been taken \cite{2}.
Due to resource constraints and heavy traffic, UAVs are subject to transmission delays in communication with the ground station, unlike high-speed LAN serial communication, and therefore an innovative method to improve ATC and prevent mid-air collisions is required. The authors of \cite{BDI3} propose a blockchain-based solution for ATC management to prevent mid-air collisions. Similar to \cite{NBTM} and \cite{BD2}, the authors of \cite{BDI3} focus on the physical protection of drones in cases of high air traffic. However, different from \cite{NBTM} and \cite{BD2}, the authors of \cite{BDI3} focus more on exploiting the peer-to-peer transaction feature of blockchain technology than on tamper-less data storage. If the path that the drone has to traverse is defined and stored in tamper-less storage, then the drone can be made secure, as no adversary can change the path of the drone for its own benefit. Because of drones' agility, collision avoidance algorithms can also be employed to prevent mid-air collisions. A fast obstacle collision avoidance algorithm for UAVs has been proposed by the authors of \cite{midair_algo}. Using this algorithm, a drone can avoid static and dynamic obstacles while still being able to return to its initial trajectory. Mid-air collisions are dangerous because, after a collision, the drone could fall to the ground and harm human life. Similarly, mid-air collisions pose threats to airplanes, as any collision with an airplane would lead to loss of lives as well. In the proposed model, blockchain is used to store UAVnet data comprising UAV IDs, flight route sheets, flying schedules, and sensor data. The computing UAVs are divided into $m$ groups, each containing $n$ UAVs. Out of those $m$ groups, one is used to store the information broadcast from other UAVs and acts as the actual blockchain participant.
The other computing UAVs simulate the possible paths for the idle UAV to reach its destination. The optimal path that would limit mid-air collisions is chosen by the Proof of Graph (PoG) consensus mechanism, which is based on the Simplified Memory Bounded A* (SMA*) algorithm \cite{bound}. The authors compare the SMA* algorithm with the A* and Dijkstra algorithms. Although Dijkstra's algorithm can find the shortest path, it must explore all possible paths, resulting in high complexity. The A* algorithm uses exponential memory, whereas SMA* uses bounded memory; with exponential memory, adding data increases the computation time exponentially \cite{exp}, while with bounded memory, the computation time depends on the amount of memory the data needs \cite{bounded}. This bounded-memory property makes the algorithm memory-efficient and reduces the required computation time. \begin{figure*}[!t] \centering \includegraphics[width=180mm]{Fig12.pdf} \caption{\textcolor{black}{Security applications of blockchain in a network of UAVs.}} \label{blockchainindroneapplications} \end{figure*} \subsection{ \textit{\textbf{\textcolor{black}{Geo-fencing System}}}} Geofencing can be defined as the virtual fencing or boundary created to prevent UAVs from entering a sensitive area such as a prison, an airport, or private property \cite{geo}. It is similar to road networks where some vehicles are not allowed to enter certain zones. However, creating fences is more complicated in the IoD due to the lack of well-defined pathways and motion being in $3$ dimensions. Traditionally, DJI's GEO System was used to mark where it is safe for a drone to fly and where the authorities may raise concerns about its flight \cite{DJI}. However, such systems may not be suitable for all drones, as there are certain prohibited zones where drones cannot fly, such as near airports \cite{airport}.
These systems return an optimal path that cannot always be used for drones, as the proposed path may lie in a prohibited zone. Blockchain can be effective in maintaining such restrictions, based on the $3$-dimensional coordinate system, in real time. The pioneering work on blockchain-based flight space allocation in a decentralized fashion is \cite{BD2}. Unlike \cite{NBTM}, the authors of \cite{BD2} focus mainly on preventing the entry of drones into restricted areas rather than performing complete traffic management. In the proposed model, the UAV adds its request for air space to the blockchain network during its flight. The trajectories are then added in such a way that they do not conflict with any restricted zone defined through virtual fencing. Blockchain can maintain the constraint that the chosen optimal path must not lie in a prohibited flying zone. This also mandates that the air paths of multiple UAVs do not cross one another, which would otherwise lead to a crash. The proposed model is better than the baseline scheme in \cite{NBTM}, as it uses blockchain both for geo-fencing and for avoiding traffic violations. The benefits of using blockchain to maintain geo-fencing include its immutability and safety from cyber attacks. However, as transactions continue, records grow, and block sizes increase, eventually exceeding any limits set, each transaction needs more time to be processed. Thus there is a need for blockchains with a higher TPS (Transactions per Second) rate to avoid congestion. Some newer blockchain-based structures that offer transaction rates as high as $3500$ TPS have been proposed in the literature \cite{hedera}.
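The core geo-fence constraint, that no waypoint of a submitted trajectory may lie inside a restricted volume, is simple to state in code. The sketch below is our own simplification (restricted zones modeled as axis-aligned 3-D boxes, with made-up coordinates), not the representation used in \cite{BD2}:

```python
# A no-fly zone is an axis-aligned box: (x_min, x_max, y_min, y_max, z_min, z_max).
def inside(zone, point):
    x0, x1, y0, y1, z0, z1 = zone
    x, y, z = point
    return x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1

def plan_allowed(waypoints, no_fly_zones):
    """Reject a flight plan if any waypoint enters any restricted box."""
    return not any(inside(z, p) for z in no_fly_zones for p in waypoints)
```

In a blockchain-backed scheme, such a check would run as validation logic before a trajectory request is accepted into the ledger, so an approved (and immutable) plan is guaranteed to avoid the fenced volumes; real systems would use polygonal zones and check the path segments between waypoints, not just the waypoints.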
\subsection{ \textit{\textbf{\textcolor{black}{Maintaining data integrity}}}} The immense data involved, including geographical addresses and the data captured by the sensors on the drones, can collectively be used to profile an individual and can thus lead to privacy leakage \cite{IoD}. The Iranian government claimed to be able to access all the information from the American UAV and reverse-engineer the entire drone \cite{26}. As drones have limited computational resources, data processing can be done in the cloud. In traditional cloud-based solutions, Zone Service Providers (ZSPs) provide for the navigation of the drones and the feedback systems between drones. However, ZSPs are vulnerable to attacks due to high latency and a high false rate \cite{IoD}. An efficient IoD using blockchain technology is proposed in \cite{BDI4}. Different from \cite{NBTM} and \cite{BD2}, the focus of the authors of \cite{BDI4} is on securing the important data being sent by drones, rather than physically preventing them from colliding or entering restricted areas. Tamper-proof and transparent storage of data are the main features of blockchain technology exploited by the authors of \cite{BDI4}. In the proposed algorithm, the drone first enrolls itself in the blockchain ledger for storing its data, and a unique ID is assigned to it. The data is then hashed to maintain integrity and is uploaded to the blockchain network via the controller. After the data is successfully uploaded, an acknowledgment is sent to the drone. The data records are organized into a Merkle hash tree \cite{merkle}. Furthermore, data auditing is done in the cloud, a crucial step as it helps to detect any anomaly in the data. The proposed model was analyzed for response time with varying numbers of drones.
The simulation results demonstrate that the response time increases roughly linearly, from about $400$ms for $100$ drones to about $550$ms for $1000$ drones, thus providing good scalability. The average response latency is also fairly stable, varying from $350$ms to $1000$ms for a $100$-drone network. The proposed model also makes the network less vulnerable to attacks like DDoS attacks and data losses, while making it more accountable. One major challenge is the time delay in the drones due to proof of work: mining a block requires considerable computation time, which results in latency \cite{mine}. Given the hardware constraints of drones, lightweight cryptography and DAG-chain-based consensus algorithms can be developed \cite{my3}. \textit{\textbf{Secure Data Dissemination: }} Data dissemination is the process of distributing data/information to its end-users. The authors of \cite{BDI2} propose a blockchain-based algorithm that enables secure data dissemination in the IoD (Internet of Drones). Although the model presented in \cite{BDI2} is based on blockchain, it is not designed to protect the localization information, as done in \cite{local}. The authors of \cite{BDI2} use blockchain technology only to secure the data transfer between the drones and drone controllers. A combination of the approaches discussed in \cite{local} and \cite{BDI2} would be a promising solution for securing both localization and data dissemination. The proposed model in \cite{BDI2} is designed using three layers, namely, the user layer, the infrastructure layer, and the IoD layer. In the user layer, blockchain technology is used for the verification and the security of each transaction made in the model. The second layer, the infrastructure layer, consists of all the base stations, which ensure the connectivity between the drone controller, the drone, and the end-users.
The IoD layer consists of the drones that capture real-time data and communicate amongst themselves to make certain decisions. Two types of nodes are considered in this model: (i) forger nodes and (ii) normal nodes. The forger node is used for creating new blocks in the blockchain, whereas the normal nodes are used for the verification of the blocks in the blockchain. This model works in three stages. First, the forger node is selected, and the remaining nodes are declared normal nodes. After the forger node is selected, the hash value is calculated by the forger node using the PoS (Proof of Stake) consensus algorithm \cite{PoS}. The other nodes validate the hash value broadcast by the forger node by comparing it with the hash value generated using the Merkle Hash tree. If both hash values match, the block is validated and added to the main chain. The forger node then encrypts the data packets and sends the request to the public distributed blockchain. When the request is accepted, the forger node computes the digital signature of the data packets with its private key and broadcasts it to the public blockchain. The data is stored in the blockchain and can be accessed only using the decryption key, so attacks like spoofing or DoS attacks can be prevented using this algorithm. The authors evaluate the security of the proposed model in terms of communication cost and time. The simulation results demonstrate that the proposed model provides data authentication, authorization, and accountability, which are not offered by other state-of-the-art related works. Another work related to securing data dissemination in IoD is \cite{IoD}. The authors of \cite{IoD} use identity-based encryption techniques for secure data dissemination. Such techniques can provide data integrity and identity anonymity only, and fail to provide authentication, authorization, and accountability of nodes in the network.
Also, there is no proposal for data verification and validation in \cite{IoD} as compared to the blockchain-based approach used in \cite{BDI2}. \begin{table*}[] \centering \caption{A summary of advantages and disadvantages of major applications of blockchain for drone communication security.} \begin{tabular}{|l|l|l|l|} \hline \rowcolor[gray]{0.8} \begin{tabular}[c]{@{}l@{}}Major\\ approaches\end{tabular} & Advantages & Disadvantages & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Benefits over}\\ \textcolor{black}{traditional approaches} \end{tabular}\\ \hline \cite{NBTM} & \begin{tabular}[c]{@{}l@{}}• Significantly helps in optimizing the \\problem of air traffic violation\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Does not support dynamic\\ partitioning of UAVs into groups\end{tabular} & \textcolor{black}{Reduced Latency}\\ \hline \cite{BD2} & \begin{tabular}[c]{@{}l@{}}• Prevents the UAVs from entering into\\ any restricted zone through virtual fencing\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Can support only a limited\\ number of transactions per minute\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Immutability, Safety}\\ \textcolor{black}{from cyber attacks} \end{tabular}\\ \hline \cite{BDI4} & \begin{tabular}[c]{@{}l@{}}• Supports high scalability with stable\\ response time latency\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Significant latency in the data \\ transmission\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textcolor{black}{Scalability, Data }\\ \textcolor{black}{integrity} \end{tabular}\\ \hline \cite{BDI3} & \begin{tabular}[c]{@{}l@{}}• Supports high computation speed and\\ memory efficiency with SMA*\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Does not support dynamic\\ partitioning of UAVs into groups \end{tabular} & \begin{tabular}[c]{@{}l@{}}\textcolor{black}{Cost-effective,}\\ \textcolor{black}{Scalability} \end{tabular}\\ \hline \cite{local} & \begin{tabular}[c]{@{}l@{}}• The chances of localization errors are\\ reduced to
$1/4^{th}$ \end{tabular} & • Susceptible to $51$\% attack & \textcolor{black}{Localization} \\ \hline \cite{BDI2} & \begin{tabular}[c]{@{}l@{}}• Proof-of-stake consensus algorithm is\\ used to significantly reduce the\\ computation time and cost\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Regulatory control and\\ governance features are missing\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textcolor{black}{Authentication, Authoriz-}\\ \textcolor{black}{ation, Accountability} \end{tabular} \\ \hline \end{tabular} \label{bloadv} \end{table*} \begin{table*}[] \centering \caption{Applications of Blockchain for drone communication security. } \begin{tabular}{|p{0.033\linewidth}|p{0.11\linewidth}|p{0.24\linewidth}|p{0.11\linewidth}|p{0.21\linewidth}|p{0.15\linewidth}|} \hline \rowcolor[gray]{0.7} Ref. & Attack & Mechanism & \begin{tabular}[c]{@{}l@{}}Blockchain\\ Feature used\end{tabular} & Major achievement & Open issues \\ \hline \cite{NBTM} & \begin{tabular}[c]{@{}l@{}}GPS Spoofing,\\Jamming\\attack\end{tabular} & \begin{tabular}[c]{@{}l@{}}The optimal path generated\\by the neural network is\\stored in the blockchain\end{tabular} & \begin{tabular}[c]{@{}l@{}}Peer-to-Peer\\model \end{tabular} & \begin{tabular}[c]{@{}l@{}}The model gives the best\\ path with a maximum\\ failure rate of $25.8$\%\end{tabular} & \begin{tabular}[c]{@{}l@{}}Making the model\\ efficient for a large\\ number of UAVs\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{BD2} & \begin{tabular}[c]{@{}l@{}}GPS spoofing, \\ DoS attacks,\\ DDoS attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The blockchain assigns \\ trajectory to the UAVs such \\ that no route clashes with\\ the other UAVs' routes.\end{tabular} & \begin{tabular}[c]{@{}l@{}}Tamper-proof\\data,\\ Peer-to-Peer \\ network\end{tabular} & \begin{tabular}[c]{@{}l@{}}A collision-free trajectory\\ is proposed such that the\\ UAV does not enter the\\ geo-fenced zone\end{tabular} & \begin{tabular}[c]{@{}l@{}}Model has to be\\ trained for handling\\
large number of\\ UAVs\end{tabular} \\ \hline \cite{BDI4} & \begin{tabular}[c]{@{}l@{}}DoS attacks,\\ DDoS attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The data generated from \\ the sensors is stored in the \\ Merkle Hash tree which\\ ensures data integrity\end{tabular} & \begin{tabular}[c]{@{}l@{}}Distributed-\\ database,\\ Public key\\ infrastructure\end{tabular} & \begin{tabular}[c]{@{}l@{}}The average response\\ time for data transmission\\ with $1000$ drones is\\ $550$ms\end{tabular} & \begin{tabular}[c]{@{}l@{}}Implementing\\ private blockchain\\ can make the\\ system more secure\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{BDI3} & \begin{tabular}[c]{@{}l@{}}Man-in-the-\\ middle attacks,\\ GPS spoofing,\\ DoS attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The details of the UAV stored \\ in the blockchain are used to \\ calculate the optimal path\\ using the SMA* algorithm\end{tabular} & \begin{tabular}[c]{@{}l@{}}Distributed\\storage,\\ Tamper-free \\ transactions\end{tabular} & \begin{tabular}[c]{@{}l@{}}SMA* gives the optimal\\ path in very little time\\ and it uses bounded\\ memory as well\end{tabular} & \begin{tabular}[c]{@{}l@{}}Efficient dynamic\\ partitioning of the\\ UAV groups is \\ needed\end{tabular} \\ \hline \cite{local} & \begin{tabular}[c]{@{}l@{}}DoS attacks,\\ Wormhole-\\ attack,\\ GPS spoofing\end{tabular} & \begin{tabular}[c]{@{}l@{}}The co-ordinates of the drones\\stored in the blockchain are\\made available to the other\\ drones after verification\end{tabular} & \begin{tabular}[c]{@{}l@{}}Decentralized\\ network,\\ Distributed-\\ database\end{tabular} & \begin{tabular}[c]{@{}l@{}}The localization errors are \\ reduced by $75$\%\end{tabular} & \begin{tabular}[c]{@{}l@{}}Model is still\\ susceptible to\\ $51$\% attack\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{BDI2} & \begin{tabular}[c]{@{}l@{}}Eavesdropping\\ attacks,\\ GPS spoofing,\\ DoS attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}If the hash value generated
by\\ the Merkle Hash tree and the\\ computed hash value by the\\ forger node are the same, then\\ only the data is transmitted\end{tabular} & \begin{tabular}[c]{@{}l@{}}Data integrity,\\ Distributed- \\ database,\\ Decentralized- \\ network\end{tabular} & \begin{tabular}[c]{@{}l@{}}The total computation\\ time required for data \\ dissemination is computed\\ to be $0.046$ms.\end{tabular} & \begin{tabular}[c]{@{}l@{}}Implementing\\ private blockchain\\ can make the\\ system more secure\end{tabular} \\ \hline \cite{blotab1} & \begin{tabular}[c]{@{}l@{}}DoS attacks,\\ DDoS attacks,\\ GPS spoofing\end{tabular} & \begin{tabular}[c]{@{}l@{}}The data generated by the \\ drone is stored in the \\ blockchain and is transformed\\ into the Merkle Hash tree\\ to maintain the data integrity\end{tabular} & \begin{tabular}[c]{@{}l@{}}Decentralized-\\ network,\\ Peer-to-Peer\\ model,\\ Immutability\end{tabular} & \begin{tabular}[c]{@{}l@{}}The results show that only\\ the validated drones\\ were allowed to transfer\\ the data\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model can be\\ further enhanced\\ for multi UAV\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{blotab2} & \begin{tabular}[c]{@{}l@{}}Man-in-the-\\ middle attack,\\ DoS attack,\\ DDoS attack\end{tabular} & \begin{tabular}[c]{@{}l@{}}The swarms of drones need to\\ register themselves on the \\ blockchain using their public\\ key and after the validation,\\ the data is added to the server\end{tabular} & \begin{tabular}[c]{@{}l@{}}Distributed-\\ Database,\\ Immutability,\\ Public Key-\\ infrastructure\end{tabular} & \begin{tabular}[c]{@{}l@{}}The data acquisition was\\ done successfully and with\\ high efficiency\\\end{tabular} & \begin{tabular}[c]{@{}l@{}}Blockchain with\\ higher TPS can be\\ incorporated in the\\ model for making\\ it more efficient\end{tabular} \\ \hline \cite{blotab3} & \begin{tabular}[c]{@{}l@{}}DoS attacks,\\ DDoS attacks,\\ Man-in-the-\\ middle attacks\end{tabular} &
\begin{tabular}[c]{@{}l@{}}The trust is lost from\\ the intruding UAV when\\several intruder events\\are detected\end{tabular} & \begin{tabular}[c]{@{}l@{}}Peer-to-Peer \\ model,\\ Decentralized-\\ network\end{tabular} & \begin{tabular}[c]{@{}l@{}}$90$\% of the UAVs were\\ able to support the data\\ about the event and none\\ detected the intruder event\end{tabular} & \begin{tabular}[c]{@{}l@{}}Work on detecting\\ the compromised\\ UAV successfully\\ is required\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{blotab4} & \begin{tabular}[c]{@{}l@{}}GPS spoofing,\\ Man-in-the-\\ middle attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}A consumer makes an order \\ according to which a smart\\ contract is generated. Any\\ free UAV can accept the order\\ and the client details are sent.\end{tabular} & \begin{tabular}[c]{@{}l@{}}Peer-to-Peer \\ model,\\ Decentralized-\\ network,\\ Smart Contract\end{tabular} & \begin{tabular}[c]{@{}l@{}}The blockchain and smart\\ contracts are proved to be \\ successful in organizing a\\ secure communication\\ between the UAVs\end{tabular} & \begin{tabular}[c]{@{}l@{}}Implementing\\ private blockchain\\ can make the\\ system more secure\end{tabular} \\ \hline \cite{blotab5} & \begin{tabular}[c]{@{}l@{}}DoS attacks,\\ DDoS attacks,\\ Hijacking\end{tabular} & \begin{tabular}[c]{@{}l@{}}The interest-key-content\\ binding (IKCB) is stored in the\\ blockchain which is compared\\ by the router and the poisoned\\ data is discarded\end{tabular} & \begin{tabular}[c]{@{}l@{}}Tamper-\\resistant\\ledger,\\ Consensus-\\ algorithm\end{tabular} & \begin{tabular}[c]{@{}l@{}}The proposed model gives\\ better results when\\ compared with the\\ Interest-key binding as it\\ has lower system overhead\\ and the latency is reduced\end{tabular} & \begin{tabular}[c]{@{}l@{}}The forwarding\\ technologies can\\ be optimized to\\ make the model\\ efficient\end{tabular} \\ \hline \end{tabular} \label{relblo} \end{table*} \subsection{ 
\textit{\textbf{\textcolor{black}{Secure Localization}}}} A swarm of drones can be deployed to automatically take actions to achieve a specific goal cooperatively. In such scenarios, the exact location coordinates of the drones are critical information for completion of the mission. However, generic localization algorithms are vulnerable to attacks, as an adversary can easily inject false location coordinates. A pioneering work on secure localization in the Internet of Drones using blockchain is presented in \cite{local}. The authors propose a blockchain-based localization algorithm for securing drones. The three major features of the algorithm are: (i) decentralization: no central entity is needed to maintain the localization coordinates of the drones, (ii) peer-to-peer communication between the drones, and (iii) no need for a central trust node to manage the security of the data and the coordinates exchanged between the drones. Another non-blockchain-based approach for securing localization in IoD is discussed in \cite{worm}. This method focuses only on preventing the wormhole attack. The approach in \cite{worm}, being centralized in nature, cannot be used to prevent other generic attacks such as DoS. In the proposed algorithm, the drones need to cooperate with various anchor drones that know their exact coordinates. The coordinates of the anchors are sent to the requesting drone using the private key of the anchors. Next, the coordinates of the anchor drones are added to the distributed blockchain ledger after verification. The requesting drone first requests the coordinates of the anchor drones present at a $1$-hop distance. If the requesting drone receives the location from at least three anchor drones in the $1$-hop neighborhood, the distance between the requesting drone and each anchor drone is calculated using the Received Signal Strength Indicator (RSSI) method \cite{RSSI}.
However, if the requesting drone does not receive a minimum of three responses from the neighboring anchor drones, the distance between the requesting drone and the anchor drones is calculated using the DV-Hop (Distance Vector Hop) method \cite{DVHOP}. The DV-Hop method works by first computing the average hop distance and then multiplying it by the number of hops. The authors also calculate the change in localization accuracy as the number of malicious nodes in the network increases. The accuracy of the proposed algorithm is shown to be better than that of generic localization algorithms \cite{genloc}. Simulation results demonstrate that the localization errors are reduced to $1/4^{th}$ in the presence of $50$\% malicious nodes in the network. Moreover, due to the decentralized nature of blockchain, various other attacks such as DoS and wormhole attacks can also be prevented. However, the model is still susceptible to the $51$\% attack and other attacks, as is the case with blockchain. A $51$\% attack happens when the malicious nodes in the network become more than half of the total nodes, and hence fair localization coordinates would not be revealed. \par \textbf{ Summary:} \textcolor{black}{The objective here is to minimize the possibilities of any kind of physical attack on drones or data losses in drone communication. A summary of advantages and disadvantages of major works is given in Table \ref{bloadv}, and a summary of the related works is given in Table \ref{relblo}. As seen, blockchain technology is mostly used to provide a peer-to-peer model to mitigate the various security issues related to generic centralized architectures.
In various works, the smart contract and incentive model features of blockchain are also used to enhance data security and reliability in various scenarios related to drone communication.} \section{Applications of SDN for Drone communication Security} \label{sec6}Software-Defined Networking (SDN) is a networking paradigm centered around the separation of the data (forwarding) plane of a computer network from its control plane and the application layer. SDN is an architecture that aims to make networks agile and flexible: it virtualizes the network by decoupling the control plane, which manages the network, from the data plane, where all the traffic flows. This decoupling allows the network to be controlled separately, without worrying about the traffic flow, and keeps the traffic and the network services abstracted from the network control. These SDN properties help in making drone communication secure. In this section, we discuss SDN-based DCNs that help in resolving various security issues in DCN. Several existing solutions attempt to resolve the security issues related to drone communication without using the SDN architecture. The pioneering work in this direction is presented in \cite{non2}. In this model, the UAV receives the request from the ground controller and sends the data back to the controller in the form of visuals. The method described in the model demands high bandwidth for its execution, which varies with the speed of the drone or the broadcasting channels. Such traditional solutions fail to provide high security in the new-generation drones, and also fail to maintain data integrity. Another model proposed in \cite{non3} uses heuristic algorithms for providing data integrity but fails to provide good efficiency and reliability.
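The control-plane/data-plane decoupling described above can be sketched abstractly as follows. The class and method names are hypothetical simplifications, not an actual SDN API such as OpenFlow.

```python
# Illustrative sketch: the control plane (Controller) computes and installs
# forwarding rules; the data plane (Switch) only matches and forwards.
# All names here are hypothetical, not a real SDN framework.

class Switch:
    def __init__(self):
        self.flow_table = {}          # match field -> output port

    def forward(self, dst):
        # Unknown flows are punted to the controller, as in SDN designs.
        return self.flow_table.get(dst, "send_to_controller")

class Controller:
    def __init__(self, switches):
        self.switches = switches

    def install_rule(self, switch, dst, port):
        switch.flow_table[dst] = port  # control plane programs the data plane

sw = Switch()
ctrl = Controller([sw])
print(sw.forward("drone-7"))          # unknown flow -> punted to controller
ctrl.install_rule(sw, "drone-7", 3)
print(sw.forward("drone-7"))          # now forwarded on port 3
```

The security relevance is that all forwarding policy lives in one programmable place, so the controller can refuse to install rules for unauthenticated devices.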
\begin{table*}[] \centering \caption{A summary of advantages and disadvantages of major applications of SDN for drone communication security.} \begin{tabular}{|l|l|l|l|} \hline \rowcolor[gray]{0.8} \begin{tabular}[c]{@{}l@{}}Major\\ approaches\end{tabular} & Advantages & Disadvantages & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Benefits over}\\ \textcolor{black}{traditional approaches} \end{tabular} \\ \hline \cite{16} & \begin{tabular}[c]{@{}l@{}}• Very short average file-transfer time\\ • Every sender gets fair share of\\ bandwidth\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Legitimate sender may have to\\ wait for a long time\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Scalability, Lightweight,}\\ \textcolor{black}{Self-reliant defense} \end{tabular} \\ \hline \cite{DS1} & \begin{tabular}[c]{@{}l@{}}• The drone from which the DDoS\\ attack is launched can be found in\\ a very less time\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Cannot work properly when the\\ number of flow table items in the SD-\\IoT is very high\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Detecting and Mitigating}\\ \textcolor{black}{DDoS attacks faster} \end{tabular} \\ \hline \cite{intent} & \begin{tabular}[c]{@{}l@{}}• The average end-to-end outage rate\\ in IoD is reduced by $18$\%\end{tabular} & • High average end-to-end delay & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Reduction in end-to-end}\\ \textcolor{black}{delay} \end{tabular} \\ \hline \cite{sdn1} & \begin{tabular}[c]{@{}l@{}}• The model has high fault-tolerance\\ • High performance due to the presence\\ of multiple controllers\end{tabular} & \begin{tabular}[c]{@{}l@{}}• The link between the data plane and\\ the control plane is still susceptible\\ to attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Scalability, Mobility} \end{tabular}\\ \hline \cite{SI1} & \begin{tabular}[c]{@{}l@{}}• The latency and the maximum load\\ experienced by a SDN switch is reduced\\ by 
$50$\%\end{tabular} & \begin{tabular}[c]{@{}l@{}}• The complexity of the algorithm is\\ quite high \end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Stability, Security,}\\ \textcolor{black}{Reduced network latency} \end{tabular}\\ \hline \end{tabular} \label{sdnadv} \end{table*} Considering the above issues, in this section we discuss methods that use SDN for maintaining security in drone communication, along with the specific security issues that can be resolved and prevented using SDN. \subsection{\textit{\textbf{\textcolor{black}{DoS Attacks}}}} Due to resource constraints, we need a highly efficient protocol that is resistant to large-scale DoS attacks. The NetFence protocol in an SDN-based Drone Communication Network (DCN), as proposed in \cite{16}, can be used to create a scalable DoS-resistant network. The proposed model makes use of traffic policing inside the network. Packets in the network carry unforgeable congestion-policing feedback attached by routers. For a drone to be a part of the network, it needs to first send a request packet to a NetFence-ready receiver. Once accepted, it receives feedback and, along with the acknowledgement, sends regular data packets. Non-NetFence senders can only send packets through the legacy channel, which is given the lowest packet-forwarding priority. Bottleneck routers act as congestion detectors that regularly check link load and packet loss rate. The rate limiters reduce data congestion through the Additive Increase and Multiplicative Decrease (AIMD) algorithm \cite{AIMD}. NS-$2$ simulations were implemented in Linux, and the performance of NetFence under DoS attacks was compared with three other mechanisms: Traffic Validation Architecture (TVA+) \cite{TVA}, StopIt \cite{stop}, and Fair Queuing (FQ) \cite{FQ}.
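The AIMD policy used by the rate limiters can be sketched as follows; the additive step and multiplicative factor are illustrative assumptions, not values from \cite{16}.

```python
# Illustrative sketch of the AIMD policy: increase the allowed rate
# additively while the link is uncongested, and cut it multiplicatively
# when congestion feedback arrives. Parameter values are hypothetical.

def aimd(rate, congested, add=1.0, mult=0.5):
    return rate * mult if congested else rate + add

rate = 8.0
feedback = [False, False, True, False]  # congestion signals from routers
for congested in feedback:
    rate = aimd(rate, congested)
print(rate)  # 8 -> 9 -> 10 -> 5 -> 6
```

The multiplicative cut reacts sharply to congestion, while the additive probe slowly reclaims bandwidth once the link recovers.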
NetFence has the advantage of a short average file-transfer time that does not increase significantly with an increase in senders, whereas in FQ, transfer time increases linearly with an increase in senders. Even though mechanisms like TVA+ and StopIt tend to block large-scale DoS attacks, per-host queuing is implemented in these algorithms, as compared to per-Autonomous-System queuing in NetFence. This is advantageous as the number of autonomous systems is significantly less than the number of hosts. No matter how heavy the attack is, the NetFence protocol makes sure that senders get their fair share of bandwidth. A drawback of this model is that a legitimate sender may need to wait longer in NetFence to transmit data than in TVA+ or StopIt. Additionally, the NetFence algorithm fails to distinguish congestion caused by a DoS attack from congestion caused by any other issue in the network. \subsection{\textit{\textbf{\textcolor{black}{Distributed Denial of Service (DDoS) attacks}}}} A DDoS attack differs from a normal DoS attack in that multiple compromised hosts launch the attack, as compared to a single compromised host in a normal DoS attack. An intelligent and lightweight approach is required to prevent DDoS attacks in IoD. A pioneering work on avoiding DDoS attacks in IoT using SDN is \cite{DS1}. Unlike \cite{16}, the authors of \cite{DS1} propose an algorithm to detect and mitigate DDoS attacks in drones. Cosine similarity of the vectors of the packet-in message rate at software-defined Internet of Things (SD-IoT) switch ports is used to determine the occurrence of a DDoS attack. Threshold values of the vector length and cosine similarity are used to precisely and accurately classify an attack situation. The simulation results demonstrate that the proposed algorithm is capable of detecting the device used to launch the DDoS attack in a short span of time.
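The cosine-similarity check used in \cite{DS1} can be sketched as follows; the threshold value and the traffic vectors here are illustrative assumptions, not figures from the paper.

```python
import math

# Illustrative sketch: flagging a DDoS attack when the packet-in message-rate
# vector at the SD-IoT switch ports deviates from the normal traffic profile,
# using cosine similarity with a hypothetical threshold.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_ddos(current_rates, normal_rates, threshold=0.9):
    """Low similarity to the normal traffic profile suggests an attack."""
    return cosine_similarity(current_rates, normal_rates) < threshold

normal = [10, 12, 11, 9]        # typical packet-in rates per port
benign = [11, 13, 10, 9]
attack = [10, 12, 11, 500]      # one port flooded

print(is_ddos(benign, normal))  # False
print(is_ddos(attack, normal))  # True
```

Because a flood concentrated on one port skews the direction of the rate vector, the similarity drops sharply, and the offending port identifies the attacking device.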
The results of the proposed work are compared with other state-of-the-art works that try to detect DDoS attacks using IP filtering \cite{DDoS2}. The simulation results demonstrate that, in the case of a DDoS attack, the number of flow table items of the SD-IoT switches and the number of data packets received by the SD-IoT controller are less in \cite{DS1} as compared to \cite{DDoS2}. However, a proactive scheme to defend against and prevent DDoS attacks is missing; the proposed algorithm works only after the DDoS attack has been launched. The authors of \cite{conti3} provide a lightweight solution to counter DDoS attacks using SDN. \subsection{\textit{\textbf{\textcolor{black}{Avoiding intentional disruption}}}} Apart from DDoS attacks on a set of drones, the network of drones, being resource-constrained, is also susceptible to intentional jamming and disruption attacks. Such attacks are more severe than DDoS attacks, as they can paralyze the entire network, leaving no room for detection and mitigation. Different from \cite{16} and \cite{DS1}, the authors of \cite{intent} have proposed an SDN-based framework for secure and robust data relaying in case of intentional jamming and disruptions. In the proposed model, the drones act as SDN switches that are controlled by a centralized SDN controller. A novel 3D spatial coverage metric is used to calculate diverse multiple paths among the drones, and the model directs each drone to use the best possible path, preventing intentional disruptions from crippling the functioning of the drone network. The simulation results demonstrate that the proposed algorithm outperforms the traditional shortest-path and shortest multi-path algorithms in terms of outage prevention \cite{intentional2}. The average end-to-end outage rate in IoD is reduced by $18$\% in the proposed model when compared to \cite{intentional2}.
The algorithms in \cite{intentional2} consider only the path that takes the least time, irrespective of the presence of jammers and intentional disruptions. Although the proposed model in \cite{intent} succeeds in preventing a complete outage of the drone network, the end-to-end delay is increased by $12$\% when compared to \cite{intentional2}. The proposed model in \cite{intent} also helps in preventing frequent link disconnections between the devices linked to the network. Further work is required on algorithms that can prevent such intentional disruptions without increasing the average delay of the network. The drone jamming incident in Hong Kong could have been prevented had any of the above-mentioned measures been taken \cite{jammerinc}. \begin{table*}[] \centering \caption{Applications of SDN for drone communication security. } \begin{tabular}{|p{0.04\linewidth}|p{0.075\linewidth}|p{0.23\linewidth}|p{0.125\linewidth}|p{0.2\linewidth}|p{0.18\linewidth}|} \hline \rowcolor[gray]{0.7} Ref.
& Attack & Mechanism & \begin{tabular}[c]{@{}l@{}}SDN Feature \\ used\end{tabular} & Major achievement & Open issues \\ \hline \cite{16} & DoS attacks & \begin{tabular}[c]{@{}l@{}}The drone first registers\\ itself with the NetFence\\ ready receiver and then\\ only the drone is allowed\\ to transmit the data packets.\end{tabular} & \begin{tabular}[c]{@{}l@{}}Directly-\\ programmable,\\ Scalability\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model has a very\\ short average file-transfer\\ time\end{tabular} & \begin{tabular}[c]{@{}l@{}}Implementing the\\ model specifically\\ for UAVs is much\\ needed\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{DS1} & \begin{tabular}[c]{@{}l@{}}DDoS\\ attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The cosine similarity of the\\ vectors of the packet-in\\ message rate at the SD-IoT\\ switch port is used to\\ determine the attack\end{tabular} & \begin{tabular}[c]{@{}l@{}}Abstraction of\\ network devices,\\ Dynamic re-\\ configuration of\\ networks\end{tabular} & \begin{tabular}[c]{@{}l@{}}Can determine the device\\ using which the DDoS\\ attack is launched in a\\ very short time span\end{tabular} & \begin{tabular}[c]{@{}l@{}}Implementing the\\ model specifically\\ for UAVs is much\\ needed\end{tabular} \\ \hline \cite{intent} & \begin{tabular}[c]{@{}l@{}}Jamming\\ attacks,\\ Disruption\\ attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}Multiple paths are generated\\ using a $3$D spatial metric\\ which are directed to the\\ UAVs to avoid the disruption\end{tabular} & \begin{tabular}[c]{@{}l@{}}Decoupled Data-\\ plane and the\\ Control plane\end{tabular} & \begin{tabular}[c]{@{}l@{}}The average end-to-end\\ outage rate in the IoD is\\ reduced to a large extent\\\end{tabular} & \begin{tabular}[c]{@{}l@{}}End-to-end delay\\ increases significantly,\\ which is a major area\\ to be looked upon in\\ the future\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{sdn1} & \begin{tabular}[c]{@{}l@{}}DoS\\attacks,\\ GPS\\ 
Spoofing\end{tabular} & \begin{tabular}[c]{@{}l@{}}SDN controller authenticates\\ the network device and then\\ only the data is transmitted\\ by the controllers\end{tabular} & \begin{tabular}[c]{@{}l@{}}Decoupled Data-\\ plane and the\\ Control plane\end{tabular} & \begin{tabular}[c]{@{}l@{}}Multiple SDN controllers\\ are deployed to prevent\\ the malfunctioning of the\\ devices in the network\end{tabular} & \begin{tabular}[c]{@{}l@{}}The link between the\\ Control plane and Data\\ plane is still\\ susceptible to attacks\end{tabular} \\ \hline \cite{SI1} & \begin{tabular}[c]{@{}l@{}}DoS\\attacks,\\ Spoofing\\ attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The Middlebox-Guard (M-G)\\ is deployed at different\\ locations which manages the\\ dataflow\end{tabular} & \begin{tabular}[c]{@{}l@{}}Directly-\\ programmable,\\ Flexible network\\ architecture\end{tabular} & \begin{tabular}[c]{@{}l@{}}Latency and the\\ maximum load on the\\ device are reduced by\\ $50$\%\end{tabular} & \begin{tabular}[c]{@{}l@{}}The Integer Linear\\ Program (ILP) pruning\\ algorithm used in M-G\\ has a high complexity\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{sdnt1} & \begin{tabular}[c]{@{}l@{}}DoS\\attacks,\\ DDoS\\ attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The primary path forwards\\the common files whereas\\the backup path forwards\\the uncommon cases where\\the primary path is not\\reliable\end{tabular} & \begin{tabular}[c]{@{}l@{}}Decoupled Data-\\ plane and the \\ Control plane,\\ Scalability\end{tabular} & \begin{tabular}[c]{@{}l@{}}It can handle link\\ congestion with high\\ bandwidth\end{tabular} & \begin{tabular}[c]{@{}l@{}}It has a very high end-\\ to-end delay, which\\ has to be looked upon\\ in the future to make\\ the algorithm more\\ efficient\end{tabular} \\ \hline \cite{sdnt2} & \begin{tabular}[c]{@{}l@{}}DoS\\attacks,\\ DDoS\\ attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}SDN computes the optimal\\ flow for each multi-path TCP\\ and the Flow
Deviation\\ Method (FDM) algorithm\\ is used to re-allocate\\ the bandwidth\end{tabular} & \begin{tabular}[c]{@{}l@{}}Network-\\ programmability,\\ Decoupled Data-\\ plane and the\\ Control plane\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model achieves\\ fairer bandwidth\\ allocation that provides\\ better QoS and it makes\\ the network more reliable\end{tabular} & \begin{tabular}[c]{@{}l@{}}Cannot support a high\\ number of users and\\ the model is not fully\\ secure\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{fig11} & \begin{tabular}[c]{@{}l@{}}Grey hole\\attacks,\\Black hole\\ attacks,\\DDoS\\attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The UAV informs its\\ controller about the\\ neighboring drone while\\ establishing OpenFlow\\ connection and also informs\\ about its update\end{tabular} & \begin{tabular}[c]{@{}l@{}}Decoupled Data-\\ plane and the\\ Control plane,\\ Scalability\end{tabular} & \begin{tabular}[c]{@{}l@{}}The amount of data\\ exchange when compared\\ with the AODV routing\\ algorithm is increased by\\ $2$\%\end{tabular} & \begin{tabular}[c]{@{}l@{}}Further work on\\ increasing the security\\ of the model is needed\end{tabular} \\ \hline \cite{sdnt3} & \begin{tabular}[c]{@{}l@{}}GPS\\ Spoofing\end{tabular} & \begin{tabular}[c]{@{}l@{}}Cluster heads are assigned to\\different densely populated\\sectors and the data is\\transferred through the cluster\\head only when in the range\end{tabular} & \begin{tabular}[c]{@{}l@{}}Flexible network\\ architecture,\\ Scalability\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model provides faster\\ and efficient coverage rate\\ of about $99$\% and a\\ latency of around $20$\%\end{tabular} & \begin{tabular}[c]{@{}l@{}}Need to decrease\\ the latency to make\\ the model more\\ efficient\end{tabular} \\ \hline \end{tabular} \label{relsdn} \end{table*} \subsection{ \textit{\textbf{\textcolor{black}{Malfunctioning devices}}}} Apart from DoS, DDoS, and intentional jamming, there are various other issues
in drone communication that are related to the different sensors deployed on the drones. Traditional Internet security mechanisms such as IP-based filtering and firewalls cannot solve these issues, since the heterogeneous devices and protocols of such networks cannot be fitted into a single common protocol. A lightweight model for avoiding malfunctioning devices in the IoD is proposed in \cite{sdn1}. In the proposed model, the SDN controller first authenticates any network device requesting to join the network. Only after successful authentication is the data disseminated to the connected devices through the controller, which ensures that no malfunctioning device takes part in the communication. Traditional network protocols are not designed to support high levels of traffic, scalability, and mobility. Hence, the use of SDN in this work increases the functionality of the network while reducing hardware complexity. SDN can also extend network security to the end-point access devices. Multiple SDN controllers are used instead of a single one to improve fault tolerance and robustness. Unlike \cite{sdn1}, the authors of \cite{sdn2} propose a similar framework using only a single controller. If an attacker compromises a single SDN controller, they gain full control over the network; hardware and software failures may also occur, posing a risk to the entire network. The work in \cite{sdn1} is therefore more robust: it uses multiple controllers, so if one goes down, another can take over and avoid a system-wide failure in case of any malfunctioning device. The proposed work reports improved network performance with multiple controllers, because each controller maintains a partial view of the network and the controllers collaborate and exchange information with each other. However, the link between the control plane and the data plane of the SDN remains vulnerable and susceptible to attacks, and these issues are yet to be resolved.
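The authenticate-then-disseminate flow and the multi-controller failover idea described above can be sketched in a few lines. This is a minimal illustration, not the implementation of \cite{sdn1}; the class and method names, the credential registry, and the shared authentication state are all assumptions made for the example.

```python
# Sketch of SDN multi-controller failover for an IoD network.
# Hypothetical names; not from the cited work.

class Controller:
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.authenticated = set()  # devices this controller has admitted

    def authenticate(self, device_id, credential, registry):
        # Admit a device only if its credential matches the registry entry.
        if registry.get(device_id) == credential:
            self.authenticated.add(device_id)
            return True
        return False

class ControllerCluster:
    """Multiple controllers: if the active one fails, another takes over."""
    def __init__(self, controllers):
        self.controllers = controllers

    def active(self):
        for c in self.controllers:
            if c.alive:
                return c
        raise RuntimeError("no controller available")

    def disseminate(self, device_id, payload):
        c = self.active()
        # Data is forwarded only to authenticated devices.
        if device_id in c.authenticated:
            return (c.name, payload)
        return None  # unauthenticated (possibly malfunctioning) device

registry = {"drone-1": "secret-1"}          # assumed credential store
cluster = ControllerCluster([Controller("c1"), Controller("c2")])
cluster.controllers[0].authenticate("drone-1", "secret-1", registry)
cluster.controllers[1].authenticated.add("drone-1")  # state shared across controllers

print(cluster.disseminate("drone-1", "telemetry"))   # served by c1
cluster.controllers[0].alive = False                 # c1 fails
print(cluster.disseminate("drone-1", "telemetry"))   # c2 takes over
```

The point of the sketch is the last two lines: dissemination keeps working after the first controller fails because another controller holds the same authentication state.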
\subsection{ \textit{\textbf{\textcolor{black}{Data Integrity}}}} An SDN-based data security model, Middlebox-Guard (M-G), is proposed in \cite{SI1}. Different from \cite{sdn1} and \cite{sdn2}, M-G manages the dataflow to ensure network efficiency and helps in minimizing network latency. To reduce latency, the middleboxes are placed at the locations where the communication links are shortest, using a placement selection algorithm. An offline Integer Linear Program (ILP) pruning algorithm \cite{ILP} is deployed at each middlebox. The ILP formulation solves otherwise intractable optimization problems at every middlebox while respecting switch constraints, such as CPU and RAM usage, and it also provides the optimal routes to be used for data transfer. In addition, an online ILP is used to minimize the maximum middlebox load across the network. M-G is compared with SIMPLE, a model proposed in \cite{SDNI2}, as both solve the middlebox placement and route selection problems; M-G outperforms the latter in terms of security, latency, and load. In \cite{SI1}, POX was used as the controller and Open vSwitch as the SDN switch for carrying out the experiments. On running the entire system, latency and maximum load were reduced by $50$\%. In terms of security, middlebox failures and overload conditions were analyzed, and the response times in these cases were measured to be less than $0.15$ seconds, indicating a fast response. \par \textbf{ Summary: } \textcolor{black}{SDN can help in preventing many attacks that drones are susceptible to, including DoS and DDoS attacks, and can help in maintaining data integrity in drones. SDN technology can also help in avoiding the intentional disruption and jamming attacks that endanger drone communication.
A summary of the advantages and disadvantages of the major works that use SDN as a solution for drone communication security is given in Table \ref{sdnadv}. Furthermore, Table \ref{relsdn} summarizes the related works that use SDN to maintain security in drone communication. To the best of our knowledge, the decoupling of the control plane from the data plane goes a long way in maintaining security standards in drone communication.} \section{Applications of Machine Learning for Drone Communication Security} \label{sec7}Machine learning is the study of algorithms that are capable of learning and improving automatically through experience and of making accurate predictions based on the data with which they are fed \cite{ml}. Such algorithms can also provide generalized observations for unseen and unknown states and networks. Different machine learning algorithms are useful in different drone applications and domains; the choice of a specific ML algorithm depends on the domain and the type of data available. ML approaches have been extensively explored in the literature, both for the physical security of drones and for drone communication security. The physical security approaches basically deal with using different ML algorithms to detect unauthorized drones or to prevent authorized drones from entering unauthorized zones. These two types of security issues are intrinsically related to each other. For example, if the system fails to detect an unauthorized drone and allows it to enter a network of authorized drones, it can easily enable all kinds of communication attacks on the network. Therefore, drone detection using ML can be considered a preliminary step that can prevent drone communication issues to a great extent. The authors of \cite{jam_ml} study different ML frameworks and provide a model to prevent jamming attacks.
A distributed learning framework is essential to manage the various tasks in a swarm of drones \cite{mozaffari5}. Therefore, in this section, we review the works that use ML algorithms to detect drones or to identify and prevent generic security vulnerabilities. We first discuss the issues with traditional approaches that do not use ML algorithms, and then move on to the challenges in drone communication and possible ML-based solutions. Some traditional techniques detect drones without ML algorithms. The most primitive technique is drone detection using radar. Radar-based detection is highly expensive and can only detect large objects. Another option is Light Detection And Ranging (LiDAR), as implemented in \cite{LiDAR}: LiDAR sends a laser beam towards the object and analyzes the beams reflected back from it. However, LiDAR is also an extremely expensive method of detection and is highly vulnerable to climatic conditions. Moreover, these techniques tend to give false positives, resulting in a waste of resources. We now discuss specific security issues that can be resolved and prevented using ML as a solution. \subsection{\textit{\textbf{\textcolor{black}{Drone detection using SVM and CNN}}}} ML algorithms can be used in radar detection to address various detection and classification problems associated with traditional radar-based methods \cite{radar}. The authors discuss different SVM models to classify detected objects as drones or birds, and to classify different kinds of drones depending on payload or the number of rotors. These models showed high accuracy (>$90$\%) on test data. An efficient drone detection model using the Support Vector Machine (SVM) and Convolutional Neural Networks (CNN) for noise detection is discussed in \cite{SVM}.
Unlike \cite{ML1}, which uses an LSTM approach, the authors of \cite{SVM} use SVM and CNN for drone detection. \textcolor{black}{The data was collected as audio recordings of the flown UAV from $6$ listening nodes \cite{SVM}.} Both types of ML algorithms have their own pros and cons. SVM-based models are easier to implement than deep learning algorithms such as LSTM; however, they are only suitable for small datasets with limited outliers. In the proposed model, multiple listening nodes and a control center are used. The listening nodes are deployed on a circle surrounding the protected area, and a microphone is installed on each listening node to capture the sound of the drone. After detection, the features of each audio frame are computed and sent to the control center for further evaluation. At the control center, an SVM is deployed. SVM is a supervised classifier that separates the required classes by mapping the input vectors into a high-dimensional feature space \cite{SVM2}. The classifier evaluates the pattern of the frames sent to the control center and decides whether a drone is detected or not. \textcolor{black}{The simulation results in \cite{SVM} demonstrate that the SVM algorithm is more efficient than CNN in detecting drones.} However, the main limitation is that background noise makes the results inconsistent. Moreover, the signals were not normalized in the proposed model, leading to a lot of outliers.
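The control-center classification step can be sketched with an SVM on synthetic per-frame features. This is a toy illustration only: the feature values below are fabricated stand-ins, not the audio features of the cited work, and scikit-learn's `SVC` is one possible SVM implementation.

```python
# Sketch of the control-center SVM step: classify per-frame audio features
# as "drone" vs "no drone". Features here are synthetic, for illustration only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Pretend each frame is summarized by 8 spectral features; drone frames
# cluster around a different mean than background-noise frames.
noise_frames = rng.normal(loc=0.0, scale=1.0, size=(200, 8))
drone_frames = rng.normal(loc=2.0, scale=1.0, size=(200, 8))

X = np.vstack([noise_frames, drone_frames])
y = np.array([0] * 200 + [1] * 200)  # 0 = no drone, 1 = drone

# The RBF kernel maps frames into a high-dimensional feature space,
# matching the kernel-mapping idea described for SVM above.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)

test_frame = rng.normal(loc=2.0, scale=1.0, size=(1, 8))
print("drone detected" if clf.predict(test_frame)[0] == 1 else "no drone")
```

In a real deployment the listening nodes would compute normalized acoustic features (the lack of normalization is exactly the outlier problem noted above), and the trained classifier would run at the control center.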
\begin{table*}[] \centering \caption{A summary of advantages and disadvantages of major applications of ML for drone communication security.} \begin{tabular}{|l|l|l|l|} \hline \rowcolor[gray]{0.8} \begin{tabular}[c]{@{}l@{}}Major \\ Approaches\end{tabular} & Advantages & Disadvantages & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Benefits over}\\ \textcolor{black}{traditional approaches} \end{tabular} \\ \hline \cite{SVM}, \cite{CNN} & \begin{tabular}[c]{@{}l@{}}• Supports high computation speed\\ and is cost efficient\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Unwanted noises in the background\\ make the results inconsistent\end{tabular} & \textcolor{black}{Efficiency} \\ \hline \cite{RNN}, \cite{RNNIMAGE} & \begin{tabular}[c]{@{}l@{}}• Model training time is very low\\ and gives high accuracy\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Generating such a large dataset\\ artificially is very difficult\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Identification, Classific-}\\ \textcolor{black}{ation of types of drones} \end{tabular} \\ \hline \cite{inter} & \begin{tabular}[c]{@{}l@{}}• Stores data sequentially, so the data\\ retrieval latency is very low\end{tabular} & \begin{tabular}[c]{@{}l@{}}• The latency in data transmission\\ increases when a very large data file\\ is transmitted\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Minimizing latency, }\\ \textcolor{black}{Data reliability} \end{tabular} \\ \hline \cite{neural} & \begin{tabular}[c]{@{}l@{}}• High reliability with fewer\\ resource requirements\end{tabular} & \begin{tabular}[c]{@{}l@{}} • Fails in further classifying the type\\ of attack \end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Detecting and preventing}\\ \textcolor{black}{DoS attacks} \end{tabular} \\ \hline \cite{obs} & \begin{tabular}[c]{@{}l@{}}• Very lightweight model, can run\\ on Raspberry Pi $3$B\end{tabular} & \begin{tabular}[c]{@{}l@{}}• A lot of training and testing in Deep \\
Learning algorithms may be required\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Easy deployment,}\\ \textcolor{black}{Privacy} \end{tabular} \\ \hline \cite{GPSD} & \begin{tabular}[c]{@{}l@{}}• Can detect the adversary in GPS-\\ denied environment with great\\ accuracy\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Fails when the drone moves in an\\ irregular pattern and is sensitive to\\ lighting conditions\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Efficiency in GPS-}\\ \textcolor{black}{denied environment} \end{tabular}\\ \hline \end{tabular} \label{mladv} \end{table*} \subsection{\textit{\textbf{\textcolor{black}{Drone detection using RNN and CRNN}}}} An efficient model using deep learning techniques like Recurrent Neural Networks (RNN) and Convolutional Recurrent Neural Networks (CRNN) for drone detection is presented in \cite{RNN}. The authors of \cite{RNN} acquired a large dataset of drone propeller audio data and overlapped the audio clips with a variety of background noises to mimic real-life scenarios. \textcolor{black}{Data labelling was done for the identification problem as unknown (random noises in the surroundings), Bebop (drone $1$), or Mambo (drone $2$), and for the detection problem as drone or not a drone.} The experiments are divided into two categories: the first targets the detection of drones, and the second targets their identification based on type. The detection problem has been evaluated and compared with the existing literature, and the algorithms are compared on accuracy, F1 score, precision, recall, and the computational time required to train and test the model. The \textcolor{black}{experimental} results of the model in \cite{RNN} show that deep learning methods using drone acoustics are highly effective for drone detection. CNN and CRNN algorithms remarkably outperform RNN in both detection and identification.
Although CNN showed better performance than CRNN, the difference was negligible, and CRNN required significantly less time to train, making it the most practical choice. Another model, discussed in \cite{RNNIMAGE}, also uses CNN for drone detection, but works on images instead of drone acoustics. Although the results are promising, the dataset for such a study could only be created artificially, which decreases its reliability, and identification of the specific type of drone is not possible, unlike in \cite{RNN}. The authors of \cite{RNNWIN} have noted that RNN achieves superior performance compared to CNN. The discrepancy is attributed to differences in the models' architectures and design parameters, but a direct comparison of the results could not be performed by the authors of \cite{RNN}. \subsection{\textit{\textbf{\textcolor{black}{Fault detection and Recovery of UAV data using LSTM}}}} UAVs are used for certain critical applications like military operations and product delivery. Therefore, it is imperative to deploy mechanisms that make data transmission in UAVs ultra-reliable, while also keeping the latency of data transmission to a minimum. Being resource-constrained, UAVs need to transmit real-time data to cloud servers for storage. Pioneering works in the direction of minimizing latency and increasing data reliability using LSTM are \cite{ML1}, \cite{inter}. Unlike \cite{RNN}, the authors of \cite{ML1}, \cite{inter} use LSTM for drone communication security. LSTM networks are a special type of RNN with some distinctive features, the main one being a 'memory cell' that can maintain information in memory for a long time. In the proposed model, a regression model using LSTM is first built to extract the spatial-temporal features of the drone data. This is done to obtain an estimate of the monitored parameters or features.
The authors use a set of $11$ distinct parameters or features, such as roll angle, altitude, and indicated airspeed, sensed through airborne sensors to capture the UAV's current attitude and position, as input to the proposed model. The output is used to train the fault detection model after normalization of the data. Next, various filters are used to reduce the difference between the actual data and the estimated values, thereby removing the effects of various noises. The estimated values are compared against a threshold to detect faults. If a fault in the data is discovered, the faulty data is replaced with the data estimated by the proposed model, or with the recovery data. \textcolor{black}{The simulation results demonstrate that the proposed model is capable of a quick recovery of the data in a limited time. The experimentation shows that the Mean Square Error (MSE) was recorded to be less than $0.078$, whereas the Mean Absolute Error (MAE) was less than $0.205$.} However, further work on real-time data recovery may be done to make the model more accurate and effective in fault detection and recovery. \subsection{\textit{\textbf{\textcolor{black}{DoS attacks}}}} The authors of \cite{dist} and \cite{msvm} have proposed ML-based models to detect Denial of Service (DoS) attacks using Neural Networks and Modified Support Vector Machines, respectively. However, a pioneering model for detecting and preventing DoS attacks in the IoD using machine learning is proposed in \cite{neural}. Unlike \cite{SVM,RNN,inter}, the authors of \cite{neural} focus on preventing DoS attacks on drone data, rather than physically detecting unwanted drones in the network. \textcolor{black}{The dataset consists of labeled data categorized as benign for normal traffic, and attacks such as brute force, DoS/DDoS, and web attacks.} The authors implemented the random forest algorithm \cite{RF} and the multi-layer perceptron algorithm \cite{MLP} on the CIC IDS $2017$ dataset.
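The benign-versus-attack packet classification used in \cite{neural} can be sketched with a random forest. The flow features below (packet rate, mean packet size, SYN ratio) and their values are illustrative stand-ins, not the actual CIC IDS $2017$ features, and scikit-learn's `RandomForestClassifier` is one possible implementation of the technique.

```python
# Sketch of benign/DoS flow classification with a random forest.
# Synthetic flow features stand in for the CIC IDS 2017 features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500

# Benign flows: moderate rate, mixed packet sizes, low SYN ratio.
benign = np.column_stack([
    rng.normal(50, 10, n),     # packets per second
    rng.normal(600, 150, n),   # mean packet size (bytes)
    rng.uniform(0.0, 0.2, n),  # SYN ratio
])
# DoS flows: very high rate, small packets, high SYN ratio.
dos = np.column_stack([
    rng.normal(500, 50, n),
    rng.normal(80, 20, n),
    rng.uniform(0.7, 1.0, n),
])

X = np.vstack([benign, dos])
y = np.array([0] * n + [1] * n)  # 0 = benign, 1 = DoS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.3f}")
```

On real intrusion-detection data the classes overlap far more than in this toy setup, which is why feature selection (reducing the number of features, as noted below) matters for both accuracy and resource use.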
The CIC IDS $2017$ dataset contains traces of current attacks, such as DoS and DDoS, in pcap format. The incoming data traffic at the drone is classified by the deployed classification algorithms as benign or affected packets. Both models achieved an accuracy greater than $98$\%, with the MLP achieving an accuracy of $98.87$\% using $30$\% of the records for training and the RF algorithm achieving an accuracy of $99.95$\% using $50$\% of the records for training. None of the previous works, including \cite{dist} and \cite{msvm}, could achieve such an accuracy level with the relatively low resource requirements desired by an IoD system. A remaining task is to test the system on the multi-class classification of DoS attacks; the model does not yet distinguish attacks such as Heartbleed, slowhttptest, and HTTP flood. Also, the required resources can be further reduced, by cutting down the number of features, to make the system more efficient. \begin{table*}[] \centering \caption{Applications of ML for drone communication security.} \begin{tabular}{|p{0.035\linewidth}|p{0.065\linewidth}|p{0.2\linewidth}|p{0.21\linewidth}|p{0.171\linewidth}|p{0.176\linewidth}|} \hline \rowcolor[gray]{0.7} Ref.
& Attack & Mechanism & \begin{tabular}[c]{@{}l@{}}Machine Learning\\ Feature used\end{tabular} & Major achievement & Open issues \\ \hline \cite{SVM} & \begin{tabular}[c]{@{}l@{}}GPS\\ spoofing\end{tabular} & \begin{tabular}[c]{@{}l@{}}The sound of the drone is \\ used for classifying the\\ presence of drone using\\ SVM and CNN\end{tabular} & \begin{tabular}[c]{@{}l@{}}SVM and CNN \\ classify whether the\\ drone is present in the\\ specified area or not\end{tabular} & \begin{tabular}[c]{@{}l@{}}SVM shows better\\results in detecting\\the UAV as compared\\to CNN\end{tabular} & \begin{tabular}[c]{@{}l@{}}The background noises\\ of the wind and the\\ surroundings gave\\ inconsistent results\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{RNN} & \begin{tabular}[c]{@{}l@{}}GPS\\ spoofing\end{tabular} & \begin{tabular}[c]{@{}l@{}}The algorithms like RNN\\ are used to identify\\ the presence of UAV on\\ the basis of sound\end{tabular} & \begin{tabular}[c]{@{}l@{}}Algorithms like RNN and\\ CRNN are used to classify\\ the presence of the drone\end{tabular} & \begin{tabular}[c]{@{}l@{}}CRNN showed the\\best results in\\detecting the presence\\of the drone\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model can be\\trained to detect\\ a wider class of\\drones\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}} \cite{ML1} \end{tabular} & \begin{tabular}[c]{@{}l@{}}DoS\\ attack,\\ Worm-\\hole\\ attack\end{tabular} & \begin{tabular}[c]{@{}l@{}}The LSTM-based fault\\ detection model detects\\ the fault and the quick\\ recovery commands are\\ sent to the UAV\end{tabular} & \begin{tabular}[c]{@{}l@{}}LSTM is used to store the\\ previous data of the UAV\\ which helps in building\\ the model that efficiently\\ detects the fault\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model achieved\\very low MSE and\\MAE, which makes \\the model very\\efficient\end{tabular} & \begin{tabular}[c]{@{}l@{}}Work on increasing the\\ efficiency of the model\\ is much needed\end{tabular} \\ \hline
\rowcolor[gray]{0.9} \cite{neural} & \begin{tabular}[c]{@{}l@{}}DoS \\ attack\end{tabular} & \begin{tabular}[c]{@{}l@{}}The Random Forest and\\ Multi-Layer Perceptron\\ algorithms classify\\ the data packets received \\ as benign or DoS-\\ affected packets\end{tabular} & \begin{tabular}[c]{@{}l@{}}Random Forest and Multi-\\ Layer Perceptron algorithms\\ are used to classify between\\ the affected and the non-\\ affected packets received by\\ the drones\end{tabular} & \begin{tabular}[c]{@{}l@{}}The MLP algorithm\\ achieved an accuracy\\ of $98.87$\% whereas the\\ RF algorithm achieved\\ an accuracy of $99.95$\%\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model does not\\ classify the type of\\attack taking place and\\work on decreasing the\\latency is needed\end{tabular} \\ \hline \cite{obs} & \begin{tabular}[c]{@{}l@{}}DoS\\attack,\\ DDoS\\ attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The data received is made\\ obscure by adding some\\ noise and CNN is used\\ to reconstruct the\\ obscured image by\\ using different weights\end{tabular} & \begin{tabular}[c]{@{}l@{}}CNN algorithm is used to \\ reconstruct the obscured\\ data by using some\\ random weights, hence\\ making the data secure.\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model re-\\constructed the\\obscured data with\\an accuracy of $81.3$\%\\and it can run on R Pi\\ $3$B as well\end{tabular} & \begin{tabular}[c]{@{}l@{}}Research is open for\\ working on increasing\\the accuracy and\\the efficiency of\\the model\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{GPSD} & \begin{tabular}[c]{@{}l@{}}GPS\\ spoofing,\\ DoS\\attack\end{tabular} & \begin{tabular}[c]{@{}l@{}}The target drone and the\\ size of the drone are\\ detected using bounding\\ box object detection\\ algorithm\end{tabular} & \begin{tabular}[c]{@{}l@{}}The bounding box object\\ detection algorithm and the \\ YOLO detection algorithm\\ are used for the real-time\\ detection of the drone\end{tabular} & \begin{tabular}[c]{@{}l@{}}This model
achieved\\ $77$\% accuracy in\\ detecting the target \\ drone with an average\\ frame rate of $5.22$ fps\end{tabular} & \begin{tabular}[c]{@{}l@{}}The hunter drone\\is inefficient because\\of its heavy weight\end{tabular} \\ \hline \cite{mltab1} & \begin{tabular}[c]{@{}l@{}}Jamming,\\ Black\\hole\\attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}Whenever any event is\\ detected by the UAV, the\\ information is sent to the\\ controller and IDS\\ identifies the malicious\\ node\end{tabular} & \begin{tabular}[c]{@{}l@{}}A hierarchical intrusion\\ detection system is used\\ for detecting the malicious\\ nodes that are injecting\\ false data\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model achieved \\ a detection rate of\\more than $93$\% and a\\false positive rate of\\less than $3$\%\end{tabular} & \begin{tabular}[c]{@{}l@{}}Implementing the \\model on the swarm\\of drones is a much\\needed work\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{mltab2} & \begin{tabular}[c]{@{}l@{}}GPS\\ spoofing\end{tabular} & \begin{tabular}[c]{@{}l@{}}A machine learning-based\\ naive Bayes algorithm is\\ used to check for the\\ presence of the UAV\\ and the classification is\\ done using the k-nearest\\ neighbor algorithm\end{tabular} & \begin{tabular}[c]{@{}l@{}}The naive Bayes algorithm\\ is used for the detection of\\ the micro-UAV and for the\\ classification of the micro-\\ UAV, kNN is used\end{tabular} & \begin{tabular}[c]{@{}l@{}}The confusion matrix\\ obtained for the kNN \\ classifier in the model\\ achieved an accuracy\\ of $97.1$\%\end{tabular} & \begin{tabular}[c]{@{}l@{}}This model can make\\use of a $3$D feature\\cluster map that would\\help improve the real-\\time classification\end{tabular} \\ \hline \cite{mltab3} & \begin{tabular}[c]{@{}l@{}}Jamming\\ attacks,\\ DoS\\attack\end{tabular} & \begin{tabular}[c]{@{}l@{}}UAVs are used for data \\ transmission and intrusion\\ detection system is used\\ for detecting any anomaly\end{tabular} &
\begin{tabular}[c]{@{}l@{}}An intrusion detection\\ system is used for the\\ detection of the \\ anomaly in the network\end{tabular} & \begin{tabular}[c]{@{}l@{}}An efficient way\\of securing the\\ multi-level ad hoc\\ networks is presented\end{tabular} & \begin{tabular}[c]{@{}l@{}}Other networking \\ solutions can be used\\ to make the model\\more efficient\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{mltab4} & \begin{tabular}[c]{@{}l@{}}DoS\\attack,\\ DDoS\\ attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}Devices selected in white\\ list using the algorithm\\ are only used for\\ data transmission\end{tabular} & \begin{tabular}[c]{@{}l@{}}Random Forest algorithm\\ is used for classifying the\\ connected devices as\\ legitimate devices or\\ malicious devices\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model showed an\\ accuracy of $99.49$\%\\ in detecting the un-\\ authorized device in\\the network\end{tabular} & \begin{tabular}[c]{@{}l@{}}The efficient detection\\of a variety of\\compromised drones\\can be worked\\upon in the future\end{tabular} \\ \hline \end{tabular} \label{relml} \end{table*} \subsection{\textit{\textbf{\textcolor{black}{Privacy Leakage}}}} In the case of drones, the authentication algorithms used to control access to the network are generally based on cryptography. Recently, however, the use of machine learning algorithms to avoid privacy leakage in IoD networks has also been explored \cite{IoD}. A pioneering model for avoiding privacy leakage in drones is proposed in \cite{obs}. Different from \cite{neural}, the authors of \cite{obs} focus on using deep learning algorithms to proactively secure the data, rather than on detecting attacks on the network. The authors use deep learning auto-encoders to secure sensor data and to store media on a server. Each piece of data collected from the sensors of the drone is first converted into a digit image of size $28$ by $28$ pixels.
Further, some noise is added to the sensor data to make it obscure, and the data is then sent to a remote cloud server to be saved. A Convolutional Neural Network (CNN) is implemented in the reconstruction and classification components. The reconstruction component reconstructs the obscured data into the original data using the trained model weights. As the model weights are not known to an adversary, it becomes almost impossible to retrieve the original data. Next, the classification component recognizes the data from the reconstructed data, and the digital data is converted back to sensor data using the deep learning auto-encoder. The proposed model is lightweight enough to run even on a Raspberry Pi $3$B. The model was tested on the MNIST dataset, and the results demonstrate an accuracy of $81.3$\% in identifying the reconstructed data. In general, data privacy is ensured by encrypting the data with a variety of cryptographic representations. However, further techniques are required, as privacy techniques based on cryptographic keys can be broken once the key is obtained. Another technique that helps in preventing data leakage is Homomorphic Authenticated Encryption (HAE) \cite{HAE}. Unlike \cite{obs}, the model in \cite{HAE} works without the use of a key: HAE allows users who do not hold the key to perform computations on the ciphertext, and the computed ciphertext decrypts to the correct function value. \subsection{\textit{\textbf{\textcolor{black}{Adversarial attacks}}}} Adversarial machine learning is mainly used to cause a malfunction in a machine learning model by supplying deceptive inputs. The IoD brings in a vast range of sensors, mobile network security issues, and privacy-protection challenges that differ from those of traditional internet systems. A model for avoiding adversarial attacks is proposed in \cite{CNNADV}, which uses CNN and RNN for adversary detection.
Similar to \cite{RNN,CNN}, the authors of \cite{CNNADV} use RNN and CNN-based models. However, these models are used to detect and prevent adversarial attacks rather than to detect the presence of drones, as done in \cite{RNN,CNN}. A pioneering work on detecting and preventing adversarial attacks in the IoD is \cite{GPSD}. The authors of \cite{GPSD} use a black-and-white version of the Tiny You Only Look Once (YOLO) detection system and visual servoing without motion-capture systems. The proposed techniques are efficient even in a GPS-denied environment. The proposed model is demonstrated on a drone hunting platform that self-localizes using visual inertial odometry (VIO) through a ZED stereo camera and runs a visual tracking and identification algorithm on a Jetson TX$2$. The algorithm sends commands to the PX$4$-based flight controller. The simulation results demonstrate that the platform can effectively track and chase the adversary. The model achieved $77$\% accuracy with an average frame rate of $5.22$ fps. The proposed work runs significantly faster than other deep learning detection models, such as those in \cite{CNNADV} and \cite{GIS}, with comparable accuracy. It also detects the adversary in a GPS-denied environment, which earlier works in this direction do not. However, the fundamental drawback of the proposed model is that the detection algorithm is sensitive to poor lighting. \textbf{ Summary:} \textcolor{black}{The objective here is to enhance the possibilities of adversary drone detection using various machine learning and deep learning approaches. Apart from drone detection, various works have also focused on using such algorithms to prevent attacks on the drone communication network. A summary of the advantages and disadvantages of the major works is shown in Table \ref{mladv}, and a summary of the related works is given in Table \ref{relml}.
As seen, ML algorithms have a high capability in detecting unwanted drones and preventing drones from entering restricted areas. These algorithms are also being widely proposed for secure traffic management and the prevention of mid-air collisions.} \section{Applications of Fog Computing for Drone Communication Security} \label{sec8} Fog computing is a powerful complement to cloud computing that can provide a better QoS and can also help in mitigating the security issues of the cloud computing system. It is difficult to connect such a large number of drones directly to the cloud due to high latency delays and unpredictable network connections. Connections between the drones and the fog layer can be easily established with low latency. The most important benefit of fog computing is that it performs all the computations and keeps the data near the drone, which keeps the data safer and more secure. Fog computing also supports mobility, scalability, heterogeneity, and platform-independence. \textcolor{black}{The concept of edge computing comes close to fog computing, and the two are said to overlap to a great extent \cite{fog_edge_iot}. Edge computing moves resources from the cloud towards the edge of the network and is more focused on the 'things' side. However, fog computing concerns itself mainly with the infrastructure.} In this section, we first discuss the basic issues with the traditional approaches that do not use fog computing, and then we move on to the challenges in drone communication and possible fog computing based solutions. There are various traditional methods that help in securing drone communication without leveraging the benefits of fog computing. A traditional man-in-the-middle attack detection system has been proposed in \cite{non5}, which uses the precise timing of arrival of data packets to infer the possibility of the attack. If a packet arrives later than the expected threshold time, the possibility of an attack is inferred.
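The timing check of \cite{non5} can be condensed into a short sketch; the threshold value and names below are illustrative assumptions, not figures from the cited work:

```python
# Sketch of timing-based man-in-the-middle detection: a packet that
# arrives later than the expected threshold suggests an extra relay
# hop in the path. Threshold and names are illustrative assumptions.
EXPECTED_ARRIVAL_WINDOW = 0.050  # seconds; assumed channel baseline

def possible_mitm(send_time, recv_time, threshold=EXPECTED_ARRIVAL_WINDOW):
    # flag the packet if its observed delay exceeds the threshold
    return (recv_time - send_time) > threshold

t0 = 100.000
assert not possible_mitm(t0, t0 + 0.030)  # on-time packet
assert possible_mitm(t0, t0 + 0.120)      # delayed packet is flagged
```

Because the observed delay also reflects channel conditions, a noisy link can trip the same check, which is exactly the weakness of this approach.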
This method can fail in several circumstances, for example where heavy background noise is present, since the arrival time of a data packet depends heavily on the transmission channel. Bamasag et al. \cite{non6} proposed a multicast authentication model for data transmission in the desired time interval. This model makes use of Shamir's secret sharing technique \cite{non7}, in which the secret can be unlocked if the authenticator holds a sufficient number of shares. Although this method provides some reliability, storing such a large number of keys is not preferred considering the resource-constrained nature of drones. We further discuss the list of specific security issues that can be resolved and prevented using fog computing as a solution. \begin{table*}[] \centering \caption{A summary of advantages and disadvantages of major applications of Fog Computing for drone communication security.} \begin{tabular}{|l|l|l|l|} \hline \rowcolor[gray]{0.8} \begin{tabular}[c]{@{}l@{}}Major\\ Approaches\end{tabular} & Advantages & Disadvantages & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Benefits over}\\ \textcolor{black}{traditional approaches} \end{tabular}\\ \hline \cite{27} & \begin{tabular}[c]{@{}l@{}}• High performance and accuracy as the\\ spoofing attack can be detected from\\ $10$ meters and in $250$ milliseconds\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Cannot efficiently avoid collisions\\ and also sometimes fails in\\ detecting the obstacles\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Easy deployment,}\\ \textcolor{black}{Confidentiality} \end{tabular} \\ \hline \cite{IDS} & \begin{tabular}[c]{@{}l@{}}• A very high efficiency and a low\\ resource-demanding model\end{tabular} & \begin{tabular}[c]{@{}l@{}}• Highly dependent on network's\\ latency which demands more\\ optimization\end{tabular} & \begin{tabular}[c]{@{}l@{}}\textcolor{black}{Security against man-}\\ \textcolor{black}{in-the-middle attacks} \end{tabular} \\ \hline \cite{FSS} &
\begin{tabular}[c]{@{}l@{}}• Identity-based authentication enhances\\ end-to-end security between the edge\\ layer and the fog layer\end{tabular} & \begin{tabular}[c]{@{}l@{}}• The data processing time highly\\ varies with the configuration of the\\ device used for detection\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Authentication, Data}\\ \textcolor{black}{integrity, Non-repudiation} \end{tabular} \\ \hline \cite{fogres} & \begin{tabular}[c]{@{}l@{}}• The architecture covers all the three\\ aspects i.e. minimizing the latency and\\ the energy consumption and maximizing\\ the reliability in the drones\end{tabular} & \begin{tabular}[c]{@{}l@{}}• The architecture is not very fast\\ and future work is needed for\\ increasing the efficiency of the\\ architecture\end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Better performance than} \\ \textcolor{black}{ LRGA-MIE and LP-based} \\ \textcolor{black}{algorithms} \end{tabular}\\ \hline \cite{cache} & \begin{tabular}[c]{@{}l@{}}• The model decreases the latency\\ experienced by the drone and increases\\ the Quality-of-Experience (QoE)\end{tabular} &\begin{tabular}[c]{@{}l@{}} • Highly sensitive to the number\\ of drones \end{tabular} & \begin{tabular}[c]{@{}l@{}} \textcolor{black}{Low latency, High} \\ \textcolor{black}{QoE} \end{tabular}\\ \hline \end{tabular} \label{fogadv} \end{table*} \subsection{ \textit{\textbf{\textcolor{black}{GPS spoofing attacks}}}} UAVs in the fog environment are susceptible to a number of challenges that threaten their benefits of mobility, scalability, and accurate location tracking. GPS spoofing is a notable security breach attack that sends incorrect GPS information to the receiver. UAVs need special attention since traditional internet systems such as cloud computing cause latency overloads and unforeseeable network issues. Various GPS spoofing detection methods have been adopted in the past.
The major ones are detection based on cryptographic algorithms and detection using auxiliary equipment, as mentioned in \cite{27}. The authors take into consideration the flight security and safety of drones acting as fog nodes in an airborne fog computing system. The model uses visual sensors combined with an IMU (Inertial Measurement Unit) for information fusion to solve GPS spoofing issues. Using a DJI Phantom $4$ with a frame rate of $30$ fps, it was observed that the spoofing attack can be detected from $10$ meters away and within $250$ milliseconds. The authors of \cite{fogt1} propose a fog-to-cloud computing framework for a Dragnet-based amateur drone surveillance system. GPS spoofing and jamming attacks can be detected using the framework, which is inspired by traditional military anti-drone technologies. The amateur surveillance system is empowered with a ``brain'' for high-level intelligence consisting of a fog-to-cloud model. It is a system of coordinated measures for sensing a spoofing attack on the system through global decision-making based on the actions of the amateur drones. The Kashmar incident mentioned in \cite{26} could have been prevented if the US had employed some of the fog computing frameworks mentioned above. \subsection{ \textit{\textbf{\textcolor{black}{Man-In-The-Middle attack}}}} The man-in-the-middle attack threatens a very demanding aspect of the IoD, namely integrity, as mentioned in \cite{int}. A pioneering work in the direction of a low resource-demanding model with a high level of security to prevent man-in-the-middle attacks in IoD is proposed in \cite{IDS}. The authors propose an intrusion detection system (IDS) and intrusion prevention system (IPS) for preventing man-in-the-middle attacks at the fog layer. Although the model is proposed for IoT devices in general, it can be easily implemented on IoD as well. In the proposed network model, IDS nodes are deployed at a one-hop distance.
Whenever an IDS node finds a compromised node or an intruder, it simply indicates to the nodes in its proximity to cut off connections with the compromised node. On deployment, IDS nodes acquire the key from the cloud and distribute it to fog nodes. To prevent intrusion, all the packets are encrypted using the Advanced Encryption Standard (AES), and Diffie-Hellman key exchange \cite{AES} is used for key exchange. IDS nodes periodically interrogate the fog nodes and observe the receiver's behavior. IDS nodes expect the receiver to decrypt the packet within some pre-defined time. If the round-trip time of the interrogation exceeds the pre-defined time, the IDS concludes that the fog node is malicious. Additionally, even if an attacker knows of the existence of the IDS nodes and the protocol they are using, the attacker still does not know the nature of the interrogation, which is pre-programmed before the deployment of the nodes. This further reduces the chances of the attack. The proposed model, when implemented at the fog layer, could help in the identification and prevention of man-in-the-middle attacks so that manipulated information does not reach the cloud. The simulation of the model was done over OMNET++. The latency overhead for deploying IDS and IPS was $40$ milliseconds. The time taken to detect an attack was found to be between $2.48$ and $2.53$ seconds. Since $2$ seconds was the time between investigation sessions, the actual discovery time was approximately $0.5$ seconds. The energy overhead incurred on the fog nodes by IDS nodes was negligible, which makes this a very lightweight model for detecting man-in-the-middle attacks. However, in the proposed model, the investigation time is inversely proportional to the network's latency and the energy overhead of the IDS network model. Further work is required in the direction of optimization algorithms to improve the efficiency of the proposed model.
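The interrogation logic described above can be sketched as follows; the deadline constant is an assumption loosely derived from the reported timings, and all names are hypothetical:

```python
# Sketch of the IDS interrogation check: a fog node must decrypt a
# challenge within a pre-defined time; if the interrogation round
# trip exceeds it, the node is flagged as malicious. The period is
# taken from the text (2 s between sessions); the deadline is an
# assumed approximation.
INTERROGATION_PERIOD = 2.0  # seconds between investigation sessions
DECRYPT_DEADLINE = 0.5      # assumed pre-defined decryption time

def classify_node(round_trip_time):
    # time actually spent decrypting, after removing the session gap
    effective = round_trip_time - INTERROGATION_PERIOD
    return "malicious" if effective > DECRYPT_DEADLINE else "trusted"

assert classify_node(2.3) == "trusted"     # replied within the deadline
assert classify_node(2.53) == "malicious"  # upper end of detection times
```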
\begin{table*}[] \centering \caption{Applications of Fog computing for drone communication security.} \begin{tabular}{|p{0.033\linewidth}|p{0.11\linewidth}|p{0.235\linewidth}|p{0.114\linewidth}|p{0.189\linewidth}|p{0.17\linewidth}|} \hline \rowcolor[gray]{0.7} Ref. & Security Issues & Mechanism & \begin{tabular}[c]{@{}l@{}}Fog Computing\\ Feature used\end{tabular} & Major achievement & Open issues \\ \hline \cite{27} & \begin{tabular}[c]{@{}l@{}}GPS spoofing,\\ Eavesdropping\end{tabular} & \begin{tabular}[c]{@{}l@{}}The visual sensors combined\\ with IMU are deployed on\\ the drone and the data is\\ transmitted to the fog layer\\ detecting the attack\end{tabular} & \begin{tabular}[c]{@{}l@{}}Distributed\\ computing,\\ Scalability\end{tabular} & \begin{tabular}[c]{@{}l@{}}The spoofing attack can\\ be detected from $10$\\ meter far and in $250$\\ milli-seconds.\end{tabular} & \begin{tabular}[c]{@{}l@{}}Methods for avoiding \\ mid-air collisions\\can also be \\implemented\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{IDS} & \begin{tabular}[c]{@{}l@{}}Man-in-the-\\ middle attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The malicious data is\\ prevented from entering the\\ cloud layer as the attack is\\ detected in the fog layer by\\ using the cryptographic keys\end{tabular} & \begin{tabular}[c]{@{}l@{}}High\\computation\\ power\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model is very less\\ resource-demanding\\ and can detect the attack\\ in less than $0.5$ secs\end{tabular} & \begin{tabular}[c]{@{}l@{}}Future work on\\increasing the\\efficiency of the\\model is much\\needed\end{tabular} \\ \hline \cite{FSS} & \begin{tabular}[c]{@{}l@{}}Eavesdropping,\\ Man-in-the-\\ middle attacks,\\ Hijacking\end{tabular} & \begin{tabular}[c]{@{}l@{}}The drone gets authenticated\\ from the fog layer that\\ contains the hashing\\ algorithm, and then only is\\ allowed to transmit data\end{tabular} & \begin{tabular}[c]{@{}l@{}}Low latency,\\ Mobility,\\ 
Heterogeneity\end{tabular} & \begin{tabular}[c]{@{}l@{}}The average end-to-end\\ processing time was\\ $2.59$ secs and the\\ average overall response\\ time was $3.17$ secs\end{tabular} & \begin{tabular}[c]{@{}l@{}}The average overall\\ response time highly\\ varied with the\\number of devices\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{fogres} & \begin{tabular}[c]{@{}l@{}}Latency,\\ Resource\\ constraints\end{tabular} & \begin{tabular}[c]{@{}l@{}}The task is divided into\\ several small task using\\ ADMM algorithm and is\\ transmitted to the nearby\\ ready drones that completes\\ the task and transmit back\\ the result acting as a fog node\end{tabular} & \begin{tabular}[c]{@{}l@{}}Low latency,\\ High\\ computation\\ power\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model showed\\ positive results in\\ minimizing the latency\\ and the energy\\ consumption and\\ maximizing the\\ reliability in the drones\end{tabular} & \begin{tabular}[c]{@{}l@{}}Future work is \\required in making\\the model more\\ reliable and safe\\ for the drones\end{tabular} \\ \hline \cite{fogt1} & GPS spoofing & \begin{tabular}[c]{@{}l@{}}The surveillance devices\\ acting as a fog layer sends\\ the data to the cloud layer\\ and gets the result back and\\ transmits to the amateur\\ drone for implementation\end{tabular} & \begin{tabular}[c]{@{}l@{}}High\\computation\\ power,\\ Distributed\\ computing\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model detects the\\ authorized drone with a\\ greater probability than\\ detecting the false or\\ unauthorized drone\end{tabular} & \begin{tabular}[c]{@{}l@{}}To efficiently detect \\the drone, a high\\detection delay is\\expected from the\\ model which should\\ be reduced\end{tabular} \\ \hline \rowcolor[gray]{0.9} \cite{fogt2} & \begin{tabular}[c]{@{}l@{}}DoS attacks,\\ DDoS attacks\end{tabular} & \begin{tabular}[c]{@{}l@{}}The serial number of each\\ device is stored in the fog\\ layer and whenever any\\ device wants to communicate\\ with the 
other device it needs\\ to verify its serial number\\ with the fog layer\end{tabular} & \begin{tabular}[c]{@{}l@{}}Low latency,\\ High \\computation\\ power \end{tabular} & \begin{tabular}[c]{@{}l@{}}The model uses very\\ little bandwidth and\\increases the security\\ in the IoT devices\\as no device can\\communicate without\\authentication\end{tabular} & \begin{tabular}[c]{@{}l@{}}The model can be\\ implemented on the\\drones specifically to\\increase their security\end{tabular} \\ \hline \end{tabular} \label{relfog} \end{table*} \subsection{ \textit{\textbf{\textcolor{black}{Eavesdropping}}}} Eavesdropping is an attack that affects the confidentiality of the data in drones \cite{int}. Classical security solutions such as the Secure Sockets Layer (SSL) \cite{SSL} exist, but they cannot be implemented on drones, as drones lack sufficient memory and CPU power to perform the required cryptographic operations. Therefore, offloading the additional security operations to a more resourceful entity such as a fog node is a promising solution. A model that addresses this problem in drones is proposed in \cite{FSS}. A Fog Security Service (FSS) mechanism is proposed that uses public- and private-key cryptography schemes. It consists of a verifier, a PKG (private key generator), and a hashing algorithm at the fog layer. In the proposed model, input security parameters that include a unique identifier, a username, and a password for verification of the sender are assigned to every drone. The PKG is used for communication between the fog layer and the edge layer. After node authentication, asymmetric encryption is used to obtain symmetric keys from the fog layer. For public-key encryption, the Rivest-Shamir-Adleman (RSA) algorithm \cite{RSA} is used. Nonce values are also used to prevent replay attacks. FSS provides identity-based authentication, as the private key is used for both encryption and decryption.
This enhances the end-to-end security between the edge and the fog layer. For IoD networks, ground access points along with UAVs are present \cite{fogaccess}. Therefore, installing the proposed FSS layer along the data transmission paths could identify and prevent eavesdropping problems in drones. An OPNET-based network simulator is used to evaluate the proposed method. In addition to different traffic loads, several devices representing different capacities and resources were used for experimentation. The performance of the model was evaluated based on the response time. The average end-to-end processing time was $2.59$ secs, while the overall response time was $3.17$ secs on average. The response time was measured against the state-of-the-art Authentication Proxy as a Service (ApaaS) \cite{ApaaS} and legacy methods. The processing time varied according to the different hardware used. However, the heterogeneity of drones created a lot of dependencies related to processing time. Decreasing the variance of the processing time caused by this heterogeneity is open for research. \subsection{\textit{\textbf{\textcolor{black}{Resource constraint issues}}}} As discussed above, drones have several applications, such as delivering products and military applications, and therefore need high computation power. Latency-sensitive applications such as disaster management and path recognition are also at risk due to this resource constraint. A Fog Computing aided Swarm of Drones (FCSD) architecture is proposed in \cite{fogres}, which helps in minimizing the latency in drone communication. As the drones are highly resource-constrained, the task is divided into several small tasks using the Proximal Jacobi Alternating Direction Method of Multipliers (ADMM) \cite{ADMM}. ADMM is an algorithm that distributes a task into several small tasks and assigns them to the devices connected in the network that are ready to perform the task.
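As a rough illustration of this divide-and-offload idea, the sketch below splits a workload and greedily assigns subtasks to the ready drone with the lowest estimated finish time. It is a simplified stand-in under assumed speeds and sizes, not the actual Proximal Jacobi ADMM of \cite{fogres}:

```python
# Simplified stand-in for task splitting and allocation in a swarm:
# greedy longest-task-first assignment to minimize the makespan
# (overall completion latency). Speeds and sizes are assumptions.
def split_task(total_units, n_subtasks):
    base, rem = divmod(total_units, n_subtasks)
    return [base + (1 if i < rem else 0) for i in range(n_subtasks)]

def assign(subtasks, drone_speeds):
    # drone_speeds: processing rate (units/second) of each ready drone
    finish = [0.0] * len(drone_speeds)
    plan = [[] for _ in drone_speeds]
    for units in sorted(subtasks, reverse=True):
        d = min(range(len(finish)),
                key=lambda i: finish[i] + units / drone_speeds[i])
        finish[d] += units / drone_speeds[d]
        plan[d].append(units)
    return plan, max(finish)  # makespan ~ overall latency

subtasks = split_task(100, 4)  # -> [25, 25, 25, 25]
plan, makespan = assign(subtasks, [10.0, 5.0, 5.0])
assert makespan == 5.0  # fast drone takes two subtasks, others one each
```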
An initiator drone is used to assign the tasks to the nearby drones using the ADMM algorithm. The drones complete the specified tasks and transmit the computed results back to the initiator drone. The simulation results demonstrate a considerable improvement in terms of reduction in transmission latency and computation latency. The energy consumption, including both transmission and computation energy consumption, is also considered for the FCSD to reduce the overall energy consumption of the drones. The ADMM-based algorithm results in better performance when compared with pre-existing baseline algorithms such as the latency and reliability constrained minimum energy consumption algorithm based on a genetic algorithm (LRGA-MIE) \cite{LRGA} and a newly developed Linear Programming (LP) based algorithm. The Proximal Jacobi ADMM based algorithm gave the optimal solution, and the algorithm converged after the $14^{th}$ iteration. Another model for minimizing the latency in a swarm of drones, which uses a decentralized algorithm for task allocation based on game theory, is discussed in \cite{ano}. However, this model fails to provide the level of reliability achieved by the ADMM algorithm. Moreover, the model in \cite{ano} requires a larger number of iterations to converge towards the optimal solution compared to \cite{fogres}. \subsection{\textit{\textbf{\textcolor{black}{Minimum Latency in data dissemination}}}} The dissemination of data is required to have the least possible latency and error. A model known as edge caching is proposed in \cite{cache}, in which common files that the drone captures are cached and made available whenever needed. The data that the user demands is generated by merging the data files collected by the different sensors installed on the UAV. The common data files can be stored in the cache-enabled UAV, and then transmitted directly to the demanding user.
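A minimal sketch of such a cache-enabled UAV, assuming a simple least-recently-used policy (the policy, capacity, and all names are our assumptions, not details of \cite{cache}):

```python
from collections import OrderedDict

# A cache-enabled UAV keeps recently requested common data files and
# serves repeat requests directly, skipping the costly re-acquisition
# step. LRU eviction is an assumed policy for illustration.
class UavEdgeCache:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def get(self, file_id, fetch):
        if file_id in self.store:
            self.hits += 1
            self.store.move_to_end(file_id)  # mark as recently used
            return self.store[file_id]
        self.misses += 1
        data = fetch(file_id)                # costly sensor/merge step
        self.store[file_id] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used
        return data

cache = UavEdgeCache(capacity=2)
fetch = lambda fid: "data:" + fid
cache.get("map_tile", fetch)  # miss: acquired and cached
cache.get("map_tile", fetch)  # hit: served from the UAV cache
assert (cache.hits, cache.misses) == (1, 1)
```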
This model helps in decreasing the latency and increasing the Quality-of-Experience (QoE), as the common data that collectively helps in generating the demanded data is already cached. However, this model suffers from the drawback that as the number of drones increases, the data transmitting power decreases. The simulation results demonstrate that the transmission power decreases by $86$\% when the number of UAVs is increased from $3$ to $7$. Hence this model is highly sensitive to the number of UAVs. However, another similar model proposed in \cite{inter} gives a significantly better performance compared to the mechanism proposed in \cite{cache}. The authors of \cite{inter} use ML algorithms such as CNN and RNN to classify the already existing data and the data required for the generation of the demanded data. The use of these algorithms significantly improves the performance of the model even with an increased number of drones. \textbf{ Summary:} \textcolor{black}{This section shows that fog computing can help in preventing various attacks like GPS spoofing, man-in-the-middle attacks, and eavesdropping attacks in drone communication. Fog computing mainly works by minimizing the latency in drone communication, considering the resource-constrained nature of drones. A summary of the advantages and disadvantages of major works that use fog computing as a solution to drone communication security is presented in Table \ref{fogadv}, and a summary of all the related works in this direction is presented in Table \ref{relfog}.
As seen, fog computing minimizes the load on the cloud and helps the drone offload its tasks to the fog layer, thereby minimizing the latency and maximizing the reliability of drone communication.} \section{\textcolor{black}{Lessons Learned,} Future Research Directions and Open Challenges} \label{sec9} \subsection{\textcolor{black}{Lessons Learned}} The UAV industry is growing rapidly, and as its applications come into use, we face various challenges that still need to be handled. Although the various technologies mentioned above are anticipated to help secure drone communications, these technologies have various constraints as well. These constraints need to be closely examined before the solutions are implemented in different drone applications. Blockchain is itself an emerging technology and has not been widely implemented and tested in non-financial domains. \textcolor{black}{Blockchain technology has properties that can help secure drone communication across application areas like mining, delivery, surveillance, and disaster management effectively, because it improves data security (against DoS, jamming, GPS spoofing, eavesdropping, and wormhole attacks) and transparency, even in a swarm of drones.} \textcolor{black}{SDN aims at making the network more agile, flexible, and secure (against DoS, jamming, GPS spoofing, and black hole attacks) because of its infrastructure separating the data plane and the control plane. These incentives make drone communication networks useful in the application areas of military, photography, and $5$G networks.} \textcolor{black}{The choice among different machine learning approaches depends on the application area and domain. ML can be used for securing drone communication networks (against DoS, GPS spoofing, jamming, and wormhole attacks), as well as for the physical security (drone detection) of the drones.
Characteristics of frameworks using ML make them suitable for drone applications like traffic management, fault detection, and navigation systems.} \textcolor{black}{Fog computing provides a better QoS, scalability, flexibility, low latency, and platform-independence, and improves the security of the network (against GPS spoofing, man-in-the-middle, eavesdropping, hijacking, and DoS attacks). All these advantages make such a framework suitable for drone application areas involving big data, smart vehicles, and energy conservation.} \subsection{\textcolor{black}{Future Research Directions and Open Challenges}} Some of the future research directions in this field are as follows: \begin{itemize} \item Drones are resource-constrained devices. Implementing security algorithms such as blockchain in a swarm of drones means adding more storage capacity and computation ability on the drones. This might end up reducing the flight time. Moreover, using blockchain for non-critical communication can be acceptable, but for critical data such as location coordinates, blockchains can currently cause high latency. Further research is required to enable these security algorithms while keeping the resource-constrained nature of drones in mind. \item The gateways between drones, ground controllers, and satellites in drone communication are also highly vulnerable to various security attacks. If the gateways are compromised, then the whole network is compromised, even if the end devices are highly secure. Further analysis is required on how to secure the gateways between different hops in drone communication. \item The current architecture of fog computing does not support inter-fog resource and task sharing. In drone communication, a few fog nodes or access points might be less loaded than others. In such scenarios, the fog nodes could directly interact with each other and share tasks among themselves.
This could further reduce the transfer of data from fog to cloud, thereby enhancing security. \item The current blockchain architecture is highly limited in terms of the number of nodes in permissioned networks and in terms of throughput in permissionless networks. Various consensus algorithms are being designed to support high throughput along with a large number of nodes or users. \item The concept of multiple and distributed controllers has been proposed in some works to overcome the problem of the controller being a single point of failure in SDN architectures. However, further work is required to ensure secure and near real-time communication between different controllers in SDN. \end{itemize} \section{Conclusion} \label{sec10}In this study, \textcolor{black}{we gave an overview of the various application areas of drones. Following this, security threats (drone-specific and IoT-generic) and potential vulnerabilities in specific applications of drone communications were explained. Furthermore, a brief overview of the fundamentals of the various technologies was given.} Existing and upcoming solutions to overcome the security threats in drone applications \textcolor{black}{using different concepts have been discussed in detail in the subsequent sections}. The major technologies covered in the solution architectures include software-defined networks, blockchain, fog/edge computing, and machine learning. The detailed benefits of these technologies in overcoming the security threats in specific drone applications have also been discussed. The state of the art in drone security has also been discussed, with some improvement suggestions, open issues, and future research directions. This survey is expected to serve as a valuable resource for security enhancement in upcoming and existing drone applications.
\section{Discussion} \label{sec:discussion} \begin{table*}[bt] \newcounter{rownum} \newcommand\rownumber{\stepcounter{rownum}\textcolor{gray}{\arabic{rownum}}} \begin{center} \begin{tabular}{r p{0.45\textwidth} p{0.45\textwidth}} \toprule & {\normalsize Original} & {\normalsize Proposed}\\ \midrule \rownumber & It 's \underline{\it heavy rain} today & It 's \underline{\it raining heavily} today\\ \rownumber & Everyone wants to be \underline{\it success} . & Everyone wants to be \underline{\it successful} .\\ \rownumber & \underline{\it On the} 3 weeks , I learned many things . & \underline{\it In the last} 3 weeks , I learned many things . \\ \rownumber & \underline{\it this is the} first entry ! : D & \underline{\it This is my} first entry ! : D\\ \rownumber & Help me \underline{\it getting English skill} , please . & Help me \underline{\it improve my English skills} , please .\\ \rownumber & \underline{\it At} last night , the 24th of June 2010 was big night for the Japanese national team and heaps of fans . & Last night , the 24th of June 2010 was \underline{\it a} big night for the Japanese national team and heaps of fans .\\ \rownumber & I \underline{\it start to learning} English again . & I \underline{\it am starting to learn} English again .\\ \rownumber & I went to \underline{\it Beijin} in China \underline{\it for} four days \underline{\it in} this week . & I went to \underline{\it Beijing} in China four days this week .\\ \rownumber & After a long day , \underline{\it I and my roommate} usually sit down , drink coffee and listen to music . & After a long day , \underline{\it my roommate and I} usually sit down , drink coffee and listen to music .\\ \rownumber & Do you know \underline{\it a Toeic} ? & Do you know about \underline{\it TOEIC} ?\\ \toprule \end{tabular} \end{center} \caption{Sample edits from our Lang-8 development set using only the character-based encoder-decoder network. 
Note that the model is able to handle misspellings (\textit{Beijin}) as well as rare words (\textit{TOEIC}) and emoticons (\textit{:~D}).} \label{tab:good_samples} \end{table*} \begin{table*}[bt] \newcounter{rownum2} \newcommand\rownumber{\stepcounter{rownum2}\textcolor{gray}{\arabic{rownum2}}} \begin{center} \begin{tabular}{l p{0.45\textwidth} p{0.45\textwidth}} \toprule & {\normalsize Original} & {\normalsize Proposed}\\ \midrule \rownumber & Broke my heart & \underline{\it I} broke my heart\\ \rownumber & I want to big size bag & I want to \underline{\it be} a big size bag\\ \rownumber & This is typical Japanese male \underline{\it hobit} & This is \underline{\it a} typical Japanese male \underline{\it hobby}\\ \rownumber & I 'm so sorry to miss Lukas \underline{\it Moodysson 's} Lijia 4-ever . & I 'm so sorry to miss Lukas \underline{\it Moodysnot} Lijia 4-ever .\\ \rownumber & The match is the Rockets \underline{\it withthe} Bulls . & The match is the Rockets \underline{\it withth} Bulls .\\ \bottomrule \end{tabular} \end{center} \caption{Sample incorrect and ambiguous edits, again using just the encoder-decoder network.} \label{tab:bad_samples} \end{table*} \sssection{Qualitative Analysis} We present examples of correct and incorrect edits on the Lang-8 development set in Table~\ref{tab:good_samples} and Table~\ref{tab:bad_samples}. Despite operating at the character level, the network is occasionally able to perform rearrangements of words to form common phrases (e.g. {\it I and my roommate} $\rightarrow$ {\it my roommate and I}) and insert and delete words where appropriate. On the other hand, the network can also sometimes mangle rare words ({\it Moodysson} $\rightarrow$ {\it Moodysnot}) and fail to split common words missing a separating space ({\it withthe} $\rightarrow$ {\it withth}), suggesting that while common patterns are captured, the network lacks semantic understanding.
\begin{figure}[!ht] \centering \includegraphics[width=0.4\textwidth]{fscore_length.pdf} \caption{$F$-score vs. length of input sentence on development set. Only bins with $10$ or more sentences included.} \label{fig:perf_length} \end{figure} \sssection{Performance Breakdown} While the encoder-decoder network can itself produce modifications, on less noisy datasets such as the CoNLL Challenge datasets a language model can greatly improve performance. Increasing the language model weight $\lambda$ tends to improve recall at the expense of precision. On the other hand, using edit classification to filter spurious edits increases precision, often with smaller drops in recall. We do not observe a trend of decreasing $F$-score for a wide range of sentence lengths (Figure~\ref{fig:perf_length}), likely due to the attention mechanism, which helps to prevent the decoded output from diverging from the input sentence. \begin{table}[bt] \begin{center} \begin{tabular}{l r r r } \toprule Type & Count & $R$ no aug & $R$ aug \\ \midrule \textbf{ArtOrDet} & 717 & 20.08 & 29.14 \\ Wci & 434 & 2.30 & 1.61 \\ \textbf{Nn} & 400 & 31.50 & 51.00 \\ Preposition & 315 & 13.01 & 7.93 \\ Word form & 223 & 26.90 & 19.73 \\ \bottomrule \end{tabular} \end{center} \caption{CoNLL development set recall for 5 most frequent error categories with and without training on data with synthesized article/determiner and noun number errors. Wci denotes wrong collocation/idiom errors.} \label{tab:aug_err_types} \end{table} We report the inter-annotator agreement in Table~\ref{tab:conll2014}, which gives a possible bound on the $F$-score for this task. \sssection{Effects of Data Augmentation} We obtain promising improvements using data augmentation, boosting the $F_{0.5}$-score on the development set from 31.55 to 34.81. For the two error types where we synthesize data (article or determiner and noun number) we observe significant increases in recall, as shown in Table~\ref{tab:aug_err_types}.
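The $F_{0.5}$-scores quoted here weight precision more heavily than recall. The standard $F_\beta$ formula can be computed directly; the sample values below are illustrative, not the paper's results:

```python
# F_beta combines precision P and recall R as
#   (1 + beta^2) * P * R / (beta^2 * P + R);
# beta = 0.5 emphasizes precision, as in the CoNLL evaluation.
def f_beta(precision, recall, beta=0.5):
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

assert f_beta(0.5, 0.5) == 0.5
# with beta = 0.5, trading high precision for high recall hurts:
assert f_beta(0.6, 0.4) > f_beta(0.4, 0.6)
```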
The same phenomenon has been observed by \newcite{rozovskaya2012ui}. Interestingly, the recall of other error types (see \newcite{ng2014conll} for descriptions) decreases. We surmise this is because the additional training data contains only ArtOrDet and Nn errors, and hence the network is encouraged to simply copy the output when those error types are not present. We hope synthesizing data with a variety of other error types may fix this issue and improve performance. \begin{table*}[bt] \begin{center} \begin{tabular}{l l l l } \toprule Type & Description & Original & Proposed \\ \midrule Mec & Spelling, punctuation, & Another identification is & Another identification is \\ & capitalization, etc. & \underline{\it Implanting} RFID chips ... & \underline{\it implanting} RFID chips ... \\ Rloc- & Redundancy & \dots it seems that our freedom \underline{\it of} & \dots it seems that our freedom \\ & & \underline{\it doing things} is being invaded. & is being invaded. \\ Wci & Wrong collocation/idiom & Every coin has \underline{\it its} two sides. & Every coin has two sides. \\ \bottomrule \end{tabular} \end{center} \caption{Examples of the aforementioned challenging error types that our system fixed.} \label{tab:challenging_err_types} \end{table*} \vspace{1em} \sssection{Challenging Error Types} We now examine a few illustrative error types from the CoNLL Challenges that originally motivated our approach: orthographic (Mec), redundancy (Rloc-), and idiomatic errors (Wci). Since the 2013 Challenge did not score these error types, we compare our recall to those of participants in the 2014 Challenge \cite{ng2014conll}.\footnote{The team that placed 9th overall did not disclose their method; thus we only compare to the 12 remaining teams.} Note that systems only predict corrected sentences and not error types, and hence precision is not compared. We use the results from our final system, including both data augmentation and edit classification. 
Some examples of these error types are shown in Table~\ref{tab:challenging_err_types}. \begin{itemize} \item \textbf{Mec}: We obtain a recall of 37.17 on the Mec error type, higher than all the 2014 Challenge teams besides one team (RAC) that used rule-based methods to attain 43.51 recall. The word/phrase-based translation and language modeling approaches do not seem to perform as well for fixing orthographic errors. \item \textbf{Rloc-}: Redundancy is difficult to capture using just rule-based approaches and classifiers; our approach obtains 17.47 recall which places second among the 12 teams. The top system obtains 20.16 recall using a combination of MT, LM, and rule-based methods. \item \textbf{Wci}: Although there are 340 collocation errors, all teams performed poorly on this category. Our recall places 3rd behind two teams (AMU and CAMB) whose methods both used an MT system. Again, this demonstrates the difficulty of capturing whether a sentence is idiomatic through only classifiers and rule-based methods. \end{itemize} We note that our system obtains significantly higher precision than any of the top 10 teams in the 2014 Challenge (49.24 vs. 41.78), which comes at the expense of recall. \sssection{Limitations} A key limitation of our method as well as most other translation-based methods is that it is trained on just parallel sentences, despite some errors requiring information about the surrounding text to make the proper correction. Even within individual sentences, when longer context is needed to make a correction (for example in many subject-verb agreement errors), the performance is hit-and-miss. The edits introduced by the system tend to be fairly local. Other errors illustrate the need for natural language understanding, for example in Table~\ref{tab:bad_samples} the corrections \textit{Broke my heart} $\rightarrow$ \textit{\underline{I} broke my heart} and \textit{I want to big size bag} $\rightarrow$ \textit{I want to \underline{be} a big size bag}.
Finally, although end-to-end approaches have the potential to fix a wide variety of errors, it is not straightforward to then classify the types of errors being made. Thus the system cannot easily provide error-specific feedback. \section{Decoding} While it is simpler to integrate a language model by using it as a re-ranker, here the language model probabilities are combined with the encoder-decoder network through beam search. This is possible because the attention mechanism in the decoder network prevents the decoded output from straying too far from the source sentence. \subsection{Language Model} To model the distribution \begin{equation} P_\mathrm{LM}({y}_{1:T}) = \prod_{t=1}^T P({y}_t | {y}_{< t}) \end{equation} we build a Kneser-Ney smoothed 5-gram language model on a subset of the Common Crawl Repository\footnote{\tt http://commoncrawl.org} collected during 2012 and 2013. After pruning, we obtain 2.2 billion $n$-grams. To build and query the model, we use the KenLM toolkit \cite{Heafield2013}. \subsection{Beam Search} For inference we use a beam search decoder combining the neural network and the language model likelihood. Similar to \newcite{hannun2014deep}, at step $k$, we rank the hypotheses on the beam using the score \begin{align*} s_k({y}_{1:k}|{x}) = \log P_\mathrm{NN}({y}_{1:k} | {x}) + \lambda \log P_\mathrm{LM}({y}_{1:k}) \end{align*} where the hyper-parameter $\lambda$ determines how much the language model is weighted. To avoid penalizing longer hypotheses, we additionally normalize scores by the number of words in the hypothesis $|{y}|$. Since decoding is done at the character level, the language model probability $P_\mathrm{LM}(\cdot)$ is only incorporated after a space or end-of-sentence symbol is encountered. \subsection{Controlling Precision} \label{sec:edit_class} For many error correction tasks, precision is emphasized more than recall; for users, an incorrect suggestion is worse than a missed mistake. 
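As a minimal illustration of the combined scoring used during beam search (a sketch with hypothetical log-probabilities, not the actual decoder implementation), the length-normalized score can be written as:

```python
def hypothesis_score(log_p_nn, log_p_lm, num_words, lm_weight=0.3):
    """Combined beam-search score: encoder-decoder log-likelihood plus a
    weighted language-model log-probability, normalized by the number of
    words in the hypothesis to avoid penalizing longer hypotheses."""
    return (log_p_nn + lm_weight * log_p_lm) / max(num_words, 1)

# Rank two hypothetical five-word hypotheses with lambda = 0.3: the first
# has a slightly worse network score but a much better LM score.
h1 = hypothesis_score(log_p_nn=-4.2, log_p_lm=-9.0, num_words=5)
h2 = hypothesis_score(log_p_nn=-3.9, log_p_lm=-15.0, num_words=5)
```

With a nonzero $\lambda$, the more fluent hypothesis (`h1`) outranks the one the network alone slightly prefers, which is exactly the trade-off the weight controls.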
In order to filter spurious edits, we train an edit classifier to classify edits as correct or not. We run our decoder on uncorrected sentences from our training data to generate candidate corrected sentences. We then align the candidate sentences to the uncorrected sentences by minimizing the word-level Levenshtein distance between each candidate and uncorrected sentence. Contiguous segments that do not match are extracted as proposed edits\footnote{Note this is an approximation and cannot distinguish side-by-side edits as separate edits.}. We repeat this alignment and edit extraction process for the gold corrected sentences and the uncorrected sentences to obtain the gold edits. ``Good'' edits are defined as the intersection of the proposed and gold edits and ``bad'' edits are defined as the proposed edits not contained in the gold edits. We compute edit features and train a multilayer perceptron binary classifier on the extracted edits to predict the probability of an edit being correct. The features computed on an edit $s \rightarrow t$ are: \begin{itemize} \item \textbf{edit distance features}: normalized word and character lengths of $s$ and $t$, normalized word and character insertions, deletions, and substitutions between $s$ and $t$. \item \textbf{embedding features}: sum of 100 dimensional GloVe \cite{pennington2014glove} vectors of words in $s$ and $t$, GloVe vectors of left and right context words in $s$. \end{itemize} In order to filter incorrect edits, we only accept edits whose predicted probability exceeds a threshold $p_\mathrm{min}$. This assumes that classifier probabilities are reasonably calibrated \cite{niculescu2005predicting}. Edit classification improves precision with a small drop in recall; most importantly, it helps filter edits where the decoder network misbehaves and $t$ deviates wildly from $s$. \section{Experiments} We perform experiments using two datasets of corrected sentences written by English learners. 
The first is the Lang-8 Corpus, which contains erroneous sentences and their corrected versions collected from a social language learner forum \cite{tajiri2012tense}. Due to the online user-generated setting, the Lang-8 data is noisy, with sentences often containing misspellings, emoticons, and other loose punctuation. Sample sentences are shown in Table~\ref{tab:good_samples}. The other dataset we consider comes from the CoNLL 2013 and 2014 Shared Tasks, which contain about 60K sentences from essays written by English learners with corrections and error type annotations. We use the larger Lang-8 Corpus primarily to train our network, then evaluate on the CoNLL Shared Tasks. \subsection{Training and Decoding Details} Our pyramidal encoder has $3$ layers, resulting in a factor $4$ reduction in the sequence length at its output, and our decoder RNN has $3$ layers as well. Both the encoder and decoder use a hidden size of $400$ and gated recurrent units (GRUs), which along with LSTMs \cite{hochreiter1997long} have been shown to be easier to optimize and preserve information over many time steps better than vanilla recurrent networks. Our vocabulary includes 98 characters: the printing ASCII character set and special $\langle$sos$\rangle$, $\langle$eos$\rangle$, and $\langle$unk$\rangle$\ symbols indicating the start-of-sentence, end-of-sentence, and unknown symbols, respectively. To train the encoder-decoder network we use the Adam optimizer \cite{kingma2014adam} with a learning rate of $0.0003$, default decay rates $\beta_1$ and $\beta_2$, and a minibatch size of 128. We train for up to $40$ epochs, selecting the model with the lowest perplexity on the Lang-8 development set. We found that using dropout \cite{srivastava2014dropout} at a rate of $0.15$ on the non-recurrent connections \cite{pham2014dropout} helped reduce perplexity. We use uniform initialization of the weight matrices in the range $[-0.1, 0.1]$ and zero initialization of biases.
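The 98-symbol character vocabulary can be reconstructed as follows (a sketch; the special-token spellings and index ordering are our illustrative choices, not part of the system description):

```python
# Printing ASCII characters: space (0x20) through tilde (0x7E), 95 symbols.
chars = [chr(c) for c in range(0x20, 0x7F)]

# Special markers for start-of-sentence, end-of-sentence, and unknown input.
specials = ["<sos>", "<eos>", "<unk>"]

# 3 specials + 95 printing ASCII characters = 98 vocabulary entries.
vocab = specials + chars
char_to_id = {ch: i for i, ch in enumerate(vocab)}
```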
Decoding parameter $\lambda$ and edit classifier threshold $p_\mathrm{min}$ were chosen to maximize performance on the development sets of the datasets described. All results were obtained using a beam width of 64, which seemed to provide a good trade-off between speed and performance. \subsection{Noisy Data: Lang-8 Corpus} \begin{table}[bt] \begin{center} \begin{tabular}{l r} \toprule Method & Test BLEU \\ \midrule No edits & 59.54 \\ Spell check & 58.91 \\ RNN & 61.63 \\ RNN + LM & \textbf{61.70} \\ \bottomrule \end{tabular} \end{center} \caption{Performance on Lang-8 test set. Adding the language model results in a negligible increase in performance, illustrating the difficulty of the user-generated forum setting.} \label{tab:lang8} \end{table} We use the train-test split provided by the Lang-8 Corpus of Learner English \cite{tajiri2012tense}, which contains 100K and 1K entries with about 550K and 5K parallel sentences, respectively. We also split 5K sentences from the training set to use as a separate development set for model and parameter selection. Since we do not have gold annotations that distinguish side-by-side edits as separate edits, we report BLEU score\footnote{Using case-sensitive \texttt{multi-bleu.perl} from Moses.} using just the encoder-decoder network as well as when combined with the $n$-gram language model (Table~\ref{tab:lang8}). Note that since there may be multiple ways to correct an error and some errors are left uncorrected, the baseline of using uncorrected sentences is more difficult to improve upon than it may initially appear. As another baseline we apply the top suggestions from a spell checker with default configurations\footnote{Hunspell v1.3.4, {\tt \url{https://hunspell.github.io}}}. However, we suspect that due to proper nouns, acronyms, and inconsistent capitalization conventions in Lang-8, this actually decreased BLEU slightly. To the best of our knowledge, no other work has reported results on this challenging task.
\subsection{Main Results: CoNLL Shared Tasks} \begin{table}[bt] \begin{center} \begin{tabular}{l r r r} \toprule Method & $P$ & $R$ & $F_{0.5}$\\ \midrule RNN & 42.96 & 6.27 & 19.81 \\ RNN aug & 49.30 & 10.10 & 27.75 \\ RNN + LM & 43.27 & 15.14 & 31.55 \\ RNN aug + LM & 46.94 & \textbf{17.11} & 34.81 \\ RNN aug + LM + EC & \textbf{51.38} & 15.83 & \textbf{35.45}\\ \bottomrule \end{tabular} \end{center} \caption{Development set performance. EC denotes edit classification (Section~\ref{sec:edit_class}), and ``aug'' indicates data augmentation was used.} \label{tab:conll2013} \end{table} \begin{table}[bt] \begin{center} \begin{tabular}{l r r r } \toprule Method & $P$ & $R$ & $F_{0.5}$ \\ \midrule AMU & 41.62 & 21.40 & 35.01 \\ CUUI & 41.78 & 24.88 & 36.79 \\ CAMB & 39.71 & 30.10 & 37.33 \\ \newcite{susanto2015systems} & 53.55 & 19.14 & 39.39 \\ Ours (no EC) & 45.86 & 26.40 & 39.97 \\ Ours (+ EC) & 49.24 & 23.77 & \textbf{40.56}\\ \midrule Ours (A1) & 32.56 & 14.76 & 26.23 \\ Ours (A2) & 44.04 & 14.83 & 31.59 \\ A1 (A2) & 50.47 & 32.29 & \textbf{45.36} \\ A2 (A1) & 37.14 & 45.38 & \textbf{38.54} \\ \bottomrule \end{tabular} \end{center} \caption{CoNLL 2014 test set performance. We compare to the 3 best CoNLL 2014 submissions which used combinations of MT, LM ranking, and error type-specific classifiers. We report $F$-score against both annotators together and against each single annotator, as well as each annotator scored against the other as a human ceiling. A1 and A2 denote Annotators 1 and 2.} \label{tab:conll2014} \end{table} \sssection{Description} For our second set of experiments we evaluate on the CoNLL 2014 Shared Task on Grammatical Error Correction \cite{ng2013conll,ng2014conll}. We use the revised CoNLL 2013 test data with all error types as a development set for parameter and model selection with the 2014 test data as our test set. The 2013 test data contains 1381 sentences with 3470 errors in total, and the 2014 test data contains 1312 sentences with 3331 errors.
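The $F_{0.5}$ values in the tables above follow from precision and recall via the standard weighted F-score. As a quick sanity check (a sketch, not part of the MaxMatch scorer itself):

```python
def f_beta(p, r, beta=0.5):
    """Weighted F-score; beta < 1 weights precision more heavily than recall,
    matching the emphasis of the error correction task."""
    if p == 0.0 and r == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * p * r / (b2 * p + r)

# Our final system on the CoNLL 2014 test set reports P = 49.24, R = 23.77.
f05 = f_beta(49.24, 23.77)  # agrees with the tabulated 40.56 up to rounding
```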
The CoNLL 2014 training set contains 57K sentences with the corresponding gold edits by a single annotator. The 2013 test set is also only labeled by a single annotator, while the 2014 test set has two separate annotators. We use the NUS MaxMatch scorer \cite{dahlmeier2012better} v3.2 in order to compute the precision ($P$), recall ($R$), and $F$-score for our corrected sentences. Since precision is considered more important than recall for the error correction task, $F_{0.5}$ score is reported as in the CoNLL 2014 Challenge. We compare to the top submissions in the 2014 Challenge as well as the method by \newcite{susanto2015systems}, which combines 3 of the weaker systems to achieve the state-of-the-art result. All results reported on the 2014 test set exclude alternative corrections submitted by the participants. \sssection{Synthesizing Errors} In addition to the Lang-8 training data, we include the CoNLL 2014 training data in order to train the encoder-decoder network. Following prior work, we additionally explore synthesizing sentences containing errors using the CoNLL 2014 training data \cite{felice2014generating,rozovskaya2012ui}. Our data augmentation procedure generates synthetic errors for two of the most common error types in the development set: article or determiner errors (ArtOrDet) and noun number errors (Nn). Similar to \newcite{felice2014generating}, we first collect error distribution statistics from the CoNLL 2014 training data. For ArtOrDet errors, we estimate the probability that an article or determiner is deleted, replaced with another determiner, or inserted before the start of a noun phrase. For Nn errors, we estimate the probability that a noun is replaced with its singular or plural form. To obtain sentence parses we use the Stanford CoreNLP Toolkit \cite{manning2014}. Example synthesized errors: \begin{itemize} \item \textbf{ArtOrDet}: They will generate and brainstorm \underline{\it the} innovative ideas.
\item \textbf{Nn}: Identification is becoming more important in our {\it society} $\rightarrow$ \underline{\it societies}. \end{itemize} Errors are introduced independently according to their estimated probabilities by iterating over the words in the training sentences, and we produce two corrupted versions of each training sentence whenever possible. The original Lang-8 training data contains 550K sentence pairs. Adding the CoNLL 2014 training data results in about 610K sentence pairs, and after data augmentation we obtain a total of 720K sentence pairs. We examine the benefits of synthesizing errors in Section~\ref{sec:discussion}. \sssection{Results} Results for the development set are shown in Table~\ref{tab:conll2013}, and results for the CoNLL 2014 test set in Table~\ref{tab:conll2014}. On the CoNLL 2014 test set, which contains the full set of 28 error types, our method achieves a state-of-the-art result, beating all systems from the 2014 Challenge as well as a system combination method \cite{susanto2015systems}. Methods from the 2014 Challenge used statistical machine translation, language model ranking, rule-based approaches, and error type-specific features and classifiers, often in combination. System descriptions for participating teams are given in \newcite{ng2014conll}. \section{Introduction} Systems that provide writing feedback have great potential to assist language learners as well as native writers. Although tools such as spell checkers have been useful, detecting and fixing errors in natural language, even at the sentence level, remains far from solved. Much of the prior research focuses solely on training classifiers for a small number of error types, such as article or preposition errors \cite{han2006detecting,rozovskaya2010}. More recent methods that consider a broader range of error classes often rely on language models to score $n$-grams or statistical machine translation approaches \cite{ng2014conll}. 
These methods, however, do not flexibly handle orthographic errors in spelling, capitalization, and punctuation. \begin{figure}[!t] \centering \includegraphics[width=0.40\textwidth]{bpencdec.pdf} \caption{Illustration of the encoder-decoder neural network model with two encoder hidden layers and one decoder hidden layer. Character-level reasoning allows handling of misspellings and OOVs.} \label{fig:model_overview} \end{figure} As a motivating example, consider the following incorrect sentence: ``{\it I visitted Tokyo on Nov 2003. :)}''. Several errors in this sentence illustrate the difficulties in the language correction setting. First, the sentence contains a misspelling, {\it visitted}, an issue for systems with fixed vocabularies. Second, the sentence contains rare words such as {\it 2003} as well as punctuation forming an emoticon {\it :)}, issues that may require special handling. Finally, the use of the preposition {\it on} instead of {\it in} when not referring to a specific day is non-idiomatic, demonstrating the complex patterns that must be captured to suggest good corrections. In hopes of capturing such complex phenomena, we use a neural network-based method. Building on recent work in language modeling and machine translation, we propose an approach to natural language error correction based on an encoder-decoder recurrent neural network trained on a parallel corpus containing ``good'' and ``bad'' sentences (Figure~\ref{fig:model_overview}). When combined with a language model, our system obtains state-of-the-art results on the CoNLL 2014 Shared Task, beating systems using statistical machine translation systems, rule-based methods, and task-specific features. Our system naturally handles orthographic errors and rare words, and can flexibly correct a variety of error types. We further find that augmenting the network training data with sentences containing synthesized errors can result in significant gains in performance. 
\section{Model Architecture} Given an input sentence $x$ that we wish to map to an output sentence $y$, we seek to model $P(y | x)$. Our model consists of an encoder and a decoder \cite{Sutskever2014,cho2014learning}. The encoder maps the input sentence to a higher-level representation with a pyramidal bi-directional RNN architecture similar to that of \newcite{chan2015listen}. The decoder is also a recurrent neural network that uses a content-based attention mechanism \cite{bahdanau2014neural} to attend to the encoded representation and generate the output sentence one character at a time. \subsection{Character-Level Reasoning} \label{ssec:character} Our neural network model operates at the character level, in the encoder as well as the decoder. This is for two reasons, as illustrated by our motivating example. First, we do not assume that the inputs are spell-checked and often find spelling errors in the sentences written by English learners in the datasets we consider. Second, word-level neural MT models with a fixed vocabulary are poorly suited to handle OOVs such as multi-digit numbers, emoticons, and web addresses \cite{graves2013generating}, though recent work has proposed workarounds for this problem \cite{luong2014addressing}. Despite longer sequences in the character-based model, optimization does not seem to be a significant issue, since the network often only needs to copy characters from source to target. \subsection{Encoder Network} Given the input vector $x_t$, the forward, backward, and combined activations of the $j$th hidden layer are computed as: \begin{align*} f^{(j)}_t &= \mathrm{GRU}({f}^{(j)}_{t-1}, {c}^{(j-1)}_t),\\ b^{(j)}_t &= \mathrm{GRU}({b}^{(j)}_{t+1}, {c}^{(j-1)}_t),\\ h^{(j)}_t &= {f}^{(j)}_t + {b}^{(j)}_t \end{align*} where $\mathrm{GRU}$ denotes the gated recurrent unit function, which, similar to long short-term memory units (LSTMs), has been shown to improve the performance of RNNs \cite{cho2014learning,hochreiter1997long}.
The input to the first layer is ${c}^{(0)}_t = {x}_t$, and \begin{equation*} {c}^{(j)}_t = \tanh\left({W}_\mathrm{pyr}^{(j)} \left[{h}^{(j-1)}_{2t}, {h}^{(j-1)}_{2t+1}\right]^\top + {b}_\mathrm{pyr}^{(j)}\right) \end{equation*} for $j>0$. The weight matrix ${W}_\mathrm{pyr}$ thus reduces the number of hidden states for each additional hidden layer by half, and hence the encoder has a pyramid structure. At the final hidden layer we obtain the encoded representation ${c}$ consisting of $\left \lceil T/2^{N-1} \right \rceil$ hidden states, where $N$ denotes the number of hidden layers. \subsection{Decoder Network} The decoder network is a recurrent neural network using gated recurrent units with $M$ hidden layers. After the final hidden layer the network also conditions on the encoded representation $c$ using an attention mechanism. At the $j$th decoder layer the hidden activations are computed as \begin{equation*} d^{(j)}_t = \mathrm{GRU}(d^{(j)}_{t-1}, {d}^{(j-1)}_t), \end{equation*} with the output of the final hidden layer $d^{(M)}_t$ then being used as part of the content-based attention mechanism similar to that proposed by \newcite{bahdanau2014neural}: \begin{align*} u_{tk} &= \phi_1(d^{(M)}_t)^\top\phi_2(c_k)\\ \alpha_{tk} &= \frac{u_{tk}}{\sum_j u_{tj}}\\ a_t &= \sum_j \alpha_{tj} c_j \end{align*} where $\phi_1$ and $\phi_2$ represent feedforward affine transforms followed by a $\tanh$ nonlinearity. The weighted sum of the encoded hidden states $a_t$ is then concatenated with $d^{(M)}_t$, and passed through another affine transform followed by a $\mathrm{ReLU}$ nonlinearity before the final softmax output layer. The loss function is the cross-entropy loss per time step summed over the output sequence $y$: \begin{equation*} L(x, y) = -\sum_{t=1}^T \log P(y_t | x, y_{<t}). \end{equation*} Note that during training the ground truth $y_{t-1}$ is fed into the network to predict $y_t$, while at test time the most probable $\hat{y}_{t-1}$ is used.
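One decoder time step of this attention computation might look like the following NumPy sketch, with small random matrices standing in for the trained transforms $\phi_1$ and $\phi_2$ (the dimensions are illustrative; the actual model uses 400 hidden units):

```python
import numpy as np

rng = np.random.default_rng(0)

H = 8       # hidden size (400 in the actual model)
T_enc = 6   # number of encoder hidden states after pyramidal reduction

c = rng.standard_normal((T_enc, H))   # encoded representation c_1 .. c_T
d_t = rng.standard_normal(H)          # final-layer decoder state d^(M)_t

# phi_1, phi_2: affine transforms followed by a tanh nonlinearity.
W1, b1 = rng.standard_normal((H, H)), rng.standard_normal(H)
W2, b2 = rng.standard_normal((H, H)), rng.standard_normal(H)
phi1 = np.tanh(W1 @ d_t + b1)
phi2 = np.tanh(c @ W2.T + b2)         # applied to every encoder state

u = phi2 @ phi1                       # scores u_tk for k = 1 .. T_enc
alpha = u / u.sum()                   # normalized attention weights
a_t = alpha @ c                       # context: weighted sum of the c_k
```

Note that the equations above normalize the raw scores $u_{tk}$ directly; many implementations substitute a softmax at that step to keep the weights positive.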
Figure~\ref{fig:model_overview} illustrates the model architecture. \subsection{Attention and Pyramid Structure} \label{ssec:attention} In preliminary experiments, we found that having an attention mechanism was crucial for the model to be able to generate outputs character-by-character that did not diverge from the input. While character-based approaches have not attained state-of-the-art performance on large scale translation and language modeling tasks, in this setting the decoder network simply needs to copy input tokens during the majority of time steps. Although character-level models reduce the softmax over the vocabulary at each time step over word-level models, they also increase the total number of time-steps of the RNN. The content-based attention mechanism must then consider all the encoder hidden states $c_{1:T}$ at every step of the decoder. Thus we use a pyramid architecture, which reduces computational complexity (as shown by \newcite{chan2015listen}). For longer batches, we observe over a $2\times$ speedup for the same number of parameters when using a 400 hidden unit per layer model with 3 hidden layers ($4\times$ reduction of steps in $c$). \section*{Conclusion} We present a neural network-based model for performing language correction. Our system is able to correct errors on noisy data collected from an English learner forum and attains state-of-the-art performance on the CoNLL 2014 Challenge dataset of annotated essays. Key to our approach is the use of a character-based model with an attention mechanism, which allows for orthographic errors to be captured and avoids the OOV problem suffered by word-based neural machine translation methods. We hope the generality of this approach will also allow it to be applied to other tasks that must deal with noisy text, such as in the online user-generated setting. \section*{Acknowledgments} We thank Kenneth Heafield, Jiwei Li, Thang Luong, Peng Qi, and Anshul Samar for helpful discussions.
We additionally thank the developers of Theano \cite{bergstra2010theano}. Some GPUs used in this work were donated by NVIDIA Corporation. ZX was supported by an NDSEG Fellowship. This project was funded in part by DARPA MUSE award FA8750-15-C-0242 AFRL/RIKF. \section{Related Work} Our work primarily builds on prior work on training encoder-decoder RNNs for machine translation \cite{kalchbrenner2013recurrent,Sutskever2014,cho2014learning}. The attention mechanism, which allows the decoder network to copy parts of the source sentence and cope with long inputs, is based on the content-based attention mechanism introduced by \newcite{bahdanau2014neural}, and the overall network architecture is based on that described by \newcite{chan2015listen}. Our model is also inspired by character-level models as proposed by \newcite{graves2013generating}. More recent work has applied character-level models to machine translation and speech recognition as well, suggesting that it may be applicable to many other tasks that involve the problem of OOVs \cite{ling2015character,lexfree2015,chan2015listen}. Treating grammatical error correction as a statistical machine translation problem is an old idea; the method of mapping ``bad'' to ``good'' sentences was used by many of the teams in the CoNLL 2014 Challenge \cite{Felice14grammaticalerror,junczys2014}. The work of \newcite{Felice14grammaticalerror} achieved the best $F_{0.5}$-score of 37.33 in that year's challenge using a combination of rule-based, language-model ranking, and statistical machine translation techniques. Many other teams used a language model for re-ranking hypotheses as well. Other teams participating in the CoNLL 2014 Challenge used techniques ranging from rule-based systems to type-specific classifiers, as well as combinations of the two \cite{rozovskaya-EtAl:2014:W14-17,lee-lee:2014:W14-17}. The rule-based systems often focus on only a subset of the error types. 
The previous state of the art was achieved by \newcite{susanto2015systems} using the system combination method proposed by \newcite{Heafield2010} to combine three weaker systems. Finally, our work uses data collected and shared through the generous efforts of the teams behind the CoNLL and Lang-8 datasets \cite{mizumoto2011mining,hayashibe2012effect,ng2013conll,ng2014conll}. Prior work has also proposed data augmentation for the language correction task \cite{felice2014generating,rozovskaya2012ui}.
\section{Introduction} \label{sec:intro} Polars are accreting white dwarf (WD) binaries characterized by the presence of a strong magnetic field ($\sim 7-230$ MG; \citealp{ferrario2015a}), which prevents the formation of an accretion disc and instead channels the accreted material directly on to the poles of the WD. In most polars, the measured WD and orbital angular velocities are found to be identical. For a handful of systems, however, the WD rotates asynchronously, so that those two velocity values differ significantly. The five known asynchronous polars (APs) are V1432 Aql (RXJ 1940-10), BY Cam, V1500 Cyg, CD Ind (RXJ 2115-58), and Paloma (RX J0524+42) \citep{campbell1999a,schwarz2004a}. EX Hya is an Intermediate Polar (IP) with a magnetic field of ${\sim}1$ MG and an accretion disc that has been observed to undergo dwarf nova outbursts; it also has a very high asynchronicity \citep{hellier1996a} and was included in this study as well. The cause of this asynchronicity is theorized to be a nova eruption: accreted hydrogen builds up on the surface of the WD until reaching a critical temperature/pressure and igniting a thermonuclear runaway in the accreted layer, the ejection of which causes the WD to spin with a higher angular velocity than before the nova \citep{campbell1999a}. V1500 Cyg (Nova Cyg 1975) was observed to be an AP after its nova eruption, lending support to this theory, although its pre-eruption status is unknown. BY Cam, V1500 Cyg, and CD Ind all have a positive $\omega / \Omega$, where $\omega$ is the synodic angular velocity of the WD primary and $\Omega$ is the orbital angular velocity. $\omega / \Omega$ is then a measurement of the asynchronicity of the system.
Another way to measure asynchronicity is by looking at the percent difference between the periods, defined as $\frac{P_\mathrm{orb} - P_\mathrm{spin}}{P_\mathrm{orb}}$, where $P_\mathrm{orb}$ is the orbital period of the binary system and $P_\mathrm{spin}$ is the spin period of the WD. The percent difference and $\omega / \Omega$, along with other general properties, are listed in Table \ref{tab:summary} for each system. BY Cam, V1500 Cyg, CD Ind, and Paloma all have positive percent differences, although they range over approximately one and a half orders of magnitude. V1432 Aql, however, is under-synchronous, with a negative $\omega / \Omega$ and percent difference, which may indicate a different formation mechanism, although at this point the details of the theory are poorly understood in general, and particularly when it comes to explaining how to obtain an under-synchronous AP. These systems do not remain asynchronous indefinitely; instead, they likely start returning to a synchronous state quickly after being knocked out of sync. Models of this process vary, leading to a range of estimates for the time needed to return to synchronization ($t_\mathrm{sync}$) for each system, also listed in Table \ref{tab:summary}. BY Cam has the largest range, with $t_\mathrm{sync}$ estimates ranging from $250 \pm 20$ yr \citep{pavlenko2013a} to ${>}3500$ yr \citep{honeycutt2005a}. With $t_\mathrm{sync}$ estimated to be just a few hundreds of years for most APs, and the postulate that the asynchronicity was originally caused by a nova eruption, it is reasonable to search for nova shells around the systems and expect to find something. Detection of such a shell would eventually allow for an estimate of the date of the nova that caused the asynchronicity and thus provide another constraint on the resynchronization time-scales as well as a further clue to the cause of the asynchronicity in the first place. 
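The percent-difference measure can be computed directly from the two periods; a trivial sketch (the example periods are illustrative, not values from Table \ref{tab:summary}):

```python
def percent_difference(p_orb, p_spin):
    """Asynchronicity measure (P_orb - P_spin) / P_orb: positive when the
    white dwarf spins faster than the orbit (over-synchronous), negative
    when it spins slower (under-synchronous, as for V1432 Aql).
    Both periods must be given in the same units."""
    return (p_orb - p_spin) / p_orb

# Hypothetical system whose WD spins 2 per cent faster than the orbit:
pd = percent_difference(p_orb=3.37, p_spin=3.37 * 0.98)  # periods in hours
```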
\cite{sahman2015a_arxiv} searched for shells around just two of our targets, V1432 Aql and BY Cam, using the Auxiliary Port on the 4.2 m William Herschel Telescope on La Palma and did not find evidence for any shells. \section{Observations} We observed two APs, V1432 Aql and CD Ind, and the IP EX Hya, in H$\alpha$ ($\lambda _\mathrm{peak} = 656.6$ nm, FWHM = 10 nm) using the SALTICAM imager on the 10m class South African Large Telescope (SALT) located at the South African Astronomical Observatory, near Sutherland, South Africa \citep{buckley2006a,odonoghue2006a}. V1432 Aql was observed for a total of 3120s, divided evenly between 26 different exposures and two nights (2013 June 29 and 2013 July 11). With the same filter, CD Ind was observed on 2013 June 28 for a total of 1560s across 13 exposures, and EX Hya on 2013 July 11 for 1800s spread over 15 different exposures. (There is effectively no guide camera that can be used with SALTICAM, hence the large number of short exposures.) Additionally, we observed BY Cam in H$\alpha$ ($\lambda _\mathrm{peak} = 656.2$ nm, FWHM = 47 nm, filter borrowed from NOAO) using the OSMOS instrument (in imaging mode) on the MDM Observatory 2.4m Hiltner telescope. Our coverage of BY Cam is much shallower than for the Southern APs\textemdash we have only two 1200s exposures, for a total of 2400s of 2.4m time on the target\textemdash but we do have a much wider field of view, with a $20\arcmin$ unvignetted diameter for OSMOS as opposed to the $8\arcmin$ diameter field of view of SALTICAM. For all targets, the usable individual images were processed using the usual reduction steps in PyRAF and then summed using {\it imcombine} to obtain the greatest possible signal for each target. Compared to the observations from \cite{sahman2015a_arxiv}, we are able to go deeper on V1432 Aql and search a larger field of view for both V1432 Aql and BY Cam. 
The left-side images in Figures \ref{fig:v1432aql} to \ref{fig:cdind} show the stacked images for each system observed; no indications of any shells can be seen. To highlight any faint edges in the images and thus more thoroughly search for shells or shell fragments, we applied an unsharp mask to each combined image. The unsharp-masked versions are shown in the right-side images of Figures \ref{fig:v1432aql} to \ref{fig:cdind}. None of the systems observed shows visual evidence for a nova shell at any observed distance from the target star. \section{Discussion} \label{sec:discussion} Following a nova eruption, the ejected shell will expand until it dissipates into the circumstellar medium. From Table 3 of \citet{pagnotta2014b}, which collects observed nova characteristics, the average expansion velocity of a classical nova is $1800$ km s$^{-1}$. Assuming this expansion velocity for any prior nova eruptions in the systems we observed, and considering the resolution of the detectors and the site conditions, we can calculate the size and age of all possible detectable shells. The distances used for each calculation are listed in Table \ref{tab:summary}. A certain number of years after an eruption, the shell will have expanded enough to be distinct from the stellar image of the nova itself (accounting for seeing, binning, and instrument effects), and is then first detectable in its smallest state. For each system, we assume the shell must be at least 5 pixels away from the star to be seen on the image; the minimum detectable shell sizes and ages are listed in Table \ref{tab:sizes}. For all of the systems we observed, shells from very recent novae, within roughly the past two years for most of them, would be detectable. As the shell expands, eventually it will reach the edge of the field of view of the CCD.
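A minimal sketch of the unsharp-masking step, using a generic Gaussian blur from SciPy (not necessarily the kernel actually used) and a synthetic frame in place of the real data:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=5.0, amount=1.5):
    """Subtract a Gaussian-blurred copy and add the residual back,
    boosting faint sharp features relative to smooth structure."""
    blurred = gaussian_filter(image, sigma=sigma)
    return image + amount * (image - blurred)

# Synthetic frame: flat sky plus a faint, thin shell-like ring.
yy, xx = np.mgrid[:128, :128]
r = np.hypot(xx - 64.0, yy - 64.0)
frame = np.full((128, 128), 100.0)             # flat sky level
frame += 0.5 * np.exp(-((r - 30.0) / 1.5)**2)  # faint narrow ring at r = 30 px
sharpened = unsharp_mask(frame)
```

In practice the stacked {\it imcombine} output would take the place of `frame`; the narrow ring survives the subtraction of its blurred copy and is amplified, while smooth structure is largely cancelled.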
Since no dithering patterns were employed in our observations, we assume the total observed field corresponds to the full, unvignetted size of the instrument/chip, with the target system located at the centre. Using triangle geometry, we can calculate the physical shell size and thus the possible age, which in turn gives us the earliest eruption date for which we could still have seen a shell, had the nova erupted that recently and produced an observable shell. For each system we calculate the shell sizes and ages assuming (a) constant expansion velocity, and (b) an expansion velocity that decelerates as described in \citet{duerbeck1987b}, which gives the shell velocities a mean half-life of 75 yr. From the Duerbeck paper, we can construct an exponential decay equation for the velocity over time: \begin{equation} v(t) = 5.68 \times 10^{10} \left( \frac{1}{2} \right) ^{\frac{t}{75}} \textrm{km yr}^{-1}. \label{eqn:velocity} \end{equation} Taking the indefinite integral of Equation \ref{eqn:velocity} and solving for the integration constant given that $r(t=0)=0$ gives \begin{equation} r(t) = -6.15 \times 10^{12} \mathrm{e}^{-0.009\cdot t} + 6.15 \times 10^{12} \textrm{ km}. \label{eqn:radius} \end{equation} To calculate how long it would take the decelerating shell to reach the edge of our fields of view, we need time as a function of radius, so we rearrange Equation \ref{eqn:radius} to obtain \begin{equation} t(r) = \frac{-1}{0.009} \ln{\left(1-\frac{r}{6.15 \times 10^{12}}\right)} \textrm{ yr}. \label{eqn:time} \end{equation} The amount of time necessary to observe the shells at their smallest sizes is the same for both cases (a) and (b), because over such a short interval no deceleration occurs at a level we would be able to detect.
Deceleration, however, does change the largest possible shell ages, increasing the amount of time we can expect to see the shell after the eruption, or essentially how far back in time we would be able to detect an eruption, because it takes longer for a decelerating shell to expand beyond the field of view. If we take the \citet{duerbeck1987b} formulation at face value, we notice that the radius of the shell has an asymptote at $r=6.15 \times 10^{12}$ km. In some cases, the fields of view of our images are larger than this, so theoretically we could say that we would see all possible nova shells from an infinite amount of time in the past; however, this is clearly unphysical, because we have observed at least two nova shells that are larger than the $r=6.15 \times 10^{12}$ km limit. AT Cnc and Z Cam, two dwarf novae with ancient nova shells \citep{shara2007a,shara2012b,shara2012c}, have shells with measured radii of $6.19 \times 10^{12}$ km and $2.15 \times 10^{13}$ km, respectively. The \citet{duerbeck1987b} result was empirically determined using observations of just four novae, so it is not altogether surprising that it is not universally applicable, but nevertheless it is a good first-order approximation of what we can expect, at least for the first 75 yr after a nova eruption. There is likely a strong dependence on the local circum- and inter-stellar medium, but measuring and modelling that is beyond the scope of this paper. For the cases in which our fields of view are larger than the $r=6.15 \times 10^{12}$ km asymptote, we can say only that we can see shells further back in time than in the no-deceleration case, but cannot put a firm upper limit on the timeframe. For V1432 Aql, we can rule out nova shells from eruptions that happened up to 118 or 145 yr ago in the constant expansion velocity case, depending on which distance measurement we use (187 or 230 pc, respectively; \citealp{ak2008a,barlow2006a}).
Accounting for the deceleration of the shell, for both distances, we have cases where the field of view is larger than the asymptote, so we can say that 118 and 145 yr are lower limits. For CD Ind, there are no shells detected from eruptions up to 59 yr ago in the constant velocity case, and 86 yr ago with a decelerating shell. EX Hya, the closest of our systems and the only non-AP, does not show shells from eruptions up to 35 yr ago or 43 yr ago, for constant and decelerating shell velocities, respectively. For BY Cam, with the caveat that the image is shallow, in the constant expansion velocity situation we rule out shells from eruptions that occurred as far back as 82 to 300 yr ago, again depending on the distance adopted (52 or 190 pc; \citealp{ak2008a,barlow2006a}). If the shell decelerates as described above, we can rule out shells from eruptions dating back to anywhere from 154 to $>$300 yr ago. These time constraints are listed for each system in Table \ref{tab:sizes}. Additionally, we were able to check whether the APs have large-scale ultraviolet-bright shells, similar to those found around Z Cam \citep{shara2007a} and AT Cancri \citep{shara2012b}. We searched the GALEX archive and found that all but V1500 Cyg have images in both the FUV and NUV bands (135.0-175.0 nm and 175.0-280.0 nm, respectively). No shell is visible on any scale for any of the targets. The point spread function of GALEX is $\sim$6$\arcsec$, so these images confirm the H$\alpha$ non-detections on scales larger than this. It is critical to remember that the lack of a nova shell does not equate to the lack of a nova eruption. \cite{wade1990} provides one of the first statistical looks at how many shells have been detected around classical novae, reporting that 26 of the approximately 200 known at the time had resolved remnants.
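The constant-velocity limits quoted above for V1432 Aql can be reproduced from the SALTICAM field of view ($8\arcmin$ diameter, i.e.\ a $4\arcmin$ radius) and the two distance estimates; this sketch uses standard unit conversions and the small-angle approximation:

```python
import math

KM_PER_PC = 3.0857e13   # kilometres per parsec
SEC_PER_YR = 3.1557e7   # seconds per year
V_EXP = 1800.0          # km/s, mean classical-nova expansion velocity

def max_shell_age_yr(fov_radius_arcmin, distance_pc, v_kms=V_EXP):
    """Years for a constant-velocity shell to expand to the field edge."""
    theta = math.radians(fov_radius_arcmin / 60.0)  # field radius in radians
    r_km = distance_pc * KM_PER_PC * theta          # physical radius at field edge
    return r_km / v_kms / SEC_PER_YR

age_near = max_shell_age_yr(4.0, 187.0)  # ~118 yr at 187 pc
age_far = max_shell_age_yr(4.0, 230.0)   # ~145 yr at 230 pc
```

The same function with the OSMOS field radius and the two BY Cam distances reproduces the 82 to 300 yr range quoted above.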
\cite{downes2000a} surveyed 30 recent, relatively nearby novae and found 14 shells using a combination of ground- and space-based imaging, giving a 47\% detection rate, although we note that this may be different from the actual shell formation rate. There are many reasons a shell may not be observed around a nova after its eruption even if it has formed: the amount of mass ejected might be so small that the shell density is low and the shell is undetectable even shortly after the eruption; or, enough time has passed since the eruption that, as the shell has expanded, its density and therefore surface brightness have decreased, making it too faint to detect; or, the shell might have expanded so quickly that it is larger than the field of view of the image. There is another possibility for finding old nova eruptions in these systems: one can check through the major astronomical plate archives, namely those at the Harvard College Observatory in Cambridge, MA, and the Sonneberg Observatory in Sonneberg, Germany. There are scanning operations underway at both archives, which will allow for a quick check of the past behaviour of each of these objects once the fields are scanned and released. Although the Harvard operation, DASCH \citep{grindlay2012a}, has entered production scanning mode, it will likely be at least a few more years before all of the AP fields are scanned and available. Sonneberg is also scanning its plates; however, the scans are not readily accessible offsite. Additionally, for a fully complete eruption search, it is recommended that the plates be examined by hand for evidence of eruption, especially in crowded fields. It is possible that an eruption is captured on only one plate, and if for whatever reason that plate is not properly solved by the software pipeline, it will not be included in the digitized light curve results and the eruption will be missed.
Although this method of searching for eruptions in archival plates only covers the last $\sim$120 yr, it is still a valuable resource in the attempt to find previous eruptions, and allows for the possibility of finding nova eruptions in systems that did not form detectable shells. With no shell detections in our images, we cannot prove that any of the systems in our study\textemdash the three APs V1432 Aql, BY Cam, and CD Ind, and the IP EX Hya\textemdash had nova eruptions in the recent past that caused their asynchronicity today; we also cannot conclude that they did not have nova eruptions. It is possible that they erupted recently and the shells are fainter than our observations can detect, for any of the possible reasons discussed above. In this case, deeper imaging, especially for BY Cam, is advised. It is also possible that the eruptions were further in the past than expected (i.e. our understanding of the models used to obtain $t_{\mathrm{sync}}$ is incorrect), especially for BY Cam, with its possible synchronization time of $>$3500 yr \citep{honeycutt2005a}, which indicates that wider fields of view are recommended. The obvious potential solution to both of these problems is the never-ending wish of the observational astronomer: more, better data. To quote directly from \citet{wade1990}, ``There are few branches of astrophysics where the old refrain, `More observations are needed!', is more applicable than to the study of resolved nebular remnants of classical novae." Deeper, higher-resolution exposures would allow us to search for fainter shells, and wider fields of view or well-designed dithering patterns would allow us to see a larger area around the central binary and therefore detect older, more extended nova shells, if they in fact exist. \section*{Acknowledgements} This research was supported by the Kathryn W.
Davis Postdoctoral Scholar programme, which is supported in part by the New York State Education Department and by the National Science Foundation under grant numbers DRL-1119444 and DUE-1340006. This manuscript is based on observations made with the Southern African Large Telescope (SALT) under program 2013-1-AMNH-004 (PI: A. Pagnotta). We gratefully acknowledge that AMNH access to SALT is made possible by a generous donation from the late Paul Newman and the Newman Foundation. This work is also based on observations obtained at the MDM Observatory, operated by Dartmouth College, Columbia University, Ohio State University, Ohio University, and the University of Michigan, using a filter borrowed from KPNO/NOAO, which was helpfully arranged by Eric Galayada at MDM. PyRAF is a product of the Space Telescope Science Institute, which is operated by AURA for NASA. We thank Tom Maccarone for the initial discussion that sparked the idea for this project, Mike Shara for helpful discussions on the subject of nova shells, and Arlin Crotts for access to his MDM time. Jana Grcevich provided many useful suggestions throughout the course of this work, and Denise Revello provided invaluable coaching via the RAISE-W program; we are grateful to them both. The writing of this manuscript was continually accompanied by the dulcet tones of NPG, FH, the rest of the TBS Crew, and The Nixtape, which undoubtedly contributed to increased productivity.
\section{Introduction} Understanding when, where, and how stars form from the highest redshifts to the local universe is key to understanding the formation and evolution of galaxies. Originating directly from the photospheres of young, massive stars living up to a few 100~Myr, the ultraviolet (UV) emission of galaxies is one of our main windows into star formation. Because rest--frame UV is easily accessible from the ground for distant galaxies, it has even become the star formation tracer of choice for large cosmological surveys. The launch of the Galaxy Evolution Explorer \citep[GALEX,][]{martin2005a} in 2003 has considerably furthered our insight into the UV emission of galaxies by providing us with a wealth of observations of nearby galaxies in the rest--frame far--UV (FUV, 151.6~nm) and near--UV (NUV, 226.7~nm) bands, which are not accessible from the ground. Yet, as a star formation tracer the UV suffers from a major issue: it is very efficiently attenuated by dust. \cite{burgarella2013a} showed that in the nearby universe $69\pm10$\% of star formation is invisible in the rest--frame FUV because of dust. This obscuration peaks at $89\pm4$\% at $z=1.35$. This means that correcting for the presence of dust is a crucial issue if we want to use the UV as a reliable star formation rate (SFR) estimator. To correct the UV for dust attenuation, one of the most popular techniques is to translate the UV spectral slope to an attenuation using the IRX--$\beta$ relation. While this method works remarkably well for starburst galaxies \citep{meurer1999a}, it fails, sometimes considerably, for more quiescent galaxies \citep{kong2004a,dale2007a}. This is probably due to a combination of age effects and variations in the attenuation laws \citep[][and many others]{bell2002a,calzetti2005a,boquien2009a,boquien2012a,mao2012a,mao2014a,grasha2013a}.
An alternative approach has seen important developments over the past decade: hybrid (or composite) estimators \citep[e.g.,][]{buat1999a,buat2005a,calzetti2007a,zhu2008a,kennicutt2009a,hao2011a}. Because part of the energy emitted in the mid-- and far--infrared bands is due to the reprocessing by dust of UV photons emitted by massive stars, these hybrid estimators aim to correct the UV for the attenuation using the emission of the dust in a given IR band: \begin{equation} L(UV)_{int}=L(UV)_{obs}+k_i\times L(IR)_i, \end{equation} with $L\left(UV\right)_{int}$ the intrinsic UV luminosity (defined as $\nu L_\nu$) in the absence of dust, $L\left(UV\right)_{obs}$ the observed UV luminosity, $L\left(IR\right)_i$ the luminosity in the IR band $i$, and $k_i$ the scaling coefficient for the corresponding IR band. Equivalently, it can be written in terms of the SFR: \begin{equation} SFR = c_{UV}\times\left[L(UV)_{obs}+k_i\times L(IR)_i\right], \end{equation} with $c_{UV}$ the UV--to--SFR calibration coefficient. The scaling coefficient can then be calibrated observationally by combining the attenuated and attenuation--corrected UV luminosities with the observed luminosities in the given IR band: \begin{equation}\label{eqn:k-IR-v1} k_i=\frac{L\left(UV\right)_{int}-L\left(UV\right)_{obs}}{L\left(IR\right)_i}, \end{equation} or by assuming $c_{UV}$ and combining the estimated SFR with the attenuated UV luminosities and the observed luminosities in the given IR band: \begin{equation}\label{eqn:k-IR-v2} k_i=\frac{SFR-c_{UV}\times L(UV)_{obs}}{c_{UV}\times L(IR)_i}. \end{equation} While this method is appealing, different studies have found different $k_i$ coefficients, depending on whether the IR emission unrelated to recent star formation has been subtracted \citep[e.g., the resolved study of][]{liu2011a} or not \citep[e.g., the unresolved study of][]{hao2011a}.
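A minimal numerical sketch (arbitrary luminosity units and made-up values) showing how the calibration and the estimator above fit together:

```python
def k_from_uv(l_uv_int, l_uv_obs, l_ir):
    """Scaling coefficient k_i from the attenuation-corrected and observed UV."""
    return (l_uv_int - l_uv_obs) / l_ir

def sfr_hybrid(l_uv_obs, l_ir, k_i, c_uv):
    """Hybrid SFR: the observed UV corrected upward by a scaled IR term."""
    return c_uv * (l_uv_obs + k_i * l_ir)

# Made-up example: an intrinsic UV luminosity of 10 units, of which only
# 4 escape the dust, and 2 units observed in the chosen IR band.
k_i = k_from_uv(10.0, 4.0, 2.0)            # (10 - 4) / 2 = 3
sfr = sfr_hybrid(4.0, 2.0, k_i, c_uv=1.0)  # recovers c_UV * L(UV)_int
```

By construction, plugging the calibrated $k_i$ back into the hybrid estimator recovers $c_{UV} \times L(UV)_{int}$ exactly for the calibration data; the scientific question is how $k_i$ varies between objects and regions.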
This emission is often visible in resolved galaxies as a diffuse component which varies with galaxy type \citep{sauvage1992a}. This perhaps reflects differences in the age of stellar populations, or equally in the relative contributions of young and old stellar populations to the dust--heating interstellar radiation field. Even in actively star--forming galaxies, recent results based on idealised hydrodynamical simulations find that up to a third of the total IR luminosity (TIR, which we define here as the integral of dust emission over all wavelengths) can be due to stars older than 100~Myr, and therefore not related to recent star formation \citep{boquien2014a}, in line with results assuming a constant SFR \citep{crocker2013a}. This fraction can be even higher in individual IR bands. This is consistent with various \textit{Herschel} \citep{pilbratt2010a} studies, which have shown that the warm dust emission is driven by recent star formation whereas the emission of the colder dust is rather driven by older stellar populations \citep{popescu2000a,bendo2010a,bendo2012a,bendo2015a,boquien2011a,delooze2012a,delooze2012b,lu2014a,natale2015a}. As dust emission is not entirely related to UV--emitting stars, it is necessary to take this into account when deriving hybrid estimators. One approach to tackle this issue is to systematically subtract the diffuse emission \citep[e.g.,][]{calzetti2007a}, assuming that it is unrelated to recent star formation. This may not be valid if a significant fraction of the diffuse emission is actually due to radiation having escaped from star--forming regions \citep{kirkpatrick2014a}. In any case, this method requires observations at a high enough spatial resolution to enable the subtraction of the diffuse emission on a local basis.
This can be achieved for instance by directly processing the images or by estimating the level of the diffuse emission from an ancillary dataset \citep[e.g., from a gas map, assuming the gas--to--dust mass ratio and the radiation field intensity due to old stars,][]{leroy2012a}. An alternative approach is to statistically take the average diffuse dust emission into account by calibrating $k_i$ against a star formation tracer insensitive to the emission of old stars, such as the extinction--corrected H$\alpha$ \citep{kennicutt2009a}. While hybrid SFR estimators provide good results on average, they have so far been limited to constant values of $k_i$. This ignores not only galaxy--to--galaxy variations, which can be large, but also variations within galaxies when applied on local scales. Ultimately, they run the risk of being strongly dependent on the intrinsic properties of the calibration sample. This may induce systematic biases in the SFR if the observed sample has different properties. The aim of this paper is to derive a parametrisation of $k_i$ that naturally includes the varying degrees of dust heating from old stars at 24~$\mu$m, 70~$\mu$m, 100~$\mu$m and for the TIR. Following this idea, we propose to parametrise $k_i$ as a function of 1. the FUV$-$NIR (J to 3.6~$\mu$m bands) colours, which are sensitive to the SFH, and 2. NIR luminosity densities per unit area, which are sensitive to the stellar mass surface densities. Such an approach would serve as a basis for new hybrid SFR estimators applicable at local and global scales across a wide range of physical conditions. To do so, we first need to quantify and understand how the relation between the UV and the IR varies within and between galaxies. With this in mind, we carry out a spatially resolved, multi--wavelength study of eight star--forming spiral galaxies drawn from the KINGFISH sample (Sect.~\ref{sec:sample}).
These data allow us to carry out a detailed Spectral Energy Distribution (SED) modelling including stellar populations of all ages, nebular emission, attenuation by dust in the UV--to--NIR domains, and the re--emission of the absorbed energy at longer wavelengths. In turn, such a modelling allows us to compute their local physical properties such as their UV attenuation, their stellar mass, and their mean intrinsic and specific SFR over 100~Myr (Sect.~\ref{sec:physical-parameters}). Combining these physical properties with the observations, we investigate how $k_i$ depends on the local properties of galaxies in Sect.~\ref{sec:kIR-dependence-parameters}. This allows us to provide new practical methods to correct the FUV for dust attenuation at local scales in Sect.~\ref{sec:hybrid-relations}, which we generalise to entire galaxies in Sect.~\ref{sec:kir-unresolved}. We discuss the limits of this new approach and provide practical guidelines on how to apply it to correct the UV for the attenuation in different cases in Sect.~\ref{sec:limits}. Finally, we summarise our results in Sect.~\ref{sec:conclusion}. \section{Sample\label{sec:sample}} \subsection{Selection\label{ssec:selection}} To carry out this study we need to perform a detailed SED modelling to determine the local physical properties of nearby galaxies and in particular their UV attenuation, which is key as we will see in Sect.~\ref{sec:kIR-dependence-parameters}. To do so we rely on the KINGFISH \citep[Key Insights on Nearby Galaxies: a Far-Infrared Survey with \textit{Herschel},][]{kennicutt2011a} sample of nearby galaxies. It provides us with a broad range of spatially resolved, FUV to far--IR (FIR) data over many galaxy types. In this paper we restrict our subsample to star--forming spiral galaxies (Sa and later types) that have a minor axis of at least 5\arcmin\ and an inclination no larger than 60$\degr$.
This ensures that we have at least several tens of $\sim$1~kpc resolution elements across each galaxy at 24\arcsec, the resolution of \textit{Herschel} SPIRE 350~$\mu$m, which is the coarsest data we consider as we will see later. Such large resolution elements ensure that there is little leakage of energetic photons between neighbouring pixels \citep[leaking photons should be absorbed within a radius of the scale--height of the diffuse ionised gas, 900~pc in the Milky Way,][]{reynolds1989a}, which would affect the energy balance assumption of the modelling (Sect.~\ref{ssec:modelling}). We also only select galaxies without a strong active nucleus (for instance we eliminate galaxies with a Seyfert nucleus but we keep LINER ones) to limit the contamination of the UV and the IR by emission from the nucleus. In addition, we also eliminate galaxies at low Galactic latitude due to the very large number of foreground stars in the field, which makes their removal daunting and uncertain (see Sect.~\ref{ssec:convolution} for the removal procedure). This would lead to a possible contamination of the emission of the galaxies by foreground objects. Finally, we also eliminate galaxies with data of insufficient quality for this study. This is for instance the case for galaxies that have only been observed with the GALEX All--Sky Imaging Survey, which is too shallow to allow for a reliable pixel--by--pixel determination of the UV fluxes. We have therefore selected eight galaxies in our sample: NGC~628, NGC~925, NGC~1097, NGC~3351, NGC~4321, NGC~4736, NGC~5055, and NGC~7793. \subsection{Properties} As we can see in Table~\ref{tab:sample}, the sample is made of fairly regular star--forming galaxies in the nearby universe, spanning the Hubble sequence from Sab to Sd. 
\begin{table*} \centering \begin{tabular}{cccccccc} \hline\hline Name &Morphological type&Angular size &Distance&$12+\log O/H$&$\log sSFR$\\ & &(arcmin) &(Mpc) &\citep{kobulnicky2004a}&(yr$^{-1}$)\\\hline NGC \phantom{0}628&SAc &$10.5\times9.5$&7.2 &9.02&$-$9.73\\ NGC \phantom{0}925&SABd &$10.5\times5.9$&9.12 &8.79&$-$9.76\\ NGC 1097 &SBb &$9.3\times6.3$ &14.2 &9.09&$-$9.86\\ NGC 3351 &SBb &$7.4\times5.0$ &9.33 &9.19&$-$10.48\\ NGC 4321 &SABbc &$7.4\times6.3$ &14.3 &9.17&$-$9.88\\ NGC 4736 &SAab &$11.2\times9.1$&4.66 &9.01&$-$10.76\\ NGC 5055 &SAbc &$12.6\times7.2$&7.94 &9.14&$-$10.53\\ NGC 7793 &SAd &$9.3\times6.3$ &3.91 &8.88&$-$9.59\\\hline \end{tabular} \caption{Main properties of the eight galaxies in the selected sample. Data extracted from Table~1 of \cite{kennicutt2011a}. See references therein.\label{tab:sample}} \end{table*} This ensures different levels of star-forming activity (from $\log sSFR=-10.76$~yr$^{-1}$ for NGC~4736 to $\log sSFR=-9.59$~yr$^{-1}$ for NGC~7793) and different levels of diffuse dust emission \citep{sauvage1992a}. This range of star formation activity of over one dex is also expanded by the spatially--resolved nature of this study, each galaxy containing relatively more active and more quiescent regions. In the end, this range of star formation activity represents the sweet spot to calibrate hybrid SFR estimators. More extreme galaxies from the point of view of star formation are either dominated by diffuse emission (such as early type galaxies) or quite the opposite have a negligible diffuse emission compared to dust heating by massive stars (such as young starbursts). In terms of sizes, all galaxies are well-resolved, with NGC~3351 being the smallest with a size of $7.4\arcmin\times 5.0\arcmin$ and NGC~5055 being the largest with a size of $12.6\arcmin\times 7.2\arcmin$, providing many resolution elements within each object (see Sect.~\ref{sec:data}). 
Finally, the metallicities are typically super--solar according to the estimators of \cite{kobulnicky2004a}. This is due to the criteria mentioned in Sect.~\ref{ssec:selection}. They tend to eliminate low metallicity star--forming galaxies as they are often smaller and are not well--resolved with Herschel, if their far--infrared emission is detected at all. While this reduces the volume of the space of physical properties covered by our sample, those galaxies have in general little attenuation and little dust emission. This means that in their case, uncertainties on hybrid estimators will have an especially small impact. \section{Data\label{sec:data}} For all of these galaxies we have spatially resolved data in the UV (FUV and NUV bands from GALEX), near--infrared (in the J, H, and Ks bands with 2MASS, and at 3.6~$\mu$m and 4.5~$\mu$m with IRAC\footnote{The calibration of IRAC data has been converted from point--source to extended emission using the aperture corrections described in \url{https://irsa.ipac.caltech.edu/data/SPITZER/docs/irac/iracinstrumenthandbook/29/}.} on--board \textit{Spitzer}), mid--infrared (IRAC 5.8~$\mu$m, 8~$\mu$m, and MIPS 24~$\mu$m from \textit{Spitzer}), and far--infrared (PACS 70~$\mu$m, 100~$\mu$m, 160~$\mu$m, and SPIRE 250~$\mu$m and 350~$\mu$m from \textit{Herschel}\footnote{PACS \citep{poglitsch2010a} and SPIRE \citep{griffin2010a} data were processed to Level 1 using HIPE version 11.1.0 with calibration products PACS\_CAL\_56\_0 and SPIRE\_CAL\_11\_0. Subsequent processing and map making was carried out with Scanamorphos \citep{roussel2013a} version 24. For SPIRE the beam areas at 250~$\mu$m and 350~$\mu$m were assumed to be 465.39\arcsec$^2$ and 822.58\arcsec$^2$.}). This broad set of data enables us to estimate their local physical properties along with their uncertainties from SED modelling (Sect.~\ref{sec:physical-parameters}). 
We do not consider the SPIRE 500~$\mu$m band because it would further degrade the angular resolution (the beam size at 500~$\mu$m is $\sim35$\arcsec) for no gain in the present context, as the TIR luminosity, which is important to constrain the UV attenuation, is already securely determined without the 500~$\mu$m band. Finally, we do not consider optical data here either, as our initial investigations on three galaxies in the sample (NGC~628, NGC~1097, and NGC~3351) have shown that their presence does not allow us to obtain significantly better measurements of the physical parameters we are interested in. In addition, they come from heterogeneous sources, unlike the other bands used here. \subsection{Milky--Way foreground dust extinction correction} The emission from the FUV to the near--infrared (NIR) is affected by the presence of foreground dust in the Milky Way. We unredden the SED adopting the \cite{cardelli1989a} Milky--Way extinction curve including the \cite{odonnell1994a} update. The $E(B-V)$ normalisation is obtained from NASA/IPAC's Galactic Dust Reddening and Extinction service\footnote{\url{https://irsa.ipac.caltech.edu/applications/DUST/}} as the mean $E(B-V)$ computed from the \cite{schlafly2011a} extinction maps in a 5\arcmin\ radius circle, similar to the typical size of the galaxies in the sample. \subsection{Convolution\label{ssec:convolution}} To carry out a spatially resolved modelling and analysis, we need to ensure that for each galaxy all the maps have the same angular resolution and pixel size. To do so, we need to convolve all images to the same point spread function and project them on a common grid of pixels. In a first step, to ensure there is no contamination from prominent foreground stars, we edit them out from the images using \textsc{iraf}'s \textsc{imedit} procedure.
It replaces the pixels within a manually defined rectangular aperture enclosing a star with pixels having the same mean and standard deviation as those in the immediate surrounding area. The stars are identified by eye, combining UV, optical, and NIR data. Depending on the galactic latitude of the target and the band considered, the number of stars removed varies from a few per image to several dozen. The \textit{Spitzer} 3.6~$\mu$m and 4.5~$\mu$m images are generally the most affected by the presence of foreground stars. Once the images have been cleaned, we degrade their angular resolution to that of the SPIRE 350~$\mu$m band using the convolution kernels of \cite{aniano2011a}. All images are then registered onto a common grid with a pixel size of 24\arcsec, close to the resolution of the SPIRE 350~$\mu$m band, corresponding to physical scales from 0.5~kpc (NGC~7793) to 1.7~kpc (NGC~1097). The pixel size is similar to the angular resolution so that individual pixels are physically independent from one another. This registration is carried out using \textsc{iraf}'s \textsc{wregister} module with the \textsc{drizzle} interpolant. Finally, the background is subtracted from the registered images by averaging the mean values from typically several tens of $3\times3$ pixel boxes randomly located in empty regions around the galaxy. \subsection{Uncertainties} The uncertainties on the fluxes are computed by summing in quadrature the standard deviation of the background level measurements with the mean of the standard deviation of the pixel values in each of the $3\times3$ pixel boxes. A systematic uncertainty of 5\% is added to reflect calibration errors. For the SED modelling, we only select pixels detected at least at a 1--$\sigma$ level in all bands. The lowest signal--to--noise images are the PACS 70~$\mu$m and 100~$\mu$m bands.
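The quadrature prescription just described can be sketched as follows (assuming, as one reading of the text, that the 5\% calibration term is also combined in quadrature; the numbers are arbitrary):

```python
import math

def flux_uncertainty(bkg_level_std, mean_box_std, flux, calib_frac=0.05):
    """Quadrature sum of the background-level scatter, the mean per-box
    pixel standard deviation, and a fractional calibration term."""
    return math.sqrt(bkg_level_std**2 + mean_box_std**2
                     + (calib_frac * flux)**2)

sigma = flux_uncertainty(0.3, 0.5, 20.0)  # arbitrary flux units
detected = 20.0 / sigma >= 1.0            # the 1-sigma selection cut
```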
While this necessarily increases the uncertainties on the total infrared (TIR) luminosity, these uncertainties are fully taken into account in the modelling and therefore propagated to the estimation of the physical parameters. This yields a final sample of 1364 pixels distributed over eight galaxies. \section{Estimation of the physical parameters\label{sec:physical-parameters}} \subsection{\textsc{cigale} modelling\label{ssec:modelling}} To compute $k_i$, it is essential that we are in a good position to determine either the intrinsic UV emission (Eq.~\ref{eqn:k-IR-v1}) or the SFR (Eq.~\ref{eqn:k-IR-v2}). This means that we need to obtain reliable estimates of either the UV attenuation or the SFR. In principle, for deriving new hybrid SFR estimators, using Eq.~\ref{eqn:k-IR-v2} would appear to be the most direct option as there are multiple conversion factors ($c_{UV}$) available in the literature. However, in practice, these conversion factors strongly rely on multiple hypotheses, and in particular on the assumed shape of the SFH, which 1. is difficult to know a priori, 2. is not always adequate \citep{boquien2014a}, 3. varies from region to region and from galaxy to galaxy, and 4. makes the estimated SFR somewhat model dependent. To avoid such issues it appears safer to rely on Eq.~\ref{eqn:k-IR-v1} and therefore on the UV attenuation, which has a weaker dependence on the exact SFH. This means that while we do not explicitly provide hybrid SFR estimators, we provide relations to correct the UV for the attenuation and we leave to the reader the choice of the most adequate UV--to--SFR conversion coefficient for their case, limiting model dependence and ensuring enhanced flexibility compared to hybrid estimators provided in the literature. At the same time, we also want to understand whether and how hybrid estimators depend on local physical properties such as the SFR or the stellar mass, so we also need to be in a position to compute those.
To reach these objectives, we model the FUV to FIR emission pixel--by--pixel using the Python CIGALE\footnote{\url{http://cigale.lam.fr/}} SED modelling code (Boquien et al.) version 0.9. Briefly, the model is based on an energy balance principle: the energy absorbed by dust in the UV, optical, and NIR domains is self--consistently re--emitted in the mid-- and far--infrared. Each SED is computed from the following high--level procedure: \begin{itemize} \item The star formation history (SFH) is modelled with two decaying exponentials. The first one, with a long $e$--folding time of a few billion years, models the long--term star formation of the galaxy that has created the bulk of the total stellar mass. The second one, with a much shorter $e$--folding time under 100 Myr, models the latest episode of star formation. This modelling of the SFH represents a compromise. Because we are using only broad band data, it is not possible to constrain the shape of the SFH in more detail. At the same time we are studying regions of a typical size of 1~kpc whose SFH is expected to vary with greater amplitude and more rapidly than in the case of entire galaxies. Modelling the SFH with two decaying exponentials allows us to take such variations into account without unnecessary complexity that would not be constrained with broad band observations. We will see later that this flexibility is important to include the impact of a variable SFH on the estimation of other physical properties, and as a consequence on the computation of $k_i$ (see Sect.~\ref{ssec:variations-kIR}). \item The stellar spectrum is calculated by convolving the \cite{bruzual2003a} single stellar populations with the SFH, assuming a \cite{chabrier2003a} initial mass function and a metallicity of $Z=0.02$, which corresponds to $12+\log O/H=8.90$ assuming the solar metallicity ($Z=0.0134$) and oxygen abundance ($12+\log O/H=8.69$) from \cite{asplund2009a}.
One of the main uncertainties on the stellar models is the impact of thermally pulsating asymptotic giant branch stars. In total, systematics can induce mass inaccuracies of up to 0.3~dex \citep{conroy2013a}. In addition, due to the degeneracy between the metallicity, the age, and the attenuation, the choice of the metallicity can also have an impact on the estimates of the physical properties we present in Sect.~\ref{ssec:parameters-maps}. We evaluate these uncertainties in Appendix~\ref{sec:impact-metals}. \item The nebular emission, which includes both the recombination lines and the continuum (free--free, free--bound, and 2--photon processes) but does not include lines from photodissociation regions, is computed from dedicated CLOUDY \citep{ferland1998a} templates based on the models of \cite{inoue2011a}. The emission is directly proportional to the production rate of Lyman continuum photons, with the Lyman continuum escape fraction assumed to be negligible and the dust absorbing 10\% of Lyman photons. The ionisation parameter is assumed to be $10^{-2}$, which has an impact on line ratios but little impact on broadband fluxes. \item The stellar and nebular emissions are then attenuated using a power--law--modified starburst curve \citep{calzetti1994a,calzetti2000a}: $\mathrm{A(\lambda)=A(\lambda)_{SB}\times(\lambda/550~nm)^\delta}$, with $\delta$ the free parameter modifying the slope of the attenuation curve. We also include an ultraviolet bump with a variable strength ranging from no bump at all to a strong bump similar to that of the Milky Way. This modelling allows for a flexible shape for the attenuation law and also for a variable differential reddening between the emission due to stars younger and older than 10~Myr, following the model of \cite{charlot2000a}. This means that stellar populations of all ages, including old stars, contribute to dust heating and that stars older than 10~Myr are less attenuated than their younger counterparts. 
\item Finally, the energy absorbed by the dust is re--emitted in the IR using the latest \cite{dale2014a} templates. We assume a broad range for $\alpha$, a ``parameter that represents the relative contributions of the different local SEDs'' \citep{dale2002a} and defined as $dM_d(U)\propto U^{-\alpha}dU$, with $M_d$ the dust mass and $U$ the radiation field intensity. \end{itemize} The full grid consists of a little over 80 million models. The list of the main parameters and their respective values is presented in Table~\ref{tab:cigale-parameters}. \begin{table*} \centering \begin{tabularx}{\textwidth}{lX} \hline\hline Free parameters&Values\\\hline $e$--folding time of the old population&0.001, 1, 2, 3, 4, 5, 6, 7, 8 Gyr\\ $e$--folding time of the latest episode of star formation&5, 10, 25, 50, 100 Myr\\ Fraction of the stellar mass formed in the latest episode of star formation&0, 0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.25\\ Age of the oldest stars&13 Gyr\\ Age of the onset of the latest episode of star formation&5, 10, 25, 50, 100, 200, 350, 500 Myr\\\hline E(B$-$V) of the young population&0.01, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.65, 0.70 mag\\ Differential reddening&0.25, 0.50, 0.75\\ $\delta$&$-0.5$, $-0.4$, $-0.3$, $-0.2$, $-0.1$, 0.0\\ Ultraviolet bump strength&0 (no bump), 1, 2, 3 (Milky Way--like)\\\hline $\alpha$&1.5000, 1.5625, 1.6250, 1.6875, 1.7500, 1.8125,\\ &1.8750, 1.9375, 2.0000, 2.2500, 2.5000, 2.7500,\\ &3.0000, 3.2500, 3.5000, 3.7500, 4.0000\\\hline\hline Fixed parameters&Value\\\hline Metallicity&0.02\\\hline Ionisation parameter&$10^{-2}$\\ Dust absorbed fraction&10\%\\ Escape fraction&0\%\\\hline \end{tabularx} \caption{Main CIGALE parameters\label{tab:cigale-parameters}} \end{table*} In the final step, we fit these models to the observed SED. For each of the output parameters, we compute the marginalised probability distribution functions (PDF) from the $\chi^2$ distribution.
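The estimation step based on these PDFs can be sketched as follows. This is a simplified stand--in for the actual CIGALE machinery, using Gaussian likelihood weights $\exp(-\chi^2/2)$ over the model grid:

```python
import numpy as np

def likelihood_weighted_estimate(param_grid, chi2):
    """Likelihood-weighted mean and standard deviation of one output
    parameter over a grid of models, from the chi^2 of each model.
    A simplified stand-in for the actual CIGALE implementation."""
    w = np.exp(-0.5 * np.asarray(chi2, dtype=float))
    w /= w.sum()  # normalise the likelihood weights
    p = np.asarray(param_grid, dtype=float)
    mean = np.sum(w * p)
    std = np.sqrt(np.sum(w * (p - mean) ** 2))
    return mean, std
```

Models with low $\chi^2$ dominate the weighted mean, while the weighted standard deviation reflects how strongly the data discriminate between models, which is how the parameter uncertainties quoted below are obtained.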
The estimated values of the parameters and their uncertainties are taken as the likelihood--weighted means and standard deviations. The precision and uncertainty of the evaluation of the physical properties are described in Appendix \ref{sec:model-uncertainties}. Note that as the different physical components contribute to part or all of the bands considered and as we fit all of them simultaneously, all components potentially have an effect on the determination of the physical properties. For instance, a change in the SFH will also have an effect on the stellar mass and on the attenuation, and therefore indirectly on the relative amount of dust heating by younger and older stars. An example of a best--fit with the stellar, nebular, and dust components is shown in Fig.~\ref{fig:typical-fit}. \begin{figure}[!htbp] \includegraphics[width=\columnwidth]{typical_fit.pdf} \caption{Example of a typical best--fit (median reduced $\chi^2$ of the best--fits for the entire sample). This fit corresponds to a region to the East of NGC~4736. Different components are shown: unattenuated stellar emission (dashed blue line), attenuated stellar emission (yellow line), nebular emission (magenta line), and dust emission (red line). The model fluxes in broad bands are shown as red dots and the observed fluxes as green crosses with their corresponding uncertainties.\label{fig:typical-fit}} \end{figure} \subsection{Parameter maps\label{ssec:parameters-maps}} For each galaxy in the sample, we show in Fig.~\ref{fig:maps-parameters} the maps of three of the main physical properties estimated by CIGALE: the FUV attenuation, the stellar mass surface density, and the current SFR surface density.
\begin{figure*}[!htbp] \includegraphics[width=.495\textwidth]{maps_ngc0628.pdf} \includegraphics[width=.495\textwidth]{maps_ngc0925.pdf} \includegraphics[width=.495\textwidth]{maps_ngc1097.pdf} \includegraphics[width=.495\textwidth]{maps_ngc3351.pdf} \includegraphics[width=.495\textwidth]{maps_ngc4321.pdf} \includegraphics[width=.495\textwidth]{maps_ngc4736.pdf} \includegraphics[width=.495\textwidth]{maps_ngc5055.pdf} \includegraphics[width=.495\textwidth]{maps_ngc7793.pdf}\\ \caption{FUV attenuation (left), stellar mass surface density (centre), and the current SFR surface density (right) estimated with CIGALE for all the galaxies in the sample. All surface densities are corrected for the inclination. For easier comparison between each galaxy, the colour scale is fixed across all objects.\label{fig:maps-parameters}} \end{figure*} First of all, the FUV attenuation shows a broad range of behaviours. Galaxies such as NGC~925 and NGC~7793, which are the two lowest metallicity galaxies in the sample, have consistently low attenuations across the disk. At the opposite end, NGC~5055 exhibits high values of the attenuation on a large fraction of the disk. Meanwhile, NGC~1097 and NGC~3351 have a very strong attenuation in the inner regions but more moderate values further out in the disk. This high central attenuation is probably due to the high concentration of molecular gas and dust in the central parts of these two well--known barred galaxies. The stellar mass maps are markedly different from the attenuation maps. We see that all galaxies have a very smooth, radially declining stellar mass profile. This shows the ability of the model to recover older stellar populations even in regions of intense star formation where they could be outshone by younger stellar populations at some wavelengths. Finally, the SFR maps also present interesting features. There are broadly decreasing radial gradients in the SFR density.
The nuclear starbursts in NGC~1097 and NGC~3351 present very strong peaks compared to the typical star formation across the rest of the disk. Such nuclear starbursts are expected due to the central gas concentration in barred galaxies \citep[e.g.,][]{tabatabaei2013a}. A strong peak is also present in NGC~4321 and NGC~5055. Conversely, NGC~628, NGC~925, and NGC~7793 have much shallower radial gradients. Of particular interest, NGC~4736 has a strong star--forming ring that shows prominently on the map. \section{Estimation of $k_i$ and dependence on the local physical parameters\label{sec:kIR-dependence-parameters}} The estimation of the FUV attenuation with CIGALE allows us to compute $k_i$ for each pixel and for each IR band. This follows directly from Eq.~\ref{eqn:k-IR-v1}: \begin{equation} k_i=\frac{L\left(UV\right)_{obs}}{L\left(IR\right)_i}\times\left[10^{A\left(UV\right)/2.5}-1\right].\label{eqn:ki-auv} \end{equation} This means that for individual IR bands, the derivation of $k_i$ relies on the modelling only for the determination of the UV attenuation (see Appendix \ref{sec:comp-FUV} for why SED modelling is required to estimate the UV attenuation). The UV and the IR luminosities are directly obtained from the observations. In the case of the TIR only, $k_i$ also depends on the estimation of the TIR luminosity by the model, a very securely determined quantity as shown in Appendix \ref{sec:model-uncertainties}. \subsection{Distribution of $k_i$ in galaxies\label{ssec:distrib-ki}} In Fig.~\ref{fig:hist-k} we present the distributions of $k_i$ for all the galaxies in the sample at 24~$\mu$m, 70~$\mu$m, and 100~$\mu$m, in addition to the TIR. We drop bands at longer wavelengths as it has been shown that their emission is mainly driven by dust heated by the old stellar populations \citep{bendo2010a,bendo2012a,bendo2015a,boquien2011a,lu2014a}.
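In code form, Eq.~\ref{eqn:ki-auv} is a one--line helper (the function name is ours):

```python
def k_i(l_uv_obs, l_ir, a_uv):
    """Eq. for k_i: observed UV luminosity, luminosity in IR band i
    (same luminosity units), and the UV attenuation A(UV) in magnitudes."""
    return l_uv_obs / l_ir * (10.0 ** (a_uv / 2.5) - 1.0)
```

For example, with $A(UV)=2.5$~mag the intrinsic UV luminosity is ten times the observed one, so the IR term $k_i\,L(IR)_i$ must supply nine times the observed UV luminosity.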
\begin{figure*}[!htbp] \includegraphics[width=.49\textwidth]{hist_k_24.pdf} \includegraphics[width=.49\textwidth]{hist_k_70.pdf}\\ \includegraphics[width=.49\textwidth]{hist_k_100.pdf} \includegraphics[width=.49\textwidth]{hist_k_TIR.pdf} \caption{Stacked distributions of $k_i$ from 24~$\mu$m (top left) to 100~$\mu$m and the TIR (bottom right) for NGC~628, NGC~925, NGC~1097, NGC~3351, NGC~4321, NGC~4736, NGC~5055, and NGC~7793 from blue (bottom distribution) to red (top distribution). The solid black line indicates the mean value for the sample whereas the dashed black line indicates the luminosity--weighted mean value. We have selected only regions detected at least at a 5--$\sigma$ level in the relevant band. This affects mostly the 70~$\mu$m and 100~$\mu$m bands.\label{fig:hist-k}} \end{figure*} The first important aspect to note is the broad width of the distributions. For instance, as we can see in Table~\ref{tab:kIR}, $k_{24}$ ranges from 1.55 to 13.45 ($\left<k_{24}\right>=8.11\pm 2.10$), a variation of over a factor of 8, and $k_{TIR}$ ranges from 0.19 to 0.85 ($\left<k_{TIR}\right>=0.63\pm0.12$), a variation of over a factor of 4. \begin{table*} \centering \begin{tabular}{ccccccc} \hline\hline Band&min($k_i$)&max($k_i$)&$\left<k_i\right>$&$\left<k_i\right>_{L_i}$&$k_i$ \citep{hao2011a}&$k_i$ \citep{liu2011a}\\\hline 24 &1.55 &13.45 &8.11$\pm$2.10 &6.17$\pm$2.17 &3.89$\pm$0.15 &6.0\\ 70 &0.19 &2.71 &1.57$\pm$0.39 &1.12$\pm$0.50 & &\\ 100 &0.22 &1.95 &1.12$\pm$0.25 &0.94$\pm$0.29 & &\\ TIR &0.19 &0.85 &0.63$\pm$0.12 &0.58$\pm$0.12 &0.46$\pm$0.12 &\\\hline \end{tabular} \caption{From left to right columns: minimum, maximum, mean, and luminosity--weighted mean of $k_i$ at 24~$\mu$m, 70~$\mu$m, 100~$\mu$m, and for the TIR. The last two columns list the values of \cite{hao2011a} and \cite{liu2011a} for comparison.\label{tab:kIR}} \end{table*} In order to test whether the $k_i$ values from different galaxies are drawn from the same underlying distribution or not, we have carried out Kolmogorov--Smirnov tests.
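These pairwise tests can be sketched as follows (the input layout and function name are hypothetical; we assume \textsc{scipy} is available):

```python
from itertools import combinations
from scipy.stats import ks_2samp

def compare_ki_distributions(ki_by_galaxy, threshold=0.01):
    """Pairwise two-sample Kolmogorov-Smirnov tests between the k_i
    distributions of different galaxies. ki_by_galaxy maps galaxy
    name -> sequence of k_i values (hypothetical input layout).
    Returns the pairs consistent with a common parent distribution."""
    consistent = []
    for (g1, x1), (g2, x2) in combinations(ki_by_galaxy.items(), 2):
        stat, p = ks_2samp(x1, x2)
        if p >= threshold:
            consistent.append((g1, g2, p))
    return consistent
```

Pairs absent from the returned list are those for which the null hypothesis of a common parent distribution is rejected at the chosen threshold.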
We find that, except for a few galaxy pairs at each wavelength that have $p \ge 0.01$ (with $p$ the probability of obtaining differences at least as large if the values had been drawn from the same underlying distribution), the samples appear to have been drawn from different distributions. This alone already shows that there must be important physical differences driving the variation of $k_i$ between each of these galaxies. A first indication of the physical origin of these variations can be obtained if we consider carefully the $k_{24}$ and $k_{TIR}$ distributions in relation to the morphological type of the galaxies. The earlier type a galaxy is, the lower $k_i$ should be, because the fraction of diffuse emission should be higher in such galaxies \citep{sauvage1992a}. It is therefore not surprising that NGC~4736, the earliest type galaxy in the sample, shows some of the lowest $k_i$. At the other end of the Hubble sequence of spiral galaxies, NGC~925 and NGC~7793 are respectively of SABd and SAd types. We see in Fig.~\ref{fig:hist-k} that they indeed have among the highest values of $k_i$. This shows that the diversity in galaxy types likely contributes to the differences in the distribution of $k_i$. Our results here show the possible dangers of blindly applying estimators to individual galaxies and regions within galaxies if they have markedly different intrinsic physical properties compared to the calibration samples of the estimators. Related to the first point, when we consider our entire sample we find, as mentioned earlier, that $\left<k_{24}\right>=8.11\pm 2.10$ and $\left<k_{TIR}\right>=0.63\pm0.12$. This is higher than the values determined by \cite{hao2011a} ($\left<k_{25}\right>=3.89\pm0.15$\footnote{While defined from the IRAS 25~$\mu$m band rather than from the Spitzer 24~$\mu$m band, \cite{kennicutt2009a} showed that $L(24~\mu m)/L(25~\mu m)=0.98\pm0.06$.
We therefore make no distinction between those two bands.} and $\left<k_{TIR}\right>=0.46\pm0.12$ for the TIR) and \cite{liu2011a} ($k_{24}=6.0$). To compare more readily with values derived for entire galaxies, we have also computed the luminosity--weighted means: $\left<k_{24}\right>_{L(IR)}=6.17\pm2.17$ and $\left<k_{TIR}\right>_{L(IR)}=0.58\pm0.12$. These values are closer to the results of \cite{hao2011a} and \cite{liu2011a}. This shows that at such scales, the global value for each galaxy is driven by the most luminous regions. Nevertheless, applying a variable $k_i$ has a clear impact on the integrated dust--corrected FUV luminosity. For instance, adopting $k_{24}=3.89$ ($k_{24}=6.0$) can lead to galaxy--integrated dust--corrected FUV luminosities lower (higher) by 38\% (40\%) when compared to luminosities computed with a variable $k_{24}$. Deviations can be much larger locally. There are also important variations from one galaxy to another, showing the necessity of having a $k_i$ adapted to each galaxy. The actual impact of using a potentially inadequate $k_i$ naturally depends on the fraction of UV photons absorbed by dust. The lower the attenuation, the smaller the impact of an error on $k_i$. We will see in Sect.~\ref{sec:kir-unresolved} how we can relate locally derived $k_i$ to global values adapted to unresolved objects. We can complement this first insight by examining the maps of $k_i$ at 24~$\mu$m, 70~$\mu$m, 100~$\mu$m, and for the TIR that we present in Fig.~\ref{fig:maps-k}.
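The hybrid correction itself, $L(FUV)_{int}=L(FUV)_{obs}+k_i\,L(IR)_i$, follows from rearranging Eq.~\ref{eqn:ki-auv} and can be written as a trivial helper (name ours):

```python
def fuv_dust_corrected(l_fuv_obs, l_ir, k):
    """Hybrid correction: L(FUV)_int = L(FUV)_obs + k_i * L(IR)_i,
    which by the definition of k_i equals L(FUV)_obs * 10**(A(FUV)/2.5)."""
    return l_fuv_obs + k * l_ir
```

Because the intrinsic luminosity scales linearly with $k_i$ times the IR luminosity, adopting a fixed literature value of $k_i$ in place of a locally adapted one directly translates into the percent--level biases quoted above.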
\begin{figure*}[!htbp] \includegraphics[width=.495\textwidth]{maps_k_ngc0628.pdf} \includegraphics[width=.495\textwidth]{maps_k_ngc0925.pdf} \includegraphics[width=.495\textwidth]{maps_k_ngc1097.pdf} \includegraphics[width=.495\textwidth]{maps_k_ngc3351.pdf} \includegraphics[width=.495\textwidth]{maps_k_ngc4321.pdf} \includegraphics[width=.495\textwidth]{maps_k_ngc4736.pdf} \includegraphics[width=.495\textwidth]{maps_k_ngc5055.pdf} \includegraphics[width=.495\textwidth]{maps_k_ngc7793.pdf}\\ \caption{Maps of $k_i$ \label{fig:maps-k} at 24~$\mu$m, 70~$\mu$m, 100~$\mu$m, and for the TIR, from left to right, for each of the galaxies in the sample. Only pixels detected at least at a 5--$\sigma$ level in the corresponding band are shown here.} \end{figure*} The distribution of $k_i$ shows in general a broad gradient, with larger $k_i$ in the outer parts of the galaxies. Some complex structures can also be seen, with variations from one band to the other, probably translating variations of the relative influence of the different dust heating sources. The structures also appear to vary with wavelength, reflecting the different modes of dust heating for dust emitting at different wavelengths. The variations in the SNR with wavelength do not, however, allow for a comparison over the full extent of each galaxy. While the lower SNR in some bands makes the study of $k_i$ in outer regions difficult, this has a limited impact on the computation of the SFR in such regions. Indeed, most of the star formation will be visible in the UV and only a small fraction in the IR. Therefore even a large uncertainty on $k_i$ will translate into a lower uncertainty on the attenuation--corrected UV and thus on the SFR. The pixel--by--pixel study of inner regions still allows us to gain considerable information on $k_i$ in the most interesting range for hybrid estimators.
\subsection{Variation of $k_i$ with physical properties\label{ssec:variations-kIR}} As we have just seen, there are strong variations of $k_i$ within and between galaxies. To identify the physical origin of these variations, in Fig.~\ref{fig:k-vs-params} we plot $k_i$ versus the FUV attenuation, the stellar mass surface density, the SFR surface density, and the sSFR, the latter two averaged over the past 100~Myr. \begin{figure*}[!htbp] \includegraphics[width=\textwidth]{k_vs_params.pdf} \caption{$k_i$ versus the FUV attenuation (left), the stellar mass surface density (centre--left), the SFR surface density averaged over 100~Myr (centre--right), and the sSFR averaged over 100~Myr (right) for all the galaxies in the sample. Each dot represents one pixel and the colour indicates the galaxy following the colour bar on the right. We have selected only regions detected at least at a 5--$\sigma$ level in the relevant band. This affects mostly the 70~$\mu$m and 100~$\mu$m bands as they are the shallowest. The number of regions and the Spearman's rank correlation coefficient are shown at the top of each plot. Finally, triangles represent the global value for each galaxy in the sample, taking into account all pixels detected at least at a 1--$\sigma$ level in all bands. \label{fig:k-vs-params}} \end{figure*} A first inspection already provides us with interesting information beyond the systematic galaxy--to--galaxy differences in $k_i$ we found in Sect.~\ref{ssec:distrib-ki}: \begin{enumerate} \item there is very little to no variation of $k_i$ as a function of either the FUV attenuation or the SFR surface density, \item there is a decrease of $k_i$ with an increase of the stellar mass surface density, \item there is a clear increase of $k_i$ with the sSFR. \end{enumerate} Separating the respective impact of the different physical properties considered here is difficult as they are not independent from one another.
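The rank correlation coefficients quoted in this section can be computed as in the following sketch, where synthetic data stand in for the per--pixel measurements (we assume \textsc{scipy} is available):

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic stand-in for the per-pixel measurements: k_i values that
# increase with log(sSFR), plus Gaussian scatter. Only the call to
# spearmanr mirrors the analysis; the data here are illustrative.
rng = np.random.default_rng(0)
log_ssfr = rng.uniform(-11.0, -9.0, 200)
ki = 2.0 * log_ssfr + rng.normal(0.0, 0.5, 200)

rho, pvalue = spearmanr(log_ssfr, ki)  # rank correlation and its p-value
```

Being rank--based, Spearman's $\rho$ captures any monotonic relation between $k_i$ and the physical property, not only a linear one, which suits the trends discussed here.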
For instance, as we can see in Fig.~\ref{fig:maps-parameters}, inner regions in our sample will tend to have higher attenuations, higher stellar mass surface densities, and they will also tend to have more star formation than outer regions. This means that even if the variation of $k_i$ is exclusively due to one of these physical properties, in some cases some correlations may also appear with other physical properties. We concentrate here on the relations between $k_i$, the stellar mass surface density, and the sSFR. Let us first examine in detail the variations of $k_i$ with the stellar mass surface density. There is a clearly defined decreasing trend of $k_i$ with increasing stellar mass surface densities ($-0.73\le\rho\le-0.58$, with $\rho$ the Spearman rank correlation coefficient). This trend suggests that evolved stellar populations, which contribute the bulk of the total stellar mass, may play an important role in governing $k_i$. This role is probably not exclusive though. If we consider that these populations are responsible at all wavelengths for the diffuse emission, then for a fixed SFR we can expect that at low (high) stellar mass surface density a larger (smaller) fraction of the IR emission will be due to UV--emitting stellar populations. If this is indeed the case, there should be an excellent correlation between $k_i$ and the sSFR. Even though there are important uncertainties on the average SFR over 100~Myr, we see a positive trend between the average sSFR over 100~Myr and $k_i$ ($0.50\le\rho\le0.97$). To further understand the origin of the relations of $k_i$ with the different physical properties, we need to distinguish IR bands based on whether or not their emission is proportional to the incident radiation field intensity. This is the case for the TIR and the 24~$\mu$m band. The reason is obvious for the TIR, which by definition measures the total dust--absorbed luminosity.
The 24~$\mu$m band also behaves similarly because it is dominated by stochastic heating of very small grains \citep[see in particular Fig.~15 from][note however that the contamination by the emission from grains at the equilibrium can become significant for large radiation field intensities, which would render this approximation invalid]{draine2007a}. This linear relation between the absorbed and emitted luminosities means that if, for instance, 50\% of the absorbed luminosity originates from old stellar populations, correspondingly 50\% of the emission can also be attributed to old stellar populations. In consequence, the fraction of IR emission due to star formation can be indirectly estimated from the sSFR. This explains the excellent correlation between $k_i$ and the sSFR at 24~$\mu$m and for the TIR (top--right and bottom--right panels of Fig.~\ref{fig:k-vs-params}). The relation between $k_i$ and the stellar mass surface density appears, however, much weaker. This is expected as this quantity only probes the radiation field from old stars and therefore only gives weak and indirect information on the fraction of the emission due to young stars. Conversely, the blackbody emission that dominates the 70~$\mu$m and 100~$\mu$m bands is in general not linearly proportional to the incident radiation field intensity. As the radiation field intensity increases (decreases), the emission peaks at shorter (longer) wavelengths following Wien's displacement law. This induces a super--linear or a sub--linear variation of the emission with respect to the radiation field intensity, depending on whether the band probes the Wien or the Rayleigh--Jeans tail of the blackbody. This means that, unlike for the 24~$\mu$m band and for the TIR, the sSFR alone is not sufficient to estimate the fraction of the emission that is related to star formation.
This is probably because regions with the same sSFR can have different radiation field intensities, and therefore different dust temperatures and different $k_i$. However, as suggested by the higher Spearman rank correlation coefficient between $k_{100}$ and the sSFR, the issue is not as severe at 100~$\mu$m as it is at 70~$\mu$m. This is likely because the 100~$\mu$m band lies closer to the luminosity peak of the blackbody emission. As the absolute value of the derivative of the blackbody emission with respect to wavelength is smaller near the luminosity peak, the dependence on the temperature is somewhat weakened and the emission is more directly linked to the radiation field intensity, making the sSFR a better estimator of $k_{100}$ than of $k_{70}$. These differences from one band to the other, and the contrast with the 24~$\mu$m band and the TIR, illustrate the difficulty of disentangling the emission due to star formation from that due to old stellar populations for bands that are sensitive to the emission of dust at thermal equilibrium. \section{Recipes to correct the FUV for the attenuation\label{sec:hybrid-relations}} Following our results, a powerful approach to correct the UV for the attenuation by dust with hybrid estimators would be to parametrise the variation of $k_i$ against the sSFR or the stellar mass surface density. However, such a parametrisation would actually be difficult to put into practice. Parametrisations involving the sSFR would imply that we somehow already know the intrinsic FUV emission, defeating the purpose of attenuation correction. As for the stellar mass, its determination is somewhat dependent on the assumed model of stellar populations. With improvements on models, a parametrisation on the stellar mass would become invalid if current models prove to have systematic biases.
Therefore, rather than relying on derived physical properties, we attempt to parametrise $k_i$ directly on observed quantities that are good tracers of the sSFR (FUV$-$NIR colours, Sect.~\ref{ssec:param-colours}) and of the stellar mass surface density (luminosity densities per unit area, Sect.~\ref{ssec:param-NIR}). This approach has the advantage of being purely observational, not relying on the sometimes strong assumptions behind empirical sSFR estimators, and not requiring a full--fledged SED modelling to compute the SFR, which may not be possible for lack of appropriate data or experience. \subsection{Observed colours as a parametrisation for $k_i$\label{ssec:param-colours}} One approach to parametrise $k_i$ is to estimate the sSFR of a galaxy through its colour. For instance \cite{salim2005a} showed that the $\mathrm{NUV}-r$ colour of a galaxy is tightly linked to its birthrate parameter (the ratio of the average SFR over the last 100~Myr to the lifetime average SFR), a quantity closely related to the sSFR. This means that in principle we should be able to parametrise $k_i$ against $\mathrm{NUV}-r$. One obstacle in doing so in our case is that we do not have reliable optical data for the entire sample. Instead of relying on the $r$--band emission, we rather explore the effectiveness of a parametrisation against FUV$-$NIR colours. The FUV band is chosen so as not to require additional UV data, and the NIR bands are good proxies for the stellar mass \citep{bell2001a}. In Fig.~\ref{fig:kIR-colours} we show the variation of $k_i$ versus the FUV$-$J, FUV$-$H, FUV$-$Ks, and FUV$-$3.6~$\mu$m colours. \begin{figure*}[!htbp] \includegraphics[width=\textwidth]{k_vs_colours.pdf} \caption{$k_i$ versus the FUV$-$J, FUV$-$H, FUV$-$Ks, and FUV$-$3.6 colours (AB magnitudes) for each galaxy in the sample at 24~$\mu$m, 70~$\mu$m, 100~$\mu$m, and for the TIR. The colour of each dot indicates the galaxy following the colour bar to the right.
We have only selected regions detected at least at a 5--$\sigma$ level in the relevant band. This affects mostly the 70~$\mu$m and 100~$\mu$m bands. The number of regions and Spearman's rank correlation coefficient are shown at the top of each plot. The black lines represent the best linear fits using an Orthogonal Distance Regression algorithm (see the caption of Table~\ref{tab:kIR-colours} for a more detailed description). The best fit parameters along with the corresponding uncertainties are provided in Table~\ref{tab:kIR-colours}.\label{fig:kIR-colours}} \end{figure*} We see that there is a very clear relation between $k_i$ and the FUV$-$NIR colours at 24~$\mu$m ($-0.83\le\rho\le-0.82$) and for the TIR ($-0.97\le\rho\le-0.96$). The relation is however less clear at 100~$\mu$m ($-0.72\le\rho\le-0.68$) and especially at 70~$\mu$m ($-0.44\le\rho\le-0.40$). For the latter two bands, however, individual galaxies appear to follow their own relations. One explanation is that, as we have seen previously, the sSFR alone is not sufficient to estimate reliably the fraction of the TIR emission emerging in a given IR band. This problem is naturally nonexistent for the TIR, which by definition includes all of the dust emission and not just a fraction as for individual bands. The 24~$\mu$m band, being dominated by non--equilibrium emission, appears to suffer much less from this issue. To provide an easy--to--use recipe to estimate $k_i$ from FUV$-$NIR colours, we compute the best linear fit of the form: \begin{equation} k_i=a_\mathrm{FUV-NIR}+b_\mathrm{FUV-NIR}\times\mathrm{(FUV-NIR)}, \end{equation} taking the uncertainties on both $k_i$ and FUV$-$NIR into account. Selecting all pixels detected at least at 1--$\sigma$ in all bands and at 5--$\sigma$ in band $i$, the coefficients $a_\mathrm{FUV-NIR}$ and $b_\mathrm{FUV-NIR}$ along with the corresponding uncertainties are presented in Table~\ref{tab:kIR-colours}.
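A minimal sketch of such a fit with the \textsc{odr} module of \textsc{scipy} follows; the function name and the starting values are ours, not part of the published pipeline:

```python
import numpy as np
from scipy import odr

def fit_ki_vs_colour(colour, ki, sigma_colour, sigma_ki):
    """Linear fit k_i = a + b * colour with an Orthogonal Distance
    Regression, weighting by the uncertainties on both axes.
    The starting values beta0 are ours (roughly the 24 micron fit)."""
    model = odr.Model(lambda beta, x: beta[0] + beta[1] * x)
    data = odr.RealData(colour, ki, sx=sigma_colour, sy=sigma_ki)
    output = odr.ODR(data, model, beta0=[10.0, -2.0]).run()
    a, b = output.beta
    return a, b, output.sd_beta
```

Unlike ordinary least squares, ODR accounts for the fact that both the colour and $k_i$ carry comparable measurement uncertainties, which is why it is used for the fits presented here.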
\begin{table*} \centering \begin{tabular}{cccccc} \hline\hline IR band&colour&$a$&$b$&$\sigma_{ab}$&$\Delta k_i$\\\hline 24&FUV$-$J&$ 16.429\pm 0.265$&$ -2.122\pm 0.066$&$ -0.017$&$ 0.000\pm 1.393$\\ 24&FUV$-$H&$ 16.236\pm 0.248$&$ -2.005\pm 0.059$&$ -0.014$&$ 0.000\pm 1.446$\\ 24&FUV$-$K&$ 15.332\pm 0.215$&$ -1.923\pm 0.055$&$ -0.011$&$ 0.000\pm 1.404$\\ 24&FUV$-$3.6&$ 15.044\pm 0.227$&$ -2.169\pm 0.068$&$ -0.015$&$ 0.000\pm 1.405$\\\hline 70&FUV$-$J&$ 2.272\pm 0.145$&$ -0.177\pm 0.035$&$ -0.005$&$ 0.000\pm 0.339$\\ 70&FUV$-$H&$ 2.247\pm 0.143$&$ -0.164\pm 0.033$&$ -0.005$&$ 0.000\pm 0.340$\\ 70&FUV$-$K&$ 2.194\pm 0.132$&$ -0.162\pm 0.033$&$ -0.004$&$ 0.000\pm 0.340$\\ 70&FUV$-$3.6&$ 2.109\pm 0.122$&$ -0.166\pm 0.035$&$ -0.004$&$ 0.000\pm 0.347$\\\hline 100&FUV$-$J&$ 1.770\pm 0.120$&$ -0.163\pm 0.029$&$ -0.003$&$ 0.000\pm 0.180$\\ 100&FUV$-$H&$ 1.752\pm 0.118$&$ -0.153\pm 0.027$&$ -0.003$&$ 0.000\pm 0.180$\\ 100&FUV$-$K&$ 1.695\pm 0.107$&$ -0.149\pm 0.027$&$ -0.003$&$ 0.000\pm 0.179$\\ 100&FUV$-$3.6&$ 1.632\pm 0.101$&$ -0.157\pm 0.030$&$ -0.003$&$0.000\pm 0.190$\\\hline TIR&FUV$-$J&$ 1.012\pm 0.099$&$ -0.098\pm 0.024$&$ -0.002$&$ 0.000\pm 0.039$\\ TIR&FUV$-$H&$ 0.998\pm 0.097$&$ -0.092\pm 0.023$&$ -0.002$&$ 0.000\pm 0.043$\\ TIR&FUV$-$K&$ 0.961\pm 0.088$&$ -0.089\pm 0.022$&$ -0.002$&$ 0.000\pm 0.043$\\ TIR&FUV$-$3.6&$ 0.943\pm 0.083$&$ -0.099\pm 0.025$&$ -0.002$&$0.000\pm 0.041$\\\hline \end{tabular} \caption{Coefficients to estimate $k_i$ at 24~$\mu$m, 70~$\mu$m, 100~$\mu$m and for the TIR from FUV$-$NIR colours: $k_i=a+b\times\mathrm{colour}$, with the colour in AB magnitudes. The uncertainties on each coefficient are indicated along with the covariance $\sigma_{ab}$. The last column indicates the mean offset and the standard deviation between the measured $k_i$ and the estimation from the fit.
The fitting procedure is carried out using the \textsc{odr} module of the \textsc{python scipy} package.\label{tab:kIR-colours}} \end{table*} This means that we are now in a position to parametrise $k_i$ simply from the FUV$-$NIR colours. We should note that this approach is somewhat similar to that of \cite{arnouts2013a}. While they do not explore the variations of $k_i$, they have derived relations to estimate observationally the IR luminosity based solely on the 24~$\mu$m and on the NUV$-$\textit{r} and \textit{r}$-$K colours, with results that are better than 0.3~dex. In our case, the absence of $r$-band data does not allow us to adopt the same strategy for the estimation of $k_i$. For reference, we have also carried out a similar derivation for the NUV band. Results are presented in Appendix~\ref{sec:kir-NUV}. \subsection{The NIR as a parametrisation for $k_i$\label{ssec:param-NIR}} A complementary approach to parametrise $k_i$ is to estimate the stellar mass surface density of a galaxy through its NIR emission. Indeed, as old stellar populations dominate the NIR emission, the latter acts as a proxy for the total stellar mass even though in principle some colour terms can be required \citep{bell2001a}. In Fig.~\ref{fig:kIR-NIR} we show the variation of $k_i$ versus the J, H, Ks, and 3.6~$\mu$m luminosity densities per unit area. \begin{figure*}[!htbp] \includegraphics[width=\textwidth]{k_vs_NIR.pdf} \caption{$k_i$ versus the J, H, Ks, and 3.6~$\mu$m band luminosity densities per unit area in W~Hz$^{-1}$~kpc$^{-2}$ for each galaxy in the sample at 24~$\mu$m, 70~$\mu$m, 100~$\mu$m, and for the TIR. The colour of each dot indicates the galaxy following the colour bar on the right. We have selected only regions detected at least at a 5--$\sigma$ level in the relevant band. This affects mostly the 70~$\mu$m and 100~$\mu$m bands. The number of regions and the Spearman's rank correlation coefficient are shown at the top of each plot. 
The black lines represent the best linear fits using an Orthogonal Distance Regression algorithm (see the caption of Table~\ref{tab:kIR-colours} for a more detailed description) taking into account the uncertainties on both axes. The best fit parameters along with the corresponding uncertainties are provided in Table~\ref{tab:kIR-NIR}.\label{fig:kIR-NIR}} \end{figure*} As expected, this approach provides us with a stronger correlation at 70~$\mu$m even though the scatter remains large ($-0.60\le\rho\le-0.56$ versus $-0.44\le\rho\le-0.40$). At 100~$\mu$m the results are similar to those obtained with the FUV$-$NIR colour ($-0.72\le\rho\le-0.65$ versus $-0.72\le\rho\le-0.68$). Finally, for the 24~$\mu$m band and the TIR, the correlations appear much weaker ($-0.61\le\rho\le-0.58$ versus $-0.83\le\rho\le-0.82$ at 24~$\mu$m and $-0.52\le\rho\le-0.48$ versus $-0.97\le\rho\le-0.96$ for the TIR). This shows that taking into account the NIR luminosity density per unit area is especially important for bands sensitive to the blackbody emission of the dust. To provide an easy--to--use recipe to estimate $k_i$ from the NIR luminosity densities per unit area, we compute the best linear fit of the form: \begin{equation} k_i=a_{NIR}+b_{NIR}\times\left[\log\Sigma_\nu(NIR)-20\right], \end{equation} taking the uncertainties on both $k_i$ and $\Sigma_\nu(NIR)$ into account. Selecting all pixels detected at least at 1--$\sigma$ in all bands and at 5--$\sigma$ in band $i$, the coefficients $a_{NIR}$ and $b_{NIR}$ along with the corresponding uncertainties are presented in Table~\ref{tab:kIR-NIR}.
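For illustration, the fitting procedure described above (a linear fit with an Orthogonal Distance Regression that accounts for uncertainties on both axes) can be sketched with the \textsc{odr} module of \textsc{scipy} on synthetic data. The data below are placeholders whose true slope and intercept mimic the FUV$-$J relation at 24~$\mu$m; this is not the paper's actual pipeline.

```python
import numpy as np
from scipy import odr

# Synthetic data standing in for the (colour, k_i) pixel measurements;
# the true relation mimics the FUV-J fit at 24 micron (a ~ 16.4, b ~ -2.1).
rng = np.random.default_rng(42)
x = rng.uniform(1.0, 6.0, 500)                     # FUV-NIR colour (AB mag)
y = 16.4 - 2.1 * x + rng.normal(0.0, 0.5, x.size)  # k_24 with scatter
x_err = np.full_like(x, 0.1)
y_err = np.full_like(y, 0.5)

# Orthogonal Distance Regression with uncertainties on both axes
linear = odr.Model(lambda beta, xx: beta[0] + beta[1] * xx)
data = odr.RealData(x, y, sx=x_err, sy=y_err)
out = odr.ODR(data, linear, beta0=[10.0, -1.0]).run()

a, b = out.beta              # best-fit intercept and slope
sd_a, sd_b = out.sd_beta     # their standard errors
cov_ab = out.cov_beta[0, 1]  # covariance between a and b (sigma_ab)
```

The same call, fed with the measured per-pixel $k_i$, colours, and their uncertainties, yields the coefficients and covariances reported in the tables.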
\begin{table*} \centering \begin{tabular}{cccccc} \hline\hline IR band&NIR band&$a$&$b$&$\sigma_{ab}$&$\Delta k_i$\\\hline 24&J&$ 5.103\pm 0.350$&$ -7.238\pm 0.691$&$ 0.199$&$ 0.000\pm 2.801$\\ 24&H&$ 5.629\pm 0.294$&$ -6.857\pm 0.621$&$ 0.140$&$ 0.000\pm 2.693$\\ 24&K&$ 4.987\pm 0.323$&$ -6.492\pm 0.557$&$ 0.150$&$ 0.000\pm 2.645$\\ 24&3.6&$ 2.858\pm 0.556$&$ -7.449\pm 0.732$&$ 0.378$&$ 0.000\pm 2.784$\\\hline 70&J&$ 1.475\pm 0.053$&$ -0.826\pm 0.125$&$ 0.002$&$ 0.000\pm 0.312$\\ 70&H&$ 1.525\pm 0.051$&$ -0.786\pm 0.118$&$ 0.001$&$ 0.000\pm 0.311$\\ 70&K&$ 1.448\pm 0.053$&$ -0.758\pm 0.113$&$ 0.002$&$ 0.000\pm 0.311$\\ 70&3.6&$ 1.222\pm 0.074$&$ -0.855\pm 0.132$&$ 0.007$&$ 0.000\pm 0.327$\\\hline 100&J&$ 1.005\pm 0.041$&$ -0.465\pm 0.081$&$ 0.002$&$ 0.000\pm 0.183$\\ 100&H&$ 1.034\pm 0.039$&$ -0.451\pm 0.078$&$ 0.001$&$ 0.000\pm 0.179$\\ 100&K&$ 0.990\pm 0.042$&$ -0.430\pm 0.074$&$ 0.002$&$ 0.000\pm 0.179$\\ 100&3.6&$ 0.870\pm 0.058$&$ -0.463\pm 0.083$&$ 0.004$&$0.000\pm 0.193$\\\hline TIR&J&$ 0.575\pm 0.037$&$ -0.118\pm 0.056$&$ 0.001$&$ 0.000\pm 0.103$\\ TIR&H&$ 0.580\pm 0.034$&$ -0.122\pm 0.055$&$ 0.001$&$ 0.000\pm 0.101$\\ TIR&K&$ 0.567\pm 0.038$&$ -0.119\pm 0.052$&$ 0.001$&$ 0.000\pm 0.101$\\ TIR&3.6&$ 0.537\pm 0.050$&$ -0.124\pm 0.058$&$ 0.002$&$0.000\pm 0.103$\\\hline \end{tabular} \caption{Coefficients to estimate $k_i$ at 24~$\mu$m, 70~$\mu$m, 100~$\mu$m and for TIR from luminosities in the NIR: $k_i=a+b\times\left(\log\Sigma_\nu(NIR)-20\right)$, with $\Sigma_\nu$ the luminosity density per unit area in terms of W~Hz$^{-1}$~kpc$^{-2}$. The fitting procedure and the computation of the uncertainties are done in the same way as for Table~\ref{tab:kIR-colours}.\label{tab:kIR-NIR}} \end{table*} This means that we are now in a position to parametrise $k_i$ simply from observations in the NIR. For reference, we have also carried out a similar derivation for the NUV band. Results are presented in Appendix~\ref{sec:kir-NUV}.
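As a minimal sketch (the helper names are ours, not part of the paper), the two parametrisations can be applied as follows, with coefficients copied from the FUV$-$J rows of Table~\ref{tab:kIR-colours} and the 3.6~$\mu$m rows of Table~\ref{tab:kIR-NIR}; negative values are floored at zero, as is done in the text for the most extreme regions.

```python
# Coefficients copied from the FUV-J rows of the colour table:
# k_i = a + b * (FUV - J), colour in AB mag.
COLOUR_FIT = {
    "24": (16.429, -2.122), "70": (2.272, -0.177),
    "100": (1.770, -0.163), "TIR": (1.012, -0.098),
}
# Coefficients copied from the 3.6 micron rows of the NIR table:
# k_i = a + b * (log10 Sigma_nu(3.6) - 20), Sigma_nu in W Hz^-1 kpc^-2.
NIR_FIT = {
    "24": (2.858, -7.449), "70": (1.222, -0.855),
    "100": (0.870, -0.463), "TIR": (0.537, -0.124),
}

def k_from_colour(band, fuv_minus_j):
    """k_i from the FUV-J colour; negative fit values floored at zero."""
    a, b = COLOUR_FIT[band]
    return max(0.0, a + b * fuv_minus_j)

def k_from_sigma(band, log_sigma_36):
    """k_i from the 3.6 micron surface brightness; floored at zero."""
    a, b = NIR_FIT[band]
    return max(0.0, a + b * (log_sigma_36 - 20.0))

print(k_from_colour("24", 3.0))   # 16.429 - 2.122*3 = 10.063
print(k_from_sigma("TIR", 19.0))  # 0.537 + 0.124 = 0.661
```

Other colour and band combinations follow by swapping in the corresponding rows of the two tables.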
\subsection{Comparison of attenuation--corrected FUV with different methods\label{ssec:comp-methods}} To examine the impact of a variable $k_i$, we compare in Fig.~\ref{fig:comp-sigma-FUV} the estimated attenuation--corrected FUV luminosities per unit area using 1. a constant $k_i$ from \cite{liu2011a} and \cite{hao2011a} ($y$--axis, first two rows), 2. a variable $k_i$ estimated from one of the linear relations with the FUV$-$NIR colours given in Table~\ref{tab:kIR-colours} and with the NIR luminosity density per unit area given in Table~\ref{tab:kIR-NIR} ($y$--axis, other rows), and 3. the estimated FUV attenuation directly obtained from the CIGALE SED modelling ($x$--axis). We carry out this comparison at 24~$\mu$m (left column) and for the TIR (right column). \begin{figure*}[!htbp] \includegraphics[width=\columnwidth]{comp_SigmaFUV_colour.pdf} \includegraphics[width=\columnwidth]{comp_SigmaFUV_NIR.pdf} \caption{Left: Difference between the CIGALE attenuation--corrected FUV luminosities per unit area and the ones estimated using a constant $k_i$ from \cite{liu2011a} and \cite{hao2011a} ($y$--axis, first two rows), a variable $k_i$ estimated from one of the linear relations with the FUV$-$NIR colours given in Table \ref{tab:kIR-colours} ($y$--axis, other rows), and the estimated FUV attenuation directly obtained from the CIGALE SED modelling ($x$--axis). The first column is based on the 24~$\mu$m band and the second column on the TIR. Right: same but estimating $k_i$ from one of the linear relations with the NIR luminosity density per unit area given in Table~\ref{tab:kIR-NIR}.\label{fig:comp-sigma-FUV}} \end{figure*} The relation of \cite{liu2011a} leads to values of the attenuation--corrected FUV that are on average compatible with those from our modelling, with an offset of $-0.064\pm0.077$~dex at 24~$\mu$m, whereas \cite{hao2011a} leads to values that are lower by $-0.177\pm0.062$~dex at 24~$\mu$m and $-0.077\pm0.053$~dex for the TIR.
Conversely, our relations give a mean offset smaller than 0.032~dex at 24~$\mu$m and smaller than 0.009~dex for the TIR. While estimates of the attenuation--corrected FUV luminosity are very good on average, some regions are clearly discrepant, especially when estimating $k_i$ from the NIR luminosity density per unit area. At 24~$\mu$m this is particularly the case of regions with $\log\left[\Sigma(FUV)\times10^{A(FUV)/2.5}\right]>34$~W~kpc$^{-2}$, with a strong underestimate of the attenuation--corrected FUV luminosity: $\Delta=-0.510\pm0.405$~dex, using the estimator based on the 3.6~$\mu$m band. This offset is due to the strong deviation from the $k_i$--NIR best fit shown in Fig.~\ref{fig:kIR-NIR}. Where the fit led to $k_i<0$ for the most extreme regions, we assumed $k_i=0$. This illustrates one of the main limits of relying on the NIR: it does not locally trace the presence of intense star--forming episodes. Regions with a particularly high sSFR will then see their attenuation--corrected FUV emission underestimated. This effect is stronger at 24~$\mu$m than for the TIR. The clear luminosity--dependent trends with the attenuation--corrected FUV that we see when estimating $k_i$ from the NIR luminosity density per unit area are nearly non--existent when estimating $k_i$ from the FUV$-$NIR colours. In the latter case, there is a slightly higher but still very small systematic offset and a smaller scatter. This means that the FUV$-$NIR colour--based estimators will provide us with results that are much less affected by possible FUV luminosity--dependent biases. Another interesting result is that the scatter is systematically lower for the TIR than at 24~$\mu$m. We also recover this effect when considering the relations of \cite{hao2011a}. This likely comes from the much smaller dynamic range of estimated $k_i$ for the TIR compared to 24~$\mu$m: from 0.35 to 0.90 for the former and from 2.07 to 14.09 for the latter, when considering the 3.6~$\mu$m band.
In other words, the absorbed FUV emission from young stars contributes a much more variable fraction of the 24~$\mu$m emission than of the TIR. This is unsurprising: for a fixed absorbed FUV luminosity, the dust SED is very dependent on the local physical conditions, as we have seen earlier in Sect.~\ref{ssec:variations-kIR}. This is not the case for the TIR, however. Considering energy conservation, at equilibrium the total luminosity emitted by the dust in the IR is exactly the luminosity it absorbs from the UV to the near--IR. This means that the TIR depends only on the luminosity absorbed by dust, and not on its temperature. This emphasises the importance of the TIR as an SFR estimator even if it can also be contaminated by dust emission unrelated to recent star formation. The induced variations appear small enough that the uncertainties on the FUV dust correction remain small, or at least smaller than for individual IR passbands. Finally, we need to keep in mind a caveat of this comparison. We are limited here to the case of the 24~$\mu$m band and the TIR, which are the most favourable cases for the FUV$-$NIR estimators. Given the visibly poorer results at 70~$\mu$m and 100~$\mu$m, which are particularly clear when comparing Fig.~\ref{fig:kIR-colours} with Fig.~\ref{fig:kIR-NIR}, the NIR luminosity density per unit area would yield better (albeit not perfect) estimates of $k_i$ in such cases. \section{Estimators for unresolved galaxies\label{sec:kir-unresolved}} We have derived new hybrid estimators at local scales by parametrising $k_i$ on FUV$-$NIR colours and on NIR luminosity densities per unit area. These estimators are directly applicable when resolved observations are available. However, this is generally not the case beyond the local universe and the application to unresolved observations is non--trivial.
This is because in that case the effective $\left<k_i\right>$ for an entire galaxy does not necessarily correspond to the one derived for a given FUV$-$NIR colour or NIR luminosity density per unit area. The reason is that at the scale of a galaxy, the effective $\left<k_i\right>$ is weighted towards the more luminous regions\footnote{In our sample, typically the 20\% to 30\% most luminous pixels dominate the integrated luminosity. The global $\left<k_i\right>$ appears to be driven by those pixels.}, which in star--forming spiral galaxies are often located in the inner regions, which in turn have a redder FUV$-$NIR colour and higher NIR luminosity density per unit area. If we simply considered the average FUV$-$NIR colour or the average NIR luminosity density per unit area of the galaxy, this would lead to an overestimate of $\left<k_i\right>$. For example, when considering the 3.6~$\mu$m luminosity density, the estimated total attenuated FUV luminosity from the 24~$\mu$m band (in effect $k_{24}\times L(24)$) is up to 70\% higher when applying the relations in Table~\ref{tab:kIR-NIR} on integrated galaxies rather than when applying these relations at local scales. This is especially the case for galaxies where star formation is very concentrated, such as NGC~1097 for instance. For galaxies such as NGC~628 and NGC~925 for which star formation is more spread out, the difference is within 10\%. Interestingly, the relations based on the FUV$-$NIR colours provide very similar results when applied to local or global fluxes. We therefore concentrate here on extending to unresolved galaxies the relations based on the NIR luminosity densities per unit area. By making some assumptions on the distributions of the old and young stellar populations across the disk, it is possible to derive a relation to extend our local $k_i$ estimates to global ones.
To explore this possibility, we have assumed that the SFR surface density and the NIR luminosity density per unit area both follow decaying exponential profiles. The derivation, which is detailed in Appendix~\ref{sec:derivation-entire-gal}, leads to the following effective $\left<k_i\right>$ when using the NIR luminosity density per unit area: \begin{equation}\label{eqn:effective-kir} \left<k_i\right>=a_{NIR}+b_{NIR}\times\left[\log\Sigma_{\nu\mathrm{,~NIR}}\left(0\right)-\frac{2}{\ln 10}\times\frac{r_{SFR}}{r_{NIR}}-20\right], \end{equation} with $\Sigma_{\nu\mathrm{,~NIR}}\left(0\right)$ the luminosity density per unit area in the NIR at the centre of the galaxy in W~Hz$^{-1}$~kpc$^{-2}$, $r_{SFR}$ the scale length of the SFR exponential profile, and $r_{NIR}$ the scale length of the NIR exponential profile. Such relations therefore require a priori knowledge of the young and old stellar populations scale lengths. They are not always available or easy to estimate. In such cases, it may be valuable to use average values. Using the GALEX Nearby Galaxies Survey sample \citep{gildepaz2007a}, \cite{munoz2007a} measured the SFR and the stellar mass surface density scale lengths of a large number of galaxies. Using a sub--sample of 131 galaxies for which both quantities have been computed reliably and assuming that the NIR luminosity density per unit area and stellar mass surface density scale lengths are similar, we find that $\left<r_{SFR}/r_{NIR}\right>\simeq1.18\pm0.46$. Injecting this into Eq.~\ref{eqn:effective-kir}, we find: \begin{equation} \left<k_i\right>=a_{NIR}+b_{NIR}\times\left[\log\Sigma_{\nu\mathrm{,~NIR}}\left(0\right)-21.03\right],\label{eqn:kIR-resolved} \end{equation} or equivalently: \begin{equation} \left<k_i\right>=a_{NIR}+b_{NIR}\times\left[\log \frac{L_{\nu\mathrm{,~NIR}}}{2 \times \pi \times r_{NIR}^2}-21.03\right]. 
\end{equation} The dispersion found from the \cite{munoz2007a} sample induces a typical uncertainty on $\left<k_i\right>$ of $\pm1.7$ at 24~$\mu$m, $\pm0.3$ at 70~$\mu$m, $\pm0.2$ at 100~$\mu$m, and $\pm0.05$ for the TIR. This can be considered as a lower limit as deviations from exponential profiles for either the SFR or the NIR will also increase the uncertainty on $\left<k_i\right>$. To test whether the previous relations provide us with reasonable estimates of $\left<k_i\right>$, we have applied Eq.~\ref{eqn:kIR-resolved} to all the galaxies in our sample from 24~$\mu$m to the TIR, and for each of the four NIR bands considered. In Table~\ref{tab:comp-kIR} we compare these estimates with an effective $\left<k_i\right>$ derived from SED modelling: we compute the global attenuation--corrected FUV luminosity by correcting each individual pixel for the attenuation and then we apply Eq.~\ref{eqn:k-IR-v1} to compute $\left<k_i\right>$ from the integrated luminosities. \begin{table*}[!htbp] \centering \begin{tabular}{ccccccc}\hline\hline Galaxy&Band&$\left<k_i(\textrm{CIGALE})\right>$&$\left<k_i(J)\right>$&$\left<k_i(H)\right>$&$\left<k_i(Ks)\right>$&$\left<k_i(3.6)\right>$\\\hline NGC 628&24&$7.76$&$7.17$&$7.18$&$7.06$&$7.57$\\ NGC 925&24&$9.67$&$12.49$&$12.17$&$11.86$&$12.73$\\ NGC 1097&24&$4.77$&$5.36$&$5.17$&$4.97$&$4.66$\\ NGC 3351&24&$4.07$&$6.83$&$6.66$&$6.49$&$6.83$\\ NGC 4321&24&$7.02$&$7.57$&$7.40$&$7.11$&$7.31$\\ NGC 4736&24&$4.71$&$0.84$&$1.04$&$1.15$&$0.86$\\ NGC 5055&24&$6.82$&$5.65$&$5.43$&$5.31$&$5.42$\\ NGC 7793&24&$9.80$&$9.57$&$9.56$&$9.44$&$10.10$\\\hline NGC 628&70&$1.72$&$1.71$&$1.70$&$1.69$&$1.76$\\ NGC 925&70&$1.50$&$2.32$&$2.27$&$2.25$&$2.36$\\ NGC 1097&70&$1.12$&$1.50$&$1.47$&$1.45$&$1.43$\\ NGC 3351&70&$1.08$&$1.67$&$1.64$&$1.62$&$1.68$\\ NGC 4321&70&$1.50$&$1.76$&$1.73$&$1.70$&$1.73$\\ NGC 4736&70&$0.66$&$0.99$&$1.00$&$1.00$&$0.99$\\ NGC 5055&70&$1.38$&$1.54$&$1.50$&$1.48$&$1.52$\\ NGC 7793&70&$1.66$&$1.98$&$1.98$&$1.97$&$2.05$\\\hline NGC 
628&100&$1.25$&$1.14$&$1.14$&$1.13$&$1.16$\\ NGC 925&100&$1.19$&$1.48$&$1.46$&$1.45$&$1.48$\\ NGC 1097&100&$1.00$&$1.02$&$1.00$&$0.99$&$0.98$\\ NGC 3351&100&$0.88$&$1.12$&$1.10$&$1.09$&$1.12$\\ NGC 4321&100&$1.08$&$1.16$&$1.15$&$1.13$&$1.15$\\ NGC 4736&100&$0.64$&$0.73$&$0.73$&$0.74$&$0.75$\\ NGC 5055&100&$0.87$&$1.04$&$1.02$&$1.01$&$1.03$\\ NGC 7793&100&$1.21$&$1.29$&$1.29$&$1.29$&$1.32$\\\hline NGC 628&TIR&$0.68$&$0.61$&$0.61$&$0.61$&$0.62$\\ NGC 925&TIR&$0.73$&$0.70$&$0.70$&$0.69$&$0.70$\\ NGC 1097&TIR&$0.58$&$0.58$&$0.57$&$0.57$&$0.57$\\ NGC 3351&TIR&$0.50$&$0.60$&$0.60$&$0.59$&$0.60$\\ NGC 4321&TIR&$0.61$&$0.62$&$0.61$&$0.61$&$0.61$\\ NGC 4736&TIR&$0.48$&$0.51$&$0.50$&$0.50$&$0.50$\\ NGC 5055&TIR&$0.53$&$0.58$&$0.58$&$0.57$&$0.58$\\ NGC 7793&TIR&$0.73$&$0.65$&$0.65$&$0.65$&$0.66$\\\hline \end{tabular} \caption{Comparison of the effective $\left<k_i\right>$ coefficient from 24~$\mu$m to the TIR for each galaxy derived either from the CIGALE model or estimated from Eq.~\ref{eqn:kIR-resolved} using the central NIR luminosity density per unit area in the J, H, Ks, or 3.6~$\mu$m band.\label{tab:comp-kIR}} \end{table*} Taking the estimates based on the CIGALE modelling as a reference, we find that Eq.~\ref{eqn:kIR-resolved} provides us with excellent estimates. In the vast majority of cases Eq.~\ref{eqn:kIR-resolved} gives closer estimates of $\left<k_i\right>$ than adopting the fixed values of \cite{hao2011a}. There are some cases where the estimates are particularly discrepant though. This concerns the three earlier--type galaxies in our sample: NGC~1097, NGC~3351, and NGC~4736. This is especially clear at 24~$\mu$m. If we inspect the parameter maps of NGC~1097 and NGC~3351 shown in Fig.~\ref{fig:maps-parameters} we see that star formation in these galaxies is strongly dominated by the nuclear starburst.
As these two galaxies clearly deviate from the assumption of a decaying exponential profile, it is expected that $\left<k_i\right>$ estimated from CIGALE will be heavily weighted towards the local $k_i$ in the most luminous regions. For NGC~4736, the issue may be different. The four central pixels, which all overlap with the nucleus, present a suspiciously large estimate of the stellar mass. After close inspection, it turns out that the fit of these pixels is particularly poor. Depending on the actual AGN type, it is possible that a very significant part of the flux from these regions actually originates from the active nucleus, with hot dust strongly increasing the IR emission. This leads to an overestimate of the stellar mass and of the SFR, an uncertain impact on the sSFR, and an overall misestimate of the physical properties we are interested in. In that case, Eq.~\ref{eqn:kIR-resolved} does not provide reliable results for these pixels. More generally, this also means that the estimators we provide in this article should not be used in regions where physical processes other than star formation contribute in a non--negligible way to the emission. In conclusion, the new relations we provide can be reasonably extended to entire galaxies as long as there is no strong deviation from the assumption of exponential profiles. For other profiles, similar relations can also be derived following the method outlined in Appendix~\ref{sec:derivation-entire-gal}. \section{Limits and recommendations\label{sec:limits}} \subsection{Validity range of the new estimators} One of the main dangers in using the new parametrisations we have derived is extrapolating the relations provided in Tables \ref{tab:kIR-colours} and \ref{tab:kIR-NIR} beyond the domain over which they have been derived. In that case $k_i$ could be assigned unrealistically low or high values.
This is in part because we assume a simple linear relation between $k_i$ and the FUV$-$NIR colours or NIR luminosity densities per unit area. In reality $k_i$ is physically bounded. For instance $k_{TIR}\simeq0$ if the TIR is exclusively due to old stellar populations and $k_{TIR}\lesssim1$ if the TIR is exclusively due to the youngest stellar populations, with the actual maximum value depending on the fraction of the dust luminosity coming from absorbed FUV photons. The range of $k_i$ values probed in this study is probably representative of the typical boundaries one may encounter, as we cover a wide dynamic range in terms of physical conditions. Nevertheless, it is important to apply the new estimators exclusively in the range in which they have been derived. In terms of dust luminosity per unit area, the estimators we have defined are valid over: \begin{itemize} \item $4.93\le\log\Sigma(24)\le8.36$~L$_\odot$~kpc$^{-2}$, \item $5.83\le\log\Sigma(70)\le8.99$~L$_\odot$~kpc$^{-2}$, \item $5.84\le\log\Sigma(100)\le8.97$~L$_\odot$~kpc$^{-2}$, \item $6.12\le\log\Sigma(TIR)\le9.19$~L$_\odot$~kpc$^{-2}$. \end{itemize} In terms of NIR luminosity densities per unit area, they have been defined over: \begin{itemize} \item $18.31\le\log\Sigma_\nu(J)\le21.62$~W~Hz$^{-1}$~kpc$^{-2}$, \item $18.30\le\log\Sigma_\nu(H)\le21.70$~W~Hz$^{-1}$~kpc$^{-2}$, \item $18.31\le\log\Sigma_\nu(Ks)\le21.62$~W~Hz$^{-1}$~kpc$^{-2}$, \item $18.12\le\log\Sigma_\nu(3.6)\le21.30$~W~Hz$^{-1}$~kpc$^{-2}$. \end{itemize} And finally in terms of FUV$-$NIR colours (AB magnitudes): \begin{itemize} \item $\mathrm{0.97<FUV-J<6.66}$~mag, \item $\mathrm{0.49<FUV-H<6.92}$~mag, \item $\mathrm{0.41<FUV-Ks<6.72}$~mag, \item $\mathrm{0.44<FUV-3.6<5.98}$~mag. \end{itemize} Another possible limit is that the new estimators have been derived on a sample of eight star--forming galaxies. While we have made sure to cover both early-- and late--type spiral galaxies, such derivations are by nature dependent on the sample used.
As such they should not be used on galaxies earlier than Sa. Their use on irregular galaxies is equally uncertain. We also caution against using them when there is any indication of a strong active nucleus. A type 1 nucleus can strongly contaminate the FUV and the NIR bands \citep{ciesla2015a}. More generally, active nuclei provide an additional heating mechanism independent from stellar populations. \subsection{Choice of the estimators} The new hybrid estimators that we have derived in this work represent a generalisation and an extension to new wavelengths of hybrid estimators previously published in the literature. We have shown that, while these estimators physically depend on the sSFR and on the stellar mass surface density, they can be efficiently parametrised based on FUV$-$NIR colours or on the NIR luminosity densities per unit area. We have seen in Sect.~\ref{ssec:comp-methods} that they can bring important improvements in FUV attenuation estimates, and therefore in the SFR. This however comes at the cost of a slightly increased complexity with the additional requirement of NIR observations. If we take the estimators of \cite{hao2011a} as our baseline comparisons, there are several main scenarios. We suggest here a few possibilities to guide the choice of the estimator. \subsubsection{Dependence on wavelength} We have shown that each of the two methods we have presented has complementary strengths and weaknesses. For the 24~$\mu$m band and for the TIR, we suggest estimating $k_i$ from the FUV$-$NIR colour. For the 70~$\mu$m and 100~$\mu$m bands, we suggest estimating $k_i$ from the NIR luminosity density per unit area. \subsubsection{Resolved observations of low redshift galaxies} When a study benefits from resolved observations of nearby galaxies, given the all--sky coverage in the NIR brought by 2MASS and WISE, we recommend applying the estimators provided in this paper.
This is especially important at 24~$\mu$m as this band appears to exhibit large variations of $k_i$. Even though we cannot compare our results with the \cite{hao2011a} estimators at 70~$\mu$m and 100~$\mu$m, this is also likely to be true for these bands. For the TIR, given the small dynamic range of $k_i$, the difference between our estimators and that of \cite{hao2011a} will be much smaller on average. If data are available, given the excellent relation with the FUV$-$NIR colours, we recommend applying the $k_i$ derived in this paper. Finally, the new estimators have been computed on spatial scales ranging from 0.5~kpc (NGC~7793) to 1.7~kpc (NGC~1097). Their application on much smaller scales is not advised, if only because a number of assumptions for the modelling (fully sampled initial mass function, continuous SFH, etc.) will necessarily break down at some point at small spatial scales \citep{boquien2015a}. \subsubsection{Unresolved observations of low redshift galaxies} To deal with the case of unresolved galaxies, we have extended our new locally derived estimators to entire galaxies in Sect.~\ref{sec:kir-unresolved}. We showed in Table~\ref{tab:comp-kIR} that they appear to provide closer estimates of the effective $k_i$ than when adopting a constant value from \cite{hao2011a}. The main difficulty is that it requires at least the central NIR luminosity density per unit area of the galaxy, if we take the SFR and NIR scale lengths provided by \cite{munoz2007a}. If the galaxies are nevertheless resolved in the NIR only, Eq.~\ref{eqn:kIR-resolved} can be directly applied. This still requires the assumptions behind it to be valid, namely that the young and old stellar populations follow exponentially decaying profiles. This particular point is even more important when there is no resolved observation available in the NIR. In this case it may be safer to apply the average relations of \cite{hao2011a}, which have been derived on a larger sample of unresolved galaxies.
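The effective $\left<k_i\right>$ of Eq.~\ref{eqn:effective-kir} and the $21.03$ zero point entering Eq.~\ref{eqn:kIR-resolved} can be checked numerically. The sketch below uses a helper name of our own; $a_{NIR}$ and $b_{NIR}$ would come from Table~\ref{tab:kIR-NIR} (the TIR/3.6~$\mu$m row is used here).

```python
import math

def k_eff(a_nir, b_nir, log_sigma_center, r_sfr_over_r_nir):
    """Effective <k_i> assuming exponential SFR and NIR profiles
    (Eq. effective-kir); log_sigma_center in W Hz^-1 kpc^-2."""
    bracket = log_sigma_center - (2.0 / math.log(10.0)) * r_sfr_over_r_nir - 20.0
    return a_nir + b_nir * bracket

# With the mean ratio <r_SFR/r_NIR> = 1.18, the 20 zero point plus the
# scale-length correction reduces to the 21.03 offset quoted in the text:
offset = 20.0 + (2.0 / math.log(10.0)) * 1.18
print(round(offset, 2))

# TIR estimate (a_NIR = 0.537, b_NIR = -0.124 from the 3.6 micron row)
# for a galaxy whose central surface brightness equals that zero point;
# by construction this returns a_NIR:
print(k_eff(0.537, -0.124, offset, 1.18))
```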
\subsubsection{Note on the conversion from FUV luminosity to SFR} To compute the SFR from the relations given in this paper one needs to adopt a conversion factor, for instance one of those given in \cite{boquien2014a} for a \cite{chabrier2003a} IMF. However in this study, we have shown that the sSFR is an important parameter to understand how to combine the FUV and the IR to correct the former for the presence of dust. In other words, it is important to take the SFH into account to correct for the attenuation. But at the same time, the SFH also has an impact on the conversion factor from the FUV luminosity to the SFR, due to the contamination of the FUV emission by long--lived stars \citep{johnson2013a,boquien2014a}, whereas a constant SFR over 100~Myr is generally assumed \citep[e.g.,][]{kennicutt1998a,kennicutt2011a}. Thus, as $k_i$ depends on the exact SFH of the galaxy, the FUV--to--SFR conversion factor also depends on the SFH, albeit in a different way. This emphasises the necessity to take the impact of the SFH into account to reliably measure the SFR of galaxies from hybrid estimators. \section{Summary\label{sec:conclusion}} In this article, we have investigated why and how hybrid SFR estimators based on the combination of UV and IR emission ($L(UV)_{int}=L(UV)_{obs}+k_i\times L(IR)_i$) depend on the local physical conditions. To do so, using CIGALE (Boquien et al., Burgarella et al. in prep.), we have modelled the FUV--to--FIR SED of eight nearby spatially--resolved star--forming spiral galaxies drawn from the KINGFISH sample. This has allowed us to characterise their local physical properties such as the stellar mass, the SFR, the sSFR, the TIR luminosity, and the UV attenuation at a typical scale of 1~kpc.
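The hybrid estimator itself, $L(UV)_{int}=L(UV)_{obs}+k_i\times L(IR)_i$, is a one-line operation. For illustration only (the luminosities below are arbitrary), the sketch contrasts the constant $k_{24}=3.89$ of \cite{hao2011a} with a larger, variable value such as those found here for high-sSFR regions:

```python
def fuv_corrected(l_fuv_obs, l_ir, k_i):
    """Attenuation-corrected FUV: L(FUV)_int = L(FUV)_obs + k_i * L(IR)_i."""
    return l_fuv_obs + k_i * l_ir

l_fuv_obs, l_24 = 1.0e35, 5.0e34   # arbitrary but consistent units

low = fuv_corrected(l_fuv_obs, l_24, 3.89)   # constant k_24 (Hao et al.)
high = fuv_corrected(l_fuv_obs, l_24, 12.0)  # illustrative variable k_24
print(high / low)   # the two corrections differ by a factor ~2.4
```

The factor-of-a-few difference illustrates why a fixed $k_i$ can bias SFR estimates for regions whose physical conditions differ from the calibration sample.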
Our main findings are the following: \begin{enumerate} \item There are important region--to--region variations within galaxies and galaxy--to--galaxy variations of $k_i$, from 1.55 to 13.45 at 24~$\mu$m for instance (for comparison, \cite{hao2011a} and \cite{liu2011a} found constant factors of $3.89\pm0.15$ and $6.0$ respectively). This shows that hybrid estimators using a fixed value for $k_i$ cannot be appropriate for the full diversity of galaxies and may provide systematically biased estimates when applied to galaxies whose physical properties differ from those of the original calibration sample. \item When considering the combination of the FUV with the IR luminosity, $k_i$ varies with the average sSFR over 100~Myr: increasing values of sSFR also yield increasing values of $k_i$. The reason is that as the sSFR increases, the IR emission is increasingly linked to recent star formation and the relative importance of dust heating by older stellar populations diminishes. This is particularly the case for $k_{24}$ and $k_{TIR}$. However, being sensitive to the blackbody emission of the dust, $k_{70}$ and $k_{100}$ show a more complex behaviour with a particular sensitivity to the stellar mass surface density. \item Exploiting the physical insights provided by these correlations, we have parametrised $k_i$ against a. the FUV$-$NIR colours, and b. the NIR luminosity densities per unit area. As a result, these new parametrisations bring strong improvements over a constant $k_i$ when correcting the FUV for the attenuation. This shows that when using individual passbands in the IR, it is crucial to take into account the variability of $k_i$. The TIR emission is not as sensitive to a variation of the diffuse emission and the difference with the $k_{TIR}$ of \cite{hao2011a} remains minor. \item Building on the success of the parametrisation of $k_i$ for the FUV, we have expanded those to the NUV (see Appendix \ref{sec:kir-NUV}).
We also found clear correlations of $k_i$ with the sSFR and the stellar mass surface density, showing that we can use such estimators efficiently to correct the NUV emission for the presence of dust and retrieve the SFR in the absence of FUV data. \item Assuming exponentially decaying radial profiles for the stellar populations, we have expanded the parametrisation on the NIR luminosity densities per unit area to the case of unresolved galaxies. We have found that it provides better estimates of the effective $\left<k_i\right>$ than adopting a constant factor as for classical hybrid estimators. This shows that these new estimators can work well both on resolved and unresolved data. \end{enumerate} These new estimators provide better estimates than classical hybrid estimators published in the literature. By statistically taking into account the impact of dust heated by old stellar populations they constitute an important step towards universal estimators. \begin{acknowledgements} We would like to thank the referee whose useful comments have helped make the paper clearer. MB would like to thank Andrew Connolly for enlightening discussions about fitting techniques and the handling of uncertainties. This publication was financed by the FIC-R Fund, allocated to the project 30321072. FST acknowledges financial support from the Spanish Ministry of Economy and Competitiveness (MINECO) under grant number AYA2013-41243-P. This research made use of the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration; APLpy, an open-source plotting package for Python hosted at \url{https://aplpy.github.com}; Astropy, a community-developed core Python package for Astronomy \citep{astropy2013a}; the IPython package \citep{ipython2007a}; matplotlib, a Python library for publication quality graphics \citep{matplotlib}; SciPy \citep{scipy}.
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. Based on observations made with the NASA Galaxy Evolution Explorer. GALEX is operated for NASA by the California Institute of Technology under NASA contract NAS5-98034. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. Support for this work was provided by NASA through an award issued by JPL/Caltech. {\it Herschel} is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \label{sec:intro} The reionization of intergalactic hydrogen in the universe's first billion years is likely linked to the formation of the first stars and galaxies, which are considered to be the primary producers of hydrogen-ionizing photons \citep[e.g.,][]{Lehnert2003,Bouwens2003,Yan2004,Bunker2004,Shull2012,Bouwens2015a}. Accurately measuring the timeline of reionization enables us to constrain properties of these first sources \citep[e.g.,][]{Robertson2013,Robertson2015,Greig2017}. Measurements of the reionization timeline are challenging, however, due to the rarity of bright quasars at $z>6$ \citep{Fan2001,Manti2016,Parsa2018}, which have historically provided strong constraints on the end stages of reionization \citep[e.g.,][]{Fan2006,McGreer2014,Greig2016,Banados2017}. In the coming decade, 21\,cm observations are expected to provide information about the $z>6$ IGM and the nature of the first galaxies \citep[e.g.,][]{Liu2016,Mirocha2016}, but current progress has been driven by observations of Ly$\alpha$\ (rest-frame 1216\,\AA) emission in galaxies, using near infra-red (NIR) spectroscopy. Ly$\alpha$\ is a highly resonant line, and strongly scattered by intervening neutral hydrogen as it travels to our telescopes. Whilst young star-forming galaxies, selected with a Lyman break (Lyman Break Galaxies -- LBGs), show Ly$\alpha$\ emission in abundance up to $z\sim6$ \citep[e.g.,][]{Stark2011,Hayes2011,Curtis-Lake2012,Cassata2015,DeBarros2017}, at higher redshifts the fraction of galaxies detected with Ly$\alpha$\ emission, and the scale length of the Ly$\alpha$\ rest-frame equivalent width (EW) distribution, decrease rapidly \citep[e.g.,][]{Fontana2010,Pentericci2011,Caruana2012,Treu2012,Treu2013,Ono2012,Caruana2014,Pentericci2014,Schenker2014,Tilvi2014,Faisst2014,Jung2018b}. This rapid decline of detected Ly$\alpha$\ emission is most plausibly due to absorption in an increasingly neutral IGM \citep{Dijkstra2011,Dijkstra2014a,Mesinger2015}.
Large spectroscopic surveys of LBG candidates are being assembled out to $z\sim7$ \citep{Pentericci2011,Pentericci2014,Pentericci2018,Hoag2019} but exploring the earliest stages of reionization requires us to observe Ly$\alpha$\ at even higher redshifts. Only a handful of Ly$\alpha$\ emitters have been confirmed at $z\simgt7.5$ \citep{Zitrin2015a,Oesch2015,Roberts-Borsani2016,Stark2017,Hoag2017}, where the dominance of sky emission in the NIR makes observations of faint sources even more challenging. Additionally, because Ly$\alpha$\ emission can be spatially extended and/or offset from the UV continuum emission \citep{Wisotzki2016,Leclercq2017}, it is likely that slit-based spectroscopy is not capturing the full Ly$\alpha$\ flux. Hence, the observed decline in Ly$\alpha$\ emission at $z>6$ could be partially due to redshift-dependent slit-losses as well as reionization. In this paper we present a search for $z\simgt7.2$ Ly$\alpha$\ emission in NIR spectroscopy of 53 intrinsically faint LBG candidates ($M_\textsc{uv} \simgt -20$), gravitationally lensed behind 6 massive galaxy clusters, including 4 of the Frontier Fields \citep{Lotz2017}, selected from the Grism Lens-Amplified Survey from Space \citep[hereafter GLASS,][]{Schmidt2014,Treu2015}. We also present observations of C\textsc{iv}\ emission in 3 images of a previously confirmed multiply-imaged $z=6.11$ galaxy \citep{Boone2013,Balestra2013,Monna2014}. The observations presented in this work were carried out with the ESO Very Large Telescope (\textit{VLT}) K-band Multi Object Spectrometer \citep[hereafter KMOS,][]{Sharples2013}. This work presents the first results of $z>3.8$ observations with KMOS. KMOS is an integral field unit (IFU) instrument, and we demonstrate here that our observations are more complete to spatially extended and/or offset Ly$\alpha$\ emission than traditional slit spectrographs. 
We use our new deep spectroscopic observations to infer the average IGM neutral hydrogen fraction (${\overline{x}_\textrm{{\textsc{hi}}}}$) at $z\sim8$. \citet[][hereafter M18a]{Mason2018a} presented a flexible Bayesian framework to directly infer ${\overline{x}_\textrm{{\textsc{hi}}}}$ from detections and non-detections of Ly$\alpha$\ emission from LBGs. The framework combines realistic inhomogeneous reionization simulations and models of galaxy properties. That work measured ${\overline{x}_\textrm{{\textsc{hi}}}} = 0.59_{-0.15}^{+0.11}$ ($16-84\%$ confidence intervals) at $z\sim7$. Building on \citet{Treu2012} and \citetalias{Mason2018a} we extend this framework to use the full spectra obtained in our observations for the Bayesian inference, accounting for the incomplete wavelength coverage and spectral variation of the noise, and marginalising over emission linewidth. Our framework uses each object's photometric redshift probability distribution, obtained from deep photometry including new Spitzer/IRAC data, to robustly account for uncertainties in redshift determination. The paper is structured as follows: Section~\ref{sec:obs} describes our KMOS observations and the target selection from the GLASS parent sample; Section~\ref{sec:results} describes the search for Ly$\alpha$\ emission in our KMOS data cubes, and the purity and completeness of our survey; and Section~\ref{sec:reionization} describes the Bayesian inference of the neutral fraction and presents our limit on ${\overline{x}_\textrm{{\textsc{hi}}}}$ at $z\sim8$. We discuss our findings in Section~\ref{sec:dis}, including an assessment of the performance of KMOS for background-limited observations using our deep observations, and summarise in Section~\ref{sec:conc}. We use the \citet{PlanckCollaboration2015} cosmology where $(\Omega_{\Lambda}, \Omega_{\textrm m}, \Omega_{\textrm b}, n, \sigma_8, H_0) =$ (0.69, 0.31, 0.048, 0.97, 0.81, 68 km s$^{-1}$\ Mpc$^{-1}$). 
All magnitudes are given in the AB system. \section{Observations} \label{sec:obs} \subsection{The KMOS Lens-Amplified Spectroscopic Survey} \label{sec:obs_KLASS} KLASS is an ESO VLT KMOS Large Program (196.A-0778, PI: A. Fontana) which targeted the fields of six massive galaxy clusters: Abell 2744 (hereafter A2744); MACS J0416.1-2403 (M0416); MACS J1149.6+2223 (M1149); MACS J2129.4-0741 (M2129); RXC J1347.5-1145 (RXJ1347); and RXC J2248.7-4431 (RXJ2248, aka Abell S1063). A2744, M0416, M1149 and RXJ2248 are all Frontier Fields \citep[hereafter HFF,][]{Lotz2017}. Observations were carried out in Service Mode during Periods $96-99$ (October 2015 -- October 2017). KMOS is a multi-object IFU spectrograph, with 24 movable IFUs, split between 3 different spectrographs \citep{Sharples2013}. Each IFU has a $2\farcs8 \times 2\farcs8$ field of view, with pixel size $0\farcs2 \times 0\farcs2$, and 2048 pixels along the wavelength axis\footnote{We use the following definitions for describing 3D spectra in this paper. Pixel: 2D spatial pixel (size $0\farcs2 \times 0\farcs2$). Spaxel: the 1D spectrum in a single spatial pixel (spanning the spectral range $\sim1-1.35\,\mu$m, in 2048 spectral pixels). Voxel: 3D pixel in the data cube with both spatial and spectral indices.}. The key science drivers of KLASS are: \begin{enumerate} \item To probe the internal kinematics of galaxies at $z\sim1-3$, with superior spatial resolution compared to surveys in blank fields \citep{Mason2017}. \item To investigate $z\simgt7$ Ly$\alpha$\ emission from the GLASS sample, independently of the HST spectroscopic observations, providing validation and cross-calibration of the results and enabling us to constrain the timeline and topology of reionization \citep{Treu2012,Treu2013,Schmidt2016,Mason2018a}. 
\end{enumerate} \citet{Mason2017} addressed the first science driver by presenting spatially resolved kinematics in 4 of the 6 KLASS clusters from our early data, including five of the lowest mass galaxies with IFU kinematics at $z>1$, and provided evidence of mass-dependent disk settling at high redshift \citep{Simons2017}. The KLASS kinematic data were combined with metallicity gradients from the \textit{HST}\ GLASS data to enable the study of metallicity gradients as a diagnostic of gas inflows and outflows \citep{Wang2016}. This paper addresses the second science driver by presenting our $z>7$ candidate targets with complete exposures. We use the YJ observing band, giving us access to Ly$\alpha$\ emission at $z\sim7.2-10.1$. The choice of an IFU instrument for high-redshift Ly$\alpha$\ observations was motivated by indications that ground-based slit-spectroscopy measures lower Ly$\alpha$\ flux than \textit{HST}\ slit-less grism spectroscopy \citep{Tilvi2016,Huang2016a,Hoag2017}, which, as well as reionization, could contribute to the observed decline in Ly$\alpha$\ emission at $z>6$. Ly$\alpha$\ emission can be spatially extended and/or offset from the UV continuum emission \citep{Feldmeier2013,Momose2014,Wisotzki2016,Leclercq2017}, so it is likely that slit-based spectrographs do not capture the full Ly$\alpha$\ flux. By using IFUs our observations should be more complete to spatially extended and/or offset Ly$\alpha$\ than traditional slit spectrographs. \citet{Mason2017} showed that only $\sim60\%$ of emission line flux was contained in $\sim0\farcs7$ simulated slits \citep[a typical slit-width used for Ly$\alpha$\ observations, e.g.,][]{Hoag2017} on KMOS spectra, whereas the full flux is captured within the $2\farcs8 \times 2\farcs8$ KMOS field of view. Thus we expect most Ly$\alpha$\ flux to be captured within the KMOS IFUs. 
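The angular scales involved can be checked directly. The sketch below, a minimal pure-Python calculation assuming the paper's adopted flat $\Lambda$CDM parameters ($H_0 = 68$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm m}=0.31$) and a simple trapezoidal integration for the comoving distance, converts the IFU width to a proper physical size at $z\sim8$:

```python
import math

# Flat LambdaCDM parameters adopted in this paper (Planck 2015)
H0 = 68.0            # km/s/Mpc
OM, OL = 0.31, 0.69
C_KMS = 299792.458   # speed of light, km/s

def comoving_distance(z, n=10000):
    """Comoving distance in Mpc: (c/H0) * integral of dz'/E(z'), trapezoidal rule."""
    dz = z / n
    vals = [1.0 / math.sqrt(OM * (1 + i * dz) ** 3 + OL) for i in range(n + 1)]
    integral = dz * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return (C_KMS / H0) * integral

z = 8.0
d_a_mpc = comoving_distance(z) / (1 + z)       # angular diameter distance (flat universe)
kpc_per_arcsec = d_a_mpc * 1e3 / 206265.0      # proper kpc subtended by 1 arcsec
ifu_extent_kpc = 2.8 * kpc_per_arcsec          # KMOS IFU width in proper kpc
print(f"{kpc_per_arcsec:.2f} proper kpc/arcsec at z=8 -> IFU spans ~{ifu_extent_kpc:.0f} kpc")
```

The result, roughly 5 proper kpc per arcsec, is consistent with the $\sim14$ proper kpc quoted for the $2\farcs8$ IFU width.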
The $2\farcs8$ wide IFUs cover $\sim14$ proper kpc at $z\sim8$, while the UV effective radii of galaxies at these redshifts are only $\simlt 1$ proper kpc \citep{Shibuya2015}. We demonstrate in Section~\ref{sec:results_completeness} that our KMOS observations have good completeness for spatially extended and/or offset Ly$\alpha$\ emission. \subsection{Target selection} \label{sec:obs_targets} \begin{table*} \centering \caption{KLASS cluster targets} \label{tab:targets} \begin{tabular}[c]{lcccccc} \hline \hline Cluster & Run ID & DIT [s] & NDITs$^\ast$ & Exposure [hrs] & \multicolumn{2}{c}{Number of targets}\\ & & & & & Category 1 & Category 2 \\ \hline A2744$^\dagger,^\ddagger$ & A & 900 & 25 & 6.25 & 3 & 7 \\ M0416$^\ddagger$ & B & 900 & 43 & 10.75 & 2 & 5 \\ M1149$^\ddagger$ & C & 900 & 40 & 10.00 & 2 & 6 \\ M2129 & E & 450 & 85 & 10.625 & 3 & 7 \\ RXJ1347 & D & 450 & 88 & 11.00 & 3 & 8 \\ RXJ2248$^\diamond$ & F & 300 & 93 & 7.75 & 1 & 6 \\ \hline \multicolumn{7}{p{.7\textwidth}}{\textsc{Note.} -- $^\ast$ The number of Detector Integration Times (DITs) used in this analysis: we discarded DITs if the seeing was $>0\farcs8$ as measured by stars observed in each DIT. The total exposure time = DIT $\times$ NDITs. $^\dagger$ We had to discard our initial 4 hours of observations of A2744 due to irreparable flexure issues caused by rotating the instrument between science and sky DITs. All subsequent observations were performed with no rotation of the instrument between science and sky DITs. $^\ddagger$ Target selection in these clusters was primarily done from preliminary versions of the ASTRODEEP catalogues \citep{Castellano2016b,Merlin2016,DiCriscienzo2017}, which did not include Spitzer/IRAC photometry. $^\diamond$ Due to a high proper motion reference star, some of the observations of RXJ2248 were taken at a slight offset from the required target centre, reducing the total exposure at that position. 
RXJ2248 also included 3 $z=6.11$ targets (Appendix~\ref{app:CIV}).} \end{tabular} \end{table*} KLASS targets were selected from the GLASS survey\footnote{\url{http://glass.astro.ucla.edu}} \citep{Schmidt2014,Treu2015}, a large Hubble Space Telescope (\textit{HST}) slit-less grism spectroscopy program. GLASS obtained spectroscopy of the fields of 10 massive galaxy clusters, including the HFF and 8 CLASH clusters \citep{Postman2012}. The Wide Field Camera 3 (WFC3) grisms G102 and G141 were used to cover the wavelength range $0.8 - 1.6\,\mu$m with spectral resolution $R\sim150$. We refer the reader to \citet{Schmidt2014} and \citet{Treu2015} for full details of GLASS. KLASS observations aimed to provide the high spectral resolution necessary to measure the purity and completeness of the grism spectra, to measure lines that were unresolved in \textit{HST}, and to obtain velocity information for $z\sim1$ targets which the low resolution grisms cannot provide. In combination with additional GLASS follow-up observations at Keck \citep{Huang2016a,Hoag2017,Hoag2019} we will address the purity and completeness of the HST grisms in a future work. In this work we present our high-redshift candidate targets and our inferences about reionization obtained from the KLASS data. Two categories of high-redshift candidate KLASS targets were selected from the GLASS data: \begin{enumerate} \item Category 1: 14 objects with marginal (S/N $\sim3$) candidate Ly$\alpha$\ emission in the \textit{HST}\ grisms, identified by visual inspection of the GLASS data, which fall within the KMOS YJ spectral coverage ($\sim1-1.35\,\mu$m). 4 candidates were selected from a list of candidates in a preliminary census of GLASS data by \citet{Schmidt2016}. The remaining candidates were selected following a procedure similar to that of \citet{Schmidt2016}. 
\item Category 2: 39 LBG candidates selected with $z_\textrm{phot} > 7.2$, from an ensemble of photometric catalogues described by \citet{Schmidt2016}. This includes three LBGs which were spectroscopically confirmed via sub-mm emission lines after our survey began: A2744\_YD4 (A2744\_2248 in this paper), at $z=8.38$ \citep[][discussed in Section~\ref{sec:reionization_phot} and \ref{sec:reionization_infer}]{Laporte2017}, M0416\_Y1 (M0416\_99 in this paper), at $z=8.12$ \citep[][discussed in Section~\ref{sec:reionization_phot}]{Tamura2018}, and M1149\_JD1, at $z=9.11$ \citep[][discussed in Section~\ref{sec:dis_JD1}]{Hashimoto2018}. \end{enumerate} An additional three targets were multiple images of the $z=6.11$ system in RXJ2248 \citep{Boone2013,Balestra2013,Monna2014} where we targeted C\textsc{iv}$\lambda$1548,1551 emission. This object is discussed in Appendix~\ref{app:CIV}. We ranked objects in order of the number of inspectors who reported a candidate emission line for our Category 1 targets, and then by the number of independent photometric catalogues the target appeared in (for both categories). Our observations were planned prior to the release of the full HFF datasets, so the photometric catalogues we used to select candidates did not contain the full photometry now available. In particular, deep Spitzer/IRAC data did not exist, which can be useful for distinguishing between high-redshift star forming galaxies and $z\sim1-2$ passive galaxies. Nor were sophisticated intra-cluster light (ICL) removal techniques developed at that point \citep[e.g.,][]{Merlin2016,Morishita2017a,Livermore2017}. Thus our LBG selection was heterogeneous, but in this paper we now add in the new deep and extended photometry to define a homogeneous photometric selection. We expect some faint candidates may have been spurious in the initial photometry and may not appear in the final deep catalogues. 
Additionally we expect that with the inclusion of Spitzer/IRAC photometry some of the objects originally selected to be $z>7$ may be low redshift contaminants. In our reionization analysis we use catalogues built using the final HFF datasets to define a selection function for a photometrically-selected sample for our inference (described in Section~\ref{sec:reionization_phot}). We demonstrate that this KLASS sub-sample is not a biased sample of the final parent catalogues in Appendix~\ref{app:phot}. The GLASS median $1\sigma$ flux limit is $5\times10^{-18}$\,erg s$^{-1}$ cm$^{-2}$\ \citep{Schmidt2016}, and we tried to be as inclusive as possible when assigning candidates to the KMOS IFUs from the GLASS parent catalogue. Most of the candidates were of only $3\sigma$ significance in GLASS data and we were aiming to provide confirmation of those tentative targets with our deep KMOS observations -- though for our ground-based observations, at least 50\% of the wavelength range is dominated by sky emission and the low spectral resolution ($R\sim100$) of the HST grisms means that the line position is uncertain by $\pm25$\,\AA\ so the lines could be in bad sky regions. Additionally, in planning our observations we likely overestimated the sensitivity of KMOS YJ using the online exposure time calculator, especially at the blue end of the detectors. We discuss this in more detail in Section~\ref{sec:dis_KMOS}. As we describe below in Section~\ref{sec:results_flim}, our KLASS observations are $\sim80\%$ complete for lines with flux $>5.7\times10^{-18}$\,erg s$^{-1}$ cm$^{-2}$, which suggests we should have confirmed the majority of the 14 GLASS candidate Ly$\alpha$\ emission targets we observed (Category 1, described in Section~\ref{sec:obs_targets}). However, we did not detect any emission in the cubes containing these candidates, suggesting at least some of the GLASS candidates were spurious noise peaks in the HST grisms. 
A more thorough comparison of the GLASS HST grism and ground-based follow-up observations \citep[including KLASS and Keck observations,][]{Huang2016a,Hoag2017,Hoag2019} to recover the grism purity and completeness will be left to a future work. 53 $z_\mathrm{phot} > 7.2$ candidate targets across the 6 clusters were assigned to 51 KMOS IFUs (two IFUs contained two nearby candidates). The cluster list and number of high-redshift candidate targets per cluster is shown in Table~\ref{tab:targets}. \subsection{KLASS observing strategy and reduction} \label{sec:obs_obsplan} KLASS observations were carried out with KMOS YJ ($\sim1-1.35\,\mu$m). The spectral resolution $R\sim3400$ is sufficient to distinguish Ly$\alpha$\ from potential low redshift contaminants with the [O\textsc{ii}]$\lambda3726,3729$ emission doublet at $z\sim2$. Observations were carried out in service mode and executed in one hour observing blocks with repeating ABA science-sky-science integration units (detector integration times -- DITs). Each observing block comprised 1800\,s of science integration, and 900\,s on sky. Pixel dither shifts were included between science frames. A star was observed in 1 IFU in every observing block to monitor the point spread function (PSF) and the accuracy of dither offsets. The PSF was well-described by a circular Gaussian and the median seeing of our observations was FWHM $\sim0\farcs6$. In each cluster, the 3 top priority targets were observed for $1.5\times$ the average exposure time by assigning 2 IFUs per target and nodding between them during A and B modes. \subsection{Reduction} \label{sec:obs_reduction} Data were reduced using the ESO KMOS pipeline v.1.4.3 \citep{Davies2013}. We apply a correction for known readout channel level offsets before running the pipeline. We run the pipeline including optimised sky subtraction routines \texttt{sky\_tweak} \citep{Davies2007} and \texttt{sky\_scale}. 
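The spectral-resolution argument above for separating Ly$\alpha$\ from low-redshift [O\textsc{ii}] interlopers reduces to simple arithmetic, sketched below (rest wavelengths are the standard values, assumed here; the example observed wavelength is illustrative):

```python
# Rest-frame wavelengths in Angstrom (standard values, assumed for illustration)
LYA = 1215.67
OII_BLUE, OII_RED = 3726.03, 3728.82
R_KMOS_YJ = 3400.0

lam_obs = 12000.0                      # a hypothetical line observed at 1.2 micron
z_if_lya = lam_obs / LYA - 1           # redshift if the line is Lya
z_if_oii = lam_obs / OII_BLUE - 1      # redshift if it is [OII] instead
sep_obs = (OII_RED - OII_BLUE) * (1 + z_if_oii)  # observed doublet splitting
fwhm_res = lam_obs / R_KMOS_YJ                   # KMOS YJ resolution element

print(f"Lya at z={z_if_lya:.2f} or [OII] at z={z_if_oii:.2f}")
print(f"doublet splitting {sep_obs:.1f} A vs resolution element {fwhm_res:.1f} A")
```

The observed [O\textsc{ii}] splitting ($\sim9$\,\AA\ at these wavelengths) spans several KMOS resolution elements ($\sim3.5$\,\AA), so the doublet is resolvable.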
To improve the sky subtraction in the pipeline-reduced `A-B' cubes we produced master sky residual spectra by median combining all IFUs on each spectrograph into a 1D master sky residual spectrum for each DIT, excluding cubes containing $z\simlt2$ targets with bright emission lines and/or continua. We then subtract these 1D sky residual spectra from the `A-B' cubes on the same spectrograph for each DIT, rescaling the 1D sky residual spectrum in each spaxel so that the flux in the resulting sky-subtracted spectrum of that spaxel is minimised. Similar techniques to improve sky subtraction are described by \citet{Stott2016}. This method worked best for our survey design. We note that this method performed better than in-IFU sky residual subtraction (i.e. subtracting a median sky residual spectrum produced from `empty' spaxels in each IFU) as it preserved emission line flux in the modestly sized KMOS IFUs. Cube frames from each DIT are combined via sigma clipping, using spatial shifts determined by the position of the star observed in the same IFU in each DIT, to produce flux and noise cubes. For this work we used only frames with seeing $\leq 0\farcs8$ (as measured by the star observed in our science frames). The median seeing was $\sim 0\farcs6$. DIT length, observing pattern and total integration times used for this paper are listed in Table~\ref{tab:targets}. We note that due to the failure of one of the KMOS arms, no star was observed in the A2744 observations. We used a bright $z\sim1$ target to estimate the dither offsets for this cluster. For pure Gaussian noise, the pixel distribution of S/N should be normally distributed. We tested this by selecting non-central regions of cubes containing high-redshift candidate targets (i.e. where we expect very little source flux) and found the pixel distribution of S/N to have standard deviation $>1$, suggesting the noise is underestimated by the pipeline. We therefore apply a rescaling to the noise of the combined cubes. 
We create an average 1D noise spectrum in a single spaxel for each cluster by taking the root-mean-square (RMS) at every wavelength of every spaxel from the cubes containing high-redshift candidate targets. Since the cubes are predominantly noise, taking the RMS of the flux at each wavelength across multiple cubes should give the appropriate noise. We find this RMS spectrum is a factor of $\sim1.2$ larger than the pipeline average 1D noise spectrum (taking the average of the noise cubes across the same set of high-redshift targets). We rescale the pipeline noise in every cube by this ratio of the cluster RMS noise spectrum to the cluster average noise spectrum. Finally, we rescale the noise in each cube by a constant value so that the S/N distribution of all pixels has standard deviation 1 (clipping the 0.1\% most extreme pixels to remove spurious peaks). We find the S/N distribution is well-described by a Gaussian distribution, with non-Gaussian tails only beyond the $\simgt7\sigma$ confidence regions, due to bad sky subtraction residuals. \section{Emission Line Search, Purity and Completeness} \label{sec:results} In this section we describe our search for Ly$\alpha$\ emission in our KMOS observations. We give our algorithm for line detection in Section~\ref{sec:results_lines}, and calculate the purity and completeness of our observations in Sections~\ref{sec:results_detections} and~\ref{sec:results_completeness}. Given that we detect no convincing Ly$\alpha$\ emission lines in our sample we present our flux and EW upper limits in Section~\ref{sec:results_flim}. 
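The final noise recalibration above, rescaling so that the pixel S/N distribution has unit standard deviation, can be made concrete with a toy example; the factor of 1.2 mimics the pipeline underestimate quoted in the text, and the Gaussian toy data stand in for the `empty' cube pixels:

```python
import random
import statistics

random.seed(42)

# Toy 'empty-cube' pixels: the true noise is 1.2x what the pipeline reports
flux = [random.gauss(0.0, 1.2) for _ in range(100_000)]
pipeline_sigma = 1.0                        # underestimated per-pixel noise

snr = [f / pipeline_sigma for f in flux]
scale = statistics.pstdev(snr)              # recovers the ~1.2 underestimate
corrected_sigma = pipeline_sigma * scale    # rescaled noise estimate

snr_fixed = [f / corrected_sigma for f in flux]
print(f"scale = {scale:.3f}, corrected S/N std = {statistics.pstdev(snr_fixed):.3f}")
```

After rescaling, the S/N distribution has unit standard deviation by construction, matching the recalibration applied to the combined cubes.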
\subsection{Emission line detection technique} \label{sec:results_lines} To search for emission lines in the KMOS cubes, to robustly determine the completeness and purity of our survey, and determine the flux limits of our observations, we used the following algorithm to flag potential lines: \begin{enumerate} % \item Create a circular aperture with $r = 2 \sigma_\textsc{psf} \sim 0\farcs5 \sim 2.5$ pixels (using our median seeing $FWHM_\mathrm{psf} = 0\farcs6$), which will capture $86\%$ of the total flux for a spatially unresolved emission line at the centre of the aperture. % \item Sum the flux, and take the RMS of the noise of all spaxels in the aperture to create 1D data and noise spectra. % \item Rescale the 1D noise spectrum so the S/N in all pixels (excluding the 0.1\% most extreme S/N values) follows a standard Normal distribution. % \item Scan through in wavelength and flag a detection if 3 adjacent wavelength pixels have $S/N > 3$. This corresponds to a $S/N \simgt 5$ detection of the integrated line flux. % \item Iterate over 25 apertures centred within 3 pixels ($0\farcs6$) of the IFU centre, i.e. $x = [-3,-1.5,0,1.5,3]$, $y = [-3,-1.5,0,1.5,3]$ where $(x,y) = (0,0)$ is the IFU centre. % \end{enumerate} Our search covers $\sim 25 \times 2000 = 50,000$ potential emission line positions in each cube. As our detection threshold is $5\sigma$ we would expect a false positive rate of $6 \times 10^{-7}$, i.e. $\sim0.03$ false detections per cube for Gaussian noise. As discussed in Section~\ref{sec:obs_reduction} the S/N has small non-Gaussian tails due to sky subtraction residuals so we expect a slightly higher false detection rate than this. \subsection{Candidate emission lines and sample purity} \label{sec:results_detections} We ran the detection algorithm described in Section~\ref{sec:results_lines} on the 54 cubes containing our high-redshift candidate targets (including the 3 cubes containing the $z=6.11$ images). 
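In simplified 1D form, the per-aperture scan of Section~\ref{sec:results_lines} amounts to the sketch below (a pure-Python toy assuming a single already-extracted aperture spectrum with standardised noise; the injected line amplitude is illustrative):

```python
import random

random.seed(1)

def find_lines(snr, n_adj=3, thresh=3.0):
    """Flag indices where n_adj consecutive S/N pixels all exceed thresh
    (an integrated line S/N of roughly 3*sqrt(3) ~ 5)."""
    return [i for i in range(len(snr) - n_adj + 1)
            if all(snr[i + j] > thresh for j in range(n_adj))]

# 2048 spectral pixels of unit-variance Gaussian noise (one aperture spectrum)
spectrum = [random.gauss(0.0, 1.0) for _ in range(2048)]
hits_noise = find_lines(spectrum)       # false positives are very unlikely

# Inject a bright unresolved line spanning 3 pixels at per-pixel S/N ~ 8
for j in range(3):
    spectrum[1000 + j] += 8.0
hits_line = find_lines(spectrum)

print("noise-only hits:", hits_noise, "| with injected line:", hits_line)
```

The full algorithm additionally iterates this scan over the 25 aperture centres per IFU.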
9 unique candidate lines were flagged (combining candidates at the same wavelength identified in different apertures). Each of these candidate lines was then visually inspected to determine whether it was a true emission line or a spurious noise peak. For our inspections we use both 1D spectra extracted in the detection apertures as well as 2D collapsed images of the candidate line obtained by summing cube voxels in the wavelength direction. The 2D images are helpful for distinguishing plausible spatially compact emission from the uniform emission produced by sky residuals. Our algorithm correctly identifies the C\textsc{iv}$\lambda1551$ emission at 11023.7\,\AA\ in the brightest image of the multiply-imaged $z=6.11$ system, demonstrating the depth of our KMOS observations and the fidelity of our algorithm. Another detection is flagged in this object at 13358.6\,\AA\ but the emission appears diffuse and the wavelength is not consistent with other expected UV emission lines so we deem this spurious. We describe this object in more detail in Appendix~\ref{app:CIV}. Of the remaining 7 lines flagged, 6 are deemed to be spurious detections as they are at the spectral edges of the detector, or immediately adjacent to strong skylines and appear to have P-Cygni profiles, indicating extreme sky subtraction failures. Whilst it could be possible to add a cut to e.g. downweight flagged lines adjacent to skylines, given the relatively low spectral resolution of our observations ($R\sim3400$) we were wary that many true emission lines could be overlapping with skylines, thus visual inspection was necessary. This is clearly demonstrated in our detection of C\textsc{iv}\ emission where both doublet components overlap with sky lines (see Figure~\ref{fig:CIV}). The remaining candidate emission line at 12683.7\,\AA\ is spatially offset from the $z>7$ LBG candidate in the cube. 
We determine the detected emission to be associated with a nearby ($\sim1.1\arcsec$) galaxy with $z_\textrm{phot} = 4.2$, which has bright continuum emission in the GLASS data. The candidate line appears in a particularly bad spectral region of telluric absorption, and we determine the detection to be due to inadequate continuum subtraction of the $z\sim4$ source. In our reductions we subtract a sky residual spectrum to minimise the flux in each spaxel of the high-redshift candidate cubes (Section~\ref{sec:obs_reduction}). During that process most of the continuum emission from the $z\sim4$ object was poorly subtracted by scaling the sky residual spectrum to high values. Some residual flux is left, which correlates with the positions of sky residuals. We note that the LBG candidate targeted in this IFU is not present in the final deep photometric catalogues and is excluded from our reionization analysis (it was likely a spurious detection in the original shallow photometry, Section~\ref{sec:reionization_phot}). We remove this cube from further analysis. Thus we determine our algorithm has detected 1 real emission line, and 7 spurious detections (excluding the $z\sim4$ continuum object described above), allowing us to define the purity of our spectral sample: \begin{equation} \label{eqn:purity} P = 1 - \frac{N_\textrm{spurious}}{N_\textrm{pos}} \end{equation} where $N_\textrm{spurious} = 17$ is the total number of spurious flags (8 unique false detections which were sometimes flagged in multiple apertures) and $N_\textrm{pos} = 101763 \times 25$ is the number of possible emission line positions in the 53 useful cubes, removing wavelength pixels not covered by certain detectors, in 25 apertures. We measure $P=1-7\times10^{-6}$. Our spurious detection rate is $\sim10\times$ higher than that expected for $5\sigma$ fluctuations in the noise, as expected given the non-Gaussian tail in our S/N distribution caused by sky subtraction residuals. 
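The purity bookkeeping above reduces to simple arithmetic, checked directly below using the numbers quoted in the text and `math.erfc` for the two-sided Gaussian $5\sigma$ rate:

```python
import math

# Numbers quoted in this section
n_spurious = 17          # total spurious flags (8 unique, some in multiple apertures)
n_pos = 101763 * 25      # searchable line positions in the 53 useful cubes

rate_measured = n_spurious / n_pos      # per-position spurious rate
purity = 1.0 - rate_measured            # purity as defined in the text

# Two-sided Gaussian expectation for a 5-sigma threshold
rate_gaussian = math.erfc(5.0 / math.sqrt(2.0))

print(f"purity = 1 - {rate_measured:.0e}")
print(f"spurious rate is {rate_measured / rate_gaussian:.0f}x the Gaussian rate")
```

This reproduces $P = 1 - 7\times10^{-6}$ and a spurious rate roughly an order of magnitude above the pure-Gaussian expectation of $\sim6\times10^{-7}$.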
To verify that the S/N distribution is symmetrical we also ran the detection algorithm to look for negative peaks (S/N$\simlt-5$) which should occur at the same rate. We found 12 flagged negative S/N detections, comparable to our 7 flagged spurious detections with positive S/N. We ran the algorithm on our Category 1 sources with a lower S/N threshold: S/N $>2.5$ per wavelength pixel, corresponding to $S/N \simgt 4$ in the integrated line. We found no convincing detections with this lower threshold and are thus unable to confirm any of the candidate GLASS emission lines. Given that most of the GLASS Ly$\alpha$\ candidates were of low significance in the GLASS \textit{HST}\ data these candidates may have been spurious noise peaks in the grism data. In Section~\ref{sec:reionization_phot} below we list the Ly$\alpha$\ flux and EW limits for our most likely $z_\mathrm{phot}$ LBG candidates. We discuss our limits on other UV lines in Section~\ref{sec:dis_otherUV}. \subsection{Completeness} \label{sec:results_completeness} \begin{figure*} \centering \includegraphics[width=0.8\textwidth, clip=true]{figure1.pdf} \caption{Completeness as a function of line flux (\textbf{top left}), spatial offset from IFU centre (\textbf{top right}), spectral linewidth (\textbf{lower left}) and spatial extent (\textbf{lower right}). Each colour corresponds to a separate cluster target. Dashed lines show the completeness across the entire wavelength range, solid lines show the completeness in wavelength regions where the noise level is below the median. FWHM$_\textrm{spec}$ velocities were calculated assuming $z=8$. Spatial extent, FWHM$_\textrm{spat}$, is the extent of the source (excluding the PSF). We create simulated lines with total spatial extent $\textrm{FWHM}_\textrm{spat,tot} = \sqrt{\textrm{FWHM}_\textrm{PSF}^2 + \textrm{FWHM}_\textrm{spat}^2}$. In each plot the parameter of interest and wavelength are varied, while the other parameters are held constant. 
The fiducial parameters are: line flux = $1\times10^{-17}$\,erg s$^{-1}$ cm$^{-2}$, observed FWHM$_\textrm{spec}=4$\,\AA\ (i.e. unresolved), line centred at the IFU centre, and FWHM$_\textrm{spat} = 0\arcsec$ (i.e. unresolved point source).} \label{fig:completeness} \end{figure*} To evaluate the completeness of our emission line search we carry out comprehensive Monte Carlo simulations: inserting simulated lines into cubes with varied total flux, spectral FWHM$_\textrm{spec}$, spatial position, spatial extent FWHM$_\textrm{spat}$, and wavelength, and testing whether they are detected by our detection algorithm (Section~\ref{sec:results_lines}). Traditionally, these types of simulations are carried out by inserting simulated lines into real raw data and then running through the full reduction pipeline \citep{Fontana2010,Pentericci2014,DeBarros2017}. However, due to the complexity of the KMOS pipeline, which constructs 3D cubes from 2D frames, we instead create simulated cubes and add Gaussian noise drawn from an average noise cube for each cluster, mimicking completeness simulations traditionally done in imaging. We create simulated flux cubes with a 3D Gaussian emission line with varied properties and add noise to each voxel drawn from a Gaussian distribution with mean zero and standard deviation $\sigma_{x,y,\lambda}$ for each cluster. The $\sigma_{x,y,\lambda}$ cubes are constructed by taking the RMS at every voxel of all the final sky-subtracted cubes which do not contain bright $z\simlt 2$ sources ($\sim10$ cubes per cluster). As each `empty' cube is expected to be pure noise, taking the RMS at each voxel across the cubes should give an estimate of the noise per voxel, $\sigma_{x,y,\lambda}$. We calculate completeness as a function of flux, spatial offset from the IFU centre, spectral linewidth and spatial extent. For each simulation we vary the parameter of interest and wavelength, and fix the other three parameters. 
Our fiducial values for the parameters are: line flux = $1\times10^{-17}$\,erg s$^{-1}$ cm$^{-2}$, observed line FWHM$_\textrm{spec}=4$\,\AA\ (the spectral resolution, i.e. unresolved lines), line centred at the IFU centre, with source spatial extent FWHM$_\textrm{spat} = 0\arcsec$ (i.e. unresolved point source, the emission will have observed spatial extent with $\textrm{FWHM}_\textrm{spat,tot} = \sqrt{\textrm{FWHM}_\textrm{PSF}^2 + \textrm{FWHM}_\textrm{spat}^2}$). We draw 1000 realizations of an emission line with noise at every tested value of a parameter. The resulting completeness is the fraction of these simulated lines detected by our detection algorithm. Our fiducial simulations assume Ly$\alpha$\ emission will be spatially unresolved. These assumptions are reasonable for the intrinsically UV faint LBGs we are observing \citep{Schmidt2016,Marchi2017}. Typical slit spectrograph observations of Ly$\alpha$\ emission centre slits on the UV continuum and use slit-widths $\sim0\farcs7$, thus in KLASS we are more complete to Ly$\alpha$\ emission that may be spatially extended and/or offset from the UV continuum. Figure~\ref{fig:completeness} shows the results of our completeness simulations for all clusters. We reach $80\%$ completeness over the full wavelength range for lines $\simgt 5.7 \times 10^{-18}$\,erg s$^{-1}$ cm$^{-2}$, centred within $<0\farcs8$ of the IFU centre and with intrinsic line FWHM$_\textrm{spec} \simlt 250$\,km s$^{-1}$, assuming $z=8$ to calculate FWHM$_\textrm{spec}$ (median over all clusters). For wavelength ranges where the noise level is below the median across the whole spectrum, we reach $80\%$ completeness for $5\sigma$ lines $\simgt 3.2 \times 10^{-18}$\,erg s$^{-1}$ cm$^{-2}$, centred within $<0\farcs9$ of the IFU centre and with intrinsic line FWHM$_\textrm{spec} \simlt 550$\,km s$^{-1}$. 
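The Monte Carlo procedure can be illustrated with a 1D toy version of the full 3D simulations (an unresolved line spanning 3 spectral pixels injected into unit Gaussian noise; the per-pixel amplitudes and trial counts are purely illustrative, not the survey values):

```python
import random

random.seed(7)

def detected(snr, n_adj=3, thresh=3.0):
    """Detection criterion: n_adj consecutive pixels all above thresh."""
    return any(all(snr[i + j] > thresh for j in range(n_adj))
               for i in range(len(snr) - n_adj + 1))

def completeness(amp, n_trials=500, n_pix=200):
    """Fraction of injected lines recovered, for per-pixel amplitude amp
    in units of the noise standard deviation."""
    n_det = 0
    for _ in range(n_trials):
        spec = [random.gauss(0.0, 1.0) for _ in range(n_pix)]
        for j in range(3):                 # unresolved line: 3 pixels wide
            spec[100 + j] += amp
        n_det += detected(spec)
    return n_det / n_trials

results = {amp: completeness(amp) for amp in (1.0, 4.0, 8.0)}
print(results)
```

As expected, completeness rises steeply from near zero for lines below the per-pixel threshold to near unity for bright lines, mirroring the flux dependence in Figure~\ref{fig:completeness}.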
The completeness is fairly flat for Ly$\alpha$\ spatial extent $\simlt0\farcs6$ (total extent $\simlt 0\farcs8$) demonstrating our good completeness for spatially extended Ly$\alpha$\ emission, with the normalisation of the completeness as a function of FWHM$_\textrm{spat}$ scaling with the completeness at a given total line flux. \subsection{Flux and equivalent width limits} \label{sec:results_flim} \begin{figure*} \centering \includegraphics[width=0.99\textwidth, clip=true]{figure2.pdf} \caption{Average $5\sigma$ flux limits for each cluster as a function of wavelength, assuming emission lines are spatially unresolved. We use the 1D RMS noise spectrum for each cluster as described in Section~\ref{sec:results_flim} to obtain the flux limits. Each plot corresponds to the median across all IFUs containing high-redshift candidates, for the different cluster targets. The dashed horizontal lines mark the median flux limit for each cluster.} \label{fig:fluxlim} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.99\textwidth, clip=true]{figure3.pdf} \caption{$5\sigma$ rest-frame Ly$\alpha$\ EW limits in RXJ1347 as a function of wavelength, for 3 values of UV apparent magnitude $m$, assuming emission lines are spatially unresolved. We use the 5$\sigma$ flux limit for RXJ1347 shown in Figure~\ref{fig:fluxlim} and divide by the continuum flux and $(1+z_{\mathrm{Ly}\alpha})$ at each wavelength to obtain the EW limit.} \label{fig:EWlim} \end{figure*} To calculate average flux limits for each cluster we take the average 3D noise spectrum for each cluster, $\sigma_{x,y,\lambda}$ (created by taking the RMS at every voxel across the $\sim10$ IFUs observing high redshift candidates in each cluster). We then create a 1D noise spectrum, $\sigma_\lambda$, by summing the average noise at each wavelength pixel in a circular aperture with radius $r=2\sigma_\textsc{psf}$ (where we use our median seeing $\mathrm{FWHM}_\textsc{psf}=0\farcs6$). 
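The collapse from a 3D noise cube to a 1D noise spectrum can be sketched as follows, assuming the aperture sum combines per-voxel noise in quadrature (our reading of the procedure; the function name and array shapes are illustrative).

```python
import numpy as np

def noise_spectrum_1d(sigma_cube, sigma_psf_pix, r_factor=2.0):
    """Collapse a noise cube sigma[x, y, lambda] to a 1D noise spectrum by
    adding the per-voxel noise in quadrature inside a circular aperture of
    radius r = r_factor * sigma_psf centred on the IFU centre."""
    nx, ny, _ = sigma_cube.shape
    x, y = np.indices((nx, ny))
    r2 = (x - nx // 2) ** 2 + (y - ny // 2) ** 2
    mask = r2 <= (r_factor * sigma_psf_pix) ** 2
    return np.sqrt((sigma_cube[mask] ** 2).sum(axis=0))

# uniform unit noise, 49 pixels inside r = 4 pix: sigma_lambda = sqrt(49) = 7
cube = np.ones((11, 11, 5))
assert np.allclose(noise_spectrum_1d(cube, sigma_psf_pix=2.0), 7.0)
```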
At each wavelength pixel $i$, the flux limit in erg s$^{-1}$ cm$^{-2}$\ is given by: \begin{equation} \label{eqn:fluxlim} f_{\textrm{lim},i} = 5 \times \frac{1}{1 - e^{-{\frac{r^2}{2\sigma_\textsc{psf}^2}}}} \sqrt{\frac{2FWHM_\mathrm{res}}{\Delta \lambda}}\sigma_i \times \Delta\lambda \end{equation} Here we obtain an estimate of the integrated noise for an emission line with observed $FWHM_\mathrm{res}=4$\,\AA\ or $\approx110$\,km s$^{-1}$\ (the instrumental resolution), and use a threshold integrated $S/N = 5$. The term in the denominator accounts for the fact that the aperture only captures a fraction of the flux. For $r=2\sigma_\textsc{psf}$ this results in a rescaling of 1.16. The spectral pixel width of KMOS YJ is $\Delta \lambda = 1.75$\,\AA. The above calculation assumes the emission is spatially and spectrally unresolved by KMOS, which is reasonable given the expectation that Ly$\alpha$\ emission from UV faint galaxies is likely to be more spatially compact and have lower linewidth than Ly$\alpha$\ from UV bright galaxies \citep[e.g.,][]{Schmidt2016,Marchi2017}. We note that the flux limit for wider lines can be estimated as $f_\textrm{lim} \propto \sqrt{FWHM/4\,\textrm{\AA}}$. The $5\sigma$ flux limits for all clusters are shown in Figure~\ref{fig:fluxlim}. The median flux limit is $4.5 \times 10^{-18}$\,erg s$^{-1}$ cm$^{-2}$, and the range of medians for each cluster is $3.9 - 6.4 \times 10^{-18}$\,erg s$^{-1}$ cm$^{-2}$. 
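Equation~\ref{eqn:fluxlim} transcribes directly into code. The sketch below uses the quoted KMOS YJ values (pixel width 1.75\,\AA, instrumental resolution 4\,\AA); it is an illustration, not the reduction code itself.

```python
import numpy as np

DLAM = 1.75        # KMOS YJ spectral pixel width [Angstrom]
FWHM_RES = 4.0     # instrumental resolution [Angstrom]

def flux_limit(sigma_i, nsigma=5.0, r_over_sigma_psf=2.0):
    """5-sigma line flux limit at one wavelength pixel (Equation fluxlim):
    aperture-loss correction times the noise integrated over ~2 spectral
    FWHM, converted to flux with the pixel width."""
    aperture_corr = 1.0 / (1.0 - np.exp(-0.5 * r_over_sigma_psf ** 2))
    return (nsigma * aperture_corr
            * np.sqrt(2.0 * FWHM_RES / DLAM) * sigma_i * DLAM)

# for r = 2 sigma_psf the aperture correction is 1/(1 - e^-2) ~ 1.16, as quoted
assert abs(1.0 / (1.0 - np.exp(-2.0)) - 1.16) < 0.005
```

The limit scales linearly with the per-pixel noise, so evaluating `flux_limit` on the 1D noise spectrum $\sigma_\lambda$ reproduces the wavelength dependence shown in Figure~\ref{fig:fluxlim}.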
Rest-frame Ly$\alpha$\ equivalent widths are $W = (1+z)^{-1}f(\lambda)/f_\textrm{cont}$, where $z=\lambda/\lambda_\alpha - 1$ (with $\lambda_\alpha =1216$\,\AA), and we define the continuum flux: \begin{equation} \label{eqn:fluxlim_fcont} f_\textrm{cont}(m, z) = f_0 10^{-0.4m} \frac{c}{\lambda_\alpha^2(1+z)^2} \left(\frac{\lambda_\textsc{uv}}{\lambda_\alpha}\right)^{-\beta-2} \end{equation} where $f_0 = 3.631 \times 10^{-20}$ erg s$^{-1}$ Hz$^{-1}$ cm$^{-2}$, $m$ is the apparent magnitude of the UV continuum, $c$ is the speed of light, $\lambda_\textsc{uv}$ is the rest-frame wavelength of the UV continuum (usually 1500\,\AA), and $\beta$ is the UV slope. We assume $\beta = -2$, consistent with $z\sim7$ observations \citep[e.g.,][]{Stanway2005,Blanc2011,Wilkins2011,Castellano2012,Bouwens2012,Bouwens2014}. We use the magnitude measured in \textit{HST}\ WFC3/F160W for the apparent magnitude (\texttt{automag}). Example EW limits for objects with a given apparent magnitude, using the RXJ1347 average flux limit, are plotted in Figure~\ref{fig:EWlim}. \section{Reionization inference} \label{sec:reionization} In this section we describe the extension to the \citetalias{Mason2018a} Bayesian inference framework to include the full spectra, robustly including the uncertainties in redshift via the photometric redshift distribution (Section~\ref{sec:inference}), and marginalising over the linewidth of potential emission lines. Using the observations described above we now define a clear selection function for a photometrically-selected sample of LBGs within our survey (Section~\ref{sec:reionization_phot}), and perform the inference of the IGM neutral fraction using these data (Section~\ref{sec:reionization_infer}). \subsection{Bayesian inference framework} \label{sec:inference} To use our observations to make inferences about the neutral hydrogen fraction at $z\sim8$ we use the method described by \citetalias{Mason2018a}. 
This forward-models the observed rest-frame Ly$\alpha$\ EW distribution as a function of the neutral fraction and galaxy UV magnitude, $p(W \,|\, {\overline{x}_\textrm{{\textsc{hi}}}}, M_\textsc{uv})$, using a combination of reionization simulations with realistic inhomogeneous IGM structure \citep{Mesinger2016a}, and empirical and semi-analytic models of galaxy properties. The models assume the observed $z\sim6$ Ly$\alpha$\ EW distribution is the `emitted' distribution (i.e. the distribution without IGM attenuation due to reionization) and use that to forward-model the observed distribution, including the impact of Ly$\alpha$\ velocity offsets. Here, as in \citetalias{Mason2018a}, we use the recent comprehensive $z\sim6$ Ly$\alpha$\ EW observations from \citet{DeBarros2017}. We use the public Evolution of 21cm Structure (EoS) suite of reionization simulations described by \citet{Mesinger2015,Mesinger2016a}\footnote{\url{http://homepage.sns.it/mesinger/EOS.html}} to generate Ly$\alpha$\ optical depths along millions of sightlines in simulated IGM cubes for a grid of volume-averaged ${\overline{x}_\textrm{{\textsc{hi}}}}$ values. As the size of ionised regions during reionization is expected to be nearly independent of redshift at fixed ${\overline{x}_\textrm{{\textsc{hi}}}}$ \citep[as there is little difference in the matter power spectrum from $z\sim7-11$,][]{McQuinn2007}, we use the same $z\sim7$ cubes as used by \citetalias{Mason2018a} rather than generating new $z\sim8$ cubes. We refer the reader to \citetalias{Mason2018a} for more details of the forward-modelling approach. Here we describe the modifications we have made to our Bayesian inference to make use of the spectral coverage and sensitivity of our observations. We account for the incomplete redshift coverage and for the gravitational lensing magnification of the objects by the foreground clusters. We marginalise over a range of potential linewidths for the Ly$\alpha$\ emission lines. 
We also marginalise over the photometric redshift distribution for each galaxy, which we obtain from comprehensive photometry (Section~\ref{sec:reionization_phot}), to robustly account for uncertainties and degeneracies in redshift determination. We want to obtain the posterior distribution for the neutral fraction: $p({\overline{x}_\textrm{{\textsc{hi}}}} \,|\, \{f\}, m, \mu)$ for each galaxy, where $\{f\}$ is an observed flux density spectrum as a function of wavelength, $m$ is the observed apparent UV magnitude, and $\mu$ is the magnification. A full derivation of the posterior is shown in Appendix~\ref{app:inference}, and we summarise it here. Our inference framework calculates the likelihood of an emission line emitted at redshift $z_d$ with observed rest-frame EW $W$ being present in an observed flux density spectrum. To calculate this likelihood we must assume a lineshape for the observed emission line. Previous inferences by \citet{Treu2013,Pentericci2014} and \citet{Tilvi2014} treated emission lines as unresolved: lines were modelled as Dirac Delta functions, with all the flux contained in a single spectral pixel. However, motivated by recent observations of $z>6$ Ly$\alpha$\ emission with linewidth $\mathrm{FWHM}_\mathrm{spec} \sim200-450$\,km s$^{-1}$, several times greater than the instrumental resolution \citep{Ono2012,Finkelstein2013,Vanzella2011,Oesch2015,Zitrin2015a,Song2016a}, here we improve the method by including the effect of linewidth. The inference is quite sensitive to linewidth because, at fixed EW, a broader line will have lower S/N in our observations. By assuming unresolved emission lines the lower limits on the reionization `patchiness' parameter inferred by \citet{Treu2013,Pentericci2014} and \citet{Tilvi2014} will be slightly overestimated compared to a more realistic treatment of linewidth. 
We note that the $z\sim7$ neutral fraction inference by \citetalias{Mason2018a} used EW limits calculated assuming a range of realistic linewidths, so their result does not need revision. We discuss the impact of linewidth in more detail in Appendix~\ref{app:inference_FWHM} but note that our results are robust for FWHM in the realistic range $\sim100-400$\,km s$^{-1}$. To modify our inference to account for linewidth, we assume Gaussian emission lines for simplicity, so we can write the model emission line flux density as a function of EW: \begin{equation} \label{eqn:inference_linemod} \begin{split} f_\mathrm{mod}&(\lambda, W, m, z_d, \mathrm{FWHM}) = \\ &\frac{W f_\mathrm{cont}(m,z_d)(1+z_d)}{\sqrt{2\pi}\sigma_\lambda} e^{-\frac{1}{2}\left(\frac{\lambda -\lambda_d}{\sigma_\lambda}\right)^2} \end{split} \end{equation} where $z_d = \lambda_d/\lambda_\alpha - 1$, with $\lambda_\alpha=1216$\,\AA, is the redshift of an emission line, $W$ is the rest-frame equivalent width of the emission line, $f_\mathrm{cont}$ is the flux density of the continuum calculated using Equation~\ref{eqn:fluxlim_fcont} using the observed continuum apparent magnitude $m$, and $\sigma_\lambda = \mathrm{FWHM}/2.355$ is the spectral linewidth. 
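For concreteness, Equations~\ref{eqn:fluxlim_fcont} and \ref{eqn:inference_linemod} can be sketched together as follows. This is an illustration rather than the actual inference code; the numerical cross-check uses the $m_\mathrm{F160W}$, redshift and flux limit quoted for M0416\_99 in Table~\ref{tab:infersample}.

```python
import numpy as np

C_AA = 2.998e18   # speed of light [Angstrom / s]
LYA = 1216.0      # rest-frame Ly-alpha wavelength [Angstrom]
F0 = 3.631e-20    # AB zeropoint flux density [erg / s / Hz / cm^2]

def f_cont(m, z, beta=-2.0, lam_uv=1500.0):
    """Continuum flux density at Ly-alpha in erg/s/cm^2/Angstrom
    (Equation fluxlim_fcont)."""
    return (F0 * 10.0 ** (-0.4 * m) * C_AA / (LYA ** 2 * (1.0 + z) ** 2)
            * (lam_uv / LYA) ** (-beta - 2.0))

def f_mod(lam, W, m, z_d, fwhm):
    """Gaussian model emission line flux density (Equation inference_linemod)."""
    lam_d = LYA * (1.0 + z_d)
    sig = fwhm / 2.355
    amp = W * f_cont(m, z_d) * (1.0 + z_d) / (np.sqrt(2.0 * np.pi) * sig)
    return amp * np.exp(-0.5 * ((lam - lam_d) / sig) ** 2)

# integrating the profile recovers the total line flux W * f_cont * (1 + z_d)
lam = np.linspace(10800.0, 11100.0, 3001)
total = f_mod(lam, W=50.0, m=27.0, z_d=8.0, fwhm=4.0).sum() * (lam[1] - lam[0])
assert np.isclose(total, 50.0 * f_cont(27.0, 8.0) * 9.0, rtol=1e-3)

# cross-check: m = 26.28 at z = 8.31 with f_lim = 3.4e-18 erg/s/cm^2 gives
# a rest-frame EW limit of ~14 A, matching the M0416_99 table entry
ew = 3.4e-18 / f_cont(26.28, 8.31) / (1.0 + 8.31)
assert 13.0 < ew < 15.0
```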
The likelihood of observing a 1D flux density spectrum $\{f\} = f(\lambda_i)$ for an individual galaxy (where $i$ is the wavelength pixel index), given our model where the true EW is drawn from the conditional probability distribution $p(W \,|\, {\overline{x}_\textrm{{\textsc{hi}}}}, m, \mu, z_d)$ is: \begin{equation} \label{eqn:inference_linelike} \begin{split} p&(\{f\} \,|\, {\overline{x}_\textrm{{\textsc{hi}}}}, m, \mu, z_d, \mathrm{FWHM}) = \\ &\prod_i^N \int_0^\infty dW \, \biggl[\frac{1}{\sqrt{2\pi}\sigma_i}e^{-\frac{1}{2} \left(\frac{f_i - f_\mathrm{mod}(\lambda_i, W, m, z_d, \mathrm{FWHM})}{\sigma_i}\right)^2} \\ & \quad\quad\quad\quad\quad\quad \times p(W \,|\, {\overline{x}_\textrm{{\textsc{hi}}}}, m, \mu, z_d) \biggr] \end{split} \end{equation} where $\sigma_i$ is the uncertainty in flux density at wavelength pixel $i$ and there are a total of $N$ wavelength pixels in the spectrum. $p(W \,|\, {\overline{x}_\textrm{{\textsc{hi}}}}, m, \mu, z_d)$ is the probability distribution for the observed rest-frame EW as a function of the neutral fraction, and galaxy properties -- UV apparent magnitude, magnification, and redshift. This PDF is obtained by convolving the $p(W \,|\, {\overline{x}_\textrm{{\textsc{hi}}}}, M_\textsc{uv})$ model outputs from \citetalias{Mason2018a} with the probability distribution for each galaxy's absolute UV magnitude, including errors on $m$ and $\mu$ (Equation~\ref{appeqn:inference_likeWMuv}). We note that the range of neutral fraction in the EoS simulations is ${\overline{x}_\textrm{{\textsc{hi}}}} = 0.01 - 0.95$. In order to correctly calculate posteriors and confidence intervals, we set the likelihood at ${\overline{x}_\textrm{{\textsc{hi}}}} = 1$ to that expected when no Ly$\alpha$\ flux is observed at all (a fully neutral universe), i.e. $p(\{f\} \,|\, {\overline{x}_\textrm{{\textsc{hi}}}}=1, m, \mu, z_d, \mathrm{FWHM}) = \prod_i^N \frac{1}{\sqrt{2\pi}\sigma_i}\exp{(-f_i^2/2\sigma_i^2)}$. 
Given our relatively small sample size, we choose to restrict our inference to $z\sim8$, thus for ease of computation we evaluate $p(W \,|\, {\overline{x}_\textrm{{\textsc{hi}}}}, m, \mu, z_d)$ at $z_d=8$; this has a negligible impact on the final likelihood. We keep $z_d$ free in the rest of the inference. This product of likelihoods over the wavelength range of the spectrum accounts for the wavelength sensitivity of our observations, i.e. high noise regions are weighted lower than low noise regions. We also note that EW is independent of magnification. Therefore, our inferences should be quite robust to magnification, which enters only through the dependency on $M_\textsc{uv}$ of the assumed intrinsic EW distribution. Using Bayes' theorem, the posterior distribution for ${\overline{x}_\textrm{{\textsc{hi}}}}$, $z_d$ and FWHM is: \begin{equation} \label{eqn:inference_post} \begin{split} p({\overline{x}_\textrm{{\textsc{hi}}}}, z_d, \mathrm{FWHM} \,|\, \{f\}, m, \mu) \propto{}& p(\{f\} \,|\, {\overline{x}_\textrm{{\textsc{hi}}}}, m, \mu, z_d, \mathrm{FWHM}) \\ & \times p({\overline{x}_\textrm{{\textsc{hi}}}}) \, p(z_d) \, p(\mathrm{FWHM}) \end{split} \end{equation} We use a uniform prior on ${\overline{x}_\textrm{{\textsc{hi}}}}$ between 0 and 1, $p({\overline{x}_\textrm{{\textsc{hi}}}})$, and use the photometric redshift distribution for the prior $p(z_d)$. As we are only interested in the posterior probability of ${\overline{x}_\textrm{{\textsc{hi}}}}$ we can marginalise over FWHM and $z_d$ for each galaxy. We use a log-normal prior on FWHM with mean depending on $M_\textsc{uv}$ derived through empirical relations and 0.3 dex width; we discuss our choice of FWHM priors in more detail in Appendix~\ref{app:inference_FWHM} but find our results to be negligibly changed if we had used a uniform prior spanning the range of observed Ly$\alpha$\ FWHM at $z>7$ ($\sim100-400$\,km s$^{-1}$). 
To account for the incomplete wavelength coverage, we use the fact that if the object has Ly$\alpha$\ outside of the KMOS wavelength range (covering $[z_\textrm{min}=7.2$, $z_\textrm{max}=10.1]$) we would measure a non-detection in our data. Thus the posterior for ${\overline{x}_\textrm{{\textsc{hi}}}}$ from one galaxy is: \begin{equation} \label{eqn:inference_postmarg} \begin{split} p({\overline{x}_\textrm{{\textsc{hi}}}} \,|\, \{f\}, m, \mu) &\propto \int_{z_\textrm{min}}^{z_\textrm{max}} dz_d \; p(\{f\} \,|\, {\overline{x}_\textrm{{\textsc{hi}}}}, m, \mu, z_d) p(z_d)\\ &+ \prod_i^N \frac{1}{\sqrt{2\pi}\sigma_i}e^{-\frac{f_i^2}{2\sigma_i^2}} \left( 1 - \int_{z_\textrm{min}}^{z_\textrm{max}} dz_d \; p(z_d) \right) \end{split} \end{equation} We assume all galaxies observed are independent, so that the final posterior is the product of the normalised posteriors (Equation~\ref{eqn:inference_postmarg}) for each object. Using the photometric redshift distributions as a prior on the redshift allows us to incorporate the probability of each galaxy truly being at high redshift (rather than a low redshift contaminant) in a statistically rigorous way. In combining the posteriors in Equation~\ref{eqn:inference_postmarg} for each galaxy, the photometric redshift distribution weights the individual posteriors based on the probability of the source being within our redshift range. LBGs usually have degeneracies in their photometry which make it difficult to determine whether they are high redshift star-forming galaxies or mature $z\sim1-2$ galaxies. Thus with our method we are able to obtain reionization inferences from sources even when the photometric redshift distribution has multiple and/or broad peaks. 
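A minimal numerical sketch of Equation~\ref{eqn:inference_postmarg} is given below. The grids, function name, and toy numbers are illustrative assumptions, not the actual inference code; the example checks the limiting case where the photometric redshift distribution lies entirely outside the KMOS range, so the data carry no information on ${\overline{x}_\textrm{{\textsc{hi}}}}$ and the posterior is flat.

```python
import numpy as np

def posterior_xhi(like_grid, pz, z_grid, null_like, zmin=7.2, zmax=10.1):
    """Posterior over a grid of xhi values for one galaxy, marginalised over
    z_d (Equation inference_postmarg). like_grid[i, j] = p({f} | xhi_i, z_j),
    already marginalised over FWHM; pz is the photo-z prior on z_grid;
    null_like is the no-line likelihood prod_i N(f_i; 0, sigma_i)."""
    dz = z_grid[1] - z_grid[0]                 # uniform grid assumed
    in_band = (z_grid >= zmin) & (z_grid <= zmax)
    p_in = pz[in_band].sum() * dz              # P(Lya lands in KMOS range)
    term1 = (like_grid[:, in_band] * pz[in_band]).sum(axis=1) * dz
    post = term1 + null_like * (1.0 - p_in)    # out-of-band mass: non-detection
    return post / post.sum()                   # normalise over the xhi grid

# photo-z mass entirely below z = 7.2: the posterior on xhi is uniform
z = np.linspace(1.0, 3.0, 201)
pz = np.exp(-0.5 * ((z - 2.0) / 0.2) ** 2)
pz /= pz.sum() * (z[1] - z[0])
post = posterior_xhi(np.ones((3, z.size)), pz, z, null_like=0.7)
assert np.allclose(post, 1.0 / 3.0)
```

The final combined constraint is then the product of such normalised single-galaxy posteriors over the sample.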
Whilst here we have carried out the inference at $z\sim8$ only, with larger samples, it will be possible to measure ${\overline{x}_\textrm{{\textsc{hi}}}}(z)$ directly, for example by parametrising its evolution with redshift and inferring the values of its redshift-dependent parameters, or in a Markov Chain Monte Carlo exploration of IGM simulations to also infer relevant astrophysical parameters \citep{Greig2015,Greig2017}. \subsection{Defining a selection function for a photometric sample} \label{sec:reionization_phot} \begin{figure} \centering \includegraphics[width=0.49\textwidth, clip=true]{figure4.pdf} \caption{Photometric redshift distributions centred on the KMOS observable range. We show the KMOS YJ range for Ly$\alpha$\ with the solid blue horizontal line. Black lines show the $p(z_\textrm{phot})$ for the 29 sources which have $>1\%$ probability of $7.2 \leq z_\textrm{phot} \leq 8.8$ (marked by blue dashed vertical lines) which we use for the inference. 14 sources have $P(7.2 \leq z_\textrm{phot} \leq 8.8) < 0.01$, including the galaxy M1149\_JD1, recently spectroscopically confirmed at $z=9.11$ with ALMA by \citet{Hashimoto2018}. In our photometric catalogue this galaxy is correctly found to be outside of our redshift range of interest (shown here as the red curve with $z_\textrm{phot}>9$), so we do not use it for our reionization analysis but discuss it in Section~\ref{sec:dis_JD1}. Note -- the remaining 13 objects have photometric redshift distributions outside of the range plotted here.} \label{fig:pz} \end{figure} \begin{table*} \centering \caption{KLASS targets with $P(7.2 \leq z_\textrm{phot} \leq 8.8)$ solutions} \label{tab:infersample} \begin{tabular}[c]{lrrcccccc} \hline \hline Object ID$^\ast$ & R.A. & Dec. 
& $m_\mathrm{F160W}$ & $\mu$ & $M_\textsc{uv}^\dagger$ & $P(z_\textrm{phot})^\diamond$ & $f_\textrm{lim}^\ddagger\times 10^{-18}$ & $EW_{\textrm{Ly}\alpha}^{\ddagger,\star}$ \\ & [deg] & [deg] & & & & & [erg s$^{-1}$ cm$^{-2}$] & [\AA] \\ \hline A2744\_2036 & $3.596087$ & $-30.385836$ & $26.95\pm0.07$ & $2.4_{-0.5}^{+7.4}$ & $-19.27\pm0.89$ & 0.97 & $< 12.1$ & $<96$ \\ A2744\_2346 & $3.606460$ & $-30.380995$ & $26.78\pm0.06$ & $1.6_{-0.5}^{+0.8}$ & $-19.89\pm0.41$ & 1.00 & $< 10.6$ & $<70$ \\ A2744\_2345 & $3.606572$ & $-30.380932$ & $26.49\pm0.06$ & $1.6_{-0.5}^{+0.8}$ & $-20.19\pm0.41$ & 0.99 & $< 10.6$ & $<54$ \\ A2744\_2261 & $3.603996$ & $-30.382309$ & $27.29\pm0.10$ & $1.7_{-0.5}^{+1.1}$ & $-19.34\pm0.47$ & 0.79 & $< 10.6$ & $<113$ \\ A2744\_2503 & $3.588979$ & $-30.378668$ & $27.27\pm0.12$ & $2.2_{-0.7}^{+0.9}$ & $-19.04\pm0.39$ & 0.36 & $< 11.4$ & $<120$ \\ A2744\_2257 & $3.598123$ & $-30.382393$ & $28.62\pm0.18$ & $1.9_{-0.4}^{+0.8}$ & $-17.87\pm0.36$ & 0.54 & $< 10.7$ & $<392$ \\ A2744\_20236 & $3.572523$ & $-30.413267$ & $28.61\pm0.24$ & $1.8_{-0.5}^{+1.0}$ & $-17.94\pm0.48$ & 0.42 & $< 9.5$ & $<342$ \\ A2744\_1040 & $3.592505$ & $-30.401482$ & $27.52\pm0.15$ & $14.2_{-6.3}^{+11.2}$ & $-16.79\pm0.65$ & 0.04 & $< 9.7$ & $<129$ \\ A2744\_2248$^{\ast\ast}$ & $3.603863$ & $-30.382261$ & $26.57\pm0.07$ & $1.7_{-0.5}^{+1.1}$ & $-20.06\pm0.47$ & 0.96 & $< 10.6$ & $<58$ \\ M0416\_99$^{\ast\ast\ast}$ & $64.039162$ & $-24.093182$ & $26.28\pm0.05$ & $1.5_{-0.3}^{+0.5}$ & $-20.49\pm0.30$ & 0.78 & $< 3.4$ & $<14$ \\ M0416\_286 & $64.037567$ & $-24.088116$ & $28.20\pm0.17$ & $1.9_{-0.5}^{+0.3}$ & $-18.29\pm0.31$ & 0.66 & $< 3.6$ & $<89$ \\ M0416\_743 & $64.048058$ & $-24.081427$ & $26.56\pm0.06$ & $1.7_{-0.2}^{+0.3}$ & $-20.07\pm0.19$ & 0.07 & $< 3.1$ & $<17$ \\ M0416\_1956 & $64.060333$ & $-24.064962$ & $28.16\pm0.16$ & $1.9_{-0.6}^{+0.2}$ & $-18.33\pm0.28$ & 0.91 & $< 3.0$ & $<72$ \\ M0416\_1997 & $64.049583$ & $-24.064596$ & $27.56\pm0.17$ & $6.3_{-1.5}^{+39.3}$ & 
$-17.64\pm1.23$ & 0.90 & $< 2.9$ & $<40$ \\ M0416\_22746 & $64.046509$ & $-24.061630$ & $27.77\pm0.23$ & $8.1_{-3.0}^{+4.3}$ & $-17.15\pm0.53$ & 0.62 & $< 2.9$ & $<49$ \\ M1149\_23695 & $177.382996$ & $22.412041$ & $28.11\pm0.14$ & $3.6_{-2.1}^{+0.7}$ & $-17.69\pm0.58$ & 0.77 & $< 4.2$ & $<97$ \\ M1149\_3343 & $177.392715$ & $22.384718$ & $28.64\pm0.28$ & $1.7_{-0.5}^{+0.4}$ & $-17.96\pm0.42$ & 0.04 & $< 5.3$ & $<201$ \\ M1149\_1428 & $177.412216$ & $22.394894$ & $28.34\pm0.17$ & $7.5_{-2.8}^{+0.9}$ & $-16.67\pm0.36$ & 0.25 & $< 3.1$ & $<87$ \\ M1149\_945 & $177.412079$ & $22.389055$ & $27.92\pm0.13$ & $9.2_{-3.2}^{+14.4}$ & $-16.87\pm0.76$ & 0.16 & $< 3.3$ & $<63$ \\ M2129\_2633 & $322.345232$ & $-7.671373$ & $25.65\pm0.12$ & $1.6_{-0.1}^{+0.1}$ & $-21.06\pm0.13$ & 0.20 & $< 3.4$ & $<8$ \\ M2129\_2661 & $322.350848$ & $-7.675239$ & $26.38\pm0.17$ & $1.7_{-0.0}^{+0.0}$ & $-20.25\pm0.17$ & 0.07 & $< 3.4$ & $<15$ \\ M2129\_1556 & $322.344535$ & $-7.688473$ & $27.53\pm0.26$ & $4.2_{-0.2}^{+0.2}$ & $-18.11\pm0.27$ & 0.01 & $< 3.4$ & $<45$ \\ RXJ1347\_1831 & $206.896270$ & $-11.742338$ & $26.30\pm0.26$ & $9.2_{-0.4}^{+0.4}$ & $-18.49\pm0.26$ & 0.06 & $< 3.3$ & $<14$ \\ RXJ1347\_656 & $206.891246$ & $-11.752607$ & $26.43\pm0.24$ & $20.4_{-1.2}^{+1.6}$ & $-17.49\pm0.25$ & 0.72 & $< 3.7$ & $<18$ \\ RXJ1347\_101 & $206.880973$ & $-11.769816$ & $25.16\pm0.15$ & $43.9_{-5.4}^{+10.2}$ & $-17.92\pm0.26$ & 0.20 & $< 3.6$ & $<5$ \\ RXJ1347\_1368 & $206.893076$ & $-11.760230$ & $27.92\pm0.43$ & $16.6_{-1.1}^{+1.1}$ & $-16.22\pm0.43$ & 0.34 & $< 3.1$ & $<60$ \\ RXJ1347\_1280 & $206.896921$ & $-11.763833$ & $27.28\pm0.28$ & $4.8_{-0.5}^{+0.7}$ & $-18.22\pm0.31$ & 0.03 & $< 2.8$ & $<29$ \\ RXJ2248\_1006 & $342.208379$ & $-44.537520$ & $25.83\pm0.17$ & $1.6_{-0.4}^{+0.4}$ & $-20.88\pm0.32$ & 0.92 & $< 4.6$ & $<13$ \\ RXJ2248\_2086 & $342.179829$ & $-44.525664$ & $26.88\pm0.13$ & $41.0_{-25.5}^{+72.3}$ & $-16.28\pm1.09$ & 0.48 & $< 3.9$ & $<28$ \\ \hline 
\multicolumn{9}{p{0.99\textwidth}}{\textsc{Note.} -- $^\ast$ IDs for A2744, M0416 and M1149 match the ASTRODEEP catalogue IDs \citep{Merlin2016,DiCriscienzo2017}. $^\dagger$ These listed intrinsic magnitudes are calculated using $z=8$ and the listed magnifications and errors. $^\diamond$ This is the EAzY photometric redshift distribution integrated between $z=7.2$ and $z=8.8$, i.e. the total probability of the object having $7.2 \leq z_\textrm{phot} \leq 8.8$. $^\ddagger$ Flux and EW limits are $5\sigma$. $^\star$ All EW are rest-frame. We stress that the EW limits only hold if the Ly$\alpha$\ is actually in the KMOS range, which has probability given by $P(7.2 \leq z_\textrm{phot} \leq 8.8)$. $^{\ast\ast}$ This object was spectroscopically confirmed by \citet{Laporte2017} at $z=8.38$. $^{\ast\ast\ast}$ This object was spectroscopically confirmed by \citet{Tamura2018} at $z=8.31$.} \end{tabular} \end{table*} To make accurate inferences for reionization it is important to have uniform and well-understood target selection functions for the sources we use. At the time of target selection for KLASS not all deep HFF data were available, nor were sophisticated ICL removal techniques developed \citep[e.g.,][]{Merlin2016,Morishita2017a,Livermore2017}. This led to heterogeneous target selections. However, for this analysis we now use the most up-to-date photometry available to create a sub-sample for analysis with a homogeneous selection function. We demonstrate in Appendix~\ref{app:phot} that this sub-sample is not a biased selection from the final parent catalogues. Deep, multi-band \textit{HST}, Spitzer-IRAC and HAWK-I photometry is now available for all our targets through the CLASH, SURFSUP, and HFF programs \citep{Postman2012,Bradac2014,Huang2016,Lotz2017}. For A2744, M0416 and M1149 we used the ASTRODEEP photometric catalogue which removed foreground intra-cluster light \citep{Castellano2016b,Merlin2016,DiCriscienzo2017}. 
For M2129, RXJ1347 and RXJ2248 we created our own catalogues based on the ASTRODEEP methodology (M. Brada\v{c} et al., in prep). Of the 56 high-redshift candidate targets we assigned to KMOS IFUs, 46 have matches in these final deep catalogues (including the 3 images of a $z=6.11$ multiply-imaged system in RXJ2248). To determine why 10 targets had no match in the final photometric catalogues we examined our target selection catalogues. We used preliminary versions of the ASTRODEEP catalogues for A2744, M0416 and M1149 in our initial selection, so all the objects targeted in A2744 and M0416 have matches in the final catalogues. 3 targets do not appear in the final M1149 catalogue; these objects were never in the preliminary ASTRODEEP catalogue but were selected from alternative preliminary HFF catalogues. 3 targets from M2129, 3 targets from RXJ1347 and 1 target from RXJ2248 have no matches in the final catalogues, which was expected as they were selected from an ensemble of preliminary photometric catalogues with shallower photometry and narrower wavelength coverage than our final catalogues. 3 of the unmatched objects were Category 1 targets. These missing targets were likely faint in the initial photometry and so turned out to be spurious in the deeper photometry. Photometric redshift distributions were obtained from the final catalogues with the EAzY code \citep{Brammer2008}. We perform the EAzY fit to the entire photometric dataset, and obtain photometric redshift posteriors without the magnitude prior (which weights bright objects to lower redshifts based on observations of field galaxies and may be inappropriate for our lensed sources). As described in Section~\ref{sec:inference}, our inference framework uses the full photometric redshift distribution, thus we can robustly use all objects with non-zero probability of being in our redshift range of interest for our inferences. 
Taking the 43 high redshift KMOS targets matched in the catalogues (excluding the three images of the $z=6.11$ galaxy described in Appendix~\ref{app:CIV}), we then use the photometric redshift distributions to select objects which could be in the KMOS YJ range ($ 7.2 < z_\textrm{phot} < 10.1$). We calculate $P(7.2 < z_\textrm{phot} < 10.1) = \int_{7.2}^{10.1} p(z_\textrm{phot}) dz_\textrm{phot}$ using the normalised EAzY photometric redshift distribution for each object to find the total probability of the object being within that redshift range. We select 30 objects with $P(7.2 < z_\textrm{phot} < 10.1) > 0.01$ (though the majority have a much higher probability of being in that redshift range). The photometric redshift distributions of these objects within the KMOS YJ range are plotted in Figure~\ref{fig:pz}. We examined the final deep photometry of the 13 objects which dropped out of the KMOS YJ range in this selection, which include 6 Category 1 targets. As expected, the selection of these objects shifts to lower redshifts now that the full photometry is available. The majority of them have detections in the bluest bands which would negate a $z>7$ Lyman Break, and several are clearly $z\sim1$ passive galaxies when the IRAC bands are included. Due to the relatively small sample size, we choose to perform our inference at $z\sim8$, so we select only objects with some probability of having $7.2 < z_\textrm{phot} < 8.8$. We calculate $P(7.2 < z_\textrm{phot} < 8.8) = \int_{7.2}^{8.8} p(z_\textrm{phot}) dz_\textrm{phot}$. We select 29 objects with $>1\%$ probability of being within this redshift range (21 have $>10\%$ probability, and 13 $> 60\%$ probability). One object has $z_\textrm{phot} > 9$ and is excluded from our inference. This is M1149\_JD1, recently spectroscopically confirmed at $z=9.11$ with ALMA by \citet{Hashimoto2018}, who also report a tentative Ly$\alpha$\ detection from X-shooter. 
As this galaxy's photometric redshift distribution clearly puts it at $z>9$, we do not include it in our $z\sim8$ reionization inferences. Its $p(z_\textrm{phot})$ can be seen in Figure~\ref{fig:pz} (red line) and we discuss our observations of it in Section~\ref{sec:dis_JD1}. Our inference uses the full $p(z_\textrm{phot})$ distribution, to robustly account for any probability of an object being a lower redshift contaminant. The median and standard deviation of best-fit photometric redshifts over this range for the sub-sample of 29 objects is $z_\textrm{phot} = 7.9\pm0.6$. These objects and their observed properties, including $P(7.2 < z_\textrm{phot} < 8.8)$, are listed in Table~\ref{tab:infersample}. We demonstrate that this sub-sample is not a biased sample of the final photometric catalogues in Appendix~\ref{app:phot}. We also cross-checked our Table~\ref{tab:infersample} with publicly available spectroscopic catalogues from ground-based follow-up at optical wavelengths for clusters A2744 \citep{Mahler2018}, M0416 \citep{Balestra2016,Caminha2017}, M1149 \citep{Grillo2016}, M2129 \citep{Monna2017} and RXJ2248 \citep{Karman2015,Karman2016}. We found no matches in those catalogues for any of the objects in Table~\ref{tab:infersample}. Non-detections of these objects in optical spectroscopy lend credence to their selection as $z>7$ candidates. Two objects have been spectroscopically confirmed at $7.2 < z < 8.8$ by other groups. A2744\_2248 (a.k.a. A2744\_YD4) was confirmed at $z=8.38$ via [O\textsc{iii}]88$\mu$m emission with ALMA, and a tentative Ly$\alpha$\ emission line was also reported with line flux $(1.82\pm0.64) \times10^{-18}$\,erg s$^{-1}$ cm$^{-2}$\ and $EW=10.7\pm2.7$\,\AA\ \citep{Laporte2017}, which is well below our limit for that object. As discussed in Section~\ref{sec:reionization_infer} we find that treating the object as a detection in our inference has a negligible impact on our inferred limits on the neutral fraction. 
M0416\_99 (a.k.a. M0416\_Y1) was also confirmed via [O\textsc{iii}]88$\mu$m emission in ALMA observations at $z=8.31$ by \citet{Tamura2018}. They also observed the object with X-shooter and found no rest-frame UV emission lines, with a $5\sigma$ Ly$\alpha$\ flux limit of $<8.0\times10^{-18}$\,erg s$^{-1}$ cm$^{-2}$\ (if the line is offset by up to 250\,km s$^{-1}$). Our median KMOS flux limits are of comparable depth ($<3.4\times10^{-18}$\,erg s$^{-1}$ cm$^{-2}$). We obtain magnification estimates for each object using the publicly available HFF lens models\footnote{\url{https://archive.stsci.edu/prepds/frontier/lensmodels/}}. We take the best-fit magnifications from the most recent versions of all available lens models for each object, and drop the highest and lowest magnifications to produce an approximate $1\sigma$ range of estimated magnifications, $\{ \mu \}$. We list the median magnification from this sub-sample, and the upper and lower bounds in Table~\ref{tab:infersample}. For the inference, we assume magnifications are log-normally distributed with mean given by the median $\log_{10}\{ \mu \}$ and standard deviation given by half the range of $\log_{10}\{ \mu \}$, which is a reasonable fit to the distribution of magnifications from the models. For M2129 and RXJ1347, the only non-HFF clusters, we use the magnification distribution from the Brada{\v c} group lens models \citep{Huang2016a,Hoag2019} and obtain mean and standard deviation log magnifications. As discussed in Section~\ref{sec:inference}, by using the EW in our inference, which is independent of magnification (as opposed to flux), our results are quite robust to magnification uncertainties. We calculate flux and Ly$\alpha$\ EW limits for individual objects as in Section~\ref{sec:results_flim}, using Equations~\ref{eqn:fluxlim} and \ref{eqn:fluxlim_fcont}. The median intrinsic UV absolute magnitude (i.e., corrected for magnification) of the sample is $M_\textsc{uv} = -18.2$. 
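The aggregation of lens-model magnifications into a log-normal prior can be sketched as follows; the function name and sample values are hypothetical, but the steps (drop extremes, then take the median and half-range of $\log_{10}\mu$) follow the procedure described above.

```python
import numpy as np

def lognormal_mu_prior(mu_models):
    """Summarise best-fit magnifications from an ensemble of lens models:
    drop the highest and lowest values, then take the median and half-range
    of log10(mu) as the mean and sigma of a log-normal prior."""
    mu = np.sort(np.asarray(mu_models, dtype=float))[1:-1]
    logmu = np.log10(mu)
    return np.median(logmu), 0.5 * (logmu.max() - logmu.min())

# five hypothetical model magnifications: extremes 1 and 16 are dropped
mean, sigma = lognormal_mu_prior([1.0, 2.0, 4.0, 8.0, 16.0])
assert np.isclose(mean, np.log10(4.0))
assert np.isclose(sigma, 0.5 * (np.log10(8.0) - np.log10(2.0)))
```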
The median observed flux $5\sigma$ upper limit in this sub-sample is $< 3.6\times10^{-18}$\,erg s$^{-1}$ cm$^{-2}$, and the median rest-frame Ly$\alpha$\ EW $5\sigma$ upper limit is $< 58$\,\AA. \subsection{Inference on the IGM neutral fraction} \label{sec:reionization_infer} We use 1D spectra and uncertainties as a function of wavelength for the 29 objects described above to infer the IGM neutral fraction at $z\sim8$ using Equation~\ref{eqn:inference_postmarg} to calculate the posterior distribution of ${\overline{x}_\textrm{{\textsc{hi}}}}$. We obtain the flux density spectra using the cubes for each object, extracting flux and noise in a circular aperture with $r=2\sigma_\textsc{psf}$, and apply a rescaling to both to account for the incomplete recovery of flux in the aperture, and a constant rescaling to the noise spectrum to ensure the S/N distribution of pixels in each spectrum is a Normal distribution. In Figure~\ref{fig:xHI} we plot the posterior distribution for ${\overline{x}_\textrm{{\textsc{hi}}}}$ obtained using our observations of the 29 $z\sim8$ KLASS targets, as well as Keck/MOSFIRE observations of 8 $z\sim8$ LBGs from the Brightest of Reionizing Galaxies survey \citep[BoRG,][]{Trenti2011,Bradley2012,Schmidt2014a} described by \citet{Treu2013}. Using the BoRG sample allows us to cover a broader range in intrinsic magnitudes spanning opposite ends of the galaxy UV luminosity function: the IGM attenuation of Ly$\alpha$\ from UV bright and UV faint galaxies is expected to be different due to differing Ly$\alpha$\ escape paths through their interstellar and circumgalactic media \citep[e.g.,][]{Stark2010,Stark2017,Mason2018b}. These two independent sets of observations both indicate a predominantly neutral IGM at $z\sim8$. The BoRG data alone produce a lower limit of ${\overline{x}_\textrm{{\textsc{hi}}}} > 0.34$ (68\%) and for the KLASS data alone ${\overline{x}_\textrm{{\textsc{hi}}}} > 0.76$ (68\%). 
Lower limits from the combined dataset are ${\overline{x}_\textrm{{\textsc{hi}}}} > 0.76$ (68\%) and ${\overline{x}_\textrm{{\textsc{hi}}}} > 0.46$ (95\%). By exploiting gravitational lensing, the KLASS sample sets much deeper limits on the Ly$\alpha$\ EW for intrinsically UV faint galaxies \citepalias[which produce the strongest constraints on reionization's mid-stages,][]{Mason2018a} than is possible in blank fields. Our new KLASS sample also demonstrates how increasing the number of sources in the inference produces much tighter constraints on the IGM neutral fraction compared to the 8 BoRG sources. To test whether the inclusion of objects with candidate Ly$\alpha$\ emission in GLASS data biased our sample, we ran the inference with and without the Category 1 targets (which were specifically targeted in KLASS because they had candidate Ly$\alpha$\ emission in the \textit{HST}\ data). We found no significant difference in the posteriors. We also tested the inference with and without the $z=8.38$ marginal detection of Ly$\alpha$\ in object A2744\_2248 by \citet{Laporte2017}, which has spectroscopic confirmation from O\textsc{iii}\ emission in ALMA observations. We use the EW reported by \citet{Laporte2017}, $W=10.7\pm2.7$\,\AA, which is well below our $5\sigma$ limit for that object ($<53$\,\AA). Despite the potential detection, the posterior distribution for this single object strongly favours a mostly neutral IGM due to its very low EW and low significance. We performed the inference using both our KMOS spectra and the \citet{Laporte2017} measurement for this object and found it to have a negligible impact on our final posterior (changing the inferred limit by only $\Delta {\overline{x}_\textrm{{\textsc{hi}}}} \sim 0.01$), demonstrating that deep limits on non-detections carry substantial statistical weight in our inference. Our quoted posterior limits include the object as a non-detection.
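The constant noise rescaling applied to each spectrum (forcing the pixel S/N distribution toward a unit-variance Normal, as described in Section~\ref{sec:reionization_infer}) can be sketched as follows, assuming the pixels are dominated by noise rather than signal; the underestimation factor of 1.2 is illustrative, chosen to match the typical pipeline underestimate we find.

```python
import numpy as np

def rescale_noise(flux, sigma):
    """Rescale a noise spectrum by one constant so that the S/N
    distribution of (assumed signal-free) pixels has unit standard
    deviation, as expected for pure Gaussian noise."""
    scale = np.std(flux / sigma)   # > 1 if the noise was underestimated
    return sigma * scale, scale

rng = np.random.default_rng(0)
flux = rng.normal(0.0, 2.0, size=1000)   # noise-only spectrum, true sigma = 2
sigma_pipe = np.full(1000, 2.0 / 1.2)    # pipeline noise, underestimated ~1.2x
sigma_fixed, scale = rescale_noise(flux, sigma_pipe)
```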
\begin{figure} \centering \includegraphics[width=0.49\textwidth, clip=true]{figure5.pdf} \caption{Posterior probability distribution for the IGM neutral fraction ${\overline{x}_\textrm{{\textsc{hi}}}}$ at $z\sim8$ obtained using Equation~\ref{eqn:inference_postmarg} and the EW spectra from the KLASS sample described in Section~\ref{sec:reionization_phot} and the BoRG sample described by \citet{Treu2013}. The blue line and shaded regions show the posterior from the combined datasets, and its 68\% and 95\% confidence regions (the darkest region is the 68\% confidence range).} \label{fig:xHI} \end{figure} \section{Discussion} \label{sec:dis} We discuss our new lower limit on the neutral fraction and the implications for the timeline of reionization in Section~\ref{sec:dis_reionization}, and show it favours reionization driven by UV faint galaxies with a low ionizing photon escape fraction. In Section~\ref{sec:dis_JD1} we discuss the recent tentative detection of Ly$\alpha$\ at $z=9.11$ by \citet{Hashimoto2018} and show it is not inconsistent with our results. In Section~\ref{sec:dis_otherUV} we discuss our EW limits on NV and C\textsc{iv}\ emission. Finally, in Section~\ref{sec:dis_KMOS} we present a comparison of the KMOS ETC and our achieved S/N for background-limited observations. \subsection{The timeline of reionization} \label{sec:dis_reionization} \begin{figure} \centering \includegraphics[width=0.49\textwidth, clip=true]{figure6.pdf} \caption{The redshift evolution of the volume average neutral hydrogen fraction of the IGM. Our new lower limit is shown in orange, with the horizontal errorbar at the 68\% confidence level.
We also plot measurements derived from observations of: the evolving Ly$\alpha$\ EW distribution at $z\sim7$ \citepalias[orange filled star,][]{Mason2018a}; previous estimates from the fraction of LBGs emitting Ly$\alpha$\ \citep[open black star,][]{Mesinger2015}; the clustering of Ly$\alpha$\ emitting galaxies \citep[square,][]{Ouchi2010,Sobacchi2015}; Ly$\alpha$\ and Ly$\beta$ forest dark fraction \citep[circle, 68\% limits,][]{McGreer2014}; and QSO damping wings \citep[diamond,][]{Davies2018,Greig2018b}. We offset the constraints at $z\sim7$ by $\Delta z=0.1$ for clarity. We also plot the \citet{PlanckCollaboration2016} redshift range of instantaneous reionization (black pentagon). We show median model reionization histories derived from the \citet{Mason2015} UV luminosity function models as coloured lines. We plot models obtained by integrating the luminosity function down to two magnitude limits -- $M_\textsc{uv} = -17$ (purple dashed line) and $M_\textsc{uv} = -12$ (darkest blue solid line) -- drawing the ionizing photon escape fraction from a uniform distribution over $10-30$\% ($\langle f_\textrm{esc} \rangle = 20\%$), the clumping factor from a uniform distribution $C=1-6$, and the ionizing efficiency $\xi_\textrm{ion}$ from a log-normal distribution with mean $25.2$ and standard deviation $0.15$\,dex. We also compare reionization histories with the ionizing escape fraction drawn from a uniform distribution over $1-10$\% (light green, $\langle f_\textrm{esc} \rangle \approx 5\%$) and $10-20$\% (medium teal, $\langle f_\textrm{esc} \rangle = 15\%$), in both cases integrating the LFs down to $M_\textsc{uv} = -12$ and using the same distributions for the clumping factor and $\xi_\textrm{ion}$ as above.} \label{fig:xHI_history} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\textwidth, clip=true]{figure7.pdf} \caption{The redshift evolution of the `Ly$\alpha$\ fraction' for UV faint galaxies, the fraction of LBGs observed with Ly$\alpha$\ EW $\geq25$\,\AA.
We plot literature measurements from \citet{Stark2011,Pentericci2014,Treu2013,Tilvi2014,Schenker2014,Caruana2014} and \citet{DeBarros2017}. We add small offsets in redshift for measurements at the same redshifts to ease the display of the data. We also plot the predicted Ly$\alpha$\ fraction from \citetalias{Mason2018a}, calculating $p(W > 25\,\textrm{\AA} \,|\, {\overline{x}_\textrm{{\textsc{hi}}}}, M_\textsc{uv})$ using $M_\textsc{uv} = -20$ galaxies and the neutral fraction constraint ${\overline{x}_\textrm{{\textsc{hi}}}} = 0.59_{-0.15}^{+0.11}$ ($16-84\%$ confidence intervals), shown as the orange star. We plot the upper limits recovered in this paper as orange lines, with the solid line showing our 68\% confidence limit, and the dotted line extending to the 95\% confidence limit. We calculate $p(W > 25\,\textrm{\AA} \,|\, {\overline{x}_\textrm{{\textsc{hi}}}} > 0.76, M_\textsc{uv})$, again using $M_\textsc{uv} = -20$. Our constraint is consistent with literature values at the same redshift.} \label{fig:Lya_frac} \end{figure} We plot our new limit on the reionization timeline in Figure~\ref{fig:xHI_history}. We also plot other statistically robust constraints from \citet{Ouchi2010,McGreer2014,Sobacchi2015,Mesinger2015,Davies2018,Greig2018b,Mason2018a} and the \citet{PlanckCollaboration2016}. While, given the 95\% confidence intervals, it remains statistically possible that ${\overline{x}_\textrm{{\textsc{hi}}}}$ has not increased relative to the \citet{Mason2018a} constraint, our new limit, combined with the other recent ${\overline{x}_\textrm{{\textsc{hi}}}}$ statistical measurements at $z\sim7$, and other estimates \citep[e.g.,][]{Caruana2014,Zheng2017a}, provides increasing evidence for the bulk of hydrogen reionization occurring at $z\sim6-8$ \citep{Greig2016,Banados2017,Davies2018,Mason2018a}, late in the \citet{PlanckCollaboration2016} confidence range. Accurate measurements of the reionization timeline can help constrain properties of early galaxies.
In Figure~\ref{fig:xHI_history} we show model reionization histories obtained from integrating the \citet{Mason2015} UV luminosity functions, varying the typical reionization parameters: the minimum UV luminosity of galaxies, and the average ionizing photon escape fraction. We see that late reionization is most consistent with either a high minimum UV luminosity of galaxies ($M_\textsc{uv} < -17$) and moderate escape fraction ($\langle f_\textrm{esc} \rangle = 20\%$), or with including ultra-faint galaxies ($M_\textsc{uv} < -12$) with low escape fractions ($\langle f_\textrm{esc} \rangle \simlt 15\%$). There are many degeneracies between these reionization parameters, and certainly the escape fraction is unlikely to be constant for all galaxies at all times \citep{Trebitsch2017}, but non-detections of high-redshift GRB host galaxies, observations of lensed high-redshift galaxies, and local dwarfs all indicate that galaxies fainter than $M_\textsc{uv} = -17$ likely exist at $z\sim8$ \citep[e.g.,][]{Kistler2009,Tanvir2012,Trenti2012a,Alavi2014,Weisz2017,Livermore2017,Bouwens2017,Ishigaki2018}. If ultra-faint galaxies do contribute significantly to reionization, our result suggests reionization can be completed with low escape fractions, consistent with low redshift estimates of the average escape fraction \citep{Marchi2017,Rutkowski2017,Naidu2018,Steidel2018}. For comparison with previous high redshift Ly$\alpha$\ spectroscopic surveys, we plot the so-called `Ly$\alpha$\ fraction', the fraction of LBGs emitting Ly$\alpha$\ with EW $\geq25$\,\AA, in Figure~\ref{fig:Lya_frac}. We compare our new upper limits on the Ly$\alpha$\ fraction with literature measurements from \citet{Stark2011,Pentericci2011,Treu2013,Tilvi2014,Schenker2014,Caruana2014} and \citet{DeBarros2017}. We also plot the predicted Ly$\alpha$\ fraction from \citetalias{Mason2018a}.
Using the \citetalias{Mason2018a} model EW distributions $p(W \,|\, {\overline{x}_\textrm{{\textsc{hi}}}}, M_\textsc{uv})$ we can calculate the Ly$\alpha$\ fraction as the probability of EW $\geq 25$\,\AA\ given our constraint on the neutral fraction. As noted by \citetalias{Mason2018a} and \citet{Mason2018b}, the Ly$\alpha$\ EW distribution is likely a function of at least UV magnitude as well as the neutral fraction \citep[see][for a thorough analysis of Ly$\alpha$\ EW dependencies on galaxy properties]{Oyarzun2017}, so it can be difficult to compare Ly$\alpha$\ fractions from samples with different $M_\textsc{uv}$. Hence, when converting from the neutral fraction measurement in this work and \citetalias{Mason2018a}, we use the model Ly$\alpha$\ EW distribution for $M_\textsc{uv} = -20$ galaxies to compare more easily with the literature values, for which that is the typical median UV magnitude. For $M_\textsc{uv} = -20$ our Ly$\alpha$\ fraction limits are $f_{\textrm{Ly}\alpha} < 0.11$ (68\%), $<0.27$ (95\%). Using our sample median magnitude, $M_\textsc{uv} = -18.2$, the limits are not significantly different: $f_{\textrm{Ly}\alpha} < 0.08$ (68\%), $<0.24$ (95\%). Our measurements are consistent with the literature values. We note that our inference assumes no evolution in the emitted Ly$\alpha$\ EW distribution at fixed UV magnitude from $z\sim6-8$, i.e. the only evolution in the \textit{observed} EW distribution is due to reionization. Whilst there may be evolution in the amount of Ly$\alpha$\ escaping the ISM of galaxies with increasing redshift, any such evolution likely acts to increase the emitted EW: dust masses and HI covering fractions may decrease at higher redshifts, facilitating Ly$\alpha$\ escape at fixed galaxy mass \citep{Hayes2011,Oyarzun2016}. In this case we expect our model to underestimate the observed EW distribution, which would suggest an even higher neutral hydrogen fraction given our non-detections.
Our model also assumes no significant evolution in the dust spatial distribution and/or CGM opacity between $z\sim6-8$, both of which could reduce the Ly$\alpha$\ EW before the photons reach the IGM. If these effects do significantly decrease Ly$\alpha$\ EW between $z\sim6-8$, this could lower our constraint on the neutral fraction. In modelling the emitted Ly$\alpha$\ EW distribution we assume a Gaussian plus Dirac Delta function parameterisation, which has been shown to describe the Ly$\alpha$\ EW distribution well \citep{Oyarzun2017}. However, choosing another functional form for the distribution would not significantly change the results \citep{Treu2012,Schenker2014}. More accurate models of Ly$\alpha$\ emerging from the $z>6$ ISM are required to improve our inferences. Whilst it is increasingly difficult to directly observe all of the emitted Ly$\alpha$\ from $z>6$ galaxies, because of the intervening neutral gas, other emission lines could be used as diagnostics of emerging Ly$\alpha$. For example, \citet{Henry2018} showed that Mg\textsc{ii} emission line profiles and escape fractions closely trace those of Ly$\alpha$\ in Green Peas, low-redshift analogues of high redshift galaxies \citep{Jaskot2014,Yang2016}. As the IGM optical depth to Mg\textsc{ii} is much lower than for Ly$\alpha$, observations of Mg\textsc{ii} at $z>6$ (which will be possible with JWST) could be used to infer the nature of Ly$\alpha$\ emission at these redshifts. Additionally, better knowledge of Ly$\alpha$\ line profiles at $z\simgt5$ is necessary to provide more informative priors on the observed FWHM for our inferences. In particular, high resolution spectroscopy ($R>4000$) is needed to resolve the narrow lines expected for UV faint galaxies \citep{Verhamme2015}, and could provide additional constraints via the evolving Ly$\alpha$\ profile \citep{Pentericci2018} and the prevalence of double-peaked Ly$\alpha$\ in the late stages of reionization \citep{Matthee2018}.
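The Gaussian plus Dirac Delta EW parameterisation mentioned above can be sketched as a simple mixture: a fraction $A$ of galaxies emit Ly$\alpha$\ with $W>0$ drawn from a truncated Gaussian, and the remainder have $W=0$. The parameter values below are illustrative only, not the \citetalias{Mason2018a} fits.

```python
import numpy as np

def sample_ew(n, A=0.6, scale=30.0, rng=None):
    """Draw rest-frame Ly-alpha EWs (Angstrom) from a mixture model:
    with probability A, an emitter with W drawn from a zero-mean
    Gaussian of width `scale` truncated to W > 0; otherwise a
    non-emitter with W = 0 (the Dirac delta component).
    A and scale are illustrative values, not fitted parameters."""
    rng = rng or np.random.default_rng()
    emitter = rng.random(n) < A
    W = np.zeros(n)
    W[emitter] = np.abs(rng.normal(0.0, scale, size=emitter.sum()))
    return W

W = sample_ew(100_000, rng=np.random.default_rng(1))
frac_emitting = np.mean(W >= 25.0)   # analogue of the 'Ly-alpha fraction'
```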
We also assume the fraction of low redshift contaminants in our photometric sample is the same as in our reference $z\sim6$ sample from \citet{DeBarros2017}. Whilst the selection techniques for the two samples are different (ours is based on photometric redshifts, \citet{DeBarros2017} uses a colour selection), our targets have extensive multi-wavelength photometry which helps rule out low redshift contaminants \citep[e.g.,][]{Vulcani2017,Livermore2018}. Additionally, we use the full photometric redshift distribution from EAzY in our inference, which weights the most convincing high redshift candidates most strongly and robustly accounts for contamination. With the final GLASS Ly$\alpha$\ candidate sample it will be possible to use the same selections for both the $z\sim6$ reference EW distribution and the $z>6$ samples for reionization inferences (K. B. Schmidt et al., in prep). As our inference weights sources by their photometric redshift distribution, the tightest constraints on ${\overline{x}_\textrm{{\textsc{hi}}}}$ will be obtained from samples with robust redshift estimates or, ideally, spectroscopic redshifts obtained from other emission lines, and deep Ly$\alpha$\ EW limits. We note that the objects which contribute the most to our posterior are those with the highest probability ($>60\%$) of having a photometric redshift at $z\sim8$ given their SEDs, and we expect these to have consistent high redshift solutions even if the photometric redshift fitting priors are changed. The prospects for large spectroscopic samples at these redshifts are improving: ALMA is enabling spectroscopic confirmation of $z\simgt7$ galaxies in the sub-mm \citep[e.g.,][]{Bradac2017,Laporte2017,Smit2018,Hashimoto2018,Tamura2018}, and other UV emission lines have also been confirmed \citep{Stark2015,Schmidt2016,Stark2017,Mainali2017,Mainali2018}.
Future observations with JWST slitless and slit spectroscopy will be able to build large and deep spectroscopic samples of $z\simgt7$ galaxies, ideal for this type of analysis. Understanding the differing evolution of Ly$\alpha$\ emission as a function of galaxy properties and environment will be key to understanding how reionization progresses. Here we have shown that a sample of intrinsically UV faint systems at $z\sim8$ (more likely to live in low density environments) show no significant Ly$\alpha$\ emission, and favour a mostly neutral IGM. However, Ly$\alpha$\ has been observed in a handful of UV bright galaxies at $z\simgt7.5$ \citep{Zitrin2015a,Oesch2015,Roberts-Borsani2016,Stark2017}. \citet{Mason2018b} showed that the observed Ly$\alpha$\ fraction for UV bright galaxies at $z\sim8$ could not be reproduced with standard reionization models \citep[using the EoS simulations,][]{Mesinger2016a}, even when placing them in overdense regions (which reionize early) and giving them high Ly$\alpha$\ velocity offsets to facilitate Ly$\alpha$\ IGM transmission. \citet{Mason2018b} proposed those objects have detectable Ly$\alpha$\ because they have unusually high emitted Ly$\alpha$\ EW \citep[they were certainly selected to have high nebular line EW,][]{Roberts-Borsani2016}. Fluctuations in the UV background during reionization, for example, due to the inhomogeneous distribution of ionizing sources, could also contribute to the differing evolution of Ly$\alpha$\ emission from UV bright and UV faint galaxies by boosting the IGM opacity (transparency) in underdense (overdense) regions \citep{Davies2016,Becker2018}. One important missing piece in our inference is the halo environment of the LBGs. This work assumes a simple mapping between UV luminosity and halo mass. 
This works well in an average sense \citep{Mason2015}, but deep imaging with JWST could measure the clustering strength and scatter of galaxies in the reionization epoch \citep{Ren2018}, and be used to inform more realistic IGM simulations. \subsection{M1149\_JD1 -- Ly$\alpha$\ emission at $z=9.11$?} \label{sec:dis_JD1} One target in our observations \citep[known as M1149\_JD1,][]{Zheng2012,Hoag2018} was recently spectroscopically confirmed at $z=9.11$ via [O\textsc{iii}]$88\,\mu$m emission with ALMA observations \citep{Hashimoto2018}. Our EAzY photometric redshift distribution for this galaxy placed it outside of our inference redshift range (all of the $p(z)$ is at $z>9$, see Figure~\ref{fig:pz}), so it was not used in our reionization inference. However, \citet{Hashimoto2018} also report a tentative 4$\sigma$ detection of Ly$\alpha$\ emission from this galaxy in X-shooter observations at 12271.5\,\AA\ with total line flux $(4.3 \pm 1.1) \times 10^{-18}$\,erg s$^{-1}$ cm$^{-2}$. \citet{Hoag2018} also targeted this galaxy with low resolution HST grism spectroscopy, including GLASS data, which covered the Ly$\alpha$\ wavelength at $z=9.11$. While they did not claim a detection, their spectra show a $\sim2.5\sigma$ feature at approximately the same wavelength and flux as \citet{Hashimoto2018}. We examined our KMOS cube and found no evidence of a feature at this wavelength. Our median $1\sigma$ flux limit for $z>9$ Ly$\alpha$\ in the cube is $1.1\times10^{-18}$\,erg s$^{-1}$ cm$^{-2}$, and $0.8\times10^{-18}$\,erg s$^{-1}$ cm$^{-2}$\ at 12271.5\,\AA. As noted by \citet{Hashimoto2018}, if their candidate line is Ly$\alpha$, it is blueshifted by $\sim450$ km/s with respect to the [O\textsc{iii}] emission. \citet{Hashimoto2018} suggest that Ly$\alpha$\ photons scattered off inflowing gas, causing them to emerge blueshifted from the galaxy's systemic velocity.
Whilst blueshifts due to inflows are expected and observed for Ly$\alpha$\ \citep[e.g.,][]{Verhamme2006,Dijkstra2006b,Trainor2015}, at $z>6$ the IGM is opaque to emission $<1216\,$\AA, thus no Ly$\alpha$\ emitted bluer than its source galaxy's systemic redshift should be transmitted through the IGM \citep{Dayal2011,Dijkstra2011}. Observing blueshifted Ly$\alpha$\ requires the galaxy to sit in a large ionized bubble \citep[$\simgt500$ km/s or $\simgt400$\,kpc in radius,][]{Haiman2002}. Alternatively, the Ly$\alpha$\ emission could arise in a different component or merging companion of the [O\textsc{iii}] emitting galaxy, similar to a $z=7.1$ galaxy observed by \citet{Carniani2017a}. The tentative emission we observe in our KMOS cube does appear spatially offset from the predicted position of the UV continuum and [O\textsc{iii}] by $\sim0\farcs4$, which could provide evidence for the multi-component/merger scenario. This may also account for our slightly lower flux measurement, as the emission extends to the edge of the IFU. However, the weakness of the detection and some general astrometric uncertainty in KMOS make a thorough analysis difficult. Deeper near-IR IFU observations of this galaxy would be extremely interesting to confirm and determine the nature of the Ly$\alpha$\ emission, and will be possible in the future with JWST NIRSpec. We calculate the probability of observing Ly$\alpha$\ emission from such an object in a mostly neutral IGM using the framework of \citetalias{Mason2018a}, which modelled $p(W \,|\, {\overline{x}_\textrm{{\textsc{hi}}}}, M_\textsc{uv})$. Using $m_\textrm{F160W} = 25.7$ \citep{Zheng2012} we obtain $M_\textsc{uv} = -19.2 - 2.5\log_{10}(10/\mu)$, $EW = 4\pm2$\,\AA\ for our measured flux and $EW = 11\pm3$\,\AA\ from the measurement by \citet{Hashimoto2018}.
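The EW estimates above combine the line flux with the continuum level implied by $m_\textrm{F160W}$. A sketch of the conversion, assuming a flat $f_\nu$ continuum extrapolated to the observed Ly$\alpha$\ wavelength (note that magnification cancels between line and continuum, so the EW is magnification-independent):

```python
C_ANG = 2.998e18  # speed of light in Angstrom / s

def rest_frame_ew(line_flux, m_ab, lam_obs, z):
    """Rest-frame EW (Angstrom) of a line with total flux `line_flux`
    (erg/s/cm^2), given the continuum AB magnitude `m_ab`, assuming a
    flat f_nu continuum evaluated at the observed line wavelength
    `lam_obs` (Angstrom)."""
    f_nu = 10.0 ** (-0.4 * (m_ab + 48.6))   # erg/s/cm^2/Hz (AB zero point)
    f_lam = f_nu * C_ANG / lam_obs**2       # erg/s/cm^2/Angstrom
    return line_flux / f_lam / (1.0 + z)

# Hashimoto et al. (2018) line flux with the F160W magnitude of M1149_JD1:
ew = rest_frame_ew(4.3e-18, 25.7, 12271.5, 9.11)   # ~11 Angstrom rest-frame
```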
Using these measurements we calculate $p(W = 4\pm2 \,\textrm{\AA} \,|\, {\overline{x}_\textrm{{\textsc{hi}}}} > 0.76, M_\textsc{uv} = -19.2) < 0.05$, while $p(W = 11\pm3 \,\textrm{\AA} \,|\, {\overline{x}_\textrm{{\textsc{hi}}}} > 0.76, M_\textsc{uv} = -19.2) < 0.03$. In fact, the total probability of observing Ly$\alpha$\ from this galaxy with EW $>4\pm2$\,\AA\ if ${\overline{x}_\textrm{{\textsc{hi}}}} > 0.76$ is $\simlt0.5$: low Ly$\alpha$\ EWs are expected and consistent with a mostly neutral IGM. We note that our calculations assume the Ly$\alpha$\ is emitted close to systemic velocity (i.e., assuming that the Ly$\alpha$\ comes from another component). If the galaxy does sit in an ionized bubble, the probability of seeing emission would be higher. Even assuming the emission emerges at systemic velocity, however, the probability of detecting it is not negligible, and thus this detection is still consistent with a mostly neutral IGM at $z>8$. \subsection{Other UV emission lines at $z\sim8$} \label{sec:dis_otherUV} With Ly$\alpha$\ increasingly suppressed at $z>6$, rest-frame UV emission lines can be used to spectroscopically confirm high-redshift LBGs. These lines can also be used as diagnostics for the stellar populations and physical conditions present in these high-redshift galaxies. Our KMOS observations cover the wavelength range where NV$\lambda1238,1242$ and C\textsc{iv}$\lambda1548,1551$ can be observed, and we briefly discuss our upper limits on the EW of these lines. NV$\lambda1238,1242$ can arise due to stellar winds, particularly from very young stars \citep{Shapley2003,Jones2012}, or from H\textsc{ii} regions if powered by an AGN or radiative shocks. All three $z>7$ galaxies detected to date with tentative NV emission ($S/N \sim 4$) are UV bright galaxies, where AGN activity could plausibly be powering NV emission \citep{Tilvi2016,Laporte2017a,Mainali2018}.
In our KLASS $7.2 < z_\mathrm{phot} < 8.8$ sub-sample (Section~\ref{sec:reionization_phot}), the median NV EW upper limit is $<60$\,\AA. As our sample comprises intrinsically faint galaxies, which are less likely to host strong AGN activity, it is not surprising that we do not detect strong NV emission. Nebular C\textsc{iv}$\lambda1548,1551$ emission has been observed in two galaxies at $z>6$ \citep{Stark2015,Schmidt2017a,Mainali2017}. We observed C\textsc{iv}\ in the $z=6.11$ galaxy with KMOS and describe our observations in more detail in Appendix~\ref{app:CIV}. The C\textsc{iv}\ emission can be powered by either AGN activity or extremely metal-poor stars. Limits on other UV lines in these objects suggest that low-metallicity stars, rather than AGN, are the more likely source of the hard photons needed to produce C\textsc{iv}\ emission. The two galaxies are also both UV faint ($M_\textsc{uv} \simgt -20.2$) and \citet{Mainali2018} has suggested that there is an anti-correlation between UV luminosity and C\textsc{iv}\ EW, which could arise if the lowest luminosity (mass) systems are more metal-poor. Our KLASS observations provide a large additional sample of UV faint galaxies which can place new limits on C\textsc{iv}\ emission. In our KLASS $7.2 < z_\mathrm{phot} < 8.8$ sub-sample (Section~\ref{sec:reionization_phot}), the median C\textsc{iv}\ EW upper limit is $<74$\,\AA. In the three most UV faint systems with $P(7.2 < z_\mathrm{phot} < 8.8) > 0.6$, M0416\_22746, RXJ1347\_656, and M0416\_1997 (all with $M_\textsc{uv} \sim -17.5$), the C\textsc{iv}\ upper limits are $<62$\,\AA, $<22$\,\AA, and $<51$\,\AA\ respectively. These upper limits are comparable to, and in one case below, the C\textsc{iv}\ detection presented by \citet{Stark2015} in a $M_\textsc{uv} \sim -19$ galaxy (with $EW_\textsc{civ} \approx 38$\,\AA), and so suggest that the proposed anti-correlation between UV luminosity and C\textsc{iv}\ EW may not be so simple.
\subsection{Background limited observations with KMOS} \label{sec:dis_KMOS} \begin{figure*} \centering \includegraphics[width=0.99\textwidth, clip=true]{figure8.pdf} \caption{Comparison of our deepest exposure, 11 hours in RXJ1347, with 450 second DITs, with the ESO KMOS ETC using the same exposure times. We compare the $5\sigma$ flux limits from our data and the ETC as a function of wavelength, assuming emission lines are spatially and spectrally unresolved. We divide the ETC estimated noise by $\sqrt{2}$ to account for the noise introduced by `A-B' sky subtraction. The pale blue solid line shows the ratio of the ETC estimated S/N to our achieved S/N. The blue dashed (dotted) horizontal lines show the median ($16-84\%$ range) of the S/N ratio over the whole YJ range. The orange line shows the KMOS throughput for comparison.} \label{fig:ETC} \end{figure*} Optical and near-IR IFU observations have provided revolutionary 3D information about the structure and kinematics of galaxies out to $z\sim2$ \citep{ForsterSchreiber2009,Epinat2009,Wisnioski2015,Stott2016,Genzel2017} and revealed diffuse Ly$\alpha$\ halos around $z\simlt6$ galaxies \citep{Bacon2014,Karman2016,Wisotzki2016,Leclercq2017}. In KLASS we have provided the first deep NIR IFU observations of $z\simgt7$ galaxy candidates. Whilst we did not make any $5\sigma$ detections of Ly$\alpha$, it is important to understand how this depended not only on the selection of our targets and the opacity of the IGM to Ly$\alpha$\ at $z\sim8$, but also on the sensitivity of KMOS. In our long integrations we have pushed KMOS to the limits of its sensitivity to search for faint emission lines in near-IR IFU cubes, in wavelength regions dominated by OH sky emission lines. Using our deep observations we provide an assessment of the performance of KMOS for background-limited observations.
As described in Section~\ref{sec:obs_reduction} we performed additional sky subtraction routines after running the ESO pipeline to reduce residuals around bright OH lines. We also found the pipeline underestimated the noise in cubes by a factor $\sim1.2$ and performed additional rescaling of the noise as a function of wavelength using the RMS noise obtained from the flux cubes. One key question is how well the instrument performs relative to predictions based on its instrumental capabilities. We can compare the S/N estimated by the KMOS ETC\footnote{\url{https://www.eso.org/observing/etc/bin/gen/form?INS.NAME=KMOS+INS.MODE=lwspectr}} to our achieved S/N to assess its performance. We take the $5\sigma$ flux density limits as a function of wavelength for our deepest exposure, 11 hours in RXJ1347 (shown in Figure~\ref{fig:fluxlim}), and calculate the S/N as estimated by the ETC. We use our flux calibration based on observations of standard stars to convert flux to e$^-$/s and rescale it by the wavelength-dependent sky transmission and KMOS throughput curve (both obtained through the KMOS ETC webpage). We use the following ETC settings, which are comparable to those of our observations: line FWHM$_\textrm{spec}=4$\,\AA\ (unresolved); point source; seeing $0\farcs6$; airmass 1.50; Moon illumination FLI 0.50; Moon-target separation 45 degrees; PWV $<2.5$\,mm. We calculate the S/N in an aperture with radius equal to the seeing FWHM $\sim 0\farcs6$. At every wavelength, the estimated S/N is: \begin{equation} \label{eqn:ston} \frac{S}{N} = \frac{\sqrt{NDIT} \times S_\textrm{source}}{\sqrt{S_\textrm{source} + S_\textrm{bkg} + n_\textrm{spat}(\textrm{DC}\times \textrm{DIT} + \textrm{RON}^2)}} \end{equation} where for RXJ1347 NDIT $=88$ is the number of DITs, of length DIT $=450$ seconds. The KMOS dark current (DC) is $0.01$ e$^{-}$/pixel/s and the read-out noise is $3.5$ e$^{-}$/pixel/DIT.
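Equation~\ref{eqn:ston} with the RXJ1347 parameters (NDIT $=88$, DIT $=450$\,s, DC $=0.01$\,e$^{-}$/pixel/s, RON $=3.5$\,e$^{-}$/pixel/DIT, and an aperture of 25 spatial pixels) can be evaluated as in this sketch; the source and background counts per DIT are placeholder values, not measurements.

```python
import math

def etc_snr(s_source, s_bkg, ndit=88, dit=450.0, n_spat=25,
            dark=0.01, ron=3.5):
    """S/N per the ETC formula: s_source and s_bkg are source and
    background counts in electrons per DIT, summed over the aperture
    of n_spat spatial pixels, evaluated at the peak wavelength pixel."""
    noise_var = s_source + s_bkg + n_spat * (dark * dit + ron**2)
    return math.sqrt(ndit) * s_source / math.sqrt(noise_var)

# Placeholder counts; divide by sqrt(2) to mimic 'A-B' sky subtraction noise:
snr = etc_snr(10.0, 500.0) / math.sqrt(2)
```

Dividing the ETC estimate by $\sqrt{2}$, as in the last line, reproduces the correction for `A-B' sky subtraction applied in Figure~\ref{fig:ETC}.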
The aperture corresponds to $n_\textrm{spat} = 25$ spatial pixels and the calculation is done at the peak wavelength pixel. We use the online ETC to generate the background flux $S_\textrm{bkg}$ in e$^{-}$/DIT as a function of wavelength, convolved with the instrumental resolution, given our input settings described above. We then calculate the estimated S/N using Equation~\ref{eqn:ston} at every wavelength, using our 5$\sigma$ flux density limits as the source flux. In Figure~\ref{fig:ETC} we show a comparison of the ETC estimated S/N as a function of wavelength for the line fluxes corresponding to our $5\sigma$ limits. We plot the ETC estimated S/N divided by 5 to show how the achieved S/N compares to the ETC prediction. The public ETC does not account for noise due to sky subtraction routines. Assuming all DITs have equal noise $\sigma$, for `A-B' frames the noise should be $\sqrt{2}\sigma$. Thus in Figure~\ref{fig:ETC} we also divide the ETC estimate by a factor $\sqrt{2}$ for a fairer comparison with our data. We find that the ETC S/N is a median $\sim1.4\times$ higher than our achieved values, and this overestimate is highest for wavelengths $\simlt 11500$\,\AA, where the ETC estimate can be $\sim1.6-1.8\times$ higher. As shown in Figure~\ref{fig:ETC}, the KMOS YJ throughput is known to decrease at $\simlt 11500$\,\AA, but our results suggest that the YJ grating is less sensitive in the blue for background-limited observations than expected. Unfortunately this corresponds to Ly$\alpha$\ redshifts $z\simlt8.5$, where we expect to find the majority of our targets. Using the S/N estimated from the ETC in planning our observations likely led us to overestimate the line sensitivity of KMOS for our targets. Most of the GLASS Ly$\alpha$\ candidates we assigned to KMOS IFUs had tentative detections in the \textit{HST}\ grisms. Thus a key aim of the deeper KMOS observations was to confirm these emission lines.
While our deepest $1\sigma$ flux limit in our KMOS sample is $0.8\times10^{-18}$\,erg s$^{-1}$ cm$^{-2}$, deeper than the $1\sigma$ flux limit in GLASS ($5\times10^{-18}$\,erg s$^{-1}$ cm$^{-2}$), we did not detect any emission from the tentative GLASS Ly$\alpha$\ candidates with KMOS, suggesting that some of the HST grism lines were spurious noise fluctuations. A thorough comparison of the GLASS and KLASS observations, in combination with other follow-up at Keck, to determine the HST grism purity and completeness will be discussed in a future paper. We advise any future KMOS users planning observations of faint targets to take into consideration both the additional noise from sky subtraction when using the KMOS ETC, and the lower than expected performance at the blue end of YJ. However, we find better agreement with the ETC estimates at redder wavelengths, demonstrating that KMOS YJ is performing well at $\simgt 11500$\,\AA. \section{Summary and Conclusions} \label{sec:conc} We have presented an analysis of reionization epoch targets from KLASS, a large ground-based ESO VLT/KMOS program following up sources studied in the HST grism survey GLASS. Our main conclusions are as follows: \begin{enumerate} \item The median $5\sigma$ flux limit of our survey is $4.5 \times 10^{-18}$\,erg s$^{-1}$ cm$^{-2}$. We determine our spectroscopic survey to be 80\% complete over the full wavelength range for $7.2 \simlt z \simlt 10.1$ spatially unresolved Ly$\alpha$\ emission lines with flux $\simgt 5.7\times10^{-18}$\,erg s$^{-1}$ cm$^{-2}$, centred within $0\farcs8$ of the IFU centre and with intrinsic line FWHM$_\textrm{spec}\simlt 250$\,km s$^{-1}$. Our observations are more complete to Ly$\alpha$\ emission that may be spatially offset and/or extended compared to the UV continuum than typical slit-spectroscopy surveys. \item Of the 52 $z\simgt7$ candidate targets observed, none have confirmed Ly$\alpha$\ emission, including those with candidate lines detected in the HST grisms. 
No other UV emission lines are detected at $z>7$. We detect C\textsc{iv}\ emission in one image of a previously known C\textsc{iv}\ emitter at $z=6.11$. \item We define a sub-sample of 29 targets with a homogeneous photometric selection of $7.2 < z_\textrm{phot} < 8.8$ for a Bayesian inference of the IGM neutral hydrogen fraction. The median Ly$\alpha$\ flux limit for our sample is $3.6\times10^{-18}$\,erg s$^{-1}$ cm$^{-2}$\ and the median Ly$\alpha$\ EW upper limit is $58$\,\AA. Combining our sub-sample with 8 previously observed $z\sim8$ LBGs from the BoRG survey \citep{Trenti2011,Treu2013,Schmidt2014a} we obtain a lower limit on the IGM neutral hydrogen fraction at $z=7.9\pm0.6$, ${\overline{x}_\textrm{{\textsc{hi}}}} > 0.76$ (68\%) and ${\overline{x}_\textrm{{\textsc{hi}}}} > 0.46$ (95\%). \item Our constraint favours a late reionization consistent with models where ultra-faint galaxies contribute significantly to reionization, with an ionizing photon escape fraction $\langle f_\textrm{esc} \rangle \simlt 15\%$. \end{enumerate} Our KMOS observations provide more evidence of a predominantly neutral IGM at $z\sim8$. To make more precise constraints on the timeline of reionization will require larger samples of LBGs with precise photometric (or even better, spectroscopic) redshift estimates, more informative priors on Ly$\alpha$\ FWHM, and deep spectroscopic limits on Ly$\alpha$. Forthcoming deep spectroscopic observations with JWST \citep[e.g.,][]{Treu2017} will provide ideal samples for future inferences on reionization. \section*{Acknowledgements} The authors thank the referee for their constructive comments. We thank Trevor Mendel and Owen Turner for useful discussions related to KMOS reductions for faint sources, and T. Mendel for sharing the readout correction code. We thank Andrei Mesinger for providing Ly$\alpha$\ optical depths from the EoS simulations. C.M. 
acknowledges support by NASA Headquarters through the NASA Earth and Space Science Fellowship Program Grant NNX16AO85H, and through the NASA Hubble Fellowship grant HST-HF2-51413.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. This work was supported by the HST GLASS grant GO-13459, the HST BoRG grants GO-12572, 12905, 13767 and 15212, and HST-AR-13235 and HST-AR-14280. This work was based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO program 196.A-0778; and on observations made with the NASA/ESA Hubble Space Telescope, obtained at STScI. We are very grateful to all ESO and STScI staff who have assisted in these observations. This work utilises gravitational lensing models produced by PIs Brada{\v c}, Natarajan \& Kneib (CATS), Merten \& Zitrin, Sharon, Williams, Keeton, Bernstein and Diego, and the GLAFIC group. This lens modelling was partially funded by the HST Frontier Fields program conducted by STScI. The lens models were obtained from the Mikulski Archive for Space Telescopes (MAST). \textit{Software}: IPython \citep{Perez2007}, matplotlib \citep{Hunter2007}, NumPy \citep{VanderWalt2011}, SciPy \citep{Oliphant2007}, Astropy \citep{Robitaille2013}, QFitsView (\url{http://www.mpe.mpg.de/~ott/QFitsView/}).
\section{Introduction} \IEEEPARstart{T}{he} sum of random variables (RVs) finds numerous important applications in wireless communication systems, such as in diversity combining. In particular, maximal-ratio combining (MRC) is considered one of the most efficient diversity techniques that takes advantage of fading to improve system performance \cite{sim}. In this context, a statistical characterization of independent but not identically distributed (i.n.i.d.) $\kappa$-$\mu$ shadowed RVs was addressed in \cite{Paris}. The authors in \cite{OSBadarneh} investigated the performance of $L$-branch MRC receivers under generalized $\eta$-$\mu$ fading with imperfect channel estimation. In \cite{Bithas2007}, the authors analyzed the performance of MRC, equal gain combining (EGC), selection combining (SC) and switch and stay combining (SSC) diversity receivers operating over composite $K_G$ fading channels. A novel analytical expression for the probability density function (PDF) of the sum of two correlated, dissimilar Nakagami-$0.5$ RVs was derived in \cite{Beaulieu2017}, while the author in \cite{DLee} derived the exact and approximate PDF and cumulative distribution function (CDF) of dual selection with MRC under nonidentical imperfect channel estimation. It is recalled that composite fading models outperform conventional fading models due to their ability to characterize the simultaneous occurrence of multipath fading and shadowing. Based on this, the authors in \cite{Fisher} proposed the Fisher-Snedecor $\mathcal{F}$ composite fading model, which was shown to provide accurate modeling of channel measurements obtained in the context of wearable communications. Based on the empirical data presented therein, it was also shown that this model provides a better fit than the commonly used $K_G$ fading model, with the added benefit of a more tractable algebraic representation. Motivated by this, in the present work, the sum of i.n.i.d.
Fisher-Snedecor $\mathcal{F}$ variates is investigated and subsequently employed in the analysis of diversity receivers. To this end, novel analytical expressions for the PDF, CDF, and moment generating function (MGF) are derived in closed form. These expressions are then used to evaluate the performance of an MRC receiver in terms of outage probability (OP), outage capacity (OC), and average bit error rate (BER) of coherent binary modulation schemes. In addition, a corresponding asymptotic analysis is carried out from which the system diversity gain is determined along with additional insights into the overall system performance. \section{MGF of Fisher-Snedecor $\mathcal{F}$ Variates}\label{sec:1} The PDF of the instantaneous signal-to-noise ratio (SNR), $\gamma_{\ell}$, at the $\ell$-th branch of an MRC receiver operating under a Fisher-Snedecor $\mathcal{F}$ composite fading channel can be expressed as \cite{Fisher} \begin{align}\label{pdfr} f_{\gamma_\ell}(\gamma)={{m_{\ell}^{m_{\ell}}}(m_{s_{\ell}}{\bar{\gamma}_{\ell}})^{m_{s_{\ell}}}\over B(m_{\ell},m_{s_{\ell}})}{\gamma^{m_{\ell}-1}\over\left(m_{\ell} \gamma+m_{s_{\ell}}{\bar{\gamma}_{\ell}}\right)^{m_{\ell}+m_{s_{\ell}}}}, \end{align} where $m_{\ell}$ and $m_{s_{\ell}}$ denote the fading severity and shadowing parameters, respectively, ${\bar{\gamma}_{\ell}}=\mathbb{E}[\gamma_{\ell}]$ is the mean SNR with $\mathbb{E}[\cdot]$ denoting expectation, and $B(\cdot,\cdot)$ is the beta function \cite[Eq. (8.384.1)]{i:ryz}. The flexibility of the Fisher-Snedecor $\mathcal{F}$ fading model is evident from the fact that it includes as special cases the Nakagami-$m$ distribution ($m_{s_{\ell}}\rightarrow\infty$, $m_{\ell}=m)$, the Rayleigh distribution ($m_{s_{\ell}}\rightarrow\infty$, $m_{\ell}=1$), and the one-sided Gaussian distribution ($m_{s_{\ell}}\rightarrow\infty$, $m_{\ell}=0.5$). The MGF of the Fisher-Snedecor $\mathcal{F}$ distribution was addressed in \cite[Eq. (10)]{Fisher}.
However, the algebraic form of the proposed expression renders it inconvenient for the analysis of several scenarios of interest. In what follows, we derive an alternative, closed-form analytical expression for the MGF, which facilitates the derivation of the PDF and CDF of the sum of Fisher-Snedecor $\mathcal{F}$ variates. To this end, by recalling that the MGF is defined as $\mathcal{M}(t)\triangleq\int_{0}^{\infty}\exp{(-x t)}f_{X}(x)dx$, we first represent the exponential function in terms of Meijer's G-function \cite[Eq. (8.4.3.1)]{a:pru} and use the PDF given in (\ref{pdfr}). Then, with the aid of \cite[Eq. (7.811.5)]{i:ryz} and \cite[Eq. (9.31.2)]{i:ryz}, the following analytical expression is deduced \begin{align}\label{mgf2} \mathcal{M}_{\gamma_{\ell}}(t)&={1\over \Gamma(m_{\ell})\Gamma(m_{s_{\ell}})}\mathrm{G}_{2,1}^{1,2}\left[{m_{\ell}\over m_{s_{\ell}}\bar{\gamma}_{\ell}t}\left\vert \begin{matrix} 1-m_{s_{\ell}},1\\ m_{\ell}\end{matrix}\right.\right]. \end{align} Also, the MGF in (\ref{mgf2}) can be written in terms of Tricomi's confluent hypergeometric function (also called confluent hypergeometric function of the second kind) using \cite[Eqs. (8.2.2.14)/(8.4.46.1)]{a:pru}. It is noted here that the MGF of the Nakagami-$m$ distribution can be deduced from (\ref{mgf2}) when $m_{s_{\ell}}\rightarrow\infty$ and $m_{\ell}=m$. As such, using \cite[Eq. (8.2.2.12)]{a:pru}, (\ref{mgf2}) reduces to \begin{align}\label{mgf3} \mathcal{M}_{\gamma}(t)={1\over \Gamma(m)}\mathrm{G}_{1,1}^{1,1}\left[{\bar{\gamma}\over m}t\left\vert \begin{matrix} 1-m\\ 0\end{matrix}\right.\right], \end{align} which upon use of \cite[Eq. (8.4.2.5)]{a:pru}, reduces to\cite[Eq. (2.22)]{sim}. \section{Sum of Fisher-Snedecor $\mathcal{F}$ Variates}\label{sec:2} \begin{proposition} Let us consider $\gamma_{\ell}\sim\mathcal{F}\left(\bar{\gamma_{\ell}},m_{\ell},m_{s_{\ell}}\right)$, $\ell=1,\ldots,L$, where all RVs follow i.n.i.d. Fisher-Snedecor $\mathcal{F}$ distributions. 
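As a numerical sanity check (ours, not part of the original derivation), the sketch below verifies that (\ref{pdfr}) is a scaled central $F$ density, $\gamma_{\ell}=\bar{\gamma}_{\ell}X$ with $X\sim F(2m_{\ell},2m_{s_{\ell}})$ in SciPy's parameterization, and cross-checks the MGF by comparing one explicit Tricomi-$U$ form, $\mathcal{M}_{\gamma_{\ell}}(t)={\Gamma(m_{\ell}+m_{s_{\ell}})\over\Gamma(m_{s_{\ell}})}\,U\!\left(m_{\ell},1-m_{s_{\ell}},{m_{s_{\ell}}\bar{\gamma}_{\ell}t\over m_{\ell}}\right)$, obtained directly from the defining integral, against numerical quadrature. The parameter values are purely illustrative:

```python
import numpy as np
from scipy import integrate, special, stats

def f_fading_pdf(g, m, ms, gbar):
    """Instantaneous-SNR PDF of Fisher-Snedecor F composite fading, Eq. (pdfr)."""
    return (m**m * (ms * gbar)**ms * g**(m - 1)
            / (special.beta(m, ms) * (m * g + ms * gbar)**(m + ms)))

m, ms, gbar = 2.0, 0.5, 1.5  # illustrative fading/shadowing parameters

# Eq. (pdfr) is a scaled central F density: gamma = gbar * X, X ~ F(2m, 2ms)
g = np.linspace(0.01, 5.0, 50)
assert np.allclose(f_fading_pdf(g, m, ms, gbar),
                   stats.f.pdf(g / gbar, 2 * m, 2 * ms) / gbar)

# MGF cross-check: Tricomi-U form against direct numerical quadrature
t = 0.8
mgf_u = (special.gamma(m + ms) / special.gamma(ms)
         * special.hyperu(m, 1.0 - ms, ms * gbar * t / m))
mgf_quad, _ = integrate.quad(
    lambda x: np.exp(-t * x) * f_fading_pdf(x, m, ms, gbar), 0.0, np.inf)
assert np.isclose(mgf_u, mgf_quad, rtol=1e-5)
```

Both checks pass to within quadrature accuracy, confirming that the $F$-fading SNR statistics can be manipulated with standard central-$F$ tools.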
The PDF of the sum $\gamma = \sum_{\ell = 1}^{L} \gamma _{\ell}$ is obtained as \begin{align}\label{pdfmrcfin} {f_\gamma }(\gamma ) = {\gamma^{\sum\limits_{\ell=1}^{L}m_{\ell}-1}\over\Gamma\left(\sum\limits_{\ell=1}^{L}m_{\ell}\right)}\left[ {\prod\limits_{\ell = 1}^{L}}\left({m_{\ell}\over m_{s_{\ell}}\bar{\gamma}_{\ell}}\right)^{m_{\ell}} {\Gamma(m_{\ell}+m_{s_{\ell}})\over\Gamma ({m_{s_\ell}})} \right]\cr\times F _B^{(L)} \Bigl( {m_1+m_{s_{1}}},{m_2+m_{s_{2}}}, \ldots ,{m_{L}+m_{s_{L}}},{m_1},{m_2},\cr \ldots{m_{L}}; {\sum\limits_{\ell = 1}^{L} {{m_\ell}} ;{{ - {m_1}} \over {m_{s_{1}}{{\bar \gamma }_1}}}\gamma,{{ - {m_2}} \over {m_{s_{2}}{{\bar \gamma }_2}}}\gamma , \ldots ,{{ - {m_{L}}} \over {m_{s_{L}}{{\bar \gamma }_{L}}}}\gamma } \Bigr),\,\, \gamma \ge 0, \end{align} where $F _B^{(n)}\left(\cdot\right)$ denotes the Lauricella multivariate hypergeometric function \cite[Eq. (1.4.2)]{Srivastava}. The corresponding CDF can be obtained by performing a term-by-term integration of (\ref{pdfmrcfin}) as \begin{align}\label{cdfmrcfin} {F_\gamma }(\gamma ) = {\gamma^{\sum\limits_{\ell=1}^{L}m_{\ell}}\over\Gamma\left(1+\sum\limits_{\ell=1}^{L}m_{\ell}\right)}\left[ {\prod\limits_{\ell = 1}^{L}}\left({m_{\ell}\over m_{s_{\ell}}\bar{\gamma}_{\ell}}\right)^{m_{\ell}} {\Gamma(m_{\ell}+m_{s_{\ell}})\over\Gamma ({m_{s_\ell}})} \right]\cr \times F _B^{(L)}\Bigl( {{m_1+m_{s_{1}}},{m_2+m_{s_{2}}}, \ldots ,{m_{L}+m_{s_{L}}},{m_1},{m_2},} \nonumber \\ \ldots{m_{L}}; {1+\sum\limits_{\ell = 1}^{L} {{m_\ell}} ;{{ - {m_1}} \over {m_{s_{1}}{{\bar \gamma }_1}}}\gamma,{{ - {m_2}} \over {m_{s_{2}}{{\bar \gamma }_2}}}\gamma , \ldots ,{{ - {m_{L}}} \over {m_{s_{L}}{{\bar \gamma }_{L}}}}\gamma } \Bigr),\,\, \gamma \ge 0, \end{align} \end{proposition} \begin{IEEEproof} The proof is provided in the Appendix. 
\end{IEEEproof} Note that (\ref{pdfmrcfin}) and (\ref{cdfmrcfin}) converge if $|{{ {m_1}} \over {m_{s_{1}}{{\bar \gamma }_1}}}\gamma|<1, |{{ {m_2}} \over {m_{s_{2}}{{\bar \gamma }_2}}}\gamma|<1, \ldots ,|{{ {m_{L}}} \over {m_{s_{L}}{{\bar \gamma }_{L}}}}\gamma|<1$. This restriction can be overcome by applying the following transformation to the Lauricella multivariate hypergeometric function \cite[Eq. (7.2.4.36)]{a:pru} \begin{align}\label{fbident} &F_B^{(n)}\left(a_{1},\ldots,a_{n},b_{1},\ldots,b_{n};c;x_{1},\ldots,x_{n}\right)=\left[ {\prod\limits_{i = 1}^n {{{(1 - {x_i})}^{ - {b_i}}}} } \right]\cr& \, \times F_B^{(n)}\Bigl(c-a_{1},\ldots,c-a_{n},b_{1},\ldots,b_{n};c;{x_{1}\over x_{1}-1},\ldots,{x_{n}\over x_{n}-1}\Bigr).& \end{align} It is worth mentioning here that using the relations $\lim\limits_{a\to\infty}{a^{-b}\Gamma(a+b)/\Gamma(a)}=1$ and $\lim\limits_{\min\{|a_{1}|,\ldots,|a_{n}|\}\to\infty}F_B^{(n)}(a_{1},\ldots,a_{n},b_{1},\ldots,b_{n};c;{x_{1}\over a_{1}},\ldots,{x_{n}\over a_{n}})=\Phi_2^{(n)}(b_{1},\ldots,b_{n};c;x_{1},\ldots,x_{n})$ \cite{Srivastava}, (\ref{pdfmrcfin}) and (\ref{cdfmrcfin}) reduce to their Nakagami-$m$ counterparts \cite{VAAalo} when $m_{s_\ell}\rightarrow\infty$. For the case of i.i.d. Fisher-Snedecor $\mathcal{F}$ variates, the PDF in (\ref{pdfmrcfin}) reduces to \cite[Eq. (7), after correcting some typos]{8359199} \begin{align}\label{iidpdf} & f_{\gamma}(\gamma)= {1\over B\left(Lm ,Lm_{s}\right)}\left({m\over L m_{s}\bar{\gamma}}\right)^{m L} \gamma^{m L-1} \cr&\qquad\qquad\qquad\times\,{}_2F _1\Bigl( L(m+m_{s}),mL;mL;-{ m \over L m_{s}\bar{\gamma}}\gamma \Bigr),& \end{align} which can be rewritten, with the help of \cite[Eq.
(9.131.1)]{i:ryz} and \cite[9.155.4]{i:ryz}, as follows \begin{align}\label{iidpdfe} & f_{\gamma}(\gamma)= {1\over B\left(Lm ,Lm_{s}\right)}\left({m\over L m_{s}\bar{\gamma}}\right)^{m L} \gamma^{m L-1} \cr&\qquad\qquad\qquad\qquad\qquad\times\left( 1+{ m \over L m_{s}\bar{\gamma}}\gamma \right)^{-L(m+m_s)}.& \end{align} The corresponding CDF is given by \cite[Eq. (8), after correcting some typos]{8359199} \begin{align}\label{iicpdf} & F_{\gamma}(\gamma)= {\Gamma(Lm+Lm_{s})\over\Gamma (Lm_{s})\Gamma\left(1+mL\right)}\left({m\over L m_{s}\bar{\gamma}}\right)^{mL} \gamma^{mL} \cr&\qquad\qquad\times\,{}_2F _1\Bigl(L( m+m_{s}),mL;1+mL;-{ m \over L m_{s}\bar{\gamma}}\gamma \Bigr),& \end{align} where ${}_2F _1(\cdot)$ denotes the Gauss hypergeometric function \cite[Eq. (9.100)]{i:ryz}. It is noted here that when $|{{ {m}} \over {m_{s}{{\bar \gamma }}}}\gamma|>1$, the transformation in (\ref{fbident}) can be used. Also, when $m_{s}\rightarrow\infty$, using \emph{(i)} $\lim\limits_{a\to\infty}{a^{-b}\Gamma(a+b)/\Gamma(a)}=1$, and \emph{(ii)} $\lim\limits_{|a|\to\infty}{}_2F _1\left(a,b;c;{z\over a}\right)={}_1F _1\left(b,c;z\right)$ \cite{Srivastava}, (\ref{iidpdf}) reduces to the Nakagami-$m$ one \cite{sim} after using \cite[Eq. (9.210.1)]{i:ryz} and \cite[Eq. (9.215.1)]{i:ryz}. In addition, using \emph{(i)} and \emph{(ii)}, (\ref{iicpdf}) reduces to the Nakagami-$m$ one after using \cite[Eq. (9.210.1)]{i:ryz}, \cite[Eq. (8.351.2)]{i:ryz}, and \cite[Eq. (8.356.3)]{i:ryz}. To the best of the authors' knowledge, equations (\ref{pdfmrcfin}), (\ref{cdfmrcfin}), (\ref{iidpdf}) and (\ref{iicpdf}) have not been previously reported in the open technical literature. \section{Performance of MRC Receiver}\label{sec:3} In this section, we analyze the performance of an $L$-branch MRC receiver operating under Fisher-Snedecor $\mathcal{F}$ composite fading in terms of the OP, OC and BER for coherent binary modulations.
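Since each branch SNR is a scaled central $F$ variate, the exact sum statistics can be validated by simulation before being applied to receiver metrics. The sketch below (an illustrative cross-check with hypothetical parameter values, not taken from the letter) evaluates the CDF (\ref{cdfmrcfin}) for $L=2$, where the Lauricella $F_B^{(2)}$ series coincides with the Appell $F_3$ function available in mpmath, and compares it with a Monte Carlo estimate:

```python
import numpy as np
import mpmath as mp

# Illustrative L = 2 i.n.i.d. branches (hypothetical parameter values)
m, ms, gbar = [1.0, 1.0], [2.0, 2.5], [3.0, 4.0]
gth = 1.0  # SNR threshold

# Analytical CDF of the sum, Eq. (cdfmrcfin); for L = 2 the Lauricella
# function F_B^{(2)} coincides with the Appell F3 series in mpmath.
Sm = sum(m)
pref = mp.mpf(gth)**Sm / mp.gamma(1 + Sm)
for ml, msl, gl in zip(m, ms, gbar):
    pref *= (ml / (msl * gl))**ml * mp.gamma(ml + msl) / mp.gamma(msl)
x1 = -m[0] / (ms[0] * gbar[0]) * gth
x2 = -m[1] / (ms[1] * gbar[1]) * gth
cdf = float(pref * mp.appellf3(m[0] + ms[0], m[1] + ms[1],
                               m[0], m[1], 1 + Sm, x1, x2))

# Monte Carlo check: gamma_l = gbar_l * X_l with X_l ~ F(2 m_l, 2 ms_l)
rng = np.random.default_rng(0)
N = 400_000
s = sum(gl * rng.f(2 * ml, 2 * msl, N) for ml, msl, gl in zip(m, ms, gbar))
cdf_mc = float(np.mean(s < gth))
assert abs(cdf_mc - cdf) < 0.1 * cdf  # agreement within MC noise
```

The two estimates agree to within the Monte Carlo uncertainty; note that $|x_1|,|x_2|<1$ here, so the $F_B$ series converges without invoking the transformation (\ref{fbident}).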
\subsection{Outage Probability} It is recalled that $P_{out}\triangleq \Pr[0\leq\gamma<\gamma_{{\rm th}}]=\int_{0}^{\gamma_{{\rm th}}} f_{\gamma}(\gamma)d\gamma$; therefore, the corresponding OP is readily deduced as \begin{align} P_{out}= F_{\gamma}(\gamma_{{\rm th}}), \end{align} where $F_{\gamma}(\gamma)$ is given in (\ref{cdfmrcfin}). \subsubsection{Diversity Gain} The diversity gain or diversity order refers to the magnitude of the slope of the OP, $P_{out}$, versus the average SNR on a log-log scale. Thus, at high average SNR values, the OP scales as $P_{out} \propto {\bar{\gamma}}^{-G_d}$, where the exponent $G_d$ represents the diversity gain. The asymptotic OP can be obtained when $\bar{\gamma}_{\ell}\rightarrow\infty$ for all $\ell$. As such, making use of the identity $F_B^{(n)}(a_{1},\ldots,a_{n},b_{1},\ldots,b_{n};c;\underbrace{0,\ldots,0}_{n})=1$, an asymptotic expression for the OP can be represented as follows: \begin{align}\label{opasymp} P_{out}\simeq {\gamma_{{\rm th}}^{\sum\limits_{\ell=1}^{L}m_{\ell}}\over\Gamma\left(1+\sum\limits_{\ell=1}^{L}m_{\ell}\right)}\left[ {\prod\limits_{\ell = 1}^{L}}\left({m_{\ell}\over m_{s_{\ell}}\bar{\gamma}_{\ell}}\right)^{m_{\ell}} {\Gamma(m_{\ell}+m_{s_{\ell}})\over\Gamma ({m_{s_\ell}})} \right]. \end{align} It is evident from (\ref{opasymp}) that the diversity gain is $G_d=\sum_{\ell=1}^{L}m_{\ell}$; for the i.i.d. case this reduces to $G_d=m L$. \subsection{Outage Capacity} The OC is an important statistical measure used to quantify the spectral efficiency over fading channels. It can be defined as the probability that the instantaneous capacity $C_{\gamma}$ falls below a certain specified threshold $C_{{\rm th}}$, that is, $C_{out}\triangleq \Pr[0\leq C_{\gamma}<C_{{\rm th}}]$, where $C_{\gamma}=W\log_{2}(1+\gamma)$ and $W$ denotes the signal's transmission bandwidth over AWGN.
Thus, the OC can be written in terms of the CDF as follows \begin{align} C_{out}= F_{\gamma}(2^{C_{{\rm th}}\over W}-1). \end{align} To this effect, the OC in MRC receivers under $\mathcal{F}$ composite fading conditions is readily obtained using the derived expression in (\ref{cdfmrcfin}). \subsection{Average Bit Error Rate} In flat fading environments, the average BER of most coherent modulation techniques can be calculated as \cite{sim} \begin{align}\label{berb} {\overline P_b} = \int\limits_0^\infty Q\left(\sqrt{2\lambda\gamma}\right){f_\gamma }(\gamma )d\gamma={1\over\pi}\int\limits_{0}^{\pi/2}\mathcal{M}_{\gamma}\left(\lambda\over\sin^{2}\phi\right)d\phi, \end{align} where $Q(\cdot)$ is the well-known Gaussian Q-function and $\lambda$ is a modulation-dependent constant. For binary phase shift keying (BPSK) $\lambda=1$, while for binary frequency shift keying (BFSK) $\lambda=0.5$, and for BFSK with minimum correlation $\lambda=0.715$. Based on this and utilizing (\ref{mgf2}) and (\ref{berb}), the average BER ${\overline P_b}$ for the considered receiver under $\mathcal{F}$ fading can be expressed as \begin{align} \label{berbin} {\overline P_b} = {1\over\pi}\prod\limits_{\ell = 1}^{L}\int\limits_{0}^{\pi/2}{\mathrm{G}_{2,1}^{1,2}\left[{m_{\ell}\over \lambda m_{s_{\ell}}\bar{\gamma}_{\ell}}\sin^{2}\phi\left\vert \begin{matrix} 1-m_{s_{\ell}},1\\ m_{\ell}\end{matrix}\right.\right]\over \Gamma(m_{\ell})\Gamma(m_{s_{\ell}})}d\phi. \end{align} By performing the change of variable $y=\sin^{2}\phi$ and using \cite[2.24.2.2]{a:pru}, the average BER ${\overline P_b}$ can be obtained in closed-form as \begin{align}\label{aberbsp} &{\overline P_b}={1\over2\sqrt{\pi}}\prod\limits_{\ell = 1}^{L}{\mathrm{G}_{3,2}^{1,3}\left[{m_{\ell}\over\lambda m_{s_{\ell}}\bar{\gamma}_{\ell}}\left\vert \begin{matrix} {1\over2},1-m_{s_{\ell}},1\\ m_{\ell},0\end{matrix}\right.\right]\over \Gamma(m_{\ell})\Gamma(m_{s_{\ell}})},& \end{align} which has a tractable algebraic representation.
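Since the branch SNRs are independent, the MGF of the combiner output in (\ref{berb}) is the product of the per-branch MGFs inside the $\phi$-integral. The sketch below (our own illustrative check with hypothetical parameter values) evaluates the average BPSK BER in this way, using a Tricomi-$U$ form of the branch MGF equivalent to (\ref{mgf2}), and compares it with a direct Monte Carlo average of $Q(\sqrt{2\lambda\gamma})$:

```python
import numpy as np
import mpmath as mp
from scipy.special import erfc

# Illustrative L = 2 i.n.i.d. branches (hypothetical parameter values)
m, ms, gbar = [1.0, 1.0], [2.3, 2.6], [3.0, 4.0]
lam = 1.0  # BPSK

def branch_mgf(t, ml, msl, gl):
    # Per-branch MGF of F fading in Tricomi-U form (equivalent to Eq. (mgf2))
    return (mp.gamma(ml + msl) / mp.gamma(msl)
            * mp.hyperu(ml, 1 - msl, msl * gl * t / ml))

# MGF-based average BER: product of the branch MGFs inside the phi-integral
def integrand(phi):
    t = lam / mp.sin(phi)**2
    return mp.fprod(branch_mgf(t, ml, msl, gl)
                    for ml, msl, gl in zip(m, ms, gbar))

ber = float(mp.quad(integrand, [0, mp.pi / 2]) / mp.pi)

# Monte Carlo reference: Pb = E[Q(sqrt(2 lam gamma))] = E[erfc(sqrt(lam g))/2]
rng = np.random.default_rng(1)
N = 200_000
g = sum(gl * rng.f(2 * ml, 2 * msl, N) for ml, msl, gl in zip(m, ms, gbar))
ber_mc = float(np.mean(0.5 * erfc(np.sqrt(lam * g))))
assert abs(ber - ber_mc) < 0.05 * ber  # agreement within MC noise
```

The quadrature and simulation agree to within the Monte Carlo uncertainty, illustrating the MGF route to the average BER.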
To this end, the asymptotic average BER is obtained by assuming $\bar{\gamma}_{\ell}\rightarrow\infty$ for all $\ell$. Using (\ref{aberbsp}) together with \cite[Eq. (8.3.2.21)]{a:pru} and \cite[Eq. (1.8.5)]{kilbas} yields \begin{align} \label{berasym} &{\overline P_b}\simeq{1\over2\sqrt{\pi}}{\prod\limits_{\ell = 1}^{L}}\left({m_{\ell}\over\lambda m_{s_{\ell}}}\right)^{m_{\ell}} \left({1\over\bar{\gamma}_{\ell}}\right)^{m_{\ell}} { \Gamma(m_{\ell}+m_{s_{\ell}})\Gamma({1\over2}+m_{\ell})\over\Gamma(m_{s_{\ell}})\Gamma(1+m_{\ell})}.& \end{align} It is evident that the diversity gain for the i.n.i.d. case is $G_d=\sum_{\ell=1}^{L}m_{\ell}$, whereas for the i.i.d. case $G_d=m L$. Furthermore, the derived asymptotic expressions in (\ref{opasymp}) and (\ref{berasym}) have a simple algebraic representation that renders them convenient to handle both analytically and numerically. \section{Numerical Results and Discussions}\label{sec:4} This section presents numerical and simulation results for the considered MRC receiver with $L$ branches. In all cases, a tight agreement between numerical and simulation results can be observed, which verifies the validity of the derived expressions. In addition, the asymptotic approximations match well with both the numerical and simulation results at high SNR values. The results for the Rayleigh case are also included as a benchmark. In Fig. \ref{fig:op}, the OP performance is plotted as a function of the average SNR $\bar{\gamma}$ per branch for i.i.d. Fisher-Snedecor $\mathcal{F}$ variates with $m=2.5$ (moderate), $m_s=1.5$ (heavy), and $\gamma_{{\rm th}}=0~{\rm dB}$. The results show that the difference between $L = 1$ and $L = 2$ is significant, as about half of the power is required in the latter to achieve a target outage probability across all SNR regimes. \begin{figure}[t!] \centering \includegraphics[width=0.8\columnwidth]{op} \caption{Outage probability versus average SNR $\bar{\gamma}$ per branch.}
\label{fig:op} \end{figure} The OC is plotted in Fig. \ref{fig:oc} as a function of the average SNR $\bar{\gamma}$ per branch for different values of $C_{{\rm th}}$ with $L=3$, $m=1.5$, and $m_s=1.25$. It is noticed that the SNR gains for extra branches are similar from $L = 1$ to $L =2$ and from $L = 2$ to $L = 3$. Also, it can be concluded that three branches are sufficient to achieve low OC even at low SNR regimes, which is practical as it does not come at the cost of a dramatic increase in complexity. \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{cout3} \caption{Outage capacity versus average SNR $\bar{\gamma}$ per branch.} \label{fig:oc} \end{figure} In Fig. \ref{fig:ber}, the average BER performance for coherent BPSK modulation and i.n.i.d. triple-branch MRC is depicted. The results show the influence of shadowing $m_s=0.5$ (heavy), $m_s=5$ (moderate), and $m_s=50$ (light) on the average BER. It is apparent that the average BER slightly improves as $m_s$ increases. In addition, the results show that the impact of the fading parameter $m$ on the system performance is more pronounced than that of $m_s$. This is because the diversity order of the system is proportional to $m$, as shown in (\ref{berasym}). \begin{figure}[t] \includegraphics[width=0.8\columnwidth]{bpskber} \centering\caption{Average BER for BPSK versus average SNR $\bar{\gamma}$ per branch.} \label{fig:ber} \end{figure} \section{Conclusions} A new closed-form expression for the MGF of the Fisher-Snedecor $\mathcal{F}$ distribution has been obtained. Capitalizing on this, novel closed-form expressions for the PDF and CDF of the sum of i.n.i.d. Fisher-Snedecor $\mathcal{F}$ variates were derived. Using the new expressions, useful theoretical and technical insights into the performance of an $L$-branch MRC receiver operating over Fisher-Snedecor $\mathcal{F}$ fading have been presented in terms of OP, OC, and average BER.
\section{Introduction} \label{sec:Introduction} The use of Low Temperature Detectors (LTDs) for sensing X-ray and $\gamma$-ray signals is quite widespread and well established~\cite{Ullom-2015}. LTDs are also widely used in the field of fundamental physics, especially for Double Beta Decay (DBD) and Dark Matter (DM) searches~\cite{Pirro-2017}. In these searches the need for a hybrid detector, in which an energy release can be measured through different mechanisms, is of primary importance in order to distinguish the nature of interacting particles. For instance, hybrid detectors can help identify and reject events caused by the natural background. With thermal detectors this can be achieved using scintillating or luminescent crystals. The simultaneous and \it independent \rm readout of the heat and the (escaping) light produced by the interaction reveals the nature of the interacting particles thanks to the different scintillation yields of $n$, $\alpha$ and $\gamma/\beta$ events. This discrimination technique is presently used for DM searches~\cite{CRESST-2016,Angloher:2017sft,Angloher:2016hbv} and DBD searches~\cite{CUPID-0-2018,Cupid-Mo-2017,Amore-2017}, and it can also be implemented for rare nuclear decays~\cite{Alfa-1,Alfa-2,Pattavina:2018nhk}. At milli-Kelvin temperatures, the light detectors are usually bolometers themselves: a \it dark \rm thin crystal absorbs the photons, producing heat (phonons) that is measured by a suitable thermometer. The main difference among the various Bolometric Light Detector (BLD) instruments currently in use is the choice of the thermometer element, e.g. Transition Edge Sensors (TES)~\cite{TES_LD_CRESST}, Neutron Transmutation Doped (NTD) thermistors~\cite{NTD_LD_Lucifer-2013} or Micro Magnetic Calorimeters (MMC)~\cite{MMC_LD-2015}.
The work presented here was performed within the CUPID framework~\cite{CUORE-IHE-2014,CUPID-2015}, the future follow-up of CUORE~\cite{CUORE-2018}, which represents the largest bolometric experiment worldwide to date. The aim was to develop NTD-based BLDs with improved performance in terms of sensitivity and time response, and with a simplified packaging for large arrays. Using the tiny Cherenkov light emission of TeO$_2$~\cite{Tabarelli-2010,Enriched-TeO2-Cherenkov-2017} to decrease the $\alpha$-induced background by two orders of magnitude requires a BLD with an S/N ratio of the order of $\sim$5~\cite{CUPID-2015}: since the Cherenkov light signal is of the order of 100~eV, this corresponds to an RMS baseline resolution of the BLD of the order of $\sim$20~eV. One can work towards the optimization of the light collection~\cite{Casali-2017} and/or of the energy resolution of the BLD or, as we did in this work, both. Additionally, in the case of $^{100}$Mo-based compounds, besides the same need to suppress the surface $\alpha$-induced background, a fast time response of the BLD ($\leq$~1~ms) is mandatory to suppress the background induced by the pile-up of the 2$\nu$ DBD~\cite{2_nu-Pile_up-2012}: also in this case the S/N ratio will play an important role~\cite{2_nu-Pile_up-2016}. Our work has therefore focused on two aspects of BLD performance: (1) improving the response of the NTD thermometer and (2) increasing the light collection. While the first aspect is strictly related to a specific technique, the second aspect is worthy of additional remarks. The working principle of a BLD is independent of the sensor: a thin crystal wafer (usually Si or Ge) absorbs the emitted photons and converts them into heat.
Unlike in a conventional bolometric approach, we have to avoid any optical coupling between the crystal and the BLD made with optical grease or a similar substance, since the unavoidable heat flow through the optical coupling and the increase of the heat capacity of the system would reduce the independence of the two detectors, eliminating the possibility of particle discrimination afforded by the different scintillation yields. Therefore the thermal contact between the luminescent crystal and the BLD has to be avoided, especially in the case of extremely low scintillation yields. This is true for most of the Mo-based compounds~\cite{Cupid-Mo-2017} and, even more importantly, in the case of Cherenkov signals. A 2615~keV $\gamma$-ray energy release in a CUORE-like TeO$_2$ absorber produces a light signal in the BLD on the order of $\sim$100~eV~\cite{Casali-2017}. For this reason the BLD always faces the scintillating crystal without directly contacting it via a coupling medium. In the following section it is shown that if the BLD is simply resting on the crystal surface, held in position only by gravity, the thermal coupling between the BLD and the crystal is almost negligible and the leakage of the BLD thermal signal through the scintillating crystal vanishes. This fact can be explained by the acoustic mismatch described in the diffuse mismatch model, whereby the heat carriers (phonons) in insulating materials are scattered at the interfaces~\cite{Matsumoto-1977,Swartz-1989}. This approach shows that the thermal resistance between two dielectric crystals is strongly dependent on the surface state, on the different phonon characteristics in the two materials (density and Debye temperature), and on the applied force. This latter parameter has a significant effect. When two solids are placed in contact with each other, the actual contact area can be much smaller than the cross sections involved due to surface irregularities.
By raising the applied force between the materials, a plastic or permanent deformation occurs and the ``real'' contact surface area increases. The result of this action is that the thermal conductance of the contact is directly proportional to the applied force~\cite{Barucci-2001,Ventura-2008}. Although such a simple stand will clearly not produce a so-called ``optical matching'', the light collection will be definitely larger due to geometrical factors~\footnote{For instance if the BLD is held in its own structure, depending on the mounting scheme, there are generally a few mm of distance from the BLD to the scintillating crystal. This increases the chance for photon escape or absorption by the holding structure rather than the BLD.}. In addition, removing the BLD mounting structure decreases the presence of materials and surfaces close to the detector, which reduces possible radioactive contamination, a fundamental aspect when dealing with rare event searches. \section{Bolometric Light Detectors} Our BLDs usually consist of electronic-grade undoped Ge wafers, coupled with Ge NTD thermistors. We started to develop these detectors coupled with several scintillating DBD crystals~\cite{PIRRO-2005} and we thoroughly characterized their operation and performance~\cite{light-detectors-2013} to finally realize the LUCIFER~\cite{LUCIFER-2013} experiment, which has been renamed CUPID-0~\cite{CUPID-0-detector_2018}. Each BLD of CUPID-0 (totalling 26 detectors) was made from a double-side-polished electronic-grade undoped Ge wafer (44.5~mm diameter, 0.17~mm thick). The NTD thermistor, with dimensions of (2.85~$\times$~2~$\times$~0.5)~mm$^3$, is glued through six small glue dots ($\sim$~0.5~mm diameter, 0.05~mm height) made with Araldit\textsuperscript{\textregistered} Rapid glue. The performance of six of these detectors was evaluated in a dedicated test run~\cite{LUCIFER-2016} and the results are summarized in Tab.~\ref{tab-cupid-0-LD}.
To further optimize our BLDs, we produced a set of devices based on the pioneering work of Coron et al.~\cite{Coron-2004}. For this study we (1) decreased the heat capacity (size) of the thermistor, (2) increased the thermal conductance between the thermistor and the Ge wafer, and (3) decreased the thermal conductance to the thermal bath. With respect to the thermistor size, we used thermistors with dimensions of (2.85~$\times$~1~$\times$ 0.4)~mm$^3$, roughly 2.5 times smaller than the CUPID-0 devices. We also decided to replace the six glue dots with a uniform glue layer, thus increasing the thermal conductance between the thermistors and the light-absorbing Ge wafer. It should be noted that in our experience the use of glue dots instead of a \it more effective \rm thin gluing layer is preferred when coupling inherently different materials (e.g. TeO$_2$ crystals and Ge thermistors). The dot approach reduces the mechanical stresses induced by differential thermal contraction of the materials when cooled. In such cases, and especially when working with larger-sized thermistors, we sometimes observed cracks on the crystal surface after a cooling cycle. This phenomenon is greatly reduced in our case since we glue Germanium thermistors to Germanium light absorbers and use smaller thermistors. Even in this case, however, there are some small unavoidable stresses due to misorientation between the thermistor and absorber crystallographic planes, but we have found that these effects never led to visible cracks. With respect to the mounting (i.e. the conductance to the thermal bath), there are many ways to hold the BLD in place. In earlier work we adopted two~\cite{light-detectors-2013} or three~\cite{CUPID-0-detector_2018} small PTFE clamps that squeeze the edge of the Ge, keeping it fixed in a Cu standalone holder. PTFE is a common material also used by other groups working with NTD sensors~\cite{Lumineu-2017} and with MMC detectors~\cite{MMC_LD-2015}.
Other clamping schemes and material choices have been demonstrated by the CRESST group. These include bronze clamps and Silicon- or CaWO$_4$-based sticks~\cite{Strauss-2018}. The design used in~\cite{Coron-2004}, however, is probably the most complex from a construction point of view, using several ultra-thin superconducting wires to suspend the Ge wafer from a copper frame, producing a negligible thermal link that maximizes the heat flow from the wafer to the NTD. \begin{table}[t] \centering \caption{Mean performance of six CUPID-0-like light detectors~\cite{LUCIFER-2016}. \textbf{R$_{work}$} refers to the resistance of the NTD Ge thermistor in working conditions, \textbf{Response} refers to the absolute voltage drop (in $\mu$V) produced by an energy release of 1\,keV, and \textbf{Baseline RMS} is the resolution after signal filtering~\cite{Gatti-1986:1,Alduino-2016:045503}. {\bf$\tau_{r}$} and {\bf$\tau_{d}$} are the rise and decay times, computed as the time difference between the 90$\%$ and 10$\%$ points of the leading edge and as the time difference between the 30$\%$ and 90$\%$ points of the trailing edge, respectively. The Bessel cut-off frequency is 200 Hz (see last remarks of Sec.~\ref{sec:results}).} \label{tab-cupid-0-LD} \begin{tabular}{ccccc} \hline\noalign{\smallskip} R$_{work}$ &Response &Baseline RMS &$\tau_{r}$ &$\tau_{d}$ \\ [M$\Omega$] &[$\mu$V/keV] &[eV] &[ms] &[ms]\\ \hline 0.87 & 1.36 & 43 & 1.77 & 5.06 \\ \hline \end{tabular} \end{table} We decided to avoid any kind of holding structure whatsoever, so we laid the BLD directly on the crystal, kept in position only by its weight ($\sim$1.1 g). In this configuration the main thermal link between the BLD and the cryostat is represented by the thin gold NTD thermistor wires (2 $\times$ 15~mm length, 25~$\mu$m diameter). As mentioned above, the expected thermal conductance to the scintillating crystal is negligible. The crystal chosen for this test was a (50.5~$\times$~50.5~$\times$~50.5)~mm$^3$ TeO$_2$ crystal.
The aim was to test the new setup with a light signal on the order of a few tens of eV. The Ge light-absorbing wafer belongs to the batch used for CUPID-0, which includes a 70~nm SiO anti-reflective coating~\cite{Mancuso-2014} deposited on the side that rests on the TeO$_2$ crystal. \section{Experimental details} \label{sec:experimental_details} The TeO$_2$ crystal was mounted in a similar way as described in~\cite{Enriched-TeO2-Cherenkov-2017,Casali-2014}, with the exceptions that the TeO$_2$ crystal was standing on the reflecting foil and that neither the TeO$_2$ nor the BLD was equipped with a Si heater. Such heaters are normally glued onto the bolometer to inject pulsed thermal signals for gain stabilization. The TeO$_2$ face supporting the BLD and the opposite one were polished at (nearly) optical level. The remaining four lateral faces were matted in order to increase light collection~\cite{Casali-2017}. The TeO$_2$ crystal is held by four S-shaped PTFE supports that are fixed to Cu columns. The PTFE contracts upon cooling, creating a tensioned support that maintains the crystal position. In order to maximize light collection, the crystal is completely surrounded by a plastic reflecting sheet (3M Vikuiti\textsuperscript{TM}), in the same way as in~\cite{Enriched-TeO2-Cherenkov-2017,Casali-2014}. A photograph of the detectors is presented in Fig.~\ref{fig_0-setup}. \begin{figure}[hbt] \centering \includegraphics[width=0.48\textwidth]{fig_0-setup.pdf} \caption{Photograph of the detectors. The BLD simply rests on the TeO$_2$; the four PTFE supports (as well as the thermistor glued on the TeO$_2$) do not hold the BLD in any way: they merely prevent the BLD from sliding off the top surface, acting as mere translation constraints. The gold wires of both NTDs are then crimped within micro Cu tubes to ensure the electrical contact as well as the thermal conductance to the heat sink.
The $^{55}$Fe X-ray source is attached to the top reflecting cover sheet that encloses the detectors (with a clearance of $\sim$4~mm from the BLD) and can be seen, reflected by the Ge wafer surface, between the two NTDs.} \label{fig_0-setup} \end{figure} The entire setup was enclosed in a Cu box and thermally coupled to the mixing chamber of the CUPID R\&D cryostat, a $^3$He/$^4$He dilution refrigerator installed deep underground in Hall C of the Laboratori Nazionali del Gran Sasso, Italy. To prevent vibrations from reaching the detectors, the box is mechanically decoupled from the cryostat by means of a two-stage pendulum system~\cite{Pirro-2006}. The thermistors of the detectors are biased with a quasi-constant current produced by applying a fixed voltage through large (27+27 or 2+2 G$\Omega$) load resistors~\cite{Arnaboldi-2002:1808}. When light is absorbed in the Ge wafer, a thermal pulse is produced and transferred to the NTD sensor, changing the resistance of the thermistor. This, in turn, creates a voltage change across the current-biased NTD, which is amplified using front-end electronics located just outside the cryostat~\cite{Arnaboldi-2004}. The signals are then filtered by an anti-aliasing 6-pole Bessel filter (with a cutoff frequency of 16~Hz for the TeO$_2$ crystal and 550~Hz for the BLD) and finally fed into an NI PXI-6284 18-bit ADC. The sampling rate of the ADC was 1~kHz for the TeO$_2$ crystal and 8~kHz for the BLD. The two independent triggers are software-generated, such that when a trigger fires, the corresponding waveform is recorded. Moreover, when the trigger of the TeO$_2$ crystal fires, the corresponding waveform of the BLD is always recorded, irrespective of its own trigger. A detailed description of the DAQ system can be found in~\cite{DiDomizio:2018ldc}. The amplitude and the shape of the voltage pulses are then determined via off-line analysis.
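The trigger-and-readout logic described above can be sketched in a few lines. This is a minimal illustration with a hypothetical threshold trigger; the actual DAQ and trigger algorithms are those described in the reference cited above:

```python
def threshold_trigger(samples, threshold):
    # Hypothetical trigger: fire when any sample exceeds the threshold.
    return any(s > threshold for s in samples)

def recorded_channels(teo2_window, bld_window, teo2_thr, bld_thr):
    # When the TeO2 trigger fires, the BLD waveform is always recorded too,
    # irrespective of the BLD's own trigger; otherwise each channel is
    # recorded only on its own trigger.
    if threshold_trigger(teo2_window, teo2_thr):
        return ["TeO2", "BLD"]
    if threshold_trigger(bld_window, bld_thr):
        return ["BLD"]
    return []

# A TeO2 pulse records both channels even when the BLD is quiet:
print(recorded_channels([0, 9, 1], [0, 0, 0], teo2_thr=5, bld_thr=5))
```

The coincident BLD record is what later allows the light amplitude to be evaluated at a fixed time delay with respect to the TeO$_2$ pulse.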
The pulse amplitude of the thermal signals is estimated with the Optimum Filtering (OF) technique~\cite{Gatti-1986:1,Alduino-2016:045503}, which maximizes the signal-to-noise ratio, improving the energy resolution and lowering the threshold of the detector. The amplitude of the light signal, however, is evaluated from the filtered waveform at a fixed time delay with respect to the TeO$_2$ bolometer, as described in detail in~\cite{Piperno-2001:10005}.\newline The amplitude of the acquired TeO$_2$ heat signals is energy-calibrated using several $\gamma$-ray peaks from a $^{228}$Th source. The BLD, instead, is calibrated using the 5.9~keV and 6.5~keV X-ray quanta produced by a $^{55}$Fe X-ray source permanently facing the detector. \section{Data analysis and results} \subsection{BLD performance} \label{sec:results} The crystals were tested at a cryostat base temperature of $\sim$11~mK. In order to obtain a fast response, we operated the BLD in the so-called ``over-biased'' configuration, whereby the biasing current of the circuit is set much larger than the current that would ensure the highest absolute thermal response~\cite{NTD_LD_Lucifer-2013}. This choice ensures a small working resistance, thus minimizing the effect of the low-pass filtering induced by the overall capacitance ($\sim$200 pF) of the front-end readout wires. In Fig.~\ref{fig_1-55Fe} we show the $^{55}$Fe calibration spectrum obtained with the BLD. The baseline energy resolution (i.e., the absolute sensitivity) of the BLD is given by the width of randomly acquired baselines (noise) after the application of the OF. As is typical for this type of detector, the energy resolution for monochromatic energy absorption events is much worse than the baseline resolution, irrespective of the type of sensor~\cite{NTD_LD_Lucifer-2013,TES_LD_CRESST}. \begin{figure}[hbt] \centering \includegraphics[width=0.48\textwidth]{fig_1-55Fe.pdf} \caption{Energy distribution of the randomly sampled noise.
The width of the distribution ($\sigma\approx$20 eV) represents the baseline energy resolution of our BLD. The right inset shows the $^{55}$Fe calibration spectrum of the BLD. The x-axis units represent the absolute voltage drop across the thermistor. The RMS resolution on the 5.9 keV and 6.5 keV X-ray peaks is 59 eV (see text).} \label{fig_1-55Fe} \end{figure} The noise and signal power spectra of the BLD are presented in Fig.~\ref{fig_2-NPS}. \begin{figure}[hbt] \centering \includegraphics[width=0.48\textwidth]{fig_2-NPS.pdf} \caption{Noise power spectrum (black line) and signal power spectrum (blue line) of the BLD. The y-axis scale is absolute for the noise, while the signal spectrum is scaled in arbitrary units, since the roll-off induced by the Bessel filter is the same for noise and signal. The working resistance of the thermistor is 1.47 M$\Omega$, biased with a current of 3.7 nA through (2+2) G$\Omega$ metallic load resistors. The peaks are due to the microphonic noise induced by the vibration of the readout wires.} \label{fig_2-NPS} \end{figure} The bump that can be observed in Fig.~\ref{fig_2-NPS} at $\sim$400 Hz arises from a resonance that enhances the thermal noise generated within the thermistor. This occurs when the impedance of the parasitic capacitance of the link becomes smaller than that of the thermistor, which behaves as a fed-back device~\cite{Arnaboldi-2005}. The bump lies at the edge of the signal bandwidth and is rejected by the optimum filter algorithm. Fig.~\ref{fig_3-rise-decay} shows the corresponding rise and decay times of $^{55}$Fe X-ray absorption events. \begin{figure}[hbt] \centering \includegraphics[width=0.48\textwidth]{fig_3-rise-decay.pdf} \caption{Rise and decay time distributions corresponding to the $^{55}$Fe X-rays.
The Bessel cut-off frequency of the front-end is 550~Hz.} \label{fig_3-rise-decay} \end{figure} The measured rise time shown in Fig.~\ref{fig_3-rise-decay} is most likely slower than the intrinsic rise time of the detector, since it contains contributions from the Bessel filter (independent of the thermistor impedance) and from the capacitance of the readout wires. This last contribution is difficult to measure, since it involves the dynamic resistance of the thermistor. The contribution of the 550~Hz Bessel filter to the rise time was evaluated in~\cite{NTD_LD_Lucifer-2013} and reported as 0.65~ms. Thus, after applying a quadratic deconvolution, the \emph{intrinsic} rise time of our BLD should be of the order of 0.5~ms, compatible with the expectation of~\cite{Coron-2004}. The overall performance of the BLD is summarized in Tab.~\ref{tab-new-BLD}. \begin{table} \centering \caption{Performance of the BLD of this work, to be compared with that of Tab.~\ref{tab-cupid-0-LD}.} \label{tab-new-BLD} \begin{tabular}{ccccc} \hline R$_{work}$ &Response &Baseline RMS &$\tau_{r}$ &$\tau_{d}$ \\ [M$\Omega$] &[$\mu$V/keV] &[eV] &[ms] &[ms]\\ \hline 1.47 &3.86 &20 & 0.83 &1.63 \\ \hline \end{tabular} \end{table} \subsection{Heat and Light measurement} \label{scatter-section} In order to evaluate the long-term discrimination performance of our BLD, we performed a 70 h run with two calibration sources embedded in the setup. A $^{228}$Th source was placed a few cm away from the TeO$_2$ crystal, and a \emph{smeared} $^{238}$U $\alpha$ source was applied to the inside of the light reflector facing the TeO$_2$. The aim of the $\alpha$ source was to directly measure the discrimination capability between $\alpha$ and $\beta/\gamma$ events in the DBD region of interest of $^{130}$Te.
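The optimum-filter amplitude estimate applied to all signals in this analysis can be sketched in the frequency domain. The following is a toy illustration assuming a flat (white) noise spectrum and a generic two-exponential pulse template; the real analysis uses the measured noise power spectrum and signal template:

```python
import numpy as np

def optimum_filter_amplitude(x, template, noise_psd):
    # Frequency-domain optimum filter: weight each bin by the conjugate
    # template divided by the noise power, then normalize so that a pure
    # template of unit amplitude returns 1.
    X = np.fft.rfft(x)
    S = np.fft.rfft(template)
    w = np.conj(S) / noise_psd
    return float(np.real(np.sum(w * X) / np.sum(w * S)))

n = 1024
t = np.arange(n)
template = np.exp(-t / 100.0) - np.exp(-t / 20.0)  # generic bolometric pulse shape
noise_psd = np.ones(n // 2 + 1)                    # white-noise assumption

# A noiseless, scaled template is recovered exactly:
amp = optimum_filter_amplitude(3.0 * template, template, noise_psd)
print(round(amp, 6))  # 3.0
```

With real data, the inverse-noise weighting is what suppresses the microphonic peaks and the $\sim$400 Hz bump discussed in the previous section.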
The source was made using 2~$\mu$l of a standard calibrated solution (0.1\%) of $^{238}$U, and the dried source deposit was covered with a 6~$\mu$m aluminized Mylar foil to smear the $\alpha$ energy. The light vs heat scatter plot is presented in Fig.~\ref{fig_4-scatter-plot} and shows an unexpected feature. \begin{figure}[hbt] \centering \includegraphics[width=0.48\textwidth]{fig_4-scatter-plot.pdf} \caption{Light vs heat scatter plot obtained in a 70 h measurement with the TeO$_2$ exposed to a $^{228}$Th source and a smeared $^{238}$U $\alpha$ source. Unfortunately, $\alpha$ energy loss in the Mylar (which constitutes the smearing medium) results in a tiny but measurable light emission that increases towards lower energies, i.e., at larger energy loss in the Mylar. The events above 4~MeV, on the contrary, are due to internal and/or surface contaminations, and their light emission is compatible with zero (see text).} \label{fig_4-scatter-plot} \end{figure} The $^{238}$U $\alpha$ events arising from the smeared source clearly show a tiny light emission that increases towards lower energies. This feature can only be ascribed to energy loss in the Mylar, which emits a few scintillation photons. To avoid this effect we usually face the aluminized surface of the Mylar towards the crystal, so as to reflect the (very few) photons that could be produced in this plastic. This time, however, we mistakenly mounted the Mylar with the uncoated side towards the detector, as confirmed by inspection after the cryostat was subsequently opened.
The result is shown in Fig.~\ref{fig_4-scatter-plot}: the amount of Cherenkov light produced by a 2615~keV $\gamma$ that is collected with this new set-up is (151~$\pm$~4)~eV, 50\% larger than in all our previous measurements with massive crystals~\cite{Casali-2017}, and roughly 50\% larger than in a measurement recently performed with an NTD-based light detector~\cite{Lumineu-2017} of the same type (accounting for the 40\% smaller transmission area between BLD and crystal declared in that article). The light distribution of the 74 events belonging to the internal $^{210}$Po $\alpha$ at 5407~keV (5304~keV $\alpha$ + 103~keV nuclear recoil) shows a mean value of (5.8~$\pm$~3.3)~eV, compatible with zero (see Sec.~\ref{sec:thermal_interference}), as it should be if the light arises only from the Cherenkov effect. More importantly, the width of the light distribution of the $\alpha$'s is $\sigma_{\alpha}$=(22.7~$\pm$~2.7)~eV, fully compatible with the RMS noise of the BLD of Tab.~\ref{tab-new-BLD}. The light signal induced by the 2615~keV $\gamma$, on the contrary, shows a width of $\sigma_{\gamma/\beta}$=(31.5~$\pm$~4.3)~eV, which results from the photostatistics and the light collection. In order to evaluate the Discrimination Power (DP) that can be obtained between the $\alpha$ and $\beta/\gamma$ distributions at 2528~keV (the Q$_{\beta\beta}$-value of the DBD of $^{130}$Te), we use the same formula and arguments as in~\cite{Enriched-TeO2-Cherenkov-2017,Lumineu-2017}: the DP can be quantified as the difference between the average values of the two distributions normalized to the square root of the quadratic sum of their widths: \begin{equation} DP = \frac{|\mu_{\gamma/\beta}-\mu_{\alpha}|}{\sqrt{\sigma^{2}_{\gamma/\beta}+ \sigma^{2}_{\alpha}}}.
\label{eq:DP} \end{equation} Re-scaling the light signal from 2615 to 2528~keV, we obtain DP=3.6, under the highly likely assumption that an $\alpha$ particle at 2528~keV produces the same light signal as one at 5304~keV ($^{210}$Po). This DP is the best ever achieved with large-mass TeO$_2$ crystals (M $>$ 7 g) without the need for additional Neganov-Luke amplification~\cite{Lumineu-2017,Casali:2015gya,Gironi:2016nae}, more sophisticated TES sensors~\cite{Karo-2014}, or both~\cite{Willers-2014}. \section{Thermal conductance}\label{sec:thermal_interference} As stated in Sec.~\ref{sec:Introduction}, the actual goal of this work was to experimentally demonstrate that the BLD can rest on the scintillating or luminescent crystal without heat sinking to it. Using the results of the previous section we can now set a limit on the heat flow between the TeO$_2$ and the Ge wafer. If one assumes that the mean BLD signal produced by a 5407~keV energy release in the TeO$_2$ is due entirely to heat flow (i.e., no light emission), then we obtain an upper limit on the fraction of energy transferred from the TeO$_2$ to the Ge: 5.8~eV/5407~keV~$\sim$~10$^{-6}$. In our case, an extremely low heat conductance was also determined experimentally under static conditions. We measured the base resistance of the BLD as 223.5 M$\Omega$ (corresponding to 11.8~mK), keeping the TeO$_2$ thermistor unbiased (i.e., no power dissipation in it). We then applied the maximum bias allowed by our biasing set-up to the TeO$_2$ thermistor, corresponding to 4.8~nA; the TeO$_2$ thermistor resistance changed from 626 M$\Omega$ (bias~$\rightarrow$~0) to 1.71 M$\Omega$. The power dissipated in the TeO$_2$ was therefore 40 pW. The base resistance of the BLD decreased to 222.8 M$\Omega$, which corresponds to a temperature increase of only $\approx$~4.3 $\pm$ 0.2~$\mu$K. The same operation was performed with the BLD in working condition, i.e.
bias current of 3.7 nA and a resistance of 1.47~M$\Omega$ (corresponding to $\sim$23~mK), and no variation of the BLD baseline was registered. A further investigation of the thermal conductance between a Ge BLD and a TeO$_2$ crystal was performed by exploiting a small TeO$_2$ crystal ($20~\times~20~\times~14$~mm$^{3}$, 34~g mass). We used a standard BLD, i.e., of the same thickness and size as in the previous discussion, but with the Ge wafer held with PTFE clamps in a stand-alone Cu mounting~\cite{NTD_LD_Lucifer-2013}. For this experiment we rested the $20\times20$~mm$^{2}$ surface of the 34~g crystal on the Ge wafer. The NTD thermistor-equipped TeO$_2$ crystal was surrounded by the same reflecting foil, and we performed the same measurement described in Sec.~\ref{scatter-section} with the same overall setup. This time a 5304~keV $^{210}$Po decay occurring in the TeO$_2$ created a mean signal in the BLD of (317~$\pm$~29)~eV, definitely not compatible with the result of Sec.~\ref{scatter-section}. The mean (light) signal registered in coincidence with the 2615~keV $\gamma$-line of $^{208}$Tl was (336~$\pm$~5)~eV. The $\alpha$-induced signal in the BLD, therefore, has to be ascribed to an effective thermal transfer from the TeO$_2$ to the BLD. We can make a rough estimate of the size of this transfer using the results of the measurement of Sec.~\ref{scatter-section}. If we assume the heat conductance to be linearly proportional to the pressure force between the two media, then we may simply compare the weights: 1.1 g in the case of the wafer resting on the TeO$_2$ crystal versus 34~g in this last configuration. Their ratio, i.e., 31, should be, to first approximation, the ratio between the thermal conductances in the two setups.
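The discrimination power of Eq.~(\ref{eq:DP}) and the weight-scaling estimate above can be cross-checked numerically. This is a sketch using only the values quoted in the text (all in eV):

```python
import math

# DP between alpha and beta/gamma at the 130Te Q-value (2528 keV)
mu_gamma = 151.0 * 2528.0 / 2615.0   # 2615 keV light signal rescaled to 2528 keV
mu_alpha, sigma_alpha = 5.8, 22.7
sigma_gamma = 31.5
dp = abs(mu_gamma - mu_alpha) / math.sqrt(sigma_gamma**2 + sigma_alpha**2)
print(round(dp, 1))  # 3.6

# Weight-scaling estimate of the thermal-transfer signal in the 34 g setup
ratio = 34.0 / 1.1                   # assumed conductance ratio, ~31
expected_thermal = mu_alpha * ratio  # ~180 eV, vs the (317 +/- 29) eV observed
print(round(expected_thermal))
```

Both numbers reproduce the values quoted in the text within rounding.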
Ascribing the $\alpha$ signal of Sec.~\ref{scatter-section} exclusively to thermal transfer, we would expect a thermal-transfer signal of (180~$\pm$~90)~eV, compatible with the 317~eV observed in this measurement. On the other hand, under the same assumption, we can evaluate the 2615~keV-induced Cherenkov light signal of this crystal as the difference between the observed signal and the re-scaled thermal transfer evaluated from the $\alpha$. In this way we estimate the energy of the Cherenkov light emission in this 34~g crystal to be (185~$\pm$~15)~eV. \section{Conclusions} We have demonstrated the possibility of mounting BLDs by simply resting them on the surface of the corresponding scintillating crystal. With this new mounting method the light collection increases by up to 50\% with respect to standard setups. We do not observe appreciable heat flow between the scintillating crystal and the BLD. We also improved the time response of our thermistor-based light detectors, reaching a rise time of 0.8 ms and demonstrating that 0.5 ms is achievable. This time response is necessary to remove the background induced by the pile-up of the 2$\nu$-DBD mode in the case of $^{100}$Mo-based crystals. We reached a baseline resolution of 20~eV RMS, more than 2 times better than the average value of our previous CUPID-0-like detectors. Thanks to these developments, we have definitively demonstrated that standard thermistor-based BLDs can be used for CUPID, both to read out the tiny Cherenkov light of TeO$_2$ and to read out Mo-based scintillating crystals. We believe that this simplified technique could be applied to any kind of BLD, irrespective of the sensor type. To first approximation, the thermal conductance between crystal and BLD does not depend on the energy of the phonons, so we would expect thermal transfer to be as negligible in TES or MMC devices as it is in our NTDs.
More generally, this new technique could also be applied to stacked, standard small bolometers, provided that the weight does not exceed a few grams. However, since the measured thermal transfer is rather small, the weight of the bolometer will not be a significant limiting factor in low-energy-threshold applications. \section{Acknowledgments} This work was performed within the CUPID experiment, funded by INFN and supported by the National Science Foundation under Grant NSF-PHY-1614611. We thank the CUPID-0 and CUORE collaborations for their overall support and for sharing their DAQ and software. We express our gratitude to LNGS for the generous hospitality and, in particular, to the mechanical workshop personnel, including E. Tatananni, A. Rotilio, A. Corsi, and B. Romualdi, for their continuous and constructive help. We are also grateful to M. Guetti for his invaluable support and expertise in the maintenance of the cryostat facility. We acknowledge Dr. C. Arnaboldi for his valuable support, even though he left this field of research many years ago. We are especially grateful to E. Ferri for her kind support in the thermistor wire-bonding.
\section{Introduction} Although deep reinforcement learning (DRL) has shown high performance in various fields, some challenges remain regarding the stability of training. In applications such as games~\cite{dqn,alpha-go}, by designing the score as a reward value, it is possible to obtain a model whose performance is comparable to that of a human. However, many DRL studies~\cite{levine-1,levine-2} only conduct experiments with simple reward signals designed by the experimenter. There is a gap between these scenarios and real environments, which often have unstable reward signals. This is an essential issue for DRL because its performance is sensitive to reward design. Therefore, a learning method that is robust to noise in the reward signals is needed. Noise in the DRL reward function can occur for several reasons; a typical example is the errors that occur during observation. In the real world, reward functions are not perfect. The rewards derived from the actual environment via hardware sensors may include noise, misinterpretations, and observation failures. When misinterpretations or observation failures occur, the reward value may be calculated as an entirely different value. Another case in which noise may occur is the use of feature values as signals for training a neural network. A deep neural network extracts low-dimensional feature vectors from high-dimensional sensor information, and the extracted features can be used as the target signals of another network. For example, some studies use the feature values of target images as a signal to optimize robot behaviors~\cite{Grasp2Vec,sii20}. When employing such a reward signal to acquire advanced behavior, a variety of noise types not intended by the experimenter will occur.
\begin{figure}[t] \begin{centering} \includegraphics[width=10.0cm]{fig/fig1.eps} \put(-268,117){\normalsize Agent} \put(-139,117){\normalsize Environment} \put(-203,143){\small action} \put(-203,97){\small reward} \put(-283,65){\small Training NNs} \put(-283,53){\small to maximize } \put(-283,41){\small reward signals.} \put(-210,35){\small negative} \put(-210,23){\small influence} \put(-110,15){\small true reward value} \put(-110,5){\small observed reward value} \put(-110,83){\normalsize Sensory Error} \put(-150,67){\small misinterpretation} \put(-55,67){\small noise} \caption{ There are several sources of noise in the reward signal of DRL in a real-world environment. If noisy signals are used as target signals, they delay convergence during training. } \end{centering} \end{figure} Among the types of unstable reward signals listed above, in this study we focus on fine-grained noise in the signals, which has been referred to as ``sensory error'' in previous research~\cite{tom2017}. In this case, we assume that the reward value is continuous rather than a binary signal. These unstable reward signals may inhibit DRL training, for instance by delaying convergence. Therefore, a DRL model needs a learning method that considers the uncertainty of the signals and updates its parameters appropriately. Hence, we propose a stable learning method for DRL with unstable reward signals that directly estimates the variance of the rewards and adjusts the parameter update amount accordingly. We incorporate an estimation of the variance of the target signals into the model as a subtask. This makes it possible to extend the original model without significant changes to its configuration. In addition, we use an attention branch network (ABN)~\cite{abn} structure that incorporates the feature map of the subtask into the main task. This conveys the learning results of the subtask to the policy network. We evaluated our method on the Atari game domain in the Open AI Gym~\cite{gym}.
To verify that the model can stabilize training convergence in an environment where rewards are unstable, we conducted experiments in which we added artificial noise to the reward signals. The results show that our extension to the base model improves performance. The primary contributions of this study are as follows. \\ 1) A model is proposed that stabilizes training convergence by incorporating a subtask that estimates the variance of the reward signals. \\ 2) An evaluation of models trained with disturbed rewards shows that the proposed approach improves performance. \section{Related Works} Several methods have been proposed to improve the convergence and stability of DRL, and they fall into two main types: one approach optimizes training, whereas the other reduces variance. The method proposed in this study uses the latter approach. To increase the convergence and stability of DRL, many optimization methods have been proposed. RMSprop~\cite{rmsprop} builds on AdaGrad~\cite{adagrad}, which adjusts the learning rate according to the update frequency of each parameter but suffers from a rapidly decreasing learning rate. Adam~\cite{adam} is a further improvement on traditional optimization methods and is used in many studies on deep learning. Furthermore, in terms of variance control, methods such as SAG~\cite{sag}, SDCA~\cite{sdca}, and stochastic variance reduction (SVR)~\cite{svr} have also been proposed. SVR-DQN~\cite{svr-dqn} reduces the variance resulting from approximate gradient estimation. These optimizers are essential advances for the convergence and stability of DRL. However, in many studies, experimenters simply choose empirically whichever optimizer suits their model. Several previous studies consider the uncertainty of the target signals.
The authors of \cite{tom2017} define a Markov decision process in the presence of reward misinterpretations and observation failures and propose a sampling-based way to deal with incorrect rewards. However, that study investigates only tabular rewards and does not consider continuous control. There are also studies that address the overestimation error in Q-learning. Double DQN uses separate Q-networks for action selection and Q-function value calculation~\cite{double-dqn}; this allows it to deal with overestimation errors by replacing positive bias with negative bias. Another approach is to reduce the target approximation error, and an efficient way to do so is to average multiple models. Averaged DQN estimates the current action value using the K previously computed value estimates~\cite{average-dqn}. In contrast, an estimator has been proposed to reduce the variance of the reward signal~\cite{noise-2}; this model updates the discounted value function instead of the sampled rewards. In this study, we propose a model extension that uses a mechanism to estimate the variance of the reward signals directly. The model estimates the mean and variance of the rewards obtained from the environment, and it can be easily integrated with the base model as a subtask. Using the estimated variance, the model updates its parameters so as to reduce the effects of noise. To apply this approach, we adopt an actor-critic type network that predicts the policy and the state value with the actor and critic, respectively. \section{Method} To solve the problem described above, we extend the DRL model with a subtask that predicts the variance of the reward signals. Figure 2 shows an overview of the proposed network architecture. The model consists of a base neural network model and an extended branch network. We describe the base model in Section 3.1 and the proposed extension in Section 3.2.
\subsection{Base model: ABN-A3C} As the base model, we adopt a DRL model that combines the ABN~\cite{abn} and the asynchronous advantage actor-critic (A3C)~\cite{a3c}. We choose an actor-critic type network so that we may incorporate a subtask that predicts the variance of the reward signals. Moreover, the ABN enables us to visualize the focus of the subtask using a feature map. The base model, ABN-A3C, consists of a feature extractor that extracts features from the input image, a value branch that outputs state values, and a policy branch that outputs actions. The policy branch also uses the feature map $f(s_t)$ of the value branch as input. Feature map $f(s_t)$ is extracted from the current state $s_t$, and the value branch outputs the maximum value of feature map $f(s_t)$ using global max pooling. This emphasizes the most distinctive pixels in the feature map of the subtask when it is incorporated into the main task. The details of each model are described below. \textbf{A3C: } The training of an A3C is stabilized by using diverse policy searches while running multiple agents in parallel. In asynchronous learning across multiple environments, there is a globally shared set of parameters and a separate set for each thread. Each worker's parameters are copied from the global network's parameters, and the parameters learned by agents in different environments are reflected asynchronously in the global network. The gradient exponential moving average of RMSprop, which is used as the optimizer, is also shared globally. A3C takes advantage of its ability to train online and updates the state value using an estimate of the reward several steps into the future, as opposed to methods that estimate the reward for the next step only. As a result, learning is more stable because a more reliable estimate of the error in the current state value is used. The value of $adv$, which is used to update the estimated value, is calculated by the following equation.
\begin{eqnarray} adv = \sum_{i=0}^{k-1}{\gamma^i r_{t+i}} + \gamma^k V(s_{t+k}) - V(s_t) \end{eqnarray} where $k$ indicates how many future steps are used in the prediction. We chose the prediction step that gave the best results after trying several experimental settings; in our experiments, we set $k=5$. The A3C trains two models: an actor network, which represents the behavior of the agent, and a critic network, which predicts the expected rewards. The actor network is trained to predict the policy $\pi$, i.e., the probability of taking each action in a given state. The critic network is trained to predict the estimated state value $V$. Because the estimated values are independent, they are easy to learn even when the action space is continuous. \textbf{ABN: } An ABN is a model that makes it possible to visualize the areas of focus and improve the accuracy of the network by incorporating feature maps of the subtask into the main task. In the ABN, we compute a new feature map $g'(s_t)$ from the feature map $f(s_t)$ of the value branch and the output of the feature extractor using the following residual mechanism~\cite{res}: \begin{eqnarray} g'(s_t) = (1+f(s_t)) \cdot g(s_t) \end{eqnarray} In this way, the state value of the current state is reflected in the action, and the loss of information from the original feature map is suppressed. The action is predicted by inputting $g'(s_t)$ into the LSTM network of the policy branch. Here, feature map $f(s_t)$ represents the features used to optimize the subtask. By visualizing the feature map overlaid on the input image, it is possible to show where the network is focusing its attention in the input image.
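The two mechanisms above, the $k$-step advantage and the residual attention map, can be sketched directly. The numerical inputs below are illustrative placeholders, not values from the paper:

```python
def n_step_advantage(rewards, values, gamma=0.99):
    # adv = sum_{i=0}^{k-1} gamma^i r_{t+i} + gamma^k V(s_{t+k}) - V(s_t)
    k = len(rewards)
    ret = sum(gamma**i * r for i, r in enumerate(rewards))
    return ret + gamma**k * values[-1] - values[0]

def attention_residual(f, g):
    # ABN residual mechanism: g'(s_t) = (1 + f(s_t)) * g(s_t);
    # with f = 0 the original feature map g passes through unchanged.
    return (1.0 + f) * g

rewards = [1.0, 0.0, 1.0, 0.0, 1.0]      # k = 5 steps, as in the experiments
values = [2.0, 0.0, 0.0, 0.0, 0.0, 1.5]  # V(s_t), ..., V(s_{t+5}); only the
                                         # endpoints enter the advantage
adv = n_step_advantage(rewards, values)
print(round(adv, 4))  # 2.3672
print(attention_residual(0.5, 2.0))  # 3.0
```

In a real network, `f` and `g` would be feature-map tensors of matching shape, and the element-wise product implements the attention weighting described above.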
\begin{figure} \includegraphics[width=12.2cm]{fig/fig2.eps} \put(-89,175){\normalsize Variance} \put(-89,105){\normalsize Value} \put(-38,28){\normalsize Policy} \put(-289,63){\normalsize Feature} \put(-289,52){\normalsize extractor} \put(-340,63){\small Grayscale} \put(-340,53){\small images} \put(-247,112){\small $g(s_t)$} \put(-180,69){\small $f(s_t)$} \put(-161,38){\small $g'(s_t)$} \put(-171,29){\normalsize $+$} \put(-197,29){\normalsize $\times$} \put(-138,88){\small global max} \put(-138,78){\small pooling} \put(-138,160){\small global average} \put(-138,150){\small pooling} \put(-98,34){\small LSTM,} \put(-98,24){\small FC layer} \put(-312,196){\normalsize \textcolor{blue}{\textbf{Extended new branch}}} \put(-312,185){\normalsize \textcolor{blue}{\textbf{(Variance branch)}}} \put(-298,160){\normalsize \textcolor{red}{\textbf{Base model}}} \put(-298,149){\normalsize \textcolor{red}{\textbf{(ABN-A3C)}}} \put(-70, 160){\small \textbf{negative}} \put(-70, 150){\small \textbf{log-likelihood}} \caption{ Overview of the proposed network for predicting the uncertainty of signals as a subtask. The red frame indicates the base model, and the blue frame indicates the proposed extended branch. } \label{fig1} \end{figure} \subsection{Variance branch for predicting uncertainty in rewards} To stabilize the learning convergence, we extend the base model described in the previous section. The aim is to optimize the model's parameters while ignoring the effects of reward noise. Here, the reward signal with noise is assumed to have been generated according to some probability distribution from an unknown generative model. We assume a Gaussian distribution in this study. We use the branching structure of ABN to add a new branch called the variance branch, which takes the feature map as input. The variance branch is similar to a stochastic multi-time scale recurrent neural network (SMTRNN)~\cite{smtrnn}. 
The SMTRNN is a type of recurrent neural network that enables the predictive learning of probability distributions based on likelihood maximization. The model extends the conventional learning method of point prediction based on squared-error minimization. It learns to minimize the negative log-likelihood and obtains the stochastic structure that underlies the target signals. To estimate the variance of the prediction of state value $V^{\pi}(s_t)$, the probability density $p(r_t|s_t,\theta)$ of reward $r_t$ at time step $t$ during an episode is defined as \begin{eqnarray} p(r_t | s_t,\theta)=\cfrac{1}{\sqrt{2\pi\nu_t}}\exp \left( -\cfrac{(V^{\pi}(s_t)-r_t)^2}{2\nu_t} \right) \end{eqnarray} and the likelihood $L$ is defined as \begin{eqnarray} L = \prod_{t=1}^T p(r_t | s_t,\theta) \end{eqnarray} where $\theta$ denotes the model parameters. Maximizing this likelihood (i.e., minimizing the negative log-likelihood) is equivalent to minimizing a weighted prediction error in which the output error is divided by the predicted variance $\nu_t$. The model learns while ignoring errors in rewards that contain large variance, i.e., large noise. As a result, the training for the state value is stabilized. In our method, the squared-error calculation of the state value and reward is replaced by the above function. In addition, the configuration of the variance branch is based on that of the value branch. To smooth the entire feature map of the variance branch, we adopt global average pooling in the final layer. The stabilization of the state value prediction caused by the variance prediction is reflected in the stability of the behavioral prediction in the policy branch. This is a result of the incorporation of the feature-map mechanism of ABN described in the previous section. Visualization of the added branch network is also possible. \section{Experiments} \subsection{Model and environment setup} To evaluate our method, we used the model to learn to play the Atari games in the OpenAI Gym~\cite{gym}. We used three games: Breakout, Seaquest, and Pong.
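Returning to the variance branch, the Gaussian negative log-likelihood that replaces the squared error can be sketched as follows (a minimal illustration; the function name and toy numbers are our own):

```python
import numpy as np

def gaussian_nll(value_pred, reward, var):
    """Negative log-likelihood of a reward under N(V(s_t), nu_t).

    Dividing the squared error by the predicted variance down-weights
    errors on high-variance (noisy) rewards during training.
    """
    return 0.5 * np.log(2.0 * np.pi * var) + (value_pred - reward) ** 2 / (2.0 * var)

# The same prediction error is penalised less when the predicted variance is larger.
err_low_var = gaussian_nll(1.0, 2.0, var=0.1)
err_high_var = gaussian_nll(1.0, 2.0, var=10.0)
```

This is why the model can "ignore" noisy reward samples: it simply predicts a large $\nu_t$ for them.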
As the input image for each game, we used $84\times84$ grayscale images of four time steps, and for the training, we used RMSprop~\cite{rmsprop} as the optimizer. Its learning rate is $7\times10^{-4}$, and the discount rate is 0.99. The number of workers in the A3C is 32. Table I lists the parameters of our model. We determined the above parameters by trial and error, choosing the parameter set that yielded the best results. \begin{table} \centering \begin{tabular}{c|c} \multicolumn{2}{c}{TABLE I: Structures of the networks} \\ \hline Network & Dimensions \\ \hline \hline Feature Extractor & conv@16chs - BN - ReLU - \\ & conv@32chs - BN - ReLU \\ \hline Value Branch & conv@32chs - BN - ReLU - \\ & conv@64chs - BN - ReLU - \\ & conv@1chs - BN - MaxPooling \\ \hline Policy Branch & eq.(2) - conv@32chs - BN - ReLU - \\ & LSTM@256 - FC@ActionNum \\ \hline Variance Branch & conv@32chs - BN - ReLU - \\ & conv@64chs - BN - ReLU - \\ & conv@1chs - BN - AvePooling - exp \\ \hline \end{tabular} \begin{center} BN: Batch normalization, FC: Fully connected \end{center} \end{table} \subsection{Evaluation metrics} To evaluate the effectiveness of the proposed method, we added artificial noise to the reward signals in the experiments. The noise followed a Gaussian distribution of variance $\sigma^2$. In our experiments, we set $\sigma^2$ to 0.0, 0.03, and 0.05. When $\sigma^2=0.0$, there is no noise in the reward signals, and the noise increases as $\sigma^2$ increases. We compared the proposed method with a base model that does not have a mechanism for estimating the variance in the reward signals. We performed experiments in each game environment five times while changing the initial weights of the model. \section{Results and Discussion} In this section, we present the results of experiments in multiple game environments to evaluate the robustness of the proposed method to noise in the reward signals.
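The noise injection used in this evaluation can be sketched as a thin wrapper around the environment's reward (a hypothetical helper; the paper does not describe its implementation):

```python
import numpy as np

def noisy_reward(reward, sigma2, rng):
    """Add zero-mean Gaussian noise of variance sigma^2 to a reward signal."""
    return reward + rng.normal(0.0, np.sqrt(sigma2))

rng = np.random.default_rng(0)
# sigma^2 = 0.0 leaves the reward untouched; larger sigma^2 perturbs it more.
clean = [noisy_reward(1.0, 0.0, rng) for _ in range(100)]
noisy = [noisy_reward(1.0, 0.05, rng) for _ in range(100)]
```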
We also present a feature map of the model and analyze the points of focus in each game. Finally, we discuss the suitability of the proposed method for other deep neural network models. \subsection{Atari game performance} The scores during the training of the proposed and base models for each game are shown in Fig. 3. The results show that the variance in the scores increases with the variance in the reward signals. The time to convergence increases because the teaching signal given to the model is not stable. The proposed method converges faster than the ABN-A3C base model, regardless of the size of the variance. However, the maximum score of the proposed method is about the same as that of the base model. These results show that the proposed method predicts the mean of the reward signal and converges to the same results as the base model in less time. \begin{figure}[t] \begin{centering} \includegraphics[width=12.2cm]{fig/fig3.eps} \put(-294,170){\normalsize Breakout} \put(-184,170){\normalsize Seaquest} \put(-60, 170){\normalsize Pong} \put(-345,120){\normalsize \rotatebox{90}{$\sigma^2=0.00$}} \put(-345,65){\normalsize \rotatebox{90}{$\sigma^2=0.03$}} \put(-345,10){\normalsize \rotatebox{90}{$\sigma^2=0.05$}} \put(-302,156){\scriptsize ABN-A3C} \put(-302,148){\footnotesize ours} \put(-14, -5){\scriptsize 1e6} \put(-126, -5){\scriptsize 1e6} \put(-238, -5){\scriptsize 1e7} \caption{ Change in score during training under each condition (i.e., the type of game and level of reward signal noise). The vertical axis of each figure shows the score, and the horizontal axis shows the total number of worker epochs. Each colored area shows the range between the maximum and minimum scores. The red lines indicate the results of our method, and the blue lines indicate the results of the ABN-A3C base model. } \end{centering} \end{figure} The proposed method converges faster than the base model in all games, regardless of the level of noise in the reward signal.
This is also true when there is no noise ($\sigma^2=0.0$). Although rewards take discrete values in the standard games of the Atari domain, the results suggest that predicting the mean of the rewards is an effective strategy. We think this is because the ordinary rewards in Atari games already include uncertainty; for example, not all of the rewards given are valid. Our method may learn to ignore transient rewards that do not contribute to maximizing the cumulative reward. When the level of noise is low ($\sigma^2=0.03$), the results of the proposed and base models differ the most. The final convergence score of the base model varies depending on the initial weight values, which are randomly chosen. In contrast, the performance of the proposed method does not depend on these values. The proposed method adequately learns the variance in the rewards and stabilizes the training of the policy network. However, when the level of reward noise increases ($\sigma^2=0.05$), the results of the proposed method also degrade. When the noise reaches a certain level, the training is substantially disturbed. These results demonstrate the robustness of the proposed DRL method to unstable reward signals. In our experiment, the noise added to the reward was artificially set; hence, the impact of realistic reward noise on learning needs to be considered in future work. \subsection{Visualization of the feature map} Next, we visualize the feature map of the proposed model to ensure that the model focuses on the appropriate areas of the feature map. The feature map for each condition superimposed on the input image is shown in Fig. 4. In Breakout, the feature map shows that the model focuses on the movement of the ball. Furthermore, when the number of blocks decreases, the area of attention moves to the blank regions above the blocks (see the results for $\sigma^2=0.03$). The variance branch's feature map is similarly active, focusing mainly on areas of significant change on the screen.
In Seaquest, the feature map indicates focus on agents, enemies, and the bars representing the remaining oxygen. There is less movement in the feature map than in Breakout, which may be because this is a game in which the agent employs a waiting strategy. In contrast, in Pong, the feature map is not informative in most cases, even when the obtained scores reach their upper bounds. Reviewing the gameplay after training, we found that the agent repeated a specific pattern of behavior to score points. This may be because Pong itself does not require any complex behavior. \begin{figure} \begin{centering} \includegraphics[width=12.2cm]{fig/fig4.eps} \put(-300,145){\normalsize Breakout} \put(-188,145){\normalsize Seaquest} \put(-68, 145){\normalsize Pong} \put(-345,97){\normalsize \rotatebox{90}{$\sigma^2=0.00$}} \put(-345,48){\normalsize \rotatebox{90}{$\sigma^2=0.03$}} \put(-345,0){\normalsize \rotatebox{90}{$\sigma^2=0.05$}} \caption{ Examples of feature-map visualizations for each condition. The input image, the feature map of the value branch, and the feature map of the variance branch are shown for each condition. Redder regions indicate higher values in the feature maps. } \end{centering} \end{figure} The above results confirm the effectiveness of our proposed method. The model paid attention to the appropriate areas on the feature map, even in environments with unstable reward signals. Furthermore, the regions of focus of the variance branch feature maps differ from those of the value branch feature maps. In other words, each branch plays a different role in the network. \subsection{Scalability} The proposed method is broadly applicable to many conventional networks because it does not require significant changes to the configuration of the original model. However, because the subtask for predicting variance requires the prediction of state values, the network to be extended should be an actor-critic type network.
Because the network is extended using a branch structure, the computational complexity of the network increases; however, parallel computation is possible. Hence, the learning and prediction times should not be much different from those of the original network. The combined method of variance prediction and feature-map visualization could be used in applications other than DRL. We are investigating an extension to recurrent neural networks for end-to-end robot control~\cite{kase,suzuki}. Robot control is a particularly promising application because it is often affected by real-world noise. \section{Conclusion} In this study, we proposed a stable reinforcement learning method for scenarios in which the reward signal contains noise. We incorporated a subtask into an actor-critic-based DRL method. The model directly estimates the variance included in the reward obtained from the environment. Moreover, we input the feature map learned by the subtask in the critic network into the actor network. We evaluated our method in the Atari game environment of the OpenAI Gym. Our method enables us to stabilize the convergence of learning in an environment in which rewards are unstable. In future work, we plan to extend our method to real robot tasks. \section*{Acknowledgment} This work was supported by JST, ACT-X Grant Number JPMJAX190I, Japan.
\section{Introduction} \IEEEPARstart{D}{ementia} affects $850,000$ people in the UK and over 50 million globally, and is set to become the developed world's largest socioeconomic healthcare burden over coming decades \cite{world2012dementia, alz}. In the absence of any current treatment, there is an urgent need to focus on reducing the effects of symptoms and help to improve the quality of life and well-being of those already affected \cite{3livingston2020dementia}. The 2020 report of the Lancet Commission on dementia prevention, treatment, and care stresses the importance of individualised interventions to address complex medical problems, multimorbidity and neuropsychiatric symptoms in dementia, which lead to unnecessary hospital admissions, faster functional decline, and worse quality of life \cite{4pickett2018roadmap}. People with dementia have complex problems with symptoms in many domains. It is estimated that up to $90\%$ will develop behavioural and physical symptoms of dementia (BPSD) over the course of their illness, with agitation being one of the most common symptoms \cite{5feast2016behavioural}, and a frequent reason for nursing home placement \cite{6buhr2006caregivers}. Furthermore, patients with dementia often suffer from a number of co-morbid conditions and have a higher frequency of medical problems such as falls, incontinence, dehydration or urinary tract infection (UTI) - the commonest bacterial infection in the older patient population, and the commonest cause of sepsis in older adults \cite{7peach2016risk} with an associated in-hospital mortality of $33\%$ in this age group \cite{8tal2005profile}. If not detected and treated early, both BPSD and medical comorbidities frequently lead to emergency hospital admissions in dementia patients. Alzheimer's Research UK estimates that $20\%$ of hospital admissions in dementia patients are for preventable conditions, such as urinary tract infections. 
Besides significant costs, hospitalisation places dementia patients at risk of serious complications, with longer hospital stays, higher risk of iatrogenic complications, delayed discharge and functional decline during admission, which contributes to higher rates of transfer to residential care and in-patient mortality \cite{9fogg2018hospital}. Therefore, increased medical supervision, early recognition of deterioration in health status and rapid treatment are key to preventing unnecessary hospitalisation for 'ambulatory' conditions that could be treated outside of hospital, such as UTIs. Furthermore, ongoing monitoring of people with dementia allows immediate detection of behavioural disturbances, enabling earlier psychosocial and environmental interventions to reduce patients’ distress and prevent further escalation and hospitalisation. However, monitoring and supporting individuals in an ongoing manner is a resource- and cost-intensive task, often not scalable to larger populations. Utilising remote monitoring technologies with the help of caregivers can enable practical and generalisable solutions. As part of the research in the Care Research and Technology Centre at the UK Dementia Research Institute (UK DRI), we have been developing and deploying in-home monitoring technologies to help and support people affected by dementia. Our research has led to the development of a digital platform that allows collecting and integrating in-home observation and measurement data using network-connected sensory devices \cite{Enshaeifar20}. In this paper, we discuss how our in-home monitoring data and machine learning algorithms are used to detect early symptoms of agitation and UTI in people with dementia living in their own homes. Sensing technologies have been increasingly used to monitor activities and movements of elderly patients living in their own homes \cite{11majumder2017smart, 12turjamaa2019smart, 13peetoom2015literature}.
Interpreting this information, however, demands considerable human effort, which is not always feasible. The use of analytical algorithms allows integration and analysis of rich environmental and physiological data at scale, enabling rapid detection of clinically significant events and development of personalized, predictive and preventative healthcare. Deep learning models have been applied in a variety of healthcare scenarios to identify the risk of various clinical conditions or predict outcomes of treatment \cite{miotto2016deep, ross2017risk}. Recently, there have been several implementations of Recurrent Neural Networks (RNNs) to create learning models for time-series healthcare data analysis \cite{lipton2015learning, esteban2016predicting, choi2016doctor}. The behavioural and physiological symptoms and patterns in long-term conditions such as dementia appear in the data over a long period of time and can fluctuate and change over the course of the disease. Machine learning models such as standard RNNs, however, are not suitable for analysing long sequences of time points. To address the long-sequence analysis issue in RNNs, other methods such as Bidirectional RNNs, LSTMs and GRUs have been used \cite{baytas2017patient, harutyunyan2019multitask}. There have also been attempts to apply attention mechanisms to clinical datasets \cite{choi2016retain, ma2017dipole, bai2018interpretable, ma2019adacare,song2018attend} to improve the performance of analysing imbalanced and long-tail time-series data. A fundamental limitation of these models is their adaptivity and generalisability. When long-distance symptoms and patterns are related to a specific condition, the generalisability and performance of the existing models are limited. The long sequences of data points and the changes in the ongoing conditions vary between patients, and often there are no large labelled training sets to train the models for all the variations.
Deep learning offers a new opportunity to train models that can pay attention to correlations and long-distance relations between patterns and sequences. However, off-the-shelf deep learning models require large training sets. When applying neural networks to clinical data, there are two main challenges: 1) selecting the important timesteps and features from long sequences of data to create generalisable models; and 2) imbalance in datasets. Neural networks are very effective in finding trends in datasets. Models such as Recurrent Networks use the positions of the input and output sequences to generate a sequence of hidden states. This is computationally expensive and limits the computation of global dependencies \cite{vaswani2017attention}. In these models, the computational complexity of relating input or output positions also grows as the distance between positions increases, which makes it very challenging to learn dependencies and correlations between long-distance patterns and time points \cite{hochreiter2001gradient}. Additionally, clinical datasets are often imbalanced, with content spanning ensembles of heterogeneous data. Most clinical datasets contain far more normal cases (i.e., negatives) than abnormal data points (i.e., positives). In our dataset, which includes a large set of in-home environmental and physiological data from people with dementia, the number of positive cases for infections is much smaller than the number of negative cases. In large parts of the data, the true status of the infection is unknown (i.e., the data is partially labelled due to the limitations in accessing the patients' clinical records or knowing the presence of any infections without a test). This issue causes learning models to exhibit a bias towards the majority class: they may ignore the minority class or make decisions based on a partial set that is not broadly representative of the cases \cite{johnson2019survey}.
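One way to address this imbalance without resampling, which this paper later adopts via the focal loss of Lin et al., can be sketched as follows (our own minimal binary implementation, not the paper's code):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss.

    Down-weights well-classified (mostly majority-class) examples by the
    factor (1 - p_t)^gamma, so the rare positive class keeps influencing
    the gradient without synthetic over-sampling.
    """
    p_t = np.where(y == 1, p, 1.0 - p)          # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# A confident, correct prediction is down-weighted far more than a hard one.
easy = focal_loss(np.array([0.95]), np.array([1]))
hard = focal_loss(np.array([0.30]), np.array([1]))
```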
There have been several works on implementing attention mechanisms \cite{vaswani2017attention} to improve the generalisability of learning models in analysing time-series data. However, Jain \textit{et al.} \cite{jain2019attention} found that there are limitations in the weights generated by attention-based models which can lead to wrong predictions. Hence, we need to be cautious in using attention mechanisms and their explanations in designing deep learning models. While attention-based models are promising for healthcare time-series data analysis, considering the time and feature dependencies of the predictions poses a challenge for this type of model. Over-sampling, which augments the data by generating synthetic samples \cite{chawla2002smote}, and down-sampling, which prunes the samples in the majority classes, are among the typical methods used to deal with imbalance issues in datasets \cite{liu2008exploratory}. However, the variations present in real clinical data are important aspects of the observations and measurements that may not be preserved in augmented data generated by sampling methods. It is crucial to find an efficient way to address the imbalance issue without modifying or reducing the original data in pre-processing steps \cite{krawczyk2016learning}. Our goal is to propose a model to address the challenges mentioned above. To support clinical treatment and adapt to real-world sensory data readings, the model should filter out redundant and less informative data. Furthermore, the model should explain its predictions by indicating which time periods and sensors are important for the predictions. Last but not least, the model should adapt to imbalanced data. \begin{figure*} \centering \includegraphics[width=\linewidth]{Figures/introduction/architecture.png} \caption{An overview of the proposed solution for healthcare data analysis. The data is encoded by positional encoding before passing to the model.
The proposed rationalising block extracts important information and passes it to the higher layers. The block contains a rational layer that extracts important time steps, a Long Short-Term Memory (LSTM) model that processes the extracted data, and an attention layer that attends to suitable features. The block first extracts the important time steps and then attends to different features of the pruned data, which are used to make a prediction. All the layers are trained simultaneously. } \label{fig:overview} \end{figure*} \section{Design, setting and participants} Real-time, continuous measurement methodologies enabled by the recent advances in pervasive computing and ‘smart-home’ technologies provide opportunities to monitor the behaviour and health status of elderly people using wearable technology or environmental sensors \cite{11majumder2017smart, 12turjamaa2019smart, 13peetoom2015literature}. Computer-derived algorithms have been developed to analyse sensor data and identify patterns of activity over time. These can be applied to detect changes in activities of daily living in order to predict disease progression and cognitive decline. For instance, the ORCATECH group used a continuous in-home monitoring system and pervasive computing technologies to track activities and behaviours such as sleep, computer use and medication adherence to capture changes in cognitive status \cite{35lyons2015corrigendum}. They also demonstrated the ability of machine learning algorithms to autonomously detect mild cognitive impairment in older adults \cite{36akl2015autonomous}. Machine learning models have also been used to detect clinically significant events and changes in health status.
Much of the previous work focused on detection and prediction of falls using wearable accelerometers or other motion detectors \cite{37schwickert2013fall}, as well as tracking behavioural symptoms such as sleep disturbances \cite{38lazarou2016novel}, agitation \cite{39bankole2012validation}, and wandering \cite{40fleiner2016sensor} in elderly patients. However, there is limited research on the use of machine learning models for detection of health changes such as infection in the context of smart homes. An early supervised UTI detection model has been described using in-home PIR sensors \cite{41rantz2011using}; however, it relied on activity labels and annotations in the training dataset, which are extremely time-consuming to produce and not generalisable to real-world situations with large amounts of unlabelled data collected from uncontrolled environments. We have previously proposed an unsupervised technique that could learn an individual’s movement patterns directly from the unlabelled PIR sensor data \cite{42enshaeifar2019machine}. Furthermore, the existing research and data-driven solutions are often applied only to small-scale pilot studies and do not provide evidence of scalability and generalisability. They are also limited in analysing long-term patterns and correlations that appear in the data. Attention-based models, which can overcome these problems, have never been applied to sensor data for detecting clinically significant events or changes in health status in dementia patients. This is the first study to use deep learning and attention-based methods to perform risk analysis for behavioural symptoms and health conditions such as UTIs in people living with dementia. The proposed model improves the accuracy and generalisability of machine learning models that use imbalanced and noisy in-home sensory data for risk analysis. An analysis of the suitability of the digital markers and the use of in-home sensory data is explored in an ablation study.
The proposed model is compared with several baseline models and state-of-the-art methods. The proposed approach has been evaluated in an observational clinical study. Participants (n=88, age $=81\pm6.5$) were recruited for a six-month trial period. The proposed solution provides a recall of $91\%$ and precision of $83\%$ in detecting the risk of agitation and UTIs. We have also set up a framework and a clinical response team that use the risk alerts generated by the models for ongoing support and management of the conditions in people living with dementia. Using high-resolution in-home observation and measurement data in association with advanced machine learning methods leads to early and timely interventions and has a significant impact on reducing preventable and unplanned hospital admissions in people affected by dementia. A key challenge in using analytical and predictive models for risk analysis is identifying and collecting digital marker data using in-home sensory devices. The capacity of the proposed model to address time-series feature identification and data imbalance enables its use in a wide range of healthcare and risk analysis applications using in-home digital markers. \begin{figure*} \centering \includegraphics[width=0.95\linewidth]{Figures/method/raw_activity.png} \caption{Visualisation of the sensor readings. The x-axis represents the time of day at which the sensors were activated. The y-axis represents the days over a period of 8 months for one patient. Each colour represents a type of environmental activity sensor. Similar colours along the y-axis represent similar patterns of activity around the same time on consecutive days. Greater distortion or merging of colours along the y-axis represents more change in the pattern of activity over time.
} \label{fig:visual_raw_data} \end{figure*} \section{Method} We introduce a model that can identify the important time steps and features and utilise long-distance dependencies to make better predictions. The proposed model provides a prediction based on the selected time points and the selected features from the raw observation and measurement data. Figure \ref{fig:overview} shows how the data changes during the processing. The model selects important time steps through a pruning process. After pruning the data, it pays attention to different features and uses them to make the predictions. Unlike methods such as cluster sampling \cite{wu2020stratified}, we select the important time steps of each sample instead of selecting a portion of samples for training. In contrast to statistical feature selection methods such as sequential feature selection \cite{aha1996comparative}, the proposed model selects important time steps depending on the input data. We use focal loss \cite{lin2017focal} to assign priority to the minority class without generating synthetic samples. \begin{Figure} \centering \includegraphics[width=0.9\linewidth]{Figures/method/agg_data.png} \captionof{figure}{A heat-map of the aggregation of the raw data. The readings are aggregated per hour within each day.} \label{fig:visual_agg_data} \end{Figure} \subsection*{Data sources and pre-processing} We have collected the data as part of an observational clinical study in people living with dementia from December 2018 to April 2020. Each participant had a confirmed diagnosis of dementia (mild to severe) at least three months before recruitment and had been stable on dementia medication. The collected data contains continuous environmental sensor data from the houses of patients with dementia who live in the UK. The sensors include Passive Infra-Red (PIR), motion and door sensors and smart power plugs, produced by Develco in Aarhus, Denmark.
The sensors were installed in the bathroom, hallway, bedroom, living room (or lounge) and kitchen in the homes and also on the fridge door, kettle and microwave (or toaster). The sensors also include network-connected physiological monitoring devices that are used for submitting daily measurements of vital signs, weight and hydration. The data is integrated into a digital platform that we developed in our past research \cite{Enshaeifar20}, designed in collaboration with clinicians and user groups to support people with dementia. A clinical monitoring team set up as part of our observational study has used the platform to annotate the data daily and verify the risk analysis alerts. Based on the annotations, we select four incidents, including agitation, Urinary Tract Infection (UTI), abnormal blood pressure and abnormal body temperature, to create binary labels for our data. More specifically, a label is set to true when an abnormal incident is verified by the monitoring team, and false otherwise. We then use the environmental data to infer whether any incident has happened within a given day. Fig. \ref{fig:visual_raw_data} shows an example of the collected data. To pre-process the data, we aggregate the readings of the sensors within each hour of the day, as shown in Fig. \ref{fig:visual_agg_data}. Appendix 1 shows a list of potential digital markers and sensory data that can be used in dementia care. In the appendix, we also show a screenshot of the platform that is used for collecting the data. \subsection*{Machine learning model} We aim to use the environmental sensors to predict possible incidents and avoid delayed treatment. Furthermore, the model should explain its inferences by indicating which periods of time and which sensors are important for the predictions. In other words, the model can remove the redundant or less informative information and use the rest of the data to give the prediction, as shown in Fig. \ref{fig:visual_select_data}.
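The hourly aggregation step described above can be sketched with pandas (the event log below is synthetic and the column names are our own assumptions, not the platform's schema):

```python
import pandas as pd

# Hypothetical event log: one row per sensor activation.
events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2019-03-01 08:12", "2019-03-01 08:47",
        "2019-03-01 09:05", "2019-03-02 08:30",
    ]),
    "sensor": ["kitchen", "kitchen", "hallway", "kitchen"],
})

# Count activations per sensor within each hour of each day, giving a
# (day, hour) x sensor table like the heat-map of the aggregated data.
counts = (events
          .groupby([events["timestamp"].dt.date,
                    events["timestamp"].dt.hour,
                    "sensor"])
          .size()
          .unstack(fill_value=0))
```

Each row of `counts` is one (day, hour) bin, which is the input resolution used by the model.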
\begin{Figure} \centering \includegraphics[width=0.9\linewidth]{Figures/method/selected.png} \captionof{figure}{Selected time steps from the raw data. These time steps are selected by the model. The model learns to identify time steps that are more important in predicting the outcome.} \label{fig:visual_select_data} \end{Figure} As discussed earlier, in healthcare data analysis, often, the predictions are based on a long sequence of data measured and collected at different time-points. Accessing and feeding more data helps to train more accurate models. However, more information can also mean more noise in the data, and the imbalance in the samples that are given to the model can also lead to decision bias. An efficient model should be able to process and utilise as much data as available. However, the model should also avoid the common pitfalls of noise and bias. To address these issues, we have studied the use of attention-based models. This group of models will utilise all the available information and, in each sequence, will identify the time-points that provide the most information to the training and prediction. This attention and selection process is an embedded step in the model. It will allow the model to be flexible and generalisable for different sequences with variable lengths and for different combinations of features and values that are represented in the data. Before explaining our proposed model and its contributions to creating a generalisable solution for time-series healthcare data analysis, we provide an overview of the related work. We discuss the use of attention-based models in other domains and explain how the ideas presented in the existing work have led to the design of our current model. \begin{Figure} \centering \includegraphics[width=0.9\linewidth]{Figures/method/attention.png} \captionof{figure}{After selecting the important time steps, the model learns which sensors should be attended to.
In this case, the model considers the bathroom sensor to make the most contribution to the prediction.} \label{fig:visual_attention_map} \end{Figure} The attention mechanisms have been introduced in Natural Language Processing (NLP) by Bahdanau \textit{et al.} \cite{bahdanau2014neural}. The attention-based models are widely used in NLP due to their capability of detecting important parts of a sequence and efficiently interpreting it. The attention-based models have also been used in continuous healthcare and clinical data analysis \cite{usama2020self}. Continuous clinical data are multivariate time-series data with temporal and sequential relationships. For each patient, the data is a set of time steps, and each time step contains medical features ($\mathbf{X} \in \mathbb{R}^{t \times d}$). The REverse Time AttentIoN model (RETAIN) is one of the first systems to use an attention mechanism for medical data \cite{choi2016retain}. In this model, there are two separate RNNs, one to generate the visit-level attention weights ($\boldsymbol{\alpha}$) and the other one for variable-level ($\boldsymbol{\beta}$) attention weights. In this model, the most relevant time step is the one associated with the largest value in $\boldsymbol{\alpha}$. Choi \textit{et al.} provided a method to find the most influential medical feature \cite{choi2016retain}. However, RETAIN cannot handle long-distance dependencies. To deal with this issue, Ma \textit{et al.} proposed Dipole, a predictive model for clinical data using Bidirectional RNNs \cite{ma2017dipole}. They have implemented the model using two different attention mechanisms: General attention and Concatenation-based attention. The results show that Concatenation-based attention outperforms General attention because it incorporates all the long-distance dependencies. In the above models, the input layer is simple and all data follows the same pipeline; in the Timeline model, however, Bai \textit{et al.} adapted the data pipeline \cite{bai2018interpretable}.
They use an attention layer to aggregate the medical features, and by modelling each disease progression pattern, they find the most important time steps. To deal with long-distance dependencies, Timeline implements Bidirectional LSTMs. One of the recent studies in this area is AdaCare \cite{ma2019adacare}, which uses Gated Recurrent Units (GRU). AdaCare utilises a convolutional structure to extract all the dependencies in the clinical data. AdaCare showed promising results in the explainability of the model. The models mentioned above have been developed based on recurrent networks. However, the sequential aspect of recurrent models is computationally inefficient. The SAnD model was developed solely based on the multi-head attention mechanism \cite{song2018attend}. Song \textit{et al.} implemented a positional encoding to include the sequential order in the model. The models mentioned above show significant improvements in the accuracy and performance of predictive models in the clinical field. However, incorporating both long-distance dependencies and feature associations is a challenging task. In the existing models, the analysis is at either time-step level or feature level. In this paper, we propose a model to detect and predict the risk of healthcare conditions by analysing long-distance dependencies in the patterns and sequences of the data. This information can be useful for clinical experts in the ongoing management of the conditions. The work also enables an automated process to alert the risk of adverse health conditions and to explore the symptoms related to the detected conditions. Our proposed model consists of two main components, a rationalising block and a classification block, as shown in Figure \ref{fig:overview}. In a high-level overview, the rational layers select the important time steps and pass them to an LSTM layer. The LSTM layer will ignore the trivial time steps and process the data for the attention block.
The classifier then uses these time points, after processing by the attention block, to give a prediction. The details of these blocks are explained in the following sections. \subsection*{Positional Encoding} To use the order of the sequence in the analysis, we add positional encoding (PE) before passing the data into the model. We use the sine and cosine positional encoding \cite{vaswani2017attention}. The encoding is shown in Equation \ref{eq:pe}, where $pos$ is the position of the time step, $i$ is the index of the sensor and $d$ is the dimension of each time step. \begin{equation} \label{eq:pe} \begin{split} PE(pos, 2i) = \sin(pos/10000^{2i/d}) \\ PE(pos, 2i + 1) = \cos(pos/10000^{2i/d}) \end{split} \end{equation} \begin{comment} \begin{table*}[] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline & Rational & Attention & Residual & Focal loss & PE\\ \hline AUC - PR & 0.6412 & 0.7429 & 0.7827 & 0.7584 & 0.7239\\ AUC - ROC & 0.7675 & 0.8110 & 0.8360 & 0.8244 & 0.7801\\ \hline \end{tabular} \caption{Ablation Study Results. We remove each component one at a time to evaluate the performance of the model.} \label{tab:ablation} \end{table*} \end{comment} \subsection*{Rationalising Prediction} To add more focus on the time steps in the data that are more relevant to the predictions, the generator produces a binary mask to select or ignore specific time points. For example: if $\textbf{x} \in \mathbb{R}^{k \times f}$ contains $k$ time points and $f$ features for each time point, the generator will produce a binary vector $\textbf{z}=\{z_1,z_2,\dots,z_k\}$. The $i^{th}$ variable $z_i \in \{0,1\}$ indicates whether the $i^{th}$ time point in $\textbf{x}$ is selected or not. The selection of the $i^{th}$ time point is governed by a conditional probability given the input $x$. We assume that the selections of the time points are independent, so the generator defines a probability distribution over $\mathbf{z}$ that factorises into the individual selection probabilities.
The joint probability is given by: \begin{equation} \label{eq:joint_prob} p(z|x) = \prod^k_{i=1}p(z_i|x) \end{equation} \subsection*{Classifier} After exploring and selecting the most relevant time points, we train a classifier to provide the predictions. The trained classifier contains attention blocks and residual blocks. The attention block is an application of the self-attention mechanism to detect the important features. The attention mechanism detects important parts of a sequence. It has three key components: the input structure, the compatibility function and the distribution function \cite{galassi2019attention}. \noindent There are three inputs in the structure; Keys ($\mathbf{K} \in \mathbb{R}^{{n}_{k} \times {d}_{k}}$), Values ($\mathbf{V} \in \mathbb{R}^{{n}_{v} \times {d}_{v}}$) and Query ($\mathbf{Q} \in \mathbb{R}^{{n}_{q}}$), where $n_k$, $n_v$ and $n_q$ are the numbers of inputs and $d_k$ and $d_v$ are the dimensions of the outputs. They can come from the same source or from different sources. If $\mathbf{K}$ and $\mathbf{q}$ come from the same source, it is self-attention \cite{vaswani2017attention}. $\mathbf{K}$ and $\mathbf{V}$ represent the input sequence, which could be either annotated or raw data. $\mathbf{q}$ represents the reference sequence for computing the attention weights. A compatibility function is used to combine and compare the $\mathbf{q}$ and $\mathbf{K}$ values. The distribution function computes the attention weights ($\mathbf{a} \in \mathbb{R}^{{d}_{k}}$) using the output of the compatibility function ($\mathbf{c} \in \mathbb{R}^{{d}_{k}}$). We obtain the attention by Equation \ref{eq:attention}. The $Q, K, V$ are matrices formed by the query, key and value vectors, respectively. Since we use self-attention, $Q$, $K$ and $V$ are calculated from the same inputs with different weight matrices.
\begin{equation} \label{eq:attention} \textup{Attention}(Q,K,V) = \textup{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V \end{equation} The architecture of the attention block is the same as described in \cite{vaswani2017attention}. We employ a residual connection \cite{he2016deep} followed by a normalisation layer \cite{ba2016layer} inside the attention block. Residual blocks and the output layer process the output of the attention block. \subsection*{Objective function} The training samples in healthcare datasets are often imbalanced due to the low prevalence and sporadic occurrence of the incidents. In other words, some of the classes contain more samples than others. For example, only 25\% of the data we collected are labelled as positive. More details of the dataset will be clarified in the following section. To deal with the imbalance issue, we use focal loss \cite{lin2017focal} as the objective function of the classifier, shown in Equation \ref{eq:loss_clf}: \begin{equation} \label{eq:loss_clf} \mathit{L_c} = - \alpha(1-p)^\beta \log(p) \end{equation} \noindent where $\alpha$ and $\beta$ are hyper-parameters of the focal loss ($\alpha$ balances the classes and $\beta$ down-weights easy samples), and $p=f(x,z)\,y + (1-f(x,z))(1-y)$. $f(x,z)$ is the probability estimated by the classifier and $y \in \{0,1\}$ is the label of $x$. In addition to the loss function used in the classifier, a loss is calculated for the generator to encourage short rational selections.
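As a minimal illustration, the focal loss of Equation \ref{eq:loss_clf} for a single binary sample can be sketched as below. The default values of $\alpha$ and $\beta$ are common choices from the focal loss literature, not necessarily the values used in our experiments.

```python
import math

def focal_loss(f, y, alpha=0.25, beta=2.0):
    """Focal loss for one binary sample.
    f: classifier's estimated probability of the positive class;
    y: ground-truth label in {0, 1}."""
    # p is the probability the classifier assigns to the true class
    p = f * y + (1 - f) * (1 - y)
    return -alpha * (1 - p) ** beta * math.log(p)

# The (1 - p)^beta factor shrinks the loss of easy, confident predictions,
# so training focuses on the hard (often minority-class) samples:
easy = focal_loss(0.9, 1)   # confident and correct: small loss
hard = focal_loss(0.1, 1)   # confident but wrong: large loss
```

With $\beta = 0$ the expression reduces to a class-weighted cross-entropy, which is why the focusing parameter is the key ingredient for the imbalance problem discussed above.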
This penalty is shown in Equation \ref{eq:loss_gen}, where $\lambda$ is the parameter that weights the selection: \begin{equation} \label{eq:loss_gen} \mathit{L_g} = \lambda||\textbf{z}|| \end{equation} We then combine the focal loss and the loss from the generator to construct the overall loss function as shown in Equation \ref{eq:loss_combin}: \begin{equation} \label{eq:loss_combin} \mathit{L} = \sum_{(x,y)\in D}\mathbb{E}[\mathit{L_c} + \mathit{L_g}] \end{equation} \section{Results}\label{sec:res} \begin{figure*} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\linewidth]{Figures/results/TIHM/PR.png} \caption{PR} \end{subfigure} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\linewidth]{Figures/results/TIHM/ROC.png} \caption{ROC} \end{subfigure} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\linewidth]{Figures/results/TIHM/loss.png} \caption{Loss} \label{fig:TIHM_loss} \end{subfigure} \caption{Evaluation of the proposed methods using the in-home sensory dataset. (a) shows the Precision-Recall (PR) curve; (b) shows the Receiver Operating Characteristic (ROC) curve; and (c) shows the changes to the loss during the training. In (a) and (b) the results of the proposed model are also compared with a set of baseline models.} \label{fig:evaluation_TIHM} \end{figure*} \begin{figure*} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\linewidth]{Figures/results/TIHM/ablation_PR.png} \caption{PR} \end{subfigure} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\linewidth]{Figures/results/TIHM/ablation_ROC.png} \caption{ROC} \end{subfigure} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=\linewidth]{Figures/results/TIHM/Rate_changes.png} \caption{Selection Rate changes} \label{fig:TIHM_rate} \end{subfigure} \caption{An ablation study to evaluate the model; (a) shows the Precision-Recall (PR) curve; (b) shows the Receiver Operating Characteristic (ROC) curve; and (c) shows the selection rate changes.
In (a) and (b) the results are obtained by eliminating different components from the model. } \label{fig:ablation_tihm} \end{figure*} \noindent\textbf{Evaluation Metrics}: To evaluate our proposed method and compare it with the baseline models, we calculated different metrics. One of the primary metrics to assess the model is accuracy, which measures how close the predicted class is to the actual class. However, accuracy alone cannot be a good measure to evaluate the performance of a classifier. As a result, we also calculated the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves. The precision of class A is the proportion of samples predicted as class A that are correct, and the recall is the proportion of true class-A samples that have been detected. The ROC curve measures the model's capability to differentiate between classes. We do not report the results in terms of specificity and sensitivity. The reason is that in this study, we do not have access to the full electronic healthcare records and hospital admission data of all the participants. So reporting the specificity and sensitivity based only on the detected and evaluated labels in our dataset, which can only be a subset of the true and false cases for the cohort, could be misleading in terms of an actual and generalisable clinical finding. Instead, we have opted to evaluate the precision and generalisability of the prediction algorithm based on the existing labelled data and the known cases on which we could evaluate and verify the performance of the model. \\ \noindent\textbf{Baseline Models}: We compare our model with Logistic Regression (LR) \cite{sperandei2014understanding}, Long-Short Term Memory (LSTM) neural networks \cite{gers1999learning} and a fully connected Neural Network (NN) model \cite{hassoun1995fundamentals}.
LR is a discriminative model which can avoid confounding effects by analysing the association of all variables together \cite{sperandei2014understanding}. It is also a commonly used baseline model to evaluate the performance of proposed models \cite{harutyunyan2019multitask}. NN has the ability to learn complex relationships. Unlike LR, NN does not need to assume the variables are linearly separable. It has also been applied to a variety of clinical data sets \cite{lasko2013computational, che2015deep}. In the experiment, we used a neural network with one hidden layer of 200 neurons and a softmax output layer of two neurons, trained with cross-entropy loss and the Adam optimiser. LSTM is a powerful neural network for analysing sequential data, including time-series clinical datasets \cite{choi2016doctor, baytas2017patient}. It can associate the relevant inputs even if they are widely separated. Since our dataset consists of time-series sequences, we take the LSTM as another baseline model. In the experiment, we used a model that contains one residual block, one LSTM layer with 128 neurons and a softmax output layer with two neurons, trained with cross-entropy loss and the Adam optimiser. In the experiments, we aggregate the readings of each sensor per hour. Hence each data point contains 24 time points and eight features. We set the batch size to $32$, the learning rate to $0.0001$ and the sparsity to $0.001$. We divide the data into a train set and a test set. The numbers of training and testing samples in the datasets are 209 and 103 cases with their associated time-series data, respectively. The data is anonymised, and only anonymised data without any personally identifiable information is used in this research. \\ \noindent\textbf{Experiments}: The ROC and PR changes during training are shown in the first two graphs in Figure \ref{fig:evaluation_TIHM}. Overall, the proposed model outperforms the other baseline methods. The LSTM performs well in dealing with the time-series data.
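The fully connected baseline described above can be sketched as follows. The ReLU hidden activation, the random weight initialisation and the flattening of the $24 \times 8$ input are our assumptions; the 200-neuron hidden layer and two-neuron softmax output follow the configuration stated in the text, and training (cross-entropy loss, Adam) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# One hidden layer of 200 neurons over the flattened 24 x 8 input,
# and a two-neuron softmax output, as in the NN baseline above.
W1 = rng.normal(scale=0.05, size=(24 * 8, 200))
b1 = np.zeros(200)
W2 = rng.normal(scale=0.05, size=(200, 2))
b2 = np.zeros(2)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

def predict(x):
    """x: one (24, 8) aggregated day; returns two class probabilities."""
    h = np.maximum(0, x.reshape(-1) @ W1 + b1)  # hidden layer with ReLU
    return softmax(h @ W2 + b2)

probs = predict(rng.random((24, 8)))
```

This forward pass is only meant to make the baseline's shape concrete; a trained version would fit `W1`, `b1`, `W2`, `b2` by minimising cross-entropy with Adam.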
Compared to the other methods, the neural network converges much faster. However, the performance of the model fluctuates around epoch 30. The convergence and the fluctuation are due to the rational process. The model has to learn how to extract important time steps and pay attention to the features. This process is also reflected in Figure \ref{fig:TIHM_loss}: the loss fluctuates during that period. However, the model adjusts this fluctuation automatically and improves the performance. The overall results are also summarised in Table \ref{tab:results}. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|c|c|} \hline & LR & LSTM & NN & Proposed method\\ \hline AUC - PR & 0.3472 & 0.6901 & 0.5814 & \textbf{0.8313}\\ AUC - ROC & 0.5919 & 0.7644 & 0.7601 & \textbf{0.9131}\\ \hline \end{tabular} \caption{The evaluation results in comparison with a set of baseline models: Logistic Regression (LR), Long-Short Term Memory (LSTM) neural networks and a fully connected Neural Network (NN) model. Since the dataset is imbalanced, we calculated the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves to evaluate the performance.} \label{tab:results} \end{table} \section{Discussion} \textbf{Ablation Study}: We begin the discussion with an ablation study. Our model contains five important components: Rational layers, Attention layers, Residual layers, focal loss and positional encoding. We omit each component one at a time and explore how removing one of the components will impact the performance of the model. The experiments are shown in the first two graphs of Figure \ref{fig:ablation_tihm}. The orange line represents the model without the rational layer. Although its performance keeps increasing, it significantly underperforms the other variants. In other words, the rational layer plays an important role in the model.
Removing the positional encoding, attention layer, residual layer or the focal loss decreases the performance as well. The performance change caused by omitting each of these four components is quite similar. As shown in Figure \ref{fig:ablation_tihm}, the positional encoding helps the model to identify relevant patterns of the data over time and plays an important role in the performance of the model. The rate of selected time steps changes is shown in Figure \ref{fig:TIHM_rate}. \\ \textbf{Rationalising prediction}: the rational component helps to increase the accuracy of the model. Generally, the proposed rationalising method shows which time steps and features the model uses to give the prediction. These patterns and time steps can also be explored to identify and observe data and symptoms relevant to a condition in each patient. Using this component, a personalised set of patterns and symptoms can be explored for each patient. The last graph in Figure \ref{fig:ablation_tihm} shows the selection rate changes during the training phase. The model learns to extract the time steps, and the accuracy increases after the changes become stable. As mentioned in the ablation study, after learning to extract the important time steps, the proposed model outperforms the baseline models without rational mechanisms. In other words, the model extracts a sub-set of the time steps (e.g. part of the time steps are extracted from Figure \ref{fig:visual_agg_data} to Figure \ref{fig:visual_select_data}) to obtain a better prediction. As the learning process continues, the model tries different selections and finds the optimised selection rate. Compared to other models, the performance of the proposed model does not decrease during the training. The model learns to pay attention to the most relevant segments of the data and consider long-distance dependencies in the time-series data.
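The selection mechanism whose rate is discussed above can be sketched as follows, assuming independent per-step Bernoulli gates as in Equation \ref{eq:joint_prob}. The per-step probabilities here are illustrative random values rather than the generator's learned outputs.

```python
import numpy as np

rng = np.random.default_rng(1)

# p(z_i = 1 | x) for the 24 hourly time steps of one day
# (illustrative values; in the model these come from the generator).
p = rng.uniform(0.2, 0.9, size=24)

z = (rng.random(24) < p).astype(int)   # sampled binary mask over time steps
selection_rate = z.mean()              # fraction of time steps kept
selected = np.flatnonzero(z)           # indices of the retained hours

# Joint probability of this mask under the independence assumption
# of Equation (eq:joint_prob): product of the per-step probabilities.
joint = np.prod(np.where(z == 1, p, 1 - p))
```

The `selection_rate` quantity is exactly what is tracked over training in the selection-rate plot: as the sparsity penalty and the classification loss balance out, it stabilises around the learned fraction of informative time steps.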
In summary, the proposed model can not only explain the prediction but also abandon the redundant information in the data automatically. According to our experiments, the proposed model on average selects $61\%$ of the time points in the datasets to estimate the predictions.\\ \begin{figure*} \centering \includegraphics[width=\linewidth]{Figures/discussion/rational_pair_sim.png} \caption{Visualisation of the outputs within the rational block. The top figure visualises a sample which is validated with a True incident. The bottom figure is a sample which is validated with a False incident.} \label{fig:rational_pair} \end{figure*} \textbf{Paired analysis}: We then analyse how the rational block processes the positive and negative samples. As shown in Figure \ref{fig:rational_pair}, the rational block assigns weights to the positive and negative samples differently. More specifically, the model has learnt to extract different amounts and series of time steps based on the inputs. In this case, the model extracts more time steps for the positive case than for the negative case. Furthermore, the model pays attention differently based on the input data. In the example above, the model assumes the bathroom is the most important sensor in the positive sample. However, the model treats the bathroom and kettle as almost equally important sensors for predicting the negative case. After the model attends to the sensors at the selected time steps, the classifier gives the correct predictions. \begin{comment} \begin{itemize} \item We propose a novel rationalising block which is based on the rational and attention mechanism to process healthcare time-series data. \item We show how the focal loss helps to deal with the imbalance issues. \item The proposed model can be parallelised, and this improves the scalability in dealing with large datasets. \item We demonstrate how attention-based models can be used effectively in healthcare data analysis.
\item We have evaluated our model on an observational clinical study and show how it outperforms conventional machine learning methods. \item We present the effectiveness of the model in a real-world setting and describe how it is used to support people with dementia. \end{itemize} \end{comment} \subsection*{Translating machine learning research into clinical practice} Improving the quality of life by preventing illness-related symptoms and negative consequences of dementia has been set out as a major goal to advance dementia care. Agitation and infections have been highlighted as areas for priority development \cite{6buhr2006caregivers}. Our proposed model directly addresses these priorities in dementia care and intervention by enabling early detection of agitation and urinary tract infections in a remote healthcare monitoring scenario, providing an opportunity for delivering more personalised, predictive and preventative healthcare. When applied to a real-world clinical dataset in the context of the current clinical study, our proposed algorithm provided a recall of 91\% and precision of 83\% in detecting early signs of agitation and UTI from physiological and environmental sensor data. A clinical monitoring team verified the predictions by contacting the patient or carer when an agitation or UTI alert was generated. A set of clinical pathways for early interventions has also been developed for the clinical monitoring team to use when responding to the alerts. \subsubsection*{Relevance to patient outcomes} We would like to highlight an important aspect of using this type of analysis to evaluate healthcare and patient outcomes. Focusing only on accuracy as a metric for assessment of the solution within a specific cohort goes only so far \cite{MITNews}. Large studies and further experiments with different cohorts and various in-home deployment settings are required to assess how such algorithms will perform in noisy and dynamic real-world environments.
There are several examples of AI and machine learning algorithms that perform very well in controlled and laboratory settings, but the real-world experience is different \cite{MITNews}. In this study, the sensors and data collection happen in an uncontrolled, real-world environment. We have done several cross-validation, comparison and ablation studies to avoid overfitting the model and make sure the results are robust and reproducible. However, further independent trials and validation studies with larger cohorts are required to transform the current work into a product that can be used in real-world clinical and care settings. Another important point is that focusing only on the accuracy of the algorithm will not give a complete picture of the real effectiveness and impact of the solution on patient outcomes. Our agitation intervention protocol follows all current guidelines, which agree that individualised and person-centred non-pharmacological therapies are the first-line treatment for agitation in people with dementia \cite{52duff2018dementia, 53ijaopo2017dementia}. In line with the current guidelines, the initial assessment explores possible reasons for patients’ distress and addresses clinical or environmental causes first. The clinical monitoring team asks a set of standardised questions to evaluate the symptoms and to help the carer to identify potential causes of agitation such as pain, illness, discomfort, hunger, loneliness, boredom or environmental factors (temperature, light, noise level). The recognition and treatment of possible organic causes or triggering factors remains the mainstay of the intervention. In particular, the detection of delirium and of a possible underlying infection is of great importance, and the clinical monitoring team facilitates early diagnosis and treatment by liaising with the study’s clinical team and the patient’s GP.
Finally, the clinical monitoring team provides psychological support for the caregivers in order to reduce caregiver distress. In the future, we are planning to use multimodal sensor data to improve the classification of the agitation state, which will include measuring sound levels along with activity detected by environmental sensors. Similarly to the agitation protocol, in the case of a UTI alert the clinical monitoring team first responds by contacting the patient/carer to evaluate the symptoms. However, the diagnosis of UTI in dementia patients can be problematic, as these patients are less likely to present with a typical clinical history and localised urinary symptoms compared with younger patients \cite{54lutters2008antibiotic}. The team, therefore, arranges a home visit to perform a dipstick urine test. If the urine dipstick test is suggestive of infection (positive nitrites or leukocytes), the clinical monitoring team advises the person with dementia/carer to visit the GP the same day to obtain a prescription for antibiotics. The monitoring team also informs the GP of the test results and requests that antibiotics be prescribed. One potential criticism of our UTI intervention algorithm could be the possibility of antibiotic over-prescribing contributing to the spread of antibiotic resistance. However, recent evidence demonstrates that in elderly patients with a diagnosis of UTI in primary care, no antibiotics and delayed antibiotics are associated with a significant increase in bloodstream infection and all-cause mortality compared with immediate treatment \cite{55gharbi2019antibiotic}. Therefore, early prescription of antibiotics for this vulnerable group of older adults is advised in view of their increased susceptibility to sepsis after UTI and despite a growing pressure to reduce inappropriate antibiotic use.
The impact of our in-home monitoring technologies and the embedded machine learning models on clinical outcomes including hospitalisation, institutionalisation and mortality rates is part of an ongoing study. Nevertheless, the current work demonstrates the effectiveness of the proposed algorithm and its translation into real-life clinical interventions. Fig \ref{fig:rational_pair} illustrates individual cases of agitation and UTI correctly identified by the algorithm, with the digital markers demonstrating a behavioural anomaly. \section{Conclusion} To avoid unplanned hospital admissions and provide early clues to detect the risk of agitation and infections, we collected daily activity data and vital signs with in-home sensory devices. The noise and redundant information in the data lead to inaccurate predictions for traditional machine learning algorithms. Furthermore, traditional machine learning models cannot explain their predictions. To address these issues, we proposed a model that can not only outperform traditional machine learning methods but also explain its predictions. The proposed rationalising block, which is based on the rational and attention mechanism, can process healthcare time-series data by filtering out the redundant and less informative information. Furthermore, the selected data can be regarded as the important information to support clinical treatment. We also demonstrate that focal loss helps to improve the performance on the imbalanced clinical dataset and that attention-based models can be used effectively in healthcare data analysis. The evaluation shows the effectiveness of the model on a real-world clinical dataset and demonstrates how it is used to support people with dementia. \\ \section*{Acknowledgment} This research is funded by the UK Medical Research Council (MRC), Alzheimer's Society and Alzheimer's Research UK and supported by the UK Dementia Research Institute. \bibliographystyle{IEEEtran}
\section{Introduction} A compact complex manifold is called {\em rigid} if it has no nontrivial deformations. In \cite{rigidity} several notions of rigidity have been discussed, the relations among them have been studied and many questions and conjectures have been proposed. We state here only the part of \cite{rigidity}*{Definition 2.1}, which will be relevant for our purposes: \begin{definition}\label{rigid} Let $X$ be a compact complex manifold of dimension $n$. 1) A {\em deformation of $X$} is a proper smooth holomorphic map of pairs $f \colon (\mathfrak{X},X) \rightarrow (\mathcal{B}, b_0)$, where $(\sB,b_0)$ is a connected (possibly not reduced) germ of a complex space. 2) $X$ is said to be {\em rigid} if for each deformation of $X$, $f \colon (\mathfrak{X},X) \rightarrow (\sB, b_0)$ there is an open neighbourhood $U \subset \sB$ of $b_0$ such that $X_t := f^{-1}(t) \simeq X$ for all $t \in U$. 3) $X$ is said to be {\em infinitesimally rigid} if $H^1(X, \Theta_X) = 0$, where $\Theta_X$ is the sheaf of holomorphic vector fields on $X$. \end{definition} \begin{rem}\label{kuranishi} 1) If $X$ is infinitesimally rigid, then $X$ is also rigid. This follows by Kodaira-Spencer-Kuranishi theory, since $H^1(X, \Theta_X)$ is the Zariski tangent space of the germ of analytic space which is the base $\Def(X)$ of the Kuranishi semiuniversal deformation of $X$. So, if $H^1(X, \Theta_X) =0$, $\Def(X)$ is a reduced point and all deformations are induced by the trivial deformation. The other implication does not hold in general as it was shown in \cite{notinfinitesimally}, compare also \cite{kodairamorrow}. 2) Observe that, as it is shown in \cite[Theorem 2.3]{rigidity}, a compact complex manifold is rigid if and only if the base of the Kuranishi family $\Def(X)$ has dimension $0$. 3) The only rigid curve is $\ensuremath{\mathbb{P}}^1$; for $n=2$ it was shown in \cite[Theorem 1.3]{rigidity} that a rigid compact complex surface has Kodaira dimension $- \infty$ or $2$. 
\end{rem} That the restriction on the Kodaira dimension is a phenomenon in low dimensions and that in higher dimensions rigid manifolds are much more frequent has already been observed in \cite[Theorem 1.4]{rigidity} (cf. also \cite{beauville} for Kodaira dimension $0$, \cite{BG} for Kodaira dimension $1$): \begin{theorem} For all $n \geq 3$ and each $k \in \{-\infty\} \cup \{0, 1, \ldots, n\}$ there is an infinitesimally rigid $n$-dimensional compact complex manifold $Z_{n,k}$ of Kodaira dimension $k$. \end{theorem} One idea to construct the infinitesimally rigid examples is to consider finite quotients of smooth compact complex manifolds (often products of curves) with respect to an (infinitesimally) rigid holomorphic group action (see Definition \ref{infG}). If one considers non free actions, under mild assumptions it is still true that the quotient is infinitesimally rigid (in dimension at least three), but since we are interested in infinitesimally rigid manifolds, we have to compare the infinitesimal deformations of the singular quotient with those of a suitable resolution of singularities. The aim of this paper is to classify, as far as possible, the infinitesimally rigid quotients of a product of elliptic curves by a diagonal action of a finite group $G$. We first prove the following: \begin{theorem}\label{1} Let $G$ be a finite group which admits a rigid free diagonal action on a product of elliptic curves $E_1 \times \ldots \times E_n$. Then: \begin{enumerate} \item $n\geq 4$, \item $E_i$ is the Fermat elliptic curve for each $1 \leq i \leq n$, \item $G = \ensuremath{\mathbb{Z}}_3^2$ or the {\em Heisenberg group} $\He(3)$ of order $27$. \end{enumerate} \end{theorem} These two groups are part of four so-called {\em exceptional} groups, which are groups admitting a rigid action on an elliptic curve such that the translation part is not uniquely determined (cf. Proposition \ref{uniquetrans}).
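The structural facts about the Heisenberg group $\He(3)$ used throughout the paper (order $27$, centre of order $3$ coinciding with the commutator subgroup, abelianisation $\mathbb Z_3^2$) are easy to verify by machine. The following Python sketch is purely illustrative and is not the MAGMA code referred to below; it models $\He(3)$ as unipotent upper triangular $3\times 3$ matrices over $\mathbb F_3$, encoded as triples $(a,b,c)$.

```python
from itertools import product

MOD = 3  # we work over the field F_3

def mul(g, h):
    """Multiply (a, b, c) and (a', b', c'), i.e. the unipotent matrices
    [[1, a, c], [0, 1, b], [0, 0, 1]] over F_3."""
    a, b, c = g
    a2, b2, c2 = h
    return ((a + a2) % MOD, (b + b2) % MOD, (c + c2 + a * b2) % MOD)

def inv(g):
    """Inverse of (a, b, c): check mul(g, inv(g)) == (0, 0, 0)."""
    a, b, c = g
    return ((-a) % MOD, (-b) % MOD, (a * b - c) % MOD)

He3 = list(product(range(MOD), repeat=3))

# |He(3)| = 27 and the centre has order 3
centre = [g for g in He3 if all(mul(g, h) == mul(h, g) for h in He3)]
assert len(He3) == 27 and len(centre) == 3

# the set of commutators already equals the centre, so
# [He(3), He(3)] = Z(He(3)) and hence He(3)^{ab} = Z_3 x Z_3
commutators = {mul(mul(g, h), mul(inv(g), inv(h))) for g in He3 for h in He3}
assert commutators == set(centre)
```

The same brute-force enumeration extends to listing the four normal abelian subgroups of index three that play a role later on.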
For the non exceptional case instead we have the following result: \begin{theorem}\label{2} Assume that $G$ is not exceptional and admits a rigid diagonal action on a product of elliptic curves $E_1 \times \ldots \times E_n$. Then the quotient is isomorphic to $X_{n,d}:=E^n/\mathbb Z_d$, where $\mathbb Z_d$ acts on $E^n$ by multiplication with $\zeta_d \cdot \Id$. Here $d =3,6$ and $E$ is the Fermat elliptic curve, or $d=4$ and $E$ is the harmonic elliptic curve. \end{theorem} \begin{rem} 1) Observe that for the non exceptional groups we have in each dimension exactly one case for $d$. Unfortunately these quotients are singular; the singularities are of type $\frac{1}{k}(1, \ldots, 1)$, where $k~ \big\vert ~ d$. For $n \geq d$ these singularities are canonical and it can be shown that there is a resolution of singularities which is infinitesimally rigid (cf. \cite{BG}). 2) For $n=d=3,4,6$ the variety $X_{n,d}$ is a singular Calabi-Yau variety (cf. \cite{beauville}). \end{rem} We finally completely classify rigid diagonal actions of the two exceptional groups $\mathbb Z_3^2$ and $\He(3)$ in dimension $3$ and the free ones in dimension $4$. In fact, we prove the following: \begin{theorem}\label{3} 1) There is exactly one isomorphism class of quotient manifolds $E^4/\mathbb Z_3^2$ resp. $E^4/\He(3)$ obtained by a rigid free and diagonal action. They have non isomorphic fundamental groups. 2) For each exceptional group $\mathbb Z_3^2$ and $\He(3)$ there are exactly four isomorphism classes of (singular) quotients $X_i:=E^3/\mathbb Z_3^2$ and $Y_i:=E^3/\He(3)$ obtained by a rigid diagonal $G$-action: \begin{itemize} \item[i)] $X_4$ and $Y_4$ are isomorphic to Beauville's Calabi-Yau threefold $X_{3,3}$. \item[ii)] $X_3$ and $Y_3$ are also Calabi-Yau, uniformized by $X_{3,3}$ and admit crepant resolutions, which are rigid. \item[iii)] $X_2$ and $X_3$, resp. $Y_2$ and $Y_3$, are diffeomorphic but not biholomorphic.
\item[iv)] The eight threefolds $X_i$, $Y_j$ form five distinct topological classes. \end{itemize} \end{theorem} Our paper is organized as follows: In a short first section we recall some facts on rigid group actions on complex manifolds. In the second section we treat rigid actions on elliptic curves, recalling that a finite group $G$ admitting a rigid action on an elliptic curve $E$ has to be of the form $G= A \rtimes_{\varphi_d} \mathbb Z_d$, where $d \in \{3,4,6\}$, and $E$ is the Fermat or the harmonic elliptic curve. Moreover we show that if $G$ admits a rigid action on a product of elliptic curves, these curves all have to be isomorphic. In the following section we prove Theorems \ref{1} and \ref{2}. The last section is dedicated to the classification of the actions of the exceptional groups in dimensions $3$ and $4$. The results on the isomorphism classes in Theorem \ref{3} are obtained using a MAGMA algorithm \cite{MAGMA}, which is available on the website of the second author: \begin{center} \url{http://www.staff.uni-bayreuth.de/~bt300503/publi.html}. \end{center} To obtain the topological results we exploit the fact that the fundamental groups of the smooth quotients resp. the orbifold fundamental groups of the singular quotients in dimension three are crystallographic groups. \section{Rigid group actions on complex manifolds} \noindent In this short introductory section we recall the definition of rigid group actions on complex manifolds and give a criterion for the rigidity of the diagonal action on a product of such manifolds. Since in all of this article we only discuss {\em infinitesimal} rigidity, we often drop the adverb {\em infinitesimally} and only talk about rigidity. \begin{definition}\label{infG} Let $X$ be a compact complex manifold and $G$ be a finite group acting holomorphically on $X$. We say that the $G$-action is {\em infinitesimally rigid} if and only if $H^1(X,\Theta_X)^G=0$, where $\Theta_X$ is the tangent sheaf of $X$.
\end{definition} \begin{rem} \ 1) If the action of $G$ is not faithful, then, denoting the kernel by $K$, we obviously have that $H^1(X,\Theta_X)^G=H^1(X,\Theta_X)^{G/K}$, hence after replacing $G$ by $G/K$ we may assume that $G$ acts faithfully. 2) If $G$ acts freely in codimension one, then for all $i$ there is an isomorphism $$ H^i(X/G,\Theta_{X/G}) \simeq H^i(X,\Theta_X)^G. $$ In particular, since $H^1(X/G,\Theta_{X/G})$ classifies the infinitesimal equisingular deformations of $X/G$, we see that, if the action is rigid, the quotient $X/G$ has no equisingular deformations. 3) Consider the low term exact sequence of the local-to-global $\Ext$-spectral sequence on $Z := X/G$: $$ 0 \to H^1(Z, \Theta_Z) \to \Ext^1(\Omega^1_Z, \ensuremath{\mathcal{O}}_Z) \to H^0(Z,\mathcal{E}xt^1(\Omega^1_Z, \ensuremath{\mathcal{O}}_Z)) \to \ldots. $$ By Schlessinger's result \cite{schlessinger}, isolated quotient singularities in dimension at least $3$ are infinitesimally rigid, i.e., $\mathcal{E}xt^1(\Omega^1_Z, \ensuremath{\mathcal{O}}_Z) =0$. Therefore the above exact sequence shows that, if $G$ has only isolated fixed points on $X$, then $$ H^1(Z, \Theta_Z) \simeq \Ext^1(\Omega^1_Z,\ensuremath{\mathcal{O}}_Z). $$ This means that for $Z$ all infinitesimal deformations are equisingular. In particular, for showing that $Z$ is an infinitesimally rigid variety, it suffices to show that $H^1(X, \Theta_X)^G = 0$. 4) If $Z$ is singular, one has to make sure that there is a resolution of singularities $\widehat{Z} \ensuremath{\rightarrow} Z$ such that $\widehat{Z}$ is infinitesimally rigid, since primarily we are interested in rigid {\em manifolds} (cf. Proposition \ref{resolution}). \end{rem} Let $G$ be a finite group acting holomorphically on the compact complex manifolds $X_1, \ldots, X_n$; then the diagonal subgroup $\Delta_G \simeq G$ acts in a natural way on the product $X_1 \times \ldots \times X_n$, by setting: $$ g(x_1, \ldots, x_n):=(g x_1, \ldots, g x_n).
$$ It is natural to call this action {\em the diagonal} $G$-action on the product $X_1 \times \ldots \times X_n$. K\"unneth's formula allows us to give a reformulation of the rigidity of the diagonal action in terms of the individual actions on the factors: \begin{proposition}\label{rigiddiag} Let $G$ be a finite group acting holomorphically on the compact complex manifolds $X_1, \ldots, X_n$. Then, the diagonal action on $X:=X_1 \times \ldots \times X_n$ is (infinitesimally) rigid, if and only if: \begin{enumerate} \item the $G$-action on each $X_i$ is rigid and \item $\big[H^0(X_i,\Theta_{X_i}) \otimes H^1(X_j,\mathcal O_{X_j})\big]^G=0$ for all $i \neq j$. \end{enumerate} \end{proposition} \begin{proof} Let $p_i \colon X \ensuremath{\rightarrow} X_i$ be the projection onto the $i$-th factor. Then \[ H^1(X,\Theta_{X})=\bigoplus_{i=1}^{n}H^1(X, p_i^{\ast}\Theta_{X_i}). \] For each $1 \leq i \leq n$ we have \[ H^1(X, p_i^{\ast}\Theta_{X_i})= H^1(X_i,\Theta_{X_i}) \oplus \bigg[H^0(X_i,\Theta_{X_i}) \otimes \bigg(\bigoplus_{\stackrel{j=1}{j \neq i}}^{n} H^1(X_j, \mathcal O_{X_j}) \bigg)\bigg], \] according to K\"unneth's formula. The claim follows, by taking the $G$-invariant part. \end{proof} \begin{rem} In the special case where the products $h^0(X_i,\Theta_{X_i}) \cdot h^1(X_j,\mathcal O_{X_j})$ vanish, a diagonal $G$-action is rigid if and only if the $G$-action on each factor $X_i$ is rigid. This happens for example if the complex manifolds $X_i$ are regular or of general type. \end{rem} \section{Rigid actions on elliptic curves} In this paragraph we study rigid diagonal $G$-actions on a product $E_1 \times \ldots \times E_n$ of elliptic curves under the additional assumption that $G$ acts faithfully on each factor. Recall that any holomorphic map between elliptic curves, or more generally between complex tori, is induced by an affine linear map. 
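When every factor is an elliptic curve, both conditions of Proposition \ref{rigiddiag} reduce to a finite check on the linear parts of the affine maps, since $g^{\ast}dz_i = a_i\, dz_i$, where $a_i$ is the linear part of the action of $g$ on the $i$-th factor. The following Python fragment is only an illustrative sketch for a cyclic group $\mathbb Z_d$; the function name is ours, not the paper's.

```python
from itertools import product

def diagonal_action_is_rigid(d, exponents):
    """Rigidity test for Z_d acting diagonally on a product of elliptic
    curves, the generator acting on the k-th factor with linear part
    zeta_d**exponents[k].  The quadratic differential dz_i (x) dz_j
    pulls back to zeta_d**(e_i + e_j) * dz_i (x) dz_j, so it is
    invariant exactly when e_i + e_j == 0 (mod d)."""
    n = len(exponents)
    return all((exponents[i] + exponents[j]) % d != 0
               for i, j in product(range(n), repeat=2))

# Z_3 acting by multiplication with zeta_3 on both factors: rigid
assert diagonal_action_is_rigid(3, [1, 1])
# Z_3 acting by (zeta_3, zeta_3^{-1}): dz_1 (x) dz_2 is invariant, not rigid
assert not diagonal_action_is_rigid(3, [1, 2])
```

The second example shows why the mixed condition (2) of Proposition \ref{rigiddiag} cannot be dropped: each factor action is rigid on its own, yet the diagonal action is not.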
Since the tangent bundle of an elliptic curve is trivial, Proposition \ref{rigiddiag} has a particularly simple reformulation: \begin{proposition}\label{quadraticdiff} Let $G$ be a finite group acting holomorphically on the elliptic curves $E_1, \ldots, E_n$, then the following are equivalent: \begin{enumerate} \item the diagonal $G$-action on $E_1 \times \ldots \times E_n$ is rigid, \item none of the quadratic differentials $dz_i \otimes dz_j $ is $G$-invariant. \end{enumerate} \end{proposition} \begin{proof} By duality, the rigidity conditions in Proposition \ref{rigiddiag} are equivalent to \[ H^0(E_i,(\Omega_{E_i}^1)^{\otimes 2})^G =0 \qquad \makebox{and} \qquad \big[H^1\big(E_i, (\Omega_{E_i}^1)^{\otimes 2}\big) \otimes H^0(E_j, \Omega_{E_j}^1)\big]^G = 0. \] We use that $H^0(E_i,(\Omega_{E_i}^1)^{\otimes 2}) = \langle dz_i^{\otimes 2} \rangle $ and $$H^1\big(E_i, (\Omega_{E_i}^1)^{\otimes 2}\big) \simeq H_{\overline{\partial}}^{1,1}(E_i,\Omega_{E_i}^1) = \langle (dz_i \wedge d\overline{z}_i) \otimes dz_i \rangle,$$ to rewrite the rigidity conditions as: \[ \langle dz_i^{\otimes 2} \rangle^G =0 \qquad \makebox{and} \qquad \langle (dz_i \wedge d\overline{z}_i) \otimes dz_i \otimes dz_j \rangle^G = 0. \] The claim follows since $G$ acts trivially on each $2$-form $dz_i \wedge d\overline{z}_i$. \end{proof} \begin{rem}\label{canrep} 1) A holomorphic $G$-action of a finite group on an elliptic curve $E$ induces a natural one-dimensional representation \[ \varphi_E \colon G \to \GL\big(H^0(E, \Omega_{E}^1) \big), \qquad g \mapsto [dz \mapsto (g^{-1})^{\ast}dz ]. \] It is called the \emph{canonical representation}. As it is one dimensional we identify it with its character $\chi_E$. If the linear part of $g\in G$ is $a$, then the value of $\chi_E(g)$ is equal to $\overline{a}$. 
2) In terms of the characters $\chi_{E_i}$ the statement on the quadratic differentials $dz_i\otimes dz_j$ in Proposition \ref{quadraticdiff} translates to $\chi_{E_i} \cdot \chi_{E_j} \neq \chi_{triv}$, for all $1 \leq i,j\leq n$. 3) It is well known that a $G$-action on an elliptic curve $E$ is rigid if and only if $E/G\simeq \mathbb P^1$ and the Galois cover $\pi \colon E \to E/G\simeq \mathbb P^1$ is branched in three points $p_1$, $p_2$ and $p_3$. In this case the branching signatures $m_i$, which are defined to be the orders of the (cyclic) stabilizer groups $G_{x_i}$ for some $x_i \in \pi^{-1}(p_i)$, are up to permutation equal to: \[ [m_1,m_2,m_3]=[3,3,3], \quad [2,4,4], \quad \makebox{or} \quad [2,3,6], \] see \cite[Chapter III, Lemma 3.8 b)]{miranda}. \end{rem} Rigid group actions on compact Riemann surfaces can be described purely in terms of group theory by \emph{Riemann's existence theorem}. In our situation we only need the following (much weaker) version of this theorem for elliptic curves. \begin{proposition}\label{RET} A finite group $G$ admits a rigid action on an elliptic curve $E$ if and only if there are elements $g_1,g_2,g_3 \in G$ of order $m_i=\ord(g_i)$, which generate $G$, fulfill the relation $g_1g_2g_3=1_G$ and $[m_1,m_2,m_3]=[3,3,3]$, $[2,4,4]$ or $ [2,3,6]$. \end{proposition} We refer to \cite[Chapter III]{miranda} for details and mention only the following: \begin{rem}\label{RemRET} 1) A triple of elements $V:=[g_1,g_2,g_3]$ as in the Proposition above is called a generating triple of $G$ with (branching) signature $[m_1,m_2,m_3]$. Assume that $E$ admits a rigid $G$-action, let $\pi \colon E \to E/G\simeq \mathbb P^1$ be the quotient map and let $\mathcal B:=\lbrace p_1, p_2, p_3 \rbrace$ be the set of branch points of $\pi$. The fundamental group of $\mathbb P^1\setminus \mathcal B$ is generated by three simple loops $\gamma_i$ around $p_i$ fulfilling a single relation $\gamma_1 \ast \gamma_2 \ast \gamma_3=1$.
The elements $g_i$ are obtained as the images of $\gamma_i$ under the monodromy homomorphism \[ \eta \colon \pi_1(\mathbb P^1\setminus \mathcal B, p) \to G. \] 2) The cyclic subgroups $\langle g_i \rangle$ and their conjugates provide the non-trivial stabilizer groups of the $G$-action on $E$. Let $x_i \in E$ be a point with stabilizer $G_{x_i}=\langle g_i \rangle$; then $g_i$ acts around $x_i$ as a rotation by \[ \exp\bigg(\frac{2\pi \sqrt{-1}}{m_i} \bigg). \] This rotation constant is nothing other than the linear part of any affine transformation inducing $g_i$. In particular, the character $\chi_E$ (cf. Remark \ref{canrep}) can be read off directly from the generating triple. 3) The union of the non-trivial stabilizer groups will be denoted by $\Sigma_V$. Observe that $\Sigma_V$ consists of the identity and all group elements which are not translations. \end{rem} The further discussion heavily relies on the structure of the automorphism group of an elliptic curve $E$. Recall that $\Aut(E)$ is a semidirect product \[ \Aut(E)=E \rtimes \Aut_0(E), \] where $\Aut_0(E) \simeq \mathbb Z_2$, $\mathbb Z_4$ or $\mathbb Z_6$. \begin{proposition}\cite[Chapter III Proposition 1.12]{miranda} An elliptic curve with $\Aut_0(E)\simeq \mathbb Z_4$ is isomorphic to $\mathbb C/ \mathbb Z[i]$. An elliptic curve with $\Aut_0(E)\simeq \mathbb Z_6$ is isomorphic to $\mathbb C/ \mathbb Z[\zeta_3]$. Here, $ \mathbb Z[i]$ are the {\em Gaussian} and $\mathbb Z[\zeta_3]$ the {\em Eisenstein integers}. \end{proposition} Assume now that $E$ admits a rigid faithful action of a finite group $G$. Then by Remark \ref{canrep} (3), the group $G$ has an element of order $3,4$ or $6$ with a fixed point which up to translation is the origin of $E$. Whence $\Aut_0(E)\simeq \mathbb Z_4$ or $\mathbb Z_6$ and $G$ must also be a semidirect product \[ G\simeq A\rtimes \mathbb Z_d, \] with $d \in \{3,4,6\}$.
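As a sanity check for the two lattices, one can write multiplication by $i$ on $\mathbb Z[i]$ and by $\zeta_6 = 1+\zeta_3$ on $\mathbb Z[\zeta_3]$ in the integer coordinates $(a,b) \mapsto a+bi$ resp. $a+b\zeta_3$, and confirm that these lattice automorphisms have orders $4$ and $6$. The short Python sketch below is our illustration (helper names are ours), not part of the paper's computations.

```python
def order(step):
    """Smallest k >= 1 with step^k = identity, computed on a basis of Z^2."""
    basis = [(1, 0), (0, 1)]
    k, cur = 1, [step(p) for p in basis]
    while cur != basis:
        cur = [step(p) for p in cur]
        k += 1
    return k

# i * (a + b*i) = -b + a*i, so (a, b) |-> (-b, a) on Z[i]
mult_i = lambda p: (-p[1], p[0])

# zeta_6 = 1 + zeta_3 (since zeta_3^2 = -1 - zeta_3), hence
# zeta_6 * (a + b*zeta_3) = (a - b) + a*zeta_3, so (a, b) |-> (a - b, a)
mult_zeta6 = lambda p: (p[0] - p[1], p[0])

assert order(mult_i) == 4      # an order-4 element of Aut_0(C/Z[i])
assert order(mult_zeta6) == 6  # an order-6 element of Aut_0(C/Z[zeta_3])
```

Both maps visibly preserve the integer lattice, which is exactly the statement that multiplication by $i$ resp. $\zeta_6$ descends to an automorphism of the corresponding elliptic curve fixing the origin.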
The normal subgroup $A$ can be considered as a subgroup of the group of $n$-torsion points $E[n] \simeq \mathbb Z_n^2$ for a suitable integer $n$. More precisely, we have the following: \begin{proposition}\label{wallgrps} A finite group $G$ admits a faithful rigid holomorphic action on an elliptic curve $E$, if and only if it is isomorphic to a semidirect product \[ A \rtimes_{\varphi_d} \mathbb Z_d, \] where $d=3,4$ or $6$ and $A \leq \mathbb Z_n^2$ is a subgroup for some $n$, invariant under the action \[ \varphi_d \colon \mathbb Z_d \to \Aut (\mathbb Z_n^2), \] defined by: \begin{itemize} \item $\varphi_3(1)(a,b)=(-b,a-b)$, \item $\varphi_4(1)(a,b)=(-b,a)$ or \item $\varphi_6(1)(a,b)=(-b,a+b)$. \end{itemize} The possible branching signatures $[m_1,m_2,m_3]$ of the cover $\pi \colon E \to E/G$, the abelianisations of $G$ and the isomorphism types of $E$ are summarised in the table below: {\begin{center} \begin{tabular}{ c c c c } & $d=3$ & $d=4$ & $d=6$ \\ \hline \hline $[m_1,m_2,m_3]$ & $ \quad [3,3,3] \quad $ & $ \quad [2,4,4] \quad $ & $ \quad [2,3,6] \quad $ \\ $G^{ab}$ & $ \quad \mathbb Z_3 $ or $ \mathbb Z_3^2 \quad $ & $ \quad \mathbb Z_4 $ or $ \mathbb Z_2 \times \mathbb Z_4 \quad $ & $ \quad \mathbb Z_6 \quad $ \\ $E$ & $\mathbb C/\mathbb Z[\zeta_3]$ & $\mathbb C/ \mathbb Z[i]$ & $\mathbb C/ \mathbb Z[\zeta_3]$ \\ \hline \end{tabular} \end{center} } \end{proposition} Before we prove the above proposition we observe the following: \begin{rem}\label{tworems} 1) By Proposition \ref{RET} a group $G=A \rtimes_{\varphi_d} \mathbb Z_d$ with a rigid action on an elliptic curve $E$ is a quotient of the triangle group \[ \mathbb T(m_1,m_2,m_3)= \langle a,b,c ~ | ~ a^{m_1} = b^{m_2} =c^{m_3}= abc=1 \rangle, \] where $[m_1,m_2,m_3]$, $d$, and $E$ are according to the table above. Whence $G^{ab}$ is a quotient of $\mathbb T(m_1,m_2,m_3)^{ab}$.
Since the canonical projection $p \colon G \to \mathbb Z_d$ induces a surjective homomorphism $p^{ab} \colon G^{ab} \to \mathbb Z_d$, the isomorphism types of $G^{ab}$ in the table follow from the well known formula $\mathbb T(m_1,m_2,m_3)^{ab} \simeq \mathbb Z/k_1 \times \mathbb Z/k_2$, where \begin{itemize} \item $k_1:=\gcd(m_1,m_2,m_3)$, \item $k_2:=\lcm\big(\gcd(m_1,m_2), \gcd(m_2,m_3), \gcd(m_1,m_3)\big)$. \end{itemize} 2) It follows immediately from the above table that if a finite group $G$ admits a rigid action on an elliptic curve $E$, the isomorphism type of $E$ as well as the branching signature of the cover $E\to E/G$ is uniquely determined already by the abelianisation of $G$. \end{rem} An immediate geometric consequence of Remark \ref{tworems} (2) is: \begin{corollary}\label{alliso} Let $G$ be a finite group with a rigid diagonal action on a product of elliptic curves $E_1 \times \ldots \times E_n$, then the curves are all isomorphic to $\mathbb C/\mathbb Z[i]$ or they are all isomorphic to $\mathbb C/\mathbb Z[\zeta_3]$. Moreover, the branching signature $[m_1,m_2,m_3]$ is the same for each cover $\pi_i \colon E_i \to E_i/G$. \end{corollary} \begin{proof}[Proof of Proposition \ref{wallgrps}.] The formulas for $\varphi_d$ are immediately derived from the semidirect product structure of $\Aut(E)\simeq E \rtimes \Aut_0(E)$. The claim about the abelianisations follows from Remark \ref{tworems} (2). We are left to show that any group $A \rtimes_{\varphi_d} \mathbb Z_d$ has a rigid action on $E=\mathbb C/\mathbb Z[i]$ or $\mathbb C/\mathbb Z[\zeta_3]$. For this purpose, we consider the natural action \[ A \rtimes_{\varphi_d} \mathbb Z_d \to \Aut(E), \qquad (a,b,c) \mapsto \bigg [z \mapsto \zeta_d^cz + \frac{a + \zeta_d b}{n} \bigg], \] which is clearly rigid, because $A \rtimes_{\varphi_d} \mathbb Z_d$ acts non trivially on the generator $dz^{\otimes 2}$ of $H^0(E,\omega_E^{\otimes 2})$. \end{proof} We end the section by the following useful result. 
\begin{lemma}\label{orderel} The order of an element of $A \rtimes_{\varphi_d} \mathbb Z_d$, which is not contained in $A$, is equal to the order of its image under the canonical projection $ p \colon A \rtimes_{\varphi_d} \mathbb Z_d \to \mathbb Z_d$. \end{lemma} \begin{proof} Let $f$ be such an element. Then the order of $p(f)$ divides the order of $f$ and it suffices to show the following: let $\omega_k$ be a primitive $k$-th root of unity and $f(z)=\omega_kz+b$, then $f^k=id$. This claim follows immediately from the well known formula $\omega_k^{k-1}+ \ldots+\omega_k+1=0$ and the computation \[ f^k(z)=z+ (\omega_k^{k-1}+ \ldots+\omega_k+1)b=z. \] \end{proof} \section{Classifying rigid diagonal actions on products of elliptic curves} In this section we shall show that there are only four candidate finite groups, which we will call {\em exceptional groups}, that may allow a free rigid action on a product of (at least $4$) elliptic curves, and on the other hand we shall see that all other groups admit in each dimension exactly one rigid (singular) quotient. The following is a trivial but useful observation: \begin{rem} Since we assume the action of $G$ on $E_1 \times \ldots \times E_n$ to be diagonal, it is free if and only if for each $g \in G$ there exists an index $1 \leq j \leq n$ such that $g$ acts on $E_j$ as a translation. \end{rem} This motivates the following definition: \begin{definition} Let $\psi \colon G \to \Aut(E)$ be a faithful action of a finite group on an elliptic curve. We define the {\em translation group of} $\psi$ to be \[ T_{\psi} :=\lbrace g \in G ~ \big\vert ~ \psi(g) ~\makebox{is a translation} \rbrace. \] \end{definition} \begin{remark}\label{transgroup} Let $G$ be a finite group. 1) The translation group $T$ of an action $\psi \colon G \to \Aut(E)$ is a normal abelian subgroup of $G$. If the action is moreover rigid, then $G \simeq T \rtimes \mathbb Z_d$, where $d=3,4$ or $6$.
2) Let $G$ be a finite group, admitting a diagonal action on a product of elliptic curves $E_1 \times \ldots \times E_n$. Then the action is free if and only if \[ G = T_1 \cup \ldots \cup T_n, \] where $T_i$ is the translation group of the action on the i-th factor. \end{remark} A necessary condition for a group $A \rtimes_{\varphi_d} \mathbb Z_d$ to allow a rigid and free action on a product of elliptic curves is the existence of more than one normal abelian subgroup with quotient $\mathbb Z_d$. It turns out that there are only four of them and in fact in all other cases the translation group of any rigid action is $A$. \begin{proposition}\label{uniquetrans} Let $G=A \rtimes_{\varphi_d} \mathbb Z_d$ be a finite group and $\psi \colon G \ensuremath{\rightarrow} \Aut(E)$ be a rigid action on an elliptic curve $E$. Then $T_{\psi} = A$ except if $G$ is one of the following: \[ \mathbb Z_3^2, \quad \mathbb Z_3^2 \rtimes_{\varphi_3} \mathbb Z_3, \quad \mathbb Z_2 \times \mathbb Z_4 \quad \makebox{or} \quad \mathbb Z_2^2 \rtimes_{\varphi_4} \mathbb Z_4. \] \end{proposition} These four groups we shall call {\em exceptional}. Before we give a proof of Proposition \ref{uniquetrans}, we recall some structural properties of the exceptional groups: \begin{rem}\label{structureHeis} \ (1) We denote the group $\mathbb Z_3^2 \rtimes_{\varphi_3} \mathbb Z_3$ by $\He(3)$, because it is isomorphic to the Heisenberg group of upper triangular matrices \[ \begin{pmatrix} 1 & a & c \\ 0 & 1 & b \\ 0 & 0 & 1 \\ \end{pmatrix} \in \SL(3,\mathbb F_3). \] The normal abelian subgroups $A_i \trianglelefteq \He(3)$ of index three are the preimages of the index three subgroups of $\mathbb Z_3^2$ under the surjective homomorphism \[ \alpha \colon \mathbb Z_3^2 \rtimes_{\varphi_3} \mathbb Z_3 \to \mathbb Z_3^2, \qquad (a,b,c) \mapsto (a+b,c). \] They are all isomorphic to $\mathbb Z_3^2$ and the intersection of two of them equals the kernel of $\alpha$. 
Moreover, the kernel, the center $C_3:=Z\big(\He(3)\big)\simeq \mathbb Z_3$ and the commutator subgroup $[\He(3),\He(3)]$ are all the same. The groups $A_i$, together with the center $C_3$, are all the non-trivial normal subgroups of $\He(3)$: \begin{center} \begin{tikzpicture}[scale=.7] \node (one) at (0,2) {$\He(3)$}; \node (a) at (-3,0) {$A_1$}; \node (b) at (-1,0) {$A_2$}; \node (c) at (1,0) {$A_3$}; \node (d) at (3,0) {$A_4$}; \node (zero) at (0,-2) {$C_3$}; \draw (zero) -- (a) -- (one) -- (b) -- (zero) -- (c) -- (one) -- (d) -- (zero); \end{tikzpicture} \end{center} Since the center $C_3$ is contained in all $A_i$'s, Remark \ref{transgroup} (1) implies that it acts by translations with respect to any rigid action $\psi \colon \He(3) \to \Aut(E)$. (2) There are two normal subgroups of $\mathbb Z_2^2 \rtimes_{\varphi_4} \mathbb Z_4$ with quotient $\mathbb Z_4$. They are isomorphic to $\mathbb Z_2^2$ and obtained as the preimages of the corresponding subgroups $\langle (1,0) \rangle $ and $\langle (1,2) \rangle $ of $\mathbb Z_2 \times \mathbb Z_4$ under the surjective homomorphism \[ \beta \colon \mathbb Z_2^2 \rtimes_{\varphi_4} \mathbb Z_4 \to \mathbb Z_2 \times \mathbb Z_4, \qquad (a,b,c) \mapsto (a+b,c). \] It follows immediately from Remark \ref{transgroup} (2) that none of these groups can admit a free diagonal action on the product of elliptic curves. \end{rem} \begin{proof}[Proof of Proposition \ref{uniquetrans}.] Let $B$ be a normal abelian subgroup of $G$ such that $G/B$ is cyclic of order $d$. Necessarily, the commutator subgroup $[G,G]$ must be contained in $B$. \underline{$d=6$}: by Proposition \ref{wallgrps} $[G,G]$ has index six in $G$; this implies $B =[G,G] = A$. \underline{$d=4$}: here $G$ is a quotient of the triangle group $\mathbb T(2,4,4)$ (cf. Remark \ref{tworems}) which has three normal subgroups of index four: one is abelian and isomorphic to $\mathbb Z^2$, the other two are non abelian with abelianisation $\mathbb Z_2^3$.
If $|G|>16$, then $|B| >4$. The preimage of $B$ under $\phi \colon \mathbb T(2,4,4) \to G$ is a normal subgroup of index four and since $B$ is abelian, there is a surjection $\phi^{ab} \colon \phi^{-1}(B)^{ab} \to B$. Assume now that $\phi^{-1}(B)^{ab} \simeq \mathbb Z_2^3$. Then $\phi^{ab}$ is an isomorphism from $\mathbb Z_2^3$ to $B$. A contradiction, since we assume that $B$ is a subgroup of $E[n] \simeq \ensuremath{\mathbb{Z}}_n^2$. Thus $\phi^{-1}(B)^{ab} = \phi^{-1}(B) \simeq \mathbb Z^2$ and $B=A$. Suppose that $|G| \leq 16$ and $B \neq A$, then $B$ is a quotient of $\mathbb Z_2^2$ and $G$ is equal to $\mathbb Z_2 \times \mathbb Z_4$ or $\mathbb Z_2^2 \rtimes_{\varphi_4} \mathbb Z_4$. \underline{$d=3$}: if $|G| >27$, then $|B| >9$. The preimage of $B$ under $\phi \colon \mathbb T(3,3,3) \to G$ is a normal subgroup of index three. Since $B$ is abelian, there is a surjection $\phi^{-1}(B)^{ab} \to B$. It is known that $\mathbb T(3,3,3)$ has exactly four normal subgroups of index three. One of them is isomorphic to $\mathbb Z^2$ and the others are non-abelian, with abelianisation $\mathbb Z_3^2$. It follows that $\phi^{-1}(B)^{ab} = \phi^{-1}(B) \simeq \mathbb Z^2$ and $B=A$. Suppose that $|G| \leq 27$ and $B \neq A$, then $B$ is a quotient of $\mathbb Z_3^2$. We conclude that $G$ has order $9$ or $27$ and is equal to $\mathbb Z_3^2$ or $\He(3)=\mathbb Z_3^2 \rtimes_{\varphi_3} \mathbb Z_3$. \end{proof} \begin{corollary}\label{nonexfixed} Assume that $G$ is not exceptional and admits a rigid diagonal action on $E^n$. Then: \begin{enumerate} \item the action is not free; \item the maps $g(z_1), \ldots, g(z_n)$ have the same linear part for all $g \in G$. \end{enumerate} \end{corollary} \begin{proof} We know that $G = A \rtimes_{\varphi_d} \mathbb Z_d$ with $d=3,4$ or $6$. (1) By Proposition \ref{uniquetrans} we have $T_i = A$ for all $i$. The claim follows now from Remark \ref{transgroup} (2). (2) We denote the curve $E$ at position $i$ by $E_i$. 
According to Remark \ref{canrep} the value of the character $\chi_{E_i}(g)$ is the complex conjugate of the linear part of $g(z_i)$. Proposition \ref{uniquetrans} implies that $\ker(\chi_{E_i}) =A$, i.e. we can regard $\chi_{E_i}$ as a faithful character of $G/A \simeq \mathbb Z_d$. Such a character is defined by multiplication with a $d$-th primitive root of unity. For $d=3,4$ and $6$ there are only two primitive roots of unity, namely $\zeta_d$ and $\zeta_d^{-1}$. Since we assume that the action is rigid, by Remark \ref{canrep} it holds that $\chi_{E_i} \cdot \chi_{E_j}\neq \chi_{triv}$ and it follows that all characters $\chi_{E_i}$ are the same. \end{proof} The following result shows that for the non exceptional case the situation is quite simple, since for each $n$ and each $d$ there is only one possible quotient, namely $X_{n,d}:=E^n/\mathbb Z_d$. \begin{theorem}\label{oneisoclass} Assume that $G=A \rtimes_{\varphi_d} \mathbb Z_d$ admits a rigid diagonal action on $E^n$. If $G$ is not exceptional, then the quotient $E^n/G$ is isomorphic to $X_{n,d}:=E^n/\mathbb Z_d$, where $\mathbb Z_d$ acts on $E^n$ by multiplication with $\zeta_d \cdot \Id$. \end{theorem} \begin{proof} Since $G$ is not exceptional, the normal subgroup $A$ is the translation group $T_i$ of the action on each factor of the product $E^n$. In particular $E^n/A$ is an abelian variety and by construction $\mathbb Z_d \simeq G/A$ acts on $E^n/A$ by multiplication with $\zeta_d \cdot \Id$. This implies that $E^n/A$ is isomorphic to $E^n$, according to \cite[Corollary 13.3.5]{BL}. \end{proof} \begin{rem}\label{Beau} In \cite{beauville} Beauville considered the varieties $X_{d,d}$ for $d=3,4$ and $6$. He explains that these singular varieties admit rigid, simply connected Calabi-Yau $d$-folds as resolutions of singularities. \end{rem} \section{The exceptional groups} In contrast to Theorem \ref{oneisoclass} the situation for the exceptional groups is more involved. 
In this chapter we analyse it in dimensions three and four. In Proposition \ref{norigid} we shall see that rigid, free diagonal actions do not exist in dimension three. However we can drop the freeness assumption and classify rigid non-free actions. It turns out that the quotients by the groups $\mathbb Z_2 \times \mathbb Z_4$ and $\mathbb Z_2^2 \rtimes_{\varphi_4} \mathbb Z_4$ have non canonical singularities and therefore we do not consider them further. In the case of the groups $\He(3)$ and $\mathbb Z_3^2$, the quotients will be singular, but, as we shall see, with only canonical cyclic quotient singularities of type $ \frac{1}{3}(1,1,1)$ and $\frac{1}{3}(1,1,2)$. We study this case in detail and discover, besides Beauville's example $X_{3,3}$, other interesting rigid canonical threefolds and relations among them. From dimension four on, the groups $\He(3)$ and $\mathbb Z_3^2$ allow rigid free diagonal actions. Their existence for $\mathbb Z_3^2$ has already been observed in \cite[Theorem 3.4]{rigidity}. We show that each exceptional group gives precisely one isomorphism class of a smooth rigid quotient fourfold $E^4/\mathbb Z_3^2$ and $E^4/\He(3)$. We prove that these manifolds are non-isomorphic by showing that they are even topologically distinct, i.e. non homeomorphic. \begin{rem} If we consider rigid actions of the exceptional groups $\mathbb Z_2 \times \mathbb Z_4$ and $\mathbb Z_2^2 \rtimes_{\varphi_4} \mathbb Z_4$ on $E^4$, then as observed before the action cannot be free, hence we obtain singular quotients, which have canonical singularities. Classifying these quotients as well as suitable resolutions of singularities of those should yield new interesting examples. \end{rem} \subsection{Free rigid diagonal actions on $E^4$} \ We start with the following quite easy but useful Proposition: \begin{proposition}\label{norigid} \ Assume that $G=\He(3)$ or $\mathbb Z_3^2$ admits a rigid diagonal action on a product $E^n$.
Then the action is free, if and only if each of the four normal subgroups of $G$ of index three is equal to the translation subgroup $T_i$ on at least one factor. In particular, $n\geq 4$. \end{proposition} \begin{proof} By Remark \ref{transgroup} (2), the action is free if and only if \begin{equation}\label{unionTi} G=T_1 \cup \ldots \cup T_n. \end{equation} Recall that each $T_i$ is equal to one of the four normal subgroups $A_1, \ldots, A_4$ of $G$ of index three. Clearly, for (\ref{unionTi}) to hold, all $A_i$ must appear in the union, because the union of less than four of the $A_i$'s consists of at most $7$ elements if $G=\mathbb Z_3^2$ and of at most $21$ elements if $G=\He(3)$. \end{proof} We need the following Proposition only in a special case but for further use we prefer to state it in greater generality: \begin{proposition}\label{coverf} Let $G$ be a finite group acting holomorphically on the compact complex manifolds $X_1, \ldots, X_n$, let $A\trianglelefteq G$ be a normal subgroup and $Y_i:=X_i/A$. 1) There is a commutative square of finite holomorphic maps: $$ \xymatrix{ X_1\times \ldots \times X_n \ar[r] \ar[d] & Y_1 \times \ldots \times Y_n, \ar[d] \\ (X_1\times \ldots \times X_n)/G \ar[r]_{f}& (Y_1\times \ldots \times Y_n)/(G/A), } $$ where the degree of $f$ is $\vert A \vert^{n-1}$. 2) If $n\geq 2$ the covering $f$ is Galois if and only if $A$ is contained in the center of $G$. In this case the Galois group of $f$ is isomorphic to $A^{n-1}$. \end{proposition} \begin{proof} (1) is obvious. (2) Consider the composition of covers \[ h \colon X_1\times \ldots \times X_n \to Y_1 \times \ldots \times Y_n \to (Y_1\times \ldots \times Y_n)/(G/A). \] We claim that $h$ is always Galois. Observe that the elements of the groups $\Delta_G$ and $A^n$ act as deck transformations of $h$ and therefore in order to show that $h$ is Galois, it suffices to show that the cardinality of $\langle \Delta_G, A^n \rangle$ equals $\deg (h) = |A|^{n-1}|G|$. 
Since $A$ is a normal subgroup of $G$, $A^n$ is also a normal subgroup of $\langle \Delta_G, A^n \rangle$. Clearly the natural homomorphism \[ \Delta_G \to \langle \Delta_G, A ^n \rangle/A^n \] is surjective and its kernel is $\Delta_A \simeq A$. Thus we see that $\langle \Delta_G, A^n \rangle$ has $|A|^{n-1}|G|$ elements. By the fundamental theorem of Galois theory, $f$ is Galois if and only if $\Delta_G$ is a normal subgroup of $\langle \Delta_G, A^n\rangle$. This is certainly the case if $A\leq Z(G)$. Conversely, assume that $\Delta_G$ is normal. Let $a\in A$, then for all $g\in G$ there exists an element $g'\in G$, such that \[ (a,1,\ldots,1 ) \circ (g, \ldots,g) \circ (a^{-1},1,\ldots,1) =(g',\ldots,g'). \] This implies $aga^{-1}=g'=g$, i.e. $a \in Z(G)$. Assume now that $f$ is Galois. Then its Galois group is $\langle \Delta_G, A^n \rangle/\Delta_G$, which is isomorphic to $A^{n-1}$. In fact, the surjective homomorphism \[ A^n \to \langle \Delta_G, A^n \rangle/\Delta_G \] has kernel $\Delta_A$ and induces an isomorphism $\langle \Delta_G, A^n \rangle/\Delta_G \simeq A^n/\Delta_A \simeq A^{n-1}$. \end{proof} Now we are ready to show that for each of the groups $\He(3)$ and $\mathbb Z_3^2$, there is exactly one isomorphism class of rigid \'etale quotients $E^4 /G$. \begin{theorem}\label{Z1Z2Glatt} There is exactly one isomorphism class of quotient manifolds $Z_1:=E^4/\mathbb Z_3^2$ resp. $Z_2:=E^4/\He(3)$ obtained by a rigid, free and diagonal action. These projective manifolds are infinitesimally rigid, of Kodaira dimension zero, and there is an unramified Galois cover $f \colon Z_2 \to Z_1$ with group $\mathbb Z_3^3$. \end{theorem} \begin{proof} First we show the existence and uniqueness of $Z_1$ resp. $Z_2$. Let $G$ be $\mathbb Z_3^2$ resp. $\He(3)$. By Riemann's existence theorem the diagonal actions of $G$ on $E^4$ correspond to quadruples of generating triples $[V_1,V_2,V_3,V_4]$.
The action of $G$ on $E^4$ is free if and only if $\Sigma_{V_1} \cap \ldots \cap \Sigma_{V_4} = \lbrace 1_G \rbrace$. As explained e.g. in \cite{BeauReal}, the group $\mathfrak S_4 \times \mathcal B_3^4 \times \Aut(G)$ acts on the set of quadruples of generating triples $[V_1,V_2,V_3,V_4]$. Here $\mathfrak S_4$ permutes the generating triples $V_i$ of the quadruple, $\Aut(G)$ acts diagonally on $[V_1,V_2,V_3,V_4]$, and the {\em Artin Braid Group} $\mathcal B_3$ acts separately on each $V_i$ by so-called {\em Hurwitz moves}. By \cite[Proposition 3.3]{BeauReal} equivalent quadruples of generating triples yield isomorphic quotients. With a MAGMA algorithm we check that for each of the two groups there is exactly one orbit corresponding to a free rigid action, and therefore a unique isomorphism class of a rigid manifold $Z_1:=E^4/\mathbb Z_3^2$ resp. $Z_2:=E^4/ \He(3)$. Let us now consider the $\He(3)$-action on $E^4$ yielding $Z_2$. The center $C_3=Z\big(\He(3)\big) \simeq \mathbb Z_3$ acts on each copy of $E$ by translations, such that $E/C_3\simeq E$. Using the identification $\He(3)/C_3 \simeq \mathbb Z_3^2$, it can be checked, again by a MAGMA routine, that the image of the quadruple $[V_1,V_2,V_3,V_4]$ representing $Z_2$ is a quadruple which lies in the orbit representing $Z_1$. This means that on each factor we have a commutative triangle $$ \xymatrix{ E_i \ar[r] \ar[d]_{\He(3)} & E_i/C_3 \ar[dl]^{\mathbb Z_3^2} \\ \mathbb P^1 } $$ By Proposition \ref{coverf} we have a finite Galois cover $f \colon Z_2 \to Z_1$ with group $\mathbb Z_3^3$. The cover $f$ is unramified, because the other three maps of the diagram in Proposition \ref{coverf} are unramified. \end{proof} \begin{rem}\label{motivConstr} Note that the fourfold $Z_2$ can be realized as a double quotient, namely the quotient of the torus $E^4/C_3$ by the induced action of $\mathbb Z_3^2 \simeq \He(3)/C_3$.
By construction, the linear part of the $\mathbb Z_3^2$-action on $E^4/C_3$ giving $Z_2$ is the same as the linear part of the action on $E^4$ giving $Z_1$. It can be determined from the generating triples, see Remark \ref{RemRET} (3). In our situation we have (up to an automorphism of $\mathbb Z_3^2$): \[ \rho(a,b):= \begin{pmatrix} \zeta_3^b & 0 & 0 & 0 \\ 0 & \zeta_3^{a+b} & 0 & 0 \\ 0 & 0 & \zeta_3^a & 0 \\ 0 & 0 & 0 & \zeta_3^{2a+b} \end{pmatrix}, \qquad \makebox{for all} \quad (a,b) \in \mathbb Z_3^2. \] In the literature $\rho$ is usually called the \emph{analytic representation} of the group action. \end{rem} With the above in mind, it is not hard to write down explicit models of $Z_1$ and $Z_2$: \begin{example}\label{explicitex} We consider the following lattices \[ \Lambda_1:=\mathbb Z[\zeta_3]^4 \qquad \makebox{and} \qquad \Lambda_2 := \mathbb Z[\zeta_3]^4 + \mathbb Z \frac{1+2\zeta_3}{3} (1,1,1,1) \] and define the complex tori $T_i:=\mathbb C^4/\Lambda_i$. By definition $T_1 \simeq E^4$ and $T_2 \simeq E^4/\langle t \rangle$, where $E$ is the equianharmonic elliptic curve and \[ t \colon E^4 \to E^4, \qquad z \mapsto z+ \frac{1+2\zeta_3}{3} (1,1,1,1). \] The two actions $\psi_i$ of $\mathbb Z_3^2$ on $T_i$ that give the quotients $Z_i$ are the following: \begin{align*} \psi_1(1,0)(z) & := \diag(1, \zeta_3, \zeta_3, \zeta_3^2)z + \frac{1+2\zeta_3}{3} (1, 2,0,1), \\ \psi_1(0,1)(z) & :=\diag (\zeta_3, \zeta_3, 1, \zeta_3) z + \frac{1+2\zeta_3}{3} (0,0,2,0), \\ \psi_2(1,0)(z) & :=\diag (1, \zeta_3, \zeta_3, \zeta_3^2)z + \frac{1}{3} (1, 0 , 0, 2), \\ \psi_2(0,1)(z) & :=\diag(\zeta_3, \zeta_3, 1, \zeta_3) z + \frac{1}{3}(0, 2 \zeta_3, 1+ \zeta_3, 2). \end{align*} \end{example} The cohomology of the manifolds $Z_i$ is easy to compute: \begin{proposition}\label{Hodgesmooth} The projective manifolds $Z_1$ and $Z_2$ have the same Hodge numbers: \[ h^{1,0}=h^{2,0}=h^{3,1}=h^{4,0}=0, \quad h^{1,1}=4, \quad h^{2,1}=3,\quad h^{3,0}=1, \quad h^{2,2}=6.
\] Moreover, $\ensuremath{\mathcal{O}}(K_{Z_i}) \neq \ensuremath{\mathcal{O}}_{Z_i}$ and $\ensuremath{\mathcal{O}}(K_{Z_i}) ^{\otimes 3} \simeq \ensuremath{\mathcal{O}}_{Z_i}$. \end{proposition} \begin{proof} For any complex torus $T=\mathbb C^n/\Lambda$ the Dolbeault groups have the description \[ H^{p,q}(T)= \Lambda^p \Omega \otimes \Lambda^q \overline{\Omega}, \quad \makebox{where} \quad \Omega:=\langle dz_1, \ldots, dz_n \rangle. \] Since the manifolds $Z_i$ are quotients of tori by free actions, the groups $H^{p,q}(Z_i)$ are isomorphic to the invariant parts of $\Lambda^p \Omega \otimes \Lambda^q \overline{\Omega}$ under the $\mathbb Z_3^2$-action induced by $\psi_i$. Since the differential of a translation is the identity, only the linear part of $\psi_i$, i.e. the analytic representation, acts on these forms. Since both actions $\psi_i$ have the same analytic representation $\rho$, both quotients $Z_1$ and $Z_2$ have isomorphic Dolbeault groups and, in particular, the same Hodge numbers. To compute these groups explicitly we take the standard basis \[ \mathcal B:=\lbrace dz_{i_1} \wedge \ldots \wedge dz_{i_p} \otimes d\overline{z}_{j_1} \wedge \ldots \wedge d\overline{z}_{j_q} ~ \big\vert ~ i_1 < \ldots < i_p \leq 4, ~ j_1 < \ldots < j_q \leq 4 \rbrace \] of $\Lambda^p \Omega \otimes \Lambda^q \overline{\Omega}$. The fact that $\rho$ acts by diagonal matrices implies that a basis of $H^{p,q}(Z_i)$ is given by the invariant basis vectors of $\mathcal B$. The non-zero Dolbeault groups are: \begin{align*} H^{3,0}(Z_i)& \simeq \langle dz_1\wedge dz_2 \wedge dz_4 \rangle, \\ H^{1,1}(Z_i)& \simeq \langle dz_i\otimes d\overline{z}_i ~ \big\vert ~ i \leq 4 \rangle, \\ H^{2,1}(Z_i)& \simeq \langle dz_1 \wedge dz_3 \otimes d\overline{z}_2, dz_2 \wedge dz_3 \otimes d\overline{z}_4, dz_3 \wedge dz_4 \otimes d\overline{z}_1 \rangle, \\ H^{2,2}(Z_i) & \simeq \langle dz_i \wedge dz_j \otimes d\overline{z}_i \wedge d\overline{z}_j ~ \big\vert ~ i < j \leq 4 \rangle.
\end{align*} To prove the statement about $\ensuremath{\mathcal{O}}(K_{Z_i})$, we note that the differential form $$\big( dz_1 \wedge \ldots \wedge dz_4\big)^{\otimes 3}$$ is $\mathbb Z_3^2$-invariant. Thus it descends to $Z_i$ and provides a trivialization of $\ensuremath{\mathcal{O}}(K_{Z_i})^{\otimes 3}$. \end{proof} The remaining part of the subsection is devoted to proving that the manifolds $Z_1$ and $Z_2$ are not homeomorphic. More precisely, we shall show that they have non-isomorphic fundamental groups. \begin{rem} The fundamental group of $Z_i$ is isomorphic to the group of deck transformations $\Gamma_i$ of the universal cover $\mathbb C^4 \to T_i \to Z_i$. It consists of the lifts of the automorphisms $\psi_i(a,b)$ for all $(a,b) \in \mathbb Z_3^2$ and is therefore a group of affine transformations. Since the linear parts $\rho(a,b)$ of the maps $\psi_i(a,b)$, viewed as real $8\times 8$ matrices, are orthogonal, we can more precisely say that $\Gamma_i$ is a cocompact free discrete subgroup of the Euclidean group of isometries $\mathbb{E}(8):= \mathbb R^8 \rtimes \OO(8)$. Because the action of $\mathbb Z_3^2$ on $T_i$ does not contain translations, the lattice $\Lambda_i$ of the torus $T_i$ is equal to the intersection $\Gamma_i \cap \mathbb R^8$, i.e. the translation subgroup of $\Gamma_i$. \end{rem} \begin{definition} 1) A discrete cocompact subgroup of $\mathbb{E}(n)$ is called a {\em crystallographic group}. 2) A {\em Bieberbach group} is a torsion free crystallographic group. \end{definition} As a modern reference for Bieberbach groups we use \cite{LCh}; for the original results see \cite{bib1}, \cite{bib2}. \begin{rem} 1) It is worth observing that the underlying $\mathcal C^{\infty}$-manifold of $Z_i$ admits a \emph{flat Riemannian metric}, i.e. a metric such that the curvature tensor \[ R(X,Y)Z:=\nabla_X \nabla_YZ - \nabla_Y \nabla_XZ - \nabla_{[X,Y]}Z \] with respect to the \emph{Levi-Civita connection} is identically zero.
Vice versa, each compact flat Riemannian $n$-manifold is isometric to a quotient $\ensuremath{\mathbb{R}}^n /\Gamma$, where $\Gamma$ is a Bieberbach group (cf. \cite[Chapter II]{LCh}). Moreover, the quotient $\Gamma / \Lambda$ by the translation subgroup is isomorphic to the holonomy group of $\ensuremath{\mathbb{R}}^n /\Gamma$. 2) Obviously not every quotient of $\ensuremath{\mathbb{R}}^{2n}$ by a Bieberbach group has a complex structure. If there is a complex structure, then the complex manifold is an \'etale torus quotient and is called a {\em generalized hyperelliptic manifold}. These have been studied and classified in dimension 2 by Bagnera and de Franchis, and in dimension 3 by Uchida-Yoshihara \cite{UchidaYoshi}, Lange \cite{Lange} and Catanese-Demleitner \cite{AndiFab}. In his PhD thesis \cite{Demleitner} Demleitner gave a complete list of holonomy groups of generalized hyperelliptic $4$-folds. The manifolds $Z_1$ and $Z_2$ are two distinct {\em rigid} examples, with holonomy group $\mathbb Z_3^2$. The second author and Demleitner are working on a complete classification of rigid generalized hyperelliptic $4$-folds. \end{rem} In order to distinguish the fundamental groups $\Gamma_1$ and $\Gamma_2$ of $Z_1$ and $Z_2$ we will use the first and second of the following three theorems of Bieberbach (cf. \cite[Chapter I]{LCh}): \begin{theorem}[Bieberbach's three theorems]\label{biberer} \ (1) The translation subgroup $\Lambda:=\Gamma \cap \mathbb R^n$ of a crystallographic group $\Gamma \leq \mathbb{E}(n)$ is a lattice of rank $n$ and $\Gamma/\Lambda$ is finite. All other normal abelian subgroups of $\Gamma$ are contained in $\Lambda$. (2) Let $\Gamma_1, \Gamma_2 \leq \mathbb{E}(n)$ be two crystallographic groups and $f \colon \Gamma_1 \to \Gamma_2$ be an isomorphism. Then there exists an affine transformation $\alpha \in \Aff(n)$, such that $f(g)=\alpha \circ g \circ \alpha^{-1}$ for all $g\in \Gamma_1$.
(3) In each dimension there are only finitely many isomorphism classes of crystallographic groups. \end{theorem} \begin{rem}\label{diffeovarphi} 1) Assume that $Z_1$ and $Z_2$ are homeomorphic. Then, by Bieberbach's second theorem, there exists an affine transformation $\alpha(x)=Ax+b$ such that $\alpha \circ \Gamma_2 \circ \alpha^{-1}=\Gamma_1$. Bieberbach's first theorem implies that $\alpha \circ \Lambda_2 \circ \alpha^{-1}=\Lambda_1$. In other words, $\alpha$ induces diffeomorphisms \[ \widehat{\alpha} \colon Z_2 \to Z_1 \qquad \makebox{and} \qquad \widetilde{\alpha} \colon \mathbb R^8/\Lambda_2 \to \mathbb R^8/\Lambda_1 \] which make the following diagram commutative: \[ \xymatrix{ \mathbb R^8 \ar[r]^{\alpha} \ar[d] & \ar[d] \mathbb R^8 \\ \mathbb R^8/\Lambda_2 \ar[d]\ar[r]^{\widetilde{\alpha}} & \mathbb R^8/\Lambda_1 \ar[d] \\ Z_2\ar[r]^{\widehat{\alpha}} & Z_1.} \] In particular, Bieberbach's theorems imply that $Z_1$ and $Z_2$ have isomorphic fundamental groups if and only if they are diffeomorphic, even by an affine diffeomorphism. 2) Recall that the actions $\psi_1$ and $\psi_2$ have the same analytic representation $\rho$, which we now view as a real representation: \[ \rho_{\mathbb R}(a,b) := \begin{pmatrix} B^b & 0 & 0 & 0 \\ 0 & B^{a+b} & 0 & 0 \\ 0 & 0 & B^a & 0 \\ 0 & 0 & 0 & B^{2a+b} \end{pmatrix}, \quad B:=-\frac{1}{2} \begin{pmatrix} 1 & \sqrt{3} \\ -\sqrt{3} &1 \end{pmatrix}. \] By the commutativity of the diagram in (1), there exists an automorphism $\varphi \in \Aut(\mathbb Z_3^2)$ such that $$ A \rho_{\mathbb R}(a,b) A^{-1} =\rho_{\mathbb R}\big(\varphi(a,b)\big), \ \forall \ (a,b) \in \mathbb Z_3^2. $$ In other words, $\rho_{\mathbb R}(a,b)$ and $\rho_{\mathbb R}\big(\varphi(a,b)\big)$ are isomorphic as representations over $\mathbb R$. \end{rem} \begin{proposition}\label{4twodims} The representation $\rho_{\mathbb R}$ is the sum of four distinct irreducible two-dimensional representations over $\mathbb R$.
\end{proposition} \begin{proof} The representations $B^b, B^{a+b}, B^a$ and $B^{2a+b}$ are indeed irreducible, because $B$ has no real eigenvalues and hence no invariant line. Obviously they are distinct. \end{proof} \begin{rem} 1) The group $\mathbb Z_3^2$ has precisely $5$ irreducible real representations: the trivial representation and the four two-dimensional representations from above. This can be verified with the help of the formula \[ |G| = \sum_{\chi \in \Irr_{\mathbb R}(G)} \frac{\chi(1)^2}{\langle \chi, \chi \rangle}, \] which holds for any finite group $G$. 2) By Schur's Lemma the endomorphism algebra $\End_G(V)$ of an irreducible real representation $V$ of a finite group $G$ is a finite-dimensional division algebra. As it is clearly associative, it is isomorphic to $\mathbb R$, $\mathbb C$ or the quaternions $\mathbb H$, according to Frobenius' theorem \cite{frob}. \end{rem} \begin{proposition}\label{comMatrices} \ 1) The $\mathbb R$-algebra of matrices $H$ which commute with $B$ is: \[ \bigg\lbrace \begin{pmatrix} \lambda & -\mu \\ \mu & \lambda \end{pmatrix} ~ \bigg\vert ~ \lambda,\mu \in \mathbb R \bigg\rbrace \simeq \mathbb C. \] 2) The $\mathbb R$-vector space of matrices $H$ with $HB=B^2H$ is \[ \bigg\lbrace \begin{pmatrix} \lambda & \mu \\ \mu & -\lambda \end{pmatrix} ~ \bigg\vert ~ \lambda,\mu \in \mathbb R \bigg\rbrace \simeq \mathbb R^2. \] The matrices in $1)$ define $\mathbb C$-linear maps and the matrices in $2)$ define $\mathbb C$-antilinear maps. In complex coordinates $z=x+iy$ we may identify them with \[ h_{\lambda + i \mu} \colon \mathbb C \to \mathbb C, \quad z\mapsto (\lambda+i\mu)z \qquad \makebox{and} \qquad \overline{h}_{\lambda + i \mu} \colon \mathbb C \to \mathbb C, \quad z\mapsto (\lambda+i\mu)\overline{z}. \] \end{proposition} \begin{proof} An Ansatz with a general $2\times 2$ matrix $H$ yields a system of linear equations, whose solutions are precisely the displayed matrices.
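To make this explicit in case 1): writing $H= \begin{pmatrix} p & q \\ r & s \end{pmatrix}$, a direct computation gives \[ HB-BH= \frac{\sqrt{3}}{2} \begin{pmatrix} q+r & s-p \\ s-p & -(q+r) \end{pmatrix}, \] so $HB=BH$ is equivalent to $s=p$ and $r=-q$, i.e. $H$ is of the displayed form with $\lambda=p$ and $\mu=-q$. Case 2) is analogous.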
\end{proof} Now we are ready to prove the following: \begin{theorem}\label{Z1notZ2} The fundamental groups of the manifolds $Z_1$ and $Z_2$ are not isomorphic. \end{theorem} \begin{proof} Assume the contrary. Then, as we have seen in Remark \ref{diffeovarphi}, there exists an affine transformation $\alpha(x)=Ax+b$ inducing a diffeomorphism $\widehat{\alpha} \colon Z_2 \to Z_1$ and an automorphism $\varphi \in \Aut(\mathbb Z_3^2)$, such that \[ A \rho_{\mathbb R}(a,b) A^{-1} =\rho_{\mathbb R}\big(\varphi(a,b)\big), \qquad \makebox{for all} \qquad (a,b) \in \mathbb Z_3^2. \] We subdivide $A$ into $16$ blocks $A_{i,j}$ of $2\times 2$-matrices: \[ A= \begin{pmatrix} A_{1,1} & A_{1,2} & A_{1,3} & A_{1,4} \\ A_{2,1} & A_{2,2} & A_{2,3} & A_{2,4} \\ A_{3,1} & A_{3,2} & A_{3,3} & A_{3,4} \\ A_{4,1} & A_{4,2} & A_{4,3} & A_{4,4} \end{pmatrix}. \] By Proposition \ref{4twodims} the representation $\rho_{\mathbb R}$, and hence also $\rho_{\mathbb R}\circ \varphi$, is made up of four distinct irreducible two-dimensional representations of $\mathbb Z_3^2$. Schur's lemma tells us that there exists a permutation $\tau \in \mathfrak S_4$ such that for each $i$ the block $A_{\tau(i),i}$ is invertible, while the other $12$ blocks are identically zero. The non-zero blocks are those described in Proposition \ref{comMatrices}: each block $A_{\tau(i),i}$ either commutes with $B$, or $A_{\tau(i),i} B=B^2A_{\tau(i),i}$. Whence, up to a permutation of blocks, $A$ is a sum of $\mathbb C$-linear and $\mathbb C$-antilinear maps: \[ h_{w_i}(z)=w_i z \qquad \makebox{or} \qquad \overline{h}_{w_i}(z)=w_i \overline{z}, \qquad \makebox{where} \quad w_i \in \mathbb C^{\ast}. \] Since $\alpha$ defines at the same time a diffeomorphism between the tori $T_2$ and $T_1$, we have \[ A \cdot \Lambda_2 = \Lambda_1 =\mathbb Z[\zeta_3]^4, \] where we view $A$ as a map $A\colon \mathbb C^4 \to \mathbb C^4$. In particular $Ae_i \in\mathbb Z[\zeta_3]^4$, for all $1\leq i \leq 4$.
This shows that all $w_i$ belong to $\mathbb Z[\zeta_3]$. Similarly, since $A^{-1}e_i$ is a vector with just one non-zero entry, it must be contained in the sublattice \[ \mathbb Z[\zeta_3]^4 \subset \Lambda_2= \mathbb Z[\zeta_3]^4 + \mathbb Z \frac{1+2\zeta_3}{3} (1,1,1,1). \] Thus, $w_{i}^{-1}$ or its conjugate is also an Eisenstein integer and we conclude that $w_i$ is a unit in $\mathbb Z[\zeta_3]$, for all $1\leq i \leq 4$. On the other hand, the product of $A$ and the lattice vector $\frac{1+2\zeta_3}{3} (1,1,1,1) \in \Lambda_2$ belongs to $\Lambda_1$, which means that \[ w_i \frac{1+2\zeta_3}{3} \in \mathbb Z[\zeta_3] \qquad \makebox{or} \qquad w_i \frac{1+2\zeta_3^2}{3} \in \mathbb Z[\zeta_3]. \] This is a contradiction, because $w_i$ is a unit, while the elements $\frac{1+2\zeta_3}{3}$ and $\frac{1+2\zeta_3^2}{3}$ have norm $\frac{1}{3}$. \end{proof} \subsection{Rigid quotients of $E^3$ by the exceptional groups} Let $X := E^3 / G$ be a quotient of $E^3$ by a rigid diagonal action of one of the four exceptional groups $G$. Then according to Proposition \ref{norigid} and Remark \ref{structureHeis} (2), the action is not free and $X$ has singular points. The singular points of $X$ are precisely the images of the finitely many points $p=(p_1,p_2,p_3)$ in $E^3$ with non-trivial stabilizer group. The stabilizer of a point $p\in E^3$ is the intersection of the cyclic groups $G_{p_i}$ and therefore cyclic (cf. Remark \ref{RemRET}). We show that if $G = \mathbb Z_2 \times \mathbb Z_4$ or $\mathbb Z_2^2 \rtimes_{\varphi_4} \mathbb Z_4$, then $X$ always has non-canonical singularities of type $\frac{1}{4}(1,1,1)$. Therefore we shall restrict ourselves to the two remaining exceptional groups $\mathbb Z_3^2$ and $\He(3)$. Since a non-trivial cyclic subgroup of $\mathbb Z_3^2$ or $\He(3)$ is isomorphic to $\mathbb Z_3$, the threefold $X$ has only cyclic quotient singularities of type $\frac{1}{3}(1,1,1)$ and $\frac{1}{3}(1,1,2)$. We point out that these singularities are canonical; more precisely, $\frac{1}{3}(1,1,2)$ is terminal by the Shepherd-Barron-Tai criterion (see \cite[ p.
376, Theorem]{R87}) and $\frac{1}{3}(1,1,1)$ is Gorenstein. In particular, for any resolution of singularities $\rho \colon \widehat{X} \to X$ we have $\kappa(\widehat{X})=\kappa(X)= \kappa(E^3)=0$. In terms of infinitesimal deformation theory these singularities are also well behaved in the following sense: \begin{proposition}\label{resolution} Let $X$ be a threefold with only isolated singularities of type $\frac{1}{3}(1,1,1)$ and $\frac{1}{3}(1,1,2)$. Then $X$ has canonical singularities and there is a resolution of singularities $\rho \colon \widehat{X} \to X$, such that $H^1(\widehat{X}, \Theta_{\widehat{X}}) \simeq H^1(X, \Theta_X)$. In particular, if $X$ is rigid, then also $\widehat{X}$ is rigid. \end{proposition} \begin{proof} In \cite[Corollary 5.9, Proposition 5.10]{BG} the authors showed that a germ $(U,p_0)$ of a singularity of type $\frac{1}{3}(1,1,1)$ or $\frac{1}{3}(1,1,2)$ has a resolution $\rho \colon \widehat{U} \to U$ such that \[ \rho_{\ast} \Theta_{\widehat{U}}= \Theta_U \qquad \makebox{and} \qquad R^1 \rho_{\ast} \Theta_{\widehat{U}}=0. \qquad \qquad (\ast) \] These resolutions can be glued to obtain a resolution $\rho \colon \widehat{X} \to X$ with the same property $(\ast)$. The low-term exact sequence of the Leray spectral sequence \[ 0 \to H^1(X,\rho_{\ast} \Theta_{\widehat{X}}) \to H^1(\widehat{X},\Theta_{\widehat{X}}) \to H^0(X,R^1\rho_{\ast} \Theta_{\widehat{X}}) \to \ldots \] gives us an isomorphism $H^1(\widehat{X},\Theta_{\widehat{X}}) \simeq H^1(X,\Theta_X)$. \end{proof} \begin{rem} Since canonical singularities are rational (see e.g. \cite[p. 363, (3.8)]{R87}), Leray's spectral sequence implies that for any resolution of singularities $\rho \colon \widehat{X} \ensuremath{\rightarrow} X$ the irregularities \[ q_i(\widehat{X}) := h^i(\widehat{X},\ensuremath{\mathcal{O}}_{\widehat{X}}) \qquad \makebox{and} \qquad q_i(X):= h^i(X, \ensuremath{\mathcal{O}}_X) \] coincide.
Since $H^{i,0}(\widehat{X}) \simeq H^{i,0}(E^3)^G$, we can compute the irregularities $q_i$ in terms of invariant holomorphic differential forms: \[ q_i(X)=q_i(\widehat{X})=\dim_{\mathbb C}\big(H^{i,0}(E^3)^G\big). \] It is common to denote the top irregularity $q_3$ by $p_g$ and call it the geometric genus of $X$ resp. $\widehat{X}$. \end{rem} \begin{rem}\label{InvS} Let $X:=E^3/G$, where $G$ is a finite group acting diagonally on $E^3$. 1) According to Proposition \ref{quadraticdiff} rigidity means that none of the quadratic differentials $dz_i \otimes dz_j$ is $G$-invariant. This implies that $q_2(X)=0$, since none of the $2$-forms $dz_i \wedge dz_j$ can then be invariant either. For the same reason, we have $q_1(X)=0$. 2) If $dz_1 \wedge dz_2 \wedge dz_3$ is $G$-invariant, then the canonical sheaf of $X$ is trivial, hence $X$ is Gorenstein and $p_g(X)=1$. Otherwise, $p_g(X)=0$, and in this case $\ensuremath{\mathcal{O}}(3K_X) \simeq \ensuremath{\mathcal{O}}_X$. 3) If $p_g(X)=1$, then $X$ is a Gorenstein Calabi-Yau threefold and its singularities must be of type $\frac{1}{3}(1,1,1)$. If the $G$-action is moreover rigid, none of the quadratic differentials $dz_i \otimes dz_j$ is $G$-invariant and an easy calculation using the invariance of $dz_1 \wedge dz_2 \wedge dz_3$ shows that there are no invariant forms of type $(1,2)$ on $E^3$. In particular, the topological Euler number is given by $e(X)=2 \dim_{\mathbb C}\big( H^{1,1}(E^3)^G\big)$, because $H^i(X,\mathbb C) \simeq H^i(E^3,\mathbb C)^G$ for all $i$. 4) Similarly, if $p_g(X)=0$ and $X$ is rigid, then we have \[ e(X)=2 \big[1+ \dim_{\mathbb C}\big( H^{1,1}(E^3)^G\big) - \dim_{\mathbb C}\big( H^{1,2}(E^3)^G\big)\big]. \] \end{rem} \begin{lemma}\label{Euler} Let $X$ be a quotient of $E^3$ by a rigid diagonal action of $\mathbb Z_3^2$ or $\He(3)$.
Let $N_{gor}$ be the number of singularities of type $\frac{1}{3}(1,1,1)$ and $N_{ter}$ be the number of singularities of type $\frac{1}{3}(1,1,2)$. Then \[ e(X)=\frac{2}{3}(N_{gor}+N_{ter}). \] \end{lemma} \begin{proof} The quotient map $\pi \colon E^3 \to X$ restricts to an unramified cover \[ \pi \colon E^3\setminus \pi^{-1}(S) \to X\setminus S \] of degree $d=9$ or $27$, where $S:=\Sing(X)$. Since the Euler number is additive and $e(E^3)=0$, we obtain: \[ -\vert \pi^{-1}(S) \vert = e\big(E^3\setminus \pi^{-1}(S)\big)=d e\big(X\setminus S\big) = d\big(e(X) - \vert S \vert \big). \] We conclude the proof, because the fibre of $\pi$ over each singularity consists of $d/3$ points, so that $\vert \pi^{-1}(S) \vert = \frac{d}{3}\vert S \vert$ and the displayed equation yields $e(X)=\frac{2}{3}\vert S \vert$. \end{proof} \begin{proposition}\label{welcheCalabi} Let $X$ be a quotient of $E^3$ by a rigid action of $\mathbb Z_3^2$ or $\He(3)$. \begin{enumerate} \item If $p_g=0$ then $X$ has $9$ terminal singularities of type $\frac{1}{3}(1,1,2)$. \item If $p_g=1$ then $\Sing(X)$ consists of $9$ or $27$ Gorenstein singularities of type $\frac{1}{3}(1,1,1)$. The latter happens if and only if $X$ is isomorphic to Beauville's threefold $X_{3,3}$. \end{enumerate} \end{proposition} \begin{proof} 1) Thanks to the orbifold Riemann-Roch formula (see e.g. \cite[p. 412, Corollary 10.3]{R87}) it is possible to express $\chi(\mathcal O_X)$ in terms of the intersection of Chern classes of a resolution $\rho \colon \widehat{X} \ensuremath{\rightarrow} X$ and local data coming from the singularities. In fact, it reads \[ 24\chi(\mathcal O_X) = - c_1(\rho^*K_X) \cdot c_2(\widehat{X}) + \sum_{\tiny{x ~ ter}} \frac{n_x^2-1}{n_x}, \] where the sum runs over all terminal singularities $\frac{1}{n_x}(1,a_x,n_x-a_x)$ of $X$. Since $\ensuremath{\mathcal{O}}(3K_X) \simeq \ensuremath{\mathcal{O}}_X$, the first summand is zero. Moreover, all terminal singularities of $X$ are of type $\frac{1}{3}(1,1,2)$ and $\chi(\ensuremath{\mathcal{O}}_X) = 1$, because $q_1=q_2=p_g=0$. Hence the formula implies $N_{ter}=9$.
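Indeed, each singularity of type $\frac{1}{3}(1,1,2)$ contributes $\frac{n_x^2-1}{n_x}=\frac{8}{3}$ to the sum, so the formula reads \[ 24 = 24\chi(\ensuremath{\mathcal{O}}_X)= \frac{8}{3}\, N_{ter}. \]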
2) According to Lemma \ref{Euler} and Remark \ref{InvS} 3), we have \[ N_{gor}=3 \dim_{\mathbb C}\big( H^{1,1}(E^3)^G\big). \] As in the proof of Proposition \ref{Hodgesmooth} we consider $X$ as a $\mathbb Z_3^2$-quotient of a three-dimensional torus and use the analytic representation $\rho$ to compute the dimension of the space of invariant $(1,1)$-forms. The analytic representation $\rho$ is easy to describe, because it is a sum of three non-trivial characters $\chi_i$ of $\mathbb Z_3^2$. The invariance of $dz_1 \wedge dz_2 \wedge dz_3$ imposes the condition $\chi_1 \chi_2 \chi_3 =\chi_{triv}$. Moreover, we have $\chi_1 \neq \chi_2^2$, because $dz_1 \otimes dz_2$ is not invariant. Whence there are two cases: \noindent \underline{Case 1:} $\chi_1=\chi_2$. Then $\chi_3$ is also equal to $\chi_1$ and, up to an automorphism of $\mathbb Z_3^2$, it holds $\rho(a,b)=\diag\big(\zeta_3^{a},\zeta_3^{a}, \zeta_3^{a} \big)$. In this case all $(1,1)$-forms are invariant, i.e. $\dim_{\mathbb C}\big( H^{1,1}(E^3)^G\big)=9$, and we conclude $N_{gor}=27$. The argument we used in the proof of Theorem \ref{oneisoclass}, i.e. \cite[Corollary 13.3.5]{BL}, tells us that $X \simeq X_{3,3}$. \underline{Case 2:} $\chi_1 \neq \chi_2$. Then, since $\chi_1 \neq \chi_2^2$, there exists an automorphism of $\mathbb Z_3^2$ such that $\chi_1(a,b)=\zeta_3^{a}$ and $\chi_2(a,b)=\zeta_3^{b}$. This implies that $\chi_3(a,b)=\zeta_3^{2a+2b}$ and shows that the analytic representation is $\rho(a,b)=\diag\big(\zeta_3^{a},\zeta_3^{b}, \zeta_3^{2a+2b} \big)$. We compute $\dim_{\mathbb C}\big( H^{1,1}(E^3)^G\big)=3$ and obtain $N_{gor}=9$. \end{proof} \begin{proposition} Let $X$ be a quotient of $E^3$ by a rigid diagonal action of $G=\mathbb Z_2 \times \mathbb Z_4$ or $\mathbb Z_2^2 \rtimes_{\varphi_4} \mathbb Z_4$. Then $X$ has non-canonical singularities.
\end{proposition} \begin{proof} We show that $\rho(g)=\zeta_4 \cdot \Id$ for some $g\in G$, which implies the existence of a singularity of type $\frac{1}{4}(1,1,1)$, because an affine transformation of $E$ with linear part $\zeta_4 \neq 1$ always has a fixed point. As above, we may consider $X$ as a quotient by $G=\mathbb Z_2 \times \mathbb Z_4$. The analytic representation $\rho$ is a sum of three characters $\chi_i$ of $G$ of order $4$, such that $\chi_i \neq \overline{\chi_j}$ for $i \neq j$. The order $4$ characters of $G$ are: \[ \zeta_4^b, \quad \zeta_4^{3b}, \quad (-1)^a \zeta_4^b \quad \makebox{and} \quad (-1)^{a} \zeta_4^{3b}. \] Since they come in pairs of conjugates, we conclude that two of the three characters $\chi_i$ in the analytic representation must be the same. Without loss of generality $\chi_1=\chi_2=\zeta_4^b$ and $\rho$ is equal to \[ \diag(\zeta_4^b,\zeta_4^b,\zeta_4^b), \quad \diag(\zeta_4^b,\zeta_4^b,(-1)^a \zeta_4^b ) \quad \makebox{or} \quad \diag(\zeta_4^b ,\zeta_4^b,(-1)^{a} \zeta_4^{3b}). \] In each of these three cases there is an element $g \in G$ with $\rho(g)=\zeta_4 \cdot \Id$, namely $g=(0,1)$ in the first two cases and $g=(1,1)$ in the third. \end{proof} Recall that Beauville's threefold $X_{3,3}$ is simply connected, see Remark \ref{Beau}. This allows us to show: \begin{proposition}\label{BeauvilleUnifor} Let $X$ be a quotient of $E^3$ by a rigid diagonal action of $\mathbb Z_3^2$ or $\He(3)$ with $p_g=1$ and $9$ singularities. Then $X$ is uniformized by Beauville's threefold $X_{3,3}$ via a degree three map. In particular, $\pi_1(X)$ is isomorphic to $\mathbb Z_3$. \end{proposition} \begin{proof} We assume that $X$ is a quotient of $E^3$ by $\mathbb Z_3^2$. The case where $X$ is a quotient of $E^3$ by $\He(3)$, or equivalently a quotient of $E^3/C_3$ by $\mathbb Z_3^2$, is handled exactly in the same way. In the proof of Proposition \ref{welcheCalabi} 2), we saw that, up to an automorphism of $\mathbb Z_3^2$, the analytic representation is $\rho(a,b)=\diag\big(\zeta_3^{a},\zeta_3^{b}, \zeta_3^{2a+2b} \big)$.
The restriction to the subgroup $H:=\langle (1,1) \rangle \leq \mathbb Z_3^2$ is generated by $\zeta_3 \cdot \Id$, which implies that $E^3/H$ is isomorphic to $X_{3,3}$, again using \cite[Corollary 13.3.5]{BL}. We show that the map $u$ fitting in the diagram \[ \begin{xy} \xymatrix{ E^3 \ar[r]^{p_1} \ar[d]_{p_2} & X \\ E^3/H \ar[ru]_u & } \end{xy} \] is a local biholomorphism. Then, since $u$ is proper, it is an unramified cover. Clearly, $u$ maps $\Sing(E^3/H)=\lbrace 27 \times \frac{1}{3}(1,1,1) \rbrace$ to $\Sing(X)=\lbrace 9 \times \frac{1}{3}(1,1,1) \rbrace$. More precisely, since $u$ has degree three, the fibre of $u$ over each point $p \in X$ consists either of three singular or of three smooth points, depending on whether $p$ is singular or not. Take a point $q \in E^3/H$ such that $u(q)=p$ and a point $x \in E^3$ such that $p_2(x) =q$. Then $p_1(x)=p$ and the diagram of local rings commutes: \[ \begin{xy} \xymatrix{ \mathcal O_{E^3,x} & \ar[l]_{p_1^{\ast}} \mathcal O_{X,p} \ar[ld]^{u^{\ast}} \\ \mathcal O_{E^3/H,q} \ar[u]^{p_2^{\ast}} & } \end{xy} \] By definition of the sheaf of holomorphic functions on a quotient, the maps $p_1^{\ast}$ resp. $p_2^{\ast}$ are isomorphisms onto the subrings $\mathcal O_{E^3,x}^{G_x}$ resp. $\mathcal O_{E^3,x}^{H_x}$ of $\mathcal O_{E^3,x}$. The inclusion $\mathcal O_{E^3,x}^{G_x} \subset \mathcal O_{E^3,x}^{H_x}$ is an equality, since the stabilizers $H_x \subset G_x$ are equal. Indeed, both groups $H_x$ and $G_x$ are either trivial or of order $3$, depending on whether $p$ and $q$ are smooth or singular. \end{proof} \begin{rem} Let $X$ be a quotient of $E^3$ by a rigid diagonal action of $\mathbb Z_3^2$ or $\He(3)$ uniformized by $X_{3,3}$. Then there is a crepant resolution $\rho \colon \widehat{X} \to X$ of singularities, such that $\widehat{X}$ is uniformized by the crepant resolution $\widehat{X}_{3,3}$ of Beauville's threefold.
\end{rem} \begin{theorem}\label{rigsingkod0} For each of the exceptional groups $\mathbb Z_3^2$ and $\He(3)$ there are exactly four isomorphism classes of quotients \[ X_i:=E^3/\mathbb Z_3^2 \qquad \makebox{and} \qquad Y_i:=E^3/\He(3) \] obtained by a rigid diagonal $G$-action. Some of the invariants are listed in the following table: \begin{center} {\footnotesize { \renewcommand{\arraystretch}{1.5} \setlength{\tabcolsep}{4pt} \begin{tabular}{c c | c c c | c } & & $p_g$ & $b_3$ & $b_2$ & $\Sing$ \\ \hline \hline $X_1$ & $Y_1$ & $0$ & $0$ & $5$ & $9 \times \frac{1}{3}(1,1,1)$, $9 \times \frac{1}{3}(1,1,2)$ \\ \hline $X_2$ & $Y_2$ & $0$ & $2$ & $3$ & $9 \times \frac{1}{3}(1,1,2)$ \\ \hline $X_3$ & $Y_3$ & $1$ & $2$ & $3$ & $9 \times \frac{1}{3}(1,1,1)$ \\ \hline $X_4$ & $Y_4$ & $1$ & $2$ & $9$ & $27 \times \frac{1}{3}(1,1,1)$ \\ \hline \end{tabular} }} \end{center} The threefolds $X_4$ and $Y_4$ are isomorphic to Beauville's threefold. \end{theorem} \begin{proof} In analogy to the proof of Theorem \ref{Z1Z2Glatt}, we consider for each group $G=\mathbb Z_3^2$ resp. $\He(3)$ the set of triples of generating triples $[V_1,V_2,V_3]$ which correspond to a rigid diagonal group action on $E^3$. Then we determine the orbits of the action of $\mathfrak S_3 \times \mathcal B_3^3 \times \Aut(G)$ on this set. We obtain four orbits for each group. The invariants are then computed as explained in Remark \ref{InvS} and Lemma \ref{Euler}. \end{proof} \begin{rem} 1) In the sequel we shall investigate the relations among the threefolds in the table. By looking at the Betti numbers, it is obvious that $X_i$ is not homeomorphic to $X_j$ or $Y_j$ for $j\neq i$, except for $i=2$ and $j=3$ or vice versa.
2) Using as in the proof of Theorem \ref{Z1Z2Glatt} the identification $\He(3)/C_3 \simeq \mathbb Z_3^2$, it can be checked again by a MAGMA routine that the image of the triple $[W_{i,1}, \ldots, W_{i,3}]$ representing $Y_i$ is a triple $[V_{i,1}, \ldots,V_{i,3}]$ which lies in the orbit representing $X_i$. Therefore there are ramified Galois covers $f_i \colon Y_i \to X_i$ with group $\mathbb Z_3^2$, thanks to Proposition \ref{coverf}. \end{rem} \begin{proposition}\label{diffeo} The threefolds $X_2$ and $X_3$ are diffeomorphic in the orbifold sense. Likewise $Y_2$ and $Y_3$ are also diffeomorphic. \end{proposition} \begin{rem} By Proposition \ref{BeauvilleUnifor} it follows that $\pi_1(X_2)$ and $\pi_1(Y_2)$ are isomorphic to $\mathbb Z_3$. Therefore we know all fundamental groups of the threefolds $X_i, Y_i$ in the above table, except for $i = 1$. It can be shown using Armstrong's result (cf. \cite{armstrong}) that they are simply connected. \end{rem} \begin{proof}[Proof of Proposition \ref{diffeo}] We may realize $X_3$ as the quotient of $E^3$ by the $\mathbb Z_3^2$-action: \[ \psi_3(a,b)(z)=\diag(\zeta_3^a,\zeta_3^b,\zeta_3^{2a+2b})z+ \frac{1+2\zeta_3}{3}(b,a,a). \] Indeed the action is faithful on each factor and, according to the argument in the proof of Proposition \ref{welcheCalabi}, Case 2, the quotient has $9$ singularities of type $\frac{1}{3}(1,1,1)$. Now we modify this action by replacing the third component of $\psi_3$ with the complex conjugate: \[ \psi_2(a,b)(z)=\diag(\zeta_3^a,\zeta_3^b,\zeta_3^{a+b})z+ \bigg( \frac{1+2\zeta_3}{3}b, \frac{1+2\zeta_3}{3}a, \frac{1+2\zeta_3^2}{3}a\bigg). \] The action is still faithful on each factor, rigid and the invariants of the quotient are $p_g=0$, $b_3=2$ and $b_2=3$. Whence the quotient with respect to this action is $X_2$. By construction, the diffeomorphism \[ F\colon E^3 \to E^3, \qquad (z_1,z_2,z_3) \mapsto (z_1,z_2,\overline{z}_3) \] descends to the quotients $\widehat{F} \colon X_3 \to X_2$. 
An affine diffeomorphism between $Y_3$ and $Y_2$ is established in the same way. \end{proof} \bigskip The rest of the section is devoted to showing the following: \begin{theorem}\label{nothomeo} The threefolds in the table consist of five distinct topological types: \[ X_1, \quad Y_1, \quad X_2 \simeq_{\makebox{\tiny{diff}}} X_3, \quad Y_2\simeq_{\makebox{\tiny{diff}}} Y_3 \quad \makebox{and} \quad X_4\simeq_{\makebox{\tiny{bihol}}} Y_4. \] \end{theorem} To prove the theorem it remains to show that the threefolds $X_i$ and $Y_i$ are not homeomorphic for $1 \leq i \leq 3$. Although we cannot use the fundamental groups to distinguish them, our argument is still analogous to the one that we gave in the previous section, i.e. based on Bieberbach's theorems. As a substitute for the fundamental group, we use: \begin{definition} Let $T=\mathbb C^n/\Lambda$ be a complex torus and $G$ be a finite group of automorphisms acting on $T$ without translations. Let $\pi \colon \mathbb C^n \to T$ be the universal cover, then we define the \emph{orbifold fundamental group} as \[ \Gamma:=\lbrace \gamma \colon \mathbb C^n \to \mathbb C^n ~ \big\vert ~ \exists ~ g \in G, ~~ s.t. ~ \pi \circ \gamma = g \circ \pi \rbrace. \] \end{definition} \begin{rem}\label{orbipi1} 1) By definition $\Gamma$ is a cocompact discrete subgroup of the group of affine transformations. The subgroup of translations of $\Gamma$ is the lattice $\Lambda$. If $G$ acts freely in codimension at least two, then $\Gamma$ is isomorphic to the fundamental group of the smooth locus of $T/G$. 2) We point out that $\Gamma_{X_i}$ and $\Gamma_{Y_i}$ can be described in terms of the triples $[V_{i,1}, \ldots,V_{i,3}]$ and $[W_{i,1}, \ldots, W_{i,3}]$ of generating triples, which correspond to $X_i$ and $Y_i$ (cf.
\cite[Section 3]{FourNames}): let $\mathbb T$ be the triangle group $\mathbb T(3,3,3)$ and consider the homomorphisms \[ \phi_{V_{i,j}} \colon \mathbb T \to \mathbb Z_3^2 \qquad \makebox{and} \qquad \phi_{W_{i,j}} \colon \mathbb T \to \He(3). \] Then the groups $\Gamma_{X_i}$ and $\Gamma_{Y_i}$ are isomorphic to the fibred products: \begin{align*} \Gamma_{X_i}& \simeq \lbrace t \in \mathbb T^3 ~ \big\vert ~ \phi_{V_{i,1}}(t_1)= \ldots = \phi_{V_{i,3}}(t_3) \rbrace, \\ \Gamma_{Y_i} & \simeq \lbrace t \in \mathbb T^3 ~ \big\vert ~ \phi_{W_{i,1}}(t_1)= \ldots = \phi_{W_{i,3}}(t_3) \rbrace. \end{align*} \end{rem} \begin{lemma} The groups $\Gamma_{X_i}$ and $\Gamma_{Y_i}$ are not isomorphic for $1\leq i \leq 3$. \end{lemma} \begin{proof} We only treat the case $i=1$. The strategy in the other cases is the same, and even easier. Assume that $\Gamma_{X_1}$ and $\Gamma_{Y_1}$ are isomorphic. Then, by Bieberbach's second theorem \ref{biberer}, there exists an affine transformation \[ \alpha \colon \mathbb R^6 \to \mathbb R^6, \quad x \mapsto Ax+b, \] such that $\alpha \circ \Gamma_{X_1} \circ \alpha^{-1}= \Gamma_{Y_1}$. As explained above, the analytic representations of the $\mathbb Z_3^2$ actions giving the quotients $X_1$ and $Y_1$ coincide. We view this representation $\rho=\rho_1$ as a real representation in the orthogonal group of $6\times 6$ matrices: \[ \rho_{\mathbb R}(a,b) := \begin{pmatrix} B^{a+b} & 0 & 0 \\ 0 & B^{a+b} & 0 \\ 0 & 0 & B^b \end{pmatrix}, \quad B=-\frac{1}{2} \begin{pmatrix} 1 & \sqrt{3} \\ -\sqrt{3} &1 \end{pmatrix}. \] In analogy to Remark \ref{diffeovarphi} (2), there exists an automorphism $\varphi \in \Aut(\mathbb Z_3^2)$, such that $$ A \rho_{\mathbb R}(a,b) A^{-1} =\rho_{\mathbb R}\big(\varphi(a,b)\big), \quad \ \forall \ (a,b) \in \mathbb Z_3^2. $$ The representation $\rho_{\mathbb R}$ consists of two copies of $B^{a+b}$ and one copy of $B^{b}$.
Thus $\rho_{\mathbb R} \circ \varphi$ is also the sum of two copies of an irreducible two-dimensional real representation and another (distinct) irreducible two-dimensional real representation. Schur's Lemma implies that $A$ is a block matrix: \[ A= \begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix}, \quad \makebox{where} \quad A_1 \in \GL(4,\mathbb R) \quad \makebox{and} \quad A_2 \in \GL(2,\mathbb R). \] As in the proof of Theorem \ref{Z1notZ2} we conclude that $A_2$ has to be $\ensuremath{\mathbb{C}}$-linear or $\ensuremath{\mathbb{C}}$-antilinear and we obtain a contradiction by the same reasoning. \end{proof} \begin{rem} We can use the description of the groups $\Gamma_{X_i}$ and $\Gamma_{Y_i}$ from Remark \ref{orbipi1} (2) to compute, with the help of MAGMA, the number of their index three normal subgroups: \begin{center} {\footnotesize \begin{tabular}{c | c | c | c | c| c| c } & $\Gamma_{X_1}$ & $\Gamma_{Y_1}$ & $\Gamma_{X_2}$ & $\Gamma_{Y_2}$ & $\Gamma_{X_3}$ & $\Gamma_{Y_3}$ \\ \hline \# subgrps & $41$ & $14$ & $41$ & $5$ & $41$ & $5$ \\ \end{tabular} } \end{center} This provides another argument that the groups $\Gamma_{X_i}$ and $\Gamma_{Y_i}$ cannot be isomorphic for $1 \leq i \leq 3$. Note that Proposition \ref{diffeo} implies $\Gamma_{X_2} \simeq \Gamma_{X_3}$ and $\Gamma_{Y_2}\simeq \Gamma_{Y_3}$, a fact that can also be verified using the MAGMA command {\tt SearchForIsomorphism}. \end{rem} \begin{proof}[Proof of Theorem \ref{nothomeo}] It remains to show that the threefolds $X_i$ and $Y_i$ are not homeomorphic for $1 \leq i \leq 3$. Assume the converse. We claim that a homeomorphism $f_i \colon X_i \to Y_i$ maps smooth to smooth and singular to singular points. In particular, it restricts to a homeomorphism between the regular loci $f_i \colon X_i^{\circ} \to Y_i^{\circ}$ and therefore induces an isomorphism between $\Gamma_{X_i}$ and $\Gamma_{Y_i}$, see Remark \ref{orbipi1} (1). A contradiction.
To verify the claim, we point out that the local fundamental group of a singularity of type $\frac{1}{3}(1,1,1)$ or $\frac{1}{3}(1,1,2)$ is isomorphic to $\mathbb Z_3$, while the local fundamental group of a smooth point is trivial. We conclude the proof, because $f_i$ induces an isomorphism between $\pi_1^{loc}(X_i,p)$ and $\pi_1^{loc}\big(Y_i,f_i(p)\big)$, for all $p \in X_i$. \end{proof} \bigskip \bigskip \begin{biblist} \bib{armstrong}{article}{ AUTHOR = {Armstrong, M. A.}, TITLE = {The fundamental group of the orbit space of a discontinuous group}, JOURNAL = {Proc. Cambridge Philos. Soc.}, FJOURNAL = {Proceedings of the Cambridge Philosophical Society}, VOLUME = {64}, YEAR = {1968}, PAGES = {299--301}, ISSN = {0008-1981}, MRCLASS = {54.80}, MRNUMBER = {221488}, MRREVIEWER = {R. W. Bagley}, DOI = {10.1017/s0305004100042845}, URL = {https://doi.org/10.1017/s0305004100042845}, } \bib{rigidity}{article}{ author={Bauer, Ingrid}, author={Catanese, Fabrizio}, title={On rigid compact complex surfaces and manifolds}, journal={Adv. Math.}, volume={333}, date={2018}, pages={620--669}, issn={0001-8708}, review={\MR{3818088}}, doi={10.1016/j.aim.2018.05.041}, } \bib{BeauReal}{article}{ author={Bauer, Ingrid}, author={Catanese, Fabrizio}, author={Grunewald, Fritz}, title={Beauville surfaces without real structures}, booktitle= {Geometric methods in algebra and number theory}, series = {Progr. Math.}, volume = {235}, pages = {1--42}, date = {2005}, review={\MR{2159375}}, } \bib{FourNames}{article}{ author={Bauer, Ingrid}, author={Catanese, Fabrizio}, author={Grunewald, Fritz}, author={Pignatelli, Roberto}, TITLE = {Quotients of products of curves, new surfaces with {$p_g=0$} and their fundamental groups}, JOURNAL = {Amer. J.
Math.}, FJOURNAL = {American Journal of Mathematics}, VOLUME = {134}, YEAR = {2012}, NUMBER = {4}, PAGES = {993--1049}, ISSN = {0002-9327}, MRCLASS = {14J29 (14J10)}, MRNUMBER = {2956256}, MRREVIEWER = {Christian Liedtke}, DOI = {10.1353/ajm.2012.0029}, URL = {https://doi.org/10.1353/ajm.2012.0029}, } \bib{BG}{article}{ AUTHOR = {Bauer, Ingrid}, author={Gleissner, Christian}, TITLE = {Fermat's cubic, {K}lein's quartic and rigid complex manifolds of {K}odaira dimension one}, JOURNAL = {Doc. Math.}, FJOURNAL = {Documenta Mathematica}, VOLUME = {25}, YEAR = {2020}, PAGES = {1241--1262}, ISSN = {1431-0635}, MRCLASS = {14B12 (14D06 14M25 32G05 32G07)}, MRNUMBER = {4164723}, DOI = {10.3934/dcdsb.2019218}, URL = {https://doi.org/10.3934/dcdsb.2019218}, } \bib{notinfinitesimally}{article}{ author={Bauer, Ingrid}, author={Pignatelli, Roberto}, title={Rigid but not infinitesimally rigid compact complex manifolds}, eprint={arXiv:1805.02559 [math.AG]}, date={2018}, pages={18} } \bib{beauville}{article}{ author={Beauville, Arnaud}, title={Some remarks on K\"ahler manifolds with $c_{1}=0$}, conference={ title={Classification of algebraic and analytic manifolds}, address={Katata}, date={1982}, }, book={ series={Progr. Math.}, volume={39}, publisher={Birkh\"auser Boston, Boston, MA}, }, date={1983}, pages={1--26}, review={\MR{728605}}, doi={10.1007/BF02592068}, } \bib{bib1}{article}{ AUTHOR = {Bieberbach, Ludwig}, TITLE = {\"{U}ber die {B}ewegungsgruppen der {E}uklidischen {R}\"{a}ume}, JOURNAL = {Math. Ann.}, FJOURNAL = {Mathematische Annalen}, VOLUME = {70}, YEAR = {1911}, NUMBER = {3}, PAGES = {297--336}, ISSN = {0025-5831}, MRCLASS = {DML}, MRNUMBER = {1511623}, DOI = {10.1007/BF01564500}, URL = {https://doi.org/10.1007/BF01564500}, } \bib{bib2}{article}{ AUTHOR = {Bieberbach, Ludwig}, TITLE = {\"{U}ber die {B}ewegungsgruppen der {E}uklidischen {R}\"{a}ume ({Z}weite {A}bhandlung.) {D}ie {G}ruppen mit einem endlichen {F}undamentalbereich}, JOURNAL = {Math. 
Ann.}, FJOURNAL = {Mathematische Annalen}, VOLUME = {72}, YEAR = {1912}, NUMBER = {3}, PAGES = {400--412}, ISSN = {0025-5831}, MRCLASS = {DML}, MRNUMBER = {1511704}, DOI = {10.1007/BF01456724}, URL = {https://doi.org/10.1007/BF01456724}, } \bib{BL}{book}{ AUTHOR = {Birkenhake, Christina}, AUTHOR= {Lange, Herbert}, TITLE = {Complex abelian varieties}, SERIES = {Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]}, VOLUME = {302}, EDITION = {Second}, PUBLISHER = {Springer-Verlag, Berlin}, YEAR = {2004}, PAGES = {xii+635}, ISBN = {3-540-20488-1}, MRCLASS = {14-02 (14H37 14Kxx 32G20)}, MRNUMBER = {2062673}, MRREVIEWER = {Fumio Hazama}, DOI = {10.1007/978-3-662-06307-1}, URL = {https://doi.org/10.1007/978-3-662-06307-1}, } \bib{MAGMA}{article}{ AUTHOR = {Bosma, Wieb}, AUTHOR = {Cannon, John}, AUTHOR = { Playoust, Catherine}, TITLE = {The {M}agma algebra system. {I}. {T}he user language}, NOTE = {Computational algebra and number theory (London, 1993)}, JOURNAL = {J. Symbolic Comput.}, FJOURNAL = {Journal of Symbolic Computation}, VOLUME = {24}, YEAR = {1997}, NUMBER = {3-4}, PAGES = {235--265}, ISSN = {0747-7171}, MRCLASS = {68Q40}, MRNUMBER = {MR1484478}, DOI = {10.1006/jsco.1996.0125}, URL = {http://dx.doi.org/10.1006/jsco.1996.0125}, } \bib{AndiFab}{article} { AUTHOR = {Catanese, Fabrizio}, AUTHOR = {Demleitner, Andreas}, TITLE = {The classification of hyperelliptic threefolds}, JOURNAL = {Groups Geom. 
Dyn.}, FJOURNAL = {Groups, Geometry, and Dynamics}, VOLUME = {14}, YEAR = {2020}, NUMBER = {4}, PAGES = {1447--1454}, ISSN = {1661-7207}, MRCLASS = {Prelim}, MRNUMBER = {4186481}, DOI = {10.4171/ggd/587}, URL = {https://doi.org/10.4171/ggd/587}, } \bib{LCh}{book}{ AUTHOR = {Charlap, Leonard S.}, TITLE = {Bieberbach groups and flat manifolds}, SERIES = {Universitext}, PUBLISHER = {Springer-Verlag, New York}, YEAR = {1986}, PAGES = {xiv+242}, ISBN = {0-387-96395-2}, MRCLASS = {57S30 (22E40 53C30)}, MRNUMBER = {862114}, MRREVIEWER = {Kyung Bai Lee}, DOI = {10.1007/978-1-4613-8687-2}, URL = {https://doi.org/10.1007/978-1-4613-8687-2}, } \bib{Demleitner}{thesis}{ AUTHOR = {Demleitner, Andreas}, TITLE = {On Hyperelliptic Manifolds, PhD thesis University of Bayreuth}, FJOURNAL = {EPub Bayreuth, PhD thesis, University of Bayreuth}, YEAR = {2020}, PAGES = {0--201}, } \bib{frob}{article}{ AUTHOR = {Frobenius, Ferdinand Georg}, TITLE = {\"Uber lineare Substitutionen und bilineare Formen}, JOURNAL = {J. Reine Angew. Math.}, FJOURNAL = {Journal f\"ur die reine und angewandte Mathematik}, VOLUME = {84}, Date = {1877}, pages={1--63},} \bib{Lange}{article}{ AUTHOR = {Lange, Herbert}, TITLE = {Hyperelliptic varieties}, JOURNAL = {Tohoku Math. J. (2)}, FJOURNAL = {The Tohoku Mathematical Journal. Second Series}, VOLUME = {53}, YEAR = {2001}, NUMBER = {4}, PAGES = {491--510}, ISSN = {0040-8735}, MRCLASS = {14J30 (14J50)}, MRNUMBER = {1862215}, MRREVIEWER = {Miguel A. 
Barja}, DOI = {10.2748/tmj/1113247797}, URL = {https://doi.org/10.2748/tmj/1113247797}, } \bib{kodairamorrow}{book}{ author={Morrow, James}, author={Kodaira, Kunihiko}, title={Complex manifolds}, publisher={Holt, Rinehart and Winston, Inc., New York-Montreal, Que.-London}, date={1971}, pages={vii+192}, review={\MR{0302937}}, } \bib{miranda}{book}{ AUTHOR = {Miranda, Rick}, TITLE = {Algebraic curves and {R}iemann surfaces}, SERIES = {Graduate Studies in Mathematics}, VOLUME = {5}, PUBLISHER = {American Mathematical Society, Providence, RI}, YEAR = {1995}, PAGES = {xxii+390}, ISBN = {0-8218-0268-2}, MRCLASS = {14Hxx (14-01 30F99)}, MRNUMBER = {1326604}, MRREVIEWER = {R. F. Lax}, DOI = {10.1090/gsm/005}, URL = {https://doi.org/10.1090/gsm/005}, } \bib{R87}{article}{ author={Reid, Miles}, title={Young person's guide to canonical singularities}, conference={ title={Algebraic geometry, Bowdoin, 1985}, address={Brunswick, Maine}, date={1985}, }, book={ series={Proc. Sympos. Pure Math.}, volume={46}, publisher={Amer. Math. Soc., Providence, RI}, }, date={1987}, pages={345--414}, review={\MR{927963}}, } \bib{schlessinger}{article}{ AUTHOR = {Schlessinger, Michael}, TITLE = {Rigidity of quotient singularities}, JOURNAL = {Invent. Math.}, FJOURNAL = {Inventiones Mathematicae}, VOLUME = {14}, YEAR = {1971}, PAGES = {17--26}, ISSN = {0020-9910}, MRCLASS = {32G05}, MRNUMBER = {292830}, MRREVIEWER = {F. Oort}, DOI = {10.1007/BF01418741}, URL = {https://doi.org/10.1007/BF01418741}, } \bib{UchidaYoshi}{article}{ AUTHOR = {Uchida, K\^{o}ji}, AUTHOR = {Yoshihara, Hisao}, TITLE = {Discontinuous groups of affine transformations of {$C^{3}$}}, JOURNAL = {Tohoku Math. J. (2)}, FJOURNAL = {The Tohoku Mathematical Journal. Second Series}, VOLUME = {28}, YEAR = {1976}, NUMBER = {1}, PAGES = {89--94}, ISSN = {0040-8735}, MRCLASS = {57E30}, MRNUMBER = {400271}, MRREVIEWER = {F. A. 
Sherk}, DOI = {10.2748/tmj/1178240881}, URL = {https://doi.org/10.2748/tmj/1178240881}, } \end{biblist} {\tiny MATHEMATISCHES INSTITUT, UNIVERSIT\"AT BAYREUTH, 95440 BAYREUTH, GERMANY} {\scriptsize\emph{E-mail address: Ingrid.Bauer@uni-bayreuth.de}} \quad {\scriptsize\emph{Christian.Gleissner@uni-bayreuth.de}} \end{document}
\section{Introduction} \label{introduction} In this paper we present a method for deep learning of atmospheric radiative transfer from a few spectra observed by a ground-based or overhead sensor. This method could be used to measure atmospheric constituents, but we focus on using this method to convert a hyperspectral image from at-sensor radiance (the amount of light per wavelength measured at the sensor) to ground reflectance (the percent reflectance per wavelength), a process known as atmospheric compensation or atmospheric correction. Our method uses an autoencoder~\cite{bank2020autoencoders} similar to a denoising autoencoder, treating the atmosphere as noise and ground reflectance as the image. An autoencoder is a deep learning neural network that passes data through a series of layers that decrease in size, leading to a dimensionally smaller representation of the data (encoding), and then passes the data through a symmetric series of layers back to the original data shape (decoding). When trained well, the process learns a representation of the data in the reduced dimensional space so that the encoding stage removes noise and the decoding stage recovers the data in the original space~\cite{vincent2008extracting, vincent2010stacked, ruikai2019research, bank2020autoencoders}. A diagram of the architecture of our autoencoder is shown in Figure~\ref{AuotoencoderDiagram}. \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{AuotoencoderDiagram.png}} \caption{The autoencoder neural network used for converting from radiance to reflectance.
Details for the network architecture are included in the labels.} \label{AuotoencoderDiagram} \end{center} \vskip -0.2in \end{figure} While our deep learning autoencoder method does not outperform a naive regression method based on the universal mean principle in QUACC~\cite{Bernstein2012}, it is an important first step in applying the growing field of deep learning of physical principles to atmospheric compensation in hyperspectral imagery and remote sensing. We expect, based on the trajectories of other efforts in deep learning of physical phenomena, that better inclusion of physical principles in the architecture of the autoencoder would substantially improve the quality of output. For example, the inclusion of skip connections in the ResNet network enables layers that learn functions closer to the identity, leading to the ability to train much larger, more complex networks~\cite{he2016deep} with a smoother loss function that has fewer local minima~\cite{li2018visualizing}. All of our data and methods are provided open access\footnote{https://www.kaggle.com/code/billbasener/autoencoder-atmospheric-compensation/notebook}. A hyperspectral image is a digital image in which each pixel has more than the usual three color (red, green, blue) bands, often hundreds of bands across wavelengths, sufficient to get spectral information about the materials in each pixel. We focus on hyperspectral images that have bands at wavelengths from about 400nm to 2500nm - for comparison, the visual colors occur around 450nm (blue), 550nm (green) and 650nm (red) - and our spectra have 452 wavelengths. For hyperspectral images collected at these wavelengths, the measured light at the sensor is from sunlight, having passed through the atmosphere, reflected off materials, and passed again through some amount of atmosphere (which may be small for a ground-based sensor or significant if the sensor is on board an aircraft or satellite).
Each band is typically 2nm to 10nm wide, and the bands are contiguous across the wavelength range. The reflectance for a material is important because it is the result of the interaction of photons at different wavelengths with the resonant frequencies of molecular bonds (for the wavelengths above the visible range) and the interaction of photons and electrons moving between quantum states. Specifically, important information about the constituents and bonds present in a material can be computed from reflectance spectra, for example distinguishing between different polymers, or distinguishing talcum powder from powdered sugar from dangerous white powdered substances. The percentage of light that passes through the atmosphere (as opposed to being absorbed) is called transmittance, and varies with the wavelength of each band. A plot of transmittance for two different atmospheric models across our wavelength range with a spectral resolution (band width) of 5nm is shown in Figure~\ref{transmittance}. Part of our goal is to determine this transmittance from ground spectra measured by a sensor even when the reflectance of the ground material is unknown. \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.9\columnwidth]{transmittance.png}} \caption{Two plots of transmittance for two different atmospheric models created using MODTRAN.} \label{transmittance} \end{center} \vskip -0.2in \end{figure} The plot in Figure~\ref{flux} shows more components involved in the radiative transfer model. The Direct Solar (100km) gives the amount of sunlight measured at the top of the atmosphere. The main shape of this curve is from the blackbody radiation given the temperature of the sun. The Direct Solar (0km) is the amount of sunlight reaching the ground, which is the Direct Solar (100km) times the transmittance shown in Figure~\ref{transmittance}.
The Downward Diffuse (0km) is the amount of light per wavelength that reaches the ground after scattering in the atmosphere; this is the indirect illumination on an object that is in a shadow from the sun. The Upward Diffuse is the amount of upward light, which at 100km (top of atmosphere) is from atmospheric scattering and at 0km is from the blackbody radiation of the ground (which is insignificant given the wavelength range and ground temperature assumed in this model). \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.9\columnwidth]{flux.png}} \caption{Plots of the flux components (Direct Solar, Downward Diffuse, and Upward Diffuse) for an atmospheric model created using MODTRAN.} \label{flux} \end{center} \vskip -0.2in \end{figure} The at-sensor radiance is the Upward Diffuse (at the elevation of the sensor) plus the total illumination (Direct Solar + Downward Diffuse) times the percent reflectance per band of the material on the ground, per wavelength $\lambda$, \[ UD_\lambda + (DS_\lambda + DD_\lambda )Ref_\lambda = Rad_\lambda. \] There are nonlinear effects as well. Light that has passed through a leaf and reflected off the ground would have the leaf transmission times the ground reflectance in place of $Ref_\lambda$ in this equation. In the lower wavelengths, especially blue and below, photons will take multiple bounces/scatterings (collectively, ``haze'' in the image) in the atmosphere (which you can observe in the higher values for Upward/Downward Diffuse in Figure~\ref{flux}), but the amount of haze is highly dependent on constituents and the length of the path through the atmosphere; moreover, haze causes photons reflecting off material at one location to scatter in the atmosphere and enter the sensor at locations/directions for other pixels, resulting in an ``Upward Diffuse'' that varies across the image, comprised of a nonlinear mixture of nearby ground material spectra and atmospherics, rather than the ideal Upward Diffuse from the atmosphere alone shown in Figure~\ref{flux}.
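The linear model above can be sketched numerically. The flux values below are placeholders for three bands only, not MODTRAN output; the point is the forward computation of at-sensor radiance and its inversion back to reflectance when the flux terms are known.

```python
import numpy as np

# Placeholder per-band flux values (not MODTRAN output), three bands only:
UD = np.array([0.5, 0.8, 1.2])    # Upward Diffuse at the sensor
DS = np.array([10.0, 12.0, 9.0])  # Direct Solar reaching the ground
DD = np.array([2.0, 1.5, 0.5])    # Downward Diffuse at the ground

ref = np.array([0.30, 0.45, 0.20])  # true ground reflectance per band

# Forward model: Rad = UD + (DS + DD) * Ref
rad = UD + (DS + DD) * ref

# Atmospheric compensation when the flux terms are known: invert the model.
ref_recovered = (rad - UD) / (DS + DD)
assert np.allclose(ref_recovered, ref)
```

The difficulty addressed in the rest of the paper is that in practice the flux terms are unknown and must be estimated or learned.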
All of these values change with respect to atmospheric constituents, water vapor, CO$_2$, ozone, CH$_4$, aerosols, sun angle, fraction of sun and sky visible to each pixel (shadow from objects, terrain, clouds, etc.), sensor angle, angle and roughness of the ground material, and other factors. The MODTRAN software (http://modtran.spectral.com/modtran\_home) can simulate these effects if they are known, and provide a modeled at-sensor radiance for reflectance spectra, or a ground reflectance spectrum for a measured at-sensor radiance. The purpose of hyperspectral imaging is to perform spectroscopy writ large; that is, to be able to determine the materials in each pixel from the reflectance spectra for those pixels. As such, good atmospheric compensation is an essential step. The most accurate methods are usually either based on physics-based modelling with MODTRAN (MODerate resolution atmospheric TRANsmission, http://modtran.spectral.com/modtran\_home) such as FLAASH~\cite{Perkins2012} or using materials of known reflectance in the image, for example the empirical line method~\cite{mamaghani2018initial,gao2009atmospheric}, in which case it is usually preferable to have materials that are spectrally flat, for example a set of five panels which are 5\%, 30\%, 50\%, 80\% and 95\% reflectance across all wavelengths, which can be used to estimate a best fit regression line per wavelength to convert from radiance units to reflectance. However, all of these methods have some approximations and attempt to measure physical parameters. For example, a good ELM will estimate the Upward Diffuse as the intercept and the Direct Solar plus Downward Diffuse as the slope, but assume these are consistent values for every pixel in the image. FLAASH attempts to estimate the physical parameters from the image, even estimating water vapor content per pixel, and use a physics-based model to compute the ground reflectance from each at-sensor radiance measurement.
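The empirical line method described above can be illustrated with a small simulation. The panel reflectances follow the five flat panels mentioned in the text; the per-band slopes and intercepts are random stand-ins for the true illumination and path-radiance terms, not measured values.

```python
import numpy as np

# Five hypothetical flat panels of known reflectance (5%, 30%, 50%, 80%, 95%).
panel_ref = np.array([0.05, 0.30, 0.50, 0.80, 0.95])

rng = np.random.default_rng(3)
n_bands = 452
slope_true = rng.uniform(5.0, 50.0, size=n_bands)      # (DS + DD) analogue
intercept_true = rng.uniform(0.1, 2.0, size=n_bands)   # Upward Diffuse analogue

# Simulated panel radiances, shape (5 panels, n_bands):
# radiance = slope * reflectance + intercept, per band.
panel_rad = panel_ref[:, None] * slope_true + intercept_true

# Fit slope and intercept per band from the panel measurements.
slope_fit, intercept_fit = np.polyfit(panel_ref, panel_rad, deg=1)

# Invert a new radiance measurement back to reflectance.
rad = 0.42 * slope_true + intercept_true
ref_recovered = (rad - intercept_fit) / slope_fit
assert np.allclose(ref_recovered, 0.42)
```

Since `np.polyfit` accepts a 2-D `y`, all 452 per-band lines are fit in one call.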
A heuristic and approximate but surprisingly effective method for atmospheric correction is to make the assumption that the mean spectrum of a significantly large library of reflectance spectra will be consistent, and use this assumption to compute a single gain and offset that is applied per wavelength across the image. A method based on this assumption is QUACC (QUick Atmospheric Correction Code), in which a sample of 50 different spectra (called endmembers) is selected from the radiance image, usually iteratively so each new endmember is optimally different from the previous ones, and the mean of these 50 endmembers is determined. The least measured radiance value across the image per wavelength is assumed to be the upwelling radiance, and is thus subtracted from the image. Then the ratio between the ideal mean and the mean of the endmembers (after subtracting the estimated upwelling radiance) is computed, used as a ``gain'', and multiplied by every upwelling-subtracted radiance value to convert to reflectance. This is often implemented with additional heuristic improvements, such as removing spectra of mud or vegetation, as in QUACC~\cite{Bernstein2012}. There are other semi-heuristic methods, for example SHARC~\cite{Katkovsky2018}. The QUACC method is faster than physics-based methods and requires no manual input. In tests, it provides reflectance spectra that are within $\pm 15\%$ of FLAASH-generated reflectance spectra, and perhaps more importantly QUACC tends to generate reflectance spectra that retain the features of the true spectra, which is the most important factor for spectroscopy. A small to moderate variation in the magnitude of a spectrum is often not important. Materials are identified more from variation in reflectance at different bands, which indicates which bands are absorbing vs. reflecting, resulting from the constituents and molecular bonds present. The total reflectance (i.e.
total albedo) can vary with illumination amount, angle to the sun, surface roughness, and inconsistent calibration, none of which depend on the molecular bonds and constituents of the material. The primary questions for atmospheric compensation methods are whether they retain the features of the true reflectance and whether they avoid creating new features not present in the material. For this paper, we started with a set of about 5,000 reflectance spectra of known materials, each of which passed some basic quality check. We then took random samples of size 39, randomly selected a set of parameters for MODTRAN (solar zenith angle from 0-85 in increments of 5, random selection from the 6 possible atmospheric models, random selection from the 12 possible aerosol models) and created an associated set of 39 radiance spectra. For each set, we also computed the mean spectrum and added this as a $40^{\mathrm{th}}$ spectrum. This creates arrays that have even dimensions (number of spectra and number of bands), which are preferable for some Deep Learning methods. So our input data is a set of 40 radiance spectra each with 452 wavelengths, and our output data is a set of 40 reflectance spectra. In Figure~\ref{SampleRadAndRef} we show a plot of the 40 radiance spectra (top) and the associated reflectance spectra (bottom). We created a set of 10,000 such samples of 40 spectra. \begin{figure}[ht] \begin{center} \centerline{\includegraphics[width=0.8\columnwidth]{SampleRadAndRef.png}} \caption{At-sensor radiance for 39 spectra and the mean plotted in black (top plot) along with the true reflectance for these materials and the mean of their reflectance in black (bottom plot).
The data arrays for these plots (40 spectra by 452 bands each) are shown below as images.} \label{SampleRadAndRef} \end{center} \vskip -0.2in \end{figure} \section{Baseline Regression Method and Results} We create a baseline regression method for atmospheric correction based on the assumption that the mean spectrum of a significantly large library of reflectance spectra will be consistent; this mean is called the reference spectrum. This is the same assumption used in the QUACC~\cite{Bernstein2012} method, although we are using synthetically generated data, which has different distributions for the types of spectra present. Our baseline is not an approximation to QUACC, but a baseline for comparison to the Deep Learning methods that is simple and based on an accepted heuristic approximation. The mean spectra for each of ten randomly selected sets of 40 spectra are plotted in Figure~\ref{meanReflectanceSpectra}. This assumption is very approximately true, but the approximation seems more consistent using 120 spectra per sample, as shown in the bottom plot of this figure. \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{meanReflectanceSpectra.png}} \caption{Ten mean spectra, each of which is the mean from a random selection of 40 reflectance spectra (top) and 120 spectra (bottom).} \label{meanReflectanceSpectra} \end{center} \vskip -0.2in \end{figure} Physics-based support for the general shape of the reference reflectance spectrum is provided in~\cite{Bernstein2012} as follows: \begin{quote} The general shape of the reference reflectance spectrum has a simple physical origin. The decrease toward the longwavelength edge arises because the molecular constituents of materials have relatively strong NIR vibrational absorption features that increase in strength with increasing wavelength.
The decrease toward the short-wavelength edge arises because the molecular constituents have strong electronic absorption features that increase in strength with decreasing wavelength. Although we normalize the peak of this curve to unity for reasons discussed later, it is important to note that the peak average reflectance is $\sim 0.4$... \end{quote} From the assumption that a sample of spectra will have a fixed mean reflectance $m$, we can simply take this mean divided by the mean measured radiance spectrum to obtain a ``gain'' multiplication factor for each band. An approximate atmospheric compensation can be done by multiplying this gain times each measured radiance value, per band. The output from the baseline method is shown in Figure~\ref{test_results_regression_method}. Some of the features in the true reflectance spectra column (third, right-hand column) can be observed in the spectra predicted using the baseline method with the standard mean reflectance assumption (center column). There are regions in the predicted column where the spectra have values near zero - these are not errors, but regions where the radiance has near-zero values. Effectively, the water in the atmosphere has near-zero transmittance in these wavelength regions, and there is insufficient signal for prediction. These regions are removed from all hyperspectral images created with solar illumination. Wavelength regions around 0.9 and 1.2 microns are also often removed because some atmospheres will also have very low transmittance in these regions, and these regions can be observed particularly strongly in the fourth row. Figure~\ref{test_results_regression_method_bbr} shows these same spectra but with the water bands removed (replaced with a straight line in the plots) to aid in visually comparing spectra.
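The baseline gain computation can be sketched in a few lines. The reference mean here is a flat placeholder rather than the mean of an actual reflectance library, and the radiance sample is random stand-in data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed reference mean reflectance per band (the "universal mean"
# assumption); a flat placeholder here, in practice the mean of a large
# reflectance library.
n_bands = 452
ref_mean = np.full(n_bands, 0.4)

# A stand-in sample of 40 measured radiance spectra (rows).
radiance = rng.uniform(1.0, 50.0, size=(40, n_bands))

# Per-band gain: reference mean divided by the mean measured radiance.
gain = ref_mean / radiance.mean(axis=0)

# Approximate compensation: the gain is applied per band to every spectrum.
reflectance_pred = radiance * gain

# By construction, the predicted spectra average to the reference mean.
assert np.allclose(reflectance_pred.mean(axis=0), ref_mean)
```

Note that this sketch forces only the sample mean to match the reference; individual spectra remain scaled radiance shapes, which is why the method preserves spectral features but not absolute magnitudes.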
\begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{test_results_regression_method.png}} \caption{Three samples of spectra in modeled radiance units (first column), converted to reflectance using the heuristic gain from the baseline method (center column), and true reflectance (last column). The water absorption regions (just below 1.5 and just below 2.0 microns) are not errors but locations where water absorption in the atmosphere effectively blocks all light.} \label{test_results_regression_method} \end{center} \vskip -0.2in \end{figure} \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{test_results_regression_method_bbr.png}} \caption{Predicted reflectance using the heuristic gain from the baseline method (left-hand column), and true reflectance (right-hand column). The water absorption regions have been removed (replaced with a straight line in the plots) to aid in visual comparison of features. Visual inspection shows that many of the features in the true reflectance are present in the predicted reflectance.} \label{test_results_regression_method_bbr} \end{center} \vskip -0.2in \end{figure} To measure the effectiveness of this method we randomly selected 500 sets of 39 spectra and computed the correlation between each true reflectance and predicted reflectance (after removing bands in the $0-0.45$, $1.3-1.5$, $1.75-2.05$, and $2.4-3$ micron regions); the mean correlation of these 19,500 spectra was 0.87 with a standard deviation of 0.29. Of these spectra, $69\%$ of the predictions were within $15\%$ of the true spectrum. These results are not strong, and would not be sufficient without additional modifications like those in QUACC, but they are promising enough to suggest that a deep learning method may be viable.
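The evaluation just described (band removal followed by a per-spectrum Pearson correlation) can be sketched as follows. The exclusion intervals in microns come from the text; the function names and array shapes are our assumptions.

```python
import numpy as np

# Wavelength intervals (microns) excluded before scoring, as listed in the text.
EXCLUDED = [(0.0, 0.45), (1.3, 1.5), (1.75, 2.05), (2.4, 3.0)]

def keep_mask(wavelengths):
    """Boolean mask of bands that survive the exclusion intervals."""
    keep = np.ones_like(wavelengths, dtype=bool)
    for lo, hi in EXCLUDED:
        keep &= ~((wavelengths >= lo) & (wavelengths <= hi))
    return keep

def mean_correlation(true_refl, pred_refl, wavelengths):
    """Mean Pearson correlation between matching spectra, excluded bands removed.

    true_refl, pred_refl: (n_spectra, n_bands) arrays.
    """
    keep = keep_mask(wavelengths)
    corrs = [np.corrcoef(t[keep], p[keep])[0, 1]
             for t, p in zip(true_refl, pred_refl)]
    return float(np.mean(corrs))
```

Applied to 500 random sets of 39 spectra each, this yields the 19,500 correlations whose mean and standard deviation are reported above.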
\section{Autoencoder Method and Results} Because the simple baseline regression method works reasonably well, it can be expected that, with a proper architecture and proper training, a deep learning method could improve on its accuracy. Perhaps the model could learn the baseline regression plus adjustments based on common types of variation in the atmosphere and illumination. Because the input and output have the same shape, we decided to try an autoencoder network, treating the atmosphere as noise (see~\cite{bank2020autoencoders}). The input to our autoencoder is the $40\times452$ array of 40 spectra in radiance units, each of which has 452 bands. The output is an array of the same shape, number of spectra, and bands, but in reflectance units. The architecture for the network is shown in Figure~\ref{AuotoencoderDiagram}, constructed in Keras and trained for 50 epochs in batches of 256 with the adam optimizer and binary\_crossentropy loss. All activation functions are ReLU except the final decoder layer, which is sigmoid, so that the $0-1$ output of the sigmoid neurons matches the $0-1$ range of reflectance values. The loss curve from optimizing the network is shown in Figure~\ref{Loss_curve} (using $33\%$ of the data for validation). Example output showing original radiance, predicted reflectance, and true reflectance is shown in Figure~\ref{test_results}. The predicted reflectance of the autoencoder, along with the true reflectance with the standard water bands removed, is shown in Figure~\ref{test_results 2_bbr}.
\begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{Loss_curve_2.png}} \caption{The loss curve from training the autoencoder.} \label{Loss_curve} \end{center} \vskip -0.2in \end{figure} \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{test_results_2.png}} \caption{Three samples of spectra in modeled radiance units (first column), converted to reflectance using the autoencoder (center column), and true reflectance (last column). The water absorption regions (just below 1.5 and just below 2.0 microns) are not errors but locations where water absorption in the atmosphere effectively blocks all light.} \label{test_results} \end{center} \vskip -0.2in \end{figure} \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{test_results_2_bbr.png}} \caption{Predicted reflectance using the autoencoder (left-hand column), and true reflectance (right-hand column). The water absorption regions have been removed (replaced with a straight line in the plots) to aid in visual comparison of features. Observe that, especially in the 0.45 to 1.25 micron region, the autoencoder introduces features that are not present in the true reflectance spectra.} \label{test_results 2_bbr} \end{center} \vskip -0.2in \end{figure} To measure the effectiveness of the autoencoder in a manner comparable to our baseline method, we again randomly selected 500 sets of 39 spectra from the validation set and computed the correlation between each true reflectance and predicted reflectance (after removing bands in the $0-0.45$, $1.3-1.5$, $1.75-2.05$, and $2.4-3$ micron regions); the mean correlation of these 19,500 spectra was 0.52 with a standard deviation of 0.44. Of these spectra, $34\%$ of the predictions were within $15\%$ of the true spectrum. These results are significantly worse than the baseline method.
\section{Conclusions} It is clear that our autoencoder performed poorly in comparison to the baseline method, as summarized in Table~\ref{results_table}. \begin{table}[t] \caption{Evaluation metrics for the baseline regression and autoencoder methods.} \label{results_table} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lcc} \toprule Metric & Regression & Autoencoder \\ & Result & Result \\ \midrule Mean Corr & 0.87 & 0.52 \\ Std Corr & 0.29 & 0.44 \\ Within $\pm 15\%$ & $69\%$ & $34\%$ \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} It seems likely that the CNN architecture prevents the network from exploiting wavelength information. Specifically, the architecture we used follows a standard framework for images. In images, a combination of pixels forming a nose/ear/wheel/etc.\ is meaningful at any location in the image, and CNNs are able to leverage this translation invariance. But in spectroscopy, the meaning of a combination of values depends on the location of the feature, although a network might learn the shape of features in general and use both shape and location in final layers. We tried a significant number of modifications to the architecture, including replacing some or all Conv layers with fully connected dense layers, modifying the amount of dropout, and using different optimization methods and loss functions (for example, rmsprop in place of adam and/or mean squared error in place of binary crossentropy). We also tried building the network to use just the mean spectrum (to learn an approximation to the baseline regression concept), and building convolution layers with different rectangular windows. However, none of these variations improved on the autoencoder presented in this section in any manner worth reporting. It does seem that the architecture should learn a function for removing the atmosphere such that the same function is applied to every row (spectrum) in the data array.
Our approach was to use an autoencoder and treat the atmosphere as noise, since this is a well-developed application of the method. But given that the baseline regression method is a reasonable approximation, a regression neural network might work better, or some combination that leverages the known heuristics and physics together with a deep learning approach. We believe the physical explanation of the data and the reasonable effectiveness of the baseline regression method strongly suggest that a deep learning approach to atmospheric compensation, perhaps incorporating some heuristics and physics, could be completely automated and highly effective, with the potential to outperform currently available methods such as QUACC and FLAASH.
\section{Introduction} In recent years, the performance of automatic speech recognition~(ASR) has been greatly improved by sequence-to-sequence modeling, such as connectionist temporal classification~(CTC)~\cite{graves2006ctc}, recurrent neural network transducer~(RNN-T)~\cite{battenberg2017rnnt}, and attention-based encoder-decoder~(AED)~\cite{hori2017joint}. Much of the earlier research has focused on autoregressive (AR) modeling, which generates the token sequence using a left-to-right probabilistic chain rule. Despite their great performance, such AR models require $L$-step incremental model calculations to generate $L$ tokens, resulting in high inference latency and considerable computational costs. In contrast, non-autoregressive (NAR) modeling generates the token sequence in a constant number of steps and removes the chain-rule assumption. CTC~\cite{graves2006ctc} plays an essential role in recent NAR research. Modern NAR approaches outperform CTC by leveraging alignments~(align-based) or the output token sequence~(token-based). Based on the joint CTC/attention architecture~\cite{hori2017joint}, Mask-CTC~\cite{higuchi2020mask} utilizes the (conditional) masked language model~((C)MLM) decoder to refine the CTC token sequence. \cite{higuchi2021improved} proposes two auxiliary tasks to solve the length prediction problem that occurs in Mask-CTC. From another point of view, CTC alignment shows its advantage for building NAR models in Align-Refine~\cite{chi2021alignrefine}, CASS-NAT~\cite{fan2021improved} and AL-NAT~\cite{wang2022alignmentlearning}. Moreover, the self-supervised model wav2vec2.0~\cite{baevski_2020_wav2vec_20_framework} has achieved promising results with CTC. However, there are two major challenges remaining for NAR modeling: First, NAR models converge slowly and perform poorly compared to state-of-the-art~(SOTA) AR models~\cite{higuchi2020mask,deng2022improving}.
Second, although NAR models are often favored in resource-constrained situations for their fast inference speed and high accuracy~\cite{higuchi2020mask}, the large model scale and computational cost limit the application of NAR modeling. Knowledge distillation~(transfer learning) is typically used to address such a problem by teaching a smaller student model~\cite{gou2021knowledge}. To be specific, the student aims to mimic the soft targets provided by the well-trained teacher using the Kullback-Leibler divergence~(KLD)~\cite{huang2018knowledgea,takashima2019investigation,munim2019sequencelevel}. Nevertheless, when applying knowledge distillation to non-autoregressive ASR, the poor NAR teacher limits the improvement. In this paper, we propose a novel architecture that transfers and distills knowledge from an autoregressive~(AR) teacher to a non-autoregressive~(NAR) student, together with a beam-search decoding method, to boost the performance of non-autoregressive modeling. Firstly, we introduce a beam-search decoding method to enlarge the search space for the (conditional) masked language model~((C)MLM) decoder~\cite{ghaz2019mask}. Then, we extend the knowledge distillation technique by transferring the AR teacher's knowledge to the NAR student at two distillation levels, thereby improving the NAR student's performance. The encoder distillation is conducted following our previous setup~\cite{huang2018knowledgea}. For the decoder distillation, we develop frame- and sequence-level distillation from the attention-based autoregressive model into Mask-CTC. The distillation loss is customized for token-based NAR models, so that the NAR decoder can benefit from the AR decoder. The rest of the paper is organized as follows: In Section~\ref{sec:related}, the attention-based autoregressive model and non-autoregressive Mask-CTC are briefly introduced.
In Section~\ref{sec:method}, we present the proposed beam search method for Mask-CTC and the knowledge distillation from AR to NAR ASR. In Section~\ref{sec:expr}, experimental results and analysis are given on the Mandarin AISHELL-1 and English LibriSpeech datasets. Finally, the conclusion is drawn in Section~\ref{sec:conc}. \section{Autoregressive and Non-autoregressive ASR} \label{sec:related} Basically, end-to-end ASR models map speech features $\boldsymbol{X} = \xcdot{\boldsymbol{x}}{T}, \boldsymbol{x}_t \in \mathbb{R}^F$ to a token sequence $\boldsymbol{y} = \xcdot{y}{L}, y_l \in \mathcal{U}$, where $F$ is the feature dimension and $\mathcal{U}$ denotes the vocabulary set. Traditional attention-based autoregressive~(AR) ASR models~\cite{vaswani2017attention,gulati2020conformer} first encode speech features $\boldsymbol{X}$ into a hidden representation $\boldsymbol{H}$: $\boldsymbol{H} = \text{Encoder} (\boldsymbol{X})$, and then compose it with previous tokens $\boldsymbol{y}_{<l}$ to estimate the posterior $p(y_l | \boldsymbol{y}_{<l}, \boldsymbol{X})$: \begin{align} p_{\text{ar}}(y_l | \boldsymbol{y}_{<l}, \boldsymbol{H}) &= \text{Decoder}(\boldsymbol{y}_{<l}, \boldsymbol{H}) \label{eq:attention}, \end{align} and the whole sequence probability is \begin{equation} p_{\text{ar}}(\boldsymbol{y} | \boldsymbol{H}) = p_{\text{ar}}(y_1 | \boldsymbol{H}) \prod_{l=2}^{L} p_{\text{ar}}(y_l | \boldsymbol{y}_{<l}, \boldsymbol{H}). \end{equation} During inference, the AR model generates the hypothesis $\hat{\boldsymbol{y}}$ token by token. Connectionist temporal classification~(CTC)~\cite{graves2006ctc} is one of the earliest non-autoregressive~(NAR) methods; it introduces a many-to-one function $\eta$ from the frame-level alignment $\boldsymbol{Z} = \xcdot{z}{T}, z_t \in \mathcal{U} \cup \{\text{\nofont{blank}}\}$ to the token sequence $\boldsymbol{y}$, by merging repeated labels and removing the $\text{\nofont{blank}}$ in $\boldsymbol{Z}$.
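The many-to-one map $\eta$ (merge runs of identical labels, then delete blanks) has a direct implementation. The sketch below is illustrative; the integer token ids and the choice of `0` for the blank symbol are our assumptions.

```python
from itertools import groupby

BLANK = 0  # assumed id for the <blank> symbol

def ctc_collapse(alignment):
    """The CTC map eta: frame-level alignment Z -> token sequence y.

    First merge consecutive runs of identical labels, then drop blanks,
    so e.g. [a, a, blank, a, b, b] -> [a, a, b].
    """
    merged = [label for label, _ in groupby(alignment)]
    return [label for label in merged if label != BLANK]
```

Note the order matters: merging before blank removal is what lets a blank separate two genuine repetitions of the same token.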
The sequence probability is represented as: \begin{align} \label{eq:ctc} p_{\text{ctc}}(\boldsymbol{y} | \boldsymbol{H}) = \sum_{\boldsymbol{Z} \in \eta^{-1}(\boldsymbol{y})} \prod_t p_{\text{ctc}}(z_t | \boldsymbol{H}), \end{align} where $\eta$ is a many-to-one function from $\boldsymbol{Z}$ to $\boldsymbol{y}$. During inference, greedy CTC predicts the alignment by selecting the token with the highest probability at each step. Mask-CTC~\cite{higuchi2020mask}, which is a popular instantiation of NAR ASR, is actually a refinement of CTC results via the conditional masked language model~(MLM)~\cite{ghaz2019mask}. During training, the ground-truth tokens $\boldsymbol{y}$ are randomly replaced by the special token \textless\nofont{MASK}\textgreater~, and the MLM decoder predicts masked tokens $\boldsymbol{y}_{\text{mask}}$, conditioned on the observed tokens $\boldsymbol{y}_{\text{obs}} = \boldsymbol{y} \setminus \boldsymbol{y}_{\text{mask}}$: \begin{align} \label{eq:mlm} p_{\text{mlm}} (\boldsymbol{y}_{\text{mask}} | \boldsymbol{y}_{\text{obs}}, \boldsymbol{H}) = \prod_{y \in \boldsymbol{y}_{\text{mask}}} p_{\text{mlm}} (y | \boldsymbol{y}_{\text{obs}}, \boldsymbol{H}). \end{align} During inference, the output is initialized by CTC greedy decoding, and low-confidence tokens are substituted by \textless\nofont{MASK}\textgreater~ based on a pre-defined threshold $p_{\text{thr}}$.
After that, masks are filled using the easy-first algorithm: all masks are filled in $\lceil N/K \rceil$ iterations, where $N$ denotes the total number of \textless\nofont{MASK}\textgreater~ tokens and each iteration predicts the top-$k$ tokens with the highest confidence guided by the MLM: \begin{align} \hat{\boldsymbol{C}} &= \mathop{\arg\max}_{\substack{\boldsymbol{C} \subset \boldsymbol{y}_{\text{mask}}, \\ |\boldsymbol{C}| = k}} \prod_{c \in \boldsymbol{C}} p_{\text{mlm}} (c | \boldsymbol{y}_{\text{obs}}, \boldsymbol{H}), \label{eq:infer_mlm1} \end{align} where $k = \begin{cases} N \mod K, &\text{the last iteration} \\ K, &\text{otherwise} \end{cases}$, $\boldsymbol{C}$ is the candidate set of \textless\nofont{MASK}\textgreater~ tokens and $\ \what{\boldsymbol{y}'_{\text{obs}}} = \boldsymbol{y}_{\text{obs}} + \hat{\boldsymbol{C}}\ $ is the updated result after mask filling. The joint CTC/attention architecture~\cite{hori2017joint,higuchi2020mask} is widely used in modern AR and NAR ASR models, with a multi-task learning based loss function: \begin{align} \label{eq:joint_loss} \mathcal{L}_{\text{jca}} = \alpha \mathcal{L}_{\text{ctc}} + (1-\alpha) \mathcal{L}_{\text{att}}, \end{align} where $\alpha \in [0,1]$ is a hyperparameter, $\mathcal{L}_{\text{att}} = \mathcal{L}_{\text{ar}}$ for AR ASR, and $\mathcal{L}_{\text{att}} = \mathcal{L}_{\text{mlm}}$ for NAR ASR, respectively. \section{Proposed Methods} \label{sec:method} In this section, we introduce: (1) the proposed beam search method for NAR, (2) the distillation architecture transferring knowledge from AR to NAR ASR. \subsection{Beam Search for NAR ASR} \label{sec:beam_search} Inspired by joint and rescoring decoding~\cite{hori2017joint}, we design a beam-search decoding method to enlarge the search space for the MLM decoder. The procedure is shown in Algorithm~\ref{alg:beam}. $\Omega_1$ is a sorted queue to be updated at the beginning of one iteration, $\Omega_0$ stores the final $\Omega_1$ after one iteration.
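The easy-first filling loop described above (the greedy, beam-size-1 case) can be sketched as follows. The `mlm_probs` callback is a stand-in for the MLM decoder and is our assumption; the iteration count and per-iteration budget follow the $\lceil N/K \rceil$ schedule in the text.

```python
import math

def easy_first_fill(y, mlm_probs, K, MASK="<MASK>"):
    """Fill all <MASK> positions in ceil(N/K) iterations.

    mlm_probs(y) must return {position: (best_token, prob)} for every
    currently masked position, conditioned on the observed tokens.
    Each iteration commits the k most confident predictions; the last
    iteration takes whatever masks remain (k = N mod K when nonzero).
    """
    y = list(y)
    N = sum(tok == MASK for tok in y)
    for _ in range(math.ceil(N / K)):
        remaining = sum(tok == MASK for tok in y)
        k = min(K, remaining)
        preds = mlm_probs(y)  # {pos: (token, prob)}
        top = sorted(preds.items(), key=lambda kv: kv[1][1], reverse=True)[:k]
        for pos, (token, _) in top:
            y[pos] = token
    return y
```

The beam-search variant of Algorithm~1 generalizes this by keeping the $B$ best partial fillings per iteration instead of committing a single one.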
During each iteration, a $B$-size beam is preserved, and the number of updated tokens is fixed and computed by $k$~(i.e. $|\what{y_{\text{obs}}^{i+1}}| = |\what{y_{\text{obs}}^{i}}| + k$). Top-$B$ candidates are selected according to the log-domain posterior probability and Equation~\ref{eq:infer_mlm1}. \begin{algorithm} \caption{Beam Search on NAR MLM Decoding} \label{alg:beam} $\hat{\boldsymbol{y}} \leftarrow$ greedy CTC \; $\hat{\boldsymbol{y}}_{\text{mask}}$ and $\hat{\boldsymbol{y}}_{\text{obs}}^0$ by $p_{thr}$ \; $\Omega_0 \leftarrow \{ \hat{\boldsymbol{y}}_{\text{obs}}^0\}$: a list to store accepted hypotheses \; \For{each iteration $i = 1,2,\cdots,\lceil N / K \rceil$}{ $\Omega_1 \leftarrow \varnothing$: $\Omega_1$ is a sorted queue of length $B$ \; $k = \begin{cases} N \mod K, &\text{the last iteration,} \\ K, &\text{otherwise.} \end{cases}$ \; \For{$\boldsymbol{y}_{\text{obs}} \in \Omega_0$}{ Get top-$B$ candidate sets $\what{\boldsymbol{y}_{\text{obs}}^{'1}}, \cdots, \what{\boldsymbol{y}_{\text{obs}}^{'B}}$ by the posterior in Equation~\ref{eq:infer_mlm1} \; Push $\what{\boldsymbol{y}_{\text{obs}}^{'1}}, \cdots, \what{\boldsymbol{y}_{\text{obs}}^{'B}}$ into $\Omega_1$ \; } $\Omega_0 = \Omega_1$ \; } \Return $\mathop{\arg\max}\limits_{\hat{\boldsymbol{y}} \in \Omega_0} \hat{\boldsymbol{y}}$ \; \end{algorithm} \subsection{Knowledge Transfer and Distillation from Autoregressive to Non-Autoregressive ASR} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{KD.pdf} \caption{Overview of proposed knowledge distillation from autoregressive to non-autoregressive ASR.} \label{fig:model} \end{figure} As previously stated, knowledge distillation performance on NAR is constrained owing to the poor performance of the NAR teacher. Here, we propose knowledge transfer and distillation from autoregressive~(AR) to non-autoregressive~(NAR) ASR, pushing the limits of NAR.
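Since the teacher is frozen during training, minimizing the KLD between teacher and student output distributions is equivalent to minimizing a cross-entropy (the $P \log P$ term is a constant). A minimal NumPy sketch of such a frame-level criterion follows; the array shapes and function name are our assumptions, not the paper's implementation.

```python
import numpy as np

def frame_level_kd_loss(teacher_probs, student_probs, eps=1e-12):
    """Frame-level distillation: - sum_t sum_c P_t(c) log Q_t(c).

    teacher_probs, student_probs: (T, |U|) arrays of per-frame posteriors
    over the vocabulary (each row sums to 1).  The teacher-entropy term
    P log P is dropped because the teacher is frozen, so minimizing this
    cross-entropy is equivalent to minimizing KLD(P, Q).
    """
    return float(-np.sum(teacher_probs * np.log(student_probs + eps)))
```

The loss is minimized (and equals the teacher's entropy) exactly when the student matches the teacher's distribution at every frame.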
Firstly, we introduce two types of knowledge distillation techniques based on the Kullback-Leibler divergence~(KLD): $KLD(P, Q) = \sum_i P_i \log (P_i / Q_i)$, where $P, Q$ are the teacher and student output distributions, respectively. \textbf{Frame-level knowledge distillation} as a basic distillation criterion is formulated as follows: \begin{align} \label{eq:frame_kd} \loss_{\text{F-KD}} &= - \sum_{t=1}^T \sum_{c \in \mathcal{U}} P_t(c) \log Q_t(c), \end{align} where $P_t(c)$ and $Q_t(c)$ are the posterior probabilities of token $c$ at timestamp $t$ for the teacher $P$ and the student $Q$. $\boldsymbol{H}$, $\boldsymbol{y}_{\text{obs}}$, and $\boldsymbol{y}_{<t}$ are the conditions of the above probabilities, but omitted for simplicity. $P_t(c) \log P_t(c)$ is omitted in computing the KLD loss since the teacher is frozen during training. \textbf{Sequence-level knowledge distillation} is another distillation criterion: \begin{align} \label{eq:seq_kd} \loss_{\text{S-KD}} = - \sum_{\what{\boldsymbol{y}^i} \in \scalebox{1.2}{$\tau$}} P(\what{\boldsymbol{y}^i}) \log Q(\what{\boldsymbol{y}^i}) \end{align} where $\what{\boldsymbol{y}^i}$ is a hypothesis from the teacher model and $\scalebox{1.2}{$\tau$}$ is the set of all possible sequences, with omissions similar to Equation~\ref{eq:frame_kd}. Using such sequence-level knowledge distillation directly is unaffordable, as we would be approximating an exponentially-sized sequence distribution $\scalebox{1.2}{$\tau$}$. Similar to MWER~\cite{prabhavalkar2017minimum} training, an N-best candidate set $\Omega$ is obtained by beam search, and then $P(\what{\boldsymbol{y}^i})$ can be approximated by: \begin{align} \label{eq:seq_kd_approx} P(\what{\boldsymbol{y}^i}) \simeq \frac{P(\what{\boldsymbol{y}^i})} {\sum_{\what{\boldsymbol{y}^j} \in \Omega} P(\what{\boldsymbol{y}^j})}.
\end{align} We then obtain the knowledge distillation loss: \begin{align} \label{eq:loss_kd} \mathcal{L}_{\text{KD}} = \beta_F \loss_{\text{F-KD}} + \beta_S \loss_{\text{S-KD}}, \end{align} where $\beta_F, \beta_S$ are hyper-parameters for the frame-level knowledge distillation loss $\loss_{\text{F-KD}}$ and the sequence-level loss $\loss_{\text{S-KD}}$, respectively. As shown in Figure~\ref{fig:model}, the proposed knowledge distillation methods are divided into two parts: the first part is the distillation after the encoder, and the second part is the distillation after the decoder. The encoder distillation $\mathcal{L}_{\text{KD}}^{\text{Encoder}}$ is done after the linear layer of the encoder, with $\loss_{\text{F-KD}}$ and $\loss_{\text{S-KD}}$ similar to the literature~\cite{huang2018knowledgea}. The decoder distillation is set up as follows. For frame-level distillation, only $\boldsymbol{y}_{\text{mask}}$ positions are selected, so the objective function is normalized by the number of \textless\nofont{MASK}\textgreater~ tokens: \begin{align} \label{eq:loss_lkd_dec} \loss_{\text{F-KD}}^{\text{Decoder}} = - \sum_{t \in \boldsymbol{y}_{\text{mask}}} \sum_{c\in \mathcal{U}} \frac{P_t(c | \boldsymbol{y}_{<t}, \boldsymbol{H})}{|\boldsymbol{y}_{\text{mask}}|} \log Q_t(c | \boldsymbol{y}_{\text{obs}}, \boldsymbol{H}). \end{align} For sequence-level distillation, the approximate probability $P'$ from the N-best set $\Omega$ is used: \begin{align} \label{eq:loss_skd_dec} \loss_{\text{S-KD}}^{\text{Decoder}} = - \sum_{\hat{\boldsymbol{y}} \in \Omega} \frac{P'(\hat{\boldsymbol{y}})}{|\boldsymbol{y}_{\text{mask}}|} \log Q( \boldsymbol{y}_{\text{mask}} | \boldsymbol{y}_{\text{obs}}, \boldsymbol{H}).
\end{align} The final loss is then: \begin{align} \label{eq:final_loss} \mathcal{L} = \mathcal{L}_{\text{jca}}^{\text{student}} + \gamma_{\text{enc}} \mathcal{L}_{\text{KD}}^{\text{Encoder}}+ \gamma_{\text{dec}} \mathcal{L}_{\text{KD}}^{\text{Decoder}}, \end{align} where $\gamma_{\text{enc}}, \gamma_{\text{dec}}$ are weight coefficients for encoder and decoder knowledge distillation. \section{Experiments} \label{sec:expr} \input{table} \subsection{Datasets} Our experiments are conducted on the Mandarin AISHELL-1~\cite{bu2017AISHELL1} and the English LibriSpeech~\cite{panayotov2015LibriSpeech} corpora. AISHELL-1 contains a 150h training set, with a development~(dev) and test set for evaluation, while LibriSpeech has a 960h training set, with test-clean/other~(test~c/o) used for tests. We report the character error rate~(CER) on AISHELL-1 and the word error rate~(WER) on LibriSpeech. \subsection{Model description} For acoustic feature extraction, 80-dimensional mel filterbank~(Fbank) features are extracted with global-level cepstral mean and variance normalization~(CMVN). For data augmentation, speed perturbation is applied only for AISHELL-1, and SpecAugment~\cite{park2019specaugment} for both datasets. For text modeling, 5000 byte-pair encoding~(BPE)~\cite{kudo2018sentencepiece} subword units are adopted for English, and 4233 characters for Mandarin. The baseline follows the recipe of ESPnet v2~\cite{watanabe2018espnet}: a 12-layer conformer encoder with four times down-sampling and a 6-layer transformer decoder. The weight $\alpha$ for the CTC module is fixed to $0.3$. For knowledge transfer and distillation, we first train a new NAR student model from scratch with $\loss_{\text{F-KD}}$ for 80 epochs. The hyper-parameters are set to $\beta_F = 1.0, \beta_S = 0, \gamma_{\text{enc}} = 0.5,\gamma_{\text{dec}} = 0.3$.
Then we fine-tune the distillation procedure by adding $\loss_{\text{S-KD}}$, with the tuning parameters set to $\beta_F = 1.0, \beta_S = 1.0, \gamma_{\text{enc}} = 0.5,\gamma_{\text{dec}} = 0.5$ for 20 epochs. In Equations~\ref{eq:seq_kd} and~\ref{eq:seq_kd_approx}, we use beam size $|\Omega| = 10$, which is consistent with the decoding hyper-parameters of the AR model. \begin{table}[ht] \caption{Model hyper-parameters for different AR and NAR Conformer scales at L, M, S and XS.} \label{tab:model_size} \centering \begin{tabular}{c|c c c c} \Hline Model & L & M & S & XS \\ \hline \hline \#Params~(M) &115.0&46.8 &12.4 & 6.5 \\ Attention Dim & 512 & 256 & 128 & 128\\ Attention Heads & 8 & 4 & 4 & 4 \\ Inner-dim &2048 &2048 &1024 & 256\\ \Hline \end{tabular} \end{table} Different NAR student model sizes are explored in Table~\ref{tab:model_size}, identified as large~(L), medium~(M), small~(S), and extremely small~(XS). The AR teacher model uses the L size for LibriSpeech and the M size for AISHELL-1. In the inference stage, no language model is used. Model parameters are averaged over the last 5 checkpoints. For autoregressive models, joint CTC/attention one-pass decoding~\cite{hori2017joint} is used with beam size 10, and the score interpolation of CTC is 0.3. For non-autoregressive Mask-CTC decoding, we follow the beam decoding method in Section~\ref{sec:beam_search}, with beam $B = 10$, threshold $p_{\text{thr}} = 0.99$, and $K = 2$, for both the AISHELL-1 and LibriSpeech corpora. \subsection{Results with the NAR Beam Decoding} As proposed in Section~\ref{sec:beam_search}, we first evaluate the beam search performance, with the real time factor~(RTF), in Table~\ref{tab:expr_beam}. RTF is computed using an Intel Xeon E5-2690 CPU with a single core on the test set. Consistent with the literature~\cite{higuchi2020mask}, the NAR~(M) model is more than 10x faster than the AR~(M) model~\cite{hori2017joint}, whose RTF is 0.58 (0.31 for AR~(S)).
Without too much degradation of inference speed~(1.5x as slow as `Beam1'), the beam decoding method achieves better performance than the greedy~(Beam1) one, with a 5\%$\sim$9\% relative CER reduction on the test set. As the beam size $B$ grows, the rate of improvement decreases. \begin{table}[ht] \caption{Non-autoregressive Mask-CTC Performance (CER) on AISHELL-1 corpus. Real time factor~(RTF) is reported for the test set.} \label{tab:expr_beam} \centering \begin{tabular}{l ccc} \Hline Decoding & Dev~(\%) & Test~(\%) & RTF \\ \hline \multicolumn{4}{l}{\textit{Non-Autoregressive~(M)}} \\ \quad Beam1 & 5.4 & 6.7 & 0.051 \\ \quad Beam5 & 5.4 & 6.4 & 0.064 \\ \quad Beam10 & 5.4 & 6.4 & 0.072 \\ \multicolumn{4}{l}{\textit{Non-Autoregressive~(XS)}} \\ \quad Beam1 & 7.6 & 9.2 & 0.017 \\ \quad Beam5 & 7.3 & 8.6 & 0.023 \\ \quad Beam10 & 7.2 & 8.5 & 0.027 \\ \Hline \end{tabular} \end{table} \subsection{Results on Knowledge Transfer and Distillation} Table~\ref{tab:expr_aishell} compares knowledge transfer and distillation against other modern AR and NAR models on the AISHELL-1 and LibriSpeech datasets to validate the performance. \textbf{AISHELL-1: } As shown in Table~\ref{tab:expr_aishell}, the teacher AR model obtains more than a 24\% relative reduction in CER compared with NAR~(M), and 40\% compared with NAR~(XS). After knowledge distillation, the NAR~(M) with `+$\loss_{\text{F-KD}}$' achieves 8\% and 16\% relative CER reductions on the dev and test sets respectively, and adding `++$\loss_{\text{S-KD}}$' on top of `+$\loss_{\text{F-KD}}$' shows a further CER reduction on the test set of 15\%. The student achieves performance~(5.0\%/5.4\% CER) competitive with state-of-the-art NAR models like CASS-NAT~\cite{fan2021improved} or AL-NAT~\cite{wang2022alignmentlearning}. Similar results are obtained for the distilled NAR~(XS), with 18\%/25\% CER reductions on the two evaluation sets.
\textbf{LibriSpeech: } Table~\ref{tab:expr_libri} shows the performance comparison on the large LibriSpeech corpus. The AR~(L) is adopted as the teacher model, with NAR~(L,S) as student models. The observations are consistent with those in Table~\ref{tab:expr_aishell}: $\loss_{\text{F-KD}}$ and $\loss_{\text{S-KD}}$ further boost the performance of the NAR Mask-CTC model at the L~(3.3/7.8\% WER) and S~(3.7/9.2\% WER) scales by $\sim$25\% relative WER reduction. However, due to the limits of the AR teacher, the insertion and deletion error rate remains high for AR~(L). Results show that our method narrows the gap between AR and NAR, with the improvement being significantly greater on the more difficult evaluation set (i.e. the test set in AISHELL-1, test-other in LibriSpeech). After knowledge transfer and distillation, the length error problem is greatly alleviated compared with the original NAR model, owing to the high prediction accuracy of the AR teacher. Moreover, both $\loss_{\text{F-KD}}$ and $\loss_{\text{S-KD}}$ contribute to reducing insertion and deletion errors~(`I+D'), pushing the length error problem~\cite{higuchi2020mask} to its limits at 0.2\% CER for `I+D' in AISHELL-1 and 1.4\% for LibriSpeech test-other. Meanwhile, the NAR student model achieves results comparable to other state-of-the-art NAR methods, including wav2vec2-CTC~\cite{baevski_2020_wav2vec_20_framework,deng2022improving}, Improved CASS-NAT~\cite{fan2021improved} and AL-NAT~\cite{wang2022alignmentlearning}. \section{Conclusions} \label{sec:conc} In this paper we propose a novel knowledge transfer and distillation architecture that leverages knowledge from AR models to improve NAR performance while reducing the model size. To further boost the performance of NAR, we propose a beam search method on Mask-CTC, which enlarges the search space during the inference stage. Experiments demonstrate that NAR beam search obtains a relative 5\% reduction on the AISHELL-1 dataset with a tolerable RTF increment.
For knowledge distillation, most results achieve over 15\% relative CER/WER reduction on both large and small NAR models. Future work will explore the generalization of knowledge distillation from AR to NAR. Different non-autoregressive models such as CASS-NAT~\cite{fan2021improved} and AL-NAT~\cite{wang2022alignmentlearning} might be explored, together with external non-autoregressive language models. Hidden feature distillation~\cite{gou2021knowledge} is also a valuable extension of this paper. \section{Acknowledgment} This work is supported in part by China NSFC projects under Grants 62122050 and 62071288, and in part by Shanghai Municipal Science and Technology Major Project under Grant 2021SHZDZX0102. The authors would like to thank Tian Tan and Wangyou Zhang for discussion. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Glass formers (spin glasses, fragile molecular glasses, polymers, colloids, etc.) stay perennially out of equilibrium below their glass temperature $T_\mathrm{g}$. Indeed, these materials are said to \emph{age}~\cite{struik:80}. As a very counterintuitive consequence, one should be ready to treat \emph{time} or, more precisely, the thermal history of the sample below $T_\mathrm{g}$, on equal footing with thermodynamic control parameters such as temperature, pressure or an external perturbing field (the magnetic field, for instance). Notwithstanding this complexity, two of the main experimental protocols for cooling a glass former below $T_\mathrm{g}$, namely the \emph{zero-field-cooling} (ZFC) and the \emph{thermoremanent-magnetization} (TRM) protocols, see the discussion below, have been widely regarded as equivalent, see e.g.,~\cite{nordblad:97}. Here, we critically assess the long-held assumption of protocol equivalence through high-accuracy experiments (carried out on single-crystal samples of a CuMn spin glass) and simulations of spin-glass dynamics (carried out on the Janus~II custom-built supercomputer~\cite{janus:14}). Aging is caused by the expansion of cooperative regions~\cite{adam:65} whose linear size $\xi$ defines a glassy coherence length. For temperatures $T<T_{\mathrm{g}}$ the time growth of $\xi$ is unbounded, albeit very slow. Experimentally, $\xi$ is measured by perturbing the sample with an external field,\footnote{A magnetic field in the case of spin glasses, or an electric field for other glass formers.} and measuring the time-dependent, non-linear response. This approach was pioneered in spin-glass experiments~\cite{joh:99} and later extended to other glass-forming materials, see e.g.,~\cite{berthier:05,albert:16}. This method to extract $\xi$ has been reproduced in spin-glass simulations, and successfully compared with microscopic determinations of the size of the cooperative regions~\cite{janus:17b}.
Indeed, parallel analysis of laboratory and \emph{numerical} experiments has recently become possible~\cite{zhai-janus:20a,zhai-janus:21}. A problem with the non-linear response method, however, is that, ideally, it should be applied in the limit of a vanishing perturbing field. Yet, real experiments are carried out in finite fields. This fact complicates the interpretation of the results, because of the long-standing controversy on the nature of the $T<T_{\mathrm{g}}$ phase. On one side of this debate, we find the \emph{Replica Symmetry Breaking} picture~\cite{mezard:87,marinari:00,dealmeida:78}, which expects the non-linear response to the field to remain anomalous even if the magnetic field $H$ is non-vanishing (provided that $\xi$ grows as large as the sample size). On the other hand, the \emph{droplet} model~\cite{mcmillan:84,bray:87,fisher:86,fisher:88} expects the non-linear response to be regular when $H>0$ (also in the limit of large $\xi$). So far, experimental analysis of the response seems to indicate a similar behavior for $H>0$ and for $H=0$~\cite{lefloch:92,lefloch:94}, while analyses of conductance fluctuations in a mesoscopic CuMn spin glass also give some support for Replica Symmetry Breaking~\cite{weissman:88,israeloff:91}. Nevertheless, because $\xi$ will be much smaller than the sample size in our experiments and simulations, the droplet vs. Replica Symmetry Breaking controversy will not enter our analysis. As explained above, the two thermal protocols compared in this work are the TRM and the ZFC. In both protocols, the system is first allowed to reach thermal equilibrium at some temperature $T\gg T_\mathrm{g}$, then cooled as fast as possible to the measuring temperature $\ensuremath{T_\mathrm{m}}\xspace$,\footnote{See Ref.~\cite{zhai-janus:21} for a discussion of the differences between the sudden cooling in experiments and in simulations.} where it is allowed to relax for a waiting time $\ensuremath{t_\mathrm{w}}\xspace$.
At that point, the magnetic field is varied, and the magnetization is measured at later times $t+\ensuremath{t_\mathrm{w}}\xspace$. In the TRM protocol, an external magnetic field $H>0$ is applied (and kept constant) from the very beginning, until the field is switched off at time $\ensuremath{t_\mathrm{w}}\xspace$. As a consequence, the magnetization $M_{\text{TRM}}(t,\ensuremath{t_\mathrm{w}}\xspace)$ decreases with time $t$, see~Fig.~\ref{fig:M_exp_num}--bottom. On the other hand, in the ZFC protocol one keeps $H=0$ until the magnetic field is switched on at time $\ensuremath{t_\mathrm{w}}\xspace$. Hence, the magnetization $M_{\text{ZFC}}(t,\ensuremath{t_\mathrm{w}}\xspace)$ grows from its vanishing starting value at $t=0$, see Fig.~\ref{fig:M_exp_num}--top. We shall consider $M_{\text{TRM},\text{ZFC}}(t,\ensuremath{t_\mathrm{w}}\xspace)$ as a function of the two times, $t$ and $\ensuremath{t_\mathrm{w}}\xspace$, and of the applied magnetic field $H$. \begin{figure}[t] \includegraphics[width = 1\columnwidth]{figures/fig1.pdf} \caption{{\bf Magnetization as a function of time} as obtained for several waiting times, with the ZFC ({\bf top}) and the TRM ({\bf bottom}) protocols (see text), from experiments in a CuMn single-crystal sample ({\bf left}, the experimental time is measured in units of $\tau_{\mathrm{exp}}=1$~s) and from simulations on Janus II ({\bf right}, $\tau_{\mathrm{num}}=1$~ps).
The analysis will focus on the time region $t\approx\ensuremath{t_\mathrm{w}}\xspace$, where the numerical and experimental curves display similar shapes.} \label{fig:M_exp_num} \end{figure} In anticipation of our results, it is worthwhile to examine the current state of understanding for the two protocols: \begin{itemize} \item The \emph{extended principle of superposition} \cite{nordblad:97} requires \begin{equation}\label{eq:super_positionM} M_\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace,t) + M_\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace,t)=M_\mathrm{FC}(0,\ensuremath{t_\mathrm{w}}\xspace+t) \; , \end{equation} where $M_\mathrm{FC}(0,\ensuremath{t_\mathrm{w}}\xspace+t)$ is the magnetization at a time $\ensuremath{t_\mathrm{w}}\xspace+t$ after cooling in a field. \item It seems that $M_\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace,t)$ is simply the complement of $M_\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace,t)$, see \cite{nordblad:97}, so their sum is always equal to $M_\mathrm{FC}(0,\ensuremath{t_\mathrm{w}}\xspace+t)$ (which remains essentially constant as $t$ grows, see also the discussion in~\cite{zhai-janus:21}). \end{itemize} What we shall see is that these statements are not generally true in the presence of a magnetic field. The central questions that we shall address from this standpoint are: \begin{enumerate} \item Does an effective time $\ensuremath{t^\mathrm{eff}}\xspace_H$, defined below, remain the same as a function of the magnetic field $H$ for the TRM and ZFC protocols? \item Does the correlation length $\xi(\ensuremath{t_\mathrm{w}}\xspace;H)$ differ for the two protocols? \item Is it true that $\xi_\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace;H)>\xi_\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace;H)$? \end{enumerate} We think that these questions are the key to the dynamics of spin glasses in a finite magnetic field. The remainder of this work is organized as follows.
The next Section will introduce a phenomenological approach based on the Hamming distance ($\mathrm{Hd}$), see the discussion below, for outlining the effect of the magnetic field on spin-glass dynamics. Section~\ref{sec:difference_xi_in_exp} will exhibit experimental results for $\xi_\mathrm{ZFC}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ and $\xi_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ on CuMn 8 at.\% samples cut from the same single crystal boule. Section~\ref{sec:dependence_Delta_vs_Hd_exp} will present a quantitative relationship for the dependence of the free-energy barrier $\varDelta(\ensuremath{t_\mathrm{w}}\xspace)$ on $\mathrm{Hd}$ from our experiments on single CuMn crystals, relying on the parameters found for AgMn \cite{lederman:91}. Section~\ref{sec:xi_num} will present the results of simulations on a massive special-purpose supercomputer, Janus II, for both $\xi_\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace;H)$ and $\xi_\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace;H)$. Section~\ref{subsec:details_num} will give details of the simulations; Sec.~\ref{subsec:violation_superpostion} will explain the failure of the two basic assumptions, see Eq.~\eqref{eq:super_positionM}; Sec.~\ref{subsec:relaxation_S} shows our estimate of the response function $S(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ from the simulations; Sec.~\ref{subsec:Hd_connection_to_Ez_num} derives the Hamming distance; Sec.~\ref{subsec:teff_extraction_num} extracts the shift of the peak of the relaxation function $S(t,\ensuremath{t_\mathrm{w}}\xspace;H)$.
Sec.~\ref{sec:diff_TRM_ZFC_num} displays the different shifts of the peak of the relaxation function $S(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ for the ZFC and TRM protocols; Sec.~\ref{subsec:num_approachZFC_TRM_xi_micro} evaluates a microscopic value of $\xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace;H)$, comparing the ZFC with the TRM values; and in Sec.~\ref{subsec:exp_approachZFC_TRM_teff} we unveil the difference between the two experimental protocols through the lens of the magnetic response. Section~\ref{sec:conclusion} summarizes the conclusions of this paper, and points to future investigations based on these results. Finally, the paper closes with four appendices. \section{A phenomenological approach based on the Hamming distance} \label{sec:phenomenological_HD} The beautiful solution of the \emph{mean-field} Sherrington-Kirkpatrick model~\cite{mezard:84} is a landmark in the physics of disordered systems and gives rise to a theoretical picture that is known by several names, such as the \emph{Replica Symmetry Breaking} or the \emph{hierarchical} models of spin glasses. Unfortunately, connecting this solution to experimental work is not straightforward, because of two major difficulties. First, strictly speaking, the mean-field solution applies only in space dimensions higher than six (alas, experiments are carried out in the three-dimensional world we live in). How this hierarchical picture needs to be modified in three dimensions is a much-debated problem (see e.g., Ref.~\cite{martin-mayor:22} for an updated account). The second, and perhaps more serious, problem is related to the fact that this theory describes systems in thermal equilibrium. Now, below the glass temperature, the coherence length $\xi$ of a system in thermal equilibrium is as large as the system's size. Unfortunately, because of the extreme slowness of the time growth of $\xi$, the experimental situation is the opposite: the sample size is typically much larger than $\xi$.
Clearly, some additional input is needed to connect the hierarchical picture of the spin-glass phase with real experiments. One such connecting approach is based on generalized fluctuation-dissipation relations~\cite{cugliandolo:93,franz:95,marinari:98f,franz:98,franz:99,marinari:00b,cruz:03,janus:17}, which have also been investigated experimentally for atomic spin glasses~\cite{herisson:02,herisson:04}. Unfortunately, these relations focus on the \emph{linear} response to the magnetic field, while non-linear relations will be crucial to us. An alternative approach was worked out in Ref.~\cite{joh:96}, which explores the dynamics in an ultrametric tree of states. A crucial quantity in this approach is the Hamming distance ($\mathrm{Hd}$) between the state of the system after the initial preparation at time $\ensuremath{t_\mathrm{w}}\xspace$ and the state at the measuring time $t+\ensuremath{t_\mathrm{w}}\xspace$. Yet, we still do not know how this Hamming distance should be defined microscopically. We do have a surrogate that can be obtained from a correlation function (this correlation function can be computed, see Sect.~\ref{subsec:Hd_connection_to_Ez_num}, and experimentally measured~\cite{herisson:02}). Unfortunately, the surrogate is not a fully adequate substitute for the Hamming distance of Joh \emph{et al.}~\cite{joh:96}. Nevertheless, the dynamics in the hierarchical tree does provide useful intuition. This is why we briefly recall here its main results and assumptions. The interested reader will find a more complete account in Appendix~\ref{appendix:phenomenological_HD}. In fact, we shall take a further step because, at variance with Ref.~\cite{joh:96}, we shall accept the possibility that barrier heights increase faster than linearly with $\mathrm{Hd}$ (we shall work out the consequences of this possibility as well). There are many experimental protocols for exploring spin-glass dynamics.
As we explained in the Introduction, those that involve the time change of the magnetization are the zero-field-cooled magnetization (ZFC) and the thermoremanent-magnetization (TRM) protocols, generating $M_\mathrm{ZFC}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ and $M_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$, respectively. The basic concept in the analysis will be the maximum free-energy barrier $\varDelta_{\text{max}}$ between the involved states (see Appendix~\ref{appendix:phenomenological_HD}). Take, for instance, the TRM protocol. When the magnetic field $H$ is cut to zero, the system remembers its correlations achieved after aging for the time $\ensuremath{t_\mathrm{w}}\xspace$. This generates an inflection point in the time decay of $M_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ at $t\approx \ensuremath{t_\mathrm{w}}\xspace$. This is exhibited as a peak in the relaxation function \cite{granberg:88}, \begin{equation}\label{eq:relaxationS_def} S(t,\ensuremath{t_\mathrm{w}}\xspace;H) = - {\frac { \mathrm{d} \, M_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)}{\mathrm{d} \,\log\,t}} \; . \end{equation} The log of the time at which $S(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ peaks, $\log \ensuremath{t^\mathrm{eff}}\xspace_H$,\footnote{Here, and throughout the text, $\log$ denotes the natural logarithm; any other base is indicated explicitly, e.g., $\log_2$.} is thus a measure of $\varDelta_\mathrm{max}$.
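The peak-location procedure behind this definition can be illustrated numerically. The sketch below is a minimal stand-in, assuming a synthetic sigmoidal decay in $\log t$ (the curve, its width, and the waiting time are placeholders, not our measurements); it uses only the definition in Eq.~\eqref{eq:relaxationS_def}.

```python
import numpy as np

# Synthetic TRM-like decay: a smooth sigmoid in log(t) centred on log(t_w).
# All numbers here are illustrative placeholders, not experimental values.
t_w = 2500.0                              # waiting time in seconds
t = np.logspace(0, 6, 2001)               # measurement times, 1 s .. 1e6 s
M = 1.0 - 0.5 * (1.0 + np.tanh(np.log(t / t_w) / 2.0))

# Relaxation function S(t, t_w; H) = -dM_TRM / dlog(t); its peak gives t_eff.
S = -np.gradient(M, np.log(t))
t_eff = t[np.argmax(S)]                   # for this synthetic curve, close to t_w
```

Any discretely sampled $M(t)$ record can be substituted for the synthetic curve; only the (logarithmic) time grid enters the derivative.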
The activation energy is set approximately by the maximum barrier height reached in the waiting time $\ensuremath{t_\mathrm{w}}\xspace$: \begin{equation}\label{eq:teff_arrehenius_law} \ensuremath{t^\mathrm{eff}}\xspace_H=\tau_0\,\mathrm{e}^{\varDelta_\mathrm{max}(t=0,\ensuremath{t_\mathrm{w}}\xspace;H)/k_B T} \; , \end{equation} where $\tau_0$ is an exchange time of the order of $\hbar/k_B T_\text{g}$, and $\varDelta_\mathrm{max}(t=0,\ensuremath{t_\mathrm{w}}\xspace;H)$ is the highest barrier created by the growth of $\xi(t=0,\ensuremath{t_\mathrm{w}}\xspace;H)$ in the time $\ensuremath{t_\mathrm{w}}\xspace$. As Bouchaud has shown \cite{bouchaud:92}, when a magnetic field is present the barrier heights $\varDelta$ are reduced by a Zeeman energy $E_\mathrm{Z}$: \begin{equation}\label{eq:Delta_reduction_Ez} \varDelta(t,\ensuremath{t_\mathrm{w}}\xspace;H) = \varDelta(t,\ensuremath{t_\mathrm{w}}\xspace;0) + E_\mathrm{Z}\,, \end{equation} where \cite{joh:99,guchhait:14,bouchaud:92,vincent:95,janus:17b} \begin{equation}\label{eq:zeeman_energy_def} E_\mathrm{Z}= - M_\mathrm{FC} H \equiv - \chi_\mathrm{FC} N_\mathrm{c}H^2\,. \end{equation} Here, $\chi_\mathrm{FC}$ is the field-cooled magnetic susceptibility {\it per spin} when the spin glass is cooled in a magnetic field to the measurement temperature $\ensuremath{T_\mathrm{m}}\xspace$; $N_\mathrm{c}$ is the number of correlated spins [spanned by the spin glass correlation length, $\xi(t,\ensuremath{t_\mathrm{w}}\xspace;H)$]; and $H^2$ is the square of the applied magnetic field.\footnote{Another view of $E_\mathrm{Z}$ (Bert \emph{et al.} \cite{bert:04}) relies on fluctuations in the magnetization of all of the spins. They used $E_\mathrm{Z}$ linear in $H$, replacing $N_\mathrm{c}$ with $\sqrt{N_\mathrm{c}}$, and using the free spin value in place of $\chi_\mathrm{FC}$.
A very recent investigation of the magnetic field's effect on spin-glass dynamics, Paga \emph{et al.} \cite{zhai-janus:21}, shows that their fit to experiments can also be ascribed to non-linear effects introduced by their use of rather large values of the magnetic field. We shall therefore use our Eq.~\eqref{eq:zeeman_energy_def} in our subsequent analysis.} This is the basis underlying the experimental method to determine the coherence length~\cite{joh:99}. \section{\boldmath $\xi_\mathbf{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ and $\xi_\mathbf{ZFC}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ from experiment} \label{sec:difference_xi_in_exp} The TRM and ZFC experiments were performed on 8 at.\% CuMn samples cut from the same single-crystal boule grown at Ames Laboratory, and characterized in \cite{zhai:19}. The TRM experiments were performed both at the Indiana University of Pennsylvania on a home-built SQUID magnetometer, with a sensitivity roughly an order of magnitude greater than that of commercial devices, and at The University of Texas at Austin on a Quantum Design commercial SQUID magnetometer. The ZFC experiments were performed at The University of Texas at Austin on the same equipment as the TRM. Both ZFC and TRM protocols use the time at which $S(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ peaks as the measure of $\ensuremath{t^\mathrm{eff}}\xspace_H$, with $S(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ defined by Eq.~\eqref{eq:relaxationS_def}, \begin{equation} S(t,\ensuremath{t_\mathrm{w}}\xspace;H)_{(\mathrm{ZFC}/\mathrm{TRM})} = (\pm) \frac{\mathrm{d} \, M_{(\mathrm{ZFC}/\mathrm{TRM})}(t,\ensuremath{t_\mathrm{w}}\xspace;H)}{\mathrm{d}\,\log\,t} \, , \end{equation} where the $-$ sign pertains to TRM experiments and the $+$ sign to ZFC experiments. The measurements were all made at $37.5$~K, corresponding to a reduced temperature $T_\mathrm{r}=T/T_\mathrm{g}=0.9$ ($T_\mathrm{g}=41.5$~K).
Two waiting times were set at $\ensuremath{t_\mathrm{w}}\xspace= 2500$~s and $5000$~s, testing the growth law \cite{marinari:96,kisker:96,sibani:94} \begin{equation}\label{eq:xi_growth_law} \xi(\ensuremath{t_\mathrm{w}}\xspace)=c_1\,\bigg({\frac {\ensuremath{t_\mathrm{w}}\xspace}{\tau_0}}\bigg)^{c_2(T/T_\mathrm{g})}\,, \end{equation} where $c_1$ is a constant of order unity and $c_2\approx 0.104$. The time $\ensuremath{t^\mathrm{eff}}\xspace_H$ at which $S(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ peaks is indicative of the largest barrier $\varDelta_\mathrm{max}(\ensuremath{t_\mathrm{w}}\xspace;H)$ surmounted in the time $\ensuremath{t_\mathrm{w}}\xspace$ \cite{nordblad:86}. Hence, in the presence of a magnetic field, $\log\,\ensuremath{t^\mathrm{eff}}\xspace_H$ is proportional to $\varDelta_\mathrm{max}(\ensuremath{t_\mathrm{w}}\xspace;H)$ through the relationship \begin{equation}\label{eq:Delta_vs_teff_arrhenius} \varDelta_{\text {max}}(\ensuremath{t_\mathrm{w}}\xspace;H)=(k_\mathrm{B} T)\,\log\,(\ensuremath{t^\mathrm{eff}}\xspace_H/\tau_0) \,. \end{equation} In the $H\rightarrow 0$ limit, $S(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ peaks close to $\ensuremath{t_\mathrm{w}}\xspace$. The {\it shift} of the peak of the relaxation function $S(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ from $\ensuremath{t_\mathrm{w}}\xspace$ to $\ensuremath{t^\mathrm{eff}}\xspace_H $ as $H$ increases from zero is a direct measure of the reduction of $\varDelta_\mathrm{max}$ with increasing $H$ [see Eq.~\eqref{eq:Delta_reduction_Ez}]. From Eqs.~\eqref{eq:Delta_reduction_Ez} and~\eqref{eq:zeeman_energy_def}, one can then extract the number of correlated spins $N_\mathrm{c}(\ensuremath{t_\mathrm{w}}\xspace)$ and thence the spin-glass correlation length $\xi(\ensuremath{t_\mathrm{w}}\xspace)$ through the relation \begin{equation}\label{eq:Nc_definition} N_\mathrm{c} \approx \xi^{3-\theta/2}~~, \end{equation} where $\theta$ is the replicon exponent \cite{janus:17b}. 
Thus, \begin{equation}\label{eq:Delta_Ez_reduction_vs_teff} \varDelta_\mathrm{max} - N_\mathrm{c} \chi H^2=k_\mathrm{B} T[\log\,\ensuremath{t^\mathrm{eff}}\xspace_H-\log\,\tau_0]\,. \end{equation} In this manner, the TRM and ZFC protocols generate $\xi_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ and $\xi_\mathrm{ZFC}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$, respectively. The hypothesized difference, $\xi_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H) < \xi_\mathrm{ZFC}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$, can then be tested. We expect any difference to result from an upward curvature of $\varDelta$ as a function of $\mathrm{Hd}$, as outlined in the previous Section and in Appendix~\ref{appendix:phenomenological_HD}. Fig.~\ref{fig:log_teff_375exp} exhibits experimental values of $\log\,\ensuremath{t^\mathrm{eff}}\xspace_H$ vs $H^2$ and fits to the data for the ZFC and TRM protocols for waiting times $\ensuremath{t_\mathrm{w}}\xspace=2500$~s and $\ensuremath{t_\mathrm{w}}\xspace=5000$~s at $T=37.5$~K. Because $T=37.5$~K is so close to $T_\mathrm{g}$, non-linear terms are evident in the data. As a consequence, the fits employ terms in $H$ beyond quadratic. Appendix~\ref{appendix:second-single-crystal} presents data taken from another sample cut from a 6 at.\% CuMn single crystal boule with $\ensuremath{T_\mathrm{g}}\xspace = 31.5$~K at a measurement temperature of $\ensuremath{T_\mathrm{m}}\xspace = 26$~K. The growth of $\xi(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ was slower for both ZFC and TRM protocols, leading to smaller values of the correlation lengths and therefore smaller differences between $\xi_{\text{TRM}}(\ensuremath{t_\mathrm{w}}\xspace)$ and $\xi_{\text{ZFC}}(\ensuremath{t_\mathrm{w}}\xspace)$, as compared to those exhibited here in the main text. Nevertheless, at the largest waiting time, that difference lies well outside the sum of the error bars.
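The extraction chain just described can be sketched in a few lines: the slope of $\log\,\ensuremath{t^\mathrm{eff}}\xspace_H$ versus $H^2$ gives $N_\mathrm{c}$ through Eq.~\eqref{eq:Delta_Ez_reduction_vs_teff}, and Eq.~\eqref{eq:Nc_definition} then gives $\xi$. All numbers below (susceptibility, replicon exponent, field grid, barrier scale) are illustrative placeholders, not our measured parameters.

```python
import numpy as np

# Sketch of the correlation-length extraction: fit log(t_eff) vs H^2,
# read off N_c from the slope, then xi from N_c ~ xi^(3 - theta/2).
kT = 1.0                       # k_B * T, the energy unit used here
chi = 0.05                     # hypothetical FC susceptibility per spin
theta = 0.35                   # replicon exponent (representative value)
Nc_true = 2.0e5                # hypothetical number of correlated spins

H = np.linspace(0.0, 0.02, 9)                    # perturbing-field grid
log_teff = 34.0 - (Nc_true * chi / kT) * H**2    # ideal, noiseless data

slope, intercept = np.polyfit(H**2, log_teff, 1)
Nc = -slope * kT / chi                           # number of correlated spins
xi = Nc ** (1.0 / (3.0 - theta / 2.0))           # correlation length (in a_0)
```

With noisy data the same fit applies; only the uncertainty of the $H^2$ coefficient propagates into the error bar on $\xi$.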
\begin{figure}[h] \centering \includegraphics[width = 1\columnwidth]{figures/fig2.pdf} \caption{A plot of data and fit for the effective waiting time $\ensuremath{t^\mathrm{eff}}\xspace_H$ (logarithmic scale) vs $H^2$ for ZFC and TRM measurements on a CuMn 8 at.\% single crystal at $\ensuremath{t_\mathrm{w}}\xspace=2500, 5000$~s and $T=37.5$~K. The polynomial fitting parameters for each of the values of $H$, up to and including $H^4$, are given in Tables~\ref{tab:zfc375t0}--\ref{tab:trm375} (see Appendix~\ref{appendix:only_tables}). The value of the correlation length is extracted from the $H^2$ fitting terms.} \label{fig:log_teff_375exp} \end{figure} Fitting to the coefficients of the $H^2$ terms for the 8 at.\% CuMn sample described above (see the respective tables in Appendix~\ref{appendix:only_tables} for specific values), we are able to extract $\xi_\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace)$ and $\xi_\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace)$. We find: \begin{align*} \xi_\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace=2500\text{ s})&=220(20), \\ \xi_\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace=2500\text{ s})&=210(16), \\ \xi_\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace=5000\text{ s})&=270(20), \\ \xi_\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace=5000\text{ s})&=220(30), \end{align*} all in units of the lattice constant $a_0$. As hypothesized, the magnitude of $\xi_\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace)$ exceeds $\xi_\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace)$. It must be noted, however, that the difference lies well within the error bars for $\ensuremath{t_\mathrm{w}}\xspace=2500$~s, while the difference is just inside the sum of the error bars for $\ensuremath{t_\mathrm{w}}\xspace=5000$~s.
Our attempts at larger values of $\ensuremath{t_\mathrm{w}}\xspace$ have not been successful, as the $S(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ curves broaden so much that it proved too difficult to extract reproducible values for $\ensuremath{t^\mathrm{eff}}\xspace_H$. Smaller values of $\ensuremath{t_\mathrm{w}}\xspace$ were not attempted as the difference between the ZFC and TRM correlation lengths would be smaller than for $\ensuremath{t_\mathrm{w}}\xspace=2500$~s and the error bars would preclude any reliable conclusion. The ratios of the respective values of $\xi(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ are instructive. One finds \begin{align*} \xi_\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace\!=\!5000\text{ s})/\xi_\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace\!=\!2500\text{ s})&=1.26 \\ \xi_\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace\!=\!5000\text{ s})/\xi_\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace\!=\!2500\text{ s})&=1.06 \; . \end{align*} This confirms that $\xi_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ grows more slowly than $\xi_\mathrm{ZFC}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ with $\ensuremath{t_\mathrm{w}}\xspace$, indicating that the barrier heights are larger for TRM growth. This is additional evidence that the dependence of $\varDelta(\ensuremath{t_\mathrm{w}}\xspace)$ on $\mathrm{Hd}$ curves upward. All the measurements reported above were made with the magnetic field $H$ calibrated using a standard Pd sample provided by Quantum Design. The residual magnetic field was measured before each series of measurements, and the cooling profile was kept the same for both ZFC and TRM measurements. The self-consistency of each data set, when combined with calibration of $H$ as described above, gives some confidence in the differences between $\xi_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ and $\xi_\mathrm{ZFC}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$.
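For reference, the growth law of Eq.~\eqref{eq:xi_growth_law} makes a parameter-free prediction for such doubling ratios, since $c_1$ and $\tau_0$ cancel: $\xi(2\ensuremath{t_\mathrm{w}}\xspace)/\xi(\ensuremath{t_\mathrm{w}}\xspace)=2^{c_2\,T/T_\mathrm{g}}$. A one-line check at our measuring temperature (reading the exponent literally as $c_2\,T/T_\mathrm{g}$ with $c_2\approx 0.104$):

```python
# Doubling ratio predicted by the growth law xi ~ (t_w/tau_0)^(c2*T/Tg):
# c1 and tau_0 drop out of the ratio xi(2*t_w)/xi(t_w) = 2**(c2*T/Tg).
c2 = 0.104
T_over_Tg = 37.5 / 41.5           # reduced temperature of the measurements
ratio = 2.0 ** (c2 * T_over_Tg)   # ~ 1.07
```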
\section{\boldmath Explicit dependence of $\varDelta(\ensuremath{t_\mathrm{w}}\xspace)$ on Hamming distance ($\mathrm{Hd}$)} \label{sec:dependence_Delta_vs_Hd_exp} The paper ``Dynamics in spin glasses'' by Lederman \emph{et al.}~\cite{lederman:91}, as a consequence of detailed experiments on AgMn (2.6 at.\%), developed a quantitative relationship for the change in $\varDelta(\ensuremath{t_\mathrm{w}}\xspace)$ as a function of the change in $\mathrm{Hd}$. Writing out their Eq.~(13) [see our Eq.~\eqref{eq:variation_Delta_vs_Hd}], \begin{equation}\label{eq:Lederman_eq13} \begin{split} \varDelta(\mathrm{Hd})&-\beta(T)/\alpha(T)\\ &=[\varDelta(\mathrm{Hd}_0)-\beta(T)/\alpha(T)]\,\mathrm{e}^{\alpha(T)(\mathrm{Hd}-\mathrm{Hd}_0)}, \end{split} \end{equation} where $\varDelta(\mathrm{Hd})-\varDelta(\mathrm{Hd}_0)$ is the change in barrier heights when $\mathrm{Hd}$ increases from $\mathrm{Hd}_0$ to $\mathrm{Hd}$. The coefficient in the exponent, $\alpha(T)$, was estimated from experiment to be approximately 38.1 at a reduced temperature of $T_\text{r}=0.865$. Furthermore, $\alpha(T)$ and $\beta(T)$ are defined by: \begin{equation}\label{eq:alpha_beta_expression} \begin{split} \alpha(T) &= -2a(T)/(\delta q_\mathrm{EA}/\delta T), \\ \beta(T)&=-2b(T)/(\delta q_\mathrm{EA}/\delta T), \end{split} \end{equation} where $q_\mathrm{EA}$ is the Edwards-Anderson self-overlap, and $a(T)$ and $b(T)$ are defined by an experimental fit, \begin{equation}\label{eq:Delta_over_deltaT} (\delta\varDelta/\delta T)|_{\ensuremath{T_\mathrm{m}}\xspace}=a(\ensuremath{T_\mathrm{m}}\xspace)\varDelta+b(\ensuremath{T_\mathrm{m}}\xspace), \end{equation} where $\ensuremath{T_\mathrm{m}}\xspace$ is the measuring temperature. Figs.~11 and 12 of~\cite{lederman:91} display $a(T)$ and $b(T)$, respectively, for four representative values of $\ensuremath{T_\mathrm{m}}\xspace$.
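For later use, Eq.~\eqref{eq:Lederman_eq13} can be inverted for the change in Hamming distance (a one-line rearrangement, taking the logarithm of both sides):

```latex
\begin{equation}
\mathrm{Hd}-\mathrm{Hd}_0
  =\frac{1}{\alpha(T)}\,
   \log\!\left[\frac{\varDelta(\mathrm{Hd})-\beta(T)/\alpha(T)}
                    {\varDelta(\mathrm{Hd}_0)-\beta(T)/\alpha(T)}\right].
\end{equation}
```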
For our purposes, we are only interested in the ratio $\beta/\alpha$, which, from Eq.~\eqref{eq:alpha_beta_expression}, is independent of $\delta q_\mathrm{EA}/\delta T$. Our working temperature is $\ensuremath{T_\mathrm{m}}\xspace = 37.5$~K, or a reduced temperature $T_\mathrm{r}=0.90$. From Fig. 10 of Lederman \emph{et al.}~\cite{lederman:91}, this leads to $a\approx 29.02$ and $b\approx 684.03$, generating the ratio \begin{equation}\label{eq:ratio_ab} \beta/\alpha=b/a\approx 23.57. \end{equation} On the assumption that the ratios for AgMn are relevant to CuMn, we can then address our data. At $T = 37.5$~K, we fit the time at which $S(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ peaks, $\ensuremath{t^\mathrm{eff}}\xspace_H$, by \begin{equation}\label{eq:fitting_for_teff} \log(\ensuremath{t^\mathrm{eff}}\xspace_H)=a_0+a_2H^2+a_4H^4+a_6H^6+\mathcal{O}(H^8)\,. \end{equation} Eq.~\eqref{eq:fitting_for_teff} can be converted to an energy scale by rewriting it as \begin{equation}\label{eq:teff_explicit_coefficients_dep} \begin{split} k_\mathrm{B} T\log(\ensuremath{t^\mathrm{eff}}\xspace_H/\tau_0)&=k_\mathrm{B}T[a_0-\log(\tau_0)]\\ &\quad+k_\mathrm{B} T[a_2H^2+a_4H^4+a_6H^6+\mathcal{O}(H^8)]\,. \end{split} \end{equation} Dividing Eq.~\eqref{eq:teff_explicit_coefficients_dep} by $k_\mathrm{B} T_\mathrm{g}$ gives the energy scale in units of $k_\mathrm{B} T_\mathrm{g}$. We define $\varDelta_0(\ensuremath{t_\mathrm{w}}\xspace)\equiv E_0 = (T/T_\mathrm{g})[a_0-\log(\tau_0)]$ as the height of the last barrier encountered during a waiting time $\ensuremath{t_\mathrm{w}}\xspace$ in the absence of a magnetic field, and \begin{equation} \label{eq:defEn} E_n(\ensuremath{t_\mathrm{w}}\xspace;H) = (T/T_\mathrm{g})a_n(\ensuremath{t_\mathrm{w}}\xspace)H^n\,, \end{equation} as the $n$th-order change in the barrier height's free-energy scale caused by the presence of the external magnetic field at the waiting time $\ensuremath{t_\mathrm{w}}\xspace$.
The ZFC experiments for $\ensuremath{t_\mathrm{w}}\xspace=2500$~s yield $\varDelta_0^\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace=2500\text{ s})\equiv E_0=33.55848$ in units of $k_\mathrm{B}T_\mathrm{g}$ (see Table~\ref{tab:zfc375t0} in Appendix~\ref{appendix:only_tables}). The value for the TRM experiments is $\varDelta_0^\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace=2500\text{ s})\equiv E_0=33.58718$ (see Table~\ref{tab:trm375t0}), which should be the same as for the ZFC protocol, the slight difference being a result of fits to the data. Likewise, for $\ensuremath{t_\mathrm{w}}\xspace=5000$~s, Tables~\ref{tab:zfc375} and \ref{tab:trm375} give $\varDelta_0^\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace=5000\text{ s})\equiv E_0=34.11658$ and $\varDelta_0^\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace=5000\text{ s})\equiv E_0=34.07752$, respectively. From these values, and Eqs.~\eqref{eq:Lederman_eq13} and \eqref{eq:ratio_ab} with $\alpha(T_\mathrm{r}\!=\!0.90)=46.97$, we can arrive at $\delta \mathrm{Hd}=\mathrm{Hd}-\mathrm{Hd}_0$. We find \begin{equation}\label{eq:Hd_delta_variation} \begin{split} \delta \mathrm{Hd}^{\text {TRM}}= 1.02 \times 10^{-3},\\ \delta \mathrm{Hd}^{\text {ZFC}}= 1.16 \times 10^{-3}. \end{split} \end{equation} The small values of the difference in Hamming distances for a doubling of the waiting times are an indication of the slow growth of the correlation lengths with waiting times $\ensuremath{t_\mathrm{w}}\xspace$. The equilibrium value of the Hamming distance [$q_{\alpha\beta}=0$ in Eq.~\eqref{eq:Hdistance_def}] is approximately 0.0575, so even for $\ensuremath{t_\mathrm{w}}\xspace = 5000$~s, the change in Hamming distance is still tiny. To reach equilibrium would indeed require time scales of the order of the age of the universe. One can relate the correlation length $\xi(\ensuremath{t_\mathrm{w}}\xspace)$ directly to $\mathrm{Hd}$.
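As a consistency check, the values in Eq.~\eqref{eq:Hd_delta_variation} follow from inverting Eq.~\eqref{eq:Lederman_eq13} with the barrier heights quoted above (in units of $k_\mathrm{B}T_\mathrm{g}$), $\beta/\alpha\approx 23.57$ from Eq.~\eqref{eq:ratio_ab}, and $\alpha(T_\mathrm{r}\!=\!0.90)=46.97$; a minimal script:

```python
import math

# delta_Hd from inverting the Lederman et al. barrier/Hd relation:
# Delta(Hd) - beta/alpha = [Delta(Hd0) - beta/alpha] * exp(alpha * dHd).
alpha = 46.97             # alpha at T_r = 0.90
beta_over_alpha = 23.57   # b/a ratio taken over from AgMn

def delta_hd(e0_short, e0_long):
    """Hd change as the dominant barrier grows from e0_short to e0_long."""
    return math.log((e0_long - beta_over_alpha) /
                    (e0_short - beta_over_alpha)) / alpha

d_hd_trm = delta_hd(33.58718, 34.07752)   # ~ 1.02e-3
d_hd_zfc = delta_hd(33.55848, 34.11658)   # ~ 1.16e-3
```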
As shown above, the Hamming distance increases by unity for each mutual spin flip, that is, for each reduction in $q_{\alpha\beta}$ by two. Thus, $\xi(\ensuremath{t_\mathrm{w}}\xspace)$ increases by a lattice constant for each mutual spin flip. The volume of real space increases as $[\xi(\ensuremath{t_\mathrm{w}}\xspace)]^{D-\theta/2}$, where $D$ is the spatial dimension. This must equal the total number of mutual spin flips, given by $N\times \mathrm{Hd}(\ensuremath{t_\mathrm{w}}\xspace)$. Tables~\ref{tab:zfc375t0}--\ref{tab:trm375}, see Appendix~\ref{appendix:only_tables}, give the correlation lengths $\xi(\ensuremath{t_\mathrm{w}}\xspace)$ for $\ensuremath{t_\mathrm{w}}\xspace=2500$ and $\ensuremath{t_\mathrm{w}}\xspace=5000$~s. From Eq.~\eqref{eq:Hd_delta_variation} the change in $\mathrm{Hd}$ is known for both waiting times. One can therefore take the ratio of $\xi(\ensuremath{t_\mathrm{w}}\xspace)$ for the two waiting times for each protocol, and establish an absolute value for $\mathrm{Hd}(\ensuremath{t_\mathrm{w}}\xspace)$ at each value of $\ensuremath{t_\mathrm{w}}\xspace$. Expressed numerically, \begin{align} \label{eq:estimation_Hd_through_xi_ratio} \frac {\xi(\ensuremath{t_\mathrm{w}}\xspace\!=\!5000\text{ s})^{[D-\theta(\ensuremath{t_\mathrm{w}}\xspace\!=\!5000\text{ s})/2]}}{\xi(\ensuremath{t_\mathrm{w}}\xspace\!=\!2500\text{ s})^{[D-\theta(\ensuremath{t_\mathrm{w}}\xspace\!=\!2500\text{ s})/2]}}&={\frac {\mathrm{Hd}_{5000}}{\mathrm{Hd}_{2500}}}\\ &={\frac {\mathrm{Hd}_{2500}+\delta \mathrm{Hd}}{\mathrm{Hd}_{2500}}} \; .\nonumber \end{align} In order to evaluate Eq.~\eqref{eq:estimation_Hd_through_xi_ratio}, it is necessary to know the respective values of $\theta$ at $T=37.5$~K.
They are \begin{equation} \begin{split} \theta_\mathrm{ZFC}(T=37.5~{\text {K}}, t_\mathrm{w}=2500~{\text{s}})=0.354,\\ \theta_\mathrm{TRM}(T=37.5~{\text {K}}, t_\mathrm{w}=2500~{\text{s}})=0.356,\\ \theta_\mathrm{ZFC}(T=37.5~{\text {K}}, t_\mathrm{w}=5000~{\text{s}})=0.343,\\ \theta_\mathrm{TRM}(T=37.5~{\text {K}}, t_\mathrm{w}=5000~{\text{s}})=0.353. \end{split} \end{equation} Inverting Eq.~\eqref{eq:estimation_Hd_through_xi_ratio} gives $\mathrm{Hd}_{2500}=\delta\mathrm{Hd}/(r-1)$, where $r$ denotes the ratio on its left-hand side. Using the values of $\xi(\ensuremath{t_\mathrm{w}}\xspace)$ from Tables~\ref{tab:zfc375t0}--\ref{tab:trm375}, and $\delta \mathrm{Hd}$ from Eq.~\eqref{eq:Hd_delta_variation}, one obtains \begin{equation}\label{eq:Hd_values} \begin{split} \mathrm{Hd}_{\ensuremath{t_\mathrm{w}}\xspace\!=\!2500}^{\text {TRM}}=5.64 \times 10^{-3},\\ \mathrm{Hd}_{\ensuremath{t_\mathrm{w}}\xspace\!=\!5000}^{\text {TRM}}=6.66 \times 10^{-3},\\ \mathrm{Hd}_{\ensuremath{t_\mathrm{w}}\xspace\!=\!2500}^{\text {ZFC}}=1.20 \times 10^{-3},\\ \mathrm{Hd}_{\ensuremath{t_\mathrm{w}}\xspace\!=\!5000}^{\text {ZFC}}=2.36 \times 10^{-3} . \end{split} \end{equation} The value of $q_{\mathrm{EA}}$ at $T_\text{r}=0.90$ is approximately 0.115, so that the full Hamming distance at this temperature, from Eq.~\eqref{eq:Hdistance_def} with $q_{\alpha\beta}=0$, is 0.0575. From the results exhibited in Eq.~\eqref{eq:Hd_values}, the phase space occupied in our experiments therefore spans only about 12\% of that available. The slow growth of $\xi(\ensuremath{t_\mathrm{w}}\xspace)$ is evidence that true equilibrium in the spin-glass condensed phase can never be accomplished on laboratory time scales, except perhaps at temperatures in the immediate vicinity of $T_\mathrm{g}$ where $q_\mathrm{EA}$ can become arbitrarily small. It is also interesting to note from Eq.~\eqref{eq:Hd_values} that $\mathrm{Hd}$ for TRM experiments is larger than for ZFC experiments.
This is, of course, consistent with our picture of the shift of the beginning of aging from $q_\mathrm{EA}$ for ZFC experiments to $q_\mathrm{EA}-q(E_\mathrm{Z})$ or, equivalently, from $\mathrm{Hd}=0$ to $\mathrm{Hd}(|E_\mathrm{Z}|=\varDelta)$ from Eq.~\eqref{eq:Hd_vs_Ez} for TRM protocols as compared to ZFC protocols. An important lesson from this analysis is that a true definition of $\xi(\ensuremath{t_\mathrm{w}}\xspace)$ can be extracted only from a ZFC protocol. A TRM protocol, assuming that $\varDelta(\ensuremath{t_\mathrm{w}}\xspace)$ increases with $\mathrm{Hd}$ faster than linearly, will generate a value for $\xi(\ensuremath{t_\mathrm{w}}\xspace)$ that is a function of the magnetic field. In that sense, though an average is usually taken, the only meaningful protocol for extraction of $\xi(\ensuremath{t_\mathrm{w}}\xspace)$ is ZFC. In Secs.~\ref{sec:xi_num}--\ref{sec:diff_TRM_ZFC_num}, simulations will be used: \begin{enumerate} \item to establish the microscopic features of a 3D spin glass in the presence of an external magnetic field; \item to numerically extract the difference between the magnetic response to the thermoremanent magnetization (TRM) and the zero-field-cooled (ZFC) protocols; \item to investigate the relationship of our simulation results to the Hamming distance, $\mathrm{Hd}$, defined in Eq.~\eqref{eq:Hdistance_def}. \end{enumerate} \section{\boldmath $\xi_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ and $\xi_\mathrm{ZFC}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ from simulations} \label{sec:xi_num} This section is organized as follows. In Sec.~\ref{subsec:details_num} we present the details of the simulations carried out on the Janus~II supercomputer. Sec.~\ref{subsec:violation_superpostion} will explain the failure of the basic experimental assumption, see Eq.~\eqref{eq:super_positionM}.
In Sec.~\ref{subsec:relaxation_S}, we compare the numerical relaxation function, $S(t,\ensuremath{t_\mathrm{w}}\xspace;H)$, of the ZFC and the TRM protocols; and in Sec.~\ref{subsec:Hd_connection_to_Ez_num}, we display the relationship of our results to the Hamming distance, $\mathrm{Hd}$. Finally, we conclude this section by extracting an effective correlation length for both the ZFC and the TRM protocols, comparing the relationship relied upon in experiments with a direct microscopic calculation of the correlation length. \subsection{Details of the simulations} \label{subsec:details_num} We carried out massive simulations on the Janus II supercomputer \cite{janus:14} studying the Ising-Edwards-Anderson (IEA) model on a cubic lattice with periodic boundary conditions and size $L=160~a_0$, where $a_0$ is the average distance between magnetic moments. \begin{table}[!h] \begin{centering} \begin{ruledtabular} \begin{tabular}{c c c c c c c} &$\ensuremath{T_\mathrm{m}}\xspace$&$\ensuremath{t_\mathrm{w}}\xspace$ & $\xi(\ensuremath{t_\mathrm{w}}\xspace,H\!=\!0)$ & $t_{\text {max}}$ & $\theta(\tilde {x})$&$C_\mathrm{peak}(\ensuremath{t_\mathrm{w}}\xspace)$\\ \hline\\[-5pt] ${\textbf{Run 1}}$&0.9&$2^{22}$&8.294(7)&$2^{30}$&0.455&0.533(3)\\ ${\textbf{Run 2}}$&0.9&$2^{26.5}$&11.72(2)&$2^{30}$&0.436&0.515(2)\\ ${\textbf{Run 3}}$&0.9&$2^{31.25}$&16.63(5)&$2^{32}$&0.415&0.493(3)\\ ${\textbf{Run 4}}$&1.0&$2^{23.75}$&11.79(2)&$2^{28}$&0.512&0.422(2)\\ ${\textbf{Run 5}}$&1.0&$2^{27.625}$&16.56(5)&$2^{32}$&0.498&0.400(1)\\ ${\textbf{Run 6}}$&1.0&$2^{31.75}$&23.63(14)&$2^{34}$&0.484&0.386(4)\\ ${\textbf{Run 7}}$&0.9&$2^{34}$&20.34(6)&$2^{34}$&0.401&0.481(3)\\ \end{tabular} \caption{Parameters for each of our numerical simulations: $\ensuremath{T_\mathrm{m}}\xspace,~\ensuremath{t_\mathrm{w}}\xspace,~\xi(\ensuremath{t_\mathrm{w}}\xspace)$, the longest simulation time $t_{\text {max}}$, the replicon exponent $\theta$, and the value of
$C_\mathrm{peak}(\ensuremath{t_\mathrm{w}}\xspace)$ as defined and employed in the zero-field-cooling protocol of Refs.~\cite{zhai-janus:20a,zhai-janus:21}. The replicon exponent $\theta$ is a function of $\tilde{x}= \ell_\mathrm{J}(T)/\xi(\ensuremath{t_\mathrm{w}}\xspace)$, where $\ell_J(T)$ is the Josephson length \cite{janus:18,zhai:19}. } \label{tab:details_num} \end{ruledtabular} \end{centering} \end{table} The $N=L^D$ Ising spins, $s_x=\pm 1$, interact with their lattice nearest neighbors through the Hamiltonian \begin{equation}\label{eq:IEA_hamiltonian} \mathcal{H}= -\sum_{\langle \boldsymbol x,\boldsymbol y\rangle}J_{\boldsymbol x\boldsymbol y}s_{\boldsymbol x} s_{\boldsymbol y} - H\sum_{\boldsymbol x} s_{\boldsymbol x}, \end{equation} where the quenched disorder couplings are $J_{\boldsymbol x \boldsymbol y}=\pm 1$, with 50\% probability. We name a particular choice of the couplings a {\it sample}. In the absence of an external magnetic field ($H=0$), this model undergoes a spin-glass transition at the critical temperature in simulation units $T_\text{g}=1.102(3)$ \cite{janus:13}. We study the off-equilibrium dynamics of model~\eqref{eq:IEA_hamiltonian} using a Metropolis algorithm (one lattice sweep roughly corresponds to one picosecond of physical time). We have studied a single sample; see Refs.~\cite{zhai-janus:20a,zhai-janus:21} for sample variability studies. For each of the considered protocols, we have considered 1024 statistically independent system trajectories (termed \emph{replicas}), except for Runs 6 and 7 in Table \ref{tab:details_num}, for which we have simulated 512 replicas. Further simulation details can be found in Table \ref{tab:details_num} (the rationale for our choices of temperatures and magnetic fields is explained in Ref.~\cite{zhai-janus:20a}).
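As an illustration of these dynamics, the following is a minimal sketch (emphatically not the Janus II code) of Metropolis updates for the Hamiltonian of Eq.~\eqref{eq:IEA_hamiltonian} on a toy $L=8$ lattice, recording the observables used in the protocols described below. The checkerboard update order is a vectorization choice made here for illustration only.

```python
# Minimal sketch (NOT the Janus II code) of Metropolis dynamics for the
# IEA Hamiltonian on a toy L=8 lattice with periodic boundaries.
import numpy as np

rng = np.random.default_rng(0)
L, D = 8, 3
T, H = 0.9, 0.02

# Quenched couplings J = +1 or -1 with 50% probability, one array per direction.
J = [rng.choice([-1, 1], size=(L, L, L)) for _ in range(D)]
s = rng.choice([-1, 1], size=(L, L, L))       # random initial spin configuration

def local_field(s):
    """Sum of J_xy * s_y over the six nearest neighbors (periodic), plus H."""
    h = np.zeros((L, L, L))
    for d in range(D):
        h += J[d] * np.roll(s, -1, axis=d)    # bond from x to x + e_d
        h += np.roll(J[d] * s, 1, axis=d)     # bond from x - e_d to x
    return h + H

x, y, z = np.indices((L, L, L))
def metropolis_sweep(s):
    """One full-lattice sweep; same-parity sites do not interact, so each
    half-lattice can be updated simultaneously (checkerboard order)."""
    for parity in (0, 1):
        mask = (x + y + z) % 2 == parity
        dE = 2.0 * s * local_field(s)          # energy cost of flipping each spin
        flip = mask & ((dE <= 0.0) | (rng.random(s.shape) < np.exp(-dE / T)))
        s[flip] *= -1
    return s

for _ in range(20):                            # relax for a toy "waiting time"
    s = metropolis_sweep(s)
s_tw = s.copy()                                # configuration at t_w
for _ in range(20):                            # further evolution
    s = metropolis_sweep(s)

M = s.mean()                                   # magnetization, cf. Eq. (M_ZFC_def)
C = (s_tw * s).mean()                          # auto-correlation, cf. Eq. (C_ZFC_def)
```

The toy sizes and times are far below those of Table \ref{tab:details_num}; the sketch only fixes the logic of the update and of the recorded observables.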
In order to simulate the experimental protocols, we proceeded as follows: \begin{itemize} \item For the TRM protocol, the initial random spin configuration was placed instantaneously at the working temperature $\ensuremath{T_\mathrm{m}}\xspace$ in a magnetic field $H$. It was allowed to relax for a time $\ensuremath{t_\mathrm{w}}\xspace$ in the presence of $H$, after which the magnetic field was removed, and the magnetization, \begin{equation}\label{eq:M_TRM_def} M_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)={\frac {1}{160^3}}\sum_{\boldsymbol x}s_{\boldsymbol x}(t+\ensuremath{t_\mathrm{w}}\xspace;0), \end{equation} as well as the temporal auto-correlation function, \begin{equation}\label{eq:C_TRM_def} C_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)={\frac {1}{160^3}}\sum_{\boldsymbol x}s_{\boldsymbol x}(\ensuremath{t_\mathrm{w}}\xspace;H)s_{\boldsymbol x}(t+\ensuremath{t_\mathrm{w}}\xspace;0), \end{equation} were recorded. \item For the ZFC protocol, the initial random spin configuration was placed instantaneously at the working temperature $\ensuremath{T_\mathrm{m}}\xspace$ and allowed to relax for a time $\ensuremath{t_\mathrm{w}}\xspace$ at $H=0$. At time $\ensuremath{t_\mathrm{w}}\xspace$, the magnetic field $H$ was applied and the magnetization, \begin{equation}\label{eq:M_ZFC_def} M_\mathrm{ZFC}(t,\ensuremath{t_\mathrm{w}}\xspace;H)={\frac {1}{160^3}}\sum_{\boldsymbol x}s_{\boldsymbol x}(t+\ensuremath{t_\mathrm{w}}\xspace;H), \end{equation} as well as the temporal auto-correlation function, \begin{equation}\label{eq:C_ZFC_def} C_\mathrm{ZFC}(t,\ensuremath{t_\mathrm{w}}\xspace;H)={\frac {1}{160^3}}\sum_{\boldsymbol x}s_{\boldsymbol x}(\ensuremath{t_\mathrm{w}}\xspace;0)s_{\boldsymbol x}(t+\ensuremath{t_\mathrm{w}}\xspace;H), \end{equation} were recorded. \end{itemize} Note that the auto-correlation function can also be obtained experimentally in the TRM protocol~\cite{herisson:02}.
Indeed, in the limit $H\to 0$, one has $C_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)\propto \langle M_\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace) M_\mathrm{TRM}(t+\ensuremath{t_\mathrm{w}}\xspace)\rangle$, where $\langle\ldots \rangle$ indicates the average over the thermal noise. To see this, note that although $\langle M_\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace) M_\mathrm{TRM}(t+\ensuremath{t_\mathrm{w}}\xspace)\rangle\propto \sum_{{\boldsymbol x},{\boldsymbol y}}\,\langle s_{\boldsymbol x}(\ensuremath{t_\mathrm{w}}\xspace;H)s_{\boldsymbol y}(t+\ensuremath{t_\mathrm{w}}\xspace;0)\rangle$, the gauge invariance~\cite{toulouse:77} of the Hamiltonian~\eqref{eq:IEA_hamiltonian}, which holds for $H\to 0$, ensures that only terms with ${\boldsymbol x}={\boldsymbol y}$ are non-vanishing in the double sum.\footnote{The null contribution (on average) of the cross terms ${\boldsymbol x}\neq{\boldsymbol y}$ makes $\langle M_\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace) M_\mathrm{TRM}(t+\ensuremath{t_\mathrm{w}}\xspace)\rangle$ rather noisy, as can be appreciated in Ref.~\cite{herisson:02}.} \subsection{The superposition principle breaks down for finite magnetic fields} \label{subsec:violation_superpostion} The experimental investigation of spin glasses is based on ZFC and TRM protocols, which are related to each other in the limit $H \to 0^+$ through Eq.~\eqref{eq:super_positionM}. The \emph{extended superposition principle} has been the touchstone of experimental analysis for more than three decades. However, thanks to the massive numerical simulations carried out on Janus~II, we have discovered a range of validity for Eq.~\eqref{eq:super_positionM}, and the failure of the assumption that $M_\mathrm{FC}(0,\ensuremath{t_\mathrm{w}}\xspace+t)$ is always equal to the sum $M_\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace,t)+M_\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace,t)$ for $H>0$.
We analyze separately the growth of the left-hand side, $M_\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace,t)+M_\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace,t)$, and the right-hand side, $M_\mathrm{FC}(0,\ensuremath{t_\mathrm{w}}\xspace+t)$, of Eq.~\eqref{eq:super_positionM}. As the reader will notice in Fig.~\ref{fig:superposition_regime_M_overH}, the violation of Eq.~\eqref{eq:super_positionM} increases as the magnetic field increases. Moreover, the field-cooled magnetization, $M_\mathrm{FC}(0,\ensuremath{t_\mathrm{w}}\xspace+t)$, changes with time. \begin{figure}[h] \centering \includegraphics[width = 1\columnwidth]{figures/fig3.pdf} \caption{Growth of the rescaled magnetizations for different experimental protocols: $[M_\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace,t)+M_\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace,t)]/H$, and $M_\mathrm{FC}(0,\ensuremath{t_\mathrm{w}}\xspace+t)/H$. The darker colors refer to the FC case, while the lighter ones refer to the quantity $\mathrm{ZFC} + \mathrm{TRM}$. The vertical dashed lines correspond to the effective times, $\ensuremath{t^\mathrm{eff}}\xspace_H$, associated with each case. The violation of Eq.~\eqref{eq:super_positionM} is evident.} \label{fig:superposition_regime_M_overH} \end{figure} In order to characterize the violation of Eq.~\eqref{eq:super_positionM}, let us define the quantity \begin{equation} \begin{split} D(t,\ensuremath{t_\mathrm{w}}\xspace;H) \equiv & \frac{1}{H} \left(M_\mathrm{FC}(0,\ensuremath{t_\mathrm{w}}\xspace+t)\right.\\ & \left.-\left[ M_\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace,t)+M_\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace,t) \right]\right) \; . \end{split} \end{equation} In Fig.~\ref{fig:superpositionDelta_vs_logt}, we compare the behavior of $D(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ as a function of time for different magnetic fields $H$ and waiting times $\ensuremath{t_\mathrm{w}}\xspace$.
For small magnetic fields, $H\leq 0.02$, the quantity $D(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ is consistently equal to zero [i.e., Eq.~\eqref{eq:super_positionM} holds]; for $H>0.02$, $D(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ has an inflection. This behavior suggests that the extended superposition principle is valid only when $H \to 0$. \begin{figure}[h] \centering \includegraphics[width = 1\columnwidth]{figures/fig4.pdf} \caption{Characterization of the violation of the extended superposition principle, Eq.~\eqref{eq:super_positionM}. The plot shows the temporal behavior of the quantity $D(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ for different magnetic fields $H$ and waiting times $\ensuremath{t_\mathrm{w}}\xspace$.} \label{fig:superpositionDelta_vs_logt} \end{figure} Thus, if Eq.~\eqref{eq:super_positionM} is only valid for $H \to 0$, we can hypothesize that $D(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ behaves as \begin{equation}\label{eq:superpostion_Delta_expantion} D(t,\ensuremath{t_\mathrm{w}}\xspace;H) = a_2(\ensuremath{t_\mathrm{w}}\xspace;T) H^2 + a_4(\ensuremath{t_\mathrm{w}}\xspace;T) H^4 + \mathcal{O}(H^6) \; , \end{equation} where the coefficients $a_2(\ensuremath{t_\mathrm{w}}\xspace;T)$ and $a_4(\ensuremath{t_\mathrm{w}}\xspace;T)$ are unknown functions and $\mathcal{O}(H^6)$ represents higher-order terms. To test Eq.~\eqref{eq:superpostion_Delta_expantion}, we address the temporal behavior of the rescaled quantity, $D(t,\ensuremath{t_\mathrm{w}}\xspace;H)\,\ensuremath{T_\mathrm{m}}\xspace^2/H^2$, in Fig.~\ref{fig:superpostionDelta_rescaledHT}. We have rescaled the quantity $D(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ by the temperature in order to compare data at different temperatures. \begin{figure}[h] \centering \includegraphics[width = 1\columnwidth]{figures/fig5.pdf} \caption{Non-linear dependence of the rescaled quantity $D(t,\ensuremath{t_\mathrm{w}}\xspace;H)\,\ensuremath{T_\mathrm{m}}\xspace^2/H^2$ as a function of time.
For clarity, we have omitted data at $H=0.01$ (errors are larger than 100\% for this field).} \label{fig:superpostionDelta_rescaledHT} \end{figure} In Fig.~\ref{fig:superpostionDelta_rescaledHT}, we analyze two aspects: \begin{itemize} \item For a given waiting time $\ensuremath{t_\mathrm{w}}\xspace$ [\emph{i.e.}, $\xi(\ensuremath{t_\mathrm{w}}\xspace)$], what is the effect of increasing the magnetic field, $H$? \item For a given value of the external magnetic field, $H=0.02$, what is the effect of changing the waiting time, $\ensuremath{t_\mathrm{w}}\xspace$ [\emph{i.e.}, $\xi(\ensuremath{t_\mathrm{w}}\xspace)$]? \end{itemize} The answer to the first question is straightforward: as Fig.~\ref{fig:superpostionDelta_rescaledHT} shows, increasing the magnetic field $H$ also increases the separation between the curves. We do not yet have a satisfactory answer to the second question. Runs at $T=1.0$, namely Run 4, Run 5, and Run 6 (the purple, orange, and red points) in Figs.~\ref{fig:superpositionDelta_vs_logt} and \ref{fig:superpostionDelta_rescaledHT}, follow almost the same curve even though they are characterized by very different correlation lengths (see Table~\ref{tab:details_num}). Thus, the violation of Eq.~\eqref{eq:super_positionM} is caused by the difference between the time developments of $\xi_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ and $\xi_\mathrm{ZFC}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$. The lack of a dependence on $\ensuremath{t_\mathrm{w}}\xspace$ in the $a_2(\ensuremath{t_\mathrm{w}}\xspace;T)$ and $a_4(\ensuremath{t_\mathrm{w}}\xspace;T)$ coefficients is consistent with the expectation that the only $\ensuremath{t_\mathrm{w}}\xspace$ dependence lies within the $\ensuremath{t_\mathrm{w}}\xspace$ dependence of the correlation lengths themselves. Otherwise, there would be a $\ensuremath{t_\mathrm{w}}\xspace$ dependence even in the $H^2 \to 0$ limit.
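As a concrete illustration of testing the expansion in Eq.~\eqref{eq:superpostion_Delta_expantion}, one can fit $D$ against $x=H^2$. The sketch below uses synthetic, noiseless data generated from assumed coefficients, not our measured ones:

```python
# Synthetic check of the expansion Eq. (superpostion_Delta_expantion):
# generate D(H) from ASSUMED coefficients, then recover them by a
# least-squares fit in the variable x = H^2 (no constant term: D(H=0)=0).
import numpy as np

a2_true, a4_true = -0.8, 5.0                  # hypothetical coefficients
H = np.array([0.005, 0.01, 0.02, 0.04, 0.08])
x = H ** 2
D_vals = a2_true * x + a4_true * x ** 2       # noiseless synthetic data

A = np.column_stack([x, x ** 2])              # design matrix for (a2, a4)
(a2_fit, a4_fit), *_ = np.linalg.lstsq(A, D_vals, rcond=None)
```

With real data one would, in addition, propagate the statistical errors of $D$ into the fitted coefficients.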
\subsection{\boldmath Evaluation of the relaxation function $S(t,\ensuremath{t_\mathrm{w}}\xspace;H)$} \label{subsec:relaxation_S} Exploiting the de-noising method introduced in Refs.~\cite{zhai-janus:20a,zhai-janus:21}, we calculate the numerical value for the relaxation function $S_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ \cite{joh:99} as \begin{equation}\label{eq:S_TRM_def_num} S_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)=-\frac{1}{H}{\frac {d\,M_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)}{d\,\log\,t}}\,. \end{equation} In Fig.~\ref{fig:relaxation_S_TRM}, we exhibit a typical set of results for $S_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$. \begin{figure}[h] \centering \includegraphics[width = 1\columnwidth]{figures/fig6.pdf} \caption{A typical set of simulated relaxation functions, $S_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$. The data are taken from Run 5, with $\ensuremath{T_\mathrm{m}}\xspace=1.0$ and $\ensuremath{t_\mathrm{w}}\xspace=2^{27.625}$. \textbf{Left:} $S_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ as a function of time. \textbf{Right:} $S_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ as a function of the temporal auto-correlation function $C(t,\ensuremath{t_\mathrm{w}}\xspace;H)$. The dashed line indicates the value of $C_\mathrm{peak}(\ensuremath{t_\mathrm{w}}\xspace)$ (see Table \ref{tab:details_num}).} \label{fig:relaxation_S_TRM} \end{figure} The simulation data strongly suggest that, when $H\rightarrow 0$, the temporal correlation function $C(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ approaches a constant value, $C_\mathrm{peak}(\ensuremath{t_\mathrm{w}}\xspace)$, at the maximum of the relaxation function $S(t,\ensuremath{t_\mathrm{w}}\xspace;H)$~\cite{zhai-janus:20a,zhai-janus:21}.
Hence, we define the time $\ensuremath{t^\mathrm{eff}}\xspace_H$ in our simulations as the time when $C(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ reaches the value $C_\mathrm{peak}(\ensuremath{t_\mathrm{w}}\xspace)$: \begin{equation}\label{eq:teff_def_num} C(\ensuremath{t^\mathrm{eff}}\xspace_H,\ensuremath{t_\mathrm{w}}\xspace;H)=C_\mathrm{peak}(\ensuremath{t_\mathrm{w}}\xspace)~. \end{equation} As shown in Fig.~\ref{fig:comparison_S_TRM_vs_ZFC}, this physical feature holds both for the ZFC and for the TRM protocols. In addition, as seen in Fig.~\ref{fig:comparison_S_TRM_vs_ZFC}, the relaxation functions $S(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ peak at the same value of $C_\mathrm{peak}(\ensuremath{t_\mathrm{w}}\xspace)$ in the two experimental protocols at small magnetic fields. This suggests that the highest free-energy barrier explored in both protocols is the same in the limit $H\rightarrow 0$. \begin{figure}[h] \centering \includegraphics[width = 1\columnwidth]{figures/fig7.pdf} \caption{Comparison between the TRM and the ZFC relaxation functions for Run 5, with $\ensuremath{T_\mathrm{m}}\xspace=1.0$ and $\ensuremath{t_\mathrm{w}}\xspace=2^{27.625}$. The empty points are for $S_\mathrm{TRM}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$, while the full dots are for the $S_\mathrm{ZFC}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$. The dashed line displays the value for $C_\mathrm{peak}(\ensuremath{t_\mathrm{w}}\xspace)$ (see Table \ref{tab:details_num}). The ZFC points are taken from Refs.~\cite{zhai-janus:20a,zhai-janus:21}.} \label{fig:comparison_S_TRM_vs_ZFC} \end{figure} In the following subsections, exploiting the behavior of the Hamming distance, we will show that the value of the effective time, $\ensuremath{t^\mathrm{eff}}\xspace_H$, is independent of the value of $C_\mathrm{peak}(\ensuremath{t_\mathrm{w}}\xspace)$, and we will unveil the physical meaning of Eq.~\eqref{eq:teff_def_num}.
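In practice, Eqs.~\eqref{eq:S_TRM_def_num} and \eqref{eq:teff_def_num} are evaluated on discretized $M(t)$ and $C(t)$ curves. A minimal sketch follows; the curves below are synthetic placeholders that merely mimic slow relaxations, not simulation data:

```python
# Sketch of evaluating S(t) [Eq. (S_TRM_def_num)] and t_eff [Eq. (teff_def_num)]
# on discretized curves. M(t) and C(t) are SYNTHETIC placeholders.
import numpy as np

H = 0.02
t = np.logspace(0, 8, 200)                    # times (in sweeps), log-spaced
M = 0.05 * H * (1.0 - 0.1 * np.log10(t))      # toy M_TRM(t): linear in log t
C = 0.9 * t ** (-0.05)                        # toy C(t, t_w): slow power law

# S = -(1/H) dM/dlog(t), by finite differences on the (uniform) log-time grid
S = -np.gradient(M, np.log10(t)) / H

# t_eff: time at which C(t) crosses C_peak; C decreases, so reverse for interp
C_peak = 0.515
t_eff = np.interp(C_peak, C[::-1], t[::-1])
```

For the toy logarithmic $M(t)$ the relaxation function is flat by construction; the simulated $S(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ instead exhibits the peak discussed above.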
\subsection{Hamming distance: Scaling} \label{subsec:Hd_connection_to_Ez_num} We extract the Hamming distance, or at least a surrogate of it, from our knowledge of the temporal auto-correlation function $C(t,\ensuremath{t_\mathrm{w}}\xspace;H)$: \begin{equation}\label{eq:Hd_def_num} {\text {Hd}}(t,\ensuremath{t_\mathrm{w}}\xspace;H)={\frac {1}{2}}\big[1-C(t,\ensuremath{t_\mathrm{w}}\xspace;H)\big]\,. \end{equation} A discussion of the connection between the above \emph{numerical} Hamming distance and the dynamics in the ultrametric tree of states is provided in Appendix~\ref{appendix:Hd-Cttw}. In Fig.~\ref{fig:Hd_growth_num}, we exhibit the behavior of the Hamming distance in Eq.~\eqref{eq:Hd_def_num}, $\mathrm{Hd}(t,\ensuremath{t_\mathrm{w}}\xspace;H=0)$, as a function of the rescaled time $\ensuremath{T_\mathrm{m}}\xspace\log_2(t)$ for $H=0$. Notice that the $\mathrm{Hd}$ curves depart from a simple scaling curve as soon as the time $t$ for each run reaches its respective $\ensuremath{t_\mathrm{w}}\xspace$. \begin{figure}[h] \centering \includegraphics[width = 1\columnwidth]{figures/fig8.pdf} \caption{Plot of the Hamming distance, $\mathrm{Hd}(t,\ensuremath{t_\mathrm{w}}\xspace;H=0)$, see Eq.~\eqref{eq:Hd_def_num}, as a function of the rescaled time, $\ensuremath{T_\mathrm{m}}\xspace\log_2(t)$. Notice the scaling for lower times.
Runs with the same $\ensuremath{T_\mathrm{m}}\xspace$ share the shorter time regime.} \label{fig:Hd_growth_num} \end{figure} If one displays $\log (t/\ensuremath{t^\mathrm{eff}}\xspace_H)$ as a function of $\mathrm{Hd}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$ at the two simulation temperatures, $\ensuremath{T_\mathrm{m}}\xspace=0.9$ and $\ensuremath{T_\mathrm{m}}\xspace=1.0$, a scaling behavior is apparent from Fig.~\ref{fig:Hd_scaling_num}, with \begin{equation}\label{eq:Hd_scaling_implication} \log \big\{t/\ensuremath{t^\mathrm{eff}}\xspace_H[C_\mathrm{peak}(\ensuremath{t_\mathrm{w}}\xspace)]\big\}=\mathcal{F} \big[C(t,\ensuremath{t_\mathrm{w}}\xspace;H),\ensuremath{t_\mathrm{w}}\xspace\big]~~. \end{equation} \begin{figure}[h] \centering \includegraphics[width = 1\columnwidth]{figures/fig9.pdf} \caption{Behavior of the rescaled time $\log(t/\ensuremath{t^\mathrm{eff}}\xspace_H)$ as a function of the Hamming distance $\mathrm{Hd}(t,\ensuremath{t_\mathrm{w}}\xspace;H)$. The empty circles represent the ZFC data, while the full triangles joined with lines represent the TRM data.} \label{fig:Hd_scaling_num} \end{figure} The determination of the precise value for $C_\mathrm{peak}(\ensuremath{t_\mathrm{w}}\xspace)$ is not crucial, because a change in $C_\mathrm{peak}(\ensuremath{t_\mathrm{w}}\xspace)$ shifts $\log \ensuremath{t^\mathrm{eff}}\xspace_H(C_\mathrm{peak})$ only by a constant that does not depend upon $H^2$.
Developing this important concept further, from Eq.~\eqref{eq:Hd_scaling_implication} we can write \begin{equation}\label{eq:H2_independence_of_Cpeak} \begin{split} \log \bigg[{\frac {\ensuremath{t^\mathrm{eff}}\xspace_H(C)}{t_{H}^{\text {eff}}(C_\mathrm{peak})}}\bigg]=\mathcal{F}(C,\ensuremath{t_\mathrm{w}}\xspace)=\log\bigg[{\frac {t_{H \to 0^+}^{\text {eff}}(C)}{t_{H \to 0^+}^{\text {eff}}(C_\mathrm{peak})}}\bigg]\\ \Rightarrow \log\bigg[{\frac {\ensuremath{t^\mathrm{eff}}\xspace_H(C)}{t_{H \to 0^+}^{\text {eff}}(C)}}\bigg]=\log\bigg[{\frac {\ensuremath{t^\mathrm{eff}}\xspace_H(C_\mathrm{peak})}{t_{H \to 0^+}^{\text {eff}}(C_\mathrm{peak})}}\bigg]~~, \end{split} \end{equation} implying that the value of the effective time, $\ensuremath{t^\mathrm{eff}}\xspace_H$, is {\it independent} of the value of $C_\mathrm{peak}(\ensuremath{t_\mathrm{w}}\xspace)$. \subsection{\boldmath Extraction of the effective response time $\ensuremath{t^\mathrm{eff}}\xspace_H$} \label{subsec:teff_extraction_num} We can extract the effective response time $\ensuremath{t^\mathrm{eff}}\xspace_H$ using Eq.~\eqref{eq:teff_def_num}. In Table \ref{tab:details_num} we list the values of $C_\mathrm{peak}(\ensuremath{t_\mathrm{w}}\xspace)$ for the ZFC protocol from Refs.~\cite{zhai-janus:20a,zhai-janus:21}. The results are displayed in Fig.~\ref{fig:log_teff_vs_H2_num}, along with those for the TRM protocol (see below). The data for $\log (\ensuremath{t^\mathrm{eff}}\xspace_H/ \ensuremath{t^\mathrm{eff}}\xspace_{H \to 0^+})$ are fitted by the function \begin{equation}\label{eq:fitting_teff_num} f(x)=c_2(\ensuremath{t_\mathrm{w}}\xspace;T)x+{\mathcal {O}}(x^2)~~, \end{equation} where $x=H^2$. In order to avoid the unphysical wild oscillations at large magnetic fields (recall that $H=1$ for the IEA model roughly corresponds to $5\times 10^4$~Oe in physical units~\cite{zhai-janus:20a}), we define a unique fitting range in the small-$x$ region, $x=H^2\in [0,0.0003]$.
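The restricted-range fit can be sketched as follows. The data are synthetic: the hypothetical $c_2$ is merely of the order of the entries in Table \ref{tab:ZFC_fit_num}, and an added large-$H$ term mimics the behavior that motivates the window $x \leq 0.0003$:

```python
# Sketch of the restricted-range fit of Eq. (fitting_teff_num).
# c2_true is HYPOTHETICAL, and the x^2 term is an artificial stand-in for
# the large-field behavior excluded from the fit.
import numpy as np

c2_true = -1.5e3
H = np.array([0.005, 0.01, 0.015, 0.02, 0.05, 0.1])
x = H ** 2
y = c2_true * x + 2.0e5 * x ** 2              # higher-order term spoils large x

fit_mask = x <= 0.0003                        # the fitting window from the text
# one-parameter least squares through the origin: c2 = sum(x*y)/sum(x^2)
c2_fit = np.sum(x[fit_mask] * y[fit_mask]) / np.sum(x[fit_mask] ** 2)
```

Restricting the window keeps the bias from the higher-order term at the few-percent level in this toy example.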
Our fitting parameters are displayed in Table \ref{tab:ZFC_fit_num} for the ZFC data, and Table \ref{tab:TRM_fit_num} for the TRM data. \begin{figure}[h] \centering \includegraphics[width = 1\columnwidth]{figures/fig10.pdf} \caption{The numerical ratio of $\log(\ensuremath{t^\mathrm{eff}}\xspace_H/\ensuremath{t^\mathrm{eff}}\xspace_{H\rightarrow 0^+})$ for the seven runs defined in Table \ref{tab:details_num} for both the ZFC and TRM protocols. The filled dots refer to the ZFC protocol; the empty squares to the TRM protocol. The coefficients of the $H^2$ fit, $c_2(\ensuremath{t_\mathrm{w}}\xspace;T)$ from Eq.~\eqref{eq:fitting_teff_num} are listed in Table \ref{tab:ZFC_fit_num} for the ZFC data, and Table \ref{tab:TRM_fit_num} for the TRM data. The continuous lines represent the fit to the ZFC data, while the dashed lines represent the fit to the TRM data. The ZFC data are the same as in Refs.~\cite{zhai-janus:20a,zhai-janus:21}.} \label{fig:log_teff_vs_H2_num} \end{figure} \begin{table}[!h] \begin{centering} \begin{ruledtabular} \begin{tabular}{c c c c} $T$&$\ensuremath{t_\mathrm{w}}\xspace$ &Coefficient & Numerical value\\ \hline\\[-5pt] 0.9&$2^{22}$&$c_2$&$-5.01(14)\times 10^2$\\ 0.9&$2^{26.5}$&$c_2$&$-1.54(2)\times 10^3$\\ 0.9&$2^{31.25}$&$c_2$&$-4.13(11)\times 10^3$\\ 0.9&$2^{34}$&$c_2$&$-6.78(13)\times 10^3$\\ &&&\\ 1.0&$2^{23.75}$&$c_2$&$-1.29(2)\times 10^3$\\ 1.0&$2^{27.625}$&$c_2$&$-3.25(3)\times 10^3$\\ 1.0&$2^{31.75}$&$c_2$&$-7.48(17)\times 10^3$ \end{tabular} \caption{Results for the fit to Eq.~\eqref{eq:fitting_teff_num} for the ZFC data for the time ratio $\log(\ensuremath{t^\mathrm{eff}}\xspace_H/\ensuremath{t^\mathrm{eff}}\xspace_{H\rightarrow 0^+})$. 
The fitting range is $0\leq H^2\leq0.0003$.} \label{tab:ZFC_fit_num} \end{ruledtabular} \end{centering} \end{table} \begin{table}[!h] \begin{centering} \begin{ruledtabular} \begin{tabular}{c c c c} $T$&$\ensuremath{t_\mathrm{w}}\xspace$ &Coefficient & Numerical value\\ \hline\\[-5pt] 0.9&$2^{22}$&$c_2$&$-6.77(11)\times 10^2$\\ 0.9&$2^{26.5}$&$c_2$&$-1.52(2)\times 10^3$\\ 0.9&$2^{31.25}$&$c_2$&$-3.60(14)\times 10^3$\\ 0.9&$2^{34}$&$c_2$&$-5.84(16)\times 10^3$\\ &&&\\ 1.0&$2^{23.75}$&$c_2$&$-1.06(1)\times 10^3$\\ 1.0&$2^{27.625}$&$c_2$&$-2.64(3)\times 10^3$\\ 1.0&$2^{31.75}$&$c_2$&$-5.65(22)\times 10^3$ \end{tabular} \caption{Results for the fit to Eq.~\eqref{eq:fitting_teff_num} for the TRM data for the time ratio $\log(\ensuremath{t^\mathrm{eff}}\xspace_H/\ensuremath{t^\mathrm{eff}}\xspace_{H\rightarrow 0^+})$. The fitting range is $0\leq H^2\leq0.0003$.} \label{tab:TRM_fit_num} \end{ruledtabular} \end{centering} \end{table} \section{Difference Between the ZFC and the TRM Protocols} \label{sec:diff_TRM_ZFC_num} This section forms the core of our numerical results, showing and characterizing the effect of an external magnetic field on a 3D spin glass. The scaling law, first introduced by Joh \emph{et al.} \cite{joh:99} and then developed in Refs.~\cite{zhai-janus:20a,zhai-janus:21}, solves a decades-old controversy concerning the nature of the Zeeman energy for describing the magnetic response both in experiments and simulations: \begin{equation}\label{eq:scaling_law_decay_teff} \begin{split} \log \bigg[{\frac {\ensuremath{t^\mathrm{eff}}\xspace_H}{t_{H\rightarrow 0^+}^{\text {eff}}}}\bigg] &={\frac {\hat{S}}{2T}}\xi(\ensuremath{t_\mathrm{w}}\xspace)^{D-(\theta/2)}H^2+\\ & \xi(\ensuremath{t_\mathrm{w}}\xspace)^{-\theta/2}{\mathcal G}\big(T,\xi(\ensuremath{t_\mathrm{w}}\xspace)^{D-(\theta/2)}H^2\big) \, .
\end{split} \end{equation} Here, $\xi(\ensuremath{t_\mathrm{w}}\xspace)$ is the microscopic correlation length, see Eq.~\eqref{eq:xi_micro_def} below, $\hat{S}$ is a constant from the Fluctuation-Dissipation Theorem (FDT), $D=3$ is the spatial dimension, and $\theta$ is the replicon exponent \cite{janus:17b}. The replicon exponent $\theta$ is a function of $\tilde{x}= \ell_\mathrm{J}(T)/\xi(\ensuremath{t_\mathrm{w}}\xspace)$, where $\ell_J(T)$ is the Josephson length \cite{janus:18,zhai:19}. For notational simplicity, we have omitted this functional dependence. In Fig.~\ref{fig:scaling_law_num}, we show that this scaling law holds for both the ZFC and TRM protocols. \begin{figure}[h] \centering \includegraphics[width = 1\columnwidth]{figures/fig11.pdf} \caption{The non-linear parts from the numerical response time data, $[\log (\ensuremath{t^\mathrm{eff}}\xspace_H/\ensuremath{t^\mathrm{eff}}\xspace_{H\rightarrow 0^+})-c_2(\ensuremath{t_\mathrm{w}}\xspace,T)H^2]\xi^{\theta({\tilde x})/2}$ plotted against $(\xi^{D-\theta({\tilde x})/2}H^2)^2$. The abscissa of the {\it main panel} is in a linear scale, showing an expanded view for small values of $(\xi^{D-\theta({\tilde x})/2}H^2)^2$. The abscissa of the {\it insert} is in a log scale in order to report all of our numerical data. The open squares refer to the ZFC data, while the filled squares refer to the TRM data.} \label{fig:scaling_law_num} \end{figure} One of the main differences between experiments and simulations is access to the microscopic spin configurations. Eq.~\eqref{eq:scaling_law_decay_teff} can be read as a bridge connecting the macroscopic observable, the effective time $\ensuremath{t^\mathrm{eff}}\xspace_H$, to the microscopic spin rearrangement described by $\xi_\mathrm{micro}$. Numerically, we have easy access to the spin configurations, enabling us to calculate the microscopic correlation length $\xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace;H)$ as follows.
Let us define the replicon propagator \cite{dealmeida:78,dedominicis:06}: \begin{equation}\label{eq:Gr_def} {\mathcal G}_{\text {R}}({\boldsymbol r},t,T)\!=\!\frac {1}{V}\!\sum_{\boldsymbol x} {\overline {(\langle s_{{\boldsymbol x},t}s_{{\boldsymbol x+ \boldsymbol r},t}\rangle_T - \langle s_{{\boldsymbol x},t}\rangle_T\langle s_{{\boldsymbol x+ \boldsymbol r},t}\rangle_T)^2}}\,. \end{equation} The replicon correlator ${\mathcal G}_{\text {R}}$ decays to zero in the long-distance limit. We therefore compute $\xi_{\text {micro}}(\ensuremath{t_\mathrm{w}}\xspace;H)$ by exploiting the integral estimators \cite{janus:08,janus:09}: \begin{equation}\label{eq:Ik_def} I_k(t;T)=\int_0^\infty dr\,r^k\,{\mathcal G}_{\text {R}}(r, t;T), \end{equation} where \begin{equation}\label{eq:xi_micro_def} \xi_{k,k+1}(t,T)={\frac {I_{k+1}(t,T)}{I_k(t,T)}}. \end{equation} $\xi_{12}(\ensuremath{t_\mathrm{w}}\xspace;T)$ is designated as the microscopic correlation length $\xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace;T)$. From the experimental point of view, Eq.~\eqref{eq:scaling_law_decay_teff} determines an \emph{effective correlation length}, $\xi_\mathrm{eff}(\ensuremath{t_\mathrm{w}}\xspace;H^2 \rightarrow 0)$; see Sec.~\ref{sec:difference_xi_in_exp} and below. We shall follow two approaches to establish the same result: ``the two experimental protocols are not equivalent, and the presence, or absence, of an external magnetic field in the \textit{thermal history} of a spin glass is not negligible''. On the one hand, we analyze the effect of the external magnetic field on the microscopic correlation length $\xi_\mathrm{micro}$ [directly accessible in simulations]; on the other hand, we follow the same experimental approach, see Sec.~\ref{sec:difference_xi_in_exp}, to evaluate the magnetic response through the lens of the effective time $\ensuremath{t^\mathrm{eff}}\xspace_H$.
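The integral estimators of Eqs.~\eqref{eq:Ik_def}-\eqref{eq:xi_micro_def} can be sketched numerically. For a toy exponential correlator ${\mathcal G}_\mathrm{R}(r)=e^{-r/\xi_0}$ one has $I_k=k!\,\xi_0^{k+1}$, so $\xi_{12}=2\xi_0$, which the discretized integrals should reproduce:

```python
# Sketch of the integral estimators: xi_12 = I2/I1 from a sampled correlator.
# The exponential G_R below is a TOY correlator, not simulation data.
import numpy as np

def trapezoid(f, x):
    """Composite trapezoidal rule (version-independent helper)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

xi0 = 11.7                                 # illustrative decay length
r = np.linspace(0.0, 60.0 * xi0, 200001)   # integrate far past the decay
G = np.exp(-r / xi0)                       # toy replicon correlator

I1 = trapezoid(r * G, r)                   # I_1 = int dr r G_R(r)
I2 = trapezoid(r ** 2 * G, r)              # I_2 = int dr r^2 G_R(r)
xi_12 = I2 / I1                            # the microscopic correlation length
```

In the actual analysis the correlator is measured on the lattice, is noisy at large $r$, and the long-distance tail must be handled with care, as discussed in the cited references.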
\subsection{\boldmath Numerical approach: the effect of an external magnetic field through the lens of the microscopic correlation length $\xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace)$} \label{subsec:num_approachZFC_TRM_xi_micro} In the zero-field-cooling protocol, the system is cooled to the working temperature $\ensuremath{T_\mathrm{m}}\xspace$ in the absence of an external magnetic field, which is then switched on after a waiting time $\ensuremath{t_\mathrm{w}}\xspace$. Thus, by definition, the ZFC protocol can be described by its microscopic correlation length, $\xi^\mathrm{ZFC}_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace,H=0)$. The thermoremanent protocol, conversely, brings the system to $\ensuremath{T_\mathrm{m}}\xspace$ in the presence of an external magnetic field. This implies that each run has its own $\xi^\mathrm{TRM}_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace;H)$ before $H$ is turned off. In Fig.~\ref{fig:xi_micro_limit}, we display the $H^2\rightarrow 0^+$ behavior of $\xi^\mathrm{TRM}_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace;H)$ in order to show the effect of an external magnetic field $H$. \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{figures/fig12.pdf} \caption{ Behavior of $\xi^\mathrm{TRM}_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace;H)$ as a function of $H^2 \rightarrow 0$.} \label{fig:xi_micro_limit} \end{figure} In Fig.~\ref{fig:diff_xi_micro_ZFC_TRM_vs_H2}, we then display the difference between the ZFC and TRM correlation lengths $\xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace;H)$, defined through Eqs.~\eqref{eq:Gr_def}-\eqref{eq:xi_micro_def}, against $H^2$. \begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{figures/fig13.pdf} \caption{The difference between the ZFC and TRM correlation lengths $\xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace;H)$, defined through Eqs.~\eqref{eq:Gr_def}-\eqref{eq:xi_micro_def}, is plotted against $H^2$.
The \textbf{main} plot is on a log-log scale; the \textbf{inset} uses a linear scale for the ordinate. The ordering of the different runs (see Table \ref{tab:details_num}) displays an increase of the difference with increasing $\ensuremath{t_\mathrm{w}}\xspace$.} \label{fig:diff_xi_micro_ZFC_TRM_vs_H2} \end{figure} As can be seen from Figs.~\ref{fig:xi_micro_limit}-\ref{fig:diff_xi_micro_ZFC_TRM_vs_H2}, $\xi^\mathrm{TRM}_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace;H)$ approaches the $H^2\rightarrow 0^+$ limit with a linear slope, and the difference between the ZFC and TRM correlation lengths appears to behave in the same way for the different runs (i.e., different values of $\ensuremath{t_\mathrm{w}}\xspace$). This enables us to extract a scaling law if the correlation length is rescaled as \begin{equation}\label{eq:xi_naive_scaling} \begin{split} 1- \xi^\mathrm{TRM}_\mathrm{micro}&(\ensuremath{t_\mathrm{w}}\xspace;H)/ \xi^\mathrm{ZFC}_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace;H=0) \\ = & A(\ensuremath{t_\mathrm{w}}\xspace,T)[\xi^\mathrm{ZFC}_\mathrm{micro}(H=0)]^{D-[\theta({\tilde x})/2]}H^2 \end{split} \end{equation} as demonstrated in Fig.~\ref{fig:naive_scaling_ximicro}. The mechanism behind the upward curvature of $\varDelta(\ensuremath{t_\mathrm{w}}\xspace)~\mathrm{vs}~\mathrm{Hd}$, as suggested in Ref.~\cite{lederman:91} and discussed in Sec.~\ref{sec:dependence_Delta_vs_Hd_exp} of this paper, requires that the difference $\xi_\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace;H)-\xi_\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace;H)$ increase with increasing waiting time $\ensuremath{t_\mathrm{w}}\xspace$. This is because $\mathrm{Hd}$ itself increases with $\ensuremath{t_\mathrm{w}}\xspace$, and hence the barrier-height difference between the ZFC and TRM protocols grows with $\ensuremath{t_\mathrm{w}}\xspace$.
\begin{figure}[h] \centering \includegraphics[width = 1\columnwidth]{figures/fig14.pdf} \caption{ The rescaled quantity $[\xi^\mathrm{ZFC}_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace)-\xi^\mathrm{TRM}_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace)]/\xi^\mathrm{ZFC}_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace)$ is plotted against $H^2[\xi^\mathrm{ZFC}_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace)]^{D-[\theta({\tilde x})/2]}$. \textbf{The main plot} is on a log-log scale; \textbf{the inset} uses a linear scale for the ordinate.} \label{fig:naive_scaling_ximicro} \end{figure} \subsection{Experimental approach: evaluation of the magnetic response through the effective times} \label{subsec:exp_approachZFC_TRM_teff} We now focus on the differences between the two protocols through the lens of the effective time, i.e., following the \textit{experimental} approach. From Eqs.~\eqref{eq:fitting_teff_num} and \eqref{eq:scaling_law_decay_teff}, the quantities $\log \ensuremath{t^\mathrm{eff}}\xspace_H(\mathrm{ZFC})$ and $\log \ensuremath{t^\mathrm{eff}}\xspace_H(\mathrm{TRM})$ can be written as \begin{equation}\label{eq:teff_ZFC_and_TR_explicit_dependence} \begin{split} &\log \ensuremath{t^\mathrm{eff}}\xspace_H(\mathrm{ZFC})=c_0^\mathrm{ZFC} - K^\mathrm{ZFC} \xi(\ensuremath{t_\mathrm{w}}\xspace)^{D-(\theta/2)}H^2 + \mathcal{O}(H^4)\\ &\log \ensuremath{t^\mathrm{eff}}\xspace_H(\mathrm{TRM})= c_0^\mathrm{TRM}-K^\mathrm{TRM} \xi(\ensuremath{t_\mathrm{w}}\xspace)^{D-(\theta/2)}H^2 +\mathcal{O}(H^4)\, \end{split} \end{equation} where we have substituted $c_2(\ensuremath{t_\mathrm{w}}\xspace,T) = K\,\xi(\ensuremath{t_\mathrm{w}}\xspace)^{D-(\theta/2)}$ and $\xi(\ensuremath{t_\mathrm{w}}\xspace)$ stands for $\xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace;H=0)$. By definition, the value of $\xi(\ensuremath{t_\mathrm{w}}\xspace;H=0)$ is the same for both ZFC and TRM.
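The cancellation that makes the protocol comparison possible can be checked in a few lines. In the sketch below, all parameter values are illustrative assumptions (they are not the fitted values of this work): we generate both lines of Eq.~\eqref{eq:teff_ZFC_and_TR_explicit_dependence} with a common $c_0$ and recover $K^\mathrm{ZFC}-K^\mathrm{TRM}$ from the slope of the rescaled difference in $H^2$:

```python
import numpy as np

# Illustrative values only (not the fitted parameters of this work).
D, theta = 3.0, 0.30            # dimension and replicon exponent
xi_tw = 8.0                     # xi(t_w; H=0), common to both protocols
K_zfc, K_trm = 1.00, 0.85       # barrier coefficients, K_zfc > K_trm
log_teff0 = 10.0                # common H -> 0+ limit (c0_zfc = c0_trm)

H2 = np.linspace(0.0, 0.02, 20)
scale = xi_tw ** (D - theta / 2.0)
log_teff_zfc = log_teff0 - K_zfc * scale * H2
log_teff_trm = log_teff0 - K_trm * scale * H2

# The c0 terms cancel in the difference; rescaling by xi^(D - theta/2)
# leaves a line through the origin with slope -(K_zfc - K_trm).
rescaled = (log_teff_zfc - log_teff_trm) / scale
slope = np.polyfit(H2, rescaled, 1)[0]
```

With noisy data, the same linear fit in $H^2$ yields the slope within its statistical error instead of exactly.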
Indeed, as will be shown below, $\xi(\ensuremath{t_\mathrm{w}}\xspace;H)$ differs between ZFC and TRM only by terms of order $H^2$. Taking the difference between the two expressions in Eq.~\eqref{eq:teff_ZFC_and_TR_explicit_dependence}, we obtain \begin{equation}\label{eq:difference_teff_ZFC_TRM} \begin{split} \log \ensuremath{t^\mathrm{eff}}\xspace_H(\mathrm{ZFC})-\log \ensuremath{t^\mathrm{eff}}\xspace_H(\mathrm{TRM})=[c_0^\mathrm{ZFC}-c_0^\mathrm{TRM}] \\ -[K^\mathrm{ZFC}-K^\mathrm{TRM}]\xi(\ensuremath{t_\mathrm{w}}\xspace)^{D-(\theta/2)}H^2+{\mathcal O}(H^4) \, . \end{split} \end{equation} By definition, $c_0^\mathrm{ZFC}=c_0^\mathrm{TRM}=\log \ensuremath{t^\mathrm{eff}}\xspace_{H\rightarrow 0^+}$, so that \begin{equation}\label{eq:diff_log_teff_vs_xi} \begin{split} &\big(\log \ensuremath{t^\mathrm{eff}}\xspace_H(\mathrm{ZFC})-\log \ensuremath{t^\mathrm{eff}}\xspace_H(\mathrm{TRM}) \big)/\xi(\ensuremath{t_\mathrm{w}}\xspace)^{D-(\theta/2)}\\ &=-\big(K^\mathrm{ZFC}-K^\mathrm{TRM}\big) H^2+ {\mathcal O}(H^4)\,. \end{split} \end{equation} Eq.~\eqref{eq:diff_log_teff_vs_xi} allows us to compare the two protocols directly, avoiding a precise determination of each of the coefficients $K^\mathrm{ZFC}$ and $K^\mathrm{TRM}$. For ease of notation, we define \begin{equation}\label{eq:diff_log_teff_notation} \log \ensuremath{t^\mathrm{eff}}\xspace_H(\mathrm{ZFC})-\log \ensuremath{t^\mathrm{eff}}\xspace_H(\mathrm{TRM})\equiv \delta\,\log \ensuremath{t^\mathrm{eff}}\xspace_H~~. \end{equation} We exhibit the rescaled quantity $T (\delta\,\log \ensuremath{t^\mathrm{eff}}\xspace_H)/[\xi(\ensuremath{t_\mathrm{w}}\xspace)]^{D- \theta/2}$ as a function of $H^2$ in Fig.~\ref{fig:diff_log_teff}. \begin{figure}[h] \centering \includegraphics[width = 1\columnwidth]{figures/fig15.pdf} \caption{The rescaled quantity $T(\delta\log \ensuremath{t^\mathrm{eff}}\xspace_H)/[\xi(\ensuremath{t_\mathrm{w}}\xspace)]^{D-\theta/2}$ as a function of $H^2$.
The various run details are listed in Table \ref{tab:details_num}. The correlation length $\xi$ is the replicon correlation length $\xi_\mathrm{micro}(t,\ensuremath{t_\mathrm{w}}\xspace;H=0)$, defined through Eqs.~\eqref{eq:Gr_def}-\eqref{eq:xi_micro_def}.} \label{fig:diff_log_teff} \end{figure} A scaling behavior is observed for sufficiently large $\xi(\ensuremath{t_\mathrm{w}}\xspace)$, as well as support for the principal relationship explored in this paper: \begin{equation}\label{eq:comparison_Kzfc_Ktrm} K^\mathrm{ZFC}> K^\mathrm{TRM}~~. \end{equation} This difference can be made quantitative by plotting the differences of the rescaled fitting coefficients, $\big(c_2^\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace,T)-c_2^\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace,T)\big)/\xi(\ensuremath{t_\mathrm{w}}\xspace)^{D-\theta/2}$, as a function of the waiting time; see Fig.~\ref{fig:diff_K_ZFC_TRM_vs_tw}. \begin{figure}[h] \centering \includegraphics[width = 1\columnwidth]{figures/fig16.pdf} \caption{The difference between the ZFC and TRM slopes of the decay of $\log (\ensuremath{t^\mathrm{eff}}\xspace_H/\ensuremath{t^\mathrm{eff}}\xspace_{H \to 0^+})$, as a function of the waiting time, $\ensuremath{t_\mathrm{w}}\xspace$.
By definition, $K^\mathrm{ZFC}-K^\mathrm{TRM} = [c_2^\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace,T)-c_2^\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace,T)]/\xi(\ensuremath{t_\mathrm{w}}\xspace)^{D-\theta/2}$, see Eq.~\eqref{eq:teff_ZFC_and_TR_explicit_dependence}.} \label{fig:diff_K_ZFC_TRM_vs_tw} \end{figure} \subsection{\boldmath Comparison between the microscopic correlation length $\xi_{\text {micro}}(\ensuremath{t_\mathrm{w}}\xspace;H)$ and the \textit{effective} correlation length $\xi_\mathrm{eff}(\ensuremath{t_\mathrm{w}}\xspace;H)$} \label{subsec:ximicro_vs_xieff} The analysis presented in this section rests on the numerical hypothesis that the microscopic correlation length, $\xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace)$, is equivalent to the \textit{effective} correlation length $\xi_\mathrm{eff}(\ensuremath{t_\mathrm{w}}\xspace)$ obtained experimentally through Eq.~\eqref{eq:scaling_law_decay_teff}. Thus, we test the relationship \begin{equation}\label{eq:xi_eff_is_xi_micro} \xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace;H=0) \simeq \xi_\mathrm{eff}(\ensuremath{t_\mathrm{w}}\xspace;H^2 \rightarrow 0) \; \end{equation} as follows. Let us consider separately the ZFC and TRM protocols.
The numerical data for $\log ( \ensuremath{t^\mathrm{eff}}\xspace_H/ \ensuremath{t^\mathrm{eff}}\xspace_{H \to 0^+}) $ are fitted in the small-$x$ region to the function \begin{equation} f(x)= c_2(\ensuremath{t_\mathrm{w}}\xspace;T) x + {\mathcal O}(x^2) \, , \end{equation} where $x \equiv H^2$ and the coefficient $c_2(\ensuremath{t_\mathrm{w}}\xspace;T)$ corresponds to: \begin{align}\label{eq:c2_dependence_K_ZFC} &c_2^\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace;T) = K^\mathrm{ZFC}\xi(\ensuremath{t_\mathrm{w}}\xspace)^{D-\theta(\tilde{x})/2} \\ \label{eq:c2_dependence_K_TRM}& c_2^\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace;T) = K^\mathrm{TRM}\xi(\ensuremath{t_\mathrm{w}}\xspace)^{D-\theta(\tilde{x})/2} \, \end{align} where $\xi(\ensuremath{t_\mathrm{w}}\xspace)$ is the $\xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace)$ of Eq.~\eqref{eq:xi_micro_def}, evaluated in the absence of a magnetic field. According to the scaling law introduced in Refs.~\cite{zhai-janus:20a,zhai-janus:21}, the coefficient $c_2^\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace;T)$ is: \begin{align}\label{eq:c2_explicit_dependence_ZFC} &c_2^\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace;T) = \left[ \frac{\hat{S}}{2\ensuremath{T_\mathrm{m}}\xspace} \right] \xi(\ensuremath{t_\mathrm{w}}\xspace)^{D-\theta(\tilde{x})/2} \\ \label{eq:K_ZFC_def}&\implies K^\mathrm{ZFC}= \frac{\hat{S}}{2\,\ensuremath{T_\mathrm{m}}\xspace} \,.
\end{align} Hence, using the fitting data from Table~\ref{tab:ZFC_fit_num}, we define a ZFC effective correlation length, $\xi_\mathrm{eff}^\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace;T)$, as follows. We write \begin{align}\label{eq:derivation_xi_eff_ZFC} \frac{c_2^\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace,\ensuremath{T_\mathrm{m}}\xspace)}{c_2^\mathrm{ZFC}( \ensuremath{t_\mathrm{w}}\xspace^*, \ensuremath{T_\mathrm{m}}\xspace)} = \frac{K^\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace)}{K^\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace^*)} \left[ \frac{\xi_\mathrm{eff}(\ensuremath{t_\mathrm{w}}\xspace,\ensuremath{T_\mathrm{m}}\xspace)}{\xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace^*;\ensuremath{T_\mathrm{m}}\xspace)} \right]^{D-\theta(\tilde{x})/2} \, , \end{align} where $K^\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace) = K^\mathrm{ZFC}$, see Eq.~\eqref{eq:K_ZFC_def}, and we have omitted the dependence of the replicon exponent $\theta(\tilde{x})$ on $\xi(\ensuremath{t_\mathrm{w}}\xspace)$.\\ Thus, we define $\xi_\mathrm{eff}^\mathrm{ZFC}(\ensuremath{t_\mathrm{w}}\xspace,\ensuremath{T_\mathrm{m}}\xspace)$ as: \begin{equation}\label{eq:xieff_def_num_ZFC} \xi_\mathrm{eff}^\mathrm{ZFC} (\ensuremath{t_\mathrm{w}}\xspace;\ensuremath{T_\mathrm{m}}\xspace) = \left[ \frac{c_2(\ensuremath{t_\mathrm{w}}\xspace,\ensuremath{T_\mathrm{m}}\xspace)}{c_2( \ensuremath{t_\mathrm{w}}\xspace^*, \ensuremath{T_\mathrm{m}}\xspace)}\right]^{1/(D- \theta(\tilde{x}) /2)} \hspace*{-.4cm} \xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace^*,\ensuremath{T_\mathrm{m}}\xspace) \, . \end{equation} The quantity $\xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace^*,\ensuremath{T_\mathrm{m}}\xspace)$ plays the role of a reference length, avoiding the need for a precise determination of the constants in Eq.~\eqref{eq:c2_explicit_dependence_ZFC}. Let us now focus on the TRM protocol.
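Once the $c_2$ coefficients have been fitted, applying Eq.~\eqref{eq:xieff_def_num_ZFC} is a one-line computation. The numbers below are made up for illustration (the real input comes from Table~\ref{tab:ZFC_fit_num}):

```python
import numpy as np

# Illustrative inputs (the actual values come from the ZFC fits).
D, theta = 3.0, 0.30
expo = D - theta / 2.0          # exponent D - theta(x~)/2
xi_micro_ref = 5.0              # xi_micro(t_w*, T_m): the reference length
c2_ref = 0.40                   # fitted c2 at the reference t_w*
c2 = 1.10                       # fitted c2 at the t_w of interest

# Eq. (xieff_def_num_ZFC): the ratio of c2's converts into a length ratio.
xi_eff = (c2 / c2_ref) ** (1.0 / expo) * xi_micro_ref
```

By construction, the ratio equals one at the reference time, so $\xi_\mathrm{eff}(\ensuremath{t_\mathrm{w}}\xspace^*)=\xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace^*)$ exactly.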
Following the results of Sec.~\ref{subsec:exp_approachZFC_TRM_teff}, the equivalence between the TRM and ZFC protocols does not hold, and we quantified the difference between the energy barriers of the TRM and ZFC protocols in Fig.~\ref{fig:diff_K_ZFC_TRM_vs_tw}. Let us formalize this difference as \begin{equation}\label{eq:formalization_diff_K} K^\mathrm{ZFC}-K^\mathrm{TRM} = \tilde{B}(\ensuremath{t_\mathrm{w}}\xspace) \; . \end{equation} Rearranging the above expression, we obtain an expression for $K^\mathrm{TRM}$: \begin{equation}\label{eq:K_TRM_expression} K^\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace)= K^\mathrm{ZFC}- \tilde{B}(\ensuremath{t_\mathrm{w}}\xspace). \end{equation} Hence, using the fitting data from Table~\ref{tab:TRM_fit_num}, we can rewrite Eq.~\eqref{eq:derivation_xi_eff_ZFC} for the TRM case as \begin{align}\label{eq:derivation_xi_eff_TRM} \frac{c_2^\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace,\ensuremath{T_\mathrm{m}}\xspace)}{c_2^\mathrm{TRM}( \ensuremath{t_\mathrm{w}}\xspace^*, \ensuremath{T_\mathrm{m}}\xspace)} = \left( \frac{K^\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace)}{K^\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace^*)} \right) \left[ \frac{\xi_\mathrm{eff}(\ensuremath{t_\mathrm{w}}\xspace,\ensuremath{T_\mathrm{m}}\xspace)}{\xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace^*;\ensuremath{T_\mathrm{m}}\xspace)} \right]^{D-\theta(\tilde{x})/2} \hspace*{-1.cm} , \end{align} where $K^\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace)$ has a weak dependence on the waiting time $\ensuremath{t_\mathrm{w}}\xspace$, see Fig.~\ref{fig:diff_K_ZFC_TRM_vs_tw} and Eq.~\eqref{eq:K_TRM_expression}.\\ Thus, we obtain: \begin{eqnarray}\label{eq:xieff_def_num_TRM_Istep} \xi_\mathrm{eff}^\mathrm{TRM} (\ensuremath{t_\mathrm{w}}\xspace;\ensuremath{T_\mathrm{m}}\xspace) &=& \left[ \frac{c_2^\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace,\ensuremath{T_\mathrm{m}}\xspace)}{c_2^\mathrm{TRM}( \ensuremath{t_\mathrm{w}}\xspace^*,
\ensuremath{T_\mathrm{m}}\xspace)} \frac{K^\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace^*)}{K^\mathrm{TRM}(\ensuremath{t_\mathrm{w}}\xspace)} \right]^{\frac{2}{2D- \theta(\tilde{x})}} \nonumber\\ &\times&\xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace^*,\ensuremath{T_\mathrm{m}}\xspace) \, . \end{eqnarray} In Fig.~\ref{fig:xieff_ximicro_vs_tw}, we report the comparison between $\xi_\mathrm{eff}(\ensuremath{t_\mathrm{w}}\xspace;H)$ and $\xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace;H)$ as a function of the waiting time $\ensuremath{t_\mathrm{w}}\xspace$ in both the ZFC and TRM cases. By definition, the reference $\ensuremath{t_\mathrm{w}}\xspace^*$ satisfies $\xi_\mathrm{eff}(\ensuremath{t_\mathrm{w}}\xspace^*,T) =\xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace^*,T)$ exactly. We used as references $\ensuremath{t_\mathrm{w}}\xspace^*=2^{34}$ at $\ensuremath{T_\mathrm{m}}\xspace=0.9$ and $\ensuremath{t_\mathrm{w}}\xspace^*=2^{31.75}$ at $\ensuremath{T_\mathrm{m}}\xspace=1.0$. \begin{figure}[h] \centering \includegraphics[width = 1\columnwidth]{figures/fig17.pdf} \caption{Comparison between $\xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace;H\!=\!0)$ and $\xi_\mathrm{eff}(\ensuremath{t_\mathrm{w}}\xspace;H^2 \!\rightarrow\! 0)$ as a function of the waiting time $\ensuremath{t_\mathrm{w}}\xspace$. By definition of $\xi_\mathrm{eff}(\ensuremath{t_\mathrm{w}}\xspace;H^2 \!\rightarrow\! 0)$, see Eqs.~\eqref{eq:xieff_def_num_ZFC} and \eqref{eq:xieff_def_num_TRM_Istep}, the reference $\ensuremath{t_\mathrm{w}}\xspace^*$ satisfies $\xi_\mathrm{eff}(\ensuremath{t_\mathrm{w}}\xspace^*,T) =\xi_\mathrm{micro}(\ensuremath{t_\mathrm{w}}\xspace^*,T)$ exactly.} \label{fig:xieff_ximicro_vs_tw} \end{figure} Due to limited statistics, we could simulate only a single sample for each case, so the errors account only for thermal fluctuations. Within these errors, the numerical ansatz of Eq.~\eqref{eq:xi_eff_is_xi_micro} is confirmed.
\section{Conclusion} \label{sec:conclusion} The fruitful collaboration between the experimental group at Austin and the Janus collaboration exhibits for the first time the consequences of the difference between the two relaxation protocols for spin glasses, ZFC and TRM. The power of the Janus II supercomputer allows us to extend simulation times and length scales to values explored experimentally. The use of single crystals of CuMn enables experiments to exhibit the consequences of very large spin-glass correlation lengths. Both of these ingredients were vital for unveiling the difference in the magnetic response of the two experimental protocols, which had been considered equivalent for more than three decades. This paper analyzes the effects of magnetic fields on spin-glass dynamics. The scaling law introduced in Refs.~\cite{zhai-janus:20a,zhai-janus:21} has played the role of a touchstone for evaluating the magnetic response of a 3D spin glass. We connected the difference between the ZFC and the TRM protocols to the dynamics of the Hamming distance through the reduction of the free-energy barriers. In Sec.~\ref{subsec:ximicro_vs_xieff}, we establish the equivalence between the experimental extraction of the correlation length through Eq.~\eqref{eq:scaling_law_decay_teff} and the microscopic calculation of $\xi$ through the replicon propagator $\mathcal{G}_\text{R}(\boldsymbol{r},\ensuremath{t_\mathrm{w}}\xspace,T)$. The unique and extraordinary collaboration between experiments, simulations, and theory has displayed once again its potential for the investigation of complex systems, such as the 3D spin glass. We look forward to continued investigation of spin-glass dynamics building on the results of this paper as we examine the microscopic nature of such phenomena as rejuvenation and memory. \section*{Acknowledgements} \addcontentsline{toc}{section}{Acknowledgements} We are grateful for helpful discussions with S. Swinnea about sample characterization.
This work was supported in part by the U.S. Department of Energy (DOE), Office of Science, Basic Energy Sciences, Materials Science and Engineering Division, under Award No. DE-SC0013599, and performed at the Ames Laboratory, which is operated for the U.S. DOE by Iowa State University under contract No. DE-AC02-07CH11358. We were partly funded as well by Ministerio de Econom\'ia, Industria y Competitividad (MINECO, Spain), Agencia Estatal de Investigaci\'on (AEI, Spain), and Fondo Europeo de Desarrollo Regional (FEDER, EU) through Grants No. PID2020-112936GB-I00, No. PID2019-103939RB-I00, No. PGC2018-094684-B-C21 and No. PGC2018-094684-B-C22, by the Junta de Extremadura (Spain) and Fondo Europeo de Desarrollo Regional (FEDER, EU) through Grants No. GR21014 and No. IB20079, and by the DGA-FSE (Diputaci\'on General de Arag\'on -- Fondo Social Europeo). This project has also received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant No. 694925-LotglasSy). DY was supported by the Chan Zuckerberg Biohub, and IGAP was supported by the Ministerio de Ciencia, Innovaci\'on y Universidades (MCIU, Spain) through FPU grant No. FPU18/02665. BS was supported by the Comunidad de Madrid and the Complutense University of Madrid (Spain) through the Atracci\'on de Talento program (Ref. 2019-T1/TIC-12776). JMG was supported by the Ministerio de Universidades and the European Union NextGeneration EU/PRTR through a 2021-2023 Margarita Salas grant. I.P. was a post-doctoral fellow at the Physics Department of Sapienza University of Rome during most of this work.
\section{Introduction} \label{sec:introduction} \IEEEPARstart{R}{elighting}, which aims to change the illumination settings of an image under given lighting conditions, has recently attracted widespread interest~\cite{sen2005dual, li2018learning, xu2018deep, zhou2019deep, sun2019single, nestmeyer2020learning, 2020AIM, 7792614, 9725240, 9785513, 7983410}. Its high practical value promotes its application across a variety of fields, including mobile imaging, augmented and virtual reality, post-processing image editing, and e-commerce product visualization. Thanks to the boom of deep learning~\cite{lecun2015deep, simonyan2014very, he2016deep}, deep relighting methods~\cite{sengupta2019neural, li2018learning, xu2018deep, Philip2019Multi, nestmeyer2020learning, wang2020single, qiu2020towards, yazdani2021physically, xu2018deep, zhou2019deep, sun2019single} have significantly accelerated the development of the field. With the assistance of the powerful representation ability of deep neural networks, it becomes possible for these methods to relight scenes under more ambiguous inputs and complicated environments. Most previous methods~\cite{xu2018deep, Philip2019Multi, nestmeyer2020learning, wang2020single, qiu2020towards, yazdani2021physically} inherit the overall framework of conventional methods, and some modified NeRFs~\cite{srinivasan2020nerv, niemeyer2021giraffe, schwarz2020graf} can also be utilized for relighting. However, these methods suffer from huge data requirements to fit a single scene and weak generalization ability. A new framework that eases the constraints on data and can be utilized in generalized scenarios is therefore desirable. Recently, Murmann{~\textit{et al.}~} proposed an indoor-scene multi-illumination dataset~\cite{murmann2019dataset} which can be used for real-scene relighting. They set up a new point of view in the field of relighting, which regards image relighting as an image-to-image translation task.
After Helou{~\textit{et al.}~} proposed a novel outdoor-scene synthetic dataset (\emph{i.e.,~}~VIDIT~\cite{helou2020vidit}) and organized the image relighting competitions in AIM 2020~\cite{2020AIM} and NTIRE 2021~\cite{helou2021ntire} based on it, this viewpoint attracted more attention. The viewpoint considers the unavailability of accurate lighting information in real-world applications, so only image pairs or triplets with depth information are provided for training, lacking specific illumination information. Since relighting is highly ill-posed when accurate illumination properties are unknown~\cite{ramamoorthi2001signal, aldrian2012inverse}, the task therefore becomes more challenging. For example, Ramamoorthi and Hanrahan~\cite{ramamoorthi2001signal} demonstrate that low-frequency texture and lighting effects are hard to distinguish in most situations. To tackle relighting under such an ill-posed condition, we draw inspiration from rendering frameworks. However, instead of following previous physics-based networks, which directly estimate rendering-related parameters under the supervision of laboriously designed losses, we intend to identify specific network structures and modules that correspond to the rendering process and to design a network intrinsically suitable for relighting, without supervision on intermediate parameters. We mainly establish links between our proposed network and typical ideas of conventional rendering in the following aspects: \textbf{1)} Hierarchical sampling strategy~\cite{levoy1990efficient}, which has shown its ability and efficiency in voxel rendering. \textbf{2)} Spherical harmonic lighting~\cite{green2003spherical}, which parameterizes the light source with bases of different frequencies. \textbf{3)} Physics-based rendering under spherical harmonic lighting~\cite{green2003spherical, imageworks2010physically}, which can be modeled without integrals and needs only multiplication.
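To make points \textbf{2)} and \textbf{3)} concrete: under the spherical harmonic lighting assumption, diffuse shading reduces to a per-coefficient multiplication between the light's SH coefficients and the cosine-weighted basis evaluated at the surface normal. The sketch below is a standard textbook construction kept to bands $l=0,1$; it illustrates the assumption and is not a module of our network:

```python
import numpy as np

# Real SH basis, bands l = 0, 1 (enough to illustrate the idea).
def sh_basis(n):
    x, y, z = n
    return np.array([0.282095,              # Y_00
                     0.488603 * y,          # Y_1,-1
                     0.488603 * z,          # Y_1,0
                     0.488603 * x])         # Y_1,1

# Clamped-cosine transfer coefficients: A_0 = pi, A_1 = 2*pi/3.
A = np.array([np.pi, 2 * np.pi / 3, 2 * np.pi / 3, 2 * np.pi / 3])

def irradiance(L_coeffs, normal):
    # Rendering under SH lighting is just a per-coefficient product.
    return np.dot(A * L_coeffs, sh_basis(normal))

# Sanity check: unit radiance from every direction has L_00 = 4*pi*Y_00
# (higher coefficients vanish), giving irradiance pi for any normal.
L_uniform = np.array([4 * np.pi * 0.282095, 0.0, 0.0, 0.0])
E = irradiance(L_uniform, np.array([0.0, 0.0, 1.0]))
```

No integral appears at shading time: once the light is projected onto the SH basis, evaluating the irradiance is a small dot product, which is the property our network design exploits.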
According to the above discussion, we propose an \textbf{I}llumination-\textbf{A}ware \textbf{N}etwork (\textbf{IAN}) for accurate deep image relighting in this paper. Specifically, as a \textbf{simulation of hierarchical sampling}, a pyramid-like architecture is deployed for progressively changing the lighting condition in an image from coarse to fine. In addition, inspired by the idea of the physics-based rendering model, we elaborately design an illumination-aware residual block in a two-branch structure. With the guidance of \textbf{spherical harmonic lighting}, which decouples light into components of different frequencies, we utilize convolutions with diverse dilation rates to obtain samples at different frequencies and design a statistical-coupled attention branch as an illumination descriptor extractor, which models light by diverse statistics. Another branch preserves local geometry- and reflectance-related information. Finally, the multiplication of the illumination descriptor and the geometry-related information implicitly serves as an approximation of \textbf{rendering under the spherical harmonic lighting assumption}. Besides, considering that depth information is available in many applications, we also introduce a depth-guided geometry encoder which takes the depth map, surface normal, and positional encoding as inputs, aiming to extract multi-scale geometry and structure information to assist the relighting task. We evaluate the proposed network on the VIDIT~\cite{helou2020vidit} dataset with the absence and the presence of depth information, which correspond to the settings of the AIM 2020~\cite{2020AIM} and NTIRE 2021~\cite{helou2021ntire} challenges, respectively. Our proposed method outperforms all comparison methods, including the champion solutions of AIM 2020~\cite{2020AIM} and NTIRE 2021~\cite{helou2021ntire}, both quantitatively and qualitatively.
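As a heavily simplified sketch of the multiplication at the heart of the proposed IARB (the actual block uses dilated convolutions and a learned statistical-coupled attention branch; the global mean/std gate below is only an assumed stand-in), the two-branch idea can be written as:

```python
import numpy as np

def illumination_aware_block(feat):
    """Simplified sketch of the two-branch idea behind the IARB.

    `feat` is a (C, H, W) feature map. One branch builds a per-channel
    illumination descriptor from global statistics (a stand-in for the
    statistical-coupled attention branch); the other branch keeps the
    local, geometry-related features. Their product mimics the
    multiplicative rendering step; a residual connection is kept.
    """
    c = feat.shape[0]
    mean = feat.reshape(c, -1).mean(axis=1)
    std = feat.reshape(c, -1).std(axis=1)
    # Descriptor: a bounded gate built from the coupled statistics.
    gate = 1.0 / (1.0 + np.exp(-(mean + std)))      # sigmoid, shape (C,)
    return feat + gate[:, None, None] * feat

out = illumination_aware_block(np.zeros((4, 8, 8)))
```

The key design choice this sketch illustrates is that the illumination branch produces a spatially global descriptor, so lighting is modulated channel-wise while local geometric detail passes through the identity path untouched.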
Besides, we also perform evaluations on the Adobe Multi-Illumination dataset, which contains real indoor scenes, and our results still obtain the best performance in comparison with other methods. We then apply our method to a portrait relighting dataset (\emph{i.e.,~} the DPR dataset~\cite{zhou2019deep}) and our method surpasses previous methods by a large margin, demonstrating the superiority and robustness of our proposed method. In summary, our contribution is three-fold: \begin{itemize} \item We design an illumination-aware network (IAN) which implicitly inherits the idea of physics-based rendering to perform image relighting. Through extensive experiments, we show that our proposed method achieves better performance than other methods while maintaining promising computational efficiency. \item We propose an illumination-aware residual block (IARB) which implicitly conducts the rendering process and is suitable for the relighting task. \item We introduce a depth-guided geometry encoder to fully extract geometry- and structure-related features from additional information (\emph{e.g.,~} depth, normal, and linear positional encoding). These features assist the network to obtain favorable relighting results. \end{itemize} \section{Related Works} Numerous image relighting methods have been proposed in the literature. Based on the usage of convolutional neural networks, we divide these methods into two groups: conventional physics-based methods and deep network based methods. \subsection{Conventional physics-based methods} Conventional physics-based methods focus on building explicit assumptions and models to approximate the effects of illumination in reality efficiently. These assumptions~\cite{basri2003lambertian} greatly reduce the dimensionality of the light transport function to a low-dimensional subspace~\cite{basri2003lambertian, belhumeur1998set, ramamoorthi2001relationship} in order to ease the difficulties in calculation.
Though the dimensionality of the light transport function is greatly reduced, fitting a reasonable function still needs hundreds of images of a scene obtained by brute-force searching~\cite{debevec2000acquiring, sen2005dual}. Subsequently, to reduce the number of samples required to perform relighting, some works~\cite{wang2009kernel, reddy2012frequency} take advantage of the local coherence of the light transport matrix in lower dimensions~\cite{malzbender2001polynomial} and others~\cite{karsch2011rendering} involve human interaction. Decomposing rendering-related factors from given images of a scene~\cite{karsch2011rendering, duchene2015multi, zhang2016emptying} is a widely used strategy in relighting, known as inverse rendering. Geometry, material, and illumination are usually predicted separately at first. By directly controlling these explicit factors, these methods can re-render a given scene and obtain relighting results of good quality. However, these methods require a complex calibration process, huge computational costs, storage resources, and even specialized hardware (\emph{e.g.,~} panorama cameras and drones). Rendering-related factors (\emph{e.g.,~} geometry, surface reflectance, and environmental illumination) are either estimated by a complicated system or measured by specific equipment. The former leads to the accumulation of errors in the whole process and the latter limits the general applicability of these methods. Besides, numerous input images with strict constraints are needed to fit a model for a single scene. In contrast, our method is data-driven, and the training data are easy to acquire and access. Once the model is trained, only a single image is needed to perform relighting, and the pretrained model can be generalized to diverse scenes.
\subsection{Deep network based methods} Recently, deep neural networks~\cite{krizhevsky2012imagenet,he2016deep} have shown their potential for illumination-related manipulation~\cite{ding2019argan, nagano2019deep,zhang2020portrait,zhang2020copy}, which promotes the development of deep network based methods for the relighting task. There are mainly three viewpoints from which to design relighting networks, namely the physics-based viewpoint, the neural radiance field viewpoint, and the image-to-image translation viewpoint. \subsubsection{\textbf{Physics-based viewpoint}} Physics-based neural networks derive from conventional physics-based methods, replacing parts of the original system with neural networks. Owing to the strong representation ability of neural networks, Ren \textit{et al.}~\cite{ren2015image} and Xu \textit{et al.}~\cite{xu2018deep} simplify the estimation process of the light transport function with sparse samples. Inspired by the idea of decomposition, some methods~\cite{Philip2019Multi, nestmeyer2020learning, wang2020single, qiu2020towards, yu2020self} employed different networks, guided by corresponding losses, to factor an image into multiple components, including albedo, normal, shading, \emph{etc}. To avoid the accumulation of errors in the decomposition and re-rendering procedure, some concurrent methods only insert lighting priors (\emph{i.e.,~} spherical harmonics~\cite{xu2018deep, zhou2019deep} and environment maps~\cite{sun2019single}) into the network to directly obtain the desired relighting results. These methods also inherit the disadvantages of conventional physics-based methods. For instance, multiple calibrated images and lighting priors are expensive and laborious to acquire and usually absent in real-world applications. Though the accumulation of errors is eased by the robustness of neural networks, it remains an open problem.
\subsubsection{\textbf{Neural radiance field viewpoint}} Neural radiance fields directly construct continuous representations of scenes, parameterized as a basic MLP network. Since Mildenhall \textit{et al.} proposed NeRF~\cite{mildenhall2020nerf}, numerous works have attempted to optimize it. Some works~\cite{srinivasan2020nerv, martin2021nerf} improve the abilities or extend the application scenarios of the vanilla NeRF~\cite{mildenhall2020nerf}. NeRV~\cite{srinivasan2020nerv} enhances the ability to recover relightable 3D scene representations, and NeRF in the wild~\cite{martin2021nerf} succeeds in modeling ubiquitous, real-world phenomena in uncontrolled images. Owing to the representation ability of GANs, GIRAFFE~\cite{niemeyer2021giraffe} and GRAF~\cite{schwarz2020graf} make scenes editable. These successive works make it possible to manipulate the lighting condition in a scene, which can be used for relighting. The major disadvantage of these methods is their limited generalization ability. One pretrained model can only take effect on a single scene. Besides, their inputs contain 3D camera pose and position information, which is inaccessible in some scenarios. On the contrary, our method needs no real-world position information and can take effect on various scenes. Our method also benefits from few parameters and low computational cost. \subsubsection{\textbf{Image-to-image translation viewpoint}} Murmann{~\textit{et al.}~}~\cite{murmann2019dataset} attempted to discard explicit graphics priors and regarded image relighting as an image-to-image translation problem. With similar settings, Helou{~\textit{et al.}~} proposed a virtual image dataset~\cite{helou2020vidit} for illumination transfer and held image relighting competitions (\emph{i.e.,~} AIM 2020~\cite{2020AIM} and NTIRE 2021~\cite{helou2021ntire}). The competitions mainly include two tracks, named one-to-one relighting and any-to-any relighting.
These proposed datasets and competitions motivate researchers to rethink the relighting task from a brand-new viewpoint, namely the image-to-image translation viewpoint. Some works involve and adjust existing modules or networks that have shown their representation abilities in other fields. Puthussery{~\textit{et al.}~}, winners of AIM 2020, proposed WDRN~\cite{puthussery2020wdrn}, which employs the wavelet transformation for efficient multi-scale representations. Paul{~\textit{et al.}~}~\cite{gafton20202d} applied pix2pix~\cite{isola2017image} to their framework and utilized adversarial learning to further improve the quality of the generated images. Yang{~\textit{et al.}~}~\cite{yang2021multi} took the corresponding depth map into consideration and designed a depth-guided relighting network based on an RGB-D saliency detection method~\cite{pang2020hierarchical}.
%
This type of method is easier to train than those from the other two viewpoints, for it relieves the constraints exerted on the input. These methods consider relighting in general cases, which makes general applications possible. However, existing methods underperform those from the above-mentioned two viewpoints, and how to fully explore their ability remains an open problem. Our work follows the image-to-image translation viewpoint and extends its performance boundary. Besides, we implicitly blend in physics-based ideas and design modules intrinsically suitable for the relighting task. \begin{figure*}[ht] \centering \includegraphics[width=0.95\textwidth]{figs/Overview_new-crop.pdf} \caption{ Overview of the proposed illumination-aware network (IAN). The proposed network is designed as a pyramid-like structure and processes images in a coarse-to-fine manner. Geometry- and structure-related information is provided by the depth-guided geometry encoder, which will be demonstrated in \secref{ssec:aux_enc}. In this figure, ICSC, ILSC and CLSC denote image content, intra-level, and cross-level skip connections, respectively.
IARB represents the proposed illumination-aware residual block, which will be shown in \secref{ssec:res}. The output of the $l$-th level is up-sampled by bicubic interpolation and concatenated with the unprocessed image as the input of the $(l-1)$-th level. } \label{fig:overview} \vspace{-5mm} \end{figure*} \section{Approach} We define $I_{in}$ as the input image under a pre-defined illumination condition and $I_{gt}$ as the ground truth under the desirable illumination condition. Single image relighting aims to translate the input image $I_{in}$ into another image $I_{out}$ whose illumination condition is similar to that of $I_{gt}$, through a relighting network $G$. Since the relighting task commonly does not take substantial environmental changes into account, the above images are from the same scene unless otherwise specified. In terms of task definition, we partially follow the AIM 2020~\cite{2020AIM} and NTIRE 2021~\cite{helou2021ntire} competitions, which divide the relighting task into two cases, namely one-to-one relighting and any-to-any relighting. We firstly focus on the former case in the competitions~\cite{2020AIM, helou2021ntire}, where the input and target illuminant settings are pre-determined and fixed for all scenes. In this setting, the output image is formulated as $I_{out} = G(I_{in}, g_{opt})$, where $I_{in}$ is the input and $g_{opt}$ is some optional guidance. Besides, the any-to-any relighting setting in the competitions~\cite{2020AIM, helou2021ntire} has only 40 pre-defined illumination conditions and cannot fully validate the ability of our model in arbitrary illumination relighting. So we consider a more generalized setting that contains a continuous space of illumination and extend our method to this condition.
An additional target light prior $l_p$ is needed, and the output is formulated as $I_{out} = G(I_{in}, g_{opt}, l_{p})$. In this section, we present an illumination-aware network (IAN) which enables high-resolution relighting image generation for arbitrary scenes. Specifically, we propose a pyramid-like network architecture (see \secref{ssec:pyr}) along with a residual learning strategy. This architecture progressively manipulates the effects of light in order to generate relit images with fine-grained details and global consistency. Besides, an illumination-aware residual block (IARB) (see \secref{ssec:res}) is proposed to parameterize the attributes of the light source and to leverage the extracted illumination descriptor for an implicit rendering process. To further utilize depth information, which can be estimated from RGB images or captured by advanced sensors, we propose a depth-guided geometry encoder (DGGE) (see \secref{ssec:aux_enc}) shared among levels. \subsection{Illumination-aware Network} \label{ssec:pyr} As the main body of our network (see Fig.~\ref{fig:overview}), a pyramid-like network architecture~\cite{zhang2019deep, das2020fast} utilizes multi-scale information from input images and conducts image relighting in a coarse-to-fine manner. We introduce this architecture in this section. For the relighting task, owing to the diverse effects of light which entangle with scene attributes, complex information ranging from low-level features (\emph{e.g.,~} texture and edge) to high-level features (\emph{e.g.,~} object class) is needed and accounts for the final relighting results. To capture information at various semantic levels, we utilize a U-Net~\cite{ronneberger2015u} like structure, which has proven effective in numerous previous works~\cite{zhou2019deep, sun2019single, puthussery2020wdrn}. However, a single U-Net is insufficient to tackle the relighting task.
Firstly, due to the highly ill-posed property of relighting and the diverse effects of light across scales, achieving high-quality relighting results in a single step is too complicated for a single U-Net. Besides, since the resolution of an input image is high and a single-level network has a limited receptive field, such a network only concentrates on local features and ignores global cues. Since local features are easily affected by the material and color of the object, it is hard to extract intrinsic attributes of light from them. Consequently, we observed that the results from the vanilla U-Net~\cite{ronneberger2015u} are trapped in local minima due to its inability to capture global light information. Considering the above-mentioned difficulties, we resort to the traditional rendering framework for inspiration. In the field of voxel rendering, the hierarchical sampling strategy~\cite{levoy1990efficient} is designed to tackle similar problems. This strategy arranges voxel rendering in a progressive manner to fulfill the process effectively and efficiently. As previous works revealed~\cite{yazdani2021physically, chen2020neural}, the relighting task can be seen as a re-rendering process, so the hierarchical sampling strategy intrinsically benefits it. Moreover, humans tend to focus on overall low-frequency and tone changes before they take local structures into account, which suggests a global-to-local architecture. Motivated by the above rendering strategy and human preference, we further extend U-Net~\cite{ronneberger2015u} to a pyramid-like architecture. Compared with the previous one, the receptive field of this network is tremendously enlarged and is sufficient to capture global information. This design eases the difficulty of the task assigned to each pyramid level and promotes the quality of the final relighting results. This pyramid-like architecture has $3$ levels in total, denoted as $G_0$, $G_1$, $G_2$ from bottom to top.
The full resolution input image and the $2\times/4\times$ bicubic down-sampled ones are denoted as $I_{in}$, $I_{in}^{\downarrow 2}$ and $I_{in}^{\downarrow 4}$, respectively. The outputs of $G_0$, $G_1$, $G_2$ are $I_{out_{0}}$, $I_{out_{1}}$ and $I_{out_{2}}$. $G_0$ takes $I_{in}$ and $I_{out_{1}}^{\uparrow 2}$, the $2\times$ bicubic up-sampled output of $G_1$, as input. $G_1$ takes $I_{in}^{\downarrow 2}$ and $I_{out_{2}}^{\uparrow 2}$, the $2\times$ bicubic up-sampled output of $G_2$, as input. $G_{2}$ only takes $I_{in}^{\downarrow 4}$ as input, for no previous output is available. For each level, an encoder down-samples features twice and a decoder up-samples them correspondingly, which resembles a U-Net~\cite{ronneberger2015u}. A bottleneck comprised of 4 illumination-aware residual blocks bridges the encoder and the decoder. We detail this residual block in Sec.~\ref{ssec:res}. To alleviate the loss of illumination-invariant information during encoding, we utilize intra-level skip connections (ILSC) as: \begin{equation} D_i = D_i + E_i, \end{equation} where $D_i$ and $E_i$ denote the features from the decoder and the encoder at the same pyramid level, respectively, and $i$ represents the number of down-sampling operations at the current level. Since all features involved in the above equation are in the same pyramid level, we omit the superscript which denotes the pyramid level. As higher levels have already modeled global illumination changes, we preserve them when modeling local influences of light. Besides, every level is similarly assigned to relight the image in a residual manner, so information is intrinsically shared among levels. It is more reasonable to refine the features of the previous level instead of encoding brand-new features at each level. To this end, we introduce a cross-level skip connection (CLSC).
This skip connection directly feeds information from a smaller scale to a larger one at the decoder side in order to reinforce scale-specific information learning at the current resolution. This strategy also contributes to decoupling light effects at different scales, for the upper level only needs to take charge of modeling global information, regardless of the local details that lower levels take charge of. The CLSC can be formulated as: \begin{equation} D^l_i = D^l_i + [D^{l+1}_i]_{\uparrow 2}, \end{equation} where $l$ represents the level in the pyramid-like structure, and $[D^{l+1}_i]_{\uparrow 2}$ represents the $2\times$ bilinearly up-sampled features from the previous level. Although the CLSC succeeds in preserving illumination-invariant information, detailed textural information is hard to fully reconstruct during decoding. As the domains of the inputs and outputs of the network are consistent, differing from the vanilla U-Net~\cite{ronneberger2015u} designed for segmentation, an image content skip connection (ICSC) is used to directly deliver input images to the output side of the network for retaining fine-grained textural details. Besides, the CLSC makes decoders aware of cross-scale information, while encoders are blind to it. To extract the most beneficial features at the encoder side, the up-sampled output from the previous level is taken as part of the input, which helps encoders be aware of cross-scale information as well. Eventually, the entire pipeline is formulated as: \begin{equation} \begin{aligned} I_{out_{l}} &= \left\{ \begin{aligned} &G_{l}(I_{in}^{\downarrow 2^{l}}) + I_{in}^{\downarrow 2^{l}} & l=2 \\ &G_l(\mathbf{cat}(I_{in}^{\downarrow 2^l}, I_{out_{l+1}}^{\uparrow 2})) + I_{in}^{\downarrow 2^l} & otherwise \end{aligned} \right. \end{aligned} \end{equation} where $\mathbf{cat}$ represents a concatenation operator.
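The coarse-to-fine pipeline above can be sketched as follows. This is a minimal NumPy illustration of the data flow only: the placeholder level networks return zero residuals, and nearest-neighbor resampling stands in for the bicubic operators, so it is not the actual IAN implementation.

```python
import numpy as np

def downsample(img, factor):
    # Nearest-neighbor stand-in for the bicubic down-sampling in the paper.
    return img[::factor, ::factor]

def upsample2(img):
    # Nearest-neighbor stand-in for 2x bicubic up-sampling.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def zero_residual_level(x):
    # Placeholder for a level network G_l; a real G_l is a U-Net predicting
    # a residual. Returning zeros keeps the pipeline testable.
    return np.zeros_like(x[..., :3])

def pyramid_relight(i_in, levels=(zero_residual_level,) * 3):
    """Coarse-to-fine pipeline: I_out_l = G_l(cat(...)) + I_in^{down 2^l}."""
    g0, g1, g2 = levels
    i2 = downsample(i_in, 2)   # I_in^{down 2}
    i4 = downsample(i_in, 4)   # I_in^{down 4}
    out2 = g2(i4) + i4                                         # top level
    out1 = g1(np.concatenate([i2, upsample2(out2)], -1)) + i2  # middle level
    out0 = g0(np.concatenate([i_in, upsample2(out1)], -1)) + i_in
    return out0, out1, out2
```

With zero-residual levels, the pipeline reduces to the identity, which makes the residual formulation easy to verify at each scale.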
\subsection{Illumination-aware residual block} \label{ssec:res} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{figs/ModifiedResBlock-crop.pdf} \caption{Structure of the proposed illumination-aware residual block (IARB). This delicate block implicitly embeds the rendering process and is able to utilize various statistics under diverse sampling frequencies to obtain an accurate light descriptor. Along with a target light projector, IARB further enables the manipulation of the light condition.} \label{fig:res} \vspace{-5mm} \end{figure} The physics-based rendering model, which has shown its strong ability in computer graphics, is defined as: \begin{equation} \label{equ:brdf} I(x,\omega_o)=\int_{\Omega}f(x,\omega_i,\omega_o)L(x,\omega_i)n\cdot\omega_id\omega_i, \end{equation} where $f(x,\omega_i,\omega_o)$ denotes the Bidirectional Reflectance Distribution Function, $L(x,\omega_i)$ is the radiance that arrives at the point $x$ from the incoming direction $\omega_i$, $n$ is the surface normal at the current position, and $\omega_o$ is the outgoing direction. Inspired by this, we attempt to design a module to implicitly embed this process into our network, as shown in \figref{fig:res}. Due to the difficulty of directly calculating the integral in \eqref{equ:brdf}, we utilize spherical harmonic lighting to approximate the original light condition. We replace the integral and $L(x,\omega_i)$ with $c_j \odot Y_j$, where $Y_j$ is the $j$-th spherical harmonic basis and $c_j$ is its coefficient. Finally, \eqref{equ:brdf} is rewritten as \eqref{equ:brdf_sim}: \begin{equation} \label{equ:brdf_sim} I(x,\omega_o)= \sum_j \underbrace{[f(x,\omega_i,\omega_o) \odot (n\cdot\omega_i)]}_{surface~attributes} \odot \underbrace{[c_j \odot Y_j]}_{light}, \end{equation} where $\odot$ represents the element-wise scalar product and $\cdot$ is the dot product. \eqref{equ:brdf_sim} can be divided into two parts.
The first part, $f(x,\omega_i,\omega_o) \odot (n\cdot\omega_i)$, is mainly related to the geometry and texture attributes of objects. The second part, $c_j \odot Y_j$, describes the attributes of light. Corresponding to the two parts of~\eqref{equ:brdf_sim}, we hope to design a module with two abilities accordingly. One is the illumination-aware ability. We expect this module to extract a credible spatially-invariant illumination descriptor which implicitly represents a specific group of spherical harmonic coefficients. This descriptor is then projected into a desirable descriptor which offsets the effects of the old illumination and exerts the influence of the new one. The other is the geometry-maintenance ability. This module should preserve local surface features which contain textures, surface normals, and positional information. We first describe how to design a module that can extract the light descriptor under the spherical harmonic lighting assumption, which represents the light by several bases in a different domain. The coefficients of the bases in spherical harmonic lighting are calculated as~\eqref{eq:sh}: \begin{equation} c_j = \sum_\omega^\Omega{F_{light}(\omega)\cdot Y_j}, \label{eq:sh} \end{equation} where $F_{light}(\omega)$ is the function which represents the global illumination and $\Omega$ is the set of frequencies. Noting that convolution in the spatial domain is multiplication in the frequency domain, we intuitively utilize kernels with diverse dilation rates, which bring diverse sampling rates, as different bases in spherical harmonic lighting. In practice, we utilize dilation rates ranging from $1$ to $3$ and stack several modules to simulate dilation rates in a larger range. The outputs of the dilated convolutions are formulated as: \begin{equation} R_{ori} = \{R_{d0}, R_{d1}, R_{d2}\}.
\end{equation} Besides, the concatenation and the linear projection can be seen as a generalized case of summation (the summation can be written as $sum=\mathbf{w^T} f$ when $\mathbf{w^T}=\mathbf{1^T}/N$, where $f$ is the concatenated feature and $N$ is the dimension of $f$). Thus, we replace the summation with the concatenation and the linear projection, and we convert the convolution in the spatial domain to the multiplication in the frequency domain to build the connection between the proposed illumination-aware residual block (IARB) and~\eqref{eq:sh}. The approximation of~\eqref{eq:sh} in our IARB can be formulated as: \begin{equation} Desc_\mu = \mathbf{Linear}(\mathbf{cat}(f_k \ast f_f)) \xrightarrow{\mathscr{F}} \sum_k^{k\in\mathscr{K}} [w_k \cdot F_f(\omega)]\cdot F_k(\omega), \label{eq:conv_sum} \end{equation} where $F_k(\omega)$ and $F_f(\omega)$ are the representations of the convolution kernels and the features in the frequency domain, $f_k$ and $f_f$ are their representations in the spatial domain ($f_k \ast f_f \in R_{ori}$), $w_k$ is the weight in the linear projection for the $k$-th kernel, $\mathscr{K}$ is the set of kernels, $\mathscr{F}$ denotes the Fourier transformation, and $\ast$ is the convolution operation. Now we obtain the first item, $Desc_\mu$, to describe the attributes of the light source. Besides, we also introduce $Desc_\sigma$, the standard deviation of the features, to measure non-linear relations for a better model of lighting. Then, considering the invariance of the illumination condition in a scene, we conduct a global average pooling before the linear projection to obtain a 1D feature, diminishing influences caused by spatial positions.
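The global pooling and linear projection described above can be sketched as follows. This is a simplified NumPy illustration under the paper's formulation; `w_mu` and `w_sigma` are hypothetical projection weights standing in for the learned layers $F_\mu$ and $F_\sigma$, and the dilated-conv responses are taken as a given feature map.

```python
import numpy as np

def light_descriptors(r_ori, w_mu, w_sigma):
    """Spatially-invariant light descriptors from multi-dilation features.

    r_ori: (H, W, C) concatenated dilated-convolution responses.
    w_mu, w_sigma: (C, C) hypothetical weights for the linear projections.
    """
    mu = r_ori.mean(axis=(0, 1))      # per-channel mean (global average pooling)
    sigma = r_ori.std(axis=(0, 1))    # per-channel standard deviation
    desc_mu = mu @ w_mu               # stand-in for F_mu
    desc_sigma = sigma @ w_sigma      # stand-in for F_sigma
    return desc_mu, desc_sigma

def rerender(r_ori, desc_mu, desc_sigma):
    # Channel-wise modulation of surface features by the averaged descriptor,
    # mirroring R_rr = R_ori * (Desc_mu + Desc_sigma) / 2.
    return r_ori * ((desc_mu + desc_sigma) / 2.0)
```

Because the descriptors are 1D channel vectors, the modulation broadcasts one scalar per channel over all spatial positions, matching the spatial-invariance assumption.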
In practice, $Desc_\mu$ and $Desc_\sigma$ are calculated as: \begin{equation} Desc_{\mu} = F_{\mu}(\mu), \mu=[\mu_1,...,\mu_c], \end{equation} \begin{equation} \label{equ:mean} \mu_i = \frac{1}{H\times W}\sum_{j, k}^{H, W}{{R_i(j, k)}}, \end{equation} \begin{equation} Desc_{\sigma} = F_{\sigma}(\sigma), \sigma=[\sigma_1,...,\sigma_c], \end{equation} \begin{equation} \label{equ:std} \sigma_i = \sqrt{\frac{1}{H\times W}\sum_{j, k}^{H, W}{(R_i(j, k)-\mu_i)^2}}, \end{equation} where $\mu_i$ and $\sigma_i$ are the mean and standard deviation of the $i$-th channel of the features $R_{ori}$, respectively. In addition to the branch that extracts the attributes of light, another branch is designed to preserve spatial information correlated with normals and textures. Eventually, this module has two components corresponding to the two above-mentioned abilities, and we obtain two items which represent surface attributes and light conditions, respectively, as: \begin{equation} \frac{Desc_{\mu} + Desc_{\sigma}}{2} \sim [c_j \odot Y_j], \end{equation} \begin{equation} R_{ori} \sim [f(x,\omega_i,\omega_o) \odot (n\cdot\omega_i)]. \end{equation} The rendering process in~\eqref{equ:brdf_sim} is thus expressed in the network as \eqref{equ:rere}, where $R_{rr}$ is the re-rendered feature: \begin{equation} R_{rr} = R_{ori} \odot \frac{Desc_{\mu} + Desc_{\sigma}}{2}. \label{equ:rere} \end{equation} To match the original feature space, we use a $3\times 3$ convolutional layer to compress the re-rendered feature $R_{rr}$. Eventually, due to the orthogonality of spherical harmonic lighting, this module can modify a subset of lighting components without influencing the others. We thus simply add the re-rendered feature to the original one, and the output feature $F_{out}$ is calculated as: \begin{equation} F_{out} = C_f(R_{rr}) + F_{in}.
\end{equation} To fully model light conditions under a spherical harmonic setting, we utilize multiple modules to re-render different lighting components, following the summation process in~\eqref{equ:brdf_sim}. To further enable manipulation of the light condition, we design a target light projector which receives the parameterized light as input and produces its descriptors as illumination guidance for the IARBs. The modified IARB for light condition manipulation calculates the illumination descriptor as \begin{equation} Desc_{\mu/\sigma} = F_{\mu/\sigma}(\mathbf{cat}(\mu/\sigma, l_p)), \end{equation} where $l_p$ is the target illumination prior under the spherical harmonic lighting assumption. Compared with the previous SOTA method in the portrait relighting task (\emph{i.e.,~} DPR~\cite{zhou2019deep}), our method designs a unique way for arbitrary illumination manipulation. DPR~\cite{zhou2019deep} directly concatenates image features with lighting features from the encoder and feeds them into its decoder. This broadcasts the 1D lighting parameters to a 3D tensor whose size matches that of the image, which brings high redundancy and memory consumption. Considering the invariance of the global illumination in a scene, the network should describe illumination in a 1D representation. Following this intuition, we investigate the attention mechanism for lighting prior injection and progressively manipulate the illumination condition. To the best of our knowledge, our method is the first to utilize the attention mechanism to represent illumination conditions. Owing to the attention mechanism, we maintain the lighting parameters in 1D shape throughout the entire procedure, which achieves better efficiency. \subsection{Depth-guided geometry encoder} \label{ssec:aux_enc} Depth is important information for the relighting task, making the network understand 3D dependencies.
To take full advantage of depth, we further introduce the surface normal derived from depth, which is strongly related to the local brightness of the surface and the orientations of reflected light. Besides, the normal fetches detailed structural information and is conducive to local structure preservation. We firstly select depth and normal as additional inputs which help the network understand more complicated geometric and structural information for guiding relighting. When only depth is given, we can calculate the surface normal based on the following formulation: \begin{equation} \begin{split} \vec{n}_{x, y} &= \frac{(\frac{\partial D_{x, y}}{\partial x}, \frac{\partial D_{x, y}}{\partial y}, -1)}{|\vec{n}_{x, y}|},\\ &= \frac{(\frac{D_{x+1, y}-D_{x-1, y}}{2}, \frac{D_{x, y+1}-D_{x, y-1}}{2}, -1)}{|\vec{n}_{x, y}|}. \end{split} \end{equation} Besides, convolution is shift-invariant, which means image patches at any position are treated equally, while the intensity of incident light depends on the global position in an image. Convolution is blind to such global positional information~\cite{islam2020much, kayhan2020translation}. To alleviate this problem, we utilize positional encoding, which encodes the global distance into local patches to make convolution aware of it. The widely used sinusoidal positional encoding~\cite{vaswani2017attention} encodes positional information through sine and cosine functions with different periods. It needs many channels and is memory-consuming in the high-resolution case. So we choose a light-weight linear positional encoding based on the Cartesian coordinate that encodes positional information in a 2D feature: \begin{equation} PE(x,y)=2 \cdot [\frac{x}{W}, \frac{y}{H}] - 1. \end{equation} The value range of the linear positional encoding is $[-1,1]$.
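The two derivations above (surface normal from depth via central differences, and the linear positional encoding) can be sketched in a few lines of NumPy. This is a minimal illustration; border handling here uses wrap-around `np.roll` for brevity, which a real implementation would replace with replication padding.

```python
import numpy as np

def normal_from_depth(depth):
    """Surface normals from a depth map using central differences.

    Implements n = (dD/dx, dD/dy, -1) / |n| with
    dD/dx ~ (D[x+1, y] - D[x-1, y]) / 2 (wrap-around borders for brevity).
    """
    dx = (np.roll(depth, -1, axis=1) - np.roll(depth, 1, axis=1)) / 2.0
    dy = (np.roll(depth, -1, axis=0) - np.roll(depth, 1, axis=0)) / 2.0
    n = np.stack([dx, dy, -np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def linear_positional_encoding(h, w):
    """Two-channel encoding PE(x, y) = 2*[x/W, y/H] - 1, values in [-1, 1]."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.stack([2.0 * xs / w - 1.0, 2.0 * ys / h - 1.0], axis=-1)
```

On a flat depth map, every normal reduces to $(0, 0, -1)$, which is a quick sanity check for the sign convention.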
\begin{figure} \centering \includegraphics[width=0.48\textwidth]{figs/auxiliary_new-crop.pdf} \caption{Illustration of the depth-guided geometry encoder (DGGE). DGGE has 5 levels in total and each level is in the form of ``[ReLU-Conv] $\times 2$''. This encoder is used to extract geometry- and structure-related features from additional information. } \label{fig:aux_enc} \end{figure} We feed the above three types of guidance (\emph{i.e.,~} depth, normal, and positional encoding) into the proposed depth-guided geometry encoder (DGGE) to extract geometry- and structure-related features, as illustrated in \figref{fig:aux_enc}, aiming to assist the network in understanding geometric relationships in scenes and in recovering structural details in shadows. To establish connections among levels for encoding the shared information, we design an encoder shared among levels. The DGGE has $5$ stages and provides $5$ intermediate feature maps in total. We attempt to utilize as much information from depth as possible, so we densely merge features into the main stem. To achieve dense merging, the $5$ intermediate feature maps are overlappingly divided into $3$ groups as $\{C^0, C^1, C^2\}, \{C^1, C^2, C^3\}, \{C^{2}, C^{3}, C^{4}\}$. Besides, this design is memory-efficient and suitable for relighting at high resolution. These groups of features correspond to the $3$ levels of the network. Then they are merged with the RGB features extracted by the encoders in the pyramid-like architecture: \begin{equation} E^l_i = E^l_i + C^{l+i}. \end{equation} In this way, we enrich the RGB features with the extracted geometry- and structure-related information. \begin{figure}[h] \centering \includegraphics[width=0.48\textwidth]{figs/guided_ab-crop.pdf} \caption{ Two groups of methods with respect to guidance usage. The first group, named `single-branch method', simply concatenates RGB images with guidance, and the second, named `multi-branch method', individually encodes images and guidance.
Our method belongs to the second group. } \label{fig:2g} \end{figure} \noindent \textbf{Discussions.} Basically, existing relighting methods can be divided into two groups with respect to guidance usage, as illustrated in~\figref{fig:2g}. The first group of methods~\cite{helou2021ntire, wang2021multi} concatenates RGB images with guidance and feeds them together into a single-branch network. However, they ignore the difference in the distributions of RGB images and guidance information, leading to inferior results. The second group of methods~\cite{wang2021multi} develops an additional branch to individually extract guidance information. Our method belongs to this group as well. Along with this idea, we make several modifications for a further improvement in performance. Besides depth, we introduce two extra types of guidance (\emph{i.e.,~} surface normal and linear positional encoding). The surface normal derived from depth serves an indispensable role in traditional rendering. The linear positional encoding accounts for the bias caused by diverse positions in the image, while convolutions alone are unable to recognize such bias due to their shift-invariance. Besides, we reveal that the finest information at the original resolution is crucial. Nevertheless, previous methods~\cite{wang2021multi} only fuse features after down-sampling. We also take efficiency into account. Previous methods (\emph{e.g.,~} MBNet~\cite{yang2021multi} and ADNet~\cite{helou2021ntire}) utilize complicated modules to conduct information fusion, which leads to high redundancy. Our method proves that a simple addition operation is sufficient, which is friendly to high-resolution applications in practical usage. \subsection{Implementation Details} Our proposed network is jointly trained by optimizing a reconstruction loss and a grayscale SSIM loss to balance the fidelity of color and the regional consistency of luminance. \noindent \textbf{Loss function}.
SSIM loss~\cite{zhao2016loss} has been widely used in the relighting task~\cite{puthussery2020wdrn,hu2020sa,yang2021multi,yazdani2021physically} to enhance structural consistency. Since the L1 loss already supervises color fidelity, we intend the SSIM loss~\cite{zhao2016loss} to pay more attention to illumination consistency rather than color consistency. Thus, we further alter it to a grayscale version as \begin{equation} L_{SSIM}(\phi(I_{out}), \phi(I_{gt})) = 1 - SSIM(\phi(I_{out}), \phi(I_{gt})), \end{equation} where $\phi$ denotes the function that converts a color image into a grayscale one, $I_{out}$ is a relit image, and $I_{gt}$ is the ground truth. A gradient loss is also utilized to yield sharper results~\cite{murmann2019dataset}, which can be formulated as \begin{equation} \begin{aligned} L_{Gradient} &= \sum_x{\sum_y{\norm{\nabla I_{out}(x,y) - \nabla I_{gt}(x,y)}_2}}, \\ \nabla I(x,y) &= (\frac{\partial I(x, y)}{\partial x}, \frac{\partial I(x, y)}{\partial y}), \\ \frac{\partial I(x, y)}{\partial x} &= I(x+1, y) - I(x-1, y), \\ \frac{\partial I(x, y)}{\partial y} &= I(x, y+1) - I(x, y-1), \end{aligned} \end{equation} where $I(x,y)$ denotes the pixel value of $I$ at position $(x,y)$. For the $l$-th level, the total loss is \begin{equation} \begin{aligned} L^l(I_{out}, I_{gt}) =& \alpha L_1(I_{out}, I_{gt}) + \beta L_{SSIM}(\phi(I_{out}), \phi(I_{gt})) \\ &+ \gamma L_{Gradient}(I_{out}, I_{gt}). \end{aligned} \end{equation} All levels of the pyramid-like architecture utilize the same form of the loss function, so the final loss is \begin{equation} L = \sum_{l=0}^{2}{\mu^l L^l(I_{out}, I_{gt})}. \end{equation} We assign the same weight to each level, \emph{i.e.,~} $\mu^0=\mu^1=\mu^2=1.0$. This choice will be discussed in \secref{ssec:ab}.
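The per-level loss can be sketched as follows. This is a minimal NumPy illustration that assumes an external SSIM implementation is passed in (e.g. `skimage.metrics.structural_similarity`); the Rec.601 grayscale weights are our assumption for $\phi$, and borders in the gradient use wrap-around differences for brevity.

```python
import numpy as np

def to_gray(img):
    # Stand-in for phi; Rec.601 luminance weights are an assumption here.
    return img @ np.array([0.299, 0.587, 0.114])

def gradient_loss(out, gt):
    """Sum over pixels of the L2 norm of the gradient difference,
    with dI/dx ~ I(x+1, y) - I(x-1, y) as in the text."""
    def grad(img):
        gx = np.roll(img, -1, axis=1) - np.roll(img, 1, axis=1)
        gy = np.roll(img, -1, axis=0) - np.roll(img, 1, axis=0)
        return np.stack([gx, gy], axis=-1)
    diff = grad(out) - grad(gt)
    return np.sqrt((diff ** 2).sum(axis=-1)).sum()

def level_loss(out, gt, ssim_fn, alpha=1.0, beta=0.5, gamma=0.0):
    """L^l = alpha * L1 + beta * (1 - SSIM on grayscale) + gamma * gradient.

    ssim_fn is any SSIM implementation; it is injected rather than
    re-implemented here.
    """
    l1 = np.abs(out - gt).mean()
    l_ssim = 1.0 - ssim_fn(to_gray(out), to_gray(gt))
    return alpha * l1 + beta * l_ssim + gamma * gradient_loss(out, gt)
```

With identical prediction and ground truth (and a perfect SSIM of 1), every term vanishes, which is a useful sanity check before training.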
\begin{table}[ht] \centering \renewcommand\arraystretch{1.2} \renewcommand{\tabcolsep}{3.4mm} \caption{Details of encoder, decoder, and light projection modules.} \begin{tabular}{c|c|c} \hline Module & Structure & Output size \\ \hline \multirow{5}{*}{Encoder} & $ \begin{bmatrix} (3\times 3, 48, 1) \& ReLU \\ \end{bmatrix} \times 2$ &\multirow{1}{*}{$H\times W\times 48$} \\ \cline{3-3} & $ \begin{bmatrix} (3\times 3, 48, 2) \& ReLU \\ \end{bmatrix} \times 1 $ &\multirow{2}{*}{$\frac{H}{2}\times \frac{W}{2}\times 48$} \\ & $ \begin{bmatrix} (3\times 3, 48, 1) \& ReLU \\ \end{bmatrix} \times 1 $ & \\ \cline{3-3} & $ \begin{bmatrix} (3\times 3, 48, 2) \& ReLU \\ \end{bmatrix} \times 1 $ & \multirow{5}{*}{$\frac{H}{4}\times \frac{W}{4}\times 48$} \\ & $(3\times 3, 48, 1)$ & \\ \cline{1-2} Bottleneck & IARB $\times 4$ & \\ \cline{1-2} \multirow{7}{*}{Decoder} & $ReLU$ & \\ & $ \begin{bmatrix} (3\times 3, 48, 2) \& ReLU \\ \end{bmatrix} \times 3$ & \\ \cline{3-3} & $Bilinear$ $2\times$ & \multirow{2}{*}{$\frac{H}{2}\times \frac{W}{2}\times 48$} \\ & $ \begin{bmatrix} (3\times 3, 48, 1) \& ReLU \\ \end{bmatrix} \times 3$ & \\ \cline{3-3} & $Bilinear$ $2\times$ & \multirow{2}{*}{$H\times W\times 48$} \\ & $ \begin{bmatrix} (3\times 3, 48, 1) \& ReLU \\ \end{bmatrix} \times 3$ & \\ \cline{3-3} & $(3\times 3, 3, 1)$ & $H\times W\times 3$ \\ \hline \hline \multirow{3}{*}{Projector} & $ \begin{bmatrix} 9\rightarrow 1024 \& ReLU \\ \end{bmatrix} $ & \multirow{2}{*}{$1024$} \\ & $ \begin{bmatrix} 1024\rightarrow 1024 \& ReLU \\ \end{bmatrix} $ & \\ \cline{3-3} & $ \begin{bmatrix} 1024\rightarrow 12\times 192 \\ \end{bmatrix} $ & $12\times 192$ \\ \hline \end{tabular} \label{tab:arch} \end{table} \noindent \textbf{Network details.} In practice, our network has three levels and each level shares the same encoder-bottleneck-decoder structure, which has 2.67M parameters in total. 
The detailed structure is presented in \tabref{tab:arch}, where $(k\times k, ch, s)$ represents a $k\times k$ convolution with stride $s$ whose output has $ch$ channels. The bottleneck consists of 4 IARBs. In an IARB, the channels of each dilated convolution are 48 as well. Since an IARB has convolutions with 3 different dilation rates, the intermediate channels are 144, and we utilize a $3\times 3$ convolution to reduce the number of channels back to 48. The structure of the light projection module (\emph{i.e.,~} Projector) is also presented in \tabref{tab:arch}, where $X\rightarrow Y$ means a linear layer which projects an $X$-dimensional input to a $Y$-dimensional output. \section{Experiment} \label{sec:exp} \subsection{Datasets} \label{ssec:data} We train and test our proposed method on the Virtual Image Dataset for Illumination Transfer (\emph{i.e.,~} VIDIT~\cite{helou2020vidit}), which is utilized in the AIM 2020~\cite{2020AIM} and NTIRE 2021~\cite{helou2021ntire} competitions. It contains 15,600 images rendered by Unreal Engine 4, captured from 390 different virtual outdoor scenes. Miscellaneous objects with various surfaces and materials appear in the VIDIT~\cite{helou2020vidit} dataset, such as metal, wood, stone, water, plant, fabric, smoke, fire, plastic, \emph{etc}. The illumination settings are all the combinations of 5 color temperatures (2500K, 3500K, 4500K, 5500K and 6500K) and 8 light directions (N, NE, E, SE, S, SW, W, NW). The size of the images is $1024\times 1024$, and corresponding depth maps of the same size are provided as well. Similar to the AIM 2020 and NTIRE 2021 competitions, we mainly focus on 2 specific illumination settings ($\theta_i=North,T_i=4500K; \theta_o=East, T_o=6500K$) for the one-to-one relighting task. Specifically, we convert images under one illumination setting ($\theta_i=North,T_i=4500K$) to ones under another ($\theta_o=East, T_o=6500K$). 300 images in total are used for training and 45 images are used for validation.
This setting is the same as when we participated in the one-to-one relighting track~\cite{helou2021ntire} of the NTIRE 2021 competition, where our team, named 'MCG-NKU', achieved the best performance on the VIDIT~\cite{helou2020vidit} validation set. Besides, we validate the proposed method on a dataset captured from indoor scenes, \emph{i.e.,~} the Multi-Illumination dataset~\cite{murmann2019dataset}. This dataset consists of 1016 interior scenes in 95 different rooms throughout 12 residential and office buildings. Each scene is filled with miscellaneous objects and clutter of various materials, decorated in a typical domestic and office style. All images are photographed under 25 pre-determined illumination settings. 25,400 images of size $1500\times 1000$ and dense material labels segmented by crowd workers are provided. HDR images obtained by merging exposures are furnished as well. In our experiment, we take \textit{dir\_0} as the input illumination setting and \textit{dir\_17} as the output illumination setting to train all methods for evaluation. On this dataset, 985 images are used for training and 30 images are used for validation. We also conduct experiments on a domain-specific relighting dataset proposed by DPR~\cite{zhou2019deep} (\emph{i.e.,~} the DPR dataset). The DPR~\cite{zhou2019deep} dataset is built on the high-resolution CelebA~\cite{liu2015faceattributes} dataset (\emph{i.e.,~} CelebA-HQ), which contains 30,000 face images from the CelebA~\cite{liu2015faceattributes} dataset with size $1024\times 1024$. For each image, the authors randomly select 5 lighting conditions from a lighting prior dataset to generate relit face images, leading to 138,135 relit images. The authors of DPR~\cite{zhou2019deep} do not release their test dataset and detailed test setting, so we separate the images of the last 100 human faces under two diverse light conditions as test pairs. The remaining pairs are used for training in our experiment.
\subsection{Training details} The parameters of our IAN are initialized by Xavier initialization~\cite{glorot2010understanding}. We use the Adam~\cite{Adam15} optimizer during training. On the VIDIT~\cite{helou2020vidit} dataset, we train the network for 24k iterations in total and utilize horizontal flipping as data augmentation. Specifically, we select images lit from the west and flip them horizontally to fabricate images lit from the east. We directly feed full-resolution images of size $1024\times1024$ into the network. The weights of the losses are set to $\alpha=1.0, \beta=0.5, \gamma=0.0$. For the comparison methods, we keep the training settings of the original papers for those that conducted experiments on the VIDIT~\cite{helou2020vidit} dataset and use our training settings for those that did not. On the Multi-Illumination~\cite{murmann2019dataset} dataset, we train our network for 120k iterations. We remove the DGGE when training on this dataset, since depth and normal information is unavailable. For a fair comparison, the weights of the losses are set to $\alpha=1.0, \beta=0.0, \gamma=0.5$, following the setting for training the baseline on the Multi-Illumination~\cite{murmann2019dataset} dataset. Due to the limitation of GPU memory, we crop images to $992\times 992$ in training. For a fair comparison, all comparison methods on the Multi-Illumination~\cite{murmann2019dataset} dataset are trained under the same setting. Our IAN is trained under our setting as mentioned in \secref{ssec:data}. Owing to the light projection module, one pretrained model of our IAN can manipulate arbitrary light conditions. We utilize both the official pretrained model and a version of DPR~\cite{zhou2019deep} retrained under our setting for comparison. Both methods are trained for 144k iterations on full-resolution images, and we follow the loss setting in their paper~\cite{zhou2019deep}, \emph{i.e.,~} $\alpha=1.0, \beta=0.0, \gamma=1.0$. For all experiments, the learning rate is set to $10^{-4}$ and the batch size is set to $5$.
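As a minimal illustration of the loss weighting described above, the sketch below combines three loss terms with the weights $(\alpha, \beta, \gamma)$. The function name and the assignment of concrete losses to the three weights are our assumptions for illustration; this excerpt does not specify which loss each weight multiplies.

```python
def weighted_total_loss(loss_terms, alpha=1.0, beta=0.5, gamma=0.0):
    """Combine three scalar loss terms with the weights used on VIDIT.

    `loss_terms` is a tuple (l_a, l_b, l_c).  Which concrete losses the
    three weights multiply is an assumption here, not stated in this
    excerpt; defaults match the VIDIT setting (alpha=1.0, beta=0.5,
    gamma=0.0).
    """
    l_a, l_b, l_c = loss_terms
    return alpha * l_a + beta * l_b + gamma * l_c
```

With the VIDIT weights, the third loss term is effectively disabled ($\gamma=0$), while the DPR-style setting $\alpha=1.0, \beta=0.0, \gamma=1.0$ disables the second term instead.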
We train our IAN on one NVIDIA RTX TITAN, and every 10k iterations consume about 6 hours. \subsection{Ablation studies} \label{ssec:ab} In this section, we present the ablation studies conducted to demonstrate the effectiveness of our method and give detailed analyses of the proposed modules. We mainly investigate four factors influencing the performance of our proposed network. We first validate the effectiveness of the proposed illumination-aware residual block (IARB) and then discuss the choice of loss weights. Besides, we experiment with the various skip connections introduced in our work. At last, we analyze the diverse additional information fed to the DGGE. All ablation studies are conducted on the VIDIT~\cite{helou2020vidit} dataset. \noindent \textbf{Number of levels}. \begin{figure*}[ht] \centering \includegraphics[width=0.95\textwidth]{figs/results/woaux_new-crop.pdf} \caption{Representative results (w/o guidance). We select methods in AIM 2020~\cite{2020AIM}, including DRN~\cite{wang2020deep} and WDRN~\cite{puthussery2020wdrn}. We also select the prestigious image-to-image translation method pix2pix~\cite{isola2017image} and the portrait relighting method DPR~\cite{zhou2019deep}.} \label{fig:rep_wog} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=0.95\textwidth]{figs/results/waux_new-crop.pdf} \caption{Representative results (w/ guidance). We compare our method with award-winning methods in NTIRE 2021~\cite{helou2021ntire}, including MBNet~\cite{yang2021multi} and OIDDR-Net~\cite{yazdani2021physically}.} \label{fig:rep_wg} \end{figure*} The image pyramid is an effective tool for utilizing multi-scale information. As the number of levels increases, the total computational cost grows only slightly, since the image scale decreases at each additional level. However, introducing excessive levels brings redundant parameters and may lead to an over-fitting problem.
We thus examine how the number of pyramid levels affects the performance of the network and find a balance between efficiency and performance. We conduct experiments for $L=1,2,3,4$ and the results are shown in \tabref{tab:cmp_lvl}. \begin{table}[ht] \centering \renewcommand{\tabcolsep}{4.4mm} \caption{Quantitative evaluation for different levels.} \resizebox{0.48\textwidth}{9mm}{ \begin{tabular}{l|c|c|c|c} \hline \makecell[c]{Levels} & PSNR & SSIM & Parameters & GMacs\\ \hline $L=1$ & 19.19 & 0.7116 & 943.51k & 203.22 \\ $L=2$ & 19.49 & 0.7213 & 1.81 M & 245.43 \\ $L=3$ & 19.70 & 0.7234 & 2.67 M & 255.99 \\ $L=4$ & 19.84 & 0.7300 & 3.53 M & 258.63 \\ \hline \end{tabular}} \label{tab:cmp_lvl} \end{table} As shown in \tabref{tab:cmp_lvl}, the performance continuously increases as the number of pyramid levels grows. However, a large number of parameters must be stored when too many levels are involved, which makes the network less efficient. So we finally set the number of pyramid levels $L$ to 3. \noindent \textbf{Choice of loss weights}. The loss weights of the three levels in the network are set individually. In this part, we investigate how the choice of loss weights influences the final performance. We first conduct experiments on the loss weights among levels. Intuitively, it seems beneficial to increase the loss weight with the level, since the eventual aim of the network is to obtain high-resolution relighting results on the finest level. However, if we increase the loss weights on finer levels too much, the network degrades to a single large UNet-like network and loses its ability in progressive modeling. Conversely, if we decrease them, the network outputs results that ignore local details, which hinders its performance. We conduct experiments to verify the above hypothesis. As shown in \tabref{tab:weight_lvls}, the results reveal that neither increasing nor decreasing is a good option.
We finally assign equal weights to all levels, which shows the best performance in our experiments. \begin{table}[ht] \centering \renewcommand{\tabcolsep}{4.6mm} \caption{Quantitative evaluations for loss weights among levels.} \begin{tabular}{c|c|c} \hline Zoom ratio & PSNR & SSIM \\ \hline $10.0\times$ & 19.42 & 0.7031 \\ $5.0\times$ & 19.52 & 0.7147 \\ $1.0\times$ & 19.70 & 0.7234 \\ $0.5\times$ & 19.53 & 0.7206 \\ $0.1\times$ & 19.52 & 0.7206 \\ \hline \end{tabular} \label{tab:weight_lvls} \end{table} \noindent \textbf{Investigation on IARB}. The proposed IARB plays an important role in the IAN and is designed based on the conventional ideas of rendering. In this part, we confirm the effectiveness of the components of IARB in \tabref{tab:cmp_res} and \figref{fig:ab_vis}. How the number of IARBs influences the performance is shown in \tabref{tab:cmp_num}. \begin{table}[h] \centering \renewcommand{\tabcolsep}{4.6mm} \caption{Quantitative evaluations for IARB.} \begin{tabular}{c|c|c|c|c} \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{w/ guidance} & \multicolumn{2}{c}{w/o guidance} \\ \cline{2-5} & PSNR & SSIM & PSNR & SSIM \\ \hline vanilla & 19.24 & 0.7123 & 18.02 & 0.6717 \\ w/o att & 19.28 & 0.7184 & 18.05 & 0.6741 \\ w/o dilated & 19.46 & 0.7208 & 18.16 & 0.6728 \\ mean att & 19.43 & 0.7223 & 18.17 & 0.6722 \\ std att & 19.62 & 0.7201 & 18.18 & 0.6527 \\ full model & 19.70 & 0.7234 & 18.27 & 0.6861 \\ \hline \end{tabular} \label{tab:cmp_res} \end{table} In \tabref{tab:cmp_res}, `vanilla' represents the vanilla residual block~\cite{he2016deep}, `w/o att' means IARB without our statistical-coupled attention mechanism, `w/o dilated' denotes the vanilla residual block with our proposed attention, `mean att' represents the dilated residual block with only mean attention, and `std att' denotes the dilated residual block with only standard-deviation attention.
From \tabref{tab:cmp_res}, we can see that the performance decreases dramatically without the proposed statistical-coupled attention mechanism, indicating the significance of decoupled illumination descriptors. The attention mechanism alone is also insufficient for acquiring accurate illumination descriptors across diverse frequencies. Without the distant samples obtained by dilated convolution, the receptive field of the block is constrained and the acquired descriptors thus become unreliable. We also examine mean attention and standard-deviation attention separately, which proves the effectiveness of these illumination-related statistics. In particular, standard deviation brings the most striking improvement, which indicates that this statistic has a strong correlation with the high-frequency components of light, as discussed in \secref{ssec:res}. Besides, we display the qualitative results of these variants in \figref{fig:ab_vis}. We can see that, compared with the other variants, our full model is superior in several aspects. Among all results in the first and second rows, only our full model generates shadows in the correct direction and position. Without the two-branch structure, the `vanilla' and `w/o att' modules cannot extract precise light information, so the shadows from these variants point in wrong directions, as shown in the first and second rows. Without dilated convolutions, the `vanilla' and `w/o dilated' modules fail to capture long-distance dependencies, which are crucial for the consistency of illumination. As a result, the generated shadows or relit surfaces from these variants are hollow or broken. Besides, the mean descriptor or the deviation descriptor alone is insufficient to model the attributes of the light source, so surfaces that should be relit remain dark in the last row of the `mean att' and `std att' results.
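To make the role of the statistical descriptors concrete, here is a minimal NumPy sketch of a statistics-coupled channel attention: the per-channel mean and standard deviation act as low- and high-frequency illumination descriptors that gate the feature channels. The specific gating form (a sigmoid over each statistic, coupled by multiplication) is our assumption for illustration; the actual layers of the IARB are not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def statistical_channel_attention(feat):
    """Gate a (C, H, W) feature map by its per-channel mean and std.

    The mean tracks low-frequency (ambient-like) illumination, while the
    standard deviation tracks high-frequency components (e.g. shadow
    edges), mirroring the discussion of `mean att' and `std att'.
    """
    mean = feat.mean(axis=(1, 2))            # (C,) low-frequency descriptor
    std = feat.std(axis=(1, 2))              # (C,) high-frequency descriptor
    gate = sigmoid(mean) * sigmoid(std)      # coupled channel weights in (0, 1)
    return feat * gate[:, None, None]        # rescale each channel
```

Dropping either factor of `gate` corresponds to the `mean att' / `std att' ablations above: a spatially constant channel (zero std) then loses the high-frequency half of its descriptor.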
Owing to the large receptive field from dilated convolutions and the precise modeling of light by diverse statistics, our method generates relit images of high visual quality. Through these examples, we reveal the strong ability of the proposed IARB in modeling light and relighting images. Without our IARB, the illumination awareness of the network is greatly reduced, and it cannot extract precise attributes of the light source for further light manipulation. \begin{table}[h] \centering \renewcommand{\tabcolsep}{4.6mm} \caption{Quantitative evaluation for different numbers of blocks.} \resizebox{0.48\textwidth}{12mm}{ \begin{tabular}{l|c|c|c|c} \hline \makecell[c]{Blocks} & PSNR & SSIM & Parameters & GMacs\\ \hline $N=1$ & 19.09 & 0.7147 & 1.50 M & 223.76 \\ $N=2$ & 19.28 & 0.7149 & 1.89 M & 234.50 \\ $N=3$ & 19.51 & 0.7179 & 2.28 M & 245.24 \\ $N=4$ & 19.70 & 0.7234 & 2.67 M & 255.99 \\ $N=5$ & 19.74 & 0.7111 & 3.06 M & 266.73 \\ $N=6$ & 19.71 & 0.7203 & 3.45 M & 277.47 \\ \hline \end{tabular}} \label{tab:cmp_num} \end{table} Besides, we conduct experiments on the number of IARBs to further prove the effectiveness of our proposed module and to find the optimal number in practical usage. We vary the number from 1 to 6, and the results are shown in \tabref{tab:cmp_num}. As the number of IARBs increases, the performance increases correspondingly, which proves that the IARB indeed benefits the relighting task. However, due to the limited amount of available data, the performance reaches a plateau, which indicates that the network saturates once the number of blocks exceeds 4. So we eventually select 4 blocks in our setting. \noindent \textbf{Investigation on skip connections}. \begin{figure*}[ht] \centering \includegraphics[width=0.95\textwidth]{figs/results/ab_vis-crop.pdf} \caption{ Qualitative comparisons between our IARB and other variants. The results reveal the strong ability of the proposed IARB in modeling light and relighting images.
} \label{fig:ab_vis} \vspace{-5mm} \end{figure*} \begin{figure}[ht] \centering \includegraphics[width=0.48\textwidth]{figs/results/wointersk-crop.pdf} \caption{Once we remove the cross-level skip connection, reconstruction information from high levels is hard to transfer across levels, which makes the network likely to generate images whose local and global illumination is incoherent. We highlight artifacts with red rectangles.} \label{fig:inter} \end{figure} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{figs/results/norm-crop.pdf} \caption{Surface normal is the most significant among all additional information. Without it, the network cannot reconstruct detailed structures in dark environments.} \label{fig:norm} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.48\textwidth]{figs/complexity_comp-crop.pdf} \caption{Comparison of performance, computational cost, and number of parameters.} \label{fig:cplx} \vspace{-5mm} \end{figure} \begin{figure*}[ht] \centering \includegraphics[width=0.95\textwidth]{figs/results/real-crop.pdf} \caption{Representative results on the Multi-Illumination dataset. We present enlarged noteworthy patches in the lower right corner of the overall images for detailed comparison. } \label{fig:rep_mi} \vspace{-5mm} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=0.95\textwidth]{figs/results/cmp_dpr-crop.pdf} \caption{ Qualitative comparison on the DPR~\cite{zhou2019deep} dataset. Enlarged noteworthy patches are presented in the lower left corner of the overall images for detailed comparison, and visualizations of the target light are presented in the upper right corner. } \label{fig:cmp_dpr} \vspace{-5mm} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=0.9\textwidth]{figs/results/dyn_res-crop.pdf} \caption{ Qualitative results on the DPR~\cite{zhou2019deep} dataset with arbitrary light conditions. The first row shows visualizations of the target spherical harmonic lights.
Input images in the first column are relit by the above lights accordingly. } \label{fig:dyn_res} \vspace{-5mm} \end{figure*} Previous works~\cite{puthussery2020wdrn, li2020multi, zhou2019deep} introduce skip connections to ease the difficulty of training or to share information among scales. We utilize various skip connections in our network as well. In our work, three kinds of skip connections are introduced, named cross-level skip connection (CLSC), intra-level skip connection (ILSC), and image content skip connection (ICSC), respectively. In this section, we conduct experiments on these skip connections and discuss their contributions to the proposed network. \tabref{tab:sk} shows how the objective metrics change when we remove specific skip connections. \begin{table}[h] \renewcommand{\tabcolsep}{4.6mm} \caption{Quantitative evaluation for diverse skip connections. `w/' and `w/o' represent `with' and `without', respectively.} \centering \begin{tabular}{l|c|c} \hline \makecell[c]{Type} & PSNR & SSIM \\ \hline full model & 19.70 & 0.7234 \\ w/o ICSC & 19.54 & 0.7104 \\ w/o ILSC & 19.49 & 0.7172 \\ w/o CLSC & 19.53 & 0.7159 \\ w/o all SC & 19.18 & 0.7027 \\ \hline \end{tabular} \label{tab:sk} \end{table} As demonstrated in \secref{ssec:pyr}, the CLSC enables the network to be aware of global illumination when higher levels refine local details. Once we remove the CLSC, we observe conflicts between local details and global illumination, which lead to severe artifacts in the relit images, as shown in \figref{fig:inter}. The ILSC and ICSC aim to transfer illumination-invariant attributes to the decoder side directly. If we remove the ILSC or ICSC, the detailed structures and object textures are hard to reconstruct, leading to a performance drop. \noindent \textbf{Investigation on depth-guided geometry encoder}. The depth-guided geometry encoder (DGGE) uses the provided depth map, the estimated surface normal, and the positional encoding as input.
These additional inputs are evaluated in \tabref{tab:cmp_aux} by removing the depth, the surface normal, and the linear positional encoding, respectively. `Full model' utilizes all the aforementioned guidance, while `bare model' is the model with the DGGE removed, \emph{i.e.,~} without any additional input information. \begin{table}[h] \centering \renewcommand{\tabcolsep}{4.6mm} \caption{Quantitative evaluation for different additional information in DGGE. `lpe' means linear positional encoding.} \resizebox{0.48\textwidth}{10mm}{ \begin{tabular}{l|c|c|c|c} \hline \makecell[c]{Structure} & PSNR & Diff & SSIM & Diff \\ \hline full model & 19.70 & ----- & 0.7234 & ----- \\ w/o normal & 18.33 & -1.37 & 0.6928 & -0.0306 \\ w/o depth & 19.57 & -0.13 & 0.7147 & -0.0087 \\ w/o lpe & 19.68 & -0.02 & 0.7006 & -0.0280 \\ bare model & 18.27 & -1.43 & 0.6861 & -0.0373 \\ \hline \end{tabular}} \label{tab:cmp_aux} \end{table} The results reveal that the surface normal brings the biggest performance gain among all additional information. Specifically, when we remove the surface normal from the DGGE, the performance of the proposed network drops by 1.37 dB, while it drops by only 0.13 dB if we remove the depth map. In our observations, we find that removing the normal results in blurring artifacts and ambiguous edges (see \figref{fig:norm}). This phenomenon indicates that the normal is an essential cue for local structure reconstruction, which is consistent with the subjective quality of results from the model guided by different information. Besides, the linear positional encoding is used as a spatial inductive bias to assist the network in capturing global structure with high fidelity. As shown in \tabref{tab:cmp_aux}, after removing the positional encoding, PSNR drops slightly while SSIM~\cite{wang2004image} drops heavily, which indicates that the positional encoding mainly influences structural consistency.
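A linear positional encoding of the kind used as input guidance can be sketched as coordinate channels appended to the feature map, giving the network an explicit spatial inductive bias. The normalization of the coordinates to $[-1, 1]$ is our assumption for illustration; the paper's exact formulation is not reproduced here.

```python
import numpy as np

def append_linear_positional_encoding(feat):
    """Append normalized y/x coordinate channels to a (C, H, W) array.

    Each output pixel then carries its own location explicitly, which
    helps the network keep global structure consistent.
    """
    _, h, w = feat.shape
    yy, xx = np.meshgrid(np.linspace(-1.0, 1.0, h),
                         np.linspace(-1.0, 1.0, w), indexing="ij")
    return np.concatenate([feat, yy[None], xx[None]], axis=0)
```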
\subsection{Comparisons with state-of-the-art methods} In this section, we compare our method with other state-of-the-art (SOTA) relighting methods. The methods without depth guidance are mainly from the AIM 2020~\cite{2020AIM} competition, including WDRN~\cite{puthussery2020wdrn}, which won first place in AIM 2020~\cite{2020AIM}, and DRN~\cite{wang2020deep}, which achieves the best PSNR score. The methods with depth guidance are from the NTIRE 2021~\cite{helou2021ntire} competition\footnote{Results of the NTIRE 2021 competition are available at \textit{https://competitions.codalab.org/competitions/28030\#results} and our team name is NK\_ZZL}, including MBNet~\cite{yang2021multi}, which won first place in NTIRE 2021~\cite{helou2021ntire}, and OIDDR-Net~\cite{yazdani2021physically}, which is the runner-up method. Besides, we select pix2pix~\cite{isola2017image}, a typical image-to-image translation method, and DPR~\cite{zhou2019deep}, a SOTA portrait relighting method, for further comparison. For DPR~\cite{zhou2019deep}, we train a variant that removes the light prediction module, because an accurate light setting for training the light prediction module is not provided in the one-to-one relighting task. For quantitative comparison, the PSNR and SSIM~\cite{wang2004image} metrics are applied on the RGB channels of the relit results. Moreover, the LPIPS metric~\cite{zhang2018unreasonable}, which is proven to be highly correlated with human ratings~\cite{jinjin2020pipal}, is also used for evaluation. \noindent \textbf{Efficiency.} In this part, we give a quantitative comparison of the efficiency of the relighting methods. Three main factors are selected for comparison, \emph{i.e.,~} performance, computational cost, and the number of parameters. We use the number of composite multiply-accumulate operations~\cite{ahn2018fast} (Multi-Adds/Macs) for a single image as the measurement of computational cost. We assume the input image size to be $1024\times 1024$ to calculate Multi-Adds.
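The Multi-Adds count for a single convolution layer follows the common counting convention of one multiply-accumulate per kernel element per output position. The sketch below applies this convention under the simplifying assumption of `same' padding; it is an illustration of the accounting, not the paper's measurement code.

```python
def conv2d_macs(h, w, k, c_in, c_out, stride=1):
    """Multiply-accumulate operations of one k x k convolution applied
    to an h x w input, assuming `same' padding.

    Each of the out_h * out_w output positions per output channel needs
    k * k * c_in multiply-accumulates.
    """
    out_h, out_w = h // stride, w // stride
    return out_h * out_w * k * k * c_in * c_out

# Example: a 3x3 conv from 3 to 64 channels on a 1024x1024 input costs
# 1024 * 1024 * 9 * 3 * 64 MACs, i.e. about 1.81 GMacs.
```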
This comparison is conducted on the VIDIT~\cite{helou2020vidit} dataset. As illustrated in \figref{fig:cplx}, our proposed method uses relatively few parameters and Macs, yet achieves better performance than previous SOTA methods, with or without additional guidance. \noindent \textbf{VIDIT Dataset.} Compared with the methods in AIM 2020~\cite{2020AIM}, which lack additional guidance, our method outperforms them by a large margin in both distortion- and perception-oriented metrics. With extra guidance, the performance of our proposed method still surpasses the existing SOTA methods. Beyond the comparisons on objective metrics, we also show representative results to compare the perceptual quality of these methods and to further illustrate the effectiveness of our method. In \figref{fig:rep_wog}, we show the qualitative comparison of the methods without guidance. Even though the network is trained without geometric and structural guidance, we can see that it is still able to reconstruct coarse object structures, as shown in the $3$-rd row of \figref{fig:rep_wog}. For the methods with guidance, the results are shown in \figref{fig:rep_wg}. After utilizing guidance, our method provides more consistent illumination and structural details with high fidelity, as shown in the first and third rows of \figref{fig:rep_wg}.
\begin{table}[h] \centering \renewcommand{\tabcolsep}{4.6mm} \caption{Quantitative evaluation on the VIDIT~\cite{helou2020vidit} dataset.} \begin{tabular}{l|c|c|c} \hline \makecell[c]{Method} & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$\\ \hline \multicolumn{4}{c}{w/o guidance} \\ \hline pix2pix~\cite{isola2017image} & 15.59 & 0.4890 & 0.4827 \\ DRN~\cite{wang2020deep} & 17.59 & 0.6151 & 0.3920 \\ WDRN~\cite{puthussery2020wdrn} & 17.46 & 0.6442 & 0.3299 \\ DPR~\cite{zhou2019deep} & 18.01 & 0.6389 & 0.3599 \\ Ours (w/o guidance) & \textbf{18.27} & \textbf{0.6861} & \textbf{0.3077} \\ \hline \multicolumn{4}{c}{w/ guidance} \\ \hline OIDDR-Net~\cite{yazdani2021physically} & 18.40 & 0.7039 & 0.2837 \\ MBNet~\cite{yang2021multi} & 19.36 & 0.7175 & 0.2928 \\ Ours (w/ guidance) & \textbf{19.70} & \textbf{0.7234} & \textbf{0.2755} \\ \hline \end{tabular} \label{tab:cmp_sota} \end{table} \noindent \textbf{Adobe Multi-Illumination Dataset.} We further apply our method to a real-scene dataset, \emph{i.e.,~} the Adobe Multi-Illumination dataset. As mentioned in the paper~\cite{murmann2019dataset}, we mask the chrome sphere and the gray sphere, which could be used as a prior describing the environment illumination, during both the training and test phases. Quantitative and qualitative results are shown in \tabref{tab:cmp_sota_mi} and \figref{fig:rep_mi}, respectively. In \figref{fig:rep_mi}, the first and second rows illustrate that our method produces specular reflection on the surface of the bottle with the best quality, while the others either highlight wrong positions or cannot handle specular reflection at all. The third and fourth rows illustrate that our proposed method generates more realistic shadows. \tabref{tab:cmp_sota_mi} presents the quantitative comparison with previous methods; the performance of our proposed method is superior on the real-scene dataset as well.
\begin{table}[h] \centering \renewcommand{\tabcolsep}{4.6mm} \caption{Quantitative evaluation on the Adobe Multi-Illumination dataset. Relighter is the official baseline proposed in~\cite{murmann2019dataset}.} \begin{tabular}{l|c|c|c} \hline \makecell[c]{Method} & PSNR$\uparrow$ & SSIM$\uparrow$ & LPIPS$\downarrow$\\ \hline pix2pix~\cite{isola2017image} & 17.46 & 0.6660 & 0.3597 \\ DRN~\cite{wang2020deep} & 14.40 & 0.6323 & 0.8004 \\ WDRN~\cite{puthussery2020wdrn} & 18.89 & 0.8160 & \textbf{0.1949} \\ DPR~\cite{zhou2019deep} & 19.40 & 0.8377 & 0.2241 \\ Relighter~\cite{murmann2019dataset} & 16.54 & 0.7308 & 0.6261 \\ Ours & \textbf{19.67} & \textbf{0.8474} & 0.2100 \\ \hline \end{tabular} \label{tab:cmp_sota_mi} \end{table} \noindent \textbf{DPR Dataset.} We extend our network to handle the relighting task with arbitrary lighting conditions and evaluate it on the DPR~\cite{zhou2019deep} dataset, a portrait relighting dataset. To ensure fairness when comparing with the previous SOTA method DPR~\cite{zhou2019deep}, we present both the results from the official pretrained model (\emph{i.e.,~} DPR (official)) and those from the model retrained under our training setting (\emph{i.e.,~} DPR (retrained)). \begin{table}[ht] \centering \renewcommand{\tabcolsep}{4.6mm} \caption{Quantitative evaluations on the DPR~\cite{zhou2019deep} dataset.} \begin{tabular}{l|c|c} \hline \makecell[c]{Type} & PSNR & SSIM \\ \hline DPR (official)~\cite{zhou2019deep} & 22.41 & 0.9189 \\ DPR (retrained)~\cite{zhou2019deep} & 24.00 & 0.8491 \\ Ours (w/o normal) & 25.61 & 0.9509 \\ Ours (w/ normal) & 27.98 & 0.9667 \\ \hline \end{tabular} \label{tab:cmp_dpr} \end{table} The results in \figref{fig:cmp_dpr} reveal that, with a simple modification, our method can tackle various light conditions. When DPR~\cite{zhou2019deep} attempts to recast shadows, it frequently introduces obvious artifacts near the edges of the shadows, as shown in the first to third rows.
From the fourth row, we observe inconsistent illumination effects in DPR~\cite{zhou2019deep}. These results indicate that although DPR~\cite{zhou2019deep} supervises light attributes in a spherical-harmonic manner, it cannot fully take advantage of such explicit light conditions. Instead, the results provided by our method show natural brightness changes of high visual quality, without obvious artifacts or undesirable sudden changes of brightness. These results suggest that implicitly injecting the light condition into the network is a better choice, as it reduces the explicit accumulation of errors. Besides, we provide visual results under various light conditions in \figref{fig:dyn_res}. By modifying only the illumination branch, the network enables relighting under arbitrary light conditions, which proves that our IARB can faithfully extract illumination-related information and that this descriptor is effective when re-rendering the input image. The quantitative results are shown in \tabref{tab:cmp_dpr}. Both the PSNR and SSIM values of our method surpass DPR~\cite{zhou2019deep} by a large margin, which reveals the effectiveness of our network. In summary, our design purpose is consistent with the final results, and the IARB is proven to be suitable for the relighting task. \section{Conclusion and Prospect} In this paper, we thoroughly investigate previous relighting methods from diverse viewpoints and draw inspiration from conventional physically based rendering. Based on these inspirations, we design an illumination-aware network intrinsically suitable for the relighting task and deploy an illumination-aware residual block that approximates the conventional rendering process to assist relighting. Besides, we employ a depth-guided geometry encoder and utilize additional information beyond RGB images to acquire geometry- and structure-related information that benefits relighting.
Adequate comparisons with previous SOTA methods and ablation studies reveal the effectiveness and efficiency of our proposed method. However, there is still room for future improvement. Based on our observations in practice, we list the following aspects that should be emphasized in future work. \begin{itemize} \item Relighting specular and transparent objects more realistically (see \figref{fig:rep_mi} Rows 1-2). \item Completing the textures of relit regions according to surrounding patches or the global style (see \figref{fig:rep_wg} Rows 1-3). \item Utilizing inaccurate or sparse guidance, which is more practical in reality, to generate comparable results. \end{itemize} \bibliographystyle{IEEEtran}
\section{Introduction} A photonic quantum memory is a device that can store and re-emit photons on demand~\cite{lvovsky2009,simon2010,heshami2016}. It is an essential component in quantum information processing applications such as quantum networks~\cite{kimble2008,simon2017}, quantum repeaters~\cite{qrepeater2007,qrepeater2011} and long-range quantum communication~\cite{duan2001}. In a typical atomic-ensemble based quantum memory, a weak light pulse is absorbed as a delocalized atomic excitation over all the atoms in the ensemble. This collective atomic excitation is then transferred to a long-lived spin state of the atoms using control pulses. In order to retrieve the photons from the atomic ensemble, a trigger pulse is used to transfer the excitation from the long-lived spin state to the excited state of the atom, which emits the photons at a desired time~\cite{simon2010,heshami2016}. Some of the commonly used quantum memory protocols are electromagnetically induced transparency (EIT)~\cite{fleischhauer2002,david2010,hsiao2018,wang19}, controlled reversible inhomogeneous broadening (CRIB)~\cite{moiseev2001,kraus2005,alexander2006,sangouard2007}, gradient echo memory (GEM)~\cite{hetet08,hedges2010,hosseini11}, Raman memory~\cite{klein09,kozhekin2000,guo19}, photon-echo using atomic frequency combs (AFC)~\cite{afzelius2009,afzelius2010,afc2010,jobez2016} and intra-atomic frequency combs (I-AFC)~\cite{iafc2019,teja2021,iafc2021}. All these techniques use a large ensemble of atoms or bulk materials to store photons. To gain scalability and practical advantages in quantum information processing, many efforts are being devoted towards integrated photonic chips~\cite{crespi2011,kohnen2011,meng2015,hacker2016,freer2017,titchener2018,elshaari2020,wang2020}.
On-chip single-photon sources, on-chip beamsplitters and on-chip photon detectors are already available on integrated platforms~\cite{uppu2020,laucht2012,lu2021,yin2022,gyger2021}, while an on-chip quantum memory is still a work in progress and a highly sought-after device~\cite{zhong2017,liu2020}. Atomic-ensemble based quantum memories pose challenges for their on-chip integration. So far, only the AFC based quantum memory protocol has been repurposed for on-chip implementation~\cite{zhong2017,liu2020}. Further, the Raman quantum memory protocol has been extended to the single-atom level, which can potentially be used on integrated photonic chips~\cite{specht2011}. In this article, we propose a scheme for storing weak light pulses and single photons using a single atom coupled to an optical cavity. The trapped atom contains an I-AFC. We show that this joint single-atom-cavity setup results in a photon-echo, similar to the I-AFC based quantum memory protocol~\cite{iafc2019}. The efficiency of storing light in this setup depends on the finesse of the optical cavity and the quality of the frequency comb. We can also achieve robust and efficient storage of polarization and time-bin qubits using this setup. As examples, we show that Cesium and Rubidium atoms coupled to nanophotonic waveguide cavities can serve as promising candidates for the implementation of this quantum memory protocol. In principle, the efficiency of this protocol can reach up to $100\%$, whereas in the AFC and I-AFC protocols the maximum efficiency in forward propagation is only $54\%$. This is because, unlike in the bulk AFC and I-AFC protocols, the reabsorption of the photon in the re-emission process is eliminated by keeping the atom-cavity setup in the Purcell regime and using only one atom for storing the light. One of the biggest advantages of the proposed scheme is that it provides a possible realization of an on-chip quantum memory.
Furthermore, since our protocol requires only a frequency comb coupled to a cavity, it can also be implemented using quantum dots inside a cavity~\cite{kiraz2003,an2007,zhang2018,westmoreland2019}. On-demand single-photon sources have already been realized using quantum dots~\cite{heinze2015,liu2018,uppu2020,lu2021}. Combining these two can pave the way for efficient on-chip photonic quantum computation. The article is organized as follows: In Sec.~\ref{Sec:Background} we introduce the relevant background required for our results. In Sec.~\ref{Sec:Results}, we present our results, where we show photonic quantum memory using a single atom trapped inside an optical cavity. The examples of Rubidium and Cesium atoms for the implementation of the quantum memory are presented in this section. We conclude in Sec.~\ref{Sec:Conclusion}. \section{Background}\label{Sec:Background} In this section, we introduce the I-AFC based quantum memory protocol and the dynamics of an atom-cavity setup interacting with an electromagnetic field. \subsection{Intra-atomic frequency comb} In I-AFC based quantum memory, we consider an atom with degenerate ground and excited hyperfine levels. This degeneracy is lifted by applying an external magnetic field, which results in multiple ground and excited states. All the dipole-allowed transitions among these states, considered together, result in a comb-like structure known as an I-AFC~\cite{iafc2019}. The interaction picture Hamiltonian for an atom that exhibits an I-AFC, interacting with an electric field $\mathcal{E}(z,t)$ with mean frequency $\omega_L$, reads \begin{align} H=\hbar \sum_{n=1}^{N} \delta_n \dyad{e_n}{e_n} -\hbar \qty[\sum_{n} \Omega_{n}(z,t) \dyad{e_n}{g_n}+\text{h.c.}]. \end{align} We have considered an atom with $N$ ground states $\{\ket{g_n}\}$ and $N$ excited states $\{\ket{e_n}\}$.
For simplicity, the transition is allowed only between $\ket{e_n}\leftrightarrow\ket{g_n}$ for all $n$, and the corresponding transition dipole moment is given by $d_{n}$. Here, the Rabi frequency $\Omega_{n}(z,t)={d_{n}\mathcal{E}(z,t)}/{2\hbar}$, and $\delta_{n}=\omega_{n}^e-\omega_{n}^g-\omega_L$ is the detuning between the $\ket{e_n}\leftrightarrow\ket{g_n}$ transition frequency and $\omega_L$. All these dipole-allowed transitions collectively yield a frequency comb. The linewidth of each transition is $\gamma_n$ and its centre frequency is $\delta_n + \omega_L$. The spacing between the $(n+1)$-th and the $n$-th teeth is given by $\Delta_n = \delta_{n+1} -\delta_n$. For simplicity, we consider an ideal I-AFC with uniform comb spacing $\Delta_n \equiv \Delta$ and equal tooth width $\gamma$, such that the detuning can be written as $\delta_{n}\equiv n \Delta$. We consider the absorption of a single photon pulse $\mathcal{E}(0,t)$ with spectral width $\gamma_p$ in the I-AFC such that $\gamma_p\gg\Delta$. If the atom is initially prepared in an equal superposition of the ground states, i.e., $\ket{G}=\dfrac{1}{\sqrt{N}}\sum_n{\ket{g_n}}$, the initial state of the ensemble of $M$ atoms becomes $\ket{G}^{\otimes M}$. The state of the I-AFC at time $t$ after absorbing the single photon pulse can be written as \begin{align} \ket{\psi(t)} \equiv \sum_{j=1}^M \qty(\alpha_j \ket{G}^{\otimes (M-1)}\ket{E(t)}_j), \label{cstate} \end{align} where the index $j$ runs over the number of atoms in the ensemble. The state $\ket{E(t)}_j=\dfrac{1}{\sqrt{N}}\sum_n e^{\mathrm{i} \delta_n t}\ket{e_n}_j$ represents the collective excited state of the $j$-th atom and $\alpha_j$ denotes the absorption coefficient of that atom. The probability of the photon-emission is given by $P(t) \propto \abs{\mel {G}{\sum_jS_j}{\psi}}^2$, where $S_j=\sum_{n}{\ket{g_n}_{j}{\bra{e_n}_j}}$ is the step-down operator for the $j$-th atom~\cite{iafc2019}.
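For the ideal comb with $\delta_n = n\Delta$, the dephasing and rephasing of the collective excited state can be checked numerically. A minimal sketch in pure Python (the seven-tooth comb and the 300 MHz spacing are illustrative values only, not tied to a specific atom):

```python
import cmath
import math

N = 7                           # number of comb teeth (illustrative)
delta = 2 * math.pi * 300e6     # comb spacing as an angular frequency (300 MHz)

def emission_probability(t):
    """P(t) ~ |sum_n exp(i*delta_n*t)|^2 with delta_n = n*delta,
    normalized so that P = 1 at full rephasing."""
    amp = sum(cmath.exp(1j * n * delta * t) for n in range(N))
    return abs(amp) ** 2 / N**2

t_echo = 2 * math.pi / delta    # expected rephasing time
assert abs(emission_probability(0.0) - 1.0) < 1e-9      # all teeth in phase at t = 0
assert abs(emission_probability(t_echo) - 1.0) < 1e-9   # rephased at t = 2*pi/Delta
assert emission_probability(0.5 * t_echo) < 0.1         # dephased in between
```

The assertions confirm the rephasing condition used in the text: the teeth return in phase at integer multiples of $2\pi/\Delta$ and interfere destructively in between.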
Using $\delta_{n}= n \Delta$ in Eq.~\eqref{cstate}, we see that the probability of photon emission is maximized at $t=2\pi m/\Delta$ for integer $m$. The output pulses corresponding to different values of $m$ are called photon echoes. The echo time can be adjusted by changing $\Delta$, which in turn is controlled by the strength of the applied magnetic field. To achieve on-demand storage using this protocol, the excitation is transferred from the excited level to a spin state with a long lifetime by applying a $\pi$-pulse. Applying another $\pi$-pulse transfers the excitation back to the excited state, causing the photon-echo at an appropriate time. The efficiency $\eta$ of the I-AFC quantum memory protocol is defined as the ratio of the intensity of light obtained in the first echo to the total intensity of the input light, which reads~\cite{iafc2019} \begin{align} \eta= \dfrac{\int_{\pi/\Delta}^{3\pi/\Delta} \abs{\mathcal{E}(z=L,t)}^2 \, dt}{\int \abs{\mathcal{E}(z=0,t)}^2 \, dt}, \label{eta} \end{align} where $L$ is the length of the atomic ensemble along the direction of propagation of light. The maximum efficiency that can be achieved using the standard I-AFC scheme is $54\%$ in the forward mode and $100\%$ in the backward mode~\cite{iafc2019}. \subsection{Dynamics of an atom-cavity system interacting with an electromagnetic field}\label{acs} Consider a two-level atom coupled to an optical cavity. Let $\{\ket{g},\ket{e}\}$ be the two energy levels of the atom, $\hat a$ be the annihilation operator of the cavity mode, and $\omega_c$ and $\omega_{eg}$ be the cavity mode frequency and the atomic transition frequency, respectively.
The Hamiltonian for the atom-cavity setup can be written as~\cite{reiserer2015} \begin{align} H&= \hbar\omega_c{\hat a}^\dagger \hat{a} + \hbar \omega_{eg}\dyad{e}{e} -\hbar \qty[g \sigma_{eg}\hat{a}+ g^*\sigma_{ge} \hat{a}^\dagger], \end{align} where $\sigma_{ij} \equiv \ket{i}\bra{j}$ represents the transition operators for the two-level atom, $g=\dfrac{d_{eg}}{\hbar}\sqrt{\dfrac{\hbar \omega_c}{2\epsilon_0V}}$ is the coupling constant between the atom and the cavity, and $d_{eg}$ is the transition dipole moment between the levels $\ket{e}$ and $\ket{g}$. The optical cavity is also coupled to an input electromagnetic field mode $\hat a_{\rm in}$ and yields an output mode $\hat a_{\rm out}$ upon interaction of the input field with the cavity. Using the standard input-output formalism~\cite{gardiner1985}, the input, output and cavity modes are related as \begin{align} {\hat{a}}_{\text{out}}(t)- {\hat{a}}_{\text{in}}(t)&=\sqrt{\kappa}~ {\hat{a}}(t).\label{tin} \end{align} Our goal is to calculate the output field mode $\hat a_{\rm out}(t)$ as a function of time for a given input field mode $\hat a_{\rm in}(t)$. In order to achieve that, we need to solve for $\hat a(t)$. The dynamical equations for the atom-cavity setup can be written as~\cite{gardiner1985} \begin{align} \dv{}{t} \hat{a}(t) &= \dfrac{1}{\mathrm{i} \hbar}[\hat a, H] -\qty(\dfrac{\kappa}{2}) \hat{a} -\sqrt{\kappa} \hat{a}_{\text{in}},\label{ip1} \\ \dv{}{t} {\sigma_{ge}}(t) &= \dfrac{1}{\mathrm{i} \hbar}[\sigma_{ge}, H]-\qty(\dfrac{\gamma}{2})\sigma_{ge}, \label{ip2} \end{align} where $\kappa$ is the rate at which the cavity mode leaks out of the cavity and $\gamma$ is the free-space spontaneous emission rate of the atom. The atom-cavity setup is usually operated in one of two parameter regimes: (i) the strong coupling regime, in which the atom-cavity coupling dominates, i.e., $g \gg \kappa,\gamma$.
As a result, the system undergoes a series of damped Rabi oscillations before the photon leaks out of the cavity~\cite{reiserer2015}. (ii) The bad-cavity or Purcell regime, in which the cavity decay rate $\kappa$ dominates, characterized by \mbox{$\kappa>g^2/\kappa>\gamma$}~\cite{agarwal2012}. In the Purcell regime, the atomic decay rate into the cavity mode is enhanced over the free-space decay, as a result of which the atom predominantly decays into the cavity mode~\cite{agarwal2012}. We use this regime in our protocol, since it enables the emission of the photon-echo into the cavity mode, and the emitted photon quickly leaks out of the cavity at a rate $\kappa$ before it can be reabsorbed. \section{Results}\label{Sec:Results} The conventional I-AFC based quantum memory scheme uses an ensemble of atoms to store single photons. However, since each atom in an I-AFC contains a full frequency comb, a single atom should be capable of storing photons, provided the atom and the light couple strongly enough. In principle, one can use a single-mode optical cavity to tune the coupling between the atom and the photons and thereby realize quantum memory using a single atom. In this section, we explore this possibility and show that with a proper choice of atom-cavity parameters, one can realize an efficient quantum memory. We also present the implementation scheme using Rubidium and Cesium atoms, as examples, coupled to nanophotonic waveguide cavities. \subsection{Quantum memory using single-atom I-AFC coupled to a cavity} \label{saqm} Consider an atom that contains a frequency comb, coupled to a high-finesse single-mode optical cavity~[Fig.~\ref{iafccav}].
The Hamiltonian for such an atom-cavity system consists of three parts: the free Hamiltonian of the single-mode cavity, the free Hamiltonian of the atom, and the interaction between the two systems, which reads \begin{align}\label{ham} H=&H_{\text{c}}+H_{\text{a}}+H_{\text{int}}\nonumber\\ \begin{split} =&\hbar \omega_c\hat{a}^\dagger \hat{a} + \sum_{n=1}^{N_e}\hbar \omega_n^e\dyad{e_n}{e_n}+\sum_{m=1}^{N_g}\hbar \omega_m^g\dyad{g_m}{g_m}\\ &-\hbar\qty[\sum_{n,m} g_{nm}\dyad{e_n}{g_m}\hat{a}+\sum_{n,m} g_{nm}^*\dyad{g_m}{e_n} \hat{a}^\dagger], \end{split} \end{align} where $\hat{a}$ is the photon annihilation operator for the cavity mode. $\ket{g_m}$ and $\ket{e_n}$ denote the $m$-th ground state and the $n$-th excited state, respectively, with coupling strength $g_{nm}=\dfrac{d_{nm}}{\hbar}\sqrt{\dfrac{\hbar \omega_c}{2\epsilon_0V}}$. Here $d_{nm}$ is the transition dipole moment of the $\ket{g_m}\leftrightarrow\ket{e_n}$ transition and $\omega_c$ is the resonance frequency of the cavity. The dynamical equations for the cavity field operator $\hat{a}$ and the atomic lowering operator $\sigma_{mn}^-\equiv\dyad{g_m}{e_n}$, obtained using the input-output formalism discussed in Sec.~\ref{acs}, read \begin{align} \dv{\hat{a}}{t}&=-i\omega_c\hat{a}+i\sum_{n,m}g_{nm}^*\sigma_{mn}^{-}-\dfrac{\kappa}{2}\hat{a}-\sqrt{\kappa}{\hat{a}_{\text{in}}}, \label{r1} \\ \dv{{\sigma}_{mn}^-}{t}&=-\mathrm{i}(\omega_n^e-\omega_m^g)\sigma_{mn}^-+\mathrm{i} g_{nm}(\sigma_{mm}-\sigma_{nn})\hat{a} -\dfrac{\gamma}{2}\sigma_{mn}^-, \label{r2} \\ \sqrt{\kappa}~\hat{a}(t) & = \hat{a}_{\text{out}}(t)-\hat{a}_{\text{in}}(t). \label{r3} \end{align} Here $\gamma$ is the spontaneous decay rate of the atom in free space and $\kappa$ is the decay rate of the cavity field.
Solving these equations in the frequency domain using the low-intensity approximation ($\expval{\hat{a}^\dagger_{\text{in}} \hat{a}_{\text{in}}}\lesssim 1$), which amounts to $\sigma_{nn} \approx 0$~\cite{hu2015}, yields \begin{align} {\hat{a}}_{\text{out}}(\omega) =\qty[1-\dfrac{\kappa}{\mathrm{i}(\omega+\Delta_c)+\mathcal{D}(\omega)+\dfrac{\kappa}{2}}]{\hat{a}}_{\text{in}} (\omega). \label{e6} \end{align} Here \begin{align} \mathcal{D}(\omega)&=\sum_{n,m}\dfrac{\sigma_{mm} \abs{g_{nm}}^2}{\qty[i(\omega+\delta_{nm})+\dfrac{\gamma}{2}]}, \label{e7} \end{align} is the I-AFC propagator~\cite{iafc2019}, and $\Delta_c=\omega_c-\omega_L$ and $\delta_{nm}=(\omega_n^e-\omega_m^g)-\omega_L$ are the detunings with respect to the input light. An inverse Fourier transform of Eq.~(\ref{e6}) yields the output field in time, $\hat a_{\rm out}(t)$~\cite{milburn2015}. In order for this atom-cavity setup to qualify as a quantum memory, there must be a delay between the input and output light. We solve Eq.~(\ref{e6}) numerically, taking the initial state of the atom as an equal superposition of the ground states, and plot the intensity of the output field $I_{\rm out}=\left\langle \hat a^\dagger_{\rm out}(t)\hat{a}_{\rm out}(t)\right\rangle$ as a function of time $t$ [Fig.~\ref{figa}]. Here we have considered an atom with an I-AFC having seven teeth with uniform comb spacing $\Delta=300$ MHz, tooth width $\gamma = 7.5$ MHz and detuning $\Delta_c=0$. The solid and dashed curves in this figure correspond to different cavity decay rates $\kappa$. In this figure, we can clearly see that the first prominent output pulse of light appears at time $t = 2\times 10^{-9}$~s, which is due to the immediate reflection from the cavity. The second prominent output pulse occurs at $t \sim 5.5\times 10^{-9}$~s, which is due to the emission from the cavity. The delay of $3.5$~ns, which is approximately $2\pi/\Delta$, is due to the interaction of light with the setup.
Hence the atom-cavity setup behaves like an I-AFC. \begin{figure}[!htb] \subfigure[\label{iafccav}]{\includegraphics[width=6cm]{figures/labelc.pdf}} \hspace{1cm} \subfigure[\label{figa}]{\includegraphics[width=6cm]{figures/idlecho2.pdf}} \caption{\subref{iafccav} Schematic diagram of an I-AFC inside a cavity. Here, the I-AFC interacts with a single cavity mode with decay rate $\kappa$. $\hat{a}_{\rm in}$ and $\hat{a}_{\rm out}$ represent the input and output cavity field operators. $\gamma$ is the spontaneous decay rate of the atom into free space. \subref{figa} Photon-echo after a delay of $3.5$~ns for an ideal I-AFC coupled to a cavity, with uniform comb spacing $\Delta=300$ MHz, tooth width $\gamma = 7.5$ MHz and cavity detuning $\Delta_c=0$. The two photon echoes shown by the dashed and solid curves correspond to cavity decay rates of $7$ and $4$ GHz, with corresponding efficiencies of $94.22\%$ and $72.02\%$, respectively.}\label{f1a} \end{figure} \begin{figure*}[!htb] \subfigure[\label{figb}]{\includegraphics[width=4cm]{figures/idealg.pdf}} \subfigure[\label{figb1}]{\includegraphics[width=4cm]{figures/idealk.pdf}} \subfigure[\label{figb2}]{\includegraphics[width=4cm]{figures/dcidvar.pdf}} \subfigure[\label{finesse}]{\includegraphics[width=4cm]{figures/finesse.pdf}} \caption{Effect of various parameters on the efficiency ($\eta$) of quantum memory in the single-atom-cavity setup for an ideal comb with uniform comb spacing $\Delta=300$ MHz. In \subref{figb} we plot the variation of $\eta$ as a function of $g'$ for fixed values of $\kappa$. \subref{figb1} shows the variation of $\eta$ as a function of $\kappa$ for fixed values of $g'$.
In \subref{figb2} and \subref{finesse}, we plot the variation of $\eta$ as a function of the cavity detuning $\Delta_c=\omega_c-\omega_L$ and the comb finesse ($\mathcal{F}$), respectively, for $(g',\kappa)=(1.8,11)$ GHz.}\label{f1} \end{figure*} Eqs.~\eqref{e6} and~\eqref{e7} suggest that the cavity parameters $g_{nm}$, $\kappa$ and $\Delta_c$ also play a role in the output field and can affect the quality of the memory. To quantify the quality of the quantum memory, we generalize the definition of the efficiency [Eq.~\eqref{eta}] for the bulk I-AFC protocol to the current scenario as \begin{align} \eta= \dfrac{\int_{\pi/\Delta}^{3\pi/\Delta} \expval{\hat a^\dagger_{\text{out}}(t) \hat a_{\text{out}}(t)} \, dt}{\int \expval{\hat a^\dagger_{\text{in}}(t) \hat a_{\text{in}}(t)} dt}. \label{etac} \end{align} For an ideal I-AFC, since all the peaks are identical, i.e., $d_{nm} \equiv d$, we may write $g_{nm}=g$. We define {$g'=\sqrt{\sigma_{mm}} g$} as the effective coupling constant. In Fig.~\ref{figb}, we plot the variation of $\eta$ as a function of $g'$, keeping $\kappa$ constant and the cavity detuning $\Delta_c = 0$. We have considered an I-AFC having seven teeth with uniform comb spacing $\Delta = 300$ MHz and tooth width $\gamma = 7.5$ MHz. We have also numerically optimized the efficiency with respect to the spectral width of the incoming pulse for each parameter. Fig.~\ref{figb} shows that for every given value of $\kappa$ there exists an optimum value of $g'$ which maximizes the efficiency. A similar trend is observed when we vary $\kappa$ with a fixed value of $g'$ in Fig.~\ref{figb1}. From these two figures, we find that the efficiency is maximized for $(g',\kappa) = (1.8,11)$ GHz in our case. In Fig.~\ref{figb2}, we plot the variation of the efficiency $\eta$ as a function of the cavity detuning $\Delta_c(=\omega_c-\omega_L)$ while keeping $(g',\kappa) = (1.8,11)$ GHz. As expected, it shows a drop in the efficiency as the cavity detuning $\Delta_c$ increases.
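The role of the cavity parameters can also be checked directly at the level of the transfer function in Eq.~\eqref{e6}. The following sketch evaluates $\hat{a}_{\rm out}(\omega)/\hat{a}_{\rm in}(\omega)$ for the ideal seven-tooth comb, with all rates treated as angular frequencies in GHz (the values $g'=1.8$ and $\kappa=11$ GHz are those quoted above; the unit convention is an illustrative assumption). It verifies two generic features: the setup is passive, $|H(\omega)|\le 1$, and nearly transparent far from the comb:

```python
# Transfer function of Eq. (e6) for an ideal 7-tooth comb, teeth at delta_n = n*Delta.
# All rates in angular-frequency units of GHz (illustrative convention):
# g' = 1.8, kappa = 11, Delta = 0.3 (300 MHz), gamma = 0.0075 (7.5 MHz).
g2, kappa, Delta, gamma, Delta_c = 1.8**2, 11.0, 0.3, 0.0075, 0.0

def D(w):
    """I-AFC propagator, Eq. (e7), for teeth n = -3..3 with equal weights."""
    return sum(g2 / (1j * (w + n * Delta) + gamma / 2) for n in range(-3, 4))

def H(w):
    """a_out(w)/a_in(w) from Eq. (e6)."""
    return 1 - kappa / (1j * (w + Delta_c) + D(w) + kappa / 2)

ws = [i * Delta / 200 for i in range(-400, 401)]   # scan +/- 2 comb spacings
mags = [abs(H(w)) for w in ws]
assert max(mags) <= 1 + 1e-9        # passive: no gain at any frequency
assert min(mags) < 1.0              # some loss from the cavity within the comb
assert abs(H(1000 * Delta)) > 0.9   # nearly transparent far off resonance
```

Passivity follows because the real part of $\mathcal{D}(\omega)$ is non-negative, so the denominator in Eq.~\eqref{e6} always has a real part of at least $\kappa/2$.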
Apart from these parameters, the optimized efficiency also depends on the comb finesse $\mathcal{F}$, which is defined as the ratio of the comb spacing to the tooth width, i.e., $\mathcal{F}\equiv\Delta/\gamma$. In Fig.~\ref{finesse}, we plot the efficiency as a function of the comb finesse for the ideal comb with fixed comb spacing $\Delta=300$ MHz, by changing the tooth width $\gamma$ while keeping the cavity parameters fixed at $(g',\kappa)=(1.8,11)$ GHz. This plot shows that the efficiency saturates to $\sim 100\%$ asymptotically for high values of the finesse. Note that the solution for the output field in Eq.~\eqref{e6} is derived under the approximation that there is negligible absorption of the input field by the atom ($\sigma_{nn}\sim 0$); however, the efficiency of the storage by this system is still high. To understand this, we consider the expression for the susceptibility $\chi$ of the joint atom-cavity system, which reads~\cite{chang2011} \begin{align} \chi =\sqrt{\dfrac{2 V}{\epsilon_0\hbar \omega_c}}\dfrac{\expval{P}}{ {\expval{a}}},\label{chi} \end{align} where $P=\dfrac{1}{V}\sum_{n,m}d^*_{mn}\sigma_{mn}^-$ is the atomic polarization of the atom exhibiting the I-AFC and $V$ is the cavity mode volume. The above expression for the susceptibility is equivalent to the classical field susceptibility $\chi_e=\dfrac{\expval{P}}{\epsilon_0 \mathcal{E}}$, with the classical field amplitude $\mathcal{E}$ replaced by the expectation value of the $\hat{a}$ operator. In Fig.~\ref{comb} we plot the absorption of the atom-cavity system, where the absorption is the imaginary part of the joint susceptibility $\chi$. From this figure, we can see that the absorption profile of the joint atom-cavity system shows a comb-like structure similar to the absorption profile of an I-AFC. This comb-like structure is responsible for the photon-echo, as shown in Fig.~\ref{figa}.
Thus the atom and the cavity together account for the photon storage, even though the absorption by the atom is negligible. \begin{figure}[!htb] \includegraphics[height=4.5cm,width=6cm]{figures/idscomb2.pdf} \caption{Plot of the absorption of the joint atom-cavity system for an ideal frequency comb with uniform comb spacing $\Delta=300$ MHz, tooth width $\gamma = 7.5$ MHz and cavity detuning $\Delta_c=0$.} \label{comb} \end{figure} \begin{figure*}[!thb] \subfigure[\label{figrb}]{\includegraphics[height=5cm,width=5.5cm]{figures/rbscomb1.pdf}} \hspace{2.5pt} \subfigure[\label{figcs}]{\includegraphics[height=5cm,width=5.5cm]{figures/csscomb1.pdf}} \caption{\subref{figrb} and \subref{figcs} represent the frequency combs in Rb and Cs atoms for the transitions $5s_{1/2}\leftrightarrow 6p_{3/2}$ in Rb and $6s_{1/2} \leftrightarrow 7p_{3/2}$ in Cs. The applied magnetic field strengths for Rb and Cs are taken to be 0.15 and 0.1 T, respectively.}\label{acomb} \end{figure*} This scheme can also be used to store polarization and time-bin photonic qubits. AFC and I-AFC based quantum memories are known for storing time-bin qubits efficiently~\cite{afzelius2009,gundougan2015,ortu2022}. Moreover, it has been shown that an I-AFC can store polarization states of light~\cite{iafc2021}. For storing a polarization qubit, we can consider the same atom-cavity setup with the single atom containing two overlapping frequency combs corresponding to two different polarizations. Generally, the efficiency and the photon-echo time for the two polarizations can be different. By choosing the cavity parameters appropriately, one can store an arbitrary polarization in these systems~\cite{iafc2021}. To conclude this section, we have shown that a single atom with an I-AFC coupled to an optical cavity can store photons efficiently. We have shown the effect of various parameters on the quality of the storage and estimated the optimum values of the parameters for the most efficient storage.
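The comb-like absorption profile of the joint system can be traced back to the propagator $\mathcal{D}(\omega)$ of Eq.~\eqref{e7}: its real part, which acts as an absorption rate in Eq.~\eqref{e6}, is a sum of Lorentzians peaked at the tooth centres. A quick numerical sketch (the parameter values are the illustrative ones used for the ideal comb above):

```python
# Re D(w) for the ideal comb of Eq. (e7): a sum of Lorentzians of width gamma,
# centred at w = -n*Delta. Units: GHz (illustrative); finesse = Delta/gamma = 40.
Delta, gamma, g2 = 0.3, 0.0075, 1.8**2

def absorption(w):
    """Real (absorptive) part of the I-AFC propagator for teeth n = -3..3."""
    return sum((g2 * gamma / 2) / ((w + n * Delta) ** 2 + (gamma / 2) ** 2)
               for n in range(-3, 4))

peak = absorption(0.0)            # centre of the n = 0 tooth
valley = absorption(Delta / 2)    # midway between two teeth
assert peak / valley > 100        # high contrast: a well-resolved comb
```

The large peak-to-valley contrast reflects the high comb finesse ($\Delta/\gamma = 40$ here) needed for a clean echo.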
The results obtained here are also interesting from a fundamental point of view. We see that even though the I-AFC is necessary to store the photons in the atom-cavity system proposed here, the excitation probability of the atom is negligible. The interaction of the I-AFC with the cavity yields the comb-like absorption profile of the joint atom-cavity system, which enables efficient quantum memory. In the following section, we present examples of systems capable of realizing this quantum memory protocol. \subsection{Realizing the quantum memory using Rb and Cs atoms} \label{Implemen} \begin{figure*}[!thb] \subfigure[\label{figc}]{\includegraphics[height=4.8cm,width=6.5cm]{figures/rbcsecho1.pdf}} \hspace{2.5pt} \subfigure[\label{figv}]{\includegraphics[height=4.8cm,width=5cm]{figures/csrbk.pdf}} \caption{\subref{figc} Photon-echo for the I-AFC in Rb and Cs atoms. \subref{figv} Variation of the efficiency as a function of $\kappa$ for Rb and Cs atoms.}\label{eff} \end{figure*} So far, we have discussed the photon storage assuming an ideal frequency comb with uniform comb spacing and equal peak heights, interacting with an optical cavity. However, in realistic systems such as Rb and Cs atoms, the frequency combs are usually non-uniform with unequal peak heights, which affects the storage process~\cite{iafc2019,teja2021}. In this section, we discuss the possibilities for experimental implementation of the single-atom based quantum memory protocol in realistic systems such as Rb and Cs atoms coupled to a nanophotonic waveguide cavity, and show that the current scheme can be implemented with existing experimental techniques. As discussed in Sec.~\ref{saqm}, one of the requirements for achieving efficient quantum memory in the I-AFC-cavity setup is a cavity with a high coupling strength $g$, of the order of GHz (see Fig.~\ref{f1}). This, in turn, requires a cavity with a low mode volume, of the order of $\mu\text{m}^3$.
Such strong coupling is difficult to achieve using conventional Fabry-P\'erot cavities, but can be achieved using nano-cavities~\cite{van2011,thompson2013}, where mode volumes $V \sim \lambda^3$ have already been realized. Apart from this, strong coupling has been realized in fiber-based Fabry-P\'erot cavities~\cite{hunger2010}, where the mirror surfaces of the cavity are fabricated on the optical fiber end faces. This tight confinement using nano-photonic cavities gives an additional advantage of potential integration with nano-photonics. Trapping in such low mode volumes results in strong atom-cavity coupling of the order of $g\sim$ GHz, along with a quality factor $Q=\omega_c/\kappa \sim 10^5$~\cite{van2011}. \begin{table}[!htb] \small \begin{tabular}{|c|c|c|c|c|c|c|} \hline Atom & Transition & $\lambda$ (nm) & $B$ (T) & $V(\mu \text{m})^3$ & $\kappa$ (GHz) & $Q$\\ \hline Rb~\cite{sansonetti2006}& $5s_{1/2}\leftrightarrow 6p_{3/2} $ &420.3 & 0.15 & $20$ & $\sim 7$ & $10^5$\\ \hline Cs~\cite{sansonetti2009}& $6s_{1/2} \leftrightarrow 7p_{3/2} $ &455.66 & 0.1 & $ 20$ & $\sim 8$ & $10^5$\\ \hline \end{tabular} \caption{Rb and Cs parameters used in the numerical calculations. $\lambda$ is the transition wavelength and $B$ is the magnetic field used in obtaining the I-AFC. $V$, $\kappa$ and $Q$ are the mode volume, decay rate and quality factor of the cavity, respectively.}\label{tab} \end{table} Although the scheme presented in this paper is applicable to a large class of atoms, molecules and quantum dots, here we consider Cs and Rb atoms as examples to realize this quantum memory protocol. The parameters used in our calculations for the Rb and Cs atoms, such as the transitions, the wavelengths and the applied magnetic field strengths, are given in Table~\ref{tab}. In Figs.~\ref{figrb} and \ref{figcs}, we plot the frequency combs obtained in Rb and Cs atoms. Clearly, these frequency combs are neither uniform in the comb spacing nor do they have equal absorption peaks.
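The impact of such non-uniformity can be illustrated with a toy rephasing calculation: irregular tooth spacings and unequal tooth weights prevent the collective dipole from rephasing completely at $t=2\pi/\Delta$, which lowers the echo. The jitter and weight values below are arbitrary, purely for illustration:

```python
import cmath
import math

Delta = 2 * math.pi * 300e6       # nominal comb spacing (angular frequency)
t_echo = 2 * math.pi / Delta      # ideal rephasing time

def echo_peak(detunings, weights):
    """Normalized |sum_n A_n exp(i*delta_n*t_echo)|^2 at the nominal echo time."""
    amp = sum(a * cmath.exp(1j * d * t_echo) for d, a in zip(detunings, weights))
    return abs(amp) ** 2 / sum(weights) ** 2

ideal = echo_peak([n * Delta for n in range(7)], [1.0] * 7)

# irregular spacings (up to ~10% jitter) and unequal tooth weights (arbitrary)
jitter = [0.00, 0.04, -0.07, 0.10, -0.03, 0.08, -0.09]
real_d = [(n + jitter[n]) * Delta for n in range(7)]
real_w = [1.0, 0.6, 1.3, 0.8, 1.1, 0.5, 0.9]

assert abs(ideal - 1.0) < 1e-9            # ideal comb rephases completely
assert echo_peak(real_d, real_w) < ideal  # non-uniform comb gives a weaker echo
```

This toy model captures only the dephasing effect; the full efficiencies quoted below come from solving Eq.~\eqref{e6} with the actual Rb and Cs combs.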
In Fig.~\ref{figc} we show the photon-echo from Rb and Cs atoms, calculated numerically by solving Eq.~\eqref{e6}. The maximum efficiencies for Rb and Cs atoms are found to be $92.9\%$ and $90.36\%$, respectively, for the parameters specified in Table~\ref{tab}. In Fig.~\ref{figv}, we plot the variation of the efficiency as a function of the cavity decay rate $\kappa$ for Rb and Cs atoms. It is clear that the trend is similar to that of an ideal comb, with a peak value of $\sim 90\%$. The lower efficiencies in the case of Rb and Cs atoms are due to the inherent non-uniformity of the frequency combs. This non-uniformity is attributed to the different values of the comb spacing $\Delta_{n}$ and of the dipole matrix element $d_{nm}$ corresponding to the transition $\ket{e_n}\leftrightarrow \ket{g_m}$. Our calculations show that an efficient quantum memory using a single atom coupled to an optical cavity can be implemented using current experimental techniques. \section{Conclusion}\label{Sec:Conclusion} On-chip photonic quantum memories are essential for scalable and integrated photonic quantum information processing. Most of the currently available quantum memory protocols require atomic ensembles or bulk materials to store photons. Here we have proposed a new scheme to store photons using only a single atom coupled to an optical cavity. The atom exhibits an I-AFC, which enables the joint atom-cavity system to store photons. This provides a possible route to an on-chip quantum memory suitable for integrated photonic chips. The proposed setup is capable of efficiently storing time-multiplexed photons along with their polarization degree of freedom, hence providing a multi-mode photonic quantum memory. Theoretically, our quantum memory protocol can store photons with $\sim100\%$ efficiency. Although we have presented this quantum memory protocol using trapped atoms, it can very well work with quantum dots and quantum defect centers.
The advantage of working with quantum dots and defect centers lies in the operating temperature: for atoms, the protocol requires temperatures of $10^{-6}-10^{-3}$ K, whereas quantum dots can operate at $\sim 1$ K. Since deterministic single photon sources have already been realized using quantum dots, combining them with the on-chip quantum memory can provide a robust integrated platform for photonic quantum computation. \begin{acknowledgments} Chanchal acknowledges the Council of Scientific and Industrial Research (CSIR), Government of India, for financial support through a research fellowship (Award No. 09/947(0106)/2019-EMR-I). S.K.G. acknowledges the financial support from the Inter-disciplinary Cyber Physical Systems (ICPS) program of the Department of Science and Technology, India (Grant No. DST/ICPS/QuST/Theme-1/2019/12). \end{acknowledgments}
\section{Introduction} \label{sec1} The population of low-mass black holes (BHs) in nearby dwarf galaxies, i.e. galaxies with $M_\mathrm{g}\leq10^{9.5}M_{\sun}$, plays a key role in shedding light on BH formation and growth in the early Universe. One reason for this is that the probability of undergoing merging events is lower for dwarf galaxies than for their larger counterparts, and therefore the masses of their central BHs are likely to remain close to their ``birth'' values \citep[e.g.][]{Reines2016, Greene2020}. These BHs with masses of $10^2 M_{\sun} \leq M_\mathrm{bh} \leq 10^6 M_{\sun}$ are typically classified as intermediate-mass black holes (IMBHs). Finding and weighing them would enable us to distinguish between different BH seed formation mechanisms: those involving a direct collapse at the mass level of $M_\mathrm{bh}\sim10^4M_{\sun}$ and those involving population III star seeds with a typical mass of $M_\mathrm{bh}\sim10^2M_{\sun}$ \citep[e.g., the simulation by][]{Volonteri2010}. At present, there are several hundred accreting IMBH candidates in dwarf galaxies that exhibit optical spectroscopic or X-ray signatures \citep[fraction $<1$ per cent,][]{Reines2013, Pardo2016}. Among them, only a small number of IMBHs \citep[e.g., twelve in][]{Schutte2019} have been identified in dwarf galaxies hosting active galactic nuclei (AGN). About 0.3 per cent of dwarf galaxies have radio counterparts \citep{Reines2020} in the FIRST \citep[Faint Images of the Radio Sky at Twenty Centimeters,][]{Becker1995} survey. High-resolution very long baseline interferometry (VLBI) observations of these radio counterparts provide direct insight into their nature and emission mechanism, involving non-thermal radio jet/outflow activity. Both jets and wide-opening-angle winds are major ingredients in the feedback mechanisms which are reflected in the co-evolution between BHs and galaxy bulges, i.e.
the $M_{\rm bh}$--$\sigma$ correlation, in which $\sigma$ is the stellar velocity dispersion \citep[e.g.][]{Greene2020, Kormendy2013}, and the $M_{\rm bh}$--$M_{\rm bulge}$ correlation \citep[e.g.,][]{Schutte2019}. Revealing radio AGN activity would enable us to probe the feedback of the IMBHs \citep[e.g.][]{Greene2020, Manzano-King2019}. In addition, VLBI detections of compact radio cores would provide data points for filling the gap between supermassive and stellar-mass BHs for testing the mass-dependent relations \citep[e.g. the fundamental plane relation, ][]{Yuan2014} and allow us to search for off-nuclear IMBHs \citep{Reines2020} formed by galaxy mergers \citep[e.g.][]{Bellovary2019}. To date, there have been a few high-resolution imaging observations of IMBHs, e.g. POX 52 \citep{Thornton2008}, Henize~2--10 \citep{Reines2012} and NGC~404 \citep{Paragi2014}. However, radio jets or steady radio-emitting polar outflows, compact on sub-pc scales, have been revealed in only one dwarf galaxy, NGC~4395 \citep{Wrobel2006}. The dwarf elliptical galaxy SDSS J090613.77$+$561015.2{} at redshift $z=0.0465$ hosts an AGN and has a stellar mass of $2.3\times10^{9} M_{\sun}$ \citep[source ID: 9, ][]{Reines2013}. It displays not only narrow-line AGN signatures but also persistent broad H$\alpha$ emission \citep[source ID: RGG 9,][]{Baldassare2016}. Based on high spectral resolution observations, \citet{Baldassare2016} estimated the mass of its BH as $M_{\rm bh}=3.6^{+5.9}_{-2.3}\times10^5 M_{\sun}$ (including the systematic uncertainty of 0.42 dex). In the long-slit spectroscopy with the Keck I telescope, it shows spatially extended ionized gas outflows that are most likely AGN-driven, because their velocity (701$\pm$7 km~s$^{-1}$) exceeds the escape velocity (303$\pm$35 km~s$^{-1}$) of its halo \citep{Manzano-King2019}.
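As a cross-check of the scales involved, the projected linear scale at $z=0.0465$ can be computed from the flat $\Lambda$CDM cosmology adopted in this paper (H$_0=71$ km~s$^{-1}$~Mpc$^{-1}$, $\Omega_{\rm m}=0.27$, $\Omega_\Lambda=0.73$); a minimal numerical sketch:

```python
import math

# Flat LambdaCDM with the parameters adopted in this paper
H0, Om, OL = 71.0, 0.27, 0.73           # H0 in km/s/Mpc
c = 299792.458                          # speed of light, km/s
z = 0.0465                              # redshift of SDSS J090613.77+561015.2

def comoving_distance(z, steps=1000):
    """(c/H0) * integral of dz'/E(z') in Mpc, by the trapezoidal rule."""
    E = lambda zp: math.sqrt(Om * (1 + zp) ** 3 + OL)
    h = z / steps
    s = 0.5 * (1 / E(0) + 1 / E(z)) + sum(1 / E(i * h) for i in range(1, steps))
    return (c / H0) * h * s

d_a = comoving_distance(z) / (1 + z)    # angular diameter distance, Mpc
mas = math.pi / (180 * 3600 * 1000)     # one milliarcsecond in radians
scale = d_a * 1e6 * mas                 # projected pc per mas
assert abs(scale - 0.9) < 0.05          # matches the 0.9 pc/mas quoted in the text
```

The result, $\approx0.9$ pc~mas$^{-1}$, is the conversion used for the VLBI images throughout the Letter.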
It is a slightly resolved point-like source with a total flux density of 22.4$\pm$4.1~mJy in the GMRT (Giant Metrewave Radio Telescope) 150~MHz all-sky radio survey \citep{Intema2017} and 4.7$\pm$0.2~mJy in the 1.4~GHz FIRST survey \citep{Becker1995}. It has a flux density of 0.93$\pm$0.05~mJy at 9.0~GHz and 0.78$\pm$0.04~mJy at 10.65~GHz in the Karl G. Jansky Very Large Array (VLA) observations \citep[source ID: 26,][]{Reines2020}. This Letter is composed as follows. In Section~\ref{sec2}, we describe our European VLBI Network (EVN) observations and data reduction. In Section~\ref{sec3}, we present the high-resolution imaging results. In Section~\ref{sec4}, we discuss the physical nature of the detected components and the implications of our findings. Throughout the paper, a standard $\Lambda$CDM cosmological model with H$_\mathrm{0}$~=~71~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_\mathrm{m}$~=~0.27, $\Omega_{\Lambda}$~=~0.73 is adopted; the VLBI images then have a scale of 0.9~pc mas$^{-1}$. \begin{table} \caption{Summary of the 1.66 GHz EVN observations of SDSS J090613.77$+$561015.2{}. } \label{tab1} \begin{tabular}{cl} \hline\hline Observing date and time & Project code and participating stations \\ \hline 2017 Jan 17, 18h--20h UT & RSY05: JbWbEfMcO8TrT6 \\ 2017 Nov 6, 00h--12h UT & EY029: JbWbEfMcO8TrT6UrSvZcBd \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=0.47\textwidth, clip=true]{fig1.eps} \\ \caption{ The jet structure of the phase-referencing calibrator J0854$+$5757. The 1.66-GHz EVN image has a dynamic range of $\frac{S_{\rm pk}}{\sigma_{\rm rms}}=$30\,250. The contours start at 0.016 mJy beam$^{-1}$ (3$\sigma$) and increase by a factor of 2. The peak brightness is 484~mJy beam$^{-1}$.
To properly show the faint jet structure, we used a large circular restoring beam with a FWHM of 8~mas.} \label{fig1} \end{figure} \section{VLBI observations and data reduction} \label{sec2} We observed SDSS J090613.77$+$561015.2{} for two hours on 2017 January 17 with the e-EVN and for twelve hours on 2017 November 6 with the full EVN. Both observations used the standard 1-Gbps experiment setup (16 subbands, 16~MHz bandwidth per subband, dual circular polarisation, 2-bit quantisation) at the frequency of 1.66~GHz. The participating telescopes were Jodrell Bank Mk2 (Jb), Westerbork (Wb, single dish), Effelsberg (Ef), Medicina (Mc), Onsala (O8), Tianma (T6), Urumqi (Ur), Toru\'n (Tr), Svetloe (Sv), Zelenchukskaya (Zc) and Badary (Bd). Table~\ref{tab1} gives the observing time and the participating stations at each epoch. The correlation was done by the EVN software correlator \citep[SFXC,][]{Keimpema2015} at JIVE (Joint Institute for VLBI, ERIC) using the standard correlation parameters of continuum experiments. Both experiments were performed in the phase-referencing mode to obtain the calibration solutions and a reference position for our faint target. The bright source J0854$+$5757, about $2\fdg4$ away from SDSS J090613.77$+$561015.2{} and a key source in the International Celestial Reference Frame \citep{Ma1998}, was observed periodically as the phase-referencing calibrator. The calibrator has a J2000 position of RA~$=08^{\rm h}54^{\rm m}41\fs996408$ ($\sigma_{\rm ra}=0.2$~mas), Dec.~$=57\degr57\arcmin29\farcs93914$ ($\sigma_{\rm dec}=0.1$~mas) in the source catalogue of 2015a from the Goddard Space Flight Centre VLBI group\footnote{ \url{http://astrogeo.org/vlbi/solutions/rfc_2015a/}}. The calibrator position has an offset of 0.7 mas with respect to the optical position in the second data release \citep[DR2,][]{Brown2018} of the \textit{Gaia} mission \citep{Prusti2016}.
The nodding observations used a duty-cycle period of about five minutes (1~min for J0854$+$5757, 3~min for SDSS J090613.77$+$561015.2{}, 1~min for two scan gaps). During the observations, the sources were at elevations of $\geq$18$\degr$ at all European telescopes. The visibility data were calibrated using the National Radio Astronomy Observatory (NRAO) Astronomical Image Processing System \citep[\textsc{aips} version 31DEC17,][]{Greisen2003} software package. We removed the visibility data of side channels because of their low correlation amplitude while loading the data into \textsc{aips} and then ran the task \textsc{accor} to re-normalize the correlation amplitude. \textit{A priori} amplitude calibration was performed with the system temperatures and the antenna gain curves if provided by the telescopes. If these data were missing, the nominal values were used. The ionospheric dispersive delays were corrected according to the map of total electron content provided by the Global Positioning System (GPS) satellite observations. Phase errors due to the antenna parallactic angle variations were removed. After a manual phase calibration was carried out, the global fringe-fitting and bandpass calibration were performed. The calibrator J0854$+$5757 was imaged using iterations of model fitting with a group of point sources (delta functions) and self-calibration (Stokes \textit{I}) in the software package \textsc{difmap} \citep[version 2.5e, ][]{Shepherd1994}, fringe-fitting and self-calibration (Stokes \textit{RR} and \textit{LL}) in \textsc{aips}. The calibrator had a single-sided core--jet structure with a total flux density of 0.61$\pm$0.03~Jy in the first observing epoch. Due to a firmware bug in the European digital base-band converters, there were significant sensitivity losses in the second epoch.
According to the long-term light curve at 15 GHz observed by the 40-meter telescope at the Owens Valley Radio Observatory \citep{Richards2011} and published online\footnote{\url{http://www.astro.caltech.edu/ovroblazars}}, the calibrator had stable flux densities in 2017. Assuming that the calibrator had no flux density variation between the two epochs, we derived an amplitude correction factor of 1.63 in the imaging procedures. We used the jet base, the brightest component, as the reference point in the phase-referencing calibration. After about ten iterations, the deconvolved map of Stokes $I$ using natural weighting reached an image noise level of $0.016$~mJy~beam$^{-1}$, as low as the map of zero-flux-density Stokes $V$. The core--jet structure in the phase-referencing calibrator J0854$+$5757 observed in the second epoch is shown in Fig.~\ref{fig1}. In the final high dynamic range image, 82 positive point sources were used. Both the phase and amplitude self-calibration solutions were also transferred and applied to the target source. In the residual map of the target source, there are no clearly-seen systematic errors (noise peaks, strips and rings). This indicates that the phase-referencing calibration worked properly. \begin{table*} \caption{Summary of the circular Gaussian model-fitting results. 
Columns give (1) epoch, (2) component name, (3) peak brightness, (4) integrated flux density, (5--6) relative offsets in right ascension and declination with respect to component N, (7) deconvolved angular size (FWHM), (8) brightness temperature and (9) radio luminosity.} \label{tab2} \begin{tabular}{ccccrrccc} \hline\hline Epoch & Name & $S_{\rm pk}$ & $S_{\rm obs}$ & $\Delta\alpha\cos\delta$ & $\Delta\delta$ & $\theta_{\rm size}$ & $T_{\rm b}$ & $L_{\rm R}$ \\ & & (mJy beam$^{-1}$) & (mJy) & (mas) & (mas) & (mas) & (K) & (erg s$^{-1}$) \\ \hline 1 & N & 0.616$\pm$0.022 & 0.94$\pm$0.06 & $0.00\pm$0.20 & $0.00\pm$0.28 & 2.14$\pm$0.12 & 9.5$\times10^7$ & 7.7$\times10^{37}$ \\ 1 & S & 0.428$\pm$0.022 & 1.04$\pm$0.06 & $-21.06\pm$0.44 & $-51.14\pm$0.57 & 4.10$\pm$0.32 & 2.9$\times10^7$ & 8.5$\times10^{37}$ \\ \hline 2 & N & 0.781$\pm$0.016 & 0.88$\pm$0.02 & $0.00\pm$0.04 & $0.00\pm$0.05 & 1.82$\pm$0.14 & 1.2$\times10^8$ & 7.2$\times10^{37}$ \\ 2 & S & 0.449$\pm$0.016 & 0.98$\pm$0.04 & $-22.47\pm$0.21 & $-47.19\pm$0.21 & 8.02$\pm$0.42 & 7.1$\times10^6$ & 8.0$\times10^{37}$ \\ \hline \end{tabular} \end{table*} \section{Radio structure in SDSS J090613.77$+$561015.2{}} \label{sec3} The full-EVN image of SDSS J090613.77$+$561015.2{} obtained on 2017 November 6 is shown in Fig.~\ref{fig2}. Two components are detected and labelled N and S in the \textsc{clean} map. The optical centroid, reported by the \textit{Gaia} DR2 \citep{Brown2018}, is marked as a yellow cross (J2000, RA $=09^{\rm h}06^{\rm m}13\fs77047$, Dec$=56\degr10\arcmin15\farcs1482$, $\sigma_{\rm ra}=\sigma_{\rm dec}=0.8$~mas, the astrometric excess noise to set the reduced $\chi^2_{\rm r}=1$ is 7.0~mas). The large excess noise is most likely related to the extended optical morphology and a certain level of asymmetric brightness distribution in the bulge \citep{Schutte2019}. The total error, added in quadrature, is shown as a dotted yellow circle.
The two components are also detected in the first short e-EVN observations. Due to the relatively low image quality, that image is not shown here. In order to determine the parameters of the obtained brightness distribution, we fitted two circular Gaussian components to the visibility data using \textsc{difmap}. The model-fitting results including the 1$\sigma$ formal uncertainties at the reduced $\chi_{\rm r}^2=1$ are listed in Table~\ref{tab2}. The systematic errors of $S_{\rm pk}$, $S_{\rm obs}$ and $L_{\rm R}$ are about ten per cent. Using J0854$+$5757 as the reference, we estimate the coordinates of the component N as RA$=09^{\rm h}06^{\rm m}13\fs77005$ and Dec$=56\degr10\arcmin15\farcs1429$ with a total systematic error $<1$ mas. The component N is partly resolved and has a faint ($\sim$0.2 mJy) extension of about 4~mas toward the south. The separation between the components N and S is 52.3$\pm$0.3 mas. With respect to the component N, the component S has a position angle of about $-154\degr$. All total flux density measurements of SDSS J090613.77$+$561015.2{} available to date are plotted in Fig.~\ref{fig3}. Assuming no flux density variability, we fit the blue data points collected from the literature \citep{Becker1995, Intema2017, Reines2020} to a power-law spectrum $S_\nu=S_{0} \nu^\alpha$ and determine $S_0=5.94\pm0.39$~mJy and a spectral index $\alpha=-0.84\pm0.04$. According to this model, SDSS J090613.77$+$561015.2{} has a total flux density of 3.9$\pm$0.3 mJy at 1.66~GHz. Compared to this estimate, the high-resolution EVN image recovers only about 50 per cent of it. We also tried to search for diffuse radio emission. On the shortest baseline Ef--Wb, there is a hint of a faint and diffuse structure connecting the two components and extending farther on both sides. However, because of the lack of the shorter baselines, it is hard to make a reliable image for the diffuse structure from the available data.
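As a rough cross-check of the power-law fit above, the archival flux densities can be fitted directly. The sketch below uses a simple unweighted least-squares fit in log--log space, whereas the fit quoted in the text takes the measurement errors into account, so the numbers differ slightly:

```python
import numpy as np

# Archival total flux densities: GMRT 150 MHz, VLA FIRST 1.4 GHz, VLA 9.0/10.65 GHz
nu = np.array([0.15, 1.4, 9.0, 10.65])   # observing frequency (GHz)
s = np.array([22.4, 4.7, 0.93, 0.78])    # flux density (mJy)

# Unweighted linear fit of log S = log S0 + alpha * log nu
alpha, log_s0 = np.polyfit(np.log10(nu), np.log10(s), 1)
s0 = 10.0**log_s0

# Model prediction at the EVN observing frequency of 1.66 GHz
s_166 = s0 * 1.66**alpha   # close to the 3.9 mJy quoted in the text
```

The unweighted fit gives a spectral index near $-0.8$ and an extrapolated 1.66~GHz flux density consistent with the quoted 3.9~mJy, confirming that roughly half of the total flux density is resolved out in the EVN image.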
The existence of the diffuse radio structure is also expected since the source is slightly resolved (deconvolved FWHM: $2\farcs1 \times 1\farcs1$ with a major-axis position angle of 40$\degr$) in the VLA FIRST image \citep{Becker1995} and the elongation direction is roughly consistent with the overall EVN morphology extent. The next-to-last column in Table~\ref{tab2} presents an average brightness temperature, estimated as \citep[e.g.,][]{Condon1982}, \begin{equation} T_{\rm b} = 1.22\times10^{9}\frac{S_\mathrm{obs}}{\nu_\mathrm{obs}^2\theta_\mathrm{size}^2}(1+z), \label{eq1} \end{equation} where $S_\mathrm{obs}$ is the observed total flux density in mJy, $\nu_\mathrm{obs}$ is the observing frequency in GHz, $\theta_\mathrm{size}$ is the full width at half-maximum (FWHM) of the circular Gaussian model in mas, and $z$ is the redshift. The components N and S have average brightness temperatures of 1.2$\times10^8$ K and 7.1$\times10^6$ K, respectively, at 1.66~GHz in the second-epoch 12-h full-EVN observations. Due to the very limited $(u,v)$ coverage and the low image sensitivity, the component S has an underestimated $\theta_{\rm size}$ and thus an overestimated $T_{\rm b}$ in the first-epoch 2-hour-long e-EVN observation. \begin{figure} \centering \includegraphics[width=0.47\textwidth, clip=true]{fig2.eps} \\ \caption{ A two-component brightness distribution found by the EVN at 1.66 GHz in the dwarf galaxy SDSS J090613.77$+$561015.2{} hosting an accreting IMBH. The yellow cross and circle mark the optical (\textit{Gaia} DR2) centroid and the total 1$\sigma$~error, respectively. The contours start at 0.048 mJy~beam$^{-1}$ (3$\sigma$) and increase by a factor of two. The peak brightness is 0.76 mJy~beam$^{-1}$.
The restoring beam is 5.81~mas~$\times$~4.05~mas (FWHM) at $-3\fdg03$ position angle and plotted in the bottom-left corner.} \label{fig2} \end{figure} \begin{figure} \centering \includegraphics[width=0.47\textwidth, clip=true]{fig3.eps} \\ \caption{ The broad-band radio spectrum of SDSS J090613.77$+$561015.2{}. The blue points are from the earlier total flux density observations by the VLA \citep{Becker1995, Reines2020} and the GMRT \citep{Intema2017}. The two red points are from our high-resolution EVN observations. The black line shows the best-fit power-law spectrum to the low-resolution total flux density measurements (blue points).} \label{fig3} \end{figure} \section{Discussion} \label{sec4} \subsection{The nature of the components N and S} \label{sec4-1} The only plausible explanation of the radio structure in SDSS J090613.77$+$561015.2{} appears to be an AGN manifestation. The optical spectroscopic observations of SDSS J090613.77$+$561015.2{} show that there are no signatures of on-going star-forming activity in the BPT \citep{Baldwin1981} diagrams formed with emission-line ratios \citep{Reines2020}. The persistent broad H$\alpha$ emission is consistent with an AGN origin \citep{Baldassare2016}. Therefore, we reject the possibility that the observed components represent a superposition of supernova remnants (SNRs) like in, e.g., Arp 220 \citep{Varenius2019}. Moreover, they cannot be explained as two young radio supernovae, because their radio structures are resolved on parsec scales, although their radio luminosities ($L_{\rm R}\sim8\times10^{37}~$erg~s$^{-1}$) are in the luminosity range of young radio supernovae \citep[maximum: $L_{\rm R}\sim5\times10^{38}$~erg~s$^{-1}$,][]{Weiler2002}. The component N is either the radio core, i.e. the jet base, or a newly-emerging jet component. Its proximity (within the 1$\sigma$ positional error) to the optical centroid of SDSS J090613.77$+$561015.2{} is consistent with this hypothesis.
A faint jet-like extension toward the south also appears in the component N when the north--south image resolution is slightly improved to 4 mas. We can also fit the component N to two point sources with flux densities 0.73$\pm$0.02 and 0.19$\pm$0.02 mJy and a separation of 3.9$\pm$0.4 mas. The component S is most likely an expanding ejecta. Because of its large distance ($\sim$58.5~mas) from the \textit{Gaia} centroid, it cannot be explained as the radio core. Compared to the component N, the component S has the more extended structure and the lower brightness temperature. Moreover, its position and elongation (position angle about 36$\degr$) are roughly aligned with the extension of the component N. The \textit{Gaia} positioning of the optical centroid close to the radio component N provides a strong indication of the nature of this component as the compact radio core. However, we cannot rule out that SDSS J090613.77$+$561015.2{} is a young compact symmetric object \citep[CSO, ][]{Wilkinson1994}. In this scenario, the radio components N and S are a pair of outward-moving radio ejecta (or mini-lobes) within its host galaxy, and the radio core is located somewhere in between and undetected. The latter would still be consistent with the \textit{Gaia} position within its $3\sigma$ error. Assuming a typical separation speed of $0.2c$ among CSOs \citep[e.g.][]{An2012}, the pair of components would have a kinematic lifetime of $\sim$7.5$\times$10$^2$ yr. This is a typical value in young extragalactic radio sources \citep[e.g.][]{An2012b}. Steep spectra at $\ga$1~GHz are not unknown among CSOs, e.g. J0132$+$5620 and J2203$+$1007 \citep{An2012}. Another example is the CSO PKS~1117$+$146, which has a spectral index of about $-$0.7 \citep{Torniainen2007} and a large angular separation between the opposite jet components, $\sim$70~mas \citep{Bondi1998}, and thus is similar to SDSS J090613.77$+$561015.2{}.
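The kinematic lifetime quoted above follows directly from the projected component separation, the image scale of 0.9~pc~mas$^{-1}$, and the assumed separation speed of $0.2c$; a minimal numerical sketch:

```python
# Kinematic age of the putative CSO: projected separation / separation speed
sep_mas = 52.3                    # projected N-S separation (mas)
pc_per_mas = 0.9                  # image scale at the adopted cosmology
sep_pc = sep_mas * pc_per_mas     # ~47 pc projected linear separation

PC_M = 3.0857e16                  # metres per parsec
C = 2.998e8                       # speed of light (m/s)
YEAR_S = 3.156e7                  # seconds per year

age_yr = sep_pc * PC_M / (0.2 * C) / YEAR_S   # ~7.5e2 yr, as quoted
```

Note this is the projected separation; any inclination of the jet axis to the sky plane would lengthen the true separation and hence the inferred age.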
A conclusive test on possible CSO identification of the source will be provided by its future multi-frequency and multi-epoch VLBI studies. \subsection{Implications of the presence of a radio jet associated with the IMBH} \label{sec4-2} If the component N is the stationary radio core associated with the IMBH, SDSS J090613.77$+$561015.2{} will have a relatively high radio luminosity. The source has an X-ray luminosity of $L_\mathrm{X}=4.5\times10^{40}$ erg\,s$^{-1}$ in the 2--7~keV band \citep{Baldassare2017}. This leads to a ratio of $\frac{L_\mathrm{R}}{L_\mathrm{X}}=1.8\times10^{-2}$, much higher than the typical value $\sim10^{-5}$ in the radio-quiet Palomar--Green quasar sample \citep{Laor2008}. Due to the high ratio, it is hard to explain the radio components as wide-angle radio-emitting winds. In the low accretion-rate state, there exists a correlation \citep[e.g.][]{Merloni2003} between the radio core luminosity at 5 GHz, the X-ray luminosity ($L_{\rm X}$) in the 2--10~keV band, and the BH mass ($M_{\rm bh}$): \begin{equation} \log L_{\rm R} =(0.60^{+0.11}_{-0.11}) \log L_{\rm X} + (0.78^{+0.11}_{-0.09}) \log M_{\rm bh} + 7.33^{+4.05}_{-4.07}. \label{eq2} \end{equation} According to Equation~\ref{eq2}, we would expect $L_{\rm R}=10^{36.1\pm0.9}$ erg\,s$^{-1}$. This estimate is one order of magnitude lower than the measured value, but still within the acceptable range in view of the large scatter of the correlation. Radio jets are rarely seen in dwarf galaxies. Compared to the only previously known case, NGC~4395 \citep{Wrobel2006}, SDSS J090613.77$+$561015.2{} has a jet 160 times longer and 10$^5$ times more luminous. The finding of the large jet structure indicates that violent ejections might appear at the BH growth stage of $M_{\rm bh}\sim$10$^5M_{\sun}$ in the early Universe. Most of these jets associated with low-mass BHs might be short-lived and sub-kpc objects because they have rather low radio luminosities \citep[$L_{\rm R}\leq10^{41}$ erg~s$^{-1}$ at 1.4 GHz,][]{Kunert2010, An2012b}.
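The expected $L_{\rm R}$ from Equation~\ref{eq2} can be reproduced numerically. The sketch below uses the central values of the coefficients and assumes a black hole mass of $M_{\rm bh}\approx3.6\times10^{5}\,M_{\sun}$ as reported for this source in the literature; the exact mass is an assumption here, as the Letter only quotes the order of magnitude $M_{\rm bh}\sim10^{5}\,M_{\sun}$:

```python
import math

# Fundamental-plane expectation (central values of the coefficients in Eq. 2)
log_lx = math.log10(4.5e40)    # 2-7 keV X-ray luminosity (erg/s), from the text
log_mbh = math.log10(3.6e5)    # assumed BH mass in solar masses (literature value)

log_lr_expected = 0.60 * log_lx + 0.78 * log_mbh + 7.33
# log_lr_expected ~ 36.1, i.e. about one order of magnitude below the
# measured per-component luminosity L_R ~ 8e37 erg/s (log L_R ~ 37.9)
```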
\citet{Manzano-King2019} report some AGN-driven outflows in dwarf galaxies, indicating significant AGN impact on the large-scale kinematics and gas content. The kpc-scale high-velocity ionized gas outflows in SDSS J090613.77$+$561015.2{} might be driven by the AGN jet activity, in particular by the diffuse structure that is completely resolved out in our EVN image because of the missing short baselines. The unseen kpc-scale (relic) jet component has a flux density of 2.0$\pm$0.2 mJy. According to the scaling relation $P_{\rm jet} = 5.8 \times10^{43} (\frac{L_{\rm R}}{10^{40}})^{0.70}$ erg\,s$^{-1}$ between jet kinematic power and the radio luminosity derived by \citet{Cavagnolo2010} using \textit{Chandra} X-ray and VLA radio data of radio galaxies, the unseen relic jet has a power of $P_{\rm jet}=10^{42.6\pm0.7}$ erg\,s$^{-1}$, reaching about ten per cent of the Eddington luminosity $L_{\rm Edd}=10^{43.6\pm0.4}$ erg\,s$^{-1}$ of the IMBH. Thus, the AGN jet activity might have a significant impact on the host galaxy. \section*{Acknowledgements} \label{ack} The European VLBI Network is a joint facility of independent European, African, Asian, and North American radio astronomy institutes. Scientific results from data presented in this publication are derived from the following EVN project codes: RSY05 and EY029. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. LIG acknowledges support by the CSIRO Distinguished Visitor Programme. We thank the staff of the GMRT that made these observations possible. The GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research.
This research has made use of data from the OVRO 40-m monitoring program \citep{Richards2011} which is supported in part by NASA grants NNX08AW31G, NNX11A043G, and NNX14AQ89G and NSF grants AST-0808050 and AST-1109911.
\section{Introduction} \label{sec:introduction} Language modeling is a probabilistic description of language phenomena. It provides essential context to distinguish words which sound similar and therefore has some of the most useful applications in Natural Language Processing (NLP), especially in downstream tasks like Automatic Speech Recognition (ASR). Recurrent Neural Networks (RNN), especially Long Short Term Memory (LSTM) networks \cite{hochreiter1997long}, have been the typical solution to language modeling and do achieve strong results. In spite of these results, their fundamental sequential computation constraint has restricted their use in the modeling of long-term dependencies in sequential data. To address these issues, the Transformer architecture was introduced. Transformers rely entirely on an attention mechanism to form global dependencies between input and output. They also offer more parallelization and have achieved state-of-the-art results in language modeling, outperforming LSTM models \cite{NIPS2017_7181}. In recent years, we have seen a lot of development based on this standard transformer model, particularly on unsupervised pre-training (\cite{Radford2018ImprovingLU,DBLP:journals/corr/abs-1810-04805,Dai2019TransformerXLAL,yang2019xlnet,DBLP:journals/corr/abs-1802-05365,DBLP:journals/corr/abs-1801-06146}), which has set state-of-the-art results on multiple NLP benchmarks. One such model architecture has been the Bidirectional Encoder Representations from Transformers (BERT) model, which uses a deep bidirectional transformer architecture. Another architecture of interest is the Transformer-XL, which introduces the notion of recurrence in a self-attention model. The primary research focus, though, has been mostly on the English language, for which abundant data is available. It is interesting to see the performance of these models for an agglutinative language like Finnish, which is morphologically richer than English.
In this project, we explore the implementation of Transformer-based models (BERT and Transformer-XL) in language modeling for Finnish. We will use the same training data as in \cite{Aaltodoc:http://urn.fi/URN:ISBN:978-952-60-8566-1} so that we can make fair comparisons with the performance of the LSTM models. Also, as the BERT model is a bi-directional transformer, we will have to approximate the conditional probabilities given a sequence of words. We also experiment with using sub-word units with Transformer-XL to cope with the large vocabulary problems associated with the Finnish language. With smaller units, the modeled sequences are longer, and we hope that the recursive XL architecture can allow us to still model long-term effects. To the best of our knowledge, this is the first work with the Finnish language to use the following: \begin{itemize} \item Approximation of perplexity using a BERT architecture. \item Use of the Transformer-XL architecture with sub-word units. \item Comparison of Transformer and LSTM models as language models in the same comparable settings with an agglutinative language. \end{itemize} \section{Background \& Methods} The goal of a language model is to assign meaningful probabilities to a sequence of words. Given a set of tokens $\mathbf{X}=(x_1,\ldots,x_T)$, where $T$ is the length of a sequence, our task is to estimate the joint conditional probability $P(\mathbf{X})$, which is \begin{equation} \label{cond} P(\mathbf{X})=\prod_{i=1}^{T} p\left(x_{i} | x_{1}, \ldots, x_{i-1}\right) , \end{equation} where $(x_{1}, \ldots, x_{i-1})$ is the context. An intrinsic evaluation measure of the performance of language models is perplexity (PPL), which is defined as the inverse probability of the set of tokens, normalized by taking the $T^{th}$ root, where $T$ is the number of tokens: \begin{equation} \label{ppl} PPL(\mathbf{X})= P(\mathbf{X})^{-1/T}. \end{equation} In our two approaches we use the transformer-based architectures BERT and Transformer-XL, as mentioned before.
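Equations~\ref{cond} and \ref{ppl} can be illustrated with a minimal sketch; the per-token conditional probabilities below are made up purely for illustration:

```python
import math

def perplexity(token_probs):
    """PPL = P(X)^(-1/T), computed in log space for numerical stability."""
    T = len(token_probs)
    # log P(X) is the sum of the log conditional probabilities (Eq. cond)
    log_p = sum(math.log(p) for p in token_probs)
    return math.exp(-log_p / T)

# Hypothetical conditionals p(x_i | x_1, ..., x_{i-1}) for a 4-token sequence
probs = [0.2, 0.1, 0.5, 0.25]
ppl = perplexity(probs)   # geometric-mean inverse probability
```

As a sanity check, a model that assigns a uniform probability of $1/4$ to every token yields a perplexity of exactly 4.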
Calculating the auto-regressive $P(\mathbf{X})$ for the Transformer-XL is quite straightforward, as the model is unidirectional, but the probability does not factorize the same way for a bi-directional model like BERT. BERT's bi-directional context poses a problem for calculating an auto-regressive joint probability. A simple fix could be to mask all the tokens $\mathbf{x}_{>i}$ and calculate the conditional factors as we do for a unidirectional model. By doing so, though, we lose the advantage of the bi-directional context that the BERT model enables. We propose an approximation of the joint probability as \begin{equation} \label{approx} P(\mathbf{X}) \approx \prod_{i=1}^{T} p\left(x_{i} | x_{1}, \ldots, x_{i-1}, x_{i+1}, \ldots, x_{T}\right). \end{equation} This type of approximation has been previously explored with bi-directional RNN LMs \cite{inproceedings}, but not for deep transformer models. We therefore define a pseudo-perplexity score from the above approximated joint probability. The original BERT has two training objectives: 'masked language modelling', in which input tokens are masked randomly and then predicted using the left and right context, and 'next sentence prediction', which jointly trains text-pair representations. For training the masked language model, the original BERT used Byte Pair Encoding (BPE) \cite{10.5555/177910.177914} for subword tokenization \cite{DBLP:journals/corr/SennrichHB15}. For example, the rare word "unaffable" can be split up into more frequent subwords such as ["un", "\#\#aff", "\#\#able"]. To remain consistent with the experiments performed with LSTMs, we use Morfessor for the subword tokenization in the Finnish language. In addition, we also apply boundary markers as in Table~\ref{Tab:markings} and train two separate models using this distinction.
We train with the left-marked scheme, as the original BERT was trained in this way, and with the left+right-marked scheme, as it yielded the previous state of the art for the Finnish language. For the Transformer-XL experiments, we train only with the left+right-marked scheme. \begin{table}[h!] \footnotesize \centering \caption {Two methods of marking subword units such that the original sentence 'two slippers' is reconstructed} \begin{tabular}{l| l } subword marking & Example \\ \hline left+right-marked (+m+) & two slipp+ +er+ +s \\ left-marked (+m) & two slipp +er +s \\ \hline \end{tabular} \label{Tab:markings} \end{table} The Next Sentence Prediction (NSP) task is a binary classification task which predicts whether two segments follow each other in the original text. This pre-training task was proposed to further improve the performance on downstream tasks, like Natural Language Inference (NLI), but in reality removing the NSP loss matches or slightly improves the downstream task performance \cite{DBLP:journals/corr/abs-1907-11692}. In this paper, we have omitted the NSP task from the BERT pre-training procedure and changed the input from a SEGMENT-PAIR input to a SINGLE-SEGMENT input, as seen in Fig.~\ref{fig:BERT_label}. \begin{figure*}[t] \centering \includegraphics[scale=0.9]{bert.png} \caption{BERT-Original sentence 'how are you doing today'} \label{fig:BERT_label} \end{figure*} Transformer-XL introduced the notion of recurrence in self-attention by caching the hidden state sequence to compute the hidden states of a new segment. It also introduces a novel relative positional embedding scheme, and the two combined address the issue of fixed context lengths. Transformer-XL, as mentioned, is a unidirectional deep transformer architecture; therefore, the perplexity can be calculated as in Eq.~\ref{ppl}. The only change is in the input format, where we use sub-word units rather than whole-word units, as Finnish is morphologically richer than English.
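The pseudo-perplexity derived from Eq.~\ref{approx} can be computed by masking one position at a time and reading off the model's probability for the held-out token. The sketch below shows the procedure with a stand-in scoring function; the scoring interface is hypothetical, and a real implementation would query the trained masked LM at each masked position:

```python
import math

MASK = "[MASK]"

def pseudo_perplexity(tokens, masked_token_prob):
    """Mask each position in turn; masked_token_prob(masked, i, true_token)
    must return p(x_i | x_1..x_{i-1}, x_{i+1}..x_T) for the masked position i."""
    log_p = 0.0
    for i in range(len(tokens)):
        masked = tokens[:i] + [MASK] + tokens[i + 1:]
        log_p += math.log(masked_token_prob(masked, i, tokens[i]))
    return math.exp(-log_p / len(tokens))

# Stand-in scorer: pretend the model assigns 0.5 to every held-out token.
def toy_scorer(masked_tokens, i, true_token):
    return 0.5

ppl = pseudo_perplexity(["how", "are", "you"], toy_scorer)   # exactly 2.0
```

Note that this requires one forward pass per token position, so scoring a sequence of length $T$ costs $T$ model evaluations.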
\section{Data} \label{sec:data} The Finnish text data used for the language modeling task is provided by \cite{ftc-korp_en}. The dataset consists mainly of newspapers and books with around 144 million word tokens and 4.2 million unique tokens. We use Morfessor 2.0 \cite{smit2014morfessor} with the basic unsupervised Morfessor Baseline algorithm \cite{10.1145/1187415.1187418} and a corpus weight parameter ($\alpha$) of 0.001. We have a vocabulary of 34K subword tokens for the left+right-marked (+m+) markings and 19K subword tokens for the left-marked (+m) markings. We also pre-process the data to remove any punctuation marks so that we can use the same data with an ASR system. The input is one sentence per line, and we shuffle the sentences at each epoch. The data is randomly divided into a training dataset and a validation dataset. The test dataset consists of 2850 Finnish news articles obtained from the Finnish national broadcaster YLE. \section{Experiments \& Results} \subsection{BERT} All BERT experiments were trained for 500K steps. The code was written in Python and we used the Tensorflow libraries to create the models. The experiments were trained on a single NVIDIA Tesla V100 32 GB graphics card. The data was first processed into Tensorflow records as the input to the model. The set of hyperparameters which we found optimal after experimenting with different sets is given in Table~\ref{Tab:BERT}. \begin{table}[h!]
\footnotesize \centering \caption {BERT hyperparameters} \begin{tabular}{l l } \hline Number of hidden layers & 20 \\ Hidden size of transformer & 896 \\ Number of attention heads & 16 \\ Intermediate size (size of the feed-forward layer) & 3584 \\ Hidden activation function & Gaussian Error Linear Units \\ Dropout probability & 0.1 \\ Max position embeddings & 300 \\ \hline \end{tabular} \label{Tab:BERT} \end{table} This set of parameters was chosen as its training performance on modelling the long sequences of sub-words was better than that of smaller models. We use the Adam optimizer \cite{Kingma2014AdamAM}, the same as the English BERT. A maximum sequence length of 300 encompasses 98 per cent of the training data and also allows us to fit larger models on the GPU card. Hyperparameter optimization is very difficult in the case of these models, as they take around 15 days to train given the resources. The hyperparameter choices were therefore largely based on the original BERT, with small tweaks. We assess the training performance of the models in Table~\ref{Tab:BERT-Train}. \begin{table}[h!] \footnotesize \centering \caption {BERT training performance} \begin{tabular}{l | l | l } Model & Masked LM Loss & Masked LM Accuracy \\ \hline left+right-marked (+m+) & 2.24 & 0.56\\ left-marked (+m) & 2.03 & 0.59 \\ \hline \end{tabular} \label{Tab:BERT-Train} \end{table} When we train the BERT model, we mask some percentage of the input tokens at random and then predict those masked tokens; this is known as masked LM. The masked LM loss refers specifically to the loss when the masked language model predicts the masked tokens, and the masked LM accuracy to the accuracy of those predictions. The losses for both models are far from the masked LM loss of the English BERT, a key difference being that the pre-training data for the two language models are quite different.
Google trained their model on 3.3 billion words from BooksCorpus \cite{DBLP:journals/corr/ZhuKZSUTF15} and the English Wikipedia, while our model was trained on 144 million words. Comparing the two Finnish models, the left-marked model has a better training performance than the left+right-marked model. The pseudo-perplexity scores described in the previous section, used to evaluate the above models on the test dataset, are given in Table~\ref{Tab:BERT-Test}. The test dataset is of a different context when compared to the training data, and interestingly both models are quite confident on the test dataset. The pseudo-perplexity values of the left-marked model are lower than those of the left+right-marked model, signifying that it is more confident. We cannot directly compare the perplexity scores of the BERT model with those of a unidirectional LSTM model, as the two are calculated in different manners. We could compare it with a bi-directional LSTM, or use a downstream task to compare the performances. We could also randomly mask tokens and then compare the prediction accuracy on the masked tokens. \begin{table}[h!] \footnotesize \centering \caption {BERT test performance} \begin{tabular}{l | l } Model & Pseudo-perplexity \\ \hline left+right-marked (+m+) & 17.1\\ left-marked (+m) & 14.5\\ \hline \end{tabular} \label{Tab:BERT-Test} \end{table} \subsection{Transformer-XL} All Transformer-XL experiments are also trained for 500K steps. The code was written in Python and we used the PyTorch libraries for model creation. The experiments were trained on a single NVIDIA Tesla V100 32 GB graphics card. Two sets of hyperparameters were chosen to be compared after some initial optimization and are given in Table~\ref{Tab:trxl}. \begin{table}[h!]
\footnotesize \centering \caption {Tr-XL hyperparameters} \begin{tabular}{l l | l} Hyperparameters & Model 1 & Model 2\\ \hline Number of hidden layers & 4 & 4 \\ Hidden size of transformer & 512 & 1024 \\ Number of attention heads & 8 & 8 \\ Size of attention head & 80 & 128 \\ Intermediate size (size of the feed-forward layer) & 2048 & 4096 \\ Warmup & 10000 & 40000 \\ Batch size & 64 & 224 \\ Segment length & 150 & 32 \\ Memory length & 150 & 32 \\ \hline \end{tabular} \label{Tab:trxl} \end{table} With the above parameter choices, we wanted to test whether providing longer segment and memory lengths (i.e., longer context) is more advantageous than a larger model. As for BERT, we use the Adam optimizer, but we also use a cosine annealing learning-rate scheduler to speed up training \cite{DBLP:journals/corr/LoshchilovH16a}. The training performance results are given in Table~\ref{Tab:trxl-train}. \begin{table}[h!] \footnotesize \centering \caption {Tr-XL training perplexity scores} \begin{tabular}{l | c c } Model & \multicolumn{2}{c}{Mem--seg length} \\ & 150-150 & 32-32 \\ \hline left+right-marked (+m+) & 45.22 & 33.86\\ left-marked (+m) & 47.83 & 35.78 \\ \hline \end{tabular} \label{Tab:trxl-train} \end{table} As opposed to BERT, the left+right-marked models have a better training performance than their counterparts. Interestingly, the larger model trains much better than the one with the longer context. The same set of parameters used for the 32-32 model cannot be replicated for the 150-150 model, as the latter takes up much more memory on the GPU card. The test set is the same as that used with BERT, and the results are given in Table~\ref{Tab:trxl-test}. The test performance mirrors the training performance, with the left+right-marked large model (32-32) performing best.
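The warmup plus cosine annealing schedule mentioned above can be sketched as follows; the warmup length matches Model 1 in Table~\ref{Tab:trxl}, but the base learning rate here is illustrative, not the value used in our training:

```python
import math

def lr_at_step(step, total_steps, base_lr, warmup_steps):
    """Linear warmup followed by cosine annealing to zero (a common variant)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

# 500K total training steps, 10K warmup steps (as for Model 1);
# base_lr = 2.5e-4 is a placeholder value for illustration.
peak = lr_at_step(10_000, 500_000, 2.5e-4, 10_000)    # reaches the base LR
end = lr_at_step(500_000, 500_000, 2.5e-4, 10_000)    # annealed to zero
```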
We can directly compare the perplexity scores with the previous best \cite{Hmm-dmm}, as both are unidirectional models; the Transformer-XL model has outperformed the latter by 27\%. \begin{table}[h!] \footnotesize \centering \caption {Tr-XL test perplexity scores; (-): the experiment models are not available} \begin{tabular}{l | c c | c } Model & \multicolumn{2}{c|}{Mem--seg length} & \\ & 150-150 & 32-32 & (prev best) \\ \hline left+right-marked (+m+) & 82.3 & 73.58 & 93.2\\ left-marked (+m) &84.79 & 74.39 & - \\ \hline \end{tabular} \label{Tab:trxl-test} \end{table} \subsection{Result comparisons for Transformer architectures} Transformer-XL and BERT both obtain low perplexity and pseudo-perplexity scores, but the two cannot be directly compared, as they are calculated quite differently (Eq.~\ref{cond}, Eq.~\ref{approx}). The dramatically low scores of BERT indicate that the per-word predicted probabilities are higher than those of a uni-directional model. Thus, the predicted word probability distribution is much sharper than the XL model's probability distribution. At this point, we cannot say which model architecture has performed better, BERT or Transformer-XL, despite both of them achieving low perplexity scores. We would need to experiment with a downstream task in order to fairly compare the model performances. \section{Conclusion} The recent migration from LSTM models to transformer-based architectures in language modeling is justified, as Transformer-XL obtains strong perplexity results. The BERT model also obtains very low pseudo-perplexity scores, but these cannot be compared equitably with those of the unidirectional models. Our major contributions in this project are the use of the Transformer-XL architecture for the Finnish language in a sub-word setting and the formulation of a pseudo-perplexity for the BERT model. Further comparisons between the transformer architectures can be made by applying them to a downstream ASR task, which will be explored in the future. \bibliographystyle{ieeetr}
\section{Conclusion} \label{sec:conclusion} This paper presents a vision-inertial telepresence concept in which onboard sensors, an object tracking algorithm and databases of objects were utilized to provide a 3D visualization of the scene in real-time. From our experiences in the field, we believe that providing 3D visual feedback to the tele-operator is required in aerial manipulation applications at remote sites, where direct and close visual contact to the objects is genuinely difficult. Our demonstration of advanced aerial manipulation shows that SAM with telepresence is a viable concept for inspection and maintenance applications. \section{Acknowledgements} Special thanks to Michael Vilzmann for the support on FCC, Thomas Hulin for the support on peg-in-hole experiments and Nari Song for the support on video editing. \section{Introduction} Aerial manipulators exploit the manipulation capabilities of robotic arms located on a flying platform \cite{Ruggiero_survey}. These systems can be deployed for tasks that are unsafe and costly for humans. Some notable examples are repairing rotor blades of wind turbines and inspecting oil and gas pipelines in refineries. However, building an autonomous aerial manipulator \cite{Kondak2014, Ruggiero2015, Kim2018_compliance} poses several challenges to the current state-of-the-art robotic technologies. To this end, existing and close-to-market aerial manipulators are often tailored to a specific task such as contact inspection \cite{karen2019, Angel2019, Cuevas2019}. An alternative is the remote control of an aerial manipulator (namely, aerial tele-manipulation). Aerial tele-manipulation, by having a human-in-the-loop, has the advantage that several demands on the robot's cognitive modules can be offloaded to the teleoperator. Furthermore, recent studies show promising results indicating that such systems can be deployed even under imperfect communication between the robot and the operator.
For example, bilateral teleoperation with force feedback has been demonstrated in the Kontur-2 mission \cite{spacejoystick}, where a cosmonaut on the International Space Station successfully operated a robot on Earth. In aerial tele-manipulation, notable works include force feedback \cite{Mohammadi2016} and shared control \cite{Franchi2012}. Additionally, 3D visual feedback is another important aspect of aerial tele-manipulation systems for enhancing their manipulation capabilities. During our field experiments with such platforms, we experienced that 2D visual feedback based solely on live video streams is not sufficient to achieve precise manipulation tasks. Thus, we deduced that aerial telepresence systems must involve both real-time force and 3D visual feedback, which accurately displays the interactions of the robotic arm with the objects. Several studies confirm that a virtual environment in which one can change one's sight-of-view and provide haptic guidance (e.g. virtual fixtures) improves the system capabilities \cite{Falk2001InfluenceOT, Bettini2004, Huang2019}. \begin{figure} \centering \includegraphics[width=0.485\textwidth]{figures/fig_2_illu.pdf} \caption{An illustration of the proposed concept. Our aerial robot SAM \cite{sam2019} is designed to achieve a manipulation task in a remote location that humans find difficult to reach (see left side of the figure). Consequently, the teleoperator at a ground station does not have any visual contact with the scene. Therefore, the robot's onboard perception system must provide visual feedback to the operator with both 2D and 3D information, which overall enhances its manipulation capabilities (depicted on the right side).} \label{fig:sec1:1} \end{figure} Therefore, we propose an advanced visual-inertial telepresence system, which utilizes visual and inertial sensors to provide 3D visual feedback to the operator.
The resulting system is equipped with haptic feedback and a virtual reality with virtual fixtures. In particular, for creating the 3D display of a remote scene, we consider an object localization approach in which an object database and a marker tracking algorithm are used. As existing marker tracking methods did not satisfy our requirements in terms of robustness and run-time, we propose a new object tracking algorithm by extending ARToolKitPlus \cite{artoolkit} with onboard visual-inertial odometry (VIO). Lastly, an extension of the framework to multiple objects is also addressed for pick-and-place tasks. The proposed concept is tightly integrated into a collision-safe aerial manipulator called the cable-Suspended Aerial Manipulator (SAM \cite{sam2019}). In particular, the main scenario of interest is to deploy and retrieve an inspection robotic crawler (as illustrated in Fig. \ref{fig:sec1:1}). This scenario, which was designed under the scope of the EU project AEROARMS, is relevant to inspection and maintenance of gas and oil pipelines in refineries \cite{aeroarms}. It involves grasping, placing and pressing tasks, which need to be performed by a remotely located operator. The proposed algorithm is validated indoors, and a peg-in-hole task with a margin of error less than 1 cm is studied, which further displays SAM's advanced manipulation skills. In summary, our main contributions are as follows. \begin{itemize} \item A visual-inertial telepresence system for aerial manipulation, where a new object localization approach is proposed for creating a virtual reality of the remote scene. \item An extension of ARToolKitPlus \cite{artoolkit} with onboard VIO for improving its run-time and robustness. \item Experimental validations showing advanced manipulation skills with SAM for the first time. In particular, our field experiments indicate that the overall system is a viable option for inspection and maintenance applications.
\end{itemize} Experiments can be seen in the video: \url{https://www.youtube.com/watch?v=onOc05Ymxzs}. \subsection{Related Works} Several researchers have aimed to provide 3D information of the remote scene for tele-manipulation. For this, 3D reconstruction techniques \cite{Gerd1999, ni_song_xu_li_zhu_zeng_2017, Dejing2019, Leeper2012icra, Ryden2013} have notably been applied to create 3D visualizations of an unknown environment. However, their applicability to our use-case is limited, as the scene has to be mapped first and then pre-processed to cope with the noisy 3D vision data. Unlike these methods, our approach differs in that we use object localization algorithms. Two benefits are: (i) a real-time display is possible, and (ii) the framework can also be extended to a pick-and-place task, which requires the visualization of both the hand-held object and the target of placement. The latter is difficult with the existing methods when the hand-held object is not rigidly fixed to a gripper. A recent work, AeroVR \cite{yashin2019aerovr}, uses a similar concept to ours. While the system demonstrates an inspiring way to also include tactile feedback, the scope differs, as AeroVR uses a VICON system for indoor usage. For object localization, learning-based \cite{learning_1, learning_2, Sundermeyer} and geometry-based \cite{Wang2016, artoolkit} approaches can be found. Recent learning-based methods with deep neural networks can be broadly formulated with either explicit \cite{learning_1} or implicit \cite{Sundermeyer} representations. However, we do not consider machine learning approaches, as the assumption that the test data are drawn from the training distribution is routinely violated in the context of field robotics.
Within the geometric methods, fiducial marker systems (based on creating artificial features on the scene) are widely used in robotics for ground truth \cite{Wang2016}, applications where environments are known \cite{Malyuta2019}, simplifying the perception problem in lieu of sophistication \cite{Laiacker2016} and calibration \cite{nissler18simultaneous}. However, as we aim to create a real-time virtual reality, our use-case places stringent requirements on their run-time and inherent time delays. Note that the authors of \cite{predictive19} show that coping with time delays in the display improves the performance of the tele-operation. Furthermore, as we use hand-eye cameras, our localization method should be robust to loss-of-sight, as the camera is not guaranteed to see the markers during the operations. Robustness is important when using haptic guidance or virtual fixtures, for example, where inaccurate haptic feedback can degrade manipulation performance \cite{7047839, Boessenkool}. \section{Cable Suspended Aerial Manipulator} \label{sec:sam} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/integration.pdf} \caption{Illustration of our collision-safe aerial manipulation concept; SAM with a helicopter as an aerial carrier (left). Both hand-eye and eye-to-hand cameras are now integrated (right). We denote CAM1 as mako and CAM2 as the hand-eye camera (hc for brevity).} \label{fig:sec2:1} \end{figure} \subsubsection{Robot hardware} The aerial manipulator SAM \cite{sam2019} is a complex flying robot composed of an aerial carrier, a cable-suspended platform and a 7 degree-of-freedom (DoF) industrial robotic arm, the KUKA LWR \cite{kuka}. An aerial carrier (e.g. crane, manned/unmanned helicopter\footnote{The purpose of the aerial carrier is to transport the system and hover.
We use a crane in this study, which also provides better safety, versatility, robustness and applicability for our considered application scenario.}) provides means to transport the robotic platform to a location (see Fig. \ref{fig:sec2:1}). Then, a platform suspended from the carrier performs a balancing act by autonomously damping out the disturbances induced by the carrier and the manipulator. This oscillation damping control is performed using eight omni-directional propellers and three winches as its actuators. Design and control aspects of SAM have been presented previously in \cite{sam2019}. \subsubsection{Sensor choices and integration} Relevant sensors for realizing our vision-based telepresence system are as follows. The KUKA LWR \cite{kuka} is equipped with torque and position sensors as its \textit{proprioceptive} sensors. Each joint contains a torque sensor, and incremental and absolute position sensors, which measure its joint torques and angles. Furthermore, SAM is equipped with optical devices as its \textit{exteroceptive} sensors. As shown in Fig. \ref{fig:sec2:1}, a monocular camera (Allied Vision Mako) is installed on the frame of the platform to display the overall operational space of the robotic arm. This is because the operator prefers an eye-to-hand view, which is more natural to a human. The camera provides high-resolution images of 1292 by 964 px at 30 Hz. Additionally, a stereo camera is integrated near the tool-center-point (tcp). The accuracy of fiducial marker systems depends on the distance and the marker size, which justifies the integration of a hand-eye camera \cite{Wang2016}. We use a commercial 3D vision sensor (Rcvisard) that provides built-in VIO. It provides 1280 by 960 px images at 25 Hz, and VIO estimates can be acquired at 200 Hz.
Details on the VIO algorithm can be found in \cite{rcvisard}. \subsubsection{Haptic device} A portable and space-qualified haptic device, the Space Joystick \cite{spacejoystick}, has been integrated to remotely teleoperate the LWR located on SAM. \section{Vision-Inertial Aerial Telepresence} \label{sec:core} \subsection{3D Visual Feedback with Object Localization} The aim is to virtually display the robot and the objects so that an operator can tele-manipulate them remotely. If done in real-time, the operator can \textit{see} the virtual remote scene and perform the tasks. Here, accuracy is crucial, as the virtual world has to closely match the real remote scene. In our approach, we realize such 3D visual feedback using cameras, object localization algorithms and a known object database (see Fig. \ref{fig:sec3:2}). Once the objects to be actively manipulated are known a priori, the essence of the problem simplifies to computing the relative transformations of an object with respect to the camera, $\textbf{T}_{\text{object}}^{\text{hc}}(t)$, and the robot's tcp, $\textbf{T}_{\text{object}}^{\text{tcp}}(t)$. Here, $t$ denotes time. A fixed transformation $\textbf{T}_{\text{hc}}^{\text{tcp}}$ can be precisely estimated from CAD models or hand-eye calibration \cite{Strobl2006}. \begin{equation} \label{sec:3:2:eq1} \textbf{T}_{\text{object}}^{\text{tcp}}(t) = \textbf{T}_{\text{hc}}^{\text{tcp}} \textbf{T}_{\text{object}}^{\text{hc}}(t) \end{equation} In this way, one can exploit object localization methods based on fiducial marker systems. These systems are widely adopted in the robotics community and have been used as ground truth owing to their accuracy \cite{Wang2016}. While learning-based pose estimation methods \cite{Sundermeyer} can be leveraged under the same framework (for several applications where markers are not readily available), we limit our scope to validating the virtual reality concept in lieu of sophisticated object localization methods.
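The frame chaining in (\ref{sec:3:2:eq1}) is a plain composition of homogeneous transforms; a minimal numpy sketch (the calibration and detection values below are hypothetical) is:

```python
import numpy as np

def transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical fixed hand-eye calibration: camera 5 cm in front of the tcp.
T_tcp_hc = transform(np.eye(3), [0.0, 0.0, 0.05])
# Hypothetical marker detection: object 30 cm ahead of the camera at time t.
T_hc_obj = transform(np.eye(3), [0.0, 0.0, 0.30])

# Eq. (1): pose of the object expressed in the tcp frame.
T_tcp_obj = T_tcp_hc @ T_hc_obj
print(T_tcp_obj[:3, 3])  # object lies 35 cm ahead of the tcp
```

Only the first factor is constant; the marker detection is refreshed every frame, so the product tracks the object in the tcp frame in real-time.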
Note that we use Instant Player \cite{hulin2012} for creating the display, as it supports various hierarchies of a scene graph. Using a nested hierarchy, the relative transformation between an object and tools can be routed to display the scene, while a flat hierarchy can be used to extend the framework in order to display multiple objects and tools. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figures/Figures.pdf} \caption{An example of a pre-generated object database.} \label{fig:sec3:2} \end{figure} However, fiducial marker systems and their extensions \cite{artoolkit, Wang2016, Malyuta2019, Laiacker2016} also have significant drawbacks, which arise as we consider floating-base manipulation outdoors. For example, shadows are inevitable in outdoor experiments, and once they destroy parts of the tags, the methods naturally fail, as their assumptions on the artificial visual features are violated. Similarly, the hand-eye camera (hc) can lose the view on the marker, as the manipulator and the base can move rapidly. These failure modes (reported in Fig. \ref{fig:sec3:3}) affect mission success rates. This is because it is difficult for the operator to remotely perform precise manipulation with live streams of 2D images. Eye-to-hand views typically suffer from occlusion of the grasping points by the robotic arm (a problem also found in humanoid robots) and lack depth information. Lastly, the time delays that are inherent in these systems must be corrected in order to create a real-time virtual display of the scene. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/failuremodes.pdf} \caption{Failure modes of the fiducial marker system in the field experiments. The figure shows a nominal case (left) and the failure modes, namely loss of sight and shadow occlusion (others).} \label{fig:sec3:3} \end{figure} For tackling these problems, we propose Algorithm 1, for which multiple tags are placed on an object, one of which is the target tag x.
The algorithm initializes by detecting all the tags (denoted multiART+, which is based on ARToolKitPlus \cite{artoolkit}) and saving their relative poses to the target (tag$\_$init). While the process is running, the k detected tags and their IDs are counted (counter$\_$multiART+). If all the tags are detected, n+1 pose estimates of the target tag x can be computed by transforming the pose estimates of the non-target tags $\textbf{T}_{\text{y}}^{\text{hc}}$ with their relative transformations to the target tag $\textbf{T}_{\text{x}}^{\text{y}}$ (trafo3d). Then, RANSAC \cite{ransac} is applied to these estimates to remove outliers, followed by averaging to reduce variance (ransac$\_$avg). The relative transformations are then updated by applying RANSAC to the saved estimates and averaging. In case at least one tag is detected, the same step is applied to estimate the target tag x. These steps have the advantages that (1) the accuracy and orientation ambiguity of ARToolKitPlus can be improved with RANSAC, and (2) the algorithm is robust to loss-of-sight of the target (similar to \cite{Laiacker2016, Nissler, Malyuta2019}). However, the algorithm must also be robust to loss-of-sight on all the tags, as we consider object tracking for floating-base manipulators. Algorithm 1 addresses this problem by integrating VIO estimates of the camera motion with respect to its inertial coordinate frame, $\textbf{T}_{\text{hc}}^{\text{w}}(t)$. If no tags are detected, (\ref{eq:vio_integrate}) can still be used to estimate the target $\textbf{T}_{\text{x,avg}}^{\text{hc}}(t)$ (vio$\_$integrate). In (\ref{eq:vio_integrate}), $\textbf{T}_{\text{w}}^{\text{hc}}(t)\textbf{T}_{\text{hc}}^{\text{w}}(t-1)$ is the relative transformation of the camera motion from time $t-1$ to $t$, and a static object is assumed.
\begin{equation} \label{eq:vio_integrate} \textbf{T}_{\text{x,avg}}^{\text{hc}}(t) = \textbf{T}_{\text{w}}^{\text{hc}}(t)\textbf{T}_{\text{hc}}^{\text{w}}(t-1)\textbf{T}_{\text{x,avg}}^{\text{hc}}(t-1) \end{equation} In a similar fashion, the delay of the system $t_d$ can be computed (delay$\_$computation) and corrected with the VIO algorithm by using (\ref{eq:vio_compensator}) (vio$\_$delay$\_$compensator). This delay is present in any perception system (e.g. image rectification) and in fiducial marker systems (which are not real-time). In (\ref{eq:vio_compensator}), $\textbf{T}_{\text{hc}}^{\text{w}}(t)$ and $\textbf{T}_{\text{x,avg}}^{\text{hc}}(t)$ are computed using VIO and multi-tag tracking. On the other hand, $\textbf{T}_{\text{w}}^{\text{hc}}(t+t_d)$ can be computed using the linear and angular velocity estimates of the VIO, multiplied by the delay time $t_d$. \begin{equation} \label{eq:vio_compensator} \textbf{T}_{\text{x,avg}}^{\text{hc}}(t+t_d) = \textbf{T}_{\text{w}}^{\text{hc}}(t+t_d)\textbf{T}_{\text{hc}}^{\text{w}}(t)\textbf{T}_{\text{x,avg}}^{\text{hc}}(t) \end{equation} These two steps have several advantages. The algorithm is robust to the failure modes of fiducial marker systems (see Fig. \ref{fig:sec3:3}), as it copes with missing tag detections, and time delays are incorporated by using the velocity signals and the computed delay time. Furthermore, the maximum update rate of the algorithm can be pushed to that of the VIO data. The algorithm also copes with drift of the VIO estimates by using relative motion estimates only when tag detection is lost. Note that the method is one way to use commodity vision sensors with VIO modules in order to further improve performance. Illustrations of these two steps are given in Fig. \ref{illustration}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figures/method_illustration.pdf} \caption{Illustration of \eqref{eq:vio_integrate} and \eqref{eq:vio_compensator}.
Left: \eqref{eq:vio_integrate} uses VIO position and orientation estimates of the camera motion to still estimate the object (denoted x,avg) when the marker is not detected. Right: \eqref{eq:vio_compensator} uses linear (yellow arrow) and angular velocity (blue arrow), and the computed time delay $t_d$ to predict the motion of the camera in $t+t_d$ seconds.} \label{illustration} \end{figure} \begin{figure} \let\@latex@error\@gobble \begin{algorithm}[H] \caption{Robust marker localization} \textbf{Input}: Image $\textit{I}$, target marker ID x, n multi-marker IDs y and mapping to object $\textbf{T}_{\text{object}}^{\text{x}}$. \\ \textbf{Output}: Pose of the object $\textbf{T}_{\text{object}}^{\text{hc}}$(t). \\ \textbf{Algorithm}: \\ $\textbf{T}_{\text{x}}^{\text{hc}}$(0), $\textbf{T}_{\text{y}_1}^{\text{hc}}$(0), ..., $\textbf{T}_{\text{y}_n}^{\text{hc}}$(0) $\leftarrow$ multiART+($\textit{I}$); \\ $\textbf{T}_{\text{x}}^{\text{y}_1}$, $\textbf{T}_{\text{x}}^{\text{y}_2}$, ..., $\textbf{T}_{\text{x}}^{\text{y}_n}$ $\leftarrow$ tag$\_$init($\textbf{T}_{\text{x}}^{\text{hc}}$(0),$\textbf{T}_{\text{y}_1}^{\text{hc}}$(0), ..., $\textbf{T}_{\text{y}_n}^{\text{hc}}$(0))\\ \While(){\emph{object$\_$localization == True}} { k, id $\leftarrow$ counter$\_$multiART+($\textit{I}$); \\ \uIf{\emph{k == n+1}}{ $\textbf{T}_{\text{x}}^{\text{hc}}$(t), ..., $\textbf{T}_{\text{x,yn}}^{\text{hc}}$(t) $\leftarrow$ trafo3d($\textbf{T}_{\text{x}}^{\text{hc}}$(t), $\textbf{T}_{\text{y}}^{\text{hc}}$(t), $\textbf{T}_{\text{x}}^{\text{y}}$)\; $\textbf{T}_{\text{x,avg}}^{\text{hc}}$(t) $\leftarrow$ ransac$\_$avg($\textbf{T}_{\text{x}}^{\text{hc}}$(t), ..., $\textbf{T}_{\text{x,yn}}^{\text{hc}}$(t))\; $\textbf{T}_{\text{x}}^{\text{y}_1}$, $\textbf{T}_{\text{x}}^{\text{y}_2}$, ..., $\textbf{T}_{\text{x}}^{\text{y}_n}$ $\leftarrow$ tag$\_$init$\_$update($\textbf{T}_{\text{x,pre}}^{\text{y}}$, $\textbf{T}_{\text{x}}^{\text{y}}$)\; } \uElseIf{\emph{0 $<$ k $<$ n+1}}{ \uIf{\emph{x $\in $ id == False}}{
$\textbf{T}_{\text{x,y1}}^{\text{hc}}$(t), ..., $\textbf{T}_{\text{x,yn}}^{\text{hc}}$(t) $\leftarrow$ trafo3d($\textbf{T}_{\text{y}}^{\text{hc}}$(t), $\textbf{T}_{\text{x}}^{\text{y}}$)\; $\textbf{T}_{\text{x,avg}}^{\text{hc}}$(t) $\leftarrow$ ransac$\_$avg($\textbf{T}_{\text{x,y1}}^{\text{hc}}$(t), ..., $\textbf{T}_{\text{x,yn}}^{\text{hc}}$(t))\; } \Else{ $\textbf{T}_{\text{x}}^{\text{hc}}$(t), ..., $\textbf{T}_{\text{x,yn}}^{\text{hc}}$(t) $\leftarrow$ trafo3d($\textbf{T}_{\text{y}}^{\text{hc}}(t)$, $\textbf{T}_{\text{x}}^{\text{y}}$)\; $\textbf{T}_{\text{x,avg}}^{\text{hc}}$(t) $\leftarrow$ ransac$\_$avg($\textbf{T}_{\text{x}}^{\text{hc}}$(t), ..., $\textbf{T}_{\text{x,yn}}^{\text{hc}}$(t))\; } } \Else{ $\textbf{T}_{\text{x,avg}}^{\text{hc}}$(t) $\leftarrow$ Eq. \eqref{eq:vio_integrate}\; } $t_d$ $\leftarrow$ delay$\_$computation() \\ $\textbf{T}_{\text{x,avg}}^{\text{hc}}(t+t_d)$ $\leftarrow$ Eq. \eqref{eq:vio_compensator} \\ $\textbf{T}_{\text{object}}^{\text{hc}}(t)=\textbf{T}_{\text{x,avg}}^{\text{hc}}(t+t_d)\textbf{T}_{\text{object}}^{\text{x}}$ } \end{algorithm} \end{figure} \subsection{Extension of 3D Visualization to Multiple Objects} For tasks such as placing, virtually displaying multiple objects and their relative poses is required. For example, if an operator would like to place a cage (with an inspection robot inside) on a pipe of roughly the same dimensions, the virtual reality should reflect this by displaying the pipe, the cage, and the orientation changes of the cage with respect to the TCP (e.g. a hook). With 3D reconstruction methods, this is difficult, as one must first map the environment and then process the noisy data points for display. In our system, we tackle this challenge by using the hand-eye camera to estimate the orientation of the held object, while the eye-to-hand camera estimates the pose of the other objects (e.g. a pipe). Then, the forward kinematics are leveraged as given below.
\begin{equation} \label{sec:3:3:eq1} \textbf{T}_{\text{object,2}}^{\text{object,1}}(t) = \textbf{T}_{\text{mako}}^{\text{object,1}}(t) \textbf{T}_{\text{base}}^{\text{mako}}\textbf{T}_{\text{tcp}}^{\text{base}}(t)\textbf{T}_{\text{hc}}^{\text{tcp}}\textbf{T}_{\text{object,2}}^{\text{hc}}(t) \end{equation} In (\ref{sec:3:3:eq1}), the transformations from the base to the eye-to-hand camera (mako), $\textbf{T}_{\text{base}}^{\text{mako}}$, and from the tcp to the hand-eye camera, $\textbf{T}_{\text{hc}}^{\text{tcp}}$, can be computed using hand-eye calibration \cite{Strobl2006}. $\textbf{T}_{\text{mako}}^{\text{object,1}}$ essentially updates the local base frame, and the forward kinematics of the robotic arm $\textbf{T}_{\text{tcp}}^{\text{base}}$ is typically accurate. $\textbf{T}_{\text{object,2}}^{\text{hc}}$ displays the pose of the held object. For this, one can use multi-marker tracking alone, without linear velocity integration. This is because the markers can always be made visible when the objects are held by the robot. \subsection{Force Feedback with Space Joystick and LWR} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figures/controller_overview.pdf} \caption{Controller overview. Communication time delays, packet loss and jitter can cause instability of the overall system. For coping with this, TDPA is used for force feedback tele-manipulation.} \label{fig:sec3:1} \end{figure} The controller design must ensure stable bilateral tele-manipulation with force feedback. The main technical challenge is to deal with communication time delays, packet loss and jitter, which can cause instability of the system. For tackling this, a four-channel architecture with the time-domain passivity approach (proposed in \cite{spacejoystick}) has been used. A schematic of the system is shown in Fig. \ref{fig:sec3:1}, and it is briefly explained as follows.
The human operator sends both position (velocity, analogously) and force signals from the master device (Space Joystick) to the slave (the KUKA LWR mounted on SAM). As these signals pass through communication channels (in the considered scenario, a wireless communication), they are affected by time delay. To ensure stable tele-manipulation, we employ the time-domain passivity approach (TDPA) \cite{Hannaford2001TimeDP}. Readers can refer to \cite{spacejoystick} for more details and implementations. \subsection{Haptic Guidance with Virtual Fixtures} On top of the real-time virtual reality and the haptic device, another aspect of our telepresence system is haptic guidance via virtual fixtures \cite{Bettini2004}. In this work, the virtual fixtures are implemented as artificial walls that guide the motion of the slave to the desired target point. If the teleoperator tries to move the slave device outside these walls, artificial forces are activated to limit the motion of the tcp (slave) and also to provide haptic feedback to the teleoperator. The virtual fixtures in this work are based on the Voxmap-PointShell algorithm \cite{hulin2012, Sagardia2018}, and more details on their implementation and parameter tuning can be found in \cite{tubiblio109900}. \section{Experiments and Results} \label{sec:results} \subsection{Robust Object Localization: Validation and Analysis} An object localization approach is taken for 3D visualization; thus, the accuracy, run-time and robustness of the proposed algorithm are reported. These results are important, as the created virtual reality should closely match the real remote scene. For this, we measure the ground truth of the relative poses between the object and the camera using a Vicon tracking system, and evaluate the performance on sequences that represent a peg-in-hole insertion task (see video attachment). The algorithm is also compared to AprilTag 2 \cite{Wang2016} (AP2) and ARToolKitPlus \cite{artoolkit} without \eqref{eq:vio_integrate} and \eqref{eq:vio_compensator} (multiART).
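The ransac$\_$avg step of Algorithm 1 can be illustrated on the translational part of the pose hypotheses: estimates far from the consensus are discarded before averaging. The following is a simplified sketch (the inlier threshold is a hypothetical tuning value, and the actual implementation also treats orientation):

```python
import numpy as np

def ransac_avg(estimates, inlier_thresh=0.02):
    """Keep the hypothesis with the most inliers, then average its inliers.

    estimates: (n, 3) position estimates of the target tag in metres.
    inlier_thresh: consensus distance threshold (hypothetical value).
    """
    estimates = np.asarray(estimates, dtype=float)
    best_inliers = None
    for hyp in estimates:
        dists = np.linalg.norm(estimates - hyp, axis=1)
        inliers = estimates[dists < inlier_thresh]
        if best_inliers is None or len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers.mean(axis=0)

# Three consistent estimates and one gross outlier (e.g. a mis-detected tag).
est = [[0.30, 0.0, 0.5], [0.31, 0.0, 0.5], [0.30, 0.01, 0.5], [0.9, 0.4, 0.1]]
print(np.round(ransac_avg(est), 3))  # the outlier is discarded before averaging
```

The subsequent averaging over the surviving inliers is what reduces the variance of the per-tag estimates.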
\begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{figures/x_vicon.pdf} \vspace{0.1cm} \includegraphics[width=0.45\textwidth]{figures/y_vicon.pdf} \vspace{0.1cm} \includegraphics[width=0.43\textwidth]{figures/z_vicon.pdf} \caption{Our proposed Algorithm 1 for object tracking (denoted ours) is compared to the ground truth (Vicon measurements). The algorithm is compared with two other popular fiducial detection frameworks, namely AprilTag 2 (AP2) and ARToolKitPlus (multiART). Our proposed algorithm is robust to losing the fiducials in an image, and compensates the delay.} \label{vicon_results} \end{center} \end{figure} In Fig. \ref{vicon_results}, the estimated trajectories of the relative poses are compared with the Vicon measurements. As depicted, our proposed algorithm is robust against the loss-of-sight problems of object localization with a hand-eye camera, while AP2 and multiART produce jumps when no markers are detected (from t=3 s to t=8 s, for example). This is due to the design of the algorithm, where we utilize VIO estimates of the camera pose when the marker is not detected. Furthermore, multiART suffers from time delay, while AP2 suffers from both time delay and slow run-time. On the other hand, our proposed algorithm compensates the time delay, resulting in accurate estimates. Five experiments have been conducted to determine the accuracy of the selected methods with respect to the ground truth. Note that the selected trajectory includes loss-of-sight and time delay. The corresponding root mean squared errors (RMSE) are reported in Table \ref{rmse}. In particular, as seen in Table \ref{rmse}, AP2 is slow when using high-resolution images, which results in larger errors when comparing the trajectories. In our approach, these trajectories are relevant, as we aim to create a virtual reality with object localization methods. Within our experiments, the analysis of the accuracy, robustness and run-time further justifies the proposed algorithm and its additional complexity.
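The delay compensation of (\ref{eq:vio_compensator}), which separates our algorithm from multiART in this analysis, amounts to propagating the latest estimate with the VIO velocity over the computed delay $t_d$; for the translational part this reduces to the following sketch (the velocity and delay values are hypothetical, and rotation is handled analogously in the full method):

```python
import numpy as np

def compensate_delay(p_obj_cam, v_cam, t_d):
    """Predict the object position in the camera frame t_d seconds ahead.

    p_obj_cam: latest object position in the camera frame (m).
    v_cam: camera linear velocity from VIO, in the camera frame (m/s).
    t_d: computed pipeline delay (s).
    Assumes a static object and pure translation over the short delay.
    """
    # If the camera advances by v * t_d, the static object shifts by
    # -v * t_d when expressed in the camera frame.
    return np.asarray(p_obj_cam, float) - np.asarray(v_cam, float) * t_d

# Camera approaching the object at 0.2 m/s along z with a 50 ms delay:
print(compensate_delay([0.0, 0.0, 0.50], [0.0, 0.0, 0.2], 0.05))  # z shrinks by 1 cm
```

Without this prediction, the displayed object lags the true scene by the full perception-pipeline delay, which is what produces the offsets seen for AP2 and multiART.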
\begin{figure*} \centering \includegraphics[width=1\textwidth]{figures/flighttesting.pdf} \caption{Results of the field experiments for the AEROARMS \cite{aeroarms} industrial scenario. SAM successfully deployed and retrieved a pipe inspection robot by performing grasping (left), placing (middle) and pressing (right). As we consider outdoor manipulation tasks with industrial relevance, the system has to address both force feedback and 3D visual feedback. 2D visual feedback (bottom row), as depicted above, is not sufficient, as the depth information is missing and the images are subject to underexposure. On the other hand, the virtual environment (middle row) does not suffer from these problems, and the operator can zoom in and out and change the sight-of-view. These experiments show SAM with telepresence as a viable option for future applications.} \label{flightexperiments} \end{figure*} \begin{table} \centering \caption{Accuracy and run-time analysis} \label{rmse} \begin{tabular}{|c|c|c|c|} \hline & AP2 & multiART+ & ours \\ \hline $e_{\text{x,rmse}}$ [m] & 0.1690 & 0.1124 & \textbf{0.0252} \\ \hline $e_{\text{y,rmse}}$ [m] & 0.1265 & 0.0847 & \textbf{0.0503} \\ \hline $e_{\text{z,rmse}}$ [m] & 0.1308 & 0.077 & \textbf{0.0316} \\ \hline $e_{\text{$\phi$,rmse}}$ [rad] & 0.2843 & 0.1867 & \textbf{0.1232} \\ \hline $e_{\text{$\theta$,rmse}}$ [rad] & 0.1955 & 0.1232 & \textbf{0.0703} \\ \hline $e_{\text{$\psi$,rmse}}$ [rad] & 0.2565 & 0.1755 & \textbf{0.1153} \\ \hline $t_{\text{run}}$ [s] & 0.839 $\pm$ 0.0616 & 0.0525 $\pm$ 0.0218 & \textbf{0.0049 $\pm$ 0.013} \\ \hline \end{tabular} \end{table} \subsection{Peg-in-Hole Insertion with Virtual Fixtures} A peg-in-hole insertion task with a margin of error less than 1 cm is considered, in which the operator does not have any direct visual contact with the real scene. The main challenge in this setting is the fidelity of the virtual reality and the resulting virtual fixtures.
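The artificial walls used as virtual fixtures here can be sketched with a simple penalty model that pushes the tcp back inside an axis-aligned corridor around the insertion axis (a drastic simplification of the Voxmap-PointShell algorithm; the box limits and stiffness are hypothetical tuning values):

```python
import numpy as np

def fixture_force(p_tcp, box_min, box_max, stiffness=500.0):
    """Spring force (N) pushing the tcp back inside an axis-aligned box.

    Zero inside the box; proportional to penetration depth outside.
    stiffness: hypothetical tuning value in N/m.
    """
    p = np.asarray(p_tcp, dtype=float)
    lo = np.asarray(box_min, dtype=float)
    hi = np.asarray(box_max, dtype=float)
    # Positive penetration below the lower wall, negative above the upper wall.
    penetration = np.maximum(lo - p, 0.0) - np.maximum(p - hi, 0.0)
    return stiffness * penetration

# tcp drifted 1 cm past the +x wall of a 10 cm wide corridor:
print(fixture_force([0.06, 0.0, 0.0], [-0.05, -0.05, -0.05], [0.05, 0.05, 0.05]))
```

Inside the corridor the force vanishes, so free motion toward the hole is unaffected; only deviations from the admissible region are penalized and simultaneously rendered to the operator as haptic feedback.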
With the fidelity provided by our proposed algorithm and the resulting virtual fixtures, a peg-in-hole task has been performed (see the attached video material). The results are depicted in Fig. \ref{peg_in_hole_forces} and Fig. \ref{peg_in_hole_tracking}. Fig. \ref{peg_in_hole_forces} plots the force signals acting on the slave end-effector, which consist of the computed force from the master's position commands and the force due to the virtual fixtures. Position tracking of the tcp towards the target (hole) is shown in Fig. \ref{peg_in_hole_tracking}. As these position signals are expressed in the LWR base frame (see Fig. \ref{fig:sec3:1} for the definition), the target also moves due to the motion of SAM. This experiment shows the benefits of our proposed telepresence system, as SAM is able to perform a precise manipulation task. Note that the accuracy of object localization improves over the values reported in Table \ref{rmse} when the peg is near the hole (shown in Fig. \ref{vicon_results}), which makes the task feasible. \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{figures/xyz_forces.pdf} \caption{Force signals on the slave's end-effector expressed in the LWR base frame. These forces are composed of the artificial force from a virtual fixture and the computed forces from the master's commanded positions.} \label{peg_in_hole_forces} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{figures/x_peg_in_hole.pdf} \vspace{0.1cm} \includegraphics[width=0.4\textwidth]{figures/y_peg_in_hole.pdf} \vspace{0.1cm} \includegraphics[width=0.4\textwidth]{figures/z_peg_in_hole.pdf} \caption{TCP and target positions expressed in the LWR base frame. For peg-in-hole insertion, the tcp is commanded to follow the target.
Note that the target position changes as SAM moves, and it is expressed in the LWR base frame.} \label{peg_in_hole_tracking} \end{center} \end{figure} \subsection{Field Experiments and Validation} A field experiment is conducted in order to demonstrate the applicability of SAM within a relevant industrial scenario for aerial manipulation. This scenario involves a maintenance and inspection task in which SAM has to deploy and retrieve a 6.4\,kg inspection robot at a remotely located pipe. To transport the inspection robot, a cage (approximately of the same size as the pipe and the inspection robot) has been designed. For this mission, SAM has to (a) grasp the cage at location A with a hook used as the end-effector of the LWR, (b) move to location B where the pipe is located, (c) place the cage on the pipe, and (d) press the cage while the inspection robot moves out. The teleoperator is located in a ground station and thus has no direct visual contact with the scene. For this scenario, we tackle precision grasping, placing and pressing tele-manipulation tasks at a remote location, and the results are depicted in Fig. \ref{flightexperiments}. In particular, 2D images alone do not show depth information (placing task) and are often occluded (grasping and pressing phases). With force feedback alone, precise manipulation is difficult in this scenario. The virtual reality, on the other hand, provides 3D information of the remote scene, and moreover, one can change the viewpoint to avoid occluded visual feedback. These results show the benefits of our telepresence system. By touching and seeing, the teleoperator is able to perform precise manipulation tasks in an industrial use-case. The field experiments for the AEROARMS industrial scenario did not require haptic guidance via virtual fixtures or VIO compensation to achieve the basic teleoperation tasks.
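Although the field experiments did not use haptic guidance, a guidance-type virtual fixture of the kind used in the peg-in-hole task can be sketched as a saturated spring force toward the target. The stiffness and saturation values below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def fixture_force(tcp, target, stiffness=200.0, f_max=10.0):
    """Guidance-type virtual fixture as a saturated spring.

    Pulls the end-effector (tcp) toward the target with a force
    proportional to the position error, clipped at f_max so the
    operator can still override the guidance. stiffness [N/m] and
    f_max [N] are illustrative values.
    """
    err = np.asarray(target, float) - np.asarray(tcp, float)
    f = stiffness * err
    norm = np.linalg.norm(f)
    if norm > f_max:
        f *= f_max / norm  # saturate while keeping the direction
    return f

# A 1 cm error in x with 200 N/m stiffness gives a 2 N guidance force;
# a 1 m error saturates at the 10 N cap.
print(fixture_force([0, 0, 0], [0.01, 0, 0]))
print(fixture_force([0, 0, 0], [1.0, 0, 0]))
```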
For further improving the inspection and maintenance scenario, we plan to perform a user study to investigate the degree of improvement achieved with this shared-autonomy concept, and to carry out further joint demonstrations with recent developments on SAM \cite{sarkisov20, coelho20}. Lastly, robotic introspection \cite{grimmett2013knowing} for object localization is another research direction that can support industrial deployments of these systems. \section*{APPENDIX} \bibliographystyle{IEEEtran}
\section*{Introduction} Let $G$ be a second countable, locally compact group. The Baum--Connes conjecture with coefficients (BCC) states that the Baum--Connes assembly map (see \cite{BCH93}, \cite{Valette02}, \cite{GJV19}) \begin{equation*} \mu_r^{B, G}\colon K_\ast^{\mathrm{top}}(G; B) =\varinjlim_{Y\subset \underline{E}G, \mathrm{Gcpt}}KK^G(C_0(Y), B) \to K_\ast(B\rtimes_rG) \end{equation*} is an isomorphism for any separable $G$-$C^*$-algebra $B$. Here, $B\rtimes_rG$ is the reduced crossed product of $B$ by $G$, and $\underline{E}G$ is the so-called universal proper $G$-space \cite{BCH93}, which exists uniquely up to $G$-equivariant homotopy. The classical Baum--Connes conjecture (BC) \cite{BaumConnes2000} corresponds to the case when $B=\C$. For a countable discrete group $G$, there is a controlled algebraic reformulation of the Baum--Connes assembly map, due to Yu, in terms of the so-called forget-control map \cite{Yu10}. For simplicity, we let $B=\C$ and assume that a proper $G$-space $X$ is equipped with a proper $G$-invariant metric $d$ compatible with its topology. The equivariant Roe algebra $C^*(H_X)^G$ \cite{Roe96} is defined as the norm-completion of the $\ast$-algebra $\C(H_X)^G$ consisting of $G$-equivariant, locally compact operators with finite propagation on an $X$-$G$-module $H_X$, a $G$-Hilbert space equipped with a non-degenerate representation of the $G$-$C^*$-algebra $C_0(X)$. Here, an operator $S$ on $H_X$ is locally compact if $\phi S$ and $S\phi$ are compact for any $\phi$ in $C_0(X)$, and has finite propagation if there is $r\geq0$ such that $\phi S\psi=0$ whenever the distance between the supports of $\phi$ and $\psi$ in $C_0(X)$ is greater than $r$. The infimum of such $r$ for $S$ is called the propagation of $S$. An important fact is the isomorphism (see \cite{Roe96}) \[ C^*(H_X)^G \cong \mathfrak{K}(l^2(\mathbb{N}))\otimes C^*_r(G) \] when $X$ is $G$-compact and when $H_X$ is big enough (i.e. if $H_X$ is ample as an $X$-$G$-module).
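For orientation, the isomorphism above can be made concrete in the simplest case. The following sketch assumes $G$ discrete, $X=G$ with a proper left-invariant metric $d$, and $H_X=\ell^2(G)$; this is a standard computation, not spelled out in the text:

```latex
% A G-equivariant operator T on \ell^2(G) has matrix entries satisfying
% T_{kg,kh}=T_{g,h}, hence T_{g,h}=c(g^{-1}h) for a function c on G;
% finite propagation r forces c to vanish off the ball {s : d(e,s)\le r}, so
\[
(T\xi)(g) \;=\; \sum_{h\in G} c(g^{-1}h)\,\xi(h)
          \;=\; \sum_{s\,:\,d(e,s)\leq r} c(s)\,\xi(gs),
\]
% a finite combination of right translations. Such operators generate a
% copy of $C^*_r(G)$ (via the right regular representation), and tensoring
% with an ample factor $\ell^2(\mathbb{N})$ recovers
% $\mathfrak{K}(l^2(\mathbb{N}))\otimes C^*_r(G)$.
```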
The equivariant localization algebra (or the localized equivariant Roe algebra) $C_L^*(H_X)^G$ is defined as the norm-completion of the $\ast$-algebra consisting of uniformly norm-continuous functions $t\mapsto S_t \in \C(H_X)^G$ on $[1, \infty)$ such that the propagation $\mathrm{prop}(S_t)\to 0$ as $t\to \infty$. The evaluation at $t=1$ induces a $\ast$-homomorphism (the forget-control map) \[ \mathrm{ev}_1\colon C_L^*(H_X)^G \to C^*(H_X)^G \] whose induced map on K-theory groups \[ \mathrm{ev}_{1\ast} \colon K_\ast(C_L^*(H_X)^G) \to K_\ast(C^*(H_X)^G) \] is known to be equivalent to the Baum--Connes assembly map \[ \mu^G_X\colon KK_\ast^G(C_0(X), \C) \to K_\ast(C^*_r(G)) \] if $X$ is $G$-compact (see \cite{Shan08}, \cite{Yu10}, \cite{FuWang}, \cite{FuWangYu}). Taking the inductive limit of $\mathrm{ev}_{1\ast}$ over the $G$-compact $G$-invariant closed subspaces $Y$ of $\underline{E}G$, we get \[ \mathrm{ev}_{1\ast} \colon \varinjlim_{Y\subset \underline{E}G, \mathrm{Gcpt}}K_\ast(C_L^*(H_Y)^G) \to K_\ast(C^*_r(G)) \] which is equivalent to the Baum--Connes assembly map $\mu_r^G$. This is the controlled algebraic reformulation of the Baum--Connes conjecture due to Yu. For a (second countable) locally compact space $X$ and an $X$-module $H_X$, a representable localization algebra $RL^*_c(H_X)$ (see below for the definition) is introduced in \cite[Section 9.4]{WY2020} (see \cite{Yu1997} for a localization algebra). It is shown that for a suitable choice of an $X$-module $H_X$ for each locally compact space $X$ and for a suitable choice of covering isometries $(V^f_t\colon H_X\to H_Y)_{t\in [1,\infty)}$ for each continuous map $f\colon X\to Y$, the assignment \[ X\mapsto \bD_\ast(X)=K_\ast(RL^*_c(H_X)), \,\,\,f\mapsto \bD_\ast(f)=\mathrm{Ad}_{V^f_t\ast} \] is a functor from the category $\mathcal{LC}$ of locally compact spaces to the category $\mathcal{GA}$ of abelian groups.
It is shown in \cite{WY2020} that this functor is naturally the representable K-homology $RK_\ast$ on locally compact spaces. \noindent \textbf{Main Results:} Let $G$ be a second countable, locally compact group, $X$ be a (second countable) locally compact, proper $G$-space and $B$ be a separable $G$-$C^*$-algebra. For any $X$-$G$-module $H_X$, we define the $G$-$C^*$-algebra (the representable localization algebra) $RL^*_c(H_X\otimes B)$ as the completion of the $\ast$-algebra of bounded, $G$-continuous, norm-continuous $\mathfrak{K}(H_X)\otimes B$-valued functions $T$ on $[1, \infty)$ such that \begin{enumerate} \item $\lim_{t\to \infty}||[\phi, T_t]||= \lim_{t\to \infty}||\phi T_t - T_t \phi|| = 0$ for any $\phi \in C_0(X)$, and \item $T$ has uniform compact support in the sense that for some compact subset $K$ of $X$, $T_t=\chi_KT_t\chi_K$ for all $t\geq1$ ($\chi_K$ is the characteristic function of $K$). \end{enumerate} Here, $\mathfrak{K}(H_X)$ is the $G$-$C^*$-algebra of compact operators on $H_X$. We have a natural evaluation map (the forget-control map) \[ \mathrm{ev}_{1}\colon RL^*_c(H_X\otimes B) \to \mathfrak{K}(H_X)\otimes B \] at $t=1$ and it induces a $\ast$-homomorphism \[ \mathrm{ev}_{1}\colon RL^*_c(H_X\otimes B)\rtimes_rG \to (\mathfrak{K}(H_X)\otimes B)\rtimes_rG \] on the reduced crossed product. For a suitable choice of an $X$-$G$-module $H_X$ (a universal $X$-$G$-module, see Definition \ref{def_universalXG}) for each proper $G$-space $X$ and for a suitable choice of $G$-equivariant covering isometries $(V^f_t\colon H_X\to H_Y)_{t\in [1,\infty)}$ for each $G$-equivariant continuous map $f\colon X\to Y$, we obtain a well-defined functor (see Definition \ref{def_DBG}) \[ X\mapsto \bD_\ast^{B, G}(X)=K_\ast(RL^*_c(H_X\otimes B)\rtimes_rG), \,\,\,f \mapsto \bD^{B, G}_\ast(f) = \mathrm{Ad}_{V^f_t\ast} \] from the category $\mathcal{PR}^G$ of (second countable, locally compact) proper $G$-spaces to the category $\mathcal{GA}$.
The functor $\bD_\ast^{B, G}\colon \mathcal{PR}^G \to \mathcal{GA}$ satisfies several expected properties for the representable $G$-equivariant K-homology (see Theorem \ref{thm_coeff}) and one may extend $\bD_\ast^{B, G}$ to all (not necessarily locally compact) proper $G$-spaces by declaring $\bD_\ast^{B, G}(X)=\varinjlim_{Y\subset X, \mathrm{Gcpt}}\bD_\ast^{B, G}(Y)$. The forget-control map $\mathrm{ev}_{1}$ induces a group homomorphism (the forget-control map) \[ \mathcal{F} \colon \bD_\ast^{B, G}(X) \to K_\ast(B\rtimes_rG). \] The following is one of our main results. \begin{theoremA*}(Theorem \ref{thm_main_isom}, Theorem \ref{thm_main_equivalent}) The forget-control map $\mathcal{F}\colon \bD_\ast^{B, G}(X) \to K_\ast(B\rtimes_rG)$ is naturally equivalent to the Baum--Connes assembly map \[ \mu_X^{B, G}\colon \varinjlim_{Y\subset X, \mathrm{Gcpt}}KK_\ast^G(C_0(Y), B) \to KK_\ast(\C, B\rtimes_rG) \] for any second countable, locally compact group $G$, for any proper $G$-space $X$ and for any separable $G$-$C^*$-algebra $B$. That is, there is a natural isomorphism \[ \rho_X\colon \bD_\ast^{B, G}(X) \to \varinjlim_{Y\subset X, \mathrm{Gcpt}}KK_\ast^G(C_0(Y), B) \] of the functors from $\mathcal{PR}^G$ to $\mathcal{GA}$ and the following diagram commutes \begin{equation*} \xymatrix{ \bD_\ast^{B, G}(X) \ar[dr]^{\rho_X}_-{\cong} \ar[rr]^{\mathcal{F}} & & K_\ast(B\rtimes_rG) \\ & \varinjlim_{Y\subset X, \mathrm{Gcpt}}KK_\ast^G(C_0(Y), B). \ar[ur]^{\mu^{B, G}_X} & } \end{equation*} \end{theoremA*} Let $RL^0_c(H_X\otimes B)$ be the kernel of the evaluation map $\mathrm{ev}_1$ on $RL^*_c(H_X\otimes B)$. The short exact sequence \[ 0 \to RL^0_c(H_X\otimes B) \to RL^*_c(H_X\otimes B) \to \mathfrak{K}(H_X)\otimes B \to 0 \] admits a $G$-equivariant c.c.p.\ splitting and thus it descends to the short exact sequence \[ 0 \to RL^0_c(H_X\otimes B)\rtimes_rG \to RL^*_c(H_X\otimes B)\rtimes_rG \to (\mathfrak{K}(H_X)\otimes B)\rtimes_rG \to 0. 
\] Hence, the following is a consequence of Theorem A. \begin{corollaryB*}(Corollary \ref{cor_N}) Let $G$ be a second countable, locally compact group and $B$ be a separable $G$-$C^*$-algebra. The Baum--Connes assembly map $\mu^{B, G}_r$ is an isomorphism if and only if \[ K_\ast(RL^0_c(H_X\otimes B)\rtimes_rG)=0 \] for a universal $X$-$G$-module $H_X$ for $X=\underline{E}G$. \end{corollaryB*} We remark that for any ample $X$-module $H_X$, the $X$-$G$-module $H_X\otimes L^2(G)$ with structure given by the left (or right) regular representation of $C_0(X)$ is an example of a universal $X$-$G$-module (Proposition \ref{prop_universal}). One of the motivations of this article is to describe the so-called gamma element method (see below) for the Baum--Connes conjecture using the representable localization algebra $RL^*_c(H_X)$. Recall that a $G$-$C^*$-algebra $A$ is called a proper $G$-$C^*$-algebra if for some proper $G$-space $X$, there is a non-degenerate, central representation of $C_0(X)$ to the multiplier algebra $M(A)$ of $A$. Kasparov's equivariant $KK$-theory $KK^G$ \cite{Kasparov88} is an additive category with separable $G$-$C^*$-algebras as objects and with Kasparov's group $KK^G(B_1, B_2)$ as the morphism group between two $G$-$C^*$-algebras $B_1, B_2$. The composition law is given by the Kasparov product. In particular, the group $KK^G(\C, \C)$ has a commutative ring structure and it is often denoted as $R(G)$ and called Kasparov's representation ring. The unit of the ring $R(G)$ is denoted by $1_G$. We say that an element $x$ in $R(G)$ factors through a proper $G$-$C^*$-algebra if there is a proper $G$-$C^*$-algebra $A$ and elements $y\in KK^G(\C, A)$, $z\in KK^G(A, \C)$ such that $x=z\circ y$ in $R(G)=KK^G(\C, \C)$. There is a natural restriction functor $KK^{G}(A, B) \to KK^{H}(A, B)$ for any closed subgroup $H$ of $G$, and in particular we have a ring homomorphism $R(G) \to R(H)$.
The following is a formulation of the gamma element method by Tu, formalizing the work of Kasparov \cite{Kasparov88}. \begin{theorem*}(The gamma element method \cite{Tu00}) Suppose there is an element $x \in R(G)$ such that \begin{enumerate} \item[(a)] $x$ factors through a proper $G$-$C^*$-algebra, and \item[(b)] $x=1_K$ in $R(K)$ for any compact subgroup $K$ of $G$. \end{enumerate} Then, $x$ is the unique idempotent in $R(G)$ characterized by these properties and is called the gamma element $\gamma$ for $G$. The existence of $\gamma$ implies \begin{enumerate} \item the Baum--Connes assembly map $\mu_r^{B, G}$ is split injective for any separable $G$-$C^*$-algebra $B$, and \item the image of $\mu_r^{B, G}$ coincides with the image of the action of $\gamma$ on the K-theory group $K_\ast(B\rtimes_rG)$ defined by the canonically defined ring homomorphism \[ R(G) \to \mathrm{End}\left(K_\ast(B\rtimes_rG) \right). \] \end{enumerate} In particular, if $\gamma=1_G$ in $R(G)$, BCC holds for $G$. \end{theorem*} Based on Theorem A, we seek a natural assumption under which we have a splitting (natural with respect to $B$) \[ K_\ast(B\rtimes_rG) \to \bD_\ast^{B, G}(X) \] of the forget-control map $\mathcal{F}$. By Theorem A, this automatically gives us a natural splitting of the Baum--Connes assembly map $\mu_r^{B, G}$. Our next result provides such an assumption, which is analogous to the one for the gamma element method, but is purely in terms of the representable localization algebra $RL^*_c(H_X)$. Since this assumption is satisfied when the gamma element exists, we would like to think of this as a controlled algebraic aspect of the gamma element method. For this, we first recall the definition of cycles of $R(G)=KK^G(\C, \C)$.
A cycle for $R(G)$ is a pair $(H, T)$ where $H$ is a (separable) graded $G$-Hilbert space and $T$ is an odd, self-adjoint, bounded, $G$-continuous operator on $H$ satisfying \[ 1-T^2, \,\,\, g(T)-T \in \mathfrak{K}(H) \,\,\,\,\, (g\in G). \] A graded $X$-$G$-module is just the direct sum of $X$-$G$-modules $H_X^{(0)}$ (even space) and $H_X^{(1)}$ (odd space). \begin{definition*}[Definition \ref{def_XGlocalized}] An $X$-$G$-localized Kasparov cycle for $KK^G(\C, \C)$ is a pair $(H_X, T)$ of a graded $X$-$G$-module $H_X$ and an odd, self-adjoint, $G$-continuous element $T$ in the multiplier algebra $M(RL_c^*(H_X))$ satisfying for any $g\in G$, \[ 1-T^2, \,\,\, g(T)-T \in RL^*_c(H_X). \] \end{definition*} If one prefers, an $X$-$G$-localized Kasparov cycle for $KK^G(\C, \C)$ is a cycle for $KK^G(\C, RL^*_c(H_X))$ but we do not take it as our definition. The evaluation at $t=1$, \[ \mathrm{ev}_{1}\colon RL^*_c(H_X) \to \mathfrak{K}(H_X) \] extends to \[ \mathrm{ev}_{1}\colon M(RL^*_c(H_X)) \to \mathfrak{L}(H_X). \] For any $T \in M(RL^*_c(H_X))$, we write $T_1 \in \mathfrak{L}(H_X)$ for its image under $\mathrm{ev}_{1}$. For any $X$-$G$-localized Kasparov cycle $(H_X, T)$, the pair $(H_X, T_1)$ is a cycle for $KK^G(\C, \C)$. \begin{definition*}(Definition \ref{def_XGlocalized_element}) Let $X$ be a proper $G$-space. We say that an element $x$ in $R(G)=KK^G(\C, \C)$ is $X$-$G$-localized if there is an $X$-$G$-localized Kasparov cycle $(H_X, T)$ for $KK^G(\C, \C)$ such that \[ [H_X, T_1] = x \,\,\text{in}\,\,\, KK^G(\C, \C).
\] \end{definition*} That is, an element $x$ in $R(G)$ is $X$-$G$-localized if it can be represented by a cycle $(H_X, T_1)$ with a graded $X$-$G$-module $H_X$, which extends to a continuous family $(H_X, T_t)_{t\in [1,\infty)}$ of cycles of $R(G)$ so that the family $T=(T_t)_{t\in [1,\infty)}$ of operators on $H_X$ multiplies the representable localization algebra $RL^*_c(H_X)$ and satisfies the support condition \[ 1-T^2, \,\,\, g(T)-T \in RL_c^*(H_X) \,\,\,\,\, (g\in G). \] We remark that $T\in M(RL^*_c(H_X))$ holds if $\lim_{t\to \infty}||[T_t, \phi]|| = 0$ for any $\phi\in C_0(X)$. The following is our controlled algebraic reformulation of the gamma element method. For $x\in R(G)$, let us denote by $x^{B\rtimes_rG}_\ast$ its image under the canonically defined ring homomorphism \[ \xymatrix{ KK^G(\C, \C) \ar[r]^{\sigma_B} & KK^G(B, B) \ar[r]^-{j^G_r} & KK(B\rtimes_rG, B\rtimes_rG) \ar[r]^-{} & \mathrm{End}\left(K_\ast(B\rtimes_rG) \right). } \] \begin{theoremC*}(Theorem \ref{thm_XGfactor}, Theorem \ref{thm_XGgamma}) Let $X$ be a proper $G$-space. \begin{enumerate} \item Suppose that an element $x\in R(G)$ is $X$-$G$-localized, that is, $x=[H_X, T_1]$ for an $X$-$G$-localized Kasparov cycle $(H_X, T)$ for $KK^G(\C, \C)$. Then, there is a homomorphism \[ \nu^{B, T}\colon K_\ast(B\rtimes_rG) \to \bD^{B, G}_\ast(X) \] for any separable $G$-$C^*$-algebra $B$, natural with respect to $G$-equivariant $\ast$-homomorphisms $B_1\to B_2$, such that \[ x^{B\rtimes_rG}_\ast = \mathcal{F} \circ \nu^{B, T} \colon K_\ast(B\rtimes_rG) \to \bD^{B, G}_\ast(X) \to K_\ast(B\rtimes_rG). \] In particular, $x^{B\rtimes_rG}_\ast$ factors through the Baum--Connes assembly map $\mu_r^{B, G}$. \item If there is an $X$-$G$-localized element $x\in R(G)$ such that $x=1_K$ in $R(K)$ for any compact subgroup $K$, the Baum--Connes assembly map $\mu_r^{B, G}$ is split injective for any $B$ and in this case the image of $\mu_r^{B, G}$ coincides with the image of $x^{B\rtimes_rG}_\ast$.
In particular, if $x=1_G$, BCC holds for $G$. \end{enumerate} \end{theoremC*} The relationship with the gamma element is as follows. \begin{theoremD*}(Theorem \ref{thm_XGgamma0}) The gamma element for $G$, if it exists, is $X$-$G$-localized for $X=\underline{E}G$. \end{theoremD*} According to Meyer and Nest \cite{MeyerNest}, for any (second countable) locally compact group $G$, there is a separable $G$-$C^*$-algebra $\mathcal{P}$ and the canonical Dirac element \[ D \in KK^G(\mathcal{P}, \C) \] such that for $\mathcal{P}_B=\mathcal{P}\otimes B$, $D_B=D\otimes 1_B \in KK^G(\mathcal{P}_B, B)$, the induced map \[ j^G_r(D_B)_\ast \colon K_\ast(\mathcal{P}_B\rtimes_rG) \to K_\ast(B\rtimes_rG) \] on K-theory is naturally equivalent to the Baum--Connes assembly map $\mu^{B, G}_r$ for any separable $G$-$C^*$-algebra $B$. According to Theorem A, the representable localization algebra $RL^*_c(H_X\otimes B)$ (for a universal $X$-$G$-module $H_X$ for $X=\underline{E}G$) can be, at a formal level, regarded as $\mathcal{P}_B$ and the evaluation map \[ \mathrm{ev}_1\colon RL^*_c(H_X\otimes B) \to \mathfrak{K}(H_X)\otimes B \] can be regarded as $D_B$. The identification is formal since, for example, $RL^*_c(H_X\otimes B)$ is not separable. At least when $G$ is discrete, the comparison between $\mathcal{P}$ and $RL^*_c(H_X)$ can be made stronger because of the following. \begin{theoremE*}(Theorem \ref{thm_discrete_natural_isom}, Theorem \ref{thm_otimesB}) Let $G$ be a countable discrete group, $X$ be a proper $G$-space which is $G$-equivariantly homotopic to a $G$-CW complex, and $H_X$ be a universal $X$-$G$-module. Then, the (well-defined) inclusion \[ RL^*_c(H_X)\otimes B \to RL^*_c(H_X\otimes B) \] induces an isomorphism \[ K_\ast(\left(RL^*_c(H_X)\otimes B\right)\rtimes_rG) \cong K_\ast(RL^*_c(H_X\otimes B)\rtimes_rG).
\] Hence, the group homomorphism \[ K_\ast(\left(RL^*_c(H_X)\otimes B\right)\rtimes_rG) \to K_\ast(\left(\mathfrak{K}(H_X)\otimes B\right)\rtimes_rG) \cong K_\ast(B\rtimes_rG) \] induced by the evaluation map $\mathrm{ev}_1\colon RL^*_c(H_X) \to \mathfrak{K}(H_X)$ is naturally equivalent to the Baum--Connes assembly map $\mu^{B, G}_X$. \end{theoremE*} That is, at a formal level, $RL^*_c(H_X)\otimes B$ (for $X=\underline{E}G$) can be regarded as $\mathcal{P}_B=\mathcal{P}\otimes B$ for any separable $G$-$C^*$-algebra $B$ (at least for $G$ discrete). \begin{corollaryF*}(Corollary \ref{cor_discrete_N}) Let $G$ be a countable discrete group and $B$ be a separable $G$-$C^*$-algebra. The Baum--Connes assembly map $\mu^{B, G}_r$ is an isomorphism if and only if \[ K_\ast((RL^0_c(H_X)\otimes B)\rtimes_rG)=0 \] for a universal $X$-$G$-module $H_X$ for $X=\underline{E}G$. \end{corollaryF*} In the last section, as an application of Theorem C, we extend the recently obtained new proof \cite{BGHN} of the Baum--Connes conjecture with coefficients for CAT(0)-cubical groups to the non-cocompact setting: \begin{theoremG*}(Theorem \ref{thm_cube}) Let $G$ be a second countable, locally compact group which acts properly and continuously on a finite-dimensional CAT(0)-cubical space with bounded geometry by automorphisms. Then, the Baum--Connes assembly map $\mu^{B, G}_r$ is an isomorphism for any separable $G$-$C^*$-algebra $B$, i.e. BCC holds for $G$. \end{theoremG*} We remark that any group $G$ which acts properly on a CAT(0)-cubical space has the Haagerup approximation property \cite{CCJJV}, so $G$ is a-T-menable. Thus, BCC for these groups is already known by the Higson--Kasparov Theorem \cite{HK97}, \cite{HigsonKasparov}. \section*{Acknowledgement} I would like to thank Arthur Bartels, Siegfried Echterhoff, Julian Kranz, Rufus Willett, and Rudolf Zeidler for helpful discussions and comments.
I would also like to thank Jacek Brodzki, Erik Guentner and Nigel Higson for their encouragement on extending the work \cite{BGHN} to the non-cocompact setting. \section*{Notations} For a $C^*$-algebra $A$, for Hilbert $A$-modules $\mathcal{E}_1$ and $\mathcal{E}_2$ and for a locally compact topological space $X$, \begin{itemize} \item $\mathfrak{L}(\mathcal{E}_1)$, resp. $\mathfrak{K}(\mathcal{E}_1)$, is the $C^*$-algebra of adjointable bounded, resp. compact, operators on $\mathcal{E}_1$, \item $\mathfrak{L}(\mathcal{E}_1, \mathcal{E}_2)$, resp. $\mathfrak{K}(\mathcal{E}_1, \mathcal{E}_2)$, is the space of the adjointable bounded, resp. compact, operators from $\mathcal{E}_1$ to $\mathcal{E}_2$, \item $C_0(X, A)$ is the $C^*$-algebra of $A$-valued, bounded, norm-continuous functions on $X$ vanishing at infinity, \item $C_b(X, A)$, resp. $C_{b, u}(X, A)$, is the $C^*$-algebra of $A$-valued, bounded, norm-continuous, resp. uniformly norm-continuous, functions on $X$, \item $M(A)$ is the multiplier algebra of $A$. \end{itemize} We use the notation $\otimes$, resp. $\otimes_{\mathrm{max}}$, for the minimal, resp. maximal, tensor product and the notation $\rtimes_r$, resp. $\rtimes_{\mathrm{max}}$, for the reduced, resp. maximal, crossed product. \section{Representable localization algebra} Most of the material in this and the next section is essentially what is proven in \cite{WY2020}. We will carefully review it because it will be important in later sections. Let $X$ be a second countable, locally compact topological space (a locally compact space in short). An $X$-module $H_X$ (or a module over $X$) is a separable Hilbert space equipped with a representation of the $C^*$-algebra $C_0(X)$ which is non-degenerate, i.e. $C_0(X)H_X$ is dense in $H_X$. Any Borel function on $X$ is naturally represented on $H_X$ by Borel functional calculus. The characteristic function of a Borel subset $E$ of $X$ is denoted by $\chi_E$.
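A standard example of an $X$-module, included here for illustration (it is not singled out in the text):

```latex
% For a Borel measure $\mu$ on $X$ with full support, the multiplication
% representation on $L^2(X,\mu)$,
\[
(\phi\cdot\xi)(x) = \phi(x)\xi(x), \qquad \phi\in C_0(X),\ \xi\in L^2(X,\mu),
\]
% is non-degenerate, so $H_X=L^2(X,\mu)$ is an $X$-module. If $\mu$ is
% atomless, multiplication by a nonzero $\phi\in C_0(X)$ is never compact,
% so $H_X$ is ample in the sense defined below; in general,
% $L^2(X,\mu)\otimes\ell^2(\mathbb{N})$ is ample.
```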
We will use the convention that a module over the empty set is the zero Hilbert space. An $X$-module $H_X$ is ample if no nonzero element in $C_0(X)$ acts as a compact operator. Let $X, Y$ be locally compact spaces. The support $\mathrm{supp}(T)$ of a bounded operator $T$ from an $X$-module $H_X$ to a $Y$-module $H_Y$ is defined as the set of $(y, x)$ in $Y\times X$ such that $\chi_VT\chi_U\neq0$ for all open neighborhoods $U$ of $x$ and $V$ of $y$. The support is a closed subset of $Y\times X$. When $X$ is a metric space, the propagation of $T$ in $\mathfrak{L}(H_X)$ is defined as \[ \mathrm{prop}(T) = \sup\{ \, d(x,y)\mid \, (x, y) \in \mathrm{supp}(T)\, \}. \] For an $X$-module $H_X$, we introduce a representable localization algebra $RL^*_c(H_X)$ as follows. It is slightly different from the representable localization algebra $RL^*(H_X)$ defined in \cite[Definition 9.4.1]{WY2020} but the two algebras behave basically in the same way. Our definition has the advantage that the ideal $RL^*_0(H_X)$ of ``the negligible part'' of $RL^*_c(H_X)$ is $C_0([1,\infty), \mathfrak{K}(H_X))$, not $C_0([1,\infty), \mathfrak{L}(H_X))$. This will be important when we define the forget-control map, for example. \begin{definition}\label{def_alg} Let $H_X$ be an $X$-module. We define a $\ast$-subalgebra $RL^{\mathrm{alg}}_c(H_X)$ of $C_b([1, \infty), \mathfrak{K}(H_X))$ consisting of bounded, norm-continuous $\mathfrak{K}(H_X)$-valued functions $T\colon t\mapsto T_t$ on $[1, \infty)$, such that \begin{enumerate} \item $T$ has uniform compact support in the sense that there is a compact subset $K$ of $X$ such that $T_t=\chi_KT_t\chi_K$ (i.e. $\mathrm{supp}(T_t)\subset K\times K$) for all $t\geq1$, \item for any $\phi$ in $C_0(X)$, we have \[ \lim_{t\to \infty }||[\phi, T_t]||=\lim_{t\to \infty}||\phi T_t-T_t\phi|| = 0. \] \end{enumerate} We define $RL^{\mathrm{alg}}_u(H_X)$ to be the subalgebra of $RL^{\mathrm{alg}}_c(H_X)$ consisting of uniformly norm-continuous functions.
\end{definition} \begin{definition} Let $H_X$ be an $X$-module. We define a $C^*$-algebra $RL_c^\ast(H_X)$ as the norm completion of $RL^{\mathrm{alg}}_c(H_X)$ inside $C_b([1, \infty), \mathfrak{K}(H_X))$. A $C^*$-algebra $RL_u^\ast(H_X)$ is defined as the completion of $RL^{\mathrm{alg}}_u(H_X)$. We call the algebras $RL_c^\ast(H_X)$ and $RL_u^\ast(H_X)$ the (continuous / uniformly continuous) representable localization algebras. \end{definition} \begin{remark} When $X$ is compact, the first condition in Definition \ref{def_alg} is vacuous and the algebra $RL_u^\ast(H_X)$ is the same as the localization algebra $\mathcal{C}_L(\pi)$ introduced and studied in \cite{DWW18} for the structure map $\pi\colon C(X) \to \mathfrak{L}(H_X)$. For general $X$, $RL_u^\ast(H_X)$ is just the inductive limit (the union) of $RL_u^\ast(H_Y)$ over the compact subspaces $Y$ of $X$ where $H_Y=\chi_YH_X$. \end{remark} We will mostly study $RL_c^\ast(H_X)$ but the entire discussion and results on $RL_c^\ast(H_X)$ have obvious analogues for $RL_u^\ast(H_X)$. The non-degeneracy of the representation of $C_0(X)$ on $H_X$ implies that $RL_u^\ast(H_X)$ and hence $RL_c^\ast(H_X)$ contain the following (essential) ideal \[ RL_0^\ast(H_X)=C_0([1, \infty), \mathfrak{K}(H_X)). \] We define $RL_{c, Q}^\ast(H_X)$ to be the quotient of $RL_c^\ast(H_X)$ by the ideal $RL_0^\ast(H_X)$. Similarly, $RL_{u, Q}^\ast(H_X)$ is the quotient of $RL_u^\ast(H_X)$ by $RL_0^\ast(H_X)$. Thus, we have the following diagram of short exact sequences \begin{equation*} \xymatrix{ 0 \ar[r]^-{} & RL_0^\ast(H_X) \ar[r]^-{} \ar[d]^-{=} & RL_u^\ast(H_X) \ar[r]^-{} \ar[d]^-{} & RL_{u, Q}^\ast(H_X) \ar[r]^-{} \ar[d]^-{} & 0 \\ 0 \ar[r]^-{} & RL_0^\ast(H_X) \ar[r]^-{} & RL_c^\ast(H_X) \ar[r]^-{} & RL_{c, Q}^\ast(H_X) \ar[r]^-{} & 0. } \end{equation*} \begin{proposition}\label{prop_quotient_isom} The quotient map from $RL^*_c(H_X)$ to $RL^*_{c, Q}(H_X)$ induces an isomorphism on the K-theory groups.
The same holds for the quotient map from $RL^*_u(H_X)$ to $RL^*_{u, Q}(H_X)$. \end{proposition} \begin{proof} This follows from $K_\ast(RL_0^\ast(H_X))=K_\ast(C_0([1, \infty), \mathfrak{K}(H_X)))=0$. \end{proof} \begin{lemma}(See \cite[Proposition 6.1.1]{WY2020}) \label{lem_commutator0} Let $X$ be a compact space, $H_X$ be an $X$-module and $T \in C_b([1, \infty), \mathfrak{L}(H_X))$. Fix any metric $d$ on $X$. The following are equivalent: \begin{enumerate} \item For any $\phi \in C(X)$, $\lim_{t\to \infty }||[\phi, T_t]||= 0$. \item There is $S \in C_b([1, \infty), \mathfrak{L}(H_X))$ such that the propagation $\mathrm{prop}(S_t) \to 0$ as $t \to \infty$ and such that $\limt||T_t-S_t||= 0$. \item There is $S \in C_b([1, \infty), \mathfrak{L}(H_X))$ such that for any open neighborhood $U$ of the diagonal in $X\times X$ there exists $t_U\geq1$ so that for all $t> t_U$, $\mathrm{supp}(S_t)\subset U$ and such that $\limt||T_t-S_t||= 0$. \end{enumerate} The same is true if we replace $C_b([1, \infty), \mathfrak{L}(H_X))$ with any one of $C_{b, u}([1, \infty), \mathfrak{L}(H_X))$, $C_b([1, \infty), \mathfrak{K}(H_X))$ or $C_{b, u}([1, \infty), \mathfrak{K}(H_X))$ everywhere. \end{lemma} \begin{proof} The equivalence of (1) and (2) is proven in \cite[Proposition 6.1.1]{WY2020} and the condition on $S_t$ in (2) and the one in (3) are equivalent. The proof is still valid when we use $C_b([1, \infty), \mathfrak{K}(H_X))$ instead. One can replace continuous by uniformly continuous: if $T_t$ is uniformly continuous and $S_t$ is continuous with $||T_t-S_t||\to 0$ as $t\to \infty$, then $S_t$ is uniformly continuous. \end{proof} For any locally compact space $X$, let $X^+=X\cup \{\infty\}$ be the one-point compactification of $X$. Any $X$-module is naturally an $X^+$-module. \begin{lemma}\label{lem_commutator} Let $X$ be a locally compact space, $H_X$ be an $X$-module and $T \in C_b([1, \infty), \mathfrak{L}(H_X))$. Fix any metric $d$ on $X^+$.
The following are equivalent: \begin{enumerate} \item For any $\phi \in C_0(X)$, $\lim_{t\to \infty }||[\phi, T_t]||= 0$. \item There is $S \in C_b([1, \infty), \mathfrak{L}(H_X))$ such that the propagation $\mathrm{prop}(S_t) \to 0$ as $t \to \infty$ and such that $\limt||T_t-S_t||= 0$. \item There is $S \in C_b([1, \infty), \mathfrak{L}(H_X))$ such that for any open neighborhood $U$ of the diagonal in $X^+\times X^+$ there exists $t_U\geq1$ so that for all $t> t_U$, $\mathrm{supp}(S_t)\subset U$ and such that $\limt||T_t-S_t||= 0$. \end{enumerate} The same is true if we replace $C_b([1, \infty), \mathfrak{L}(H_X))$ with any one of $C_{b, u}([1, \infty), \mathfrak{L}(H_X))$, $C_b([1, \infty), \mathfrak{K}(H_X))$ or $C_{b, u}([1, \infty), \mathfrak{K}(H_X))$ everywhere. \end{lemma} \begin{proof} This follows from Lemma \ref{lem_commutator0} by viewing $H_X$ as an $X^+$-module. \end{proof} \begin{proposition}\label{prop_same} The $C^*$-algebra $RL_c^\ast(H_X)$ coincides with the completion of the $\ast$-subalgebra $R\mathbb{L}^{\mathrm{alg}}_c(H_X)$ of $C_b([1, \infty), \mathfrak{K}(H_X))$ that consists of $T$ such that \begin{enumerate} \item $T$ has eventually uniform compact support in the sense that there is a compact subset $K$ of $X$ and $t_0\geq1$ such that $T_t=\chi_KT_t\chi_K$ for all $t\geq t_0$, \item for any open neighborhood $U$ of the diagonal in $X^+\times X^+$ there exists $t_U\geq1$ such that for all $t> t_U$, $\mathrm{supp}(T_t)\subset U$. \end{enumerate} \end{proposition} \begin{proof} Since both $RL_c^\ast(H_X)$ and the completion of $R\mathbb{L}^{\mathrm{alg}}_c(H_X)$ contain the ideal $RL_0^*(H_X)$, using Lemma \ref{lem_commutator}, we can see that each one of the algebras contains all the generators of the other. \end{proof} Any bounded Borel function on $X$ multiplies $RL_c^\ast(H_X)$ and hence the ideal $RL_0^\ast(H_X)$, so it naturally defines a multiplier on the quotient $RL_{c, Q}^\ast(H_X)$.
In particular, we have a $\ast$-homomorphism \begin{equation}\label{eq_Xalgebra} C_b(X) \to M(RL_{c, Q}^\ast(H_X)). \end{equation} Recall that a $C_0(X)$-algebra is a $C^*$-algebra $A$ equipped with a representation \[ C_0(X) \to M(A) \] of $C_0(X)$ to the multiplier algebra of $A$ which is non-degenerate and central. Here, non-degenerate means $C_0(X)A$ is dense in $A$ and central means it maps $C_0(X)$ to the center of the multiplier algebra. \begin{proposition} The $C^*$-algebra $RL_{c, Q}^\ast(H_X)$ is naturally a $C_0(X)$-algebra. That is, the natural representation \eqref{eq_Xalgebra} of $C_0(X)$ to the multiplier algebra $M(RL_{c, Q}^\ast(H_X))$ is non-degenerate and central. \end{proposition} \begin{proof} The image of $RL^{\mathrm{alg}}_c(H_X)$ in $RL_{c, Q}^\ast(H_X)$ is dense. On this dense subalgebra, the non-degeneracy follows from the first condition in Definition \ref{def_alg}. The centrality follows from the second condition in Definition \ref{def_alg}. \end{proof} \begin{remark} In fact, we can see that $RL_{c, Q}^\ast(H_X)$ is the largest possible $C_0(X)$-subalgebra inside the quotient $C_b([1, \infty), \mathfrak{K}(H_X)) / C_0([1, \infty), \mathfrak{K}(H_X))$ with respect to the natural $C_0(X)$-action. \end{remark} Let $H_X$ be an $X$-module. Let $L=C_0[1, \infty)$. Consider a Hilbert $L$-module $H_X\otimes L$. The $C^*$-algebra $\mathfrak{K}(H_X\otimes L)$ of ($L$-)compact (adjointable) operators on $H_X\otimes L$ is naturally identified as $\mathfrak{K}(H_X)\otimes L \cong C_0([1, \infty), \mathfrak{K}(H_X))$. Similarly, the $C^*$-algebra $\mathfrak{L}(H_X\otimes L)$ of adjointable operators on $H_X\otimes L$ is identified as $C_{b, \mathrm{SOT}^*}([1, \infty), \mathfrak{L}(H_X))$ consisting of bounded, $\mathrm{SOT}^*$-continuous $\mathfrak{L}(H_X)$-valued functions. 
We sometimes view $RL_c^*(H_X)$ as a subalgebra of $\mathfrak{L}(H_X\otimes L)$ so we have \begin{align*} \mathfrak{K}(H_X\otimes L) \cong C_0([1, \infty), \mathfrak{K}(H_X)) = RL^*_0(H_X) \subset RL_c^*(H_X) \\ \subset C_b([1, \infty), \mathfrak{K}(H_X)) \subset C_{b, \mathrm{SOT}^*}([1, \infty), \mathfrak{L}(H_X)) \cong \mathfrak{L}(H_X\otimes L). \end{align*} Note that the multiplier algebra $M(RL^*_c(H_X))$ of $RL^*_c(H_X)$ is naturally a subalgebra of $\mathfrak{L}(H_X\otimes L)$. This is because for any $C^*$-algebra $J$, if a $C^*$-subalgebra $B\subset M(J)$ contains $J$, then $M(B)$ naturally coincides with the subalgebra of $M(J)$ consisting of multipliers of $B$ inside $M(J)$. The following is an easy exercise. \begin{lemma}\label{lem_mult} Let $T\in \mathfrak{L}(H_X\otimes L)\cong C_{b, \mathrm{SOT}^*}([1, \infty), \mathfrak{L}(H_X))$ be such that for any $\phi \in C_0(X)$, $\limt ||[T_t, \phi]||= 0$. Then $T\in M(RL^*_c(H_X))$. In particular, if $\mathrm{prop}(T_t)\to 0$ as $t\to \infty$ with respect to a (any) fixed metric on $X^+$, then $T\in M(RL^*_c(H_X))$. \end{lemma} \section{Representable K-homology} Almost all of the material in this section is proven in \cite{WY2020}, in particular in \cite[Section 9.4]{WY2020}. We review it here because it will be important in later sections. As before, we mainly study $RL_{c}^\ast(H_X)$ but the entire discussion and results on $RL_{c}^\ast(H_X)$ have obvious analogues for $RL^*_u(H_X)$. Let $X$ and $Y$ be locally compact spaces. Let $H_X , H_Y$ be an $X$-module and a $Y$-module respectively.
Given a continuous map $f\colon X\to Y$, a family of isometries $(V_t\colon H_X \to H_Y)_{t\in[1, \infty)}$ is called a continuous cover of $f$ \cite[Definition 4.4.8]{WY2020} if \begin{enumerate} \item the function $t\mapsto V_t$ from $[1, \infty)$ to $\mathfrak{L}(H_X, H_Y)$ is uniformly norm-continuous, and \item for any open neighborhood $U$ of the diagonal in $Y^+\times Y^+$, there exists $t_U\geq1$ such that for all $t\geq t_U$, \[ \mathrm{supp}(V_t) \subset \{\, (y, x)\in Y\times X \mid (y, f(x)) \in U \}. \] \end{enumerate} For example, if $H_X$ is an $X$-module and $f\colon X\to Y$ is a continuous map, we may view $H_X$ as a $Y$-module via $f^*\colon C_0(Y)\to C_b(X)$. The obtained representation of $C_0(Y)$ on $H_X$ is non-degenerate because $f^\ast\colon C_0(Y) \to M(C_0(X))=C_b(X)$ is non-degenerate, that is, $f^\ast(C_0(Y))C_0(X)$ is dense in $C_0(X)$. Let $(H_X)_Y$ be this $Y$-module. Then, the identity map $V\colon H_X\to (H_X)_Y$ is a strict cover of $f$ in the sense that for any open neighborhood $U$ of the diagonal in $Y^+\times Y^+$ \[ \mathrm{supp}(V) \subset \{\, (y, x)\in Y\times X \mid (y, f(x)) \in U \}, \] or equivalently, the support of $V$ is contained in the graph of $f$. In particular, $V$ as a constant family is a continuous cover of $f$ from $H_X$ to $(H_X)_Y$. \begin{lemma}\label{lem_coverid} A family of isometries $(V_t\colon H_X \to H_Y)_{t\in[1, \infty)}$ is a continuous cover of $f\colon X\to Y$ if and only if it is a continuous cover of the identity map on $Y$ when we view $H_X$ as a $Y$-module $(H_X)_Y$ via $f^\ast\colon C_0(Y) \to C_b(X)$.
\end{lemma} \begin{proof} It can be checked directly that for any $V\colon H_X\to H_Y$ and for any open neighborhood $U$ of the diagonal in $Y^+\times Y^+$, for the following conditions \begin{enumerate} \item $\mathrm{supp}(V\colon (H_X)_Y \to H_Y) \subset \{\, (y_1, y_2)\in Y\times Y \mid (y_1, y_2) \in U \}$, \item $\mathrm{supp}(V\colon H_X \to H_Y) \subset \{\, (y, x)\in Y\times X \mid (y, f(x)) \in U \}$, \item $\mathrm{supp}(V\colon (H_X)_Y \to H_Y) \subset \{\, (y_1, y_2)\in Y\times Y \mid (y_1, y_2) \in \bar U \}$, \end{enumerate} we have (1) $\implies$ (2) $\implies$ (3). The assertion follows from this. \end{proof} Given a continuous cover $(V_t\colon H_X \to H_Y)_{t\in[1, \infty)}$ of a continuous map $f\colon X\to Y$, the conjugation by $V_t$ defines a $\ast$-homomorphism \[ \mathrm{Ad}_{V_t} \colon RL_{c}^\ast(H_X) \to RL_{c}^\ast(H_Y). \] This is because $\mathrm{Ad}_{V_t}$ maps $R\mathbb{L}^{\mathrm{alg}}_c(H_X)$ (see Proposition \ref{prop_same} for this algebra) to $R\mathbb{L}^{\mathrm{alg}}_c(H_Y)$. This $\ast$-homomorphism depends on the continuous cover $V_t$ of $f$, but the induced map on K-theory groups is independent of the choice of $V_t$. This is because given two continuous covers $(V_{i, t}\colon H_X\to H_Y)_{t\in [1, \infty)}$ $(i=1,2)$ of $f$, the two maps \[ T_t \mapsto \begin{bmatrix} \mathrm{Ad}_{V_{1,t}}(T_t) & 0 \\ 0 & 0 \end{bmatrix}, \,\,\, T_t \mapsto \begin{bmatrix} 0 & 0 \\ 0 & \mathrm{Ad}_{V_{2,t}}(T_t) \end{bmatrix} \] from $RL_{c}^\ast(H_X)$ to the matrix algebra $M_2(RL_{c}^\ast(H_Y))$ are conjugate to each other by the unitary \[ \begin{bmatrix} 1-V_{1,t}V_{1,t}^\ast & V_{1,t}V_{2,t}^\ast \\ V_{2,t}V_{1,t}^\ast & 1-V_{2,t}V_{2,t}^\ast \end{bmatrix} \] in the $2\times 2$ matrix algebra $M_2(M(RL_{c}^\ast(H_Y)))$ of the multiplier algebra (using $V_{i,t}^\ast V_{i,t}=1$, one checks directly that this matrix is a self-adjoint unitary). Here, the following important fact is used: for any two continuous covers $V_{1,t}$, $V_{2,t}$ of $f$, the family of partial isometries $V_{1,t}V_{2,t}^\ast$ multiplies $RL_{c}^\ast(H_Y)$.
Indeed, for a fixed metric $d$ on $Y^+$ and for any $\epsilon>0$, $\mathrm{supp}(V_{1,t}V_{2,t}^\ast)$ is contained in the closure of \[ \{ (y_1, y_2) \mid \text{there is $x \in X$ such that $d(y_1, f(x)), d(y_2, f(x)) < \epsilon$ } \} \] for large enough $t$. It follows that $\mathrm{prop} (V_{1,t}V_{2,t}^\ast) \to 0$ as $t\to \infty$ with respect to $d$. By Lemma \ref{lem_mult}, the family $t\mapsto V_{1,t}V^\ast_{2,t}$ multiplies $RL^*_c(H_Y)$. Note that $\mathrm{Ad}_{V_t}$ for a cover $V_t$ of $f\colon X\to Y$ induces a $\ast$-homomorphism from $RL_{c, Q}^\ast(H_X)$ to $RL_{c, Q}^\ast(H_Y)$ as it maps the ideal $RL_{0}^\ast(H_X)$ to $RL_{0}^\ast(H_Y)$, and the induced map on K-theory groups is again independent of the choice of a cover. If $H_Y$ is an ample $Y$-module, a continuous cover $(V_t\colon H_X \to H_Y)_{t\in[1, \infty)}$ exists for any continuous map $f\colon X\to Y$ and for any $X$-module $H_X$ \cite[Corollary 4.4.7]{WY2020}. \begin{definition}\cite[Definition 9.4.5]{WY2020} Choose any ample $X$-module $H_X$ for each locally compact space $X$ and any continuous cover $V^f_t\colon H_X\to H_Y$ for each continuous map $f\colon X\to Y$. A functor $\mathbb{D}_\ast$ from the category $\mathcal{LC}$ of (second countable) locally compact spaces to the category $\mathcal{GA}$ of graded abelian groups is defined as \[ \mathbb{D}_\ast(X)=K_\ast(RL^\ast_c(H_X)), \] \[ \mathbb{D}_\ast(f\colon X\to Y)=\mathrm{Ad}_{V^f_t\ast} \colon K_\ast(RL^\ast_c(H_X)) \to K_\ast(RL^\ast_c(H_Y)). \] \end{definition} \begin{proposition}\cite[Theorem 9.4.4]{WY2020}\label{prop_welldef} The functor $\mathbb{D}_\ast$ from $\mathcal{LC}$ to $\mathcal{GA}$ is well-defined. The functor does not depend on the choice of ample modules $H_X$ up to canonical equivalence. \end{proposition} \begin{proof} There is a technical point not mentioned in \cite{WY2020}, which we now explain.
Let $f_1\colon X\to Y$, $f_2\colon Y\to Z$ be continuous maps and $V_1\colon H_X\to H_Y$ and $V_2\colon H_Y\to H_Z$ be continuous covers of $f_1$ and $f_2$ respectively. The point is that $V_2V_1$ is a continuous cover of $f_2\circ f_1$ if $f_2$ is proper, but it may not be in general. Also, if $V_3\colon H_X\to H_Z$ is a continuous cover of $f_2\circ f_1$, $V_3 (V_2V_1)^*$ may not multiply $RL^*_c(H_Z)$ if $f_2$ is not proper. Because of this, we reduce the functoriality to the case of proper maps. Thus, we first show the representability of $\mathbb{D}_\ast(X)$: if $(K_i)_{i \in I}$ is the net of compact subsets of $X$, ordered by inclusion, the canonical maps $\bD_\ast(K_i) \to \bD_\ast(X)$ induce a natural isomorphism \begin{equation}\label{eq_representable} \varinjlim_{i \in I}\bD_\ast(K_i) \cong \bD_\ast(X). \end{equation} Note that the functoriality for proper maps is used for defining this inductive system. For this, it suffices to show that if $U_n$ is an increasing sequence of relatively compact, open subsets of $X$ such that $\cup_n U_n=X$, then for their closures $K_n=\bar U_n$, the natural inclusions induce an isomorphism \[ \varinjlim_nK_\ast(RL_c^\ast(H_{K_n})) \cong K_\ast(RL_c^\ast(H_{X})). \] Since $K_n=\bar U_n$, the subspace $\chi_{K_n}H_X$ is an ample $K_n$-module and we may assume $H_{K_n}=\chi_{K_n}H_X$. Then, we see that $RL_c^\ast(H_{K_n})=\chi_{K_n}RL_c^\ast(H_{X})\chi_{K_n}$ and $RL_c^\ast(H_{K_n})$ is an increasing sequence of $C^*$-subalgebras of $RL_c^\ast(H_{X})$ whose union is dense in $RL_c^\ast(H_{X})$. The claim follows from the continuity of K-theory.
Next, we note that as in \cite[Corollary 9.4.10]{WY2020}, this identification \eqref{eq_representable} is compatible with $\mathbb{D}_\ast(f\colon X\to Y)$: if $(K_i)_{i \in I}$, $(K'_j)_{j\in J}$ are the nets of compact subsets of $X$ and $Y$ respectively, and if we consider the map \[ \varinjlim_{i\in I} \bD_\ast(f\mid_{K_i})\colon \varinjlim_{i \in I} \bD_\ast(K_i) \to \varinjlim_{j \in J}\bD_\ast(K'_j) \] defined as the limit of \[ \bD_\ast(K_i) \to \bD_\ast(f(K_i)) \to \varinjlim_{j \in J}\bD_\ast(K'_j), \] the composition of $\bD_\ast(f\mid_{K_i}\colon K_i\to f(K_i))$ and the natural map, then the following diagram commutes: \begin{equation*} \xymatrix{ \varinjlim_{i \in I}\bD_\ast(K_i) \ar[r]^-{\varinjlim_{i\in I} \bD_\ast(f\mid_{K_i})} \ar[d]^-{\cong} & \varinjlim_{j \in J}\bD_\ast(K'_j) \ar[d]^-{\cong} \\ K_\ast(RL_c^\ast(H_{X})) \ar[r]^-{\mathbb{D}_\ast(f)} & K_\ast(RL_c^\ast(H_{Y})). } \end{equation*} The functoriality $\bD_\ast(f_2)\circ \bD_\ast(f_1) = \bD_\ast(f_2\circ f_1)$ for not necessarily proper maps $f_1\colon X\to Y$, $f_2\colon Y\to Z$ now follows from this. \end{proof} Note that we have naturally \[ \mathbb{D}_\ast(X)=K_\ast(RL^\ast_c(H_X)) \cong K_\ast(RL^\ast_{c, Q}(H_X)) \] by Proposition \ref{prop_quotient_isom}. \begin{theorem}\label{thm_homology}\cite[Section 9.4]{WY2020} The functor $\mathbb{D}_\ast$ satisfies the following. \begin{enumerate} \item $\mathbb{D}_\ast(\mathrm{empty\, set})\cong 0$. \item $\mathbb{D}_\ast(\mathrm{point})\cong \left\{ \begin{array}{cc} \mathbb{Z} & \ast=0, \\ 0 & \ast=1. \end{array} \right.$ \item Representable: if $(K_i)_{i \in I}$ is the net of compact subsets of $X$, ordered by inclusion, the canonical maps $\bD_\ast(K_i) \to \bD_\ast(X)$ induce a natural isomorphism \[ \varinjlim_{i \in I}\bD_\ast(K_i) \cong \bD_\ast(X).
\] \item Mayer--Vietoris sequence for an open cover: if $X=U\cup V$ for open subsets $U$ and $V$ of $X$, we have a natural Mayer--Vietoris sequence \[ \xymatrix{ \bD_0(U\cap V) \ar[r]^-{}& \bD_0(U) \oplus \bD_0(V) \ar[r]^-{} & \bD_0(X) \ar[d]^-{} \\ \bD_1(X) \ar[u]^-{} & \bD_1(U) \oplus \bD_1(V) \ar[l]^-{} & \bD_1(U\cap V) \ar[l]^-{}. } \] \item Homotopy invariance: if $h\colon X\times[0, 1]\to Y$ is a continuous homotopy between $f_0, f_1\colon X\to Y$, then $\bD_\ast(f_0)=\bD_\ast(f_1)$. \end{enumerate} \end{theorem} \begin{proof} We review proofs of these properties of $\bD_\ast$ because they will be relevant when we consider an equivariant setting. (1) If $X$ is the empty set, $H_X=0$ and $RL_c^\ast(H_X)=0$. (2) If $X$ is a point, $RL_c^\ast(H_X)=C_b([1, \infty), \mathfrak{K}(H))$ for a separable, infinite-dimensional Hilbert space $H$. In the uniformly continuous case, \[ K_\ast(C_{b, u}([1, \infty), \mathfrak{K}(H)))\cong \left\{ \begin{array}{cc} \mathbb{Z} & \ast=0 \\ 0 & \ast=1 \end{array} \right. \] is proven in \cite[Proposition 6.3.3]{WY2020} by showing that the kernel $I_u$ of the evaluation map at $1$, \[ \mathrm{ev}_1\colon C_{b, u}([1, \infty), \mathfrak{K}(H)) \to \mathfrak{K}(H), \] has zero K-theory groups by a simple Eilenberg swindle argument. In \cite[Theorem 3.4]{WY2021}, it is shown that the canonical inclusion induces an isomorphism \[ K_\ast(C_{b, u}([1, \infty), \mathfrak{K}(H))) \cong K_\ast(C_{b}([1, \infty), \mathfrak{K}(H))), \] with a more elaborate Eilenberg swindle argument. The assertion follows from these. (3) We have already proved this in the proof of Proposition \ref{prop_welldef}. (4) For any open subset $U$ of $X$, $C_0(U)H_X$ is an ample $U$-module and we may assume $H_U=C_0(U)H_X$. We see that $RL_c^\ast(H_{U})$ coincides with the $C^*$-subalgebra of $RL_c^\ast(H_{X})$ generated by $C_0(U)RL_c^\ast(H_{X})C_0(U)$. Now let $X=U\cup V$ with $U, V$ open. Recall that $RL_{c, Q}^\ast(H_X)$ is a $C_0(X)$-algebra.
Let $RL_{c, Q}^\ast(H_X)_U$ be the ideal of $RL_{c, Q}^\ast(H_X)$ generated by $C_0(U)RL_{c, Q}^\ast(H_X)$. Define $RL_{c, Q}^\ast(H_X)_V$, $RL_{c, Q}^\ast(H_X)_{U\cap V}$ analogously. We see that $RL_{c, Q}^\ast(H_{U})$, $RL_{c, Q}^\ast(H_{V})$ and $RL_{c, Q}^\ast(H_{U\cap V})$ are naturally identified with $RL_{c, Q}^\ast(H_X)_U$, $RL_{c, Q}^\ast(H_X)_V$, and $RL_{c, Q}^\ast(H_X)_{U\cap V}$ respectively. These ideals of $RL_{c, Q}^\ast(H_{X})$ satisfy \[ RL_{c, Q}^\ast(H_X)_U + RL_{c, Q}^\ast(H_X)_V = RL_{c, Q}^\ast(H_X), \] \[ RL_{c, Q}^\ast(H_X)_U \cap RL_{c, Q}^\ast(H_X)_V = RL_{c, Q}^\ast(H_X)_{U\cap V}. \] Corresponding to these ideals, we have a Mayer--Vietoris sequence (see \cite[Proposition 2.7.15]{WY2020}), which is the desired one. The naturality can be checked directly. This argument is essentially the same as the one given in \cite[Proposition 9.4.13]{WY2020}. (5) We recall a proof of the homotopy invariance in detail for later use. We follow the argument in \cite[Proposition 6.4.14]{WY2020}. See also the proof of \cite[Proposition 3.7]{Yu1997}. For any locally compact space $X$ and for $r \in[0, 1]$, let \[ f_r\colon X\times [0, 1] \to X\times [0, 1], \,\, (x, t)\mapsto (x, (1-r)t). \] It suffices to show the following claim: \[ \bD_\ast(f_1) = \bD_\ast(f_0) (=\mathrm{Id}). \] We let $Z=[0, 1]\cap \mathbb{Q}$, regarded as a discrete set, and consider the $l^2$-space $l^2(Z)$. We fix a separable infinite-dimensional Hilbert space $H$ with a decomposition \[ H=\bigoplus_{z\in Z} H_z \] with each $H_z$ infinite-dimensional. Let $W_z\colon H\to H$ be an isometry with range $H_z$. For an ample $X$-module $H_X$, we use the following ample $X\times [0, 1]$-module \[ H_{X\times[0,1]}=H_X\otimes l^2(Z)\otimes H \] where $\phi \in C_0(X)$ acts as \[ \phi (u\otimes \delta_z \otimes v) = \phi u \otimes \delta_z\otimes v, \] and $\phi \in C[0, 1]$ acts as \[ \phi (u\otimes \delta_z \otimes v) = u \otimes \phi(z) \delta_z\otimes v.
\] For any $r\in Z$, we define an isometry $W(r)$ on $H_{X\times[0,1]}$ by \[ W(r)\colon u\otimes \delta_z \otimes v \mapsto u\otimes \delta_{(1-r)z} \otimes W_zv. \] The isometries $W(r)$ satisfy the following: \begin{enumerate} \item The support $\mathrm{supp}(W(r))$ is the graph of $f_r\colon X\times [0, 1] \to X\times [0, 1]$. That is, \[ \mathrm{supp}(W(r)) = \{ \, \left((x, (1-r)s), (x, s) \right) \mid (x, s) \in X\times[0, 1] \}. \] \item For $T\in \mathfrak{L}(H_{X\times[0,1]})$, if $((x_1, s_1), (x_2, s_2)) \in \mathrm{supp}\left( W(r)TW(r)^\ast \right)$, then \[ \left\{ \begin{array}{cc} ((x_1, (1-r)^{-1}s_1), (x_2, (1-r)^{-1}s_2)) \in \mathrm{supp}(T) & (r\neq 1) \\ ((x_1, s_0), (x_2, s'_0)) \in \mathrm{supp}(T) \,\,\, \text{for some $s_0, s_0'$ in $[0,1]$ and $s_1=s_2=0$} & (r=1). \end{array} \right. \] \end{enumerate} For $n \in \mathbb{N}_{>0}\cup \{\infty\}$, we define a uniformly continuous family $(V_{n,t}\colon H_{X\times[0,1]} \to H_{X\times[0,1]})_{t\in[1, \infty)}$ of isometries. For $n=\infty$, we set \[ V_{\infty,t}=W(0) \,\,\, (1\leq t< \infty). \] For $n\in \mathbb{N}$, we set \[ V_{n,t}=\left\{ \begin{array}{cc} W(0) & (1\leq t < n), \\ W(1) & (2n \leq t < \infty). \end{array} \right. \] For $j\in \{0, 1, \cdots, n-1\}$ and $n+j\leq t \leq n+j+1$, we set \[ V_{n,t}= |\cos(\frac{\pi}{2}(t-n-j))|W(\frac{j}{n}) + |\sin(\frac{\pi}{2}(t-n-j))|W(\frac{j+1}{n}) \] on $u\otimes \delta_z \otimes v$ for $z\neq0$, and \[ V_{n,t}(u\otimes \delta_0 \otimes v)= u\otimes \delta_0 \otimes W_0v. \] The isometries $V_{n,t}$ satisfy the following. \begin{enumerate} \item The conjugation by $(V_{1,t})_{1\leq t< \infty}$ on $RL_c^\ast(H_{X\times[0,1]})$ induces the same map as $\bD_\ast(f_1)$ on the K-theory groups. \item The conjugation by $(V_{\infty,t})_{1\leq t< \infty}$ on $RL_c^\ast(H_{X\times[0,1]})$ induces the same map as $\bD_\ast(f_0)(=\mathrm{Id})$ on the K-theory groups.
\item For any $1\leq t< \infty$, $V_{n,t}=V_{\infty,t}=W(0)$ for almost all $n$. \item For any metric $d_X$ on $X$ and for the standard metric $d_{[0, 1]}$, define a metric $d_{X\times[0, 1]}=d_X + d_{[0, 1]}$ on $X\times [0,1]$. With respect to $d_{X\times[0, 1]}$, the propagation $\mathrm{prop}(V_{n+1,t}V_{n,t}^\ast) \to 0$ uniformly in $n$ as $t\to \infty$. \end{enumerate} Let $\mathcal{A}=RL_{c}^\ast(H_{X\times[0, 1]})$ and $\mathcal{A}^\infty= RL_c^\ast(H_{X\times[0, 1]}\otimes l^2(\mathbb{N}))$ where $H_{X\times[0, 1]}\otimes l^2(\mathbb{N})$ is naturally viewed as an $X\times[0,1]$-module. Let $U_n$ be the isometry from $H_{X\times[0, 1]}$ to $H_{X\times[0, 1]}\otimes l^2(\mathbb{N})$ defined as $v\mapsto v\otimes \delta_n$ for $v$ in $H_{X\times[0, 1]}$. It follows from property (2) of $W(r)$ and from the definition of $V_{n,t}$ not only that for any $n\in \mathbb{N}_{>0}\cup \{\infty\}$, \[ \mathrm{Ad}_{V_{n,t}}\colon \mathcal{A} \to \mathcal{A} \] is well-defined but also that the diagonal map \[ \alpha = \sum_{1\leq n < \infty} \mathrm{Ad}_{U_nV_{n,t}} \] induces a $\ast$-homomorphism from $\mathcal{A}$ to the multiplier algebra $M(\mathcal{A}^\infty)$. Similarly, both \[ \beta= \sum_{1\leq n < \infty} \mathrm{Ad}_{U_nV_{n+1,t}} , \,\,\, \gamma= \sum_{1\leq n < \infty} \mathrm{Ad}_{U_nV_{\infty,t}} \] map $\mathcal{A}$ to $M(\mathcal{A}^\infty)$. Since for any $1\leq t< +\infty$, $V_{n,t}=V_{\infty,t}$ for almost all $n$, for any $T_t$ in $\mathcal{A}$, both pairs \[ (\alpha, \gamma)(T_t) = \left( \alpha(T_t), \gamma(T_t) \right), \,\, (\beta, \gamma)(T_t) = \left( \beta(T_t), \gamma(T_t) \right) \] of elements in $M(\mathcal{A}^\infty)$ define elements in the double \[ D=M(\mathcal{A}^\infty)\oplus_{\mathcal{A}^\infty}M(\mathcal{A}^\infty) = \{ (a_1, a_2) \in M(\mathcal{A}^\infty)\oplus M(\mathcal{A}^\infty) \mid a_1-a_2 \in \mathcal{A}^\infty \}.
\] Note that the pairs $(\alpha, \gamma)(T_t)$, $(\beta, \gamma)(T_t)$ define elements in the subalgebra \[ C=\{ (a_1, a_2)\in D \mid a_2= \sum_{1\leq n < \infty} \mathrm{Ad}_{U_nV_{\infty, t}}(T_t), T_t \in \mathcal{A} \} \] of $D$. The $\ast$-homomorphisms $(\alpha, \gamma)$ and $(\beta, \gamma)$, viewed as maps from $\mathcal{A}$ to $C$ and hence as maps from $\mathcal{A}$ to $D$, induce the same map on the K-theory groups. This is because $(\alpha, \gamma)$ and $(\beta, \gamma)$ are conjugate in $C$ by a partial isometry $w=(w_1, w_2)$ in the multiplier algebra $M(C)$ of $C$ where \[ w_1= \sum_{1\leq n < \infty} U_n V_{n+1,t}V_{n,t}^\ast U_n^\ast, \,\,w_2= \sum_{1\leq n < \infty} U_n V_{\infty, t}V_{\infty, t}^\ast U_n^\ast. \] Indeed, it can be directly checked that \[ w(\alpha(a), \gamma(a))w^\ast = (\beta(a), \gamma(a)), \,\,\, (\alpha(a), \gamma(a))w^\ast w = (\alpha(a), \gamma(a)) \] for $a\in \mathcal{A}$. The fact that $w \in M(C)$ can be checked directly using $w_1, w_2 \in M(\mathcal{A}^\infty)$ and \[ \sum_{1\leq n<\infty} U_n \left( V_{n+1,t}V_{n,t}^\ast-V_{\infty,t} V_{\infty,t}^\ast \right) T_t U_n^\ast \in \mathcal{A}^\infty \] for any $T_t$ in $\mathcal{A}$. This follows from properties (3) and (4) of $V_{n,t}$. Now we have \[ (\alpha, \gamma)_\ast = (\beta, \gamma)_\ast \colon K_\ast(\mathcal{A}) \to K_\ast(D). \] The unilateral shift $U$ on $l^2(\mathbb{N})$ defines an isometry $(U, U)$ in $M(D)$ and using this, we see that the $\ast$-homomorphism \[ T_t \mapsto \left( \mathrm{Ad}_{U_1V_{1,t}}(T_t), \mathrm{Ad}_{U_1V_{\infty, t}}(T_t) \right) \] from $\mathcal{A}$ to $D=M(\mathcal{A}^\infty)\oplus_{\mathcal{A}^\infty} M(\mathcal{A}^\infty)$ is zero on the K-theory groups. From here, it is routine to see that the two maps $\mathrm{Ad}_{V_{1,t}}$, $\mathrm{Ad}_{V_{\infty,t}}$ from $\mathcal{A}$ to $\mathcal{A}$ induce the same map on the K-theory groups. Thus, we have $\bD_\ast(f_0)=\bD_\ast(f_1)$.
\end{proof} \begin{remark} What is shown in \cite[Theorem 3.4]{WY2021} implies that the canonical inclusion induces an isomorphism \[ K_\ast(RL^*_u(H_X)) \cong K_\ast(RL^*_c(H_X)) \] for any ample $X$-module $H_X$. Alternatively, we can see this by noting that Theorem \ref{thm_homology} also holds if we use $RL^*_u(H_X)$ in place of $RL^*_c(H_X)$. The listed properties of the two functors can be used to deduce the isomorphisms for general $X$ from the case when $X$ is a point. \end{remark} \section{Crossed product of representable localization algebra} Let $G$ be a second countable, locally compact group (locally compact group, in short). We fix a left Haar measure $\mu_G$ on $G$ and use it to define $L^1(G)$ and $L^2(G)$. We have for $f\in L^1(G)$, \[ \int_{s\in G} f(s^{-1})\Delta(s)^{-1}d\mu_G(s)= \int_{s\in G} f(s)d\mu_G(s), \] \[ \int_{s\in G} f(st)\Delta(t)d\mu_G(s)= \int_{s\in G} f(s)d\mu_G(s), \] where $\Delta$ is the modular function (these follow from the standard behavior of $\mu_G$ under inversion and right translation). A $G$-Hilbert space is a separable Hilbert space $H$ equipped with a unitary representation $g\mapsto u_g$ of $G$. We write $g$ for $u_g$ when there is no confusion. The representation is continuous with respect to the strong topology, i.e.\ $G\times H \ni (g, v)\mapsto gv \in H$ is continuous with respect to the norm topology on $H$. A $C^*$-algebra $A$ equipped with an action of $G$ by $\ast$-automorphisms is called a $G$-$C^*$-algebra if all elements in $A$ are $G$-continuous. Here, $a\in A$ is $G$-continuous if $G \ni g\mapsto g(a) \in A$ is continuous in norm. A representation of a $G$-$C^*$-algebra is always assumed to be $G$-equivariant. A proper $G$-space $X$ is a locally compact space $X$ equipped with a continuous action of $G$ by homeomorphisms such that for any compact subset $X_0$ of $X$, $gX_0 \cap X_0 =\emptyset$ for any $g$ in $G$ outside a compact subset $K$ of $G$ (which depends on $X_0$). This is the same as saying that the map $G\times X \ni (g, x) \mapsto (gx, x) \in X\times X$ is proper.
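For instance (a standard example, included only for illustration), the action of $G$ on itself by left translation is proper: if $X_0$ is a compact subset of $G$ and $gX_0\cap X_0\neq\emptyset$, then \[ g \in X_0X_0^{-1}, \] and $X_0X_0^{-1}$ is compact.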
A $G$-$C_0(X)$-algebra is a $G$-$C^*$-algebra $A$ equipped with a non-degenerate, central representation of the $G$-$C^*$-algebra $C_0(X)$ to the multiplier algebra $M(A)$ of $A$. A $G$-$C^*$-algebra is proper if it is a $G$-$C_0(X)$-algebra for some proper $G$-space $X$. Let $X$ be a proper $G$-space. An $X$-$G$-module is a $G$-Hilbert space equipped with a non-degenerate representation of the $G$-$C^*$-algebra $C_0(X)$. Let $H_X$ be an $X$-$G$-module. There is a natural $G$-action on the algebras $RL_{0}^\ast(H_X)$, $RL^\ast_c(H_X)$, $RL^\ast_u(H_X)$, $RL_{c, Q}^\ast(H_X)$, $RL_{u, Q}^\ast(H_X)$, which is not necessarily continuous except on $RL_{0}^\ast(H_X)$. The subalgebra $RL^\ast_c(H_X)_{\mathrm{Gcont}}$ of $RL^\ast_c(H_X)$ consisting of $G$-continuous elements is naturally a $G$-$C^*$-algebra. For discrete $G$, there is no difference between the two. To make our notations clean, {\bf from now on, we will use the notation $RL^\ast_c(H_X)$ to denote the $G$-$C^*$-algebra $RL^\ast_c(H_X)_{\mathrm{Gcont}}$}. This definition implicitly depends on the group $G$ but it should be clear from the context. The same remark applies to the algebras $RL^\ast_u(H_X)$, $RL_{c, Q}^\ast(H_X)$, $RL_{u, Q}^\ast(H_X)$. We shall mostly study $RL^\ast_c(H_X)$ but the entire discussion and results on $RL^\ast_c(H_X)$ have obvious analogues for $RL_{u}^\ast(H_X)$. We have the following short exact sequence of $G$-$C^*$-algebras, \begin{equation}\label{eq_seqG} \xymatrix{ 0 \ar[r]^-{} & RL_0^\ast(H_X) \ar[r]^-{} & RL_c^\ast(H_X) \ar[r]^-{} & RL_{c, Q}^\ast(H_X) \ar[r]^-{} & 0. } \end{equation} The natural representations of $C_0(X)$ to $M(RL_{c}^\ast(H_X))$ and to $M(RL_{c, Q}^\ast(H_X))$ are $G$-equivariant. \begin{proposition} The $G$-$C^*$-algebra $RL_{c, Q}^\ast(H_X)$ is naturally a $G$-$C_0(X)$-algebra. That is, the natural representation of the $G$-$C^*$-algebra $C_0(X)$ to the multiplier algebra $M(RL_{c, Q}^\ast(H_X))$ is non-degenerate and central.
\end{proposition} Since $RL_{c, Q}^\ast(H_X)$ is a proper $G$-$C^*$-algebra, the reduced crossed product $RL_{c, Q}^\ast(H_X)\rtimes_rG$ and the maximal crossed product $RL_{c, Q}^\ast(H_X)\rtimes_{\mathrm{max}}G$ coincide \cite[Theorem 5.3]{AD02}. \begin{lemma}\label{lem_shortG} The short exact sequence \eqref{eq_seqG} of $G$-$C^*$-algebras descends to the short exact sequence of reduced crossed product algebras, \[ \xymatrix{ 0 \ar[r]^-{} & RL_0^\ast(H_X)\rtimes_rG \ar[r]^-{} & RL_c^\ast(H_X)\rtimes_rG \ar[r]^-{} & RL_{c, Q}^\ast(H_X)\rtimes_rG \ar[r]^-{} & 0. } \] \end{lemma} \begin{proof} The exactness can be seen from the diagram \begin{equation*} \xymatrix{ RL_c^\ast(H_X)\rtimes_{\mathrm{max}}G / RL_0^\ast(H_X)\rtimes_{\mathrm{max}}G \ar[d]^-{} \ar[r]^-{\cong} & RL_{c, Q}^\ast(H_X)\rtimes_{\mathrm{max}}G \ar[d]^-{=} \\ RL_c^\ast(H_X)\rtimes_rG / RL_0^\ast(H_X)\rtimes_rG \ar[r]^-{} & RL_{c, Q}^\ast(H_X)\rtimes_rG, } \end{equation*} where the left vertical map and the bottom horizontal map are surjective; their injectivity then follows from the commutativity of the diagram. \end{proof} \begin{proposition}\label{prop_quotient_isomG1} The quotient map from $RL_c^\ast(H_X)\rtimes_rG$ to $RL_{c, Q}^\ast(H_X)\rtimes_rG$ induces an isomorphism on the K-theory groups. The same holds for the maximal crossed product. \end{proposition} \begin{proof} Note that \[ RL_0^\ast(H_X)\rtimes_rG = C_0([1, \infty), \mathfrak{K}(H_X))\rtimes_rG \cong C_0([1, \infty), \mathfrak{K}(H_X))\otimes C^*_r(G) \] and \[ RL_0^\ast(H_X)\rtimes_{\mathrm{max}}G = C_0([1, \infty), \mathfrak{K}(H_X))\rtimes_{\mathrm{max}}G \cong C_0([1, \infty), \mathfrak{K}(H_X))\otimes_{\mathrm{max}} C^*_{\mathrm{max}}(G) \] since the $G$-action on $RL_0^\ast(H_X)=C_0([1, \infty), \mathfrak{K}(H_X))$ is inner. We see that both $RL_0^\ast(H_X)\rtimes_rG$ and $RL_0^\ast(H_X)\rtimes_{\mathrm{max}}G$ have zero K-theory groups. The claim follows from this and Lemma \ref{lem_shortG}.
\end{proof} \begin{proposition}\label{prop_quotient_isomG} The quotient map from $RL_c^\ast(H_X)\rtimes_{\mathrm{max}}G$ to $RL_{c}^\ast(H_X)\rtimes_rG$ induces an isomorphism on the K-theory groups. \end{proposition} \begin{proof} This follows from the following commutative diagram \begin{equation*} \xymatrix{ K_\ast( RL_c^\ast(H_X)\rtimes_{\mathrm{max}}G )\ar[r]^-{\cong} \ar[d]^-{} & K_\ast( RL_{c, Q}^\ast(H_X)\rtimes_{\mathrm{max}}G ) \ar[d]^-{=} \\ K_\ast( RL_c^\ast(H_X) \rtimes_rG ) \ar[r]^-{\cong} & K_\ast( RL_{c, Q}^\ast(H_X)\rtimes_rG ) . } \end{equation*} \end{proof} Let $H$ be an open subgroup of $G$. Let $Y$ be a proper $H$-space and consider the balanced product \[ G\times _H Y = \left( G\times Y \right) / H \] where the right $H$-action on $G\times Y$ is given by $(g, y)\mapsto (gh, h^{-1}y)$. The balanced product $G\times _H Y $ is a proper $G$-space via left translation. The $H$-space $Y$ is naturally an $H$-invariant open subset of $G\times _H Y$. \begin{proposition}\label{prop_compactopen} Let $H$ be an open subgroup of $G$, $Y$ be a proper $H$-space and $X$ be the balanced product $G\times _H Y$. For any $X$-$G$-module $H_X$, let $H_Y=\chi_YH_X$, which we naturally view as a $Y$-$H$-module. Then, the natural inclusion \[ RL^*_c(H_Y)\rtimes_rH \to RL^*_c(H_X)\rtimes_rG \] induces an isomorphism on K-theory groups. \end{proposition} \begin{proof} Note that the natural inclusion $RL^*_c(H_Y) \to RL^*_c(H_X)$ is well-defined since $H$ is open, so that $H$-continuous elements are automatically $G$-continuous. In view of Proposition \ref{prop_quotient_isomG1}, it is enough to show that the natural inclusion \[ RL^*_{c, Q}(H_Y)\rtimes_rH \to RL^*_{c, Q}(H_X)\rtimes_rG \] induces an isomorphism on K-theory groups. Note that $RL^*_{c, Q}(H_X)$ is a $G$-$C_0(G/H)$-algebra whose fiber at the coset $H$ is naturally the $H$-$C^*$-algebra $RL^*_{c, Q}(H_Y)$. Note that since $G/H$ is discrete, the fiber is not only a quotient but also a subalgebra.
In general, for any $G$-$C_0(G/ H)$-algebra $A$, with fiber $A_0$ at $H$, the natural inclusion \[ A_0\rtimes_rH \to A\rtimes_rG \] induces an isomorphism on K-theory since we have an isomorphism $A\rtimes_rG\cong (A_0\rtimes_rH)\otimes \mathfrak{K}(l^2(G/H))$ and the inclusion corresponds to the corner embedding of $A_0\rtimes_rH$ into $(A_0\rtimes_rH)\otimes \mathfrak{K}(l^2(G/H))$. This is just a special case of a more general Morita equivalence between $(\mathrm{Ind}_H^GA_0)\rtimes_rG$ and $A_0\rtimes_rH$ which holds for any closed subgroup $H$ of $G$ and for any $H$-$C^*$-algebra $A_0$ \cite[Theorem 17]{Green78}. \end{proof} \section{Universal $X$-$G$-module} For discrete $G$, there is a notion of an ample $X$-$G$-module. An $X$-$G$-module $H_X$ is ample (as an $X$-$G$-module) if it is ample as an $X$-module and if it is locally free in the sense that for any finite subgroup $F$ of $G$ and for any $F$-invariant Borel subset $E$ of $X$, $\chi_EH_X$ is $F$-equivariantly isomorphic to $l^2(F)\otimes H_0$ for some Hilbert space $H_0$ where $l^2(F)$ is equipped with the left-regular representation of $F$. For a locally compact group $G$, the exact analogue would not give us a good definition. For example, for $G=\mathbb{R}$ and $X=\mathbb{R}$, the $X$-$G$-module $L^2(\mathbb{R})$ would be ample in such a definition, although it is rather the smallest $X$-$G$-module. In this article, we refrain from defining an ample $X$-$G$-module for locally compact $G$ but we will define a substitute, a universal $X$-$G$-module (see below). Let $X, Y$ be proper $G$-spaces and $H_X, H_Y$ be an $X$-$G$-module and a $Y$-$G$-module respectively. For a $G$-equivariant continuous map $f\colon X\to Y$, a family of isometries $(V_t\colon H_X \to H_Y)_{t\in[1, \infty)}$ is called an equivariant continuous cover of $f$ (\cite[Definition 4.5.11]{WY2020}) if it is a continuous cover of $f$ (ignoring the $G$-actions) and if $V_t$ is $G$-equivariant for all $t\geq1$.
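For example (paralleling the non-equivariant discussion above), if $V\colon H_X\to H_Y$ is a $G$-equivariant isometry whose support is contained in the graph of $f$, that is, \[ \mathrm{supp}(V)\subset \{\, (f(x), x) \mid x\in X \,\}, \] then the constant family $V_t=V$ is an equivariant continuous cover of $f$.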
For discrete $G$, such an equivariant continuous cover of $f$ exists whenever $H_Y$ is an ample $Y$-$G$-module \cite[Proposition 4.5.12]{WY2020}. \begin{lemma}\label{lem_coveridG} Let $f\colon X\to Y$ be a $G$-equivariant continuous map of proper $G$-spaces. A family of isometries $(V_t\colon H_X \to H_Y)_{t\in[1, \infty)}$ is an equivariant continuous cover of $f$ if and only if it is an equivariant continuous cover of the identity on $Y$ by regarding $H_X$ as a $Y$-$G$-module $(H_X)_Y$ via $f^*\colon C_0(Y)\to C_b(X)$. \end{lemma} \begin{proof} The proof of Lemma \ref{lem_coverid} works verbatim. \end{proof} \begin{definition}\label{def_universalXG} We say that an $X$-$G$-module $H_X$ is a universal $X$-$G$-module if there is an equivariant continuous cover $(V_t\colon H^0_X \to H_X)_{t\in[1, \infty)}$ of the identity map on $X$ for any $X$-$G$-module $H^0_X$. \end{definition} Hence, for discrete $G$, any ample $X$-$G$-module is universal. \begin{lemma} Let $H_Y$ be a universal $Y$-$G$-module. Then, for any $G$-equivariant continuous map $f\colon X\to Y$ and for any $X$-$G$-module $H_X$, there is an equivariant continuous cover $(V_t\colon H_X \to H_Y)_{t\in[1, \infty)}$ of $f$. \end{lemma} \begin{proof} This follows from Lemma \ref{lem_coveridG} and from the definition of a universal $Y$-$G$-module. \end{proof} We will show that a universal $X$-$G$-module exists for any locally compact group $G$ and for any proper $G$-space $X$ but we first do some preparation. \begin{definition}\label{def_amplify} Let $X$ be a proper $G$-space. Let $H_X$ be an $X$-module. We define an $X$-$G$-module \[ \bar H_X=H_X\otimes L^2(G) \cong L^2(G, H_X) \] where a representation of $G$ on $\bar H_X$ is given by the left-regular representation on $L^2(G)$, \[ (gf)(h)=f(g^{-1}h) \] for $g\in G$ and $(f\colon G\ni h\mapsto f(h) \in H_X)\in L^2(G, H_X)$ and where the representation of $C_0(X)$ on $\bar H_X$ is given by \[ (\phi f)(h)=h^{-1}(\phi) f(h) \] for $\phi \in C_0(X)$. 
\end{definition} \begin{definition} We say that an $X$-$G$-module $H_X$ is very ample (as an $X$-$G$-module) if it is, up to isomorphism, of the form $\bar H_X^0$ for some ample $X$-module $H_X^0$. \end{definition} \begin{lemma} Let $H_X$ be a very ample $X$-$G$-module. Let $Y$ be any $G$-invariant closed subset of $X$ which is the closure of an open subset of $X$. Then, $\chi_YH_X$ is a very ample $Y$-$G$-module. Similarly, if $U$ is any $G$-invariant open subset of $X$, then $\chi_UH_X$ is a very ample $U$-$G$-module. \end{lemma} \begin{proof} If $H_X=\bar H_X^0=H^0_X\otimes L^2(G)$ for an ample $X$-module $H^0_X$, it is easy to see that $\chi_YH_X=\bar H^0_Y$ where $H^0_Y=\chi_YH^0_X$. Since $Y$ is the closure of an open subset of $X$, $H^0_Y$ is an ample $Y$-module. The claim now follows. The case of a $G$-invariant open subset is proved in the same way. \end{proof} Recall that for a proper $G$-space $X$, $c\in C_b(X)$ is a cut-off function on $X$ if $c\geq0$, $\int_Gg(c)^2d\mu_G(g)=1$, and for any compact subset $X_0$ of $X$, $\mathrm{supp}(c)\cap gX_0=\emptyset$ for $g$ outside a compact subset of $G$. When $X$ is $G$-compact, $c$ is (automatically) compactly supported. We fix any cut-off function $c$ on $X$. For any $X$-$G$-module $H_X$, an $X$-$G$-module $\bar H_X$ is defined as in Definition \ref{def_amplify} by regarding $H_X$ as an $X$-module. We define an isometry \[ V_c\colon H_X\to \bar H_X=H_X\otimes L^2(G) \] by sending $v\in H_X$ to \begin{equation}\label{eq_V_c} V_c(v)(h) = cu_h^{-1}v\,\,\, (\text{$h\in G$}), \end{equation} where $u_h$ is the unitary operator on $H_X$ corresponding to $h\in G$. \begin{lemma} The isometry $V_c\colon H_X \to \bar H_X$ is $G$-equivariant and intertwines the representations of $C_0(X)$. That is, $V_c$ is a strict cover of the identity map on $X$. \end{lemma} \begin{proof} This can be checked directly. \end{proof} \begin{lemma}\label{lem_universal} Let $H_X$ be an ample $X$-module.
Let $\bar H_X$ be the $X$-$G$-module as in Definition \ref{def_amplify}. Then, for any $X$-$G$-module $H^0_X$, and for any open neighborhood $U$ of the diagonal in $X^+\times X^+$, there is a $G$-equivariant isometry $V$ from $H^0_X$ to $\bar H_X$ such that $\mathrm{supp}(V) \subset U$. \end{lemma} \begin{proof} Fix any metric $d$ on $X^+$. It is enough to find, for any compact subset $X_0$ of $X$ and $\epsilon>0$, a $G$-equivariant isometry $V$ from $H^0_X$ to $\bar H_X$ such that the support $\mathrm{supp}(V)$ is contained in \[ \{ (x, y) \in X_0\times X \mid d(x, y)<\epsilon \} \cup \{ (x, y) \in X\times X_0 \mid d(x, y)<\epsilon \} \cup (X-X_0) \times (X-X_0). \] We first explain how we can assume $X$ is $G$-compact for this. We let $U$ be any relatively compact, open subset of $X$ containing $X_0$ and let $Y$ be the closure of $GU$, which is a $G$-compact closed subspace of $X$. Suppose that we have found a $G$-equivariant isometry $V$ from $H^0_Y=\chi_YH^0_X$ to $\bar H_Y =\chi_Y\bar H_X$ (where $H_Y=\chi_YH_X$); then we obtain the desired isometry $V$ from $H^0_X$ to $\bar H_X$ by adding $V$ to any $G$-equivariant isometry from the complement $H^0_{X-Y}=\chi_{X-Y}H^0_X$ to $\chi_{X-Y}\bar H_{X}=\bar H_{X-Y}$ (where $H_{X-Y}=\chi_{X-Y}H_X$). Such an (arbitrary) $G$-equivariant isometry from $H^0_{X-Y}$ to $\bar H_{X-Y}$ exists; for example, we can take the composition of $V_c\colon H^{0}_{X-Y} \to \bar H^0_{X-Y}$ and the amplification \[ W\otimes1 \colon H^0_{X-Y}\otimes L^2(G) \to H_{X-Y}\otimes L^2(G) \] of any isometry $W\colon H^0_{X-Y} \to H_{X-Y}$. Now, we assume $X$ is $G$-compact. Let $\epsilon>0$, $X_0$ be a compact subset of $X$ and $X_1$ be any compact neighborhood of $X_0$. Let $A$ be the support of $c$, which is compact (here we used $G$-compactness), and $B$ be any compact neighborhood of $A$ in $X$. We let $K$ be a compact subset of $G$ such that $X_1\cap gB=\emptyset$ for $g\in G\backslash K$. Let $X_2$ be any compact neighborhood of $KB$.
Note that we have $X_2\supset KB \supset X_1 \supset X_0$. Since \[ C= K^{-1} \{(x,y)\in X_2\times X_2 \mid d(x, y)\geq \epsilon/2 \} \] \[ = \{(k^{-1}x,k^{-1}y)\in X \times X \mid k\in K, (x, y) \in X_2\times X_2, d(x, y)\geq \epsilon/2 \} \] is a compact subset of $K^{-1}X_2\times K^{-1}X_2$ which does not intersect the diagonal, there is $\delta>0$ such that any $(x, y)\in C$ satisfies $d(x, y)\geq \delta$. We let $V_0\colon H_X^0 \to H_X$ be an isometry so that the following is satisfied (here we used the ampleness of $H_X$): \begin{enumerate} \item $V_0\chi_A = \chi_BV_0\chi_A$. \item $\mathrm{prop}(V_0)<\delta$. \end{enumerate} Let $V_c\colon H^0_X \to \bar H^0_X$ be as in \eqref{eq_V_c} and let $\bar V_0\colon \bar H^0_X \to \bar H_X$ be given by \[ \bar V_0=V_0\otimes 1\colon H^0_X\otimes L^2(G)\to H_X\otimes L^2(G). \] We show that a $G$-equivariant isometry $V=\bar V_0V_c\colon H_X^0 \to \bar H_X$ satisfies the required support condition. More explicitly, the isometry $V$ sends $v\in H_X^0$ to \[ (V(v)\colon h \mapsto V(v)(h)= V_0cu_h^{-1}v\in H_X) \in L^2(G, H_X) \] where $u_h$ is the unitary on $H^0_X$ corresponding to $h\in G$. We see that for any Borel functions $\phi_1, \phi_2$ on $X$, $\phi_1 V \phi_2=0$ if \[ h^{-1}(\phi_1)V_0ch^{-1}(\phi_2)=0 \,\,\, \text{in $\mathfrak{L}(H_X^0, H_X)$ for all $h\in G$}. \] We have \[ h^{-1}(\phi_1)V_0ch^{-1}(\phi_2) = h^{-1}(\phi_1)V_0\chi_Ach^{-1}(\phi_2) = h^{-1}(\phi_1)\chi_BV_0\chi_Ach^{-1}(\phi_2), \] so if either of $\phi_1$, $\phi_2$ has support contained in $X_1$, this is zero unless $h\in K$. Moreover, if $\mathrm{supp}(\phi_1)\subset X_1$ (resp. $\mathrm{supp}(\phi_2)\subset X_1$), then this is zero for all $h\in G$ if $\mathrm{supp}(\phi_2)\cap KB=\emptyset$ (resp. $\mathrm{supp}(\phi_1)\cap KB=\emptyset$). In particular, $X_0\times (X-KB)$ and $(X-KB)\times X_0$ are disjoint from $\mathrm{supp}(V)$. Now, let $(x, y)\in X_0\times X$ be such that $d(x, y)\geq \epsilon$.
We show that $(x, y) \notin \mathrm{supp}(V)$. The case when $y\in (X-KB)$ is already proved, so we assume $y\in KB$. Then, there are open sets $U_x\ni x$ in $X_1$ and $U_y\ni y$ in $X_2$ such that the distance $d(U_x, U_y)\geq \epsilon/2$. It follows from the definition of $\delta>0$ that $d(k^{-1}U_x, k^{-1}U_y)\geq \delta$ for any $k\in K$. It follows that $\chi_{U_x}V\chi_{U_y}=0$, i.e. \[ \chi_{h^{-1}U_x}V_0c\chi_{h^{-1}U_y}=0 \,\,\, \text{for all $h\in G$}. \] For $h \in G\backslash K$, this is because $U_x\subset X_1$, and for $h\in K$, this is because $\mathrm{prop}(V_0)<\delta$. Similarly, we can show that $(x, y) \notin \mathrm{supp}(V)$ for any $(x, y)\in X\times X_0$ such that $d(x, y)\geq \epsilon$. \end{proof} \begin{proposition}\label{prop_universal} For any second countable, locally compact group $G$ and for any proper $G$-space $X$, a universal $X$-$G$-module exists. In fact, for any ample $X$-module $H_X$, the $X$-$G$-module $\bar H_X$ (as in Definition \ref{def_amplify}) is a universal $X$-$G$-module, i.e. any very ample $X$-$G$-module is universal. \end{proposition} \begin{proof} Let $H_X$ be any ample $X$-module and let $\bar H_X$ be the $X$-$G$-module as in Definition \ref{def_amplify}. For any $X$-$G$-module $H^0_X$, it is enough to find a uniformly continuous family $(V_t\colon H^0_X\to \bar H_X)_{t\in[1,\infty)}$ of $G$-equivariant isometries such that $\mathrm{prop}(V_t)\to 0$ as $t \to \infty$ for a fixed metric $d$ on $X^+$. Lemma \ref{lem_universal} gives us, for any $n>0$, a $G$-equivariant isometry $V_n\colon H^0_X\to \bar H_X$ with $\mathrm{prop}(V_n)<1/n$. It is easy to see from the construction (using the ampleness of $H_X$) that we can arrange $V_n$ and $V_{n+1}$ to have mutually orthogonal ranges for any $n>0$. For $t\in (n, n+1)$, we let \[ V_t= \sqrt{n+1-t} V_n + \sqrt{t-n} V_{n+1}. \] The family $(V_t\colon H^0_X\to \bar H_X)_{t\in[1,\infty)}$ is an equivariant continuous cover of the identity.
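For completeness, let us verify that each interpolated $V_t$ is again an isometry: the mutual orthogonality of the ranges of $V_n$ and $V_{n+1}$ gives $V_n^\ast V_{n+1}=0$, so for $t\in (n, n+1)$,
\[
V_t^\ast V_t=(n+1-t)V_n^\ast V_n+(t-n)V_{n+1}^\ast V_{n+1}=(n+1-t)+(t-n)=1.
\]
Moreover, $V_t$ is $G$-equivariant, being a linear combination of $G$-equivariant isometries, and $\mathrm{supp}(V_t)\subset \mathrm{supp}(V_n)\cup \mathrm{supp}(V_{n+1})$, so that $\mathrm{prop}(V_t)<1/n$.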
\end{proof} \section{Representable equivariant K-homology} Let $X, Y$ be proper $G$-spaces and $H_X, H_Y$ be an $X$-$G$-module and a $Y$-$G$-module respectively. Given a $G$-equivariant continuous cover $(V_t\colon H_X \to H_Y)_{t\in[1, \infty)}$ of a $G$-equivariant continuous map $f\colon X\to Y$, the conjugation by $V_t$ defines a $G$-equivariant $\ast$-homomorphism \[ \mathrm{Ad}_{V_t} \colon RL_{c}^\ast(H_X) \to RL_{c}^\ast(H_Y). \] Thus, it defines a $\ast$-homomorphism \[ \mathrm{Ad}_{V_t} \colon RL_{c}^\ast(H_X)\rtimes_rG \to RL_{c}^\ast(H_Y)\rtimes_rG \] on the reduced crossed product. This $\ast$-homomorphism depends on the cover $V_t$ of $f$, but the induced map on their K-theory groups is independent of the choice of $V_t$. This is because, given two continuous covers $(V_{i, t}\colon H_X\to H_Y)_{t\in [1,\infty)}$ $(i=1,2)$ of $f$, the two maps \[ a \mapsto \begin{bmatrix} \mathrm{Ad}_{V_{1,t}}(a) & 0 \\ 0 & 0 \end{bmatrix}, \,\,\, a\mapsto \begin{bmatrix} 0 & 0 \\ 0 & \mathrm{Ad}_{V_{2,t}}(a) \end{bmatrix} \] from $RL_{c}^\ast(H_X)\rtimes_rG$ to the matrix algebra $M_2(RL_{c}^\ast(H_Y)\rtimes_rG)$ are conjugate to each other by the unitary \[ \begin{bmatrix} 1-V_{1,t}V_{1,t}^\ast & V_{1,t}V_{2,t}^\ast \\ V_{2,t}V_{1,t}^\ast & 1-V_{2,t}V_{2,t}^\ast \end{bmatrix} \] in the $2\times2$-matrix algebra $M_2\left( M(RL_{c}^\ast(H_Y))\right) \subset M_2\left( M(RL_{c}^\ast(H_Y)\rtimes_rG)\right)$ of the multiplier algebra. Note that $\mathrm{Ad}_{V_t}$ induces a $\ast$-homomorphism on the quotients \[ \mathrm{Ad}_{V_t} \colon RL_{c, Q}^\ast(H_X)\rtimes_rG \to RL_{c, Q}^\ast(H_Y)\rtimes_rG \] as it maps the ideal $RL_{0}^\ast(H_X)\rtimes_rG$ into $RL_{0}^\ast(H_Y)\rtimes_rG$ and the induced map on their K-theory groups is again independent of the choice of a cover $V_t$ of $f$.
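Returning to the $2\times2$ unitary displayed above, which we denote by $u_t$ for this remark only: one can check directly that it is a self-adjoint unitary and that conjugation by it swaps the two corners. Writing $V_i=V_{i,t}$ and using $V_i^\ast V_i=1$, so that $(1-V_iV_i^\ast)V_i=0$ and $(1-V_iV_i^\ast)^2=1-V_iV_i^\ast$, we compute
\[
u_t^2=\begin{bmatrix} (1-V_1V_1^\ast)^2+V_1V_2^\ast V_2V_1^\ast & (1-V_1V_1^\ast)V_1V_2^\ast+V_1V_2^\ast(1-V_2V_2^\ast) \\ V_2V_1^\ast(1-V_1V_1^\ast)+(1-V_2V_2^\ast)V_2V_1^\ast & V_2V_1^\ast V_1V_2^\ast+(1-V_2V_2^\ast)^2 \end{bmatrix}=\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
\]
and
\[
u_t\begin{bmatrix} \mathrm{Ad}_{V_{1,t}}(a) & 0 \\ 0 & 0 \end{bmatrix}u_t=\begin{bmatrix} 0 & 0 \\ 0 & \mathrm{Ad}_{V_{2,t}}(a) \end{bmatrix}
\]
for $a\in RL_{c}^\ast(H_X)\rtimes_rG$.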
\begin{definition} Choose any universal $X$-$G$-module $H_X$ for each proper $G$-space $X$ and any $G$-equivariant continuous cover $(V^f_t\colon H_X\to H_Y)_{t\in [1,\infty)}$ for each $G$-equivariant continuous map $f\colon X\to Y$. A functor $\mathbb{D}^G_\ast$ from the category $\mathcal{PR}^G$ of (second countable, locally compact) proper $G$-spaces to the category $\mathcal{GA}$ of graded abelian groups is defined as \[ \mathbb{D}^G_\ast(X)=K_\ast(RL^\ast_c(H_X)\rtimes_rG), \] \[ \mathbb{D}^G_\ast(f\colon X\to Y)=\mathrm{Ad}_{V^f_t \ast}\colon K_\ast(RL^\ast_c(H_X)\rtimes_rG) \to K_\ast(RL^\ast_c(H_Y)\rtimes_rG). \] \end{definition} \begin{proposition}\label{prop_welldefG} The functor $\mathbb{D}^G_\ast$ from $\mathcal{PR}^G$ to $\mathcal{GA}$ is well-defined. The functor does not depend on the choice of universal $X$-$G$-modules $H_X$ up to canonical equivalence. \end{proposition} \begin{proof} As in the non-equivariant case (Proposition \ref{prop_welldef}), there is a technical point concerning the functoriality. This can be handled in the same manner as in the proof of Proposition \ref{prop_welldef}. A priori, we only have the functoriality \[ \mathbb{D}^G_\ast(f_2) \circ \mathbb{D}^G_\ast(f_1) = \mathbb{D}^G_\ast(f_2\circ f_1) \] for $f_1\colon X\to Y$, $f_2\colon Y\to Z$ when $f_2$ is proper. Note that any $G$-equivariant continuous map from a $G$-compact proper $G$-space $X$ to a proper $G$-space $Y$ is proper. We reduce the functoriality to the case of proper maps. To this end, we first show the representability of $\mathbb{D}^G_\ast(X)$: if $(X_i)_{i \in I}$ is the net of $G$-compact, $G$-invariant closed subsets of $X$, ordered by inclusion, the canonical maps $\bD^G_\ast(X_i) \to \bD^G_\ast(X)$ induce a natural isomorphism \begin{equation} \label{eq_representableG} \varinjlim_{i \in I}\bD^G_\ast(X_i) \cong \bD^G_\ast(X). \end{equation} Note that the functoriality for proper maps is used for defining this inductive system.
It suffices to show that if $U_n$ is an increasing sequence of relatively $G$-compact $G$-invariant open subsets of $X$ such that $\cup_n U_n=X$, then for their closures $X_n=\bar U_n$, the natural inclusions induce an isomorphism \[ \varinjlim_nK_\ast(RL_c^\ast(H_{X_n})\rtimes_rG)\cong K_\ast(RL_c^\ast(H_{X})\rtimes_rG). \] We may assume $H_X$ is very ample. Since $X_n=\bar U_n$, the subspace $\chi_{X_n}H_X$ is a very ample $X_n$-$G$-module and we may assume $H_{X_n}=\chi_{X_n}H_X$. Then, we see that $RL_c^\ast(H_{X_n})=\chi_{X_n}RL_c^\ast(H_{X})\chi_{X_n}$ and $RL_c^\ast(H_{X_n})$ is an increasing sequence of $G$-$C^*$-subalgebras of $RL_c^\ast(H_{X})$ whose union is dense in $RL_c^\ast(H_{X})$. The claim follows from the continuity of K-theory. Note that this identification \eqref{eq_representableG} is compatible with $\mathbb{D}^G_\ast(f\colon X\to Y)$: if $(X_i)_{i \in I}$, $(Y_j)_{j\in J}$ are the nets of $G$-compact, $G$-invariant closed subsets of $X$ and $Y$ respectively, and if we consider the map \[ \varinjlim_{i\in I} \bD^G_\ast(f\mid_{X_i})\colon \varinjlim_{i \in I} \bD^G_\ast(X_i) \to \varinjlim_{j \in J}\bD^G_\ast(Y_j) \] defined as the limit of \[ \bD^G_\ast(X_i) \to \bD^G_\ast(f(X_i)) \to \varinjlim_{j \in J}\bD^G_\ast(Y_j) \] which is the composition of $\bD^G_\ast(f\mid_{X_i}\colon X_i\to f(X_i))$ and the natural map (note that $f(X_i)$ is a $G$-compact, $G$-invariant closed subset of $Y$), the following diagram commutes \begin{equation*} \xymatrix{ \varinjlim_{i \in I}\bD^G_\ast(X_i) \ar[r]^-{\varinjlim_{i\in I} \bD^G_\ast(f\mid_{X_i})} \ar[d]^-{\cong} & \varinjlim_{j \in J}\bD^G_\ast(Y_j) \ar[d]^-{\cong} \\ K_\ast(RL_c^\ast(H_{X})\rtimes_rG) \ar[r]^-{\mathbb{D}^G_\ast(f)} & K_\ast(RL_c^\ast(H_{Y})\rtimes_rG).
} \end{equation*} This follows from the independence of $\mathbb{D}^G_\ast(f)$ of the choice of an equivariant continuous cover $V_t\colon H_X\to H_Y$ and from the fact that for any $G$-compact $G$-invariant closed subset $X_0$ of $X$, there is a $G$-compact $G$-invariant closed subset $Y_0$ of $Y$ such that $V_t\chi_{X_0}=\chi_{Y_0}V_t\chi_{X_0}$ for large enough $t$. The functoriality $\bD^G_\ast(f_2)\circ \bD^G_\ast(f_1) = \bD^G_\ast(f_2\circ f_1)$ for not necessarily proper maps $f_1\colon X\to Y$, $f_2\colon Y\to Z$ now follows. \end{proof} \begin{remark} Since $\mathbb{D}^G_\ast$ is representable as we explained in the proof of Proposition \ref{prop_welldefG}, we may extend this functor to arbitrary (not necessarily locally compact) proper $G$-spaces $X$ by defining \[ \mathbb{D}^G_\ast(X) = \varinjlim_{Y\subset X, \mathrm{Gcpt}} \mathbb{D}^G_\ast(Y) \] where the limit is over all $G$-compact, $G$-invariant subspaces $Y$ of $X$. \end{remark} Note that we have naturally \[ \mathbb{D}^G_\ast(X)=K_\ast(RL^\ast_c(H_X)\rtimes_rG) \cong K_\ast(RL^\ast_{c, Q}(H_X)\rtimes_rG) \] by Proposition \ref{prop_quotient_isomG1} and \[ \mathbb{D}^G_\ast(X) = K_\ast(RL^*_c(H_X)\rtimes_rG) \cong K_\ast(RL^*_c(H_X)\rtimes_{\mathrm{max}}G) \] by Proposition \ref{prop_quotient_isomG}. When $G$ is trivial, $\mathbb{D}^G_\ast$ is nothing but the representable K-homology $\mathbb{D}_\ast$. \begin{theorem}\label{thm_Ghomology} The functor $\mathbb{D}^G_\ast$ satisfies the following. \begin{enumerate} \item $\mathbb{D}^G_\ast(\mathrm{empty\, set})\cong 0$. \item Induction from an open subgroup: for any open subgroup $H$ of $G$ and any proper $H$-space $Y$, consider the balanced product $G\times_HY$ which is a proper $G$-space. We have a natural isomorphism $\mathbb{D}^G_\ast(G\times_H Y)\cong \mathbb{D}_\ast^H(Y)$. If $H$ is a compact open subgroup of $G$, we have $\mathbb{D}^G_\ast(G/H\times \Delta^k)\cong K_\ast(C^*_r(H))$ for any $k$-simplex (ball) $\Delta^k$ with the trivial $G$-action.
\item Representable: if $(X_i)_{i \in I}$ is the net of $G$-compact, $G$-invariant closed subsets of $X$, ordered by inclusion, the canonical maps $\mathbb{D}^G_\ast(X_i) \to \mathbb{D}^G_\ast(X)$ induce a natural isomorphism \[ \varinjlim_{i \in I}\mathbb{D}^G_\ast(X_i) \cong \mathbb{D}^G_\ast(X). \] \item Mayer--Vietoris sequence for a $G$-invariant open cover: if $X=U\cup V$ for $G$-invariant open subsets $U$ and $V$ of $X$, we have a natural Mayer--Vietoris sequence \[ \xymatrix{ \mathbb{D}^G_0(U\cap V) \ar[r]^-{}& \mathbb{D}^G_0(U) \oplus \mathbb{D}^G_0(V) \ar[r]^-{} & \mathbb{D}^G_0(X) \ar[d]^-{} \\ \mathbb{D}^G_1(X) \ar[u]^-{} & \mathbb{D}^G_1(U) \oplus \mathbb{D}^G_1(V) \ar[l]^-{} & \mathbb{D}^G_1(U\cap V) \ar[l]^-{}. } \] \item Homotopy invariance: if $h\colon X\times[0, 1]\to Y$ is a $G$-equivariant continuous homotopy between $f_0, f_1\colon X\to Y$, then $\mathbb{D}^G_\ast(f_0)=\mathbb{D}^G_\ast(f_1)$. \end{enumerate} \end{theorem} \begin{proof} We prove (1)--(5) following the argument in the non-equivariant case (see the proof of Theorem \ref{thm_homology}). We would like to thank Rufus Willett for telling the author that the non-equivariant proof of (5), the homotopy invariance, should extend to this equivariant setting. (1) If $X$ is the empty set, $H_X=0$ and $RL_c^\ast(H_X)\rtimes_rG=0$. (2) The first claim: Let $X=G\times_HY$. Here, $Y$ is an $H$-invariant open subset of $X$. We may assume that the chosen $X$-$G$-module $H_X$ is very ample and that the chosen $Y$-$H$-module $H_Y$ is $\chi_YH_X$, as this is a very ample $Y$-$H$-module. By Proposition \ref{prop_compactopen}, the inclusion \[ RL^*_c(H_Y)\rtimes_rH \to RL^*_c(H_X)\rtimes_rG \] induces an isomorphism on K-theory groups. This gives \[ \mathbb{D}_\ast^H(Y) \cong \mathbb{D}^G_\ast(G\times_H Y). \] Naturality (with respect to $Y$) can be checked directly. The second claim will be proved after we prove the homotopy invariance (5).
(3) We have already proved this in the proof of Proposition \ref{prop_welldefG}. (4) As in the proof of the non-equivariant case, consider $G$-invariant ideals $RL_{c, Q}^\ast(H_X)_U$, $RL_{c, Q}^\ast(H_X)_V$, $RL_{c, Q}^\ast(H_X)_{U\cap V}$ of $RL_{c, Q}^\ast(H_X)$. These are naturally identified with $RL_{c, Q}^\ast(H_U)$, $RL_{c, Q}^\ast(H_V)$, $RL_{c, Q}^\ast(H_{U\cap V})$ respectively. Note that $RL_{c, Q}^\ast(H_X)\rtimes_rG$ is naturally a $C_0(X/G)$-algebra and let \[ \left(RL_{c, Q}^\ast(H_X)\rtimes_rG\right)_{U/G}, \,\, \left(RL_{c, Q}^\ast(H_X)\rtimes_rG\right)_{V/G},\,\, \left(RL_{c, Q}^\ast(H_X)\rtimes_rG\right)_{(U\cap V)/G} \] be its ideals corresponding to open subsets $U/G$, $V/G$ and $(U\cap V)/G$ of $X/G$. It is not hard to see that \[ RL_{c, Q}^\ast(H_X)_U \rtimes_rG= \left(RL_{c, Q}^\ast(H_X)\rtimes_rG\right)_{U/G} \] inside $RL_{c, Q}^\ast(H_X) \rtimes_rG$ and similarly for the other two. These ideals of $RL_{c, Q}^\ast(H_{X})\rtimes_rG$ satisfy \[ \left(RL_{c, Q}^\ast(H_X)\rtimes_rG\right)_{U/G} + \left(RL_{c, Q}^\ast(H_X)\rtimes_rG\right)_{V/G} = \left(RL_{c, Q}^\ast(H_X)\rtimes_rG\right), \] \[ \left(RL_{c, Q}^\ast(H_X)\rtimes_rG\right)_{U/G} \cap \left(RL_{c, Q}^\ast(H_X)\rtimes_rG\right)_{V/G} = \left(RL_{c, Q}^\ast(H_X)\rtimes_rG\right)_{(U\cap V)/G}. \] The corresponding Mayer--Vietoris sequence is the desired one. The naturality can be checked directly. (5) For any proper $G$-space $X$ and for $r \in[0, 1]$, let \[ f_r\colon X\times [0, 1] \to X\times [0, 1], \,\, (x, t)\mapsto (x, (1-r)t). \] As in the proof of the non-equivariant case, it suffices to show the following claim: \[ \bD^G_\ast(f_1) = \bD^G_\ast(f_0) (=\mathrm{Id}). \] We reuse the notation from the proof of the non-equivariant case except that now $H_X$ is a universal $X$-$G$-module. Note, for example, that the isometries $W_z$ for $z\in Z$ and $V_{n,t}$ are all $G$-equivariant and all the properties proved in the non-equivariant case are still valid in this context.
As in the non-equivariant case, we let $\mathcal{A}=RL_{c}^\ast(H_{X\times[0, 1]})$ and $\mathcal{A}^\infty= RL_c^\ast(H_{X\times[0, 1]}\otimes l^2(\mathbb{N}))$. As before, we have $G$-equivariant $\ast$-homomorphisms \[ \mathrm{Ad}_{V_{n,t}}\colon \mathcal{A} \to \mathcal{A} \] for $n\in \mathbb{N}_{>0}\cup\{\infty\}$ and \[ \alpha = \sum_{1\leq n < \infty} \mathrm{Ad}_{U_nV_{n,t}},\,\, \beta= \sum_{1\leq n < \infty} \mathrm{Ad}_{U_nV_{n+1,t}}, \,\,\, \gamma= \sum_{1\leq n < \infty} \mathrm{Ad}_{U_nV_{\infty,t}} \] from $\mathcal{A}$ to the multiplier algebra $M(\mathcal{A}^\infty)$. We obtain $\ast$-homomorphisms \[ \alpha \rtimes_r1, \beta\rtimes_r1, \gamma\rtimes_r1\colon \mathcal{A}\rtimes_rG \to M(\mathcal{A}^\infty)_\mathrm{Gcont}\rtimes_rG \to M(\mathcal{A}^\infty\rtimes_rG) \] where $M(\mathcal{A}^\infty)_\mathrm{Gcont}$ consists of $G$-continuous elements of $M(\mathcal{A}^\infty)$. Each of the following pairs of $\ast$-homomorphisms \[ ( \alpha\rtimes_r1, \gamma\rtimes_r1), \,\,\, ( \beta\rtimes_r1, \gamma\rtimes_r1) \] defines a $\ast$-homomorphism from $\mathcal{A}\rtimes_rG$ to the double \[ D'=M(\mathcal{A}^\infty\rtimes_rG)\oplus_{\mathcal{A}^\infty\rtimes_rG}M(\mathcal{A}^\infty\rtimes_rG). \] The two $\ast$-homomorphisms naturally factor through $C_{\mathrm{Gcont}}\rtimes_rG$ and $D_{\mathrm{Gcont}}\rtimes_rG$ as \[ \mathcal{A}\rtimes_rG \to C_{\mathrm{Gcont}}\rtimes_rG \to D_{\mathrm{Gcont}}\rtimes_rG \to D' \] where the first map is $(\alpha, \gamma)\rtimes_r1$ or $(\beta, \gamma)\rtimes_r1$. Here, we recall that \[ D=M(\mathcal{A}^\infty)\oplus_{\mathcal{A}^\infty}M(\mathcal{A}^\infty) = \{ (a_1, a_2) \in M(\mathcal{A}^\infty)\oplus M(\mathcal{A}^\infty) \mid a_1-a_2 \in \mathcal{A}^\infty \}, \] and \[ C=\{ (a_1, a_2)\in D \mid a_2= \sum_{1\leq n < \infty} \mathrm{Ad}_{U_nV_{\infty, t}}(T_t), T_t \in \mathcal{A} \}. 
\] The $\ast$-homomorphisms $(\alpha, \gamma)\rtimes_r1$ and $(\beta, \gamma)\rtimes_r1$ from $\mathcal{A}\rtimes_rG$ to $C_{\mathrm{Gcont}}\rtimes_rG$ are conjugate in $C_{\mathrm{Gcont}}\rtimes_rG$ by a partial isometry $w=(w_1, w_2)$ in the multiplier algebra $M(C_{\mathrm{Gcont}})\subset M(C_{\mathrm{Gcont}}\rtimes_rG)$ (note that $w$ is $G$-equivariant). It follows that $(\alpha\rtimes_r1, \gamma\rtimes_r1)$ and $( \beta\rtimes_r1, \gamma\rtimes_r1)$ from $\mathcal{A}\rtimes_rG$ to $D'$ induce the same map on the K-theory groups. Now we have that \[ (\alpha\rtimes_r1, \gamma\rtimes_r1)_\ast = ( \beta\rtimes_r1, \gamma\rtimes_r1)_\ast \colon K_\ast(\mathcal{A}\rtimes_rG) \to K_\ast(D'). \] From here, by the same argument as in the non-equivariant case, we see that the two maps $\mathrm{Ad}_{V_{1,t}}\rtimes_r1$, $\mathrm{Ad}_{V_{\infty,t}}\rtimes_r1$ from $\mathcal{A}\rtimes_rG$ to $\mathcal{A}\rtimes_rG$ induce the same map on the K-theory groups. Thus, we have $\bD^G_\ast(f_0)=\bD^G_\ast(f_1)$. (2) The second claim: by homotopy invariance, it is enough to show $\bD_\ast^{H}(\mathrm{point}) \cong K_\ast(C^*_r(H))$. This follows from the following proposition. \end{proof} \begin{proposition}\label{prop_pointcase} For any locally compact group $G$ and for any $G$-$C^*$-algebra $B$, let \[ \mathrm{ev}_1 \colon C_b([1, \infty), \mathfrak{K}(l^2(\mathbb{N}))\otimes B)_{\mathrm{Gcont}} \to \mathfrak{K}(l^2(\mathbb{N}))\otimes B \] be the evaluation at $t=1$. Then, its reduced crossed product \[ \mathrm{ev}_1 \colon C_b([1, \infty), \mathfrak{K}(l^2(\mathbb{N}))\otimes B)_{\mathrm{Gcont}}\rtimes_rG \to \mathfrak{K}(l^2(\mathbb{N}))\otimes B\rtimes_rG \] induces an isomorphism on the K-theory groups. The same is true if we replace the reduced crossed product with the maximal one.
More generally, for any $G$-$C^*$-algebra $B_1$, the evaluation map \[ \mathrm{ev}_1\colon \left(C_b([1, \infty), \mathfrak{K}(l^2(\mathbb{N}))\otimes B)_{\mathrm{Gcont}}\otimes B_1\right) \rtimes_rG \to \left( \mathfrak{K}(l^2(\mathbb{N}))\otimes B\otimes B_1\right) \rtimes_rG \] induces an isomorphism on the K-theory groups. The same is true if we replace either the minimal tensor product or the reduced crossed product or both with the maximal ones. \end{proposition} \begin{proof} Let $I_u$ be the kernel of the evaluation map \[ \mathrm{ev}_1\colon C_{b, u}([1, \infty), \mathfrak{K}(l^2(\mathbb{N}))\otimes B)_{\mathrm{Gcont}}\to \mathfrak{K}(l^2(\mathbb{N}))\otimes B \] at $t=1$. It is easy to see that the Eilenberg swindle argument in \cite[Proposition 6.3.3]{WY2020} descends to the crossed product to show $K_\ast(I_u\rtimes_rG)=0$. Since the evaluation map admits a $G$-equivariant splitting, it follows that its reduced crossed product \[ \mathrm{ev}_1 \colon C_{b, u}([1, \infty), \mathfrak{K}(l^2(\mathbb{N}))\otimes B)_{\mathrm{Gcont}}\rtimes_rG \to \mathfrak{K}(l^2(\mathbb{N}))\otimes B\rtimes_rG \] induces an isomorphism on the K-theory groups. Thus, it suffices to show that the natural inclusion \[ C_{b, u}([1, \infty), \mathfrak{K}(l^2(\mathbb{N}))\otimes B)_{\mathrm{Gcont}}\rtimes_rG \to C_{b}([1, \infty), \mathfrak{K}(l^2(\mathbb{N}))\otimes B)_{\mathrm{Gcont}}\rtimes_rG \] induces an isomorphism on the K-theory groups. For this, we just need to check that the argument in the proof of \cite[Theorem 3.4]{WY2021} generalizes to this setting. Indeed, the relevant pushout (pullback) squares that appear in the proof of \cite[Theorem 3.4]{WY2021} are preserved by the reduced crossed product.
This is because, for any $G$-$C^*$-algebra $B_1$, in the following pushout square \[ \xymatrix{ C_b([1, \infty), B_1)_{\mathrm{Gcont}}\ar[d]^-{} \ar[r]^-{} & C_b(E, B_1)_{\mathrm{Gcont}} \ar[d]^-{} \\ C_b(O, B_1)_{\mathrm{Gcont}} \ar[r]^-{} & C_b(\mathbb{N}, B_1)_{\mathrm{Gcont}} } \] for $E=\sqcup_{n\geq1}[2n, 2n+1]$ and $O=\sqcup_{n\geq1}[2n-1, 2n]$, each of the two vertical (and horizontal) surjections admits a $G$-equivariant c.c.p.\ splitting (by extending functions constantly and by multiplying a bump function). The uniformly continuous case is the same. The rest of the argument in their proof works verbatim except for a few points where we have to be careful, which we explain below. We refer the reader to the proof of \cite[Theorem 3.4]{WY2021} to follow these explanations. Let \[ C^0_b(E) \subset C_b(E, \mathfrak{K}(l^2(\mathbb{N}))\otimes B)_{\mathrm{Gcont}} \] be the subalgebra of functions $f$ on $E$ such that $f(2n)=0$ for all $n$. Using the above pullback square and similar ones, it comes down to showing that the K-theory of $C^0_b(E) \rtimes_rG$ is zero. In fact, the K-theory of $(C^0_b(E)\otimes B_1)\rtimes_rG$ is zero for any $G$-$C^*$-algebra $B_1$. In this generality, it is enough to consider $K_0$ (taking $B_1=C_0(\mathbb{R})$). We consider \[ x = [p] - [1_k] \in K_0( (C^0_b(E)\otimes B_1) \rtimes_rG) \] for $p$ in the matrix algebra of the unitization of $(C^0_b(E)\otimes B_1)\rtimes_rG$. The first point we explain is that using the inclusion (this is well-defined) \[ (C^0_b(E)\otimes B_1) \rtimes_rG \to C_b(E, (\mathfrak{K}(l^2(\mathbb{N}))\otimes B \otimes B_1)\rtimes_rG), \] we view $p$ as a function on $E$ with values in the matrix algebra of the unitization of $(\mathfrak{K}(l^2(\mathbb{N}))\otimes B \otimes B_1)\rtimes_rG$. Using the uniform continuity of $p$ on each interval $[2n, 2n+1]$, we choose $r_n\in (0, 1)$ as in their proof.
The second point we explain is that the element \[ x_\infty= [\sum_{l=0}^\infty s_l p^{(l)}s_l^* ] - [\sum_{l=0}^{\infty} s_l1_k s_l^\ast] \] is well-defined in our context, that is, $x_\infty \in K_0((C^0_b(E)\otimes B_1) \rtimes_rG )$. Here, $s_l$ are the isometries on $l^2(\mathbb{N})$ such that $\sum_{l=0}^\infty s_l s_l^\ast=1$. This is because the function \[ \sum_{l=0}^\infty s_l p^{(l)}s_l^* \] on $E$ is defined by the reduced crossed product of the tensor product of the identity on $B_1$ with a $G$-equivariant $\ast$-homomorphism on $C^0_b(E)$ which sends $T \in C^0_b(E)$ to \[ \sum_{l=0}^\infty s_l T^{(l)}s_l^* \] where \[ T^{(l)}_t= T_{2n+(t-2n)(r_n)^l}\,\,\,\, \text{for $t\in [2n, 2n+1]$}. \] Only the fact that $r_n\in (0, 1)$ is used for this map to send $C^0_b(E)$ to itself. From here, their proof of $x_\infty +x = x_\infty$ works verbatim. The case of the maximal crossed product and the case with an extra coefficient $B_1$ can be proved in the same way. \end{proof} In fact, it is not hard to see that the argument in the proof of \cite[Theorem 3.4]{WY2021} generalizes to show the following. \begin{proposition}\label{prop_ucsame} For any universal $X$-$G$-module $H_X$ or for any $X$-$G$-module of the form $H_X^0\otimes l^2(\mathbb{N})$ for an $X$-$G$-module $H_X^0$, the natural inclusion induces an isomorphism \[ K_\ast(RL^*_u(H_X)\rtimes_rG) \cong K_\ast(RL^*_c(H_X)\rtimes_rG). \] The same is true if we replace the reduced crossed product with the maximal one. More generally, for any $G$-$C^*$-algebra $B$, the natural inclusion induces an isomorphism \[ K_\ast(\left(RL^*_u(H_X)\otimes B\right) \rtimes_rG) \cong K_\ast(\left(RL^*_c(H_X)\otimes B\right)\rtimes_rG). \] The same is true if we replace either the minimal tensor product or the reduced crossed product or both with the maximal ones. \end{proposition} \begin{proof} The proof of \cite[Theorem 3.4]{WY2021} generalizes to this setting just as we explained in the proof of Proposition \ref{prop_pointcase}.
\end{proof} \section{With coefficients} Let $B$ be a separable $G$-$C^*$-algebra. We will generalize our previous results on $G$-equivariant K-homology to the case with coefficient $B$. This is quite straightforward, so we will be brief. \begin{definition} For any proper $G$-space $X$ and for any $X$-$G$-module $H_X$, we define a $G$-$C^*$-algebra $RL^\ast_c(H_X\otimes B)$ as the norm completion of the $G$-$\ast$-subalgebra $RL^{\mathrm{alg}}_c(H_X\otimes B)$ of $C_b([1, \infty), \mathfrak{K}(H_X)\otimes B)$ that consists of bounded, $G$-continuous, norm-continuous $\mathfrak{K}(H_X)\otimes B$-valued functions $T_t$ on $[1, \infty)$, such that \begin{enumerate} \item $T_t$ has uniform compact support in the sense that there is a compact subset $K$ of $X$ such that $T_t=\chi_KT_t\chi_K$ for all $t\geq1$, and \item for any $\phi$ in $C_0(X)$, we have \[ \lim_{t\to \infty}||[\phi, T_t]||=\lim_{t\to \infty}||\phi T_t-T_t\phi|| = 0. \] \end{enumerate} A $G$-$C^*$-algebra $RL^\ast_u(H_X\otimes B)$ is similarly defined in $C_{b, u}([1, \infty), \mathfrak{K}(H_X)\otimes B)$. \end{definition} Both $RL^\ast_c(H_X\otimes B)$ and $RL^\ast_u(H_X\otimes B)$ contain a $G$-invariant ideal $RL^\ast_{0}(H_X\otimes B)=C_0([1,\infty), \mathfrak{K}(H_X)\otimes B)$. We denote by $RL^\ast_{c, Q}(H_X\otimes B)$, resp. $RL^\ast_{u, Q}(H_X\otimes B)$, the quotient of $RL^\ast_c(H_X\otimes B)$, resp. $RL^\ast_u(H_X\otimes B)$ by the ideal $RL^\ast_{0}(H_X\otimes B)$. These quotients are naturally $G$-$C_0(X)$-algebras. Let $X, Y$ be proper $G$-spaces and $H_X, H_Y$ be an $X$-$G$-module and a $Y$-$G$-module respectively. Given a $G$-equivariant continuous cover $(V_t\colon H_X \to H_Y)_{t\in[1, \infty)}$ of a $G$-equivariant continuous map $f\colon X\to Y$, the conjugation by $V_t$ defines a $G$-equivariant $\ast$-homomorphism \[ \mathrm{Ad}_{V_t} \colon RL_{c}^\ast(H_X\otimes B) \to RL_{c}^\ast(H_Y\otimes B).
\] Thus, it defines a $\ast$-homomorphism \[ \mathrm{Ad}_{V_t}\colon RL_{c}^\ast(H_X\otimes B)\rtimes_rG \to RL_{c}^\ast(H_Y\otimes B)\rtimes_rG \] on the reduced crossed product. This $\ast$-homomorphism depends on the cover $V_t$ of $f$, but the induced map on their K-theory groups is independent of the choice of $V_t$. \begin{definition}\label{def_DBG} Choose any universal $X$-$G$-module $H_X$ for each proper $G$-space $X$ and any $G$-equivariant continuous cover $(V^f_t\colon H_X\to H_Y)_{t\in [1,\infty)}$ for each $G$-equivariant continuous map $f\colon X\to Y$. A functor $\mathbb{D}^{B, G}_\ast$ from the category $\mathcal{PR}^G$ of (second countable, locally compact) proper $G$-spaces to the category $\mathcal{GA}$ of graded abelian groups is defined as \[ \mathbb{D}^{B, G}_\ast(X) = K_\ast(RL^\ast_c(H_X\otimes B)\rtimes_rG), \] \[ \mathbb{D}^{B, G}_\ast(f\colon X\to Y)=\mathrm{Ad}_{V^f_t \ast} \colon K_\ast(RL^\ast_c(H_X\otimes B)\rtimes_rG) \to K_\ast(RL^\ast_c(H_Y\otimes B)\rtimes_rG). \] \end{definition} \begin{proposition} The functor $\mathbb{D}^{B, G}_\ast$ from $\mathcal{PR}^G$ to $\mathcal{GA}$ is well-defined. The functor does not depend on the choice of universal $X$-$G$-modules $H_X$ up to canonical equivalence. \end{proposition} \begin{proof} The proof of Proposition \ref{prop_welldefG} works verbatim. \end{proof} \begin{theorem}\label{thm_coeff} The functor $\mathbb{D}^{B, G}_\ast$ satisfies the following. \begin{enumerate} \item $\mathbb{D}^{B, G}_\ast(\mathrm{empty\, set})\cong 0$. \item Induction from an open subgroup: for any open subgroup $H$ of $G$ and any proper $H$-space $Y$, we have a natural isomorphism $\mathbb{D}^{B, G}_\ast(G\times_H Y)\cong \mathbb{D}_\ast^{B, H}(Y)$. If $H$ is a compact open subgroup of $G$, we have $\mathbb{D}^{B, G}_\ast(G/H\times \Delta^k)\cong K_\ast(B\rtimes_rH)$ for any $k$-simplex (ball) $\Delta^k$ with the trivial $G$-action.
\item Representable: if $(X_i)_{i \in I}$ is the net of $G$-compact, $G$-invariant closed subsets of $X$, ordered by inclusion, the canonical maps $\bD^{B, G}_\ast(X_i) \to \bD^{B, G}_\ast(X)$ induce a natural isomorphism \[ \varinjlim_{i \in I}\bD^{B, G}_\ast(X_i) \cong \bD^{B, G}_\ast(X). \] \item Mayer--Vietoris sequence for a $G$-invariant open cover: if $X=U\cup V$ for open $G$-invariant subsets $U$ and $V$ of $X$, we have a natural Mayer--Vietoris sequence \[ \xymatrix{ \bD^{B, G}_0(U\cap V) \ar[r]^-{}& \bD^{B, G}_0(U) \oplus \bD^{B, G}_0(V) \ar[r]^-{} & \bD^{B, G}_0(X) \ar[d]^-{} \\ \bD^{B, G}_1(X) \ar[u]^-{} & \bD^{B, G}_1(U) \oplus \bD^{B, G}_1(V) \ar[l]^-{} & \bD^{B, G}_1(U\cap V) \ar[l]^-{}. } \] \item Homotopy invariance: if $h\colon X\times[0 ,1]\to Y$ is a $G$-equivariant continuous homotopy between $f_0, f_1\colon X\to Y$, then $\bD^{B, G}_\ast(f_0)=\bD^{B, G}_\ast(f_1)$. \end{enumerate} \end{theorem} \begin{proof} It is straightforward to see that the proof of Theorem \ref{thm_Ghomology} generalizes to prove all the assertions except the second assertion of (2). Using the homotopy invariance (5) and the first assertion of (2), it is enough to show \[ \mathbb{D}^{B, H}_\ast(\mathrm{point}) = K_\ast(C_b([1, \infty), \mathfrak{K}(H_0\otimes B))_{\mathrm{Hcont}}\rtimes_rH)\cong K_\ast(B\rtimes_rH), \] for a separable infinite-dimensional $H$-Hilbert space $H_0=l^2(\mathbb{N})\otimes L^2(H)$. This follows from Proposition \ref{prop_pointcase}. \end{proof} We have the following analogue of Proposition \ref{prop_ucsameB}. \begin{proposition}\label{prop_ucsameB} For any universal $X$-$G$-module $H_X$ or for any $X$-$G$-module of the form $H_X^0\otimes l^2(\mathbb{N})$ for an $X$-$G$-module $H_X^0$, the natural inclusion induces an isomorphism \[ K_\ast(RL^*_u(H_X\otimes B)\rtimes_rG) \cong K_\ast(RL^*_c(H_X\otimes B)\rtimes_rG). \] The same is true if we replace the reduced crossed product with the maximal one.
More generally, for any $G$-$C^*$-algebra $B_1$, the natural inclusion induces an isomorphism \[ K_\ast(\left(RL^*_u(H_X\otimes B)\otimes B_1\right) \rtimes_rG) \cong K_\ast(\left(RL^*_c(H_X\otimes B)\otimes B_1\right)\rtimes_rG). \] The same is true if we replace either the minimal tensor product or the reduced crossed product, or both, with the maximal one. \end{proposition} \begin{proof} Again, the proof of \cite[Theorem 3.4]{WY2021} generalizes to this setting just as we explained in the proof of Proposition \ref{prop_pointcase}. \end{proof} Any $G$-equivariant $\ast$-homomorphism $\pi \colon B_1\to B_2$ induces \[ \pi\colon RL^*_c(H_X\otimes B_1)\rtimes_rG \to RL^*_c(H_X\otimes B_2)\rtimes_rG. \] This defines a natural transformation \[ \pi_\ast\colon \bD^{B_1, G}_\ast(X) \to \bD_\ast^{B_2, G}(X) \] of the functors from $\mathcal{PR}^G$ to $\mathcal{GA}$. We may also define a variant of representable $G$-equivariant K-homology with coefficients in $B$ in the following way. \begin{definition} Choose any universal $X$-$G$-module $H_X$ for each proper $G$-space $X$ and any $G$-equivariant continuous cover $(V^f_t\colon H_X\to H_Y)_{t\in [1,\infty)}$ for each $G$-equivariant continuous map $f\colon X\to Y$. A functor $\mathbb{D}^{\otimes B, G}_\ast$ from $\mathcal{PR}^G$ to $\mathcal{GA}$ is defined as \[ \mathbb{D}^{\otimes B, G}_\ast(X) = K_\ast(\left(RL^\ast_c(H_X)\otimes B \right)\rtimes_rG), \] \[ \mathbb{D}^{\otimes B, G}_\ast(f\colon X\to Y)=\mathrm{Ad}_{V^f_t \ast} \colon K_\ast(\left(RL^\ast_c(H_X)\otimes B \right)\rtimes_rG) \to K_\ast(\left(RL^\ast_c(H_Y)\otimes B \right)\rtimes_rG). \] \end{definition} The following sequence may fail to be exact in general: \begin{equation*} \xymatrix{ 0 \ar[r]^-{} & RL_0^\ast(H_X)\otimes B \ar[r]^-{} & RL_c^\ast(H_X)\otimes B \ar[r]^-{} & RL_{c, Q}^\ast(H_X)\otimes B \ar[r]^-{} & 0. } \end{equation*} This, however, will not be a problem.
The quotient of $RL_c^\ast(H_X)\otimes B$ by $RL_0^\ast(H_X)\otimes B$ is a $G$-$C_0(X)$-algebra which can be used as an adequate substitute for $RL_{c, Q}^\ast(H_X\otimes B)$ for our purpose. \begin{proposition}The functor $\mathbb{D}^{\otimes B, G}_\ast$ from $\mathcal{PR}^G$ to $\mathcal{GA}$ is well-defined. The functor does not depend on the choice of universal $X$-$G$-modules $H_X$ up to canonical equivalence. \end{proposition} \begin{proof} The proof of Proposition \ref{prop_welldefG} works verbatim. \end{proof} \begin{theorem}\label{thm_coeff2} The functor $\mathbb{D}^{\otimes B, G}_\ast$ satisfies the following. \begin{enumerate} \item $\mathbb{D}^{\otimes B, G}_\ast(\mathrm{empty\, set})\cong 0$. \item Induction from an open subgroup: for any open subgroup $H$ of $G$ and any proper $H$-space $Y$, we have a natural isomorphism $\mathbb{D}^{\otimes B, G}_\ast(G\times_H Y)\cong \mathbb{D}_\ast^{\otimes B, H}(Y)$. If $H$ is a compact open subgroup, we have $\mathbb{D}^{\otimes B, G}_\ast(G/H\times \Delta^k)\cong K_\ast(B\rtimes_rH)$ for any $k$-simplex (ball) $\Delta^k$ with the trivial $G$-action. \item Representable: if $(X_i)_{i \in I}$ is the net of $G$-compact, $G$-invariant closed subsets of $X$, ordered by inclusion, the canonical maps $\bD^{\otimes B, G}_\ast(X_i) \to \bD^{\otimes B, G}_\ast(X)$ induce a natural isomorphism \[ \varinjlim_{i \in I}\bD^{\otimes B, G}_\ast(X_i) \cong \bD^{\otimes B, G}_\ast(X). \] \item Mayer--Vietoris sequence for a $G$-invariant open cover: if $X=U\cup V$ for open $G$-invariant subsets $U$ and $V$ of $X$, we have a natural Mayer--Vietoris sequence \[ \xymatrix{ \bD^{\otimes B, G}_0(U\cap V) \ar[r]^-{}& \bD^{\otimes B, G}_0(U) \oplus \bD^{\otimes B, G}_0(V) \ar[r]^-{} & \bD^{\otimes B, G}_0(X) \ar[d]^-{} \\ \bD^{\otimes B, G}_1(X) \ar[u]^-{} & \bD^{\otimes B, G}_1(U) \oplus \bD^{\otimes B, G}_1(V) \ar[l]^-{} & \bD^{\otimes B, G}_1(U\cap V) \ar[l]^-{}.
} \] \item Homotopy invariance: if $h\colon X\times[0 ,1]\to Y$ is a $G$-equivariant continuous homotopy between $f_0, f_1\colon X\to Y$, then $\bD^{\otimes B, G}_\ast(f_0)=\bD^{\otimes B, G}_\ast(f_1)$. \end{enumerate} \end{theorem} \begin{proof} Again, the proof of Theorem \ref{thm_Ghomology} works in this setting in a straightforward way, except for a few points which we explain. The second assertion of (2) is proven by using the homotopy invariance (5), the first assertion of (2) and $\mathbb{D}^{\otimes B, H}_\ast(\mathrm{point})\cong K_\ast(B\rtimes_rH)$, which follows from Proposition \ref{prop_pointcase} with $B=\mathfrak{K}(L^2(H))$ and $B_1=B$ (for $G=H$). For the Mayer--Vietoris sequence (4), the previous argument works verbatim if we use, instead of $RL_{c, Q}^*(H_X)\otimes B$, the quotient of $RL_{c}^*(H_X)\otimes B$ by $RL_0^*(H_X)\otimes B$, which is a $G$-$C_0(X)$-algebra. \end{proof} We have a natural inclusion \[ \left(RL_c^*(H_X)\otimes B \right) \rtimes_rG \to RL_c^*(H_X\otimes B)\rtimes_rG \] induced from the inclusion \[ RL_c^*(H_X)\otimes B \to RL_c^*(H_X\otimes B). \] The latter inclusion is well-defined since the inclusion \[ C_b([1 ,\infty), \mathfrak{K}(H_X)) \otimes B \to C_b([1 ,\infty), \mathfrak{K}(H_X)\otimes B) \] is well-defined. This follows, for example, from the fact that, in general, the inclusion \[ \mathfrak{L}(\mathcal{E}) \otimes B \to \mathfrak{L}(\mathcal{E}\otimes B) \] is well-defined for any Hilbert $A$-module $\mathcal{E}$ over a $C^*$-algebra $A$. The inclusion induces a natural transformation \[ \mathbb{D}^{\otimes B, G}_\ast(X) \to \mathbb{D}^{B, G}_\ast(X) \] of functors from $\mathcal{PR}^G$ to $\mathcal{GA}$. We refer the reader to \cite[Appendix]{Valette02} for the definition of a proper $G$-CW complex for a discrete group $G$.
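The well-definedness of the inclusion $\mathfrak{L}(\mathcal{E}) \otimes B \to \mathfrak{L}(\mathcal{E}\otimes B)$ used above can be sketched on elementary tensors; this is a standard fact about external tensor products of Hilbert modules, recorded here only as a reminder and not as part of the argument:

```latex
% On the external tensor product Hilbert (A \otimes B)-module
% \mathcal{E} \otimes B, an elementary tensor T \otimes b, with
% T \in \mathfrak{L}(\mathcal{E}) and b \in B, acts by
(T \otimes b)(\xi \otimes b') \;=\; T\xi \otimes bb',
  \qquad \xi \in \mathcal{E},\ b' \in B.
% One checks \|T \otimes b\| \leq \|T\|\,\|b\|, and that the resulting
% \ast-homomorphism from the algebraic tensor product is contractive for
% the minimal tensor norm, hence extends to
% \mathfrak{L}(\mathcal{E}) \otimes B \to \mathfrak{L}(\mathcal{E} \otimes B).
```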
\begin{theorem}\label{thm_discrete_natural_isom} Let $G$ be a countable discrete group, $B$ be a separable $G$-$C^*$-algebra and $X$ be a proper $G$-space which is $G$-equivariantly homotopy equivalent to a proper $G$-CW complex. Then, for any universal $X$-$G$-module $H_X$, the natural inclusion \[ \left(RL_c^*(H_X)\otimes B \right) \rtimes_rG \to RL_c^*(H_X\otimes B)\rtimes_rG \] induces an isomorphism on $K$-theory groups. In other words, the natural transformation \[ \mathbb{D}^{\otimes B, G}_\ast(X) \to \mathbb{D}^{B, G}_\ast(X) \] is an isomorphism if $X$ is $G$-equivariantly homotopy equivalent to a proper $G$-CW complex. \end{theorem} \begin{proof} The natural transformation $\mathbb{D}^{\otimes B, G}_\ast \to \mathbb{D}^{B, G}_\ast$ is an isomorphism when $X=G/H$ for any finite subgroup $H$. The claim follows from Theorem \ref{thm_coeff} and Theorem \ref{thm_coeff2}. \end{proof} \section{Comparison with the localized Roe algebra} In this section, we compare the crossed product algebra $RL^*_u(H_X)\rtimes_rG$ with the localized equivariant Roe algebra (the equivariant localization algebra) $C^*_L(X)^G$ for any discrete group $G$ and for any proper $G$-space $X$. For later purposes, most of the results in this section will be proved in a general setting where $G$ is any locally compact group and $X$ is any proper $G$-space. We also let $B$ be any separable $G$-$C^*$-algebra, which will serve as the coefficient algebra. For any $X$-$G$-module $H_X$, we consider $H_X\otimes B$ as a $G$-Hilbert $B$-module equipped with the diagonal $G$-action. Any Borel function $\phi$ on $X$ is represented as $\phi\otimes 1\in \mathfrak{L}(H_X\otimes B)$ and we write $\phi=\phi\otimes 1$ as long as there is no confusion. Recall that $T \in \mathfrak{L}(H_X\otimes B)$ is locally compact if $\phi T, T\phi \in \mathfrak{K}(H_X\otimes B)$ for any $\phi \in C_0(X)$.
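To illustrate the notion of local compactness just recalled, here is a standard non-equivariant example (our own, with $B=\mathbb{C}$, $X=\mathbb{R}$ and $H_X=L^2(\mathbb{R})$, not taken from the text):

```latex
% A block-diagonal operator with uniformly bounded compact blocks:
T \;=\; \sum_{n \in \mathbb{Z}} \chi_{[n,n+1)}\, K_n\, \chi_{[n,n+1)},
  \qquad K_n \ \text{compact},\ \ \sup_n \|K_n\| < \infty.
% For \phi \in C_0(\mathbb{R}), the operator \phi T differs in norm from a
% finite sum of compact operators by at most
% \sup_{|x| \geq N} |\phi(x)| \cdot \sup_n \|K_n\| \to 0 as N \to \infty,
% so \phi T (and likewise T\phi) is compact; hence T is locally compact.
% T itself is compact only when \|K_n\| \to 0.
```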
\begin{definition}(cf.\ \cite[Definition 5.2.1]{WY2020}) For any $X$-$G$-module $H_X$, we define the equivariant Roe algebra $C^*(H_X\otimes B)^G_{\mathrm{Gcpt}}$ with $G$-compact support as the norm completion of the $\ast$-algebra $\C(H_X\otimes B)_{\mathrm{Gcpt}}^G$ consisting of $T \in \mathfrak{L}(H_X\otimes B)$ satisfying: \begin{enumerate} \item $T$ has $G$-compact support, i.e.\ $\mathrm{supp}(T)\subset Y\times Y$ for some $G$-compact subset $Y$ of $X$. \item $T$ is $G$-equivariant. \item $T$ is locally compact. \item $T$ is properly supported, i.e.\ for any compact subset $A$, there is a compact subset $A'$ so that $T\chi_A=\chi_{A'}T\chi_A$ and $\chi_AT=\chi_AT\chi_{A'}$. \end{enumerate} If $X$ is $G$-compact, we write $C^*(H_X\otimes B)^G=C^*(H_X\otimes B)^G_{\mathrm{Gcpt}}$ and call it the equivariant Roe algebra. \end{definition} \begin{remark}\label{rem_same} If $X$ has a $G$-invariant proper metric $d$ and if $T$ is $G$-equivariant and has $G$-compact support, condition (4) is the same as saying that $T$ has finite propagation with respect to $d$. Indeed, if $Y=GY_0$ is the support of $T$ for a compact subset $Y_0$ of $X$ and if we let $Y_1$ be any compact neighborhood of $Y_0$, we have $T\chi_{Y_1}=\chi_{Y_2}T\chi_{Y_1}$ and $\chi_{Y_1}T=\chi_{Y_1}T\chi_{Y_2}$ for some compact subset $Y_2$ of $Y$. This condition already implies that if $(x, y)\in \mathrm{supp}(T)$ and if $x\in gY_0$, resp.\ $y \in gY_0$, then $y \in gY_2$, resp.\ $x\in gY_2$. Hence, $\mathrm{prop}(T)\leq d(Y_0, Y_2)$. The converse is obvious. \end{remark} \begin{proposition} Let $X$ be a $G$-compact proper $G$-space with a $G$-invariant proper metric $d$ and $H_X$ be an $X$-$G$-module. Then, the equivariant Roe algebra $C^*(H_X\otimes B)^G$ is the same as the norm completion of the $\ast$-algebra $\C(H_X\otimes B)^G$ consisting of $G$-equivariant, locally compact operators in $\mathfrak{L}(H_X\otimes B)$ with finite propagation with respect to $d$.
\end{proposition} \begin{proof} This follows from Remark \ref{rem_same}. \end{proof} In general, $C^*(H_X\otimes B)^G_{\mathrm{Gcpt}}$ is just the inductive limit (union) of $C^*(H_{X_n}\otimes B)^G$ where $X_n$ are $G$-compact, $G$-invariant closed subsets of $X$ with $\cup X_n=X$ and $H_{X_n}=\chi_{X_n}H_X$. \begin{definition}\label{def_tildeH} Let $H_X$ be an $X$-$G$-module. We define an $X$-$G$-module \[ \tilde H_X = H_X\otimes L^2(G) \] equipped with the following covariant representation of $G$ and $C_0(X)$: for $\phi \in C_0(X)$, \[ \phi \mapsto \phi\otimes 1 \in \mathfrak{L}(H_X\otimes L^2(G)), \] and for $g \in G$, \[ g \mapsto u_g\otimes \lambda_g \in \mathfrak{L}(H_X\otimes L^2(G)) \] where $u_g$ is the unitary on $H_X$ corresponding to $g$ and $\lambda_g$ is the left-translation by $g$. \end{definition} \begin{definition}\label{def_rightreg} The right-regular representation \[ \rho\colon\mathfrak{L}(H_X\otimes B)_{\mathrm{Gcont}}\rtimes_rG \to \mathfrak{L}(\tilde H_X\otimes B) \] of $\mathfrak{L}(H_X\otimes B)_{\mathrm{Gcont}}\rtimes_rG$ is defined as follows: for $T \in \mathfrak{L}(H_X\otimes B)$, \[ (\rho(T)f)(h)=h(T)f(h) \] for $(f\colon h\mapsto f(h)\in H_X \otimes B) \in L^2(G, H_X\otimes B)$, and for $g\in G$, \[ \rho(g)= 1\otimes 1\otimes \rho_g \in \mathfrak{L}(H_X\otimes B \otimes L^2(G)) \] where $\rho_g$ is the right-regular representation of $g\in G$, \[ \rho_gf(s) = f(sg)\Delta^{1/2}(g) \] for $f \in L^2(G)$, where $\Delta$ is the modular function. \end{definition} \begin{lemma} For any locally compact group $G$, for any proper $G$-space $X$ and for any $X$-$G$-module $H_X$, the right-regular representation $\rho$ maps $\mathfrak{K}(H_X\otimes B)\rtimes_rG$ into the equivariant Roe algebra $C^*(\tilde H_X\otimes B)_{\mathrm{Gcpt}}^G \subset \mathfrak{L}(\tilde H_X\otimes B)$ with $G$-compact support.
\end{lemma} \begin{proof} It is easy to see that the image of $\rho$ consists of $G$-equivariant operators on $\tilde H_X\otimes B$ and that $\rho(g)$ for any $g$ in $G$ commutes with $C_0(X)$. We show that for any $T \in \mathfrak{K}(H_X\otimes B)$ with compact support $X_0\subset X$, and for any function $f \in C_c(G)$, $\rho(T)\rho(f)\in \C(\tilde H_X\otimes B)_{\mathrm{Gcpt}}^G$. This implies our assertion. The operator $\rho(T)$ is of the form \[ (h \mapsto h(T)) \in C_b(G, \mathfrak{K}(H_X\otimes B)) \] acting on $L^2(G, H_X\otimes B)$ in an obvious way, and it is easy to see that $\rho(T)$ has $G$-compact support $GX_0$. Moreover, for any compact subset $A$ of $X$, \[ \chi_Ah(T)=h(T)\chi_A=0 \] for all $h$ in $G$ outside a compact subset $K$. In particular, by letting $A'=KX_0$, which is compact, we have \[ \chi_Ah(T)= \chi_Ah(T)\chi_{A'}, \quad h(T)\chi_A=\chi_{A'}h(T)\chi_A \] for all $h$ in $G$. This shows $\rho(T)$ is properly supported. Finally, for any $\phi\in C_c(X)$, the operator $\phi \rho(T)$ is of the form \[ (h \mapsto \phi h(T)) \in C_c(G, \mathfrak{K}(H_X\otimes B)) \] acting on $L^2(G, H_X\otimes B)$ in an obvious way. It follows that $\phi \rho(T)\rho(f) \in \mathfrak{K}(\tilde H_X\otimes B)$ and similarly $\rho(T)\rho(f)\phi \in \mathfrak{K}(\tilde H_X\otimes B)$. We see that $\rho(T)\rho(f)$ is locally compact, so we are done. \end{proof} \begin{proposition}\label{prop_isom} For any locally compact group $G$, for any proper $G$-space $X$ and for any $X$-$G$-module $H_X$, the right-regular representation \[ \rho\colon \mathfrak{K}(H_X\otimes B)\rtimes_rG \to \mathfrak{L}(\tilde H_X\otimes B) \] induces an isomorphism \[ \mathfrak{K}(H_X\otimes B)\rtimes_rG \cong C^*(\tilde H_X\otimes B)_{\mathrm{Gcpt}}^G. \] \end{proposition} Before giving a proof of this, we recall a useful lemma. \begin{lemma}\label{lem_sum}\cite[Lemma 2.5, 2.6]{Nishikawa19} Let $X$ be a proper $G$-space and $H_X$ be an $X$-$G$-module.
For any $\phi_0,\phi_1 \in C_c(X)$, there is a constant $C>0$ such that the following holds. Let $(T_g)_{g \in G}$ be a uniformly bounded family of operators on $H_X$ which defines a bounded operator on $L^2(G, H_X)$. Then, the map \[ v\mapsto \int_{g\in G}g(\phi_0)T_g g(\phi_1)vd\mu_G(g) \] on $H_X$ defines the bounded operator $\int_{g\in G}g(\phi_0)T_g g(\phi_1)d\mu_G(g)$ on $H_X$, and we have \[ ||\int_{g\in G}g(\phi_0)T_g g(\phi_1)d\mu_G(g)|| \leq C\sup_{g\in G}||T_g||. \] Moreover, if the function $g\mapsto T_g$ is in $C_0(G, \mathfrak{K}(H_X))$, the integral converges in norm to a compact operator on $H_X$. More generally, the same assertion holds if we consider, in place of $H_X$, a $G$-Hilbert $B$-module $\mathcal{E}_X$ equipped with a non-degenerate representation of the $G$-$C^*$-algebra $C_0(X)$ by adjointable operators (in this case, all operators are assumed to be adjointable). \end{lemma} \begin{proof} The proof in \cite[Lemma 2.5, 2.6]{Nishikawa19} easily generalizes to the case of a $G$-Hilbert $B$-module $\mathcal{E}_X$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop_isom}] For any $G$-invariant closed subspace $Y$ of $X$, let $H_Y=\chi_YH_X$. Then, it is easy to see from the definition of $\tilde H_X$ and $\rho$ that the following diagram commutes: \[ \xymatrix{ \mathfrak{K}(H_X\otimes B)\rtimes_rG \ar[r]^-{\rho} & C^*(\tilde H_X\otimes B)_{\mathrm{Gcpt}}^G \\ \mathfrak{K}(H_Y\otimes B)\rtimes_rG \ar[u]^-{} \ar[r]^-{\rho} & C^*(\tilde H_Y\otimes B)_{\mathrm{Gcpt}}^G, \ar[u]^-{} } \] where the vertical maps are natural inclusions. Since $\mathfrak{K}(H_X\otimes B)\rtimes_rG$, resp.\ $C^*(\tilde H_X\otimes B)_{\mathrm{Gcpt}}^G$, is the inductive limit (the union) of $\mathfrak{K}(H_Y\otimes B)\rtimes_rG$, resp.\ $C^*(\tilde H_Y\otimes B)_{\mathrm{Gcpt}}^G$, for $G$-compact $G$-invariant closed subspaces $Y$ of $X$, it is enough to show the isomorphism when $X$ is $G$-compact. Now, suppose $X$ is $G$-compact.
Let $T \in \C(\tilde H_X\otimes B)^G_{\mathrm{Gcpt}}$. We show that for any $\epsilon>0$, there is $(S\colon h\mapsto S_h) \in C_c(G, \mathfrak{K}(H_X\otimes B)) \subset \mathfrak{K}(H_X\otimes B)\rtimes_rG$ such that \[ || T - \rho(S) || < \epsilon. \] Let $c \in C_c(X)$ be a cut-off function on $X$. Since $T$ is locally compact and properly supported, $cT$ is a compact operator whose support is contained in $X_0\times X_0\subset X\times X$ for some compact subset $X_0$. We have \[ T = \int_{g\in G} g(c)^2T d\mu_G(g) = \int_{g\in G} g(c) g(cT) d\mu_G(g)= \int_{g\in G} g(c) g(cT) g(\chi)d\mu_G(g) \] where $\chi \in C_c(X)$ is fixed so that $\chi=1$ on $X_0$. These integrals converge weakly. Here we used $g(T)=T$. Let $C$ be as in Lemma \ref{lem_sum} for $\phi_0=c$, $\phi_1=\chi$. We approximate the compact operator $cT$ on $H_X\otimes B \otimes L^2(G)$ by any \[ S'\colon h_1\mapsto (S'_{h_1}\colon h_2\mapsto S'_{h_1}(h_2) ) \] in \[ C_c(G, C_c(G, \mathfrak{K}(H_X\otimes B))) \subset C_0(G, \mathfrak{K}(H_X\otimes B))\rtimes_rG\cong \mathfrak{K}(H_X\otimes B \otimes L^2(G)) \] so that $||cT-S'||<\epsilon/C$. Here, the identification \[ C_0(G, \mathfrak{K}(H_X\otimes B))\rtimes_rG\cong \mathfrak{K}(H_X\otimes B \otimes L^2(G)) \] is given by letting $C_0(G, \mathfrak{K}(H_X\otimes B))$ act on $H_X\otimes B \otimes L^2(G)$ in an obvious way and by letting the group $G$ act on $H_X\otimes B \otimes L^2(G)$ by $\rho$, i.e.\ the right-regular representation. We can arrange $\mathrm{supp}( S'_{h_1}(h_2) )\subset X_0\times X_0$ for all $h_1, h_2$, by multiplying by $\chi_{X_0}$. This is possible because $\rho(g)$ commutes with functions in $C_0(X)$. Let \[ \tilde S' = \int_{g \in G} g(c) g(S') d\mu_G(g) = \int_{g \in G} g(c) g(S') g(\chi) d\mu_G(g), \] which converges weakly on $\tilde H_X\otimes B$.
By Lemma \ref{lem_sum} and from $||cT - S'|| < \epsilon/C$, we have \[ ||T - \tilde S'|| = ||\int_{g\in G} g(c)\left( g(cT) - g(S')\right) g(\chi) d\mu_G(g) || < C \cdot \epsilon/C =\epsilon. \] It is not hard to see that $\tilde S' $ is a locally compact, $G$-equivariant, properly supported operator and it is of the form $\rho(S)$ where \[ (S\colon h\mapsto S_h) \in C_c(G, \mathfrak{K}(H_X\otimes B)) \subset \mathfrak{K}(H_X\otimes B)\rtimes_rG, \] \[ S_h = \int_{g\in G} g(c) g((S'_h)(g^{-1})) g(\chi) d\mu_G(g) \in \mathfrak{K}(H_X\otimes B). \] Note that the integral is norm convergent as $S'_h(g^{-1})=0$ for $g$ outside a compact subset of $G$. That is, we have obtained $S\in \mathfrak{K}(H_X\otimes B)\rtimes_rG$ such that $||T-\rho(S)||<\epsilon$, as desired. \end{proof} \begin{remark} This provides an alternative proof of the well-known fact \cite{Roe96} that for any discrete group $G$, for any $G$-compact proper $G$-space $X$ and for any ample $X$-$G$-module $H_X$, $C^*(H_X)^G\cong \mathfrak{K}(l^2(\mathbb{N}))\otimes C^*_r(G)$, at least when $H_X$ is of the form $\tilde H^0_X$. Of course, the general case follows from this if we use the fact that the equivariant Roe algebra $C^*(H_X)^G$ is independent of the choice of an ample $X$-$G$-module $H_X$ up to isomorphism. \end{remark} \begin{definition}(cf.\ \cite[Definition 6.6.1]{WY2020}) \label{def_localized} For any $X$-$G$-module $H_X$, we define the localized Roe algebra $C_{L, u}^*(H_X\otimes B)^G_{\mathrm{Gcpt}}$ with $G$-compact support as the norm completion of the $\ast$-algebra $\C_{L, u}(H_X\otimes B)_{\mathrm{Gcpt}}^G$ consisting of uniformly norm-continuous $\C(H_X\otimes B)_{\mathrm{Gcpt}}^G$-valued functions $T\colon t\mapsto T_t$ on $[1, \infty)$ satisfying: \begin{enumerate} \item $T$ has uniform $G$-compact support, i.e.\ $\mathrm{supp}(T_t)\subset Y\times Y$ for some $G$-compact subset $Y$ of $X$.
\item For any open neighborhood $U$ of the diagonal in $X^+\times X^+$, there is $t_U\geq1$ such that for all $t> t_U$, $\mathrm{supp}(T_t)\subset U$. \end{enumerate} Note that the second condition is the same as saying that $\mathrm{prop}(T_t)\to 0$ with respect to a (any) fixed metric on $X^+$. Similarly, $C_{L, c}^*(H_X\otimes B)^G_{\mathrm{Gcpt}}$ is defined by replacing ``uniformly norm-continuous'' with ``norm-continuous''. If $X$ is $G$-compact, we write $C_{L, u}^*(H_X\otimes B)^G=C_{L, u}^*(H_X\otimes B)^G_{\mathrm{Gcpt}}$, $C_{L, c}^*(H_X\otimes B)^G=C_{L, c}^*(H_X\otimes B)^G_{\mathrm{Gcpt}}$ and call them the localized equivariant Roe algebras. \end{definition} \begin{remark}\label{rem_same2} If $X$ has a $G$-invariant proper metric $d$, condition (2) is the same as saying that $\mathrm{prop}(T_t)\to 0$ as $t\to \infty$ with respect to $d$ when $T$ is $G$-equivariant and has uniform $G$-compact support. Indeed, if $Y=GY_0$ is the support of $T$ for a compact subset $Y_0$ of $X$ and if $Y_1$ is any compact neighborhood of $Y_0$, (2) implies $\mathrm{prop}(\chi_{Y_1}T_t) \to 0$ with respect to $d$. Using $G$-equivariance, we can easily see that $\mathrm{prop}(T_t)\to 0$ with respect to $d$. The converse is easier. \end{remark} \begin{remark} We take the terminology ``localized Roe algebra'' from \cite[Section 6.6]{WY2020}, where the terminology ``equivariant localization algebra'' is reserved for a slightly larger algebra. \end{remark} \begin{proposition} Let $X$ be a $G$-compact proper $G$-space with a $G$-invariant proper metric $d$. For any $X$-$G$-module $H_X$, the localized equivariant Roe algebra $C^*_{L, u}(H_X\otimes B)^G$ is the same as the norm completion of the $\ast$-algebra $\C_{L, u}(H_X\otimes B)^G$ of uniformly norm-continuous $\C(H_X\otimes B)^G$-valued functions $T\colon t\mapsto T_t$ on $[1, \infty)$ with $\mathrm{prop}(T_t)\to 0$ as $t\to \infty$. \end{proposition} \begin{proof} This follows from Remark \ref{rem_same2}.
\end{proof} Let $H_X$ be an $X$-$G$-module and let $\tilde H_X$ be the $X$-$G$-module as in Definition \ref{def_tildeH}. Recall that the right-regular representation (Definition \ref{def_rightreg}) \[ \rho\colon \mathfrak{L}(H_X\otimes B)_{\mathrm{Gcont}}\rtimes_rG \to \mathfrak{L}(\tilde H_X\otimes B) \] restricts to an isomorphism \[ \rho\colon \mathfrak{K}(H_X\otimes B)\rtimes_rG \cong C^*(\tilde H_X\otimes B)^G_{\mathrm{Gcpt}}. \] Applying the right-regular representation at each $t\in [1, \infty)$, we extend $\rho$ to a $\ast$-homomorphism \[ \rho\colon C_{b}([1, \infty), \mathfrak{L}(H_X\otimes B))_{\mathrm{Gcont}}\rtimes_rG \to C_b([1, \infty), \mathfrak{L}(\tilde H_X\otimes B)). \] \begin{proposition}\label{prop_rightreg_localized} For any locally compact group $G$, for any proper $G$-space $X$ and for any $X$-$G$-module $H_X$, the right-regular representation \[ \rho\colon C_{b}([1, \infty), \mathfrak{L}(H_X\otimes B))_{\mathrm{Gcont}}\rtimes_rG \to C_b([1, \infty), \mathfrak{L}(\tilde H_X\otimes B)) \] maps $RL^*_u(H_X\otimes B)\rtimes_rG\subset C_b([1, \infty), \mathfrak{K}(H_X\otimes B))_{\mathrm{Gcont}}\rtimes_rG$ into the localized equivariant Roe algebra $C_{L, u}^*(\tilde H_X\otimes B)_{\mathrm{Gcpt}}^G$ with $G$-compact support. Similarly, $\rho$ maps $RL^*_c(H_X\otimes B)\rtimes_rG$ into $C_{L, c}^*(\tilde H_X\otimes B)_{\mathrm{Gcpt}}^G$. \end{proposition} \begin{proof} Fix any metric $d$ on $X^+$. It is enough to show that for any $T \in RL^*_u(H_X\otimes B)$ with uniform compact support $X_0$ such that $\mathrm{prop}(T_t)\to 0$ with respect to $d$, and for any $f \in C_c(G)$, \[ \rho(T)\rho(f) \in C_{L, u}^*(\tilde H_X\otimes B)_{\mathrm{Gcpt}}^G. \] We already know that $\rho(T_t)\rho(f) \in \C(\tilde H_X\otimes B)_{\mathrm{Gcpt}}^G$ for any $t\geq1$. Moreover, it has uniform $G$-compact support $GX_0$. Note that $\rho(f)$ commutes with $C_0(X)$, so we just need to show that condition (2) in Definition \ref{def_localized} is satisfied for $\rho(T)$.
Condition (2) for $\rho(T)$ is equivalent to the condition that for any compact subset $X_1$ of $X$, $\mathrm{prop}(\rho(T_t)\chi_{X_1}), \mathrm{prop}(\chi_{X_1}\rho(T_t)) \to 0$ as $t\to \infty$ with respect to $d$. On the other hand, $\chi_{X_1}h(T_t)=h(T_t)\chi_{X_1}=0$ for $h \in G\backslash K$ for some compact subset $K$ of $G$, since $T_t$ has uniform compact support $X_0$. Thus, it is enough to show that $\mathrm{prop}(h(T_t))\to 0$ uniformly in $h\in K$. This can be proved as follows. For any $\epsilon>0$, let \[ A=\{(x, y)\in X^+ \times X^+ \mid d(x, y)\geq \epsilon \}. \] Then, $K^{-1}A$ is a compact subset of $X^+\times X^+$ which does not contain the diagonal, so there is $\delta>0$ such that all $(x, y)\in K^{-1}A$ satisfy $d(x, y)\geq \delta$. It follows that for any $(x, y)\in X_0\times X_0$ with $d(x, y)<\delta$, we have $d(kx, ky)<\epsilon$ for all $k\in K$. We see that $\mathrm{prop}(h(T_t))\to 0$ uniformly in $h\in K$, as desired. The continuous case is proved in the same way. \end{proof} \begin{remark} The inclusion $\rho\colon RL^*_u(H_X\otimes B)\rtimes_rG \to C_{L, u}^*(\tilde H_X\otimes B)_{\mathrm{Gcpt}}^G$ is never surjective unless $G$ is finite. For example, if $G$ is compact and if $X$ is a point, then the inclusion is identified with the inclusion \[ C_{b, u}([1, \infty), \mathfrak{K}(H_X)\otimes B)\rtimes_rG \to C_{b, u}([1, \infty), (\mathfrak{K}(H_X)\otimes B)\rtimes_rG), \] which is not surjective unless $G$ is finite, even if $H_X=\C$ and $B=\C$. \end{remark} Now we go back to the classical setting where $G$ is discrete. Here is the promised comparison between the crossed product algebra $RL^*_u(H_X)\rtimes_rG$ and the localized equivariant Roe algebra $C^*_{L, u}(H_X)^G$. A generalization with coefficients in $B$ is possible, but we only consider $B=\C$ here. The continuous case can be handled too, but with extra effort.
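To illustrate the failure of surjectivity noted in the remark above, here is a minimal sketch (our own elaboration, with $H_X=\C$, $B=\C$, $X$ a point and $G$ compact, so that the $G$-action on $\mathfrak{K}(H_X)=\C$ is trivial):

```latex
% With trivial action, the domain of the inclusion becomes a tensor product:
C_{b,u}([1,\infty)) \rtimes_r G \;\cong\; C_{b,u}([1,\infty)) \otimes C^*_r(G),
% while the codomain is C_{b,u}([1,\infty), C^*_r(G)).
% For infinite compact G, C^*_r(G) is an infinite c_0-direct sum of matrix
% blocks; a uniformly continuous function running through infinitely many
% mutually orthogonal blocks lies in C_{b,u}([1,\infty), C^*_r(G)) but not
% in the closed span of elementary tensors f \otimes a, so the inclusion
% is proper.
```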
\begin{theorem} Let $G$ be a countable discrete group, $X$ be a proper $G$-space which is $G$-equivariantly homotopy equivalent to a $G$-CW complex and $H_X$ be any ample $X$-$G$-module. Then, the inclusion $\rho\colon RL^*_u(H_X)\rtimes_rG \to C_{L, u}^*(\tilde H_X)_{\mathrm{Gcpt}}^G$ induces an isomorphism \[ \rho_\ast\colon K_\ast( RL^*_u(H_X)\rtimes_rG) \cong K_\ast(C_{L, u}^*(\tilde H_X)_{\mathrm{Gcpt}}^G). \] In particular, when $X$ is $G$-compact, the inclusion $\rho\colon RL^*_u(H_X)\rtimes_rG \to C_{L, u}^*(\tilde H_X)^G$ induces an isomorphism \[ \rho_\ast\colon K_\ast( RL^*_u(H_X)\rtimes_rG) \cong K_\ast(C_{L, u}^*(\tilde H_X)^G). \] \end{theorem} \begin{proof} It suffices to show the case when $X$ is $G$-compact. For any $X$-$G$-module $H_X$, let $L^*(H_X)^G$ be the equivariant localization algebra as defined in \cite[Definition 6.5.1]{WY2020}. The $C^*$-algebra $L^*(H_X)^G$ is the norm completion of the $\ast$-algebra $\mathbb{L}[H_X]^G$ of all bounded functions $t\mapsto T_t$ from $[1 ,\infty)$ to $\mathfrak{L}(H_X)$ such that \begin{enumerate} \item $T_t$ is $G$-equivariant, \item for any compact subset $K$ of $X$, there exists $t_K\geq1$ such that for all $t\geq t_K$, $\chi_KT_t$, $T_t\chi_K$ are compact, and the functions $t\mapsto \chi_KT_t$, $t\mapsto T_t\chi_K$ are uniformly norm-continuous when restricted to $[t_K, \infty)$, \item for any open neighborhood $U$ of the diagonal in $X^+\times X^+$, there exists $t_U\geq1$ such that for all $t>t_U$, $\mathrm{supp}(T_t)\subset U$. \end{enumerate} It is proved in \cite[Proposition 6.6.2]{WY2020} that the natural inclusion induces an isomorphism \[ K_\ast(C_{L, u}^*(H_X)^G) \cong K_\ast(L^*(H_X)^G) \] whenever $H_X$ is ample. Thus, to prove our claim, it suffices to show that when $X$ is $G$-compact, for any ample $X$-$G$-module $H_X$, the natural inclusion \[ \rho\colon RL^*_u(H_X)\rtimes_rG \to C_{L, u}^*(\tilde H_X)^G \to L^*(\tilde H_X)^G \] induces an isomorphism on K-theory.
For this, we consider more generally, for any (not necessarily $G$-compact) $X$, the equivariant localization algebra $L^*(H_X)^G_{\mathrm{Gcpt}}$ with $G$-compact support to be the completion of $\mathbb{L}[H_X]_{\mathrm{Gcpt}}^G$, which is the subalgebra of $\mathbb{L}[H_X]^G$ consisting of $T_t$ with eventually uniform $G$-compact support, that is, there are $t_0\geq1$ and a $G$-compact subset $X_0$ of $X$ such that for all $t\geq t_0$, $\mathrm{supp}(T_t)\subset X_0\times X_0$. Then, our claim is that the natural inclusion \[ \rho\colon RL^*_u(H_X)\rtimes_rG \to C_{L, u}^*(\tilde H_X)_{\mathrm{Gcpt}}^G \to L^*(\tilde H_X)^G_{\mathrm{Gcpt}} \] induces an isomorphism on K-theory. Now, the point is that just as in the case of our functor $\bD_\ast^G$, the assignment \[ X \mapsto RK_\ast^G(X)=K_\ast(L^*(H_X)^G_{\mathrm{Gcpt}}), \,\,\,f \mapsto RK_\ast^G(f)= \mathrm{Ad}_{V^f_t \ast} \] becomes a functor from $\mathcal{PR}^G$ to $\mathcal{GA}$, where $H_X$ is a chosen ample $X$-$G$-module for a proper $G$-space $X$ and $V^f_t$ is a chosen $G$-equivariant continuous cover of a $G$-equivariant continuous map $f\colon X\to Y$. As in the case of $\bD^G_\ast$ (see the proof of Proposition \ref{prop_welldefG}), one shows the composition law for not necessarily proper maps by reducing it to the $G$-compact setting after showing the representability of the functor $RK^G_\ast$. Moreover, the functor $RK_\ast^G$ satisfies the five listed properties in Theorem \ref{thm_Ghomology}. Since this does not involve any new idea, we explain it only briefly. The first part of the second property (2) (induction from a finite subgroup) follows from \cite[Proposition 6.5.13]{WY2020} and from (3) the representability of $RK^G_\ast$.
The fourth property (4) (the Mayer--Vietoris sequence) can be shown just as in the case of $\bD_\ast^G$ by using the quotient $L_Q^*(H_X)^G_{\mathrm{Gcpt}}$ of $L^*(H_X)^G_{\mathrm{Gcpt}}$ by the ideal $L_0^*(H_X)^G_{\mathrm{Gcpt}}$, which is the completion of the subalgebra of $\mathbb{L}[H_X]_{\mathrm{Gcpt}}^G$ consisting of $T_t$ such that for any compact subset $K$ of $X$, $\chi_KT_t = T_t\chi_K = 0$ eventually. The quotient $L_Q^*(H_X)^G_{\mathrm{Gcpt}}$ is a $C_0(X/G)$-algebra (see \cite[Lemma 6.4.18]{WY2020}) and the quotient map induces an isomorphism \[ K_\ast(L^*(H_X)^G_{\mathrm{Gcpt}}) \cong K_\ast(L_Q^*(H_X)^G_{\mathrm{Gcpt}}) \] because $K_\ast(L_0^*(H_X)^G_{\mathrm{Gcpt}})=0$ (see \cite[Lemma 6.5.12]{WY2020}). For the fifth property (5) (the homotopy invariance), the proof for $\bD_\ast^G$ works verbatim. It remains to prove the second part of (2), that is, \[ RK^H_\ast(\mathrm{point}) \cong K_\ast(C^*_r(H)) \] for any finite group $H$. Using $K_\ast(C_{L, u}^*(H_X)^G) \cong K_\ast(L^*(H_X)^G)$, we just need to show \[ K_\ast(C_{b, u}([1, \infty), \mathfrak{K}(l^2(\mathbb{N})\otimes l^2(H)))^H) \cong K_\ast(C^*_r(H)), \] but since $\mathfrak{K}(l^2(\mathbb{N})\otimes l^2(H))^H \cong \mathfrak{K}(l^2(\mathbb{N}))\otimes C^*_r(H)$, this follows because the evaluation \[ \mathrm{ev}_{1}\colon C_{b, u}([1,\infty), \mathfrak{K}(l^2(\mathbb{N}))\otimes C^*_r(H)) \to \mathfrak{K}(l^2(\mathbb{N}))\otimes C^*_r(H) \] at $t=1$ is an isomorphism on K-theory. Now, the inclusion \[ \rho\colon RL^*_u(H_X)\rtimes_rG \to L^*(\tilde H_X)_{\mathrm{Gcpt}}^G \] induces a natural transformation from the functor $\bD^G_\ast$ to $RK_\ast^G$. We can see that this is an isomorphism when $X=G/H$ for any finite subgroup $H$ of $G$, by considering \[ \rho\colon RL^*_u(H_X)\rtimes_rG \to C_{L, u}^*(\tilde H_X)^G \] instead.
Indeed, at the level of K-theory, this inclusion is isomorphic to the inclusion \[ C_{b, u}([1, \infty), \mathrm{Ind}_H^G\mathfrak{K}(H_Y))\rtimes_rG \to C_{b, u}([1, \infty), \mathrm{Ind}_H^G\mathfrak{K}(H_Y)\rtimes_rG) \] where $\mathrm{Ind}_H^G\mathfrak{K}(H_Y)$ is the $G$-$C_0(G/H)$-algebra with fiber $\mathfrak{K}(H_Y)$ at the coset $H$, or more precisely \[ \mathrm{Ind}_H^G\mathfrak{K}(H_Y) = \{ f \in C_0(G, \mathfrak{K}(H_Y)) \mid f(gh)=h^{-1}(f(g)) \,\, \text{for $h \in H$} \} \] equipped with the left translation $G$-action. It is now easy to see that the above inclusion is an isomorphism on K-theory because both algebras are, at the level of K-theory, isomorphic to $\mathrm{Ind}_H^G\mathfrak{K}(H_Y)\rtimes_rG$ via the evaluation at $t=1$. Of course, we have $K_\ast(\mathrm{Ind}_H^G\mathfrak{K}(H_Y)\rtimes_rG) \cong K_\ast(C^*_r(H))$. Since both functors $\bD^G_\ast$ and $RK_\ast^G$ satisfy properties (1)--(5) in Theorem \ref{thm_Ghomology}, it follows that the natural transformation \[ \rho\colon \bD^G_\ast(X) \to RK^G_\ast(X) \] is an isomorphism for every $X$ which is $G$-equivariantly homotopy equivalent to a $G$-CW complex. Going back to the original question, we just showed that the inclusion $\rho$ induces an isomorphism \[ K_\ast(RL^*_u(H_X)\rtimes_rG) \cong K_\ast(C_{L, u}^*(\tilde H_X)^G_{\mathrm{Gcpt}}) \] for all such $X$. \end{proof} \section{The forget-control map and the Baum--Connes assembly map} \label{sec_forget} Let $G$ be a (second countable) locally compact group and $B$ be a separable $G$-$C^*$-algebra. Let $X$ be a proper $G$-space and $H_X$ be an $X$-$G$-module. The evaluation map \[ \mathrm{ev}_1\colon RL^\ast_c(H_X\otimes B) \to \mathfrak{K}(H_X)\otimes B \] at $1$, which we call the forget-control map, induces a $\ast$-homomorphism \[ \mathrm{ev}_1\colon RL^\ast_c(H_X\otimes B)\rtimes_rG \to \left(\mathfrak{K}(H_X)\otimes B \right) \rtimes_rG \cong \mathfrak{K}(H_X)\otimes \left(B\rtimes_rG\right).
\] Here, the isomorphism on the right is obtained by trivializing the inner $G$-action on $\mathfrak{K}(H_X)$. It induces a group homomorphism (the forget-control map) \begin{equation}\label{eq_forgetK} \mathcal{F} = \mathrm{ev}_{1\ast}\colon K_\ast(RL^\ast_c(H_X\otimes B)\rtimes_rG) \to K_\ast(B\rtimes_rG). \end{equation} \begin{proposition} The forget-control map $\mathcal{F}$ \eqref{eq_forgetK} is functorial in $X$ and in $B$ in the sense that the following diagrams commute. For any $G$-equivariant continuous map $f\colon X\to Y$ and for any $G$-equivariant continuous cover $(V_t\colon H_X\to H_Y)_{t\in[1,\infty)}$ of $f$ (if one exists), \[ \xymatrix{ K_\ast(RL^\ast_c(H_X\otimes B)\rtimes_rG) \ar[r]^-{\mathcal{F}} \ar[d]^-{\mathrm{Ad}_{V_t\ast}} & K_\ast(B\rtimes_rG) \ar[d]^-{=} \\ K_\ast(RL^\ast_c(H_Y\otimes B)\rtimes_rG) \ar[r]^-{\mathcal{F}} & K_\ast(B\rtimes_rG), } \] and for any $G$-equivariant $\ast$-homomorphism $\pi\colon B_1\to B_2$, \[ \xymatrix{ K_\ast(RL^\ast_c(H_X\otimes B_1)\rtimes_rG) \ar[r]^-{\mathcal{F}} \ar[d]^-{\pi_\ast} & K_\ast(B_1\rtimes_rG) \ar[d]^-{\pi\rtimes_r1_\ast} \\ K_\ast(RL^\ast_c(H_X\otimes B_2)\rtimes_rG) \ar[r]^-{\mathcal{F}} & K_\ast(B_2\rtimes_rG). } \] In particular, $\mathcal{F}$ \eqref{eq_forgetK} induces a group homomorphism \begin{equation}\label{eq_forgetD} \mathcal{F} \colon \bD^{B, G}_\ast(X) \to K_\ast(B\rtimes_rG) \end{equation} which is natural in $X$ and in $B$. \end{proposition} \begin{proof} The first diagram commutes since $\mathrm{Ad}_{V_t}$ on $(\mathfrak{K}(H_X)\otimes B)\rtimes_rG$ is the identity on K-theory. The second diagram commutes at the level of $\ast$-homomorphisms. \end{proof} \begin{definition} We call the group homomorphism $\mathcal{F}$ in \eqref{eq_forgetD} the forget-control map for the functor $\bD_\ast^{B, G}$.
\end{definition} In the rest of this section, our goal is to show that the forget-control map $\mathcal{F}$ naturally factors through the Baum--Connes assembly map (see \cite{BCH93}, \cite{Valette02}, \cite{GJV19}) \begin{equation}\label{eq_BCKK} \mu_X^{B, G}\colon \varinjlim_{Y\subset X, \mathrm{Gcpt}}KK_\ast^G(C_0(Y), B) \to K_\ast(B\rtimes_rG) \end{equation} via a group homomorphism \[ \rho_X\colon \bD_\ast^{B, G}(X) \to \varinjlim_{Y\subset X, \mathrm{Gcpt}} KK_\ast^G(C_0(Y), B) \] which we will define first. We will also show that $\rho_X$ is an isomorphism for any $B$ when $G$ is a discrete group and $X$ is $G$-equivariantly homotopy equivalent to a $G$-CW complex. To obtain these, for technical reasons, we will use the equivariant $E$-theory $E^G$ of Guentner, Higson and Trout (\cite{GHT}, see also \cite[Chapter 2]{HG04}) in place of the equivariant $KK$-theory $KK^G$ of Kasparov \cite{Kasparov88}. For our purpose, this causes no problem because the canonical natural transformation (\cite[Appendix]{KasparovSkandalis03}, see also \cite[Definition 7.2]{HigsonKasparov}) \begin{equation}\label{eq_KKE} KK_\ast^G(A, B) \to E_\ast^G(A, B) \end{equation} is an isomorphism when $A$ is a proper, nuclear $G$-$C^*$-algebra (more generally, when $A\mapsto KK_\ast^G(A, B)$ is half-exact), in particular when $A=C_0(X)$ for a proper $G$-space $X$ (\cite[Corollary A.3, A.4]{KasparovSkandalis03}), and because the Baum--Connes assembly map in equivariant $E$-theory (see \cite{GHT}) \begin{equation}\label{eq_BCE} \mu^{B, G}_X\colon \varinjlim_{Y\subset X, \mathrm{Gcpt}}E_\ast^G(C_0(Y), B) \to K_\ast(B\rtimes_rG) \end{equation} is known to be equivalent to the one \eqref{eq_BCKK} in $KK^G$ via \eqref{eq_KKE} (see \cite[Remark A.5]{KasparovSkandalis03}). We first recall some material from \cite{GHT}. For any (not necessarily separable) $G$-$C^*$-algebras $A$ and $B$, we let \[ \mathfrak{T}(B)= C_b([1, \infty), B)_{\mathrm{Gcont}}, \,\,\, \mathfrak{T}_0(B) = C_0([1, \infty), B).
\] The asymptotic algebra of $B$ is defined as \[ \mathfrak{A}(B)= \mathfrak{T}(B) / \mathfrak{T}_0(B). \] An (equivariant) asymptotic morphism from $A$ to $B$ is an equivariant $\ast$-homomorphism from $A$ to $\mathfrak{A}(B)$. A homotopy of asymptotic morphisms is given by an asymptotic morphism from $A$ to $BI=B\otimes C[0, 1]$. The set of homotopy classes of asymptotic morphisms from $A$ to $B$ is denoted by $[[A, B]]_1$. More generally, $[[A, B]]_n$ is the set of $n$-homotopy classes of equivariant $\ast$-homomorphisms from $A$ to $\mathfrak{A}^n(B)$, where $\mathfrak{A}^n$ is the $n$-fold composition of the functor $\mathfrak{A}$ with itself and an $n$-homotopy is given by an equivariant $\ast$-homomorphism from $A$ to $\mathfrak{A}^n(BI)$ (see \cite[Definition 2.6]{GHT}). The set $[[A, B]]$ is defined as the natural inductive limit of $[[A, B]]_n$ (see \cite[Definition 2.7]{GHT}). If $A$ is separable, the set $[[A, B]]$ can be naturally identified with $[[A, B]]_1$ (\cite[Theorem 2.16]{GHT}). For any $G$-$C^*$-algebras $A, B, C$, we have a well-defined associative composition law \[ [[A, B]]\times [[B, C]] \to [[A, C]] \] \cite[Proposition 2.12]{GHT}. If $A$ is separable, the composition of asymptotic morphisms $(\phi_t)_{t\in [1, \infty)}\colon A\to \mathfrak{A}(B)$ and $(\psi_t)_{t\in [1, \infty)}\colon B \to \mathfrak{A}(C)$ can be represented by an asymptotic morphism $\psi_{s(t)} \circ \phi_t \colon A\to \mathfrak{A}(C)$, where $(t\mapsto s(t))$ is an increasing function on $[1, \infty)$ such that $s(t)\to \infty$ sufficiently fast as $t\to \infty$ \cite{ConnesHigson}, or alternatively by an asymptotic morphism $\psi_{t} \circ \phi_{r(t)}\colon A\to \mathfrak{A}(C)$, where $(t\mapsto r(t))$ is a continuous function on $[1, \infty)$ such that $r(t)\to \infty$ sufficiently slowly \cite[Lemma 2.17, Claim 2.18]{GHT} (the two ways of representing the composition are homotopic).
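In other words (this is merely a restatement of the definitions above), the asymptotic algebra fits into the short exact sequence \[ 0 \to \mathfrak{T}_0(B) \to \mathfrak{T}(B) \to \mathfrak{A}(B) \to 0, \] so an asymptotic morphism from $A$ to $B$ may be regarded as a family $(\phi_t)_{t\in [1, \infty)}$ of maps from $A$ to $B$ satisfying the identities of an equivariant $\ast$-homomorphism up to errors that vanish in norm as $t\to \infty$.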
For any $G$-$C^*$-algebras $A, B, C$, we have a well-defined map given by the maximal tensor product with the identity \[ \otimes_{\mathrm{max}} \mathrm{id}_C\colon [[A, B]] \to [[A\otimes_{\mathrm{max}} C, B\otimes_{\mathrm{max}}C]] \] \cite[Proposition 4.4]{GHT} and the maximal descent \[ [[A, B]] \to [[A\rtimes_{\mathrm{max}}G, B\rtimes_{\mathrm{max}}G]] \] \cite[Theorem 4.12]{GHT}. Let $\mathcal{H}_G=l^2(\mathbb{N})\otimes L^2(G)$ and let $\Sigma= C_0(\mathbb{R})\otimes$ denote the suspension functor. For any (not necessarily separable) $G$-$C^*$-algebras $A, B$, the equivariant $E$-theory group $E^G(A, B)$ \cite[Definition 6.8]{GHT} is defined as \[ E^G(A, B)=[[\Sigma A\otimes \mathfrak{K}(\mathcal{H}_G), \Sigma B\otimes \mathfrak{K}(\mathcal{H}_G) ]]. \] If $A, B$ are separable, we define for $i=0, 1$, \[ E_i^G(A, B)=E^G(\Sigma^{i}A, B). \] For any $G$-$C^*$-algebras $A, B$, $E^G(A, B)$ is an abelian group and the composition law \[ E^G(A, B)\times E^G(B, C) \to E^G(A, C) \] is bilinear. In this way, we obtain the additive category $E^G$ whose objects are $G$-$C^*$-algebras and whose morphism groups are $E^G(A, B)$ \cite[Theorem 6.9]{GHT}. There are natural isomorphisms $K_0(B) \cong E(\C, B)$ and $K_1(B) \cong E(\Sigma, B)$ for any $B$ (with trivial $G$-action) \cite[Theorem 6.24]{GHT}. The Bott periodicity theorem and the half-exactness of the bi-functor $(A, B)\mapsto E^G(A, B)$ are only proved for separable $A, B$. On the other hand, as we recalled above, the composition law, the (maximal) tensor product and the (maximal) crossed product are defined for general $A, B$. We have a bilinear map \[ E^G(A, D_1\otimes_{\mathrm{max}}B)\times E^G(B\otimes_{\mathrm{max}} D_2, C) \to E^G(A\otimes_{\mathrm{max}}D_2, D_1\otimes_{\mathrm{max}}C) \] given by the maximal tensor product with the identity $\mathrm{id}_{D_2}$ on the first slot and with the identity $\mathrm{id}_{D_1}$ on the second slot, followed by the composition law.
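Spelled out, this bilinear map sends a pair of classes $(\alpha, \beta)$ to the composition \[ A\otimes_{\mathrm{max}}D_2 \xrightarrow{\alpha\otimes_{\mathrm{max}} \mathrm{id}_{D_2}} D_1\otimes_{\mathrm{max}}B\otimes_{\mathrm{max}}D_2 \xrightarrow{\mathrm{id}_{D_1}\otimes_{\mathrm{max}} \beta} D_1\otimes_{\mathrm{max}}C, \] where we implicitly permute the maximal tensor factors so that $\beta$ applies to $B\otimes_{\mathrm{max}}D_2$.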
The maximal descent defines a group homomorphism \[ j^G_{\mathrm{max}}\colon E^G(A, B) \to E(A\rtimes_{\mathrm{max}}G,B\rtimes_{\mathrm{max}}G) \] which is functorial in both variables. When $A$ is a proper algebra (more generally, if $A\rtimes_rG=A\rtimes_{\mathrm{max}}G$), we define \[ j^G_{r}\colon E^G(A, B) \to E(A\rtimes_{r}G,B\rtimes_{r}G) \] as the composition of $j^G_{\mathrm{max}}$ with the map $E(A\rtimes_{r}G, B\rtimes_{\mathrm{max}}G) \to E(A\rtimes_{r}G, B\rtimes_{r}G)$ induced by the quotient map $B\rtimes_{\mathrm{max}}G \to B\rtimes_rG$. For any separable $G$-Hilbert space $H$, any asymptotic morphism $\phi\colon A \to \mathfrak{A}(B\otimes \mathfrak{K}(H))$ defines an element in $E^G(A, B)$. This is given by the suspension of the tensor product \[ \phi\otimes \mathrm{id}_{\mathfrak{K}(\mathcal{H}_G)}\colon A\otimes \mathfrak{K}(\mathcal{H}_G)\to \mathfrak{A}(B \otimes \mathfrak{K}(H)\otimes \mathfrak{K}(\mathcal{H}_G)) \cong \mathfrak{A}(B\otimes \mathfrak{K}(\mathcal{H}_G)) \] using any isomorphism $H\otimes \mathcal{H}_G\cong \mathcal{H}_G$. With these in mind, we do some preparation for constructing a group homomorphism \[ \rho_X\colon \bD_\ast^{B, G}(X) \to \varinjlim_{Y\subset X, \mathrm{Gcpt}}E_\ast^G(C_0(Y), B). \] \begin{lemma} For any $X$-$G$-module $H_X$, any $T\in C_{L, c}^*(H_X\otimes B)_{\mathrm{Gcpt}}^G$ and any $\phi \in C_0(X)$, we have \[ \phi T \in C_b([1, \infty), \mathfrak{K}(H_X\otimes B)), \,\, [\phi, T] \in C_0([1, \infty), \mathfrak{K}(H_X\otimes B)). \] \end{lemma} \begin{proof} The first condition follows since $T_t$ is locally compact for each $t$. The second one holds because for any $T$ in the dense subalgebra $\C_{L, c}(H_X\otimes B)_{\mathrm{Gcpt}}^G$ satisfying condition (2) in Definition \ref{def_localized}, we have $||[\phi, T_t]|| \to 0$ as $t\to \infty$. This follows from Lemma \ref{lem_commutator}.
\end{proof} An important example of asymptotic morphisms is obtained by the (maximal) tensor product of two asymptotically commuting $\ast$-homomorphisms: \begin{lemma} Let $A_1, A_2$ and $B$ be $G$-$C^*$-algebras and let \[ \phi_i\colon A_i \to C_b([1,\infty), M(B)) \] ($i=1, 2$) be equivariant $\ast$-homomorphisms such that \[ \phi_1(a_1)\phi_2(a_2) \in C_b([1, \infty), B), \,\, [\phi_1(a_1), \phi_2(a_2)]\in C_0([1, \infty), B) \] for any $a_1\in A_1$ and $a_2 \in A_2$. Then, there is a (unique) equivariant asymptotic morphism \[ \phi_1\otimes \phi_2\colon A_1\otimes_{\mathrm{max}}A_2 \to \mathfrak{A}(B) \] such that the image of $a_1\otimes a_2$ is represented by \[ \phi_1(a_1)\phi_2(a_2) \in C_b([1, \infty), B). \] \end{lemma} \begin{proof} Note that the image of any equivariant $\ast$-homomorphism from a $G$-$C^*$-algebra consists of $G$-continuous elements. \end{proof} \begin{lemma} (c.f.\ \cite[Section 5]{DWW18}, \cite[Construction 6.7.5]{WY2020}) For any $X$-$G$-module $H_X$, the natural map \[ \pi_X\colon C_0(X) \to C_b([1,\infty), \mathfrak{L}(H_X\otimes B)) \] and the inclusion \[ \iota \colon C_{L, c}^*(H_X\otimes B)_{\mathrm{Gcpt}}^G \subset C_b([1,\infty), \mathfrak{L}(H_X\otimes B) ) \] induce an asymptotic morphism \[ \pi_X\otimes \iota \colon C_0(X)\otimes C_{L, c}^*(H_X\otimes B)_{\mathrm{Gcpt}}^G \to \mathfrak{A}(\mathfrak{K}(H_X\otimes B)) \] such that the image of $\phi \otimes T$ is represented by \[ \phi T \in C_b([1, \infty), \mathfrak{K}(H_X\otimes B)). \] \end{lemma} \begin{proof} This follows from the previous two lemmas. \end{proof} We recall from the previous section that the right-regular representation \[ \rho\colon C_{b}([1, \infty), \mathfrak{L}(H_X\otimes B))_{\mathrm{Gcont}}\rtimes_rG \to C_{b}([1, \infty), \mathfrak{L}(\tilde H_X\otimes B)) \] restricts to \[ \rho\colon RL^*_c(H_X\otimes B)\rtimes_rG \to C_{L, c}^*(\tilde H_X\otimes B)_{\mathrm{Gcpt}}^G. \] Thus, we obtain the following asymptotic morphism.
\begin{proposition}\label{prop_asympXG} For any $X$-$G$-module $H_X$, the natural map \[ \pi_X\colon C_0(X) \to C_b([1,\infty), \mathfrak{L}(\tilde H_X\otimes B)) \] and the right-regular representation \[ \rho \colon RL^*_c(H_X\otimes B)\rtimes_rG \to C_b([1,\infty), \mathfrak{L}(\tilde H_X\otimes B)) \] induce an asymptotic morphism \[ \pi_X\otimes \rho \colon C_0(X)\otimes \left(RL^*_c(H_X\otimes B)\rtimes_rG \right) \to \mathfrak{A}(\mathfrak{K}(\tilde H_X\otimes B)) \] such that the image of $\phi \otimes T$ is represented by \[ \phi \rho(T) \in C_b([1, \infty), \mathfrak{K}(\tilde H_X\otimes B)). \] \end{proposition} \begin{definition} For any $X$-$G$-module $H_X$, the element \[ [\pi_X\otimes \rho] \in E^G(C_0(X)\otimes\left(RL^*_c(H_X\otimes B)\rtimes_rG \right), B) \] is defined by the asymptotic morphism $\pi_X\otimes \rho$ in Proposition \ref{prop_asympXG}. \end{definition} \begin{definition} We define a group homomorphism \[ \rho_X \colon K_i(RL^*_c(H_X\otimes B)\rtimes_rG) \to E_i^G(C_0(X), B) \] for $i=0, 1$, using the identification \[ K_i(RL^*_c(H_X\otimes B)\rtimes_rG) \cong E(\Sigma^i , RL^*_c(H_X\otimes B)\rtimes_rG) \] and composing with the class $[\pi_X\otimes \rho]$ under the composition law \[ E^G(\Sigma^i , RL^*_c(H_X\otimes B)\rtimes_rG) \times E^G(C_0(X)\otimes\left(RL^*_c(H_X\otimes B)\rtimes_rG \right), B) \] \[ \to E^G(\Sigma^iC_0(X), B). \] \end{definition} \begin{lemma}\label{lem_functorialB} The group homomorphism $\rho_X$ is natural with respect to any $G$-equivariant $\ast$-homomorphism $\pi\colon B_1\to B_2$ in the sense that the following diagram commutes \begin{equation*} \xymatrix{ K_\ast(RL^*_c(H_X\otimes B_1)\rtimes_rG) \ar[d]^{\pi_\ast} \ar[r]^-{\rho_X } & E_\ast^G(C_0(X), B_1) \ar[d]^{\pi_\ast} \\ K_\ast(RL^*_c(H_X\otimes B_2)\rtimes_rG) \ar[r]^-{\rho_X } & E_\ast^G(C_0(X), B_2).
} \end{equation*} \end{lemma} \begin{proof} This follows because the following diagram commutes \begin{equation*} \xymatrix{ C_0(X)\otimes \left( RL^*_c(H_X\otimes B_1)\rtimes_rG\right) \ar[r]^-{ \pi_X\otimes \rho} \ar[d]^-{\mathrm{id}_{C_0(X)}\otimes \pi} & \mathfrak{A}(\mathfrak{K}(\tilde H_X\otimes B_1)) \ar[d]^{\pi} \\ C_0(X)\otimes \left( RL^*_c(H_X\otimes B_2)\rtimes_rG \right) \ar[r]^-{ \pi_X\otimes \rho} & \mathfrak{A}(\mathfrak{K}(\tilde H_X\otimes B_2)). } \end{equation*} \end{proof} Now suppose that a $G$-equivariant continuous map $f\colon X\to Y$ is proper, so that we have $f^\ast\colon C_0(Y) \to C_0(X)$. It defines \[ f_\ast = (f^\ast)^\ast \colon E_\ast^G(C_0(X), B) \to E_\ast^G(C_0(Y), B). \] Let $H_X$ and $H_Y$ be an $X$-$G$-module and a $Y$-$G$-module respectively and suppose that there is an equivariant continuous cover $(V_t\colon H_X\to H_Y)_{t\in [1, \infty)}$ of $f$. \begin{lemma}\label{lem_functorialX} The following diagram commutes for any proper $f\colon X\to Y$ and for any equivariant continuous cover $(V_t\colon H_X\to H_Y)_{t\in [1, \infty)}$ of $f$: \begin{equation*} \xymatrix{ K_\ast(RL^*_c(H_X\otimes B)\rtimes_rG) \ar[d]^{\mathrm{Ad}_{V_t, \ast}} \ar[r]^-{\rho_X } & E_\ast^G(C_0(X), B) \ar[d]^{f_\ast} \\ K_\ast(RL^*_c(H_Y\otimes B)\rtimes_rG) \ar[r]^-{\rho_Y} & E_\ast^G(C_0(Y), B). } \end{equation*} \end{lemma} \begin{proof} Recall that $V_t\colon H_X\to H_Y$ is a cover of $f$ if and only if it is a cover of the identity when we view $H_X$ as a $Y$-module $(H_X)_Y$ via $f^\ast$. It is clear from the definition of $\rho$ that the diagram commutes when $V_t$ is the identity map $H_X \to (H_X)_Y$. Thus, it is enough to show the claim when $X=Y$ and $(V_t\colon H^0_X\to H^1_X)_{t\in [1, \infty)}$ is a cover of the identity map on $X$.
For $i=0,1$, we have \[ \pi_X^i\otimes \rho^i \colon C_0(X)\otimes RL^*_c(H^i_X\otimes B)\rtimes_rG \to \mathfrak{A}(\mathfrak{K}(\tilde H_X^i\otimes B)), \] defined by the two asymptotically commuting $\ast$-homomorphisms \[ \pi^i_X\colon C_0(X) \to C_b([1,\infty), \mathfrak{L}(\tilde H^i_X\otimes B)), \] \[ \rho^i \colon RL^*_c(H^i_X\otimes B)\rtimes_rG \to C_b([1,\infty), \mathfrak{L}(\tilde H^i_X\otimes B)). \] We consider $H_X^0\oplus H_X^1$ and $\tilde H_X^0\oplus \tilde H_X^1$. We show that the two asymptotic morphisms \[ \pi_X^0\otimes \rho^0, \, \pi_X^1\otimes (\rho^1\circ \mathrm{Ad}_{V_t}) \colon C_0(X)\otimes RL^*_c(H^0_X\otimes B)\rtimes_rG \to \mathfrak{A}(\mathfrak{K}(\tilde H_X^0\otimes B \oplus \tilde H_X^1\otimes B)) \] are homotopic. Note that $V_t\colon H^0_X\to H^1_X$ defines a $G$-equivariant isometry $\tilde V_t=V_t\otimes 1 \colon \tilde H^0_X\to \tilde H^1_X$ in $C_b([1,\infty), \mathfrak{L}(\tilde H^0_X\otimes B \oplus \tilde H^1_X\otimes B ))$ which conjugates $\rho^0$ to $\rho^1\circ \mathrm{Ad}_{V_t}$. Furthermore, since $V_t$ is a cover of the identity on $X$, it satisfies \[ \pi_X^1(\phi) \tilde V_t - \tilde V_t\pi_X^0(\phi) \in C_0([1,\infty), \mathfrak{L}(\tilde H^0_X\otimes B \oplus \tilde H^1_X\otimes B )), \] and so \[ \pi_X^1(\phi) \tilde V_t\tilde V_t^\ast - \tilde V_t\pi_X^0(\phi)\tilde V_t^\ast \in C_0([1,\infty), \mathfrak{L}(\tilde H^0_X\otimes B \oplus \tilde H^1_X\otimes B )). \] From this, we see that the isometry $\tilde V_t$ asymptotically conjugates $\pi_X^0\otimes \rho^0$ to $ \pi_X^1\otimes (\rho^1\circ \mathrm{Ad}_{V_t})= \pi_X^1\otimes (\tilde V_t\rho^0 \tilde V_t^\ast)$ in the sense that \[ \mathrm{Ad}_{\tilde V_t} \left( \pi_X^0\otimes \rho^0 \right) = \pi_X^1\otimes (\rho^1\circ \mathrm{Ad}_{V_t}) \] in $\mathfrak{A}(\mathfrak{K}(\tilde H_X^0\otimes B \oplus \tilde H_X^1\otimes B))$. It follows that the two asymptotic morphisms are homotopic. The claim follows from this.
\end{proof} Thanks to the previous lemma, and since $X\mapsto E_\ast^G(C_0(X), B)$ is functorial for proper maps $f\colon X\to Y$, the following definition is well-defined. \begin{definition} For any $G$-compact proper $G$-space $X$, we define a group homomorphism \[ \rho_X \colon \bD^{B, G}_\ast(X) \to E_\ast^G(C_0(X), B) \] by \[ \rho_X\colon K_\ast(RL^*_c(H_X\otimes B)\rtimes_rG ) \to E_\ast^G(C_0(X), B) \] where $H_X$ is the chosen universal $X$-$G$-module. More generally, for any proper $G$-space $X$, we define a group homomorphism \[ \rho_X \colon \bD^{B, G}_\ast(X) \to \varinjlim_{Y\subset X, \mathrm{Gcpt}}E_\ast^G(C_0(Y), B) \] so that the following diagram commutes: \begin{equation*} \xymatrix{ \bD^{B, G}_\ast(X) \cong \varinjlim_{Y\subset X, \mathrm{Gcpt}}\bD^{B, G}(Y) \ar[r]^{\rho_X } & \varinjlim_{Y\subset X, \mathrm{Gcpt}}E_\ast^G(C_0(Y), B) \\ \bD^{B, G}_\ast(Y) \ar[u]^{} \ar[r]^{\rho_Y } & E_\ast^G(C_0(Y), B) \ar[u]^{}. } \end{equation*} \end{definition} \begin{theorem} The group homomorphisms \[ \rho_X \colon \bD^{B, G}_\ast(X) \to \varinjlim_{Y\subset X, \mathrm{Gcpt}}E_\ast^G(C_0(Y), B) \] define a natural transformation of functors from the category $\mathcal{PR}^G$ of (second countable, locally compact) proper $G$-spaces to the category $\mathcal{GA}$ of graded abelian groups. Furthermore, the transformation is natural with respect to a $G$-equivariant $\ast$-homomorphism $\pi\colon B_1\to B_2$ in the sense that the following diagram commutes \begin{equation*} \xymatrix{ \bD^{B_1, G}_\ast(X) \ar[d]^{\pi_\ast} \ar[r]^-{\rho_X} & \varinjlim_{Y\subset X, \mathrm{Gcpt}}E_\ast^G(C_0(Y), B_1) \ar[d]^{\pi_\ast} \\ \bD^{B_2, G}_\ast(X) \ar[r]^-{\rho_X } & \varinjlim_{Y\subset X, \mathrm{Gcpt}}E_\ast^G(C_0(Y), B_2). } \end{equation*} \end{theorem} \begin{proof} The first assertion follows from Lemma \ref{lem_functorialX}. The second assertion follows from Lemma \ref{lem_functorialB}. \end{proof} The next goal is to show the following.
\begin{theorem}\label{thm_forget_factor} The forget-control map $\mathcal{F}\colon \bD_\ast^{B, G}(X) \to K_\ast(B\rtimes_rG)$ factors through the Baum--Connes assembly map $\mu_X^{B, G}$ via $\rho_X$ in the sense that the following diagram commutes: \begin{equation*} \xymatrix{ \bD_\ast^{B, G}(X) \ar[dr]^{\rho_X} \ar[rr]^{\mathcal{F}} & & K_\ast(B\rtimes_rG) \\ & \varinjlim_{Y\subset X, \mathrm{Gcpt}}E_\ast^G(C_0(Y), B). \ar[ur]^{\mu^{B, G}_X} & } \end{equation*} \end{theorem} For this, we need some preparation. Recall that for any $G$-Hilbert $B$-module $\mathcal{E}$, a Hilbert $B\rtimes_rG$-module $\mathcal{E}\rtimes_rG$ is defined \cite[Definition 3.8]{Kasparov88}. For simplicity (this is enough for our purpose), we will only consider $\mathcal{E}$ of the form $H\otimes B$ where $H$ is a $G$-Hilbert space; in this case $\mathcal{E}\rtimes_rG=H\otimes B \rtimes_rG $ is canonically identified with the Hilbert $B\rtimes_rG$-module $H\otimes (B\rtimes_rG)$. Kasparov defined, for any $G$-equivariant $\ast$-homomorphism $\pi\colon A\to \mathfrak{L}(\mathcal{E})$, its crossed product $\pi\rtimes_r1\colon A\rtimes_rG \to \mathfrak{L}(\mathcal{E}\rtimes_rG)$. In the case when $\mathcal{E}=H\otimes B$, the $\ast$-homomorphism $\pi\rtimes_r1$ is defined by sending $a \in A$ to \[ \pi(a) \in \mathfrak{L}(\mathcal{E}) \subset \mathfrak{L}(\mathcal{E}\rtimes_rG) \] and $g \in G$ to \[ \left( g\otimes u_g\colon v\otimes f \mapsto gv\otimes u_gf \right) \in \mathfrak{L}(H\otimes B \rtimes_rG) \] where $u_g\in M(B\rtimes_rG)$ is the unitary corresponding to $g$. \begin{lemma}\label{lem_isom_Uc}(c.f.\ \cite[Proposition 5.2]{Nishikawa19}) Let $X$ be a $G$-compact proper $G$-space. Let $H_X$ be an $X$-$G$-module and let $\tilde H_X$ be the $X$-$G$-module as in Definition \ref{def_tildeH}.
Consider \[ \pi_X\colon C_0(X) \to \mathfrak{L}(\tilde H_X\otimes B), \] the structure map for the $X$-$G$-module $\tilde H_X\otimes B$, and consider its Kasparov crossed product (see the explanation right before the lemma) \[ \pi_X\rtimes_r1 \colon C_0(X)\rtimes_rG \to \mathfrak{L}(\tilde H_X\otimes B\rtimes_rG). \] Let $p_c(g)=\Delta(g)^{-1/2}cg(c)$ be the cut-off projection in $C_0(X)\rtimes_rG$ for a cut-off function $c \in C_c(X)$ and let \[ \rho\colon \mathfrak{L}(H_X\otimes B)_{\mathrm{Gcont}}\rtimes_rG \to \mathfrak{L}(\tilde H_X\otimes B) \subset \mathfrak{L}(\tilde H_X\otimes B\rtimes_rG) \] be the right-regular representation as in Definition \ref{def_rightreg}. There is an adjointable isometry \[ U_c\colon H_X\otimes B\rtimes_rG \to \tilde H_X\otimes B\rtimes_rG \] of Hilbert $B\rtimes_rG$-modules such that \[ U_cU_c^\ast = (\pi_X\rtimes_r1)(p_c) \] and such that the c.c.p.\ map \[ U_c^\ast \rho U_c \colon \mathfrak{L}(H_X\otimes B)_{\mathrm{Gcont}}\rtimes_rG \to \mathfrak{L}(H_X\otimes B\rtimes_rG) \] is identified with the c.c.p.\ map on $\mathfrak{L}(H_X\otimes B)_{\mathrm{Gcont}}\rtimes_rG$ (naturally viewed as a subalgebra of $\mathfrak{L}(H_X\otimes B\rtimes_rG)$ via Kasparov's crossed product) defined by the $G$-equivariant c.c.p.\ map \[ \mathfrak{L}(H_X\otimes B)\ni T \mapsto T'= \int_{g\in G} g(c)Tg(c) d\mu_G(g) \in \mathfrak{L}(H_X\otimes B). \] \end{lemma} \begin{proof} An isometry $U_c\colon H_X\otimes B\rtimes_rG \to \tilde H_X\otimes B\rtimes_rG$ is defined by sending $v \in H_X\otimes B\rtimes_rG$ to \[ (G\ni h\mapsto \Delta(h)^{-1/2} (\pi_X\rtimes_r1)(c)(h\otimes u_h) v \in H_X\otimes B\rtimes_rG ) \] in $L^2(G)\otimes H_X\otimes B\rtimes_rG$. Its adjoint $U_c^\ast$ is defined by sending \[ (G\ni h \mapsto v_h \in H_X\otimes B\rtimes_rG ) \in L^2(G)\otimes H_X\otimes B\rtimes_rG \] to \[ \int_{h \in G} \Delta(h)^{-1/2} (h^{-1}\otimes u_{h^{-1}})((\pi_X\rtimes_r1)(c) v_h) d\mu_G(h) \] which converges weakly in $H_X\otimes B\rtimes_rG$.
One can first check that $U_c$ is well-defined with $||U_c||=1$ (as an a priori not necessarily adjointable map), from which the weak convergence of the formula for $U_c^\ast$ can be deduced. The equalities $U_c^\ast U_c=1$ and $U_cU_c^\ast=(\pi_X\rtimes_r1)(p_c)$ can be checked directly. Now, the following equalities can be checked directly: \[ U_c^\ast \rho(T)U_c = \int_{h\in G} h(c)Th(c) d\mu_G(h) \in \mathfrak{L}(H_X\otimes B) \subset \mathfrak{L}(H_X\otimes B\rtimes_rG) \] for any $T\in \mathfrak{L}(H_X\otimes B)$ and \[ U_c^\ast \rho_g U_c = g\otimes u_g \in \mathfrak{L}(H_X\otimes B\rtimes_rG) \] for any $g\in G$. \end{proof} \begin{lemma}\label{lem_compute} Let $X$ be a $G$-compact proper $G$-space. Let $H_X$ be any $X$-$G$-module. Consider the element \[ [\pi_X\otimes \rho] \in E^G(C_0(X)\otimes\left(RL^*_c(H_X\otimes B)\rtimes_rG \right), B). \] Let \[ j^G_r([\pi_X\otimes \rho]) \in E(C_0(X)\rtimes_rG \otimes\left(RL^*_c(H_X\otimes B)\rtimes_rG \right), B\rtimes_rG) \] be its reduced crossed product (which is defined since the first variable is a proper $G$-$C^*$-algebra). Let \[ [p_c] \in E(\C, C_0(X)\rtimes_rG) \] be the element corresponding to the cut-off projection $p_c$ in $C_0(X)\rtimes_rG$ for a cut-off function $c$ on $X$. Then, the class \[ j^G_r([\pi_X\otimes \rho]) \circ [p_c] \in E(RL^*_c(H_X\otimes B)\rtimes_rG, B\rtimes_rG) \] is equal to the class associated to the natural inclusion map \[ RL^*_c(H_X\otimes B)\rtimes_rG \to C_b([1, \infty), \mathfrak{K}(H_X\otimes B)\rtimes_rG).
\] \end{lemma} \begin{proof} The crossed product $j^G_r([\pi_X\otimes \rho])$ is represented by the product of the two asymptotically commuting representations \[ \pi_X\rtimes_r1 \colon C_0(X)\rtimes_rG \to \mathfrak{L}(\tilde H_X\otimes B\rtimes_rG) \subset C_b([1, \infty), \mathfrak{L}(\tilde H_X\otimes B\rtimes_rG)) \] and \[ \rho\colon RL^*_c(H_X\otimes B)\rtimes_rG \to C_b([1, \infty), \mathfrak{L}(\tilde H_X\otimes B)) \subset C_b([1, \infty), \mathfrak{L}(\tilde H_X\otimes B\rtimes_rG)). \] The class $j^G_r([\pi_X\otimes \rho])\circ [p_c]$ is represented by the asymptotic morphism from $RL^*_c(H_X\otimes B)\rtimes_rG$ to $\mathfrak{A}( \mathfrak{K}(\tilde H_X\otimes B\rtimes_rG))$ given by the c.c.p.\ map \[ (\pi_X\rtimes_r1)(p_c)\, \rho\, (\pi_X\rtimes_r1)(p_c) \colon RL^*_c(H_X\otimes B)\rtimes_rG \to C_b([1, \infty), \mathfrak{K}(\tilde H_X\otimes B\rtimes_rG)). \] In view of Lemma \ref{lem_isom_Uc}, this map can be identified with the c.c.p.\ map \[ RL^*_c(H_X\otimes B)\rtimes_rG \to C_b([1, \infty), \mathfrak{K}(H_X\otimes B\rtimes_rG)) \] defined as the natural inclusion \[ RL^*_c(H_X\otimes B)\rtimes_rG \to C_b([1, \infty), \mathfrak{K}(H_X\otimes B\rtimes_rG)) \] preceded by the c.c.p.\ map on $RL^*_c(H_X\otimes B)\rtimes_rG$ induced by the $G$-equivariant c.c.p.\ map \[ T_t \mapsto \int_{h\in G} h(c)T_th(c) d\mu_G(h) \] on $RL_c^\ast(H_X\otimes B)$. Thus, to prove our claim, we just need to show that for any $T \in RL^*_c(H_X\otimes B)$, \[ ||T_t - \int_{h\in G} h(c)T_th(c) d\mu_G(h)|| \to 0 \] as $t\to \infty$. We may assume $T$ has uniform compact support in $X$. In this case, $h(c)T_t=0$ for $h\in G\backslash K$ for some compact subset $K$ of $G$ and $||[h(c), T_t]||\to 0$ uniformly in $h\in K$ as $t\to \infty$. Hence the desired convergence holds, and we are done. \end{proof} \begin{lemma}\label{lem_factor_Gcompact} Let $X$ be a $G$-compact proper $G$-space and $H_X$ be an $X$-$G$-module.
The forget-control map $\mathcal{F}\colon K_\ast(RL^*_c(H_X\otimes B)\rtimes_rG) \to K_\ast(B\rtimes_rG)$ factors through $E_\ast^G(C_0(X), B)$ via $\rho_X$. That is, the following diagram commutes: \begin{equation*} \xymatrix{ K_\ast(RL^*_c(H_X\otimes B)\rtimes_rG) \ar[dr]^{\rho_X} \ar[rr]^{\mathcal{F}} & & K_\ast(B\rtimes_rG) \\ & E_\ast^G(C_0(X), B) \ar[ur]^{\mu^G_X}. & } \end{equation*} \end{lemma} \begin{proof} Let $[\phi] \in K_i(RL^*_c(H_X\otimes B)\rtimes_rG) = E(\Sigma^{i}, RL^*_c(H_X\otimes B)\rtimes_rG)$. Using the functoriality of the crossed product functor and the composition law, we see that the assembly map $\mu^G_X$ sends $\rho_X([\phi])$ to the composition of $[\phi]$ with the element \[ j^G_r([\pi_X\otimes \rho])\circ [p_c] \in E(RL^*_c(H_X\otimes B)\rtimes_rG, B\rtimes_rG) \] under the composition law \[ E(\Sigma^i, RL^*_c(H_X\otimes B)\rtimes_rG) \times E(RL^*_c(H_X\otimes B)\rtimes_rG, B\rtimes_rG) \to E(\Sigma^i, B\rtimes_rG). \] By Lemma \ref{lem_compute}, the element $j^G_r([\pi_X\otimes \rho])\circ [p_c]$ is represented by the natural inclusion \[ RL^*_c(H_X\otimes B)\rtimes_rG \to C_b([1, \infty), \mathfrak{K}(H_X\otimes B)\rtimes_rG). \] That is, $j^G_r([\pi_X\otimes \rho])\circ [p_c]$ is represented by a continuous family $(\mathrm{ev}_{t})_{t\in [1, \infty)}$ of $\ast$-homomorphisms (evaluation at $t$). On the other hand, such a continuous family of $\ast$-homomorphisms is homotopic (as an asymptotic morphism) to the constant one $\mathrm{ev}_{1}$. It is now clear that the element $\mu^G_X \circ \rho_X ([\phi])$ coincides in $E_\ast(\C, B\rtimes_rG)$ with the one represented by the composition of $\phi$ with the evaluation map $RL^*_c(H_X\otimes B)\rtimes_rG \to \mathfrak{K}(H_X)\otimes B\rtimes_rG$ at $t=1$, so we are done. \end{proof} \begin{proof}[Proof of Theorem \ref{thm_forget_factor}] The theorem follows from Lemma \ref{lem_factor_Gcompact}.
\end{proof} As the last thing in this section, we prove that the natural transformation \[ \rho_X\colon \bD_\ast^{B, G}(X) \to \varinjlim_{Y\subset X, \mathrm{Gcpt}}E_\ast^G(C_0(Y), B) \] is an isomorphism when $G$ is discrete and $X$ is $G$-equivariantly homotopy equivalent to a $G$-CW complex. Recall that for any open subgroup $H$ of $G$ and for any proper $H$-space $Y$, if $X$ is the balanced product $G\times_HY$, we have a natural isomorphism (see Theorem \ref{thm_coeff} (2)) \[ \bD_\ast^{B, G}(X) \cong \bD_\ast^{B, H}(Y). \] We also have a natural isomorphism (\cite[Lemma 12.11]{GHT}, \cite[Proposition 5.14]{CE01}) \[ E_\ast^G(C_0(X), B) \cong E_\ast^H(C_0(Y), B). \] The map from left to right is obtained by restriction to the $H$-$C^*$-subalgebra $C_0(Y) \subset C_0(X)$. \begin{lemma} The natural transformation $\rho_X$ commutes with the induction. That is, for any open subgroup $H$ of $G$, for any proper $H$-space $Y$ and for $X=G\times_HY$, the following diagram commutes. \begin{equation*} \xymatrix{ \bD_\ast^{B, G}(X)\ar[d]^-{\cong} \ \ar[r]^-{\rho_X} & E_\ast^G(C_0(X), B) \ar[d]^-{\cong} \\ \bD_\ast^{B, H}(Y) \ar[r]^-{\rho_Y} & E_\ast^H(C_0(Y), B). \\ } \end{equation*} \end{lemma} \begin{proof} Recall that the isomorphism $\bD_\ast^{B, H}(Y) \cong \bD_\ast^{B, G}(X)$ is obtained by the canonical inclusion \[ RL^*_c(H_Y\otimes B)\rtimes_rH \to RL^*_c(H_X\otimes B)\rtimes_rG. \] With this in mind, the claim is straightforward to check. \end{proof} \begin{theorem}\label{thm_discrete_isom} Let $G$ be a countable discrete group and $X$ be a proper $G$-space which is $G$-equivariantly homotopy equivalent to a $G$-CW complex. Then, the group homomorphism \[ \rho_X\colon \bD_\ast^{B, G}(X) \to \varinjlim_{Y\subset X, \mathrm{Gcpt}}E_\ast^G(C_0(Y), B) \cong \varinjlim_{Y\subset X, \mathrm{Gcpt}}KK_\ast^G(C_0(Y), B) \] is an isomorphism for any separable $G$-$C^*$-algebra $B$.
\end{theorem} \begin{proof} Since both functors satisfy the axioms (1)-(5) in Theorem \ref{thm_coeff} and since $\rho_X$ is a natural transformation which commutes with the induction from a finite subgroup (more generally from an open subgroup), it is enough to prove that $\rho_X$ is an isomorphism when $G$ is a finite group $H$ and $X$ is a point. On the other hand, we have the following commutative diagram \begin{equation*} \xymatrix{ \bD_\ast^{B, H}(\mathrm{point}) \ar[dr]^{\rho_X} \ar[rr]^{\mathcal{F}} & & K_\ast(B\rtimes_rH) \\ & E_\ast^H(\C, B). \ar[ur]^{\mu_r^{B, H}} & } \end{equation*} We know that the assembly map $\mu_r^{B, H}$ is an isomorphism for any finite group $H$. We also know that the forget-control map is an isomorphism for any finite group $H$ when $X$ is a point (see Proposition \ref{prop_pointcase}). Thus, $\rho_X$ is an isomorphism. Of course, we may also directly show that $\rho_X$ is an isomorphism. \end{proof} \begin{theorem} Let $G$ be a countable discrete group and $X$ be a proper $G$-space which is $G$-equivariantly homotopy equivalent to a $G$-CW complex. The forget-control map $\mathcal{F}\colon \bD_\ast^{B, G}(X) \to K_\ast(B\rtimes_rG)$ is naturally equivalent to the Baum--Connes assembly map $\mu_{X}^{B, G}$ for any separable $G$-$C^*$-algebra $B$. \end{theorem} \begin{proof} This follows from Theorem \ref{thm_forget_factor} and Theorem \ref{thm_discrete_isom}. \end{proof} Let $RL^0_c(H_X\otimes B)$ be the kernel of the evaluation map $\mathrm{ev}_1$ on $RL^*_c(H_X\otimes B)$. The short exact sequence \[ 0 \to RL^0_c(H_X\otimes B) \to RL^*_c(H_X\otimes B) \to \mathfrak{K}(H_X)\otimes B \to 0 \] admits a $G$-equivariant c.c.p.\ splitting (by extending constantly and by multiplying a bump function) and thus it descends to the short exact sequence \[ 0 \to RL^0_c(H_X\otimes B)\rtimes_rG \to RL^*_c(H_X\otimes B)\rtimes_rG \to (\mathfrak{K}(H_X)\otimes B)\rtimes_rG \to 0.
\] \begin{corollary} For any countable discrete group $G$, the Baum--Connes assembly map $\mu^{B, G}_r$ is an isomorphism if and only if \[ K_\ast(RL^0_c(H_X\otimes B)\rtimes_rG)=0 \] for a universal (or ample) $X$-$G$-module $H_X$ for $X=\underline{E}G$. \end{corollary} We also have the following short exact sequence \[ 0 \to (RL^0_c(H_X)\otimes B)\rtimes_rG \to (RL^*_c(H_X)\otimes B)\rtimes_rG \to (\mathfrak{K}(H_X)\otimes B)\rtimes_rG \to 0. \] Recall that the natural transformation $\mathbb{D}^{\otimes B, G}_\ast(X) \to \mathbb{D}^{B, G}_\ast(X)$ is an isomorphism if $G$ is discrete and $X$ is $G$-equivariantly homotopy equivalent to a proper $G$-CW complex (Theorem \ref{thm_discrete_natural_isom}). Hence, we also have the following. \begin{theorem}\label{thm_otimesB} Let $G$ be a countable discrete group and $X$ be a proper $G$-space which is $G$-equivariantly homotopy equivalent to a $G$-CW complex. The forget-control map $\mathcal{F}\colon \bD_\ast^{\otimes B, G}(X) \to K_\ast(B\rtimes_rG)$ is naturally equivalent to the Baum--Connes assembly map $\mu_{X}^{B, G}$ for any separable $G$-$C^*$-algebra $B$. \end{theorem} \begin{corollary}\label{cor_discrete_N} For any countable discrete group $G$, the Baum--Connes assembly map $\mu^{B, G}_r$ is an isomorphism if and only if \[ K_\ast((RL^0_c(H_X)\otimes B)\rtimes_rG)=0 \] for a universal (or ample) $X$-$G$-module $H_X$ for $X=\underline{E}G$. \end{corollary} \section{$\rho_X$ is an isomorphism, part I} We begin our proof of the following. \begin{theorem*} Let $G$ be a locally compact group. The natural transformation \[ \rho_X\colon \bD_\ast^{B, G}(X) \to \varinjlim_{Y\subset X, \mathrm{Gcpt}}E_\ast^G(C_0(Y), B) \cong \varinjlim_{Y\subset X, \mathrm{Gcpt}}KK_\ast^G(C_0(Y), B) \] is an isomorphism for any proper $G$-space $X$ and for any separable $G$-$C^*$-algebra $B$. \end{theorem*} The proof will be given over the next four sections. We will entirely focus on the case when $X$ is $G$-compact.
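The reduction of the general case to the $G$-compact case can be sketched as follows. This is only an outline, and it assumes that both functors commute with the direct limit over the directed set of $G$-compact $G$-invariant subspaces $Y$ of $X$ (for $\bD_\ast^{B, G}$, this is implicit in the compactly supported definition of $RL^*_c$; for the right-hand side it holds by definition):

```latex
\bD_\ast^{B, G}(X) \cong \varinjlim_{Y\subset X,\, \mathrm{Gcpt}} \bD_\ast^{B, G}(Y)
\xrightarrow{\ \varinjlim_Y \rho_Y\ }
\varinjlim_{Y\subset X,\, \mathrm{Gcpt}} E_\ast^G(C_0(Y), B).
```

In particular, once each $\rho_Y$ is an isomorphism for $G$-compact $Y$, the map $\rho_X$ is an isomorphism as well.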
Note that the general case follows from this. Here is a rough idea of the proof. Let $X$ be a $G$-compact proper $G$-space. We first show (in this section) that the right-regular representation induces an isomorphism \[ \rho \colon K_\ast(RL^*_u(H_X\otimes B)\rtimes_rG) \cong K_\ast(C^*_{L, u}(\tilde H_X\otimes B)^G) \] whenever $H_X$ is a universal $X$-$G$-module. Then, we prove an isomorphism \[ K_\ast(C^*_{L, u}(\tilde H_X\otimes B)^G) \cong E_\ast^G(C_0(X), B) \cong KK_\ast^G(C_0(X), B) \] following the idea of \cite{DWW18}. Let $X$ be a $G$-compact proper $G$-space and $H_X$ be an $X$-$G$-module. We recall that the equivariant Roe algebra $C^*(H_X\otimes B)^G=C^*(H_X\otimes B)^G_{\mathrm{Gcpt}}$ is defined as the completion of the $\ast$-algebra $\C(H_X\otimes B)^G$ consisting of $G$-equivariant, locally compact operators in $\mathfrak{L}(H_X\otimes B)$ that are properly supported. Note that if $X_0$ is any compact subset of $X$ such that $GX_0=X$, then a $G$-equivariant operator $T$ on $H_X\otimes B$ is properly supported if and only if there is a compact subset $X_1$ of $X$ so that $\chi_{X_0}T=\chi_{X_0}T\chi_{X_1}$ and $T\chi_{X_0}=\chi_{X_1}T\chi_{X_0}$. We also recall that the localized equivariant Roe algebra $C^*_{L, u}(H_X\otimes B)^G=C^*_{L, u}(H_X\otimes B)^G_{\mathrm{Gcpt}}$ is the norm completion of the $\ast$-subalgebra $\C_{L, u}(H_X\otimes B)^G$ of $C_{b, u}([1, \infty), \mathfrak{L}(H_X\otimes B))$ consisting of $\C(H_X\otimes B)^G$-valued functions $T_t$ with $\mathrm{prop}(T_t)\to 0$ with respect to a (any) fixed metric $d$ on $X^+$. Let $H_X$ be any $X$-$G$-module. We introduce several $C^*$-subalgebras of $C_{b, u}([1, \infty), \mathfrak{L}(H_X\otimes B))$ containing the localized equivariant Roe algebra. Let \[ \pi\colon C_0(X) \to \mathfrak{L}(H_X\otimes B) \] be the structure map for the $X$-$G$-module $H_X\otimes B$.
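Several constructions below (the averaging map $\psi_c$ and the cut-off projection $p_c$) depend on a cut-off function, so let us recall the convention; this is the standard normalization from the Baum--Connes literature:

```latex
c \in C_c(X), \quad c \geq 0, \qquad
\int_{g\in G} g(c)^2(x)\, d\mu_G(g) = 1 \quad \text{for all } x \in X,
```

which exists because $X$ is a $G$-compact proper $G$-space. With this normalization, the associated cut-off projection is the element $p_c \in C_c(G, C_0(X)) \subset C_0(X)\rtimes_rG$ given by $p_c(g) = \Delta_G(g)^{-1/2}\, c\, g(c)$, where $\Delta_G$ is the modular function of $G$; the displayed identity is precisely what makes $p_c$ a projection.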
We let $\mathcal{C}(\pi)^G$ be the $C^*$-subalgebra of $\mathfrak{L}(H_X\otimes B)$ consisting of $G$-equivariant, locally compact operators. We have an inclusion \[ C^*(H_X\otimes B)^G \subset \mathcal{C}(\pi)^G. \] \begin{itemize} \item $\mathcal{C}_L(H_X\otimes B)^G$ is the $C^*$-subalgebra of $C_{b, u}([1, \infty), C^*(H_X\otimes B)^G)$ consisting of $T_t$ such that $\limt||[\phi, T_t]||= 0$ for any $\phi \in C_0(X)$. \item $\mathcal{C}_L(\pi)^G$ is the $C^*$-subalgebra of $C_{b, u}([1, \infty), \mathcal{C}(\pi)^G)$ consisting of functions $T_t$ such that $\limt||[\phi, T_t]||= 0$ for any $\phi \in C_0(X)$. \end{itemize} We have inclusions \[ C^*_{L, u}(H_X\otimes B)^G \subset \mathcal{C}_L(H_X\otimes B)^G \subset \mathcal{C}_L(\pi)^G. \] Let $c$ be a cut-off function on $X$. The $G$-equivariant u.c.p.\ map $\psi_c$ on $\mathfrak{L}(H_X\otimes B)$ defined by \[ \psi_c\colon T\mapsto \int_{g\in G} g(c)T g(c) d\mu_G(g) \] extends to a $G$-equivariant u.c.p.\ map on $C_{b, u}([1, \infty), \mathfrak{L}(H_X\otimes B))$. Note that $\psi_c$ sends any $G$-equivariant, locally compact operator to a $G$-equivariant, locally compact operator which is properly supported. Thus, $\psi_c$ maps $\mathcal{C}(\pi)^G$ to $\C(H_X\otimes B)^G\subset C^*(H_X\otimes B)^G$. Moreover, $\psi_c$ sends $\mathcal{C}_L(\pi)^G$ to $\mathcal{C}_L(H_X\otimes B)^G$. \begin{lemma}\label{lem_properly} Let $T \in \mathcal{C}_L(\pi)^G$. The following are equivalent: \begin{enumerate} \item $\lim_{t\to \infty} ||T_t - \psi_c(T_t)|| = 0$ for any cut-off function $c$ on $X$. \item $\lim_{t\to \infty} ||T_t - \psi_c(T_t)|| = 0$ for some cut-off function $c$ on $X$. \item There is $S$ in $\mathcal{C}_L(H_X\otimes B)^G$ such that $\lim_{t\to \infty}||T_t-S_t||= 0$ and $S$ is properly supported in the sense that for any compact subset $A$ of $X$, there is a compact subset $A'$ of $X$ such that $\chi_AS=\chi_A S\chi_{A'}$ and $S\chi_A=\chi_{A'} S\chi_A$.
\item There is $S$ in $\mathcal{C}_L(\pi)^G$ such that $\lim_{t\to \infty} ||T_t-S_t||= 0$ and $S$ is properly supported. \end{enumerate} \end{lemma} \begin{proof} (1) $\implies$ (2): Obvious. (2) $\implies$ (3): This is because $\psi_c(T)$ is properly supported for any $T$. (3) $\implies$ (4): Obvious. (4) $\implies$ (1): It is enough to show that if $S$ in $\mathcal{C}_L(\pi)^G$ is properly supported, then $\limt||S_t - \psi_c(S_t)||= 0$ for any cut-off function $c$ on $X$. Since $S$ is properly supported, there is $\chi\in C_c(X)$ such that $c=c\chi$ and $cS=cS\chi$. We have \[ S_t - \psi_c(S_t) = \int_{g\in G} \left(g(c)^2S_t - g(c)S_tg(c)\right) d\mu_G(g) \] \[ = \int_{g\in G} \left(g(c)^2S_tg(\chi) - g(c)S_tg(c)g(\chi)\right) d\mu_G(g) = \int_{g\in G} g(c)[g(c), S_t]g(\chi) d\mu_G(g). \] We have $\limt||[g(c), S_t]|| = \limt||[c, S_t]|| = 0$ (uniformly in $g$ in $G$). It follows that $\limt||S_t - \psi_c(S_t)||=0$. \end{proof} It is easy to see that the conditions (3) and (4) are preserved by taking sums, products and adjoints and that the conditions (1) and (2) pass to the norm limit. We define some $C^*$-subalgebras of $\mathcal{C}_L(H_X\otimes B)^G$ and $\mathcal{C}_L(\pi)^G$ as follows. \begin{itemize} \item $\mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}}$ is the $C^*$-subalgebra of $\mathcal{C}_L(H_X\otimes B)^G$ consisting of $T$ satisfying the four equivalent conditions in Lemma \ref{lem_properly}. \item $\mathcal{C}_L(\pi)^G_{\mathrm{proper}}$ is the $C^*$-subalgebra of $\mathcal{C}_L(\pi)^G$ consisting of $T$ satisfying the four equivalent conditions in Lemma \ref{lem_properly}. \end{itemize} We have inclusions \[ \xymatrix{ \mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}} \ar[r]^-{} \ar[d]^-{} & \mathcal{C}_L(H_X\otimes B)^G \ar[d]^-{} \\ \mathcal{C}_L(\pi)^G_{\mathrm{proper}} \ar[r]^-{} & \mathcal{C}_L(\pi)^G.
} \] We have the following commutative diagram of short exact sequences \[ \xymatrix{ 0 \ar[r]^-{} & C_0([1, \infty), C^*(H_X\otimes B)^G) \ar[r]^-{} \ar[d]^-{} & \mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}} \ar[r]^-{} \ar[d]^-{} & \mathcal{C}_{L, Q}(H_X\otimes B)^G_{\mathrm{proper}} \ar[d]^-{\cong} \ar[r]^-{} & 0 \\ 0 \ar[r]^-{} & C_0([1, \infty), \mathcal{C}(\pi)^G) \ar[r]^-{} & \mathcal{C}_L(\pi)^G_{\mathrm{proper}} \ar[r]^-{} & \mathcal{C}_{L, Q}(\pi)^G_{\mathrm{proper}} \ar[r]^-{} & 0. } \] In particular, the inclusion induces an isomorphism \[ K_\ast( \mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}} ) \cong K_\ast( \mathcal{C}_{L}(\pi)^G_{\mathrm{proper}}). \] \begin{proposition} The two $C^*$-subalgebras $C^*_{L, u}(H_X\otimes B)^G$ and $\mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}}$ in $C_{b, u}([1, \infty), \mathfrak{L}(H_X\otimes B))$ are identical. \end{proposition} \begin{proof} $C^*_{L, u}(H_X\otimes B)^G\subset \mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}}$: Take any $T\in C_{b, u}([1, \infty), \C(H_X\otimes B)^G)$ such that $\mathrm{prop}(T_t)\to 0$ with respect to a fixed metric $d$ on $X^+$. Let $X_0$ be a compact subset of $X$ such that $GX_0=X$ and $X_1$ be any compact neighborhood of $X_0$ in $X$. Since $\mathrm{prop}(T_t)\to 0$, there is $t_0\geq1$ such that for any $t>t_0$, $\chi_{X_0}T_t=\chi_{X_0}T_t\chi_{X_1}$ and $T_t\chi_{X_0}=\chi_{X_1}T_t\chi_{X_0}$. This implies that $T$ satisfies the condition (3) of Lemma \ref{lem_properly}. We already know $C^*_{L, u}(H_X\otimes B)^G\subset \mathcal{C}_L(H_X\otimes B)^G$ so we have $C^*_{L, u}(H_X\otimes B)^G\subset \mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}}$. $\mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}} \subset C^*_{L, u}(H_X\otimes B)^G$: Let $T\in \mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}}$. It is enough to show that $\psi_c(T)\in C^*_{L, u}(H_X\otimes B)^G$ for a cut-off function $c$ on $X$ since both algebras contain the common ideal $C_0([1, \infty), C^*(H_X\otimes B)^G)$. 
Since $\lim_{t\to \infty}||[T_t, \phi]||= 0$ for any $\phi \in C_0(X)$, there is $S \in C_{b, u}([1, \infty), \mathfrak{L}(H_X\otimes B))$ such that $\lim_{t\to \infty}||T_t -S_t||=0$ and such that $\mathrm{prop}(S_t)\to 0$ as $t\to \infty$ for some (any) fixed metric $d$ on $X^+$. We can arrange it so that $S_t$ is a $G$-continuous, locally compact operator in $\mathfrak{L}(H_X\otimes B)$ for each $t\geq1$. In this case, since $\lim_{t\to \infty}||T_t -S_t||=0$ with $T$ $G$-equivariant, we see that $S$ is $G$-continuous in $C_{b, u}([1, \infty), \mathfrak{L}(H_X\otimes B))$. Now consider $\tilde S \in C_{b, u}([1, \infty), \mathfrak{L}(H_X\otimes B))$ defined (weakly) by \[ \tilde S_t = \int_{g\in G} g(c)g(S_t) g(c) d\mu_G(g). \] Note that this is uniformly continuous in $t$. Indeed, $S \mapsto \tilde S$ is a u.c.p.\ map on $C_{b, u}([1, \infty), \mathfrak{L}(H_X\otimes B))$ ($S$ does not have to be $G$-continuous for this to be well-defined). We have \[ \psi_c(T_t) - \tilde S_t = \int_{g\in G} g(c)(T_t- g(S_t)) g(c) d\mu_G(g). \] Together with $\limt||T_t- g(S_t)|| = \limt||T_t- S_t|| = 0$ (uniformly in $g$ in $G$), we see that $\lim_{t\to \infty}||\psi_c(T_t) - \tilde S_t||=0$. Since $\tilde S_t \in \C(H_X\otimes B)^G \subset C^*(H_X\otimes B)^G$ for any $t\geq1$, to show $\psi_c(T) \in C^*_{L, u}(H_X\otimes B)^G$ it is enough to show $\tilde S \in C^*_{L, u}(H_X\otimes B)^G$. For this, it suffices to show $\mathrm{prop}(\tilde S_t)\to 0$ as $t \to \infty$ with respect to the fixed metric $d$ on $X^+$. It suffices to show $\mathrm{prop}(\chi_{X_0}\tilde S_t)\to 0$ and $\mathrm{prop}(\tilde S_t\chi_{X_0})\to 0$ as $t\to \infty$ for any compact subset $X_0$ of $X$. On the other hand, \[ \chi_{X_0}\tilde S_t = \int_{g\in G} \chi_{X_0}g(c)g(S_t) g(c) d\mu_G(g) \] and $\chi_{X_0}g(c)=0$ for any $g\in G\backslash K$ for some compact subset $K$ of $G$. The claim $\mathrm{prop}(\chi_{X_0}\tilde S_t)\to 0$ now follows since $\mathrm{prop}(g(S_t)) \to 0$ uniformly in $g \in K$.
We also have $\mathrm{prop}(\tilde S_t\chi_{X_0})\to 0$ for the same reason, so we are done. \end{proof} Note that we have a natural inclusion ($\ast$-homomorphism) \[ \iota\colon \mathcal{C}_L(\pi)^G \to M(RL^*_u(H_X\otimes B)) \subset M(RL^*_u(H_X\otimes B)\rtimes_rG) \] just because $T\in \mathcal{C}_L(\pi)^G$ satisfies $\lim_{t\to \infty}||[\phi, T_t]|| = 0$. Moreover, for any $T\in \mathcal{C}_L(\pi)^G$ and any $\phi$ in $C_0(X)$, we have \[ \phi T \in RL^*_u(H_X\otimes B) \] because $T_t$ is locally compact and $\lim_{t\to \infty}||[\phi, T_t]|| = 0$. We let \[ \pi\rtimes_r1\colon C_0(X)\rtimes_rG \to M(RL^*_u(H_X\otimes B)\rtimes_rG) \] be the crossed product of the structure map $\pi\colon C_0(X) \to M(RL^*_u(H_X\otimes B))$. Let $c \in C_c(X)$ be a cut-off function on $X$ and $p_c\in C_0(X)\rtimes_rG$ be the associated cut-off projection. We have a c.c.p.\ map \[ (\pi\rtimes_r1)(p_c) \iota (\pi\rtimes_r1)(p_c) \colon \mathcal{C}_L(\pi)^G \to RL^*_u(H_X\otimes B)\rtimes_rG \] which sends $T \in \mathcal{C}_L(\pi)^G$ to the compression $(\pi\rtimes_r1)(p_c) \iota(T) (\pi\rtimes_r1)(p_c) $ in $RL^*_u(H_X\otimes B)\rtimes_rG$. For any $T \in \mathcal{C}_L(\pi)^G$, we have \[ [\iota(T), (\pi\rtimes_r1)(p_c)] \in RL^*_0(H_X\otimes B)\rtimes_rG. \] Thus, after passing to the quotient $RL^*_{u,Q}(H_X\otimes B)\rtimes_rG$, the c.c.p.\ map $(\pi\rtimes_r1)(p_c) \iota (\pi\rtimes_r1)(p_c)$ becomes a $\ast$-homomorphism \[ (\pi\rtimes_r1)(p_c) \iota (\pi\rtimes_r1)(p_c) \colon \mathcal{C}_L(\pi)^G \to RL^*_{u, Q}(H_X\otimes B)\rtimes_rG. \] Recall that the quotient map induces an isomorphism \[ K_\ast(RL^*_{u}(H_X\otimes B)\rtimes_rG) \cong K_\ast(RL^*_{u, Q}(H_X\otimes B)\rtimes_rG.
\] \begin{definition} We define a group homomorphism \[ \iota_{p_c} \colon K_\ast(\mathcal{C}_L(\pi)^G) \to K_\ast(RL^*_{u}(H_X\otimes B)\rtimes_rG) \] as the composition of the group homomorphism \[ \left((\pi\rtimes_r1)(p_c) \iota (\pi\rtimes_r1)(p_c)\right)_\ast \colon K_\ast(\mathcal{C}_L(\pi)^G) \to K_\ast(RL^*_{u, Q}(H_X\otimes B)\rtimes_rG) \] and the inverse of the isomorphism \[ K_\ast(RL^*_{u}(H_X\otimes B)\rtimes_rG) \cong K_\ast(RL^*_{u, Q}(H_X\otimes B)\rtimes_rG) \] induced by the quotient map. If $A$ is any of the subalgebras \[ \mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}}, \mathcal{C}_L(\pi)^G_{\mathrm{proper}}, \mathcal{C}_L(H_X\otimes B)^G \] of $\mathcal{C}_L(\pi)^G$, we let \[ \iota_{p_c} \colon K_\ast(A) \to K_\ast(RL^*_{u}(H_X\otimes B)\rtimes_rG) \] be the restriction of $\iota_{p_c}$ to the subalgebra $A$. \end{definition} Now, for an $X$-$G$-module $H_X$, let $\tilde H_X=H_X\otimes L^2(G)$ be the $X$-$G$-module as before (see Definition \ref{def_tildeH}). Let \[ \tilde\pi\colon C_0(X) \to \mathfrak{L}(\tilde H_X\otimes B) \] be the structure map. We have the right-regular representation (see Proposition \ref{prop_rightreg_localized}) \[ \rho \colon RL^*_u(H_X\otimes B)\rtimes_rG \to C^*_{L, u}(\tilde H_X\otimes B)^G \] and the inclusions \[ \xymatrix{ C^*_{L, u}(\tilde H_X\otimes B)^G = \mathcal{C}_L(\tilde H_X\otimes B)^G_{\mathrm{proper}} \ar[r]^-{} \ar[d]^-{} & \mathcal{C}_L(\tilde H_X\otimes B)^G \ar[d]^-{} \\ \mathcal{C}_L(\tilde \pi)^G_{\mathrm{proper}} \ar[r]^-{} & \mathcal{C}_L(\tilde\pi)^G.
} \] \begin{definition} If $A$ is any of the algebras \[ \mathcal{C}_L(\tilde H_X\otimes B)^G_{\mathrm{proper}}, \mathcal{C}_L(\tilde\pi)^G_{\mathrm{proper}}, \mathcal{C}_L(\tilde H_X\otimes B)^G, \mathcal{C}_L(\tilde \pi)^G, \] we set \[ \rho_\ast \colon K_\ast(RL^*_{u}(H_X\otimes B)\rtimes_rG) \to K_\ast(A) \] to be the group homomorphism induced by the right-regular representation \[ \rho\colon RL^*_u(H_X\otimes B)\rtimes_rG \to C^*_{L, u}(\tilde H_X\otimes B)^G \] followed by the inclusion from $C^*_{L, u}(\tilde H_X\otimes B)^G$ to $A$. \end{definition} We are now interested in computing the compositions \[ \iota_{p_c} \circ \rho_\ast\colon K_\ast(RL^*_u(H_X\otimes B)\rtimes_rG) \to K_\ast(C^*_{L, u}(\tilde H_X\otimes B)^G) \to K_\ast(RL^*_u(\tilde H_X\otimes B)\rtimes_rG) \] and \[ \rho_\ast \circ \iota_{p_c} \colon K_\ast(C^*_{L, u}(H_X\otimes B)^G) \to K_\ast(RL^*_u(H_X\otimes B)\rtimes_rG) \to K_\ast(C^*_{L, u}(\tilde H_X\otimes B)^G). \] Let us first compute the first one. We recall that the proper $G$-space $X$ has been assumed to be $G$-compact from the beginning of this section. \begin{lemma}\label{lem_genfix} For any (not necessarily separable) $G$-$C^*$-algebra $A$ equipped with a non-degenerate representation of the $G$-$C^*$-algebra $C_0(X)$ to $M(A)$, the right-regular representation \[ \rho\colon A\rtimes_rG \to M(A\otimes \mathfrak{K}(L^2(G))), \,\, A\ni a\mapsto (g(a))_{g\in G}, \,\, G\ni g\mapsto \rho_g, \] is an isomorphism onto the $C^*$-algebra $M(A\otimes \mathfrak{K}(L^2(G)))^{G, c}$ defined as the completion of the $\ast$-algebra consisting of $G$-equivariant, locally compact, properly supported operators in $M(A\otimes \mathfrak{K}(L^2(G)))\cong \mathfrak{L}(A\otimes L^2(G))$. \end{lemma} \begin{proof} The proof of Proposition \ref{prop_isom} works verbatim.
\end{proof} \begin{remark} The algebra $M(A\otimes \mathfrak{K}(L^2(G)))^{G, c}$ is known as a generalized fixed-point algebra, and this lemma is proved in a more general setting (\cite[Corollary 3.25, Remark 3.26]{BE15}; see also \cite{BE14, BE16} for a more general theory). \end{remark} \begin{lemma} The composition $ \iota_{p_c} \circ \rho_\ast\colon K_\ast(RL^*_u(H_X\otimes B)\rtimes_rG) \to K_\ast(RL^*_u(\tilde H_X\otimes B)\rtimes_rG)$ coincides with the map induced by the strict cover $V_c\colon H_X\to \tilde H_X$ of the identity map on $X$ defined as \[ V_c\colon H_X \ni v\mapsto (g\mapsto g(c)v) \in \tilde H_X =H_X\otimes L^2(G). \] \end{lemma} \begin{proof} A c.c.p.\ map \[ \iota^1_{p_c}\colon RL^*_u(H_X\otimes B)\rtimes_rG \to \left(RL^*_u(H_X\otimes B)\otimes \mathfrak{K}(L^2(G)) \right) \rtimes_rG \] is defined as the compression of the right-regular representation \[ RL^*_u(H_X\otimes B)\rtimes_rG \ni T \mapsto \rho(T) \in M(RL^*_u(H_X\otimes B)\otimes \mathfrak{K}(L^2(G))) \] by the projection $p^{(1)}_c$, which is the image of the cut-off projection $p_c$ in $C_0(X)\rtimes_rG$ by the map \[ \pi^1\rtimes_r1 \colon C_0(X)\rtimes_rG \to M( \left(RL^*_u(H_X\otimes B)\otimes \mathfrak{K}(L^2(G)) \right) \rtimes_rG) \] induced by the natural $G$-equivariant representation \[ \pi^{1}\colon C_0(X) \to M(RL^*_u(H_X\otimes B)) \to M(RL^*_u(H_X\otimes B)\otimes \mathfrak{K}(L^2(G))). \] Passing to the quotient, the c.c.p.\ map $\iota^1_{p_c}$ becomes a $\ast$-homomorphism \[ \iota^1_{p_c}\colon RL^*_u(H_X\otimes B)\rtimes_rG \to \left(RL^*_{u, Q}(H_X\otimes B)\otimes \mathfrak{K}(L^2(G)) \right) \rtimes_rG.
\] Composing its induced map on K-theory with the inverse of the isomorphism (the quotient map) \[ K_\ast(\left(RL^*_{u}(H_X\otimes B)\otimes \mathfrak{K}(L^2(G)) \right) \rtimes_rG) \cong K_\ast( \left(RL^*_{u, Q}(H_X\otimes B)\otimes \mathfrak{K}(L^2(G)) \right) \rtimes_rG), \] we obtain a group homomorphism \[ \iota^1_{p_c}\colon K_\ast(RL^*_u(H_X\otimes B)\rtimes_rG) \to K_\ast( \left(RL^*_u(H_X\otimes B)\otimes \mathfrak{K}(L^2(G)) \right) \rtimes_rG). \] Here, we used that the reduced crossed product preserves the exact sequence \[ 0 \to RL^*_{0}(H_X\otimes B)\otimes \mathfrak{K}(L^2(G)) \to RL^*_{u}(H_X\otimes B)\otimes \mathfrak{K}(L^2(G)) \] \[ \to RL^*_{u, Q}(H_X\otimes B)\otimes \mathfrak{K}(L^2(G)) \to 0 \] since the quotient is a proper $G$-$C^*$-algebra. The composition $ \iota_{p_c} \circ \rho_\ast$ factors through the natural inclusion \[ \left(RL^*_u(H_X\otimes B)\otimes \mathfrak{K}(L^2(G))\right) \rtimes_rG \to RL^*_u(\tilde H_X\otimes B)\rtimes_rG \] via the group homomorphism $\iota^1_{p_c}$. We now consider \[ \mathrm{Ad}_{V_c \ast}\colon K_\ast(RL^*_u(H_X\otimes B)\rtimes_rG) \to K_\ast(RL^*_u(\tilde H_X\otimes B)\rtimes_rG). \] Let $p_c^{(2)}=V_cV_c^\ast \in \mathfrak{L}(\tilde H_X) \subset \mathfrak{L}(\tilde H_X\otimes B)$. The $G$-equivariant projection $p^{(2)}_c$ is the image of $p_c \in C_0(X)\rtimes_rG$ by the right-regular representation \[ C_0(X)\rtimes_rG \to \mathfrak{L}(\tilde H_X)= \mathfrak{L}(H_X\otimes L^2(G)), C_0(X)\ni \phi \mapsto (g(\phi))_{g\in G}, \,\, G\ni g\mapsto \rho_g. \] We claim that the $\ast$-homomorphism \[ \mathrm{Ad}_{V_c}\colon RL^*_u(H_X\otimes B) \ni T \mapsto V_cTV_c^\ast \in RL^*_u(\tilde H_X\otimes B) \] and the $G$-equivariant c.c.p.\ map \[ \iota^2_{p_c}\colon RL^*_u(H_X\otimes B) \ni T \mapsto p^{(2)}_c(T)_{g\in G} p^{(2)}_c \in RL^*_u(\tilde H_X\otimes B) \] coincide after passing to the quotient $RL^*_{u, Q}(\tilde H_X\otimes B)$. 
Indeed, for $S$ in $\mathfrak{L}(H_X\otimes B)$, we have \[ V_c^\ast (S)_{g\in G}V_c = \int_{g\in G}g(c)S g(c) d\mu_G(g). \] Thus, for any $T \in RL^*_u(H_X\otimes B)$, we have \[ \lim_{t\to \infty}|| T_t - V_c^\ast (T_t)_{g\in G}V_c || = 0. \] We can see this, for example, by considering $T$ with uniform compact support. The $G$-equivariant c.c.p.\ map $\iota^2_{p_c}$ naturally factors through the inclusion \[ RL^*_u(H_X\otimes B)\otimes \mathfrak{K}(L^2(G)) \to RL^*_u(\tilde H_X\otimes B). \] Overall, in order to show $\iota_{p_c} \circ \rho_\ast = \mathrm{Ad}_{V_c \ast}$ on $K_\ast(RL^*_u(H_X\otimes B)\rtimes_rG)$, it suffices to show that the two group homomorphisms \[ \iota^1_{p_c}, \iota^2_{p_c}\colon K_\ast(RL^*_u(H_X\otimes B)\rtimes_rG) \to K_\ast( \left(RL^*_u(H_X\otimes B)\otimes \mathfrak{K}(L^2(G)) \right) \rtimes_rG) \] induced by the c.c.p.\ maps $\iota^1_{p_c}, \iota^2_{p_c}$ (which become $\ast$-homomorphisms after passing to the quotient) coincide. On the other hand, by viewing \[ \left(RL^*_u(H_X\otimes B)\otimes \mathfrak{K}(L^2(G)) \right) \rtimes_rG \cong M(RL^*_u(H_X\otimes B)\otimes \mathfrak{K}(L^2(G)) \otimes \mathfrak{K}(L^2(G)) )^{G, c} \] as in Lemma \ref{lem_genfix}, we can see that the two maps $\iota^1_{p_c}$ and $\iota^2_{p_c}$ are conjugate by the unitary $U$ in the multiplier algebra of \[ M(RL^*_u(H_X\otimes B)\otimes \mathfrak{K}(L^2(G)) \otimes \mathfrak{K}(L^2(G)) )^{G, c} \] defined as the flip on $G\times G$, i.e. $U \in \mathfrak{L}(L^2(G\times G))$ defined by $Uf(g_1, g_2)= f(g_2, g_1)$ for $f\in L^2(G\times G)$, so we are done. \end{proof} \begin{corollary} \label{cor_first_composition} The composition \[ \iota_{p_c} \circ \rho_\ast\colon K_\ast(RL^*_u(H_X\otimes B)\rtimes_rG) \to K_\ast(RL^*_u(\tilde H_X\otimes B)\rtimes_rG) \] is an isomorphism whenever $H_X$ is a universal $X$-$G$-module. \end{corollary} We now study the other composition.
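The isometry $V_c\colon H_X \to \tilde H_X$, $v\mapsto (g\mapsto g(c)v)$, continues to play the key role here, so let us record why it is a $G$-equivariant isometry. This is a direct computation from the cut-off identity $\int_{g\in G} g(c)^2\, d\mu_G(g)=1$: for $v, w \in H_X$,

```latex
\langle V_c v, V_c w\rangle_{\tilde H_X}
= \int_{g\in G} \langle g(c)v,\, g(c)w\rangle_{H_X}\, d\mu_G(g)
= \Big\langle v,\, \Big(\int_{g\in G} g(c)^2\, d\mu_G(g)\Big) w\Big\rangle_{H_X}
= \langle v, w\rangle_{H_X},
```

so $V_c^\ast V_c = 1$, while $V_c V_c^\ast$ is the projection $p_c^{(2)}$ considered above.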
\begin{lemma}\label{lem_second_composition} The composition \[ \rho_\ast \circ \iota_{p_c} \colon K_\ast(C^*_{L, u}(H_X\otimes B)^G) \to K_\ast(C^*_{L, u}(\tilde H_X\otimes B)^G) \] coincides with the map induced by the strict cover $V_c\colon H_X\to \tilde H_X$ of the identity map on $X$ defined as \[ V_c\colon H_X \ni v\mapsto (g\mapsto g(c)v) \in \tilde H_X =H_X\otimes L^2(G). \] \end{lemma} \begin{proof} We can directly see that the composition is induced by the c.c.p.\ map (which becomes a $\ast$-homomorphism after passing to the quotient), \[ p_c \cdot p_c \colon C^*_{L, u}(H_X\otimes B)^G \ni T \mapsto p_c(T)_{g\in G}p_c \in C^*_{L, u}(\tilde H_X\otimes B)^G \] where the cut-off projection $p_c$ is represented by the right-regular representation of $C_0(X)\rtimes_rG$: \[ C_0(X)\rtimes_rG \to \mathfrak{L}(\tilde H_X)= \mathfrak{L}(H_X\otimes L^2(G)), C_0(X)\ni \phi \mapsto (g(\phi))_{g\in G}, \,\, G\ni g\mapsto \rho_g. \] More precisely, as before, this c.c.p.\ map induces the map on K-theory as the composition of the $\ast$-homomorphism \[ p_c \cdot p_c \colon C^*_{L, u}(H_X\otimes B)^G \to C^*_{L, Q}(\tilde H_X\otimes B)^G = C^*_{L, u}(\tilde H_X\otimes B)^G / C_0([1, \infty), C^*(\tilde H_X\otimes B)^G) \] and the inverse of the isomorphism (the quotient map) \[ K_\ast(C^*_{L, u}(\tilde H_X\otimes B)^G) \to K_\ast(C^*_{L, Q}(\tilde H_X\otimes B)^G). \] We have $p_c=V_cV_c^\ast$ and, as before, for $S\in \mathfrak{L}(H_X\otimes B)$, \[ V_c^\ast (S)_{g\in G}V_c = \int_{g\in G}g(c)S g(c) d\mu_G(g).
\] To show that $p_c \cdot p_c$ and $\mathrm{Ad}_{V_c \ast}$ coincide on $K_\ast(C^*_{L, u}(H_X\otimes B)^G)$, it is enough to show that for any $T \in C^*_{L, u}(H_X\otimes B)^G$, we have $\limt||T_t - V^\ast_c(T_t)_{g\in G}V_c|| = 0$, but this is precisely the condition (2) of Lemma \ref{lem_properly}, which is satisfied for any $T$ in $\mathcal{C}_L(\pi)^G_{\mathrm{proper}}$, in particular for any $T \in C^*_{L, u}(H_X\otimes B)^G= \mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}}$. \end{proof} The same proof shows the following. \begin{lemma} The composition \[ \rho_\ast \circ \iota_{p_c} \colon K_\ast(\mathcal{C}_L(\pi)^G_{\mathrm{proper}}) \to K_\ast(\mathcal{C}_L(\tilde\pi)^G_{\mathrm{proper}}) \] coincides with the map induced by the strict cover $V_c\colon H_X\to \tilde H_X$ of the identity map on $X$. \end{lemma} Before jumping to the conclusion, we make an important remark here. The equivariant Roe algebra $C^*(H_X\otimes B)^G$, the localized equivariant Roe algebra $C^*_{L, u}(H_X\otimes B)^G$ as well as all the other algebras such as $\mathcal{C}(\pi)^G$ and $\mathcal{C}_L(\pi)^G$ are not well-behaved with respect to an arbitrary equivariant continuous cover $(V_t\colon H_X\to H_Y)_{t\in [1, \infty)}$ of a $G$-equivariant continuous map $f\colon X\to Y$, even if $f$ is the identity map on a $G$-compact space. This is because a continuous cover $V_t$ may not be properly supported in general, and hence conjugation by $V_t$ may not preserve locally compact operators. On the other hand, they are functorial with respect to a strict cover $V$, if it exists, of the identity map.
More relevantly, if we focus on $X$-$G$-modules of the form $\tilde H_X\otimes B$ for an $X$-$G$-module $H_X$, the equivariant Roe algebra $C^*(\tilde H_X\otimes B)^G$, the localized equivariant Roe algebra $C^*_{L, u}(\tilde H_X\otimes B)^G=\mathcal{C}_L(\tilde H_X\otimes B)^G_{\mathrm{proper}}$ as well as $\mathcal{C}_L(\tilde H_X\otimes B)^G$ are functorial with respect to a $G$-equivariant continuous cover of the identity map of the form $\tilde V=V\otimes 1\colon \tilde H_X \to \tilde H'_X$ for a $G$-equivariant continuous cover $V\colon H_X \to H_X'$. To see this, first, we have the following commutative diagram \[ \xymatrix{ C^*(\tilde H_X\otimes B)^G \ar[r]^-{\mathrm{Ad}_{\tilde V}} & C^*(\tilde H'_X\otimes B)^G \\ \mathfrak{K}(H_X\otimes B)\rtimes_rG \ar[r]^-{\mathrm{Ad}_V} \ar[u]^-{\cong}_{\rho} & \mathfrak{K}(H'_X\otimes B)\rtimes_rG \ar[u]^-{\cong}_{\rho} } \] for any $G$-equivariant isometry $V\colon H_X \to H_X'$, which in particular says that $\mathrm{Ad}_{\tilde V}$ maps $C^*(\tilde H_X\otimes B)^G$ to $C^*(\tilde H'_X\otimes B)^G$. Now, if $(V_t\colon H_X \to H'_X)_{t\in [1, \infty)}$ is a $G$-equivariant continuous cover of the identity map on $X$, we can see that $\mathrm{Ad}_{\tilde V_t}$ maps $\mathcal{C}_L(\tilde H_X\otimes B)^G$ to $\mathcal{C}_L(\tilde H'_X\otimes B)^G$ and maps $C_{L, u}^*(\tilde H_X\otimes B)^G=\mathcal{C}_L(\tilde H_X\otimes B)^G_{\mathrm{proper}}$ to $C_{L, u}^*(\tilde H'_X\otimes B)^G=\mathcal{C}_L(\tilde H'_X\otimes B)^G_{\mathrm{proper}}$. To see the latter, let $X_0$ be a compact subset of $X$ with $X=GX_0$. If $S\in \mathcal{C}_L(\tilde H_X\otimes B)^G$ is properly supported, then $S'_t=\tilde V_tS_t\tilde V_t^\ast$ satisfies \[ \chi_{X_0} S'_t = \chi_{X_0} S'_t \chi_{X_1}, \,\,\, S'_t \chi_{X_0} = \chi_{X_1} S'_t \chi_{X_0} \] for some compact subset $X_1$ of $X$ for large enough $t$. This already implies that $S'$ satisfies the condition (3) of Lemma \ref{lem_properly}.
Similarly, for any two covers $(V_{i, t}\colon H_X \to H'_X)_{t\in [1, \infty)}$ for $i=1, 2$, $\tilde V_{1, t} \tilde V_{2, t}^\ast$ multiplies $C_{L, u}^*(\tilde H'_X\otimes B)^G=\mathcal{C}_L(\tilde H'_X\otimes B)^G_{\mathrm{proper}}$ into itself. Using this, we see that the induced map \[ \mathrm{Ad}_{\tilde V_t \ast}\colon K_\ast(C_{L, u}^*(\tilde H_X\otimes B)^G) \to K_\ast(C_{L, u}^*(\tilde H'_X\otimes B)^G) \] on $K$-theory is independent of the covers, and it is an isomorphism whenever $H_X$ and $H_X'$ are universal $X$-$G$-modules. We say that an $X$-$G$-module $H_X$ is of infinite multiplicity if $H_X\cong H^{0}_X\otimes l^2(\mathbb{N})$ as an $X$-$G$-module for some $X$-$G$-module $H^0_X$. \begin{lemma} For any universal $X$-$G$-module $H_X$ of infinite multiplicity, the right-regular representation $\rho$ induces an isomorphism \[ \rho_\ast \colon K_\ast(RL^*_u(H_X\otimes B)\rtimes_rG) \cong K_\ast(C^*_{L, u}(\tilde H_X\otimes B)^G). \] \end{lemma} \begin{proof} By Corollary \ref{cor_first_composition}, we already know that the composition \[ \iota_{p_c}\circ \rho_\ast \colon K_\ast(RL^*_u(H_X\otimes B)\rtimes_rG) \to K_\ast(RL^*_u(\tilde H_X\otimes B)\rtimes_rG) \] is an isomorphism. It thus suffices to show that the composition \[ \rho_\ast \circ \iota_{p_c}\colon K_\ast(C^*_{L, u}(\tilde H_X\otimes B)^G) \to K_\ast(C^*_{L, u}(\tilde H^{(2)}_X\otimes B)^G) \] is injective (in fact, an isomorphism), where $\tilde H^{(2)}_X= \tilde H'_X$ for $H'_X=\tilde H_X$. By Lemma \ref{lem_second_composition}, this composition is induced by the isometry $V_c\colon H_X' \to \tilde H_X'$, which is a strict cover of the identity map. On the other hand, $ H_X'\cong H^{0}_X\otimes l^2(\mathbb{N})\otimes L^2(G)$ and $\tilde H_X' = H_X'\otimes L^2(G)$ are isomorphic as $X$-$G$-modules. From this, we see that $\rho_\ast \circ \iota_{p_c}=\mathrm{Ad}_{V_c \ast}$ is an isomorphism.
\end{proof} \begin{theorem}\label{thm_magic} For any universal $X$-$G$-module $H_X$, the right-regular representation $\rho$ and the natural inclusion induce isomorphisms \[ \rho_\ast \colon K_\ast(RL^*_u(H_X\otimes B)\rtimes_rG) \cong K_\ast(C^*_{L, u}(\tilde H_X\otimes B)^G) \cong K_\ast(\mathcal{C}_L(\tilde \pi)^G_{\mathrm{proper}}). \] \end{theorem} \begin{proof} The first isomorphism follows from the previous lemma using the functoriality of both $K_\ast(RL^*_u(H_X\otimes B)\rtimes_rG)$ and $K_\ast(C^*_{L, u}(\tilde H_X\otimes B)^G)$ with respect to a $G$-equivariant continuous cover $(V_t\colon H_X \to H'_X)_{t\in [1, \infty)}$ of the identity map on $X$, i.e.\ we may assume that the universal $X$-$G$-module $H_X$ is of infinite multiplicity. We always have the second isomorphism. \end{proof} \section{$\rho_X$ is an isomorphism, part II} In this section, we study a $G$-equivariant analogue of \cite{DWW18}. We let $X$ be a $G$-compact proper $G$-space and $H_X$ be an $X$-$G$-module which will be assumed to be of infinite multiplicity in some places. We recall that $\mathcal{C}_L(H_X\otimes B)^G_\mathrm{proper} = C^*_{L, u}(H_X\otimes B)^G$ is the $C^*$-subalgebra of $C_{b, u}([1,\infty), C^*(H_X\otimes B)^G)$ consisting of functions $T$ such that $\lim_{t\to \infty}||[\phi, T_t]||=0$ and such that $T-\psi_c(T) \in C_0([1, \infty), C^*(H_X\otimes B)^G)$ for some (any) cut-off function $c$ on $X$, where \[ \psi_c(T) = \int_{g\in G}g(c)Tg(c)d\mu_G(g). \] We introduce the following $C^*$-algebras (cf.\ \cite[Section 3]{DWW18}). \begin{itemize} \item $D^*(H_X\otimes B)^G$ is the $C^*$-subalgebra of $\mathfrak{L}(H_X\otimes B)$ generated by $G$-equivariant, properly supported, pseudo-local operators. Here, an operator $T$ in $\mathfrak{L}(H_X\otimes B)$ is pseudo-local if $[\phi, T] \in \mathfrak{K}(H_X\otimes B)$ for any $\phi$ in $C_0(X)$.
\item $\mathcal{D}_L(H_X\otimes B)^G$ is the $C^*$-subalgebra of $C_{b, u}([1,\infty), D^*(H_X\otimes B)^G)$ consisting of functions $T$ such that $\lim_{t\to \infty}||[\phi, T_t]||=0$ for any $\phi \in C_0(X)$. \item $\mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}}$ is the $C^*$-subalgebra of $\mathcal{D}_L(H_X\otimes B)^G$ consisting of functions $T$ such that $T-\psi_c(T) \in C_0([1, \infty), D^*(H_X\otimes B)^G)$ for some (any) cut-off function $c$ on $X$. Similarly to Lemma \ref{lem_properly}, the second condition is equivalent to the existence of a properly supported function $S$ in $\mathcal{D}_L(H_X\otimes B)^G$ such that $T-S \in C_0([1, \infty), D^*(H_X\otimes B)^G)$. \item $\mathcal{C}_T(H_X \otimes B)^G_{\mathrm{proper}}$ is the $C^*$-subalgebra of $C_{b, u}([1, \infty), C^*(H_X\otimes B)^G)$ generated by functions $T$ which are properly supported. \item $\mathcal{D}_T(H_X\otimes B)^G_{\mathrm{proper}}$ is the $C^*$-subalgebra of $C_{b, u}([1, \infty), D^*(H_X\otimes B)^G)$ generated by functions $T$ which are properly supported. \end{itemize} We have inclusions (each vertical map is an inclusion as an ideal) \[ \xymatrix{ \mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}} \ar[r]^-{} \ar[d]^-{} &\mathcal{C}_T(H_X \otimes B)^G_{\mathrm{proper}} \ar[d]^-{} \\ \mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}} \ar[r]^-{} & \mathcal{D}_T(H_X \otimes B)^G_{\mathrm{proper}}, } \] as well as inclusions (horizontal maps are inclusions as constant functions) \[ \xymatrix{ C^*(H_X\otimes B)^G \ar[r]^-{} \ar[d]^-{} &\mathcal{C}_T(H_X \otimes B)^G_{\mathrm{proper}} \ar[d]^-{} \\ D^*(H_X\otimes B)^G \ar[r]^-{} & \mathcal{D}_T(H_X \otimes B)^G_{\mathrm{proper}}. } \] \begin{lemma} We have \[ \mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}} \cap \mathcal{C}_T(H_X \otimes B)^G_{\mathrm{proper}} = \mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}}. \] \end{lemma} \begin{proof} Let $T\in \mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}} \cap \mathcal{C}_T(H_X \otimes B)^G_{\mathrm{proper}}$.
Then, we have $T \in C_{b, u}([1, \infty), C^*(H_X\otimes B)^G)$ and $\limt||[T_t, \phi]|| = 0$ and $\limt||T_t-\psi_c(T_t)|| = 0$ for any cut-off function $c$ on $X$. Thus, $T\in \mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}}$. The converse is trivial. \end{proof} \begin{lemma}\label{lem_ex}(c.f.\ \cite[Lemma 4.1]{DWW18}) Let $D \subset C_{b, u}([1,\infty), D^*(H_X \otimes B)^G)$ be a separable subset. Assume that the set $D$ is properly supported in the sense that for any compact subset $A$ of $X$, there is a compact subset $A'$ of $X$ so that $\chi_AT=\chi_AT\chi_{A'}$ (and $T\chi_A=\chi_{A'}T\chi_A$) for any $T\in D$. There is $x \in C_{b, u}([1,\infty), C^*(H_X \otimes B)^G)$ which is properly supported such that \begin{itemize} \item $[x, \phi] \in C_0([1, \infty), \mathfrak{K}(H_X\otimes B))$ for any $\phi \in C_0(X)$, \item $(1-x)[T, \phi] \in C_0([1, \infty), \mathfrak{K}(H_X\otimes B))$ for any $\phi\in C_0(X)$ and $T\in D$. \end{itemize} \end{lemma} \begin{proof} Let $c$ be a cut-off function on $X$ and $X_0$ be the support of $c$, which is a compact subset of $X$ with $GX_0=X$. Since $D$ is properly supported, there is $\chi\in C_c(X)$ such that $c=c\chi$ and $cT=cT\chi$ for any $T\in D$. Let $X_1$ be the support of $\chi$. Let $K$ be a compact subset of $G$ so that $X_0\cap gX_1=\emptyset$ for $g\in G\backslash K$. Let $(\phi_n)_n$ be a countable subset of $C_0(X)$ such that $\mathrm{supp}(\phi_n)\subset X_0$ for all $n$ and such that $G(\phi_n)_n$ is total in $C_0(X)$. Let $(T^{(m)})_{m\geq1}$ be a countable dense subset of $D$. Let $(y_n)_n$ be an approximate unit in $\mathfrak{K}(H_X\otimes B)$, quasi-central with respect to $C_0(X)$ so that we have $\limn||[\phi, y_n]||= 0$ for any $\phi \in C_0(X)$, and such that \[ ||(1-y_n) c[T^{(m)}_t, g^{-1}(\phi_k)]|| < 1/n \] for any $g\in K$, $t\in [1, n+1]$, $1\leq m \leq n$, and $1\leq k \leq n$. Let \[ x_n=\int_{g\in G}g(c) g(y_n) g(c) d\mu_G(g) \in C^*(H_X \otimes B)^G.
\] Then, we have $(n\mapsto [\phi, x_n]) \in C_0(\mathbb{N}, \mathfrak{K}(H_X\otimes B))$ for any $\phi$ in $C_0(X)$. Take any $\phi$ in $C_0(X)$ with support contained in $X_0$. For any $T\in D$, we have \[ (1-x_n)[T_t, \phi] =\int_{g\in G}g(c) (1-g(y_n)) g(c)[T_t, \phi] g(\chi) d\mu_G(g) \] \[ = \int_{g\in K}g(c) (1-g(y_n)) g(c)[T_t, \phi] g(\chi) d\mu_G(g), \] and \[ ||(1-g(y_n)) g(c)[T_t, \phi]|| = ||(1-y_n) c[T_t, g^{-1}(\phi)]||. \] In particular, we have \[ ||(1-g(y_n)) g(c)[T^{(m)}_t, \phi_k]|| \leq 1/n \] for any $g\in K$, $t\in [1, n+1]$, $1\leq m \leq n$ and $1\leq k \leq n$, and so \[ ||(1-x_n)[T^{(m)}_t, \phi_k]|| \leq C/n \] for any $t\in [1, n+1]$, $1\leq m \leq n$ and $1\leq k \leq n$, where the constant $C>0$ depends only on the fixed functions $c$ and $\chi$. The desired function $x$ can be obtained by setting \[ x_t= (n+1-t)x_n + (t-n)x_{n+1} \] for $t\in [n, n+1]$. \end{proof} \begin{lemma}(c.f.\ \cite[Proposition 2.3]{QiaoRoe}, \cite[Proposition 4.2]{DWW18}) \label{lem_eta} The natural map (inclusion) \[ \eta\colon \mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}} / \mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}} \ \to \mathcal{D}_T(H_X \otimes B)^G_{\mathrm{proper}} / \mathcal{C}_T(H_X \otimes B)^G_{\mathrm{proper}} \] is bijective (hence an isomorphism). \end{lemma} \begin{proof} It is enough to show that for any $T \in C_{b, u}([1,\infty), D^*(H_X \otimes B)^G)$ which is properly supported, there is $S \in \mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}}$ such that $T-S \in \mathcal{C}_T(H_X \otimes B)^G_{\mathrm{proper}}$. We let $x$ be as given by Lemma \ref{lem_ex} for $D=\{T\}$. Set $S=(1-x)T$. Then, we have \[ T -S = xT\in \mathcal{C}_T(H_X \otimes B)^G_{\mathrm{proper}} \] since $x \in \mathcal{C}_T(H_X \otimes B)^G_{\mathrm{proper}}$. We claim that $S \in \mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}}$. Since $x$ and $T$ are properly supported, $S=(1-x)T$ is properly supported.
Thus, it is enough to show that $\lim_{t\to \infty}||[\phi, S_t]|| = 0$, but this follows from \[ [\phi, S]=-[\phi, x]T + (1-x)[\phi, T] \] and from the properties of $x$. \end{proof} \begin{lemma}(c.f.\ \cite[Proposition 3.5]{QiaoRoe}, \cite[Proposition 4.3]{DWW18}) \label{lem_boundary_isom} Assume that $H_X$ is of infinite multiplicity. We have $K_\ast(\mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}})=0$. Hence the boundary map \[ \partial \colon K_\ast(\mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}}/ \mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}}) \to K_{\ast+1}(\mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}}) \] is an isomorphism. \end{lemma} \begin{proof} We have $H_X\cong H^{0}_X\otimes l^2(\mathbb{N})$, and we let $U_n$ be isometries on $l^2(\mathbb{N})$ such that \[ \sum_{n\geq0} U_nU_n^\ast = 1. \] The following map is well-defined on $\mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}}$, \[ \alpha\colon T \mapsto \sum_{n\geq0} U_n T_{t+n} U_n^\ast. \] Indeed, it maps the ideal $C_0([1, \infty), D^*(H_X\otimes B)^G)$ to itself and maps properly supported functions in $\mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}}$ to themselves. Once we have $\alpha$ well-defined, it is routine to show that $K_\ast(\mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}})=0$. \end{proof} \begin{lemma}(c.f.\ \cite[Proposition 3.6]{QiaoRoe}, \cite[Proposition 4.3]{DWW18})\label{lem_constant_isom} Assume that $H_X$ is of infinite multiplicity. The evaluation map at $t=1$ induces an isomorphism \[ \mathrm{ev}_{1 \ast}\colon K_\ast(\mathcal{D}_T(H_X\otimes B)^G_{\mathrm{proper}}/ \mathcal{C}_T(H_X \otimes B)^G_{\mathrm{proper}} ) \cong K_\ast(D^*(H_X\otimes B)^G/C^*(H_X\otimes B)^G ).
\] \end{lemma} \begin{proof} Let $\mathcal{D}^0_T(H_X\otimes B)^G_{\mathrm{proper}}$ be the kernel of the evaluation map $\mathrm{ev}_{1}$ on $\mathcal{D}_T(H_X\otimes B)^G_{\mathrm{proper}}$ and let $\mathcal{C}^0_T(H_X\otimes B)^G_{\mathrm{proper}}=\mathcal{D}^0_T(H_X\otimes B)^G_{\mathrm{proper}}\cap \mathcal{C}_T(H_X \otimes B)^G_{\mathrm{proper}}$. We have a split short exact sequence \[ 0 \to \mathcal{D}^0_T(H_X\otimes B)^G_{\mathrm{proper}}/\mathcal{C}^0_T(H_X\otimes B)^G_{\mathrm{proper}} \] \[ \to \mathcal{D}_T(H_X\otimes B)^G_{\mathrm{proper}}/ \mathcal{C}_T(H_X \otimes B)^G_{\mathrm{proper}}\to D^*(H_X\otimes B)^G/C^*(H_X\otimes B)^G \to 0. \] Thus, it suffices to show that $K_\ast( \mathcal{D}^0_T(H_X\otimes B)^G_{\mathrm{proper}})=0$ and $K_\ast( \mathcal{C}^0_T(H_X\otimes B)^G_{\mathrm{proper}})=0$. Let $U_n$ be isometries on $H_X\otimes B$ as in the proof of Lemma \ref{lem_boundary_isom}. The following map is well-defined on $\mathcal{C}^0_T(H_X\otimes B)^G_{\mathrm{proper}}$ and on $\mathcal{D}^0_T(H_X\otimes B)^G_{\mathrm{proper}}$, \[ \alpha\colon T \mapsto \sum_{n\geq0} U_n T_{t-n} U_n^\ast \] where functions $T$ on $[1,\infty)$ vanishing at $t=1$ are extended to the left by zero. Indeed, it maps properly supported functions in $\mathcal{C}^0_T(H_X\otimes B)^G_{\mathrm{proper}}$, resp.\ in $\mathcal{D}^0_T(H_X\otimes B)^G_{\mathrm{proper}}$, vanishing at $t=1$ to themselves, and it is not too hard to see that they form a dense subalgebra of $\mathcal{C}^0_T(H_X\otimes B)^G_{\mathrm{proper}}$, resp.\ of $\mathcal{D}^0_T(H_X\otimes B)^G_{\mathrm{proper}}$. Once we have $\alpha$ well-defined, it is routine to show that $K_\ast(\mathcal{C}^0_T(H_X\otimes B)^G_{\mathrm{proper}})=0$ and $K_\ast(\mathcal{D}^0_T(H_X\otimes B)^G_{\mathrm{proper}})=0$. \end{proof} Let $\pi\colon C_0(X) \to \mathfrak{L}(H_X\otimes B)$ be the structure map for the $X$-$G$-module $H_X\otimes B$.
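For the reader's convenience, we record a sketch of the routine swindle invoked at the end of the last two proofs; the shift notation $s_r(T)_t=T_{t+r}$ used here is ours. Writing $\mathrm{Ad}_{U_n}$ for $T\mapsto U_nTU_n^\ast$ and $V=\sum_{n\geq0}U_{n+1}U_n^\ast$, the map $\alpha$ of Lemma \ref{lem_boundary_isom} decomposes into two $\ast$-homomorphisms with orthogonal ranges,
\[
\alpha = \mathrm{Ad}_{U_0} + \mathrm{Ad}_{V}\circ \alpha \circ s_1,
\]
so that on $K$-theory $\alpha_\ast = \mathrm{id}_\ast + \alpha_\ast\circ s_{1\ast}$. Since each $s_r$ preserves $\mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}}$ and the path $r\mapsto s_r(T)$ is norm-continuous by uniform continuity, $s_1$ is homotopic to the identity, whence $\alpha_\ast = \mathrm{id}_\ast + \alpha_\ast$ and $\mathrm{id}_\ast = 0$ on $K_\ast(\mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}})$; the argument for the kernels of $\mathrm{ev}_1$ in Lemma \ref{lem_constant_isom} is analogous, with the shift running in the opposite direction.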
We define: \begin{itemize} \item $C^*(\pi)^G$ is the $C^*$-subalgebra of $\mathfrak{L}(H_X\otimes B)$ consisting of $G$-equivariant, locally compact operators. \item $D^*(\pi)^G$ is the $C^*$-subalgebra of $\mathfrak{L}(H_X\otimes B)$ consisting of $G$-equivariant, pseudo-local operators. \end{itemize} The following is a standard fact (c.f.\ \cite[Lemma 6.2]{HigsonRoe95}, \cite[Lemma 5.8]{Roe96}, \cite[Lemma 12.3.2]{HigsonRoe}). \begin{lemma}\label{lem_eta0} The natural map \[ \eta_0\colon D^*(H_X\otimes B)^G/ C^*(H_X\otimes B)^G \to D^*(\pi)^G/C^*(\pi)^G \] is bijective (hence an isomorphism). \end{lemma} \begin{proof} Surjectivity: Let $T\in D^*(\pi)^G$. For a cut-off function $c$ on $X$, consider \[ T'= \int_{g\in G}g(c) T g(c) d\mu_G(g). \] Then, we have $T' \in D^*(H_X\otimes B)^G$ and $T-T' \in C^*(\pi)^G$. Injectivity: we need to show that $D^*(H_X\otimes B)^G\cap C^*(\pi)^G = C^*(H_X\otimes B)^G$. Note that this is not a trivial consequence of the definitions. We first claim that $T - T' \in C^*(H_X\otimes B)^G$ for any $T\in D^*(H_X\otimes B)^G$. In fact, if $T$ is properly supported, this is easy to see since then $T -T'$ is $G$-equivariant, locally compact and properly supported. The general case follows from the continuity of the map $T\mapsto T-T'$. Let $T \in D^*(H_X\otimes B)^G\cap C^*(\pi)^G$. Then, $T\in C^*(H_X\otimes B)^G$ follows from $T'\in C^*(H_X\otimes B)^G$, which is easy to see since $T'$ is $G$-equivariant, locally compact and properly supported. \end{proof} \begin{proposition}\label{prop_sequence_isom} Assume that $H_X$ is of infinite multiplicity. We have the following sequence of isomorphisms.
\[ \xymatrix{ K_\ast(D^*(\pi)^G/C^*(\pi)^G ) \ar[r]_-{\cong}^-{\eta_0^{-1}} & K_\ast(D^*(H_X\otimes B)^G/C^*(H_X\otimes B)^G ) \ar[d]^-{\cong}_-{\iota} \\ & K_\ast(\mathcal{D}_T(H_X\otimes B)^G_{\mathrm{proper}}/ \mathcal{C}_T(H_X \otimes B)^G_{\mathrm{proper}}) \ar[d]^-{\cong}_{\eta^{-1}} \\ K_{\ast+1}(\mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}}) & \ar[l]^-{\cong}_-{\partial} K_\ast(\mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}}/ \mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}}) } \] where $\iota$ is the natural inclusion as constant functions. \end{proposition} \begin{proof} Combine Lemma \ref{lem_eta}, Lemma \ref{lem_boundary_isom}, Lemma \ref{lem_constant_isom} and Lemma \ref{lem_eta0}. \end{proof} \section{$\rho_X$ is an isomorphism, part III} Let us consider, in general, an $X$-$G$-Hilbert $B$-module $\mathcal{E}$, that is, a (countably generated) $G$-Hilbert $B$-module equipped with a non-degenerate representation of the $G$-$C^*$-algebra $C_0(X)$. The support and the propagation of operators between two $X$-$G$-Hilbert $B$-modules are defined in the obvious way. We let $\tilde{\mathcal{E}}$ be the $X$-$G$-Hilbert $B$-module $\mathcal{E}\otimes L^2(G)$ equipped with the unitary representation \[ G\ni g\mapsto u_g\otimes \lambda_g, \] where $u_g$ is the unitary on $\mathcal{E}$ corresponding to $g\in G$ and $\lambda_g$ is the left-translation by $g$, and with the structure map \[ C_0(X) \ni \phi \mapsto \phi\otimes 1. \] For any cut-off function $c$ on $X$, we have a $G$-equivariant isometry $V_c\in \mathfrak{L}(\mathcal{E}, \tilde{\mathcal{E}})$ defined by sending $v \in \mathcal{E}$ to \[ (V_c(v)\colon h \mapsto h(c)v ) \in \tilde{\mathcal{E}}=L^2(G, \mathcal{E}). \] The isometry $V_c$ is a strict cover of the identity, i.e.\ it commutes with $C_0(X)$. \begin{lemma}\label{lem_absorbing} Let $G$ be a locally compact group and $X$ be a $G$-compact, proper $G$-space.
Let $H_X$ be an $X$-$G$-module which is ample as an $X$-module and let $\mathcal{E}$ be the $X$-$G$-Hilbert $B$-module $\tilde H_X\otimes B$. For any $X$-$G$-Hilbert $B$-module $\mathcal{E}_0$, there is a sequence $V_n$ of $G$-equivariant isometries in $\mathfrak{L}(\mathcal{E}_0, \mathcal{E})$ such that for any $\phi \in C_0(X)$, \begin{itemize} \item $[V_n, \phi] \in \mathfrak{K}(\mathcal{E}_0, \mathcal{E})$, \item $\lim_{n\to \infty}||[V_n, \phi]|| = 0$. \end{itemize} \end{lemma} \begin{proof} Let $c \in C_c(X)$ be a cut-off function and $X_0$ be the support of $c$. Let $W_n \in \mathfrak{L}(\mathcal{E}_0, H_X\otimes B)$ be a sequence of isometries satisfying the following three conditions: for any $\phi$ in $C_0(X)$, \begin{itemize} \item $[W_n, \phi] \in \mathfrak{K}(\mathcal{E}_0, H_X\otimes B)$, \item $\limn||[W_n, \phi]|| = 0$, \item $W_nc = \chi W_n c$ for some $\chi \in C_c(X)$. \end{itemize} We explain how we can find such $W_n$. Fix a metric $d$ on $X^+$ and let $\delta>0$ be small enough so that the $\delta$-neighborhood of $X_0$ is relatively compact in $X$. Let $(\phi_i)_{i\in S}$ be a finite partition of unity in $C(X^+)$ such that the support of each $\phi_i$ is contained in a ball of radius $\delta/2$. Let $X_i$ be the support of $\phi_i$ and let $\mathcal{E}_0^{(i)}=C(X^+)\phi_i\mathcal{E}_0$, which is naturally an $X_i$-module. Then, we have an isometry \[ \Phi\colon \mathcal{E}_0 \to \bigoplus_{i\in S} \mathcal{E}_0^{(i)}, \,\,\, v\mapsto (\phi^{1/2}_iv). \] Let $H_{X_i}=C(X^+)\phi_iH_X$, which is naturally an ample $X_i$-module. We take a sequence $W_n^{(i)} \in \mathfrak{L}(\mathcal{E}^{(i)}_0, H_{X_i}\otimes B)$ of isometries satisfying: \begin{itemize} \item For each $n$, $(W_n^{(i)})_{i \in S}$ have mutually orthogonal ranges in $H_X\otimes B$, \item $[W^{(i)}_n, \phi] \in \mathfrak{K}(\mathcal{E}^{(i)}_0, H_{X_i}\otimes B)$ for any $\phi$ in $C(X_i)$, \item $\limn||[W^{(i)}_n, \phi]|| = 0$ for any $\phi$ in $C(X_i)$.
\end{itemize} Such isometries exist because $H_{X_i}$ is an ample $C(X_i)$-module (see \cite[Theorem 2.3]{DWW18}). Let $W_n$ be the composition of $\Phi$ and the (orthogonal) sum of the $W_n^{(i)}$. We can see that the $W_n$ are isometries satisfying the three conditions above. We now define a sequence $V_n$ of $G$-equivariant isometries in $\mathfrak{L}(\mathcal{E}_0, \mathcal{E})$ as the composition \[ V_n=\tilde W_nV_c\colon \mathcal{E}_0 \to \tilde{\mathcal{E}}_0 \to \tilde H_X\otimes B \] where the $G$-equivariant isometry $\tilde W_n\colon \tilde{\mathcal{E}}_0 \to \tilde H_X\otimes B$ is of the form \[ (\tilde W_n\colon g \mapsto g(W_n) ) \in C_{b, \mathrm{SOT}^*}(G, \mathfrak{L}(\mathcal{E}_0, H_X\otimes B)). \] Here, $g(W_n)=u'_gW_nu^{-1}_{g}$ where $u'_g$ (resp.\ $u_g$) is the unitary on $H_X\otimes B$ (resp.\ on $\mathcal{E}_0$) corresponding to $g$ in $G$. Note that $g(W_n)$ is adjointable although $u'_g$ and $u_g$ may not be. We also note that $g\mapsto g(W_n)$ is not necessarily norm-continuous ($W_n$ may not be $G$-continuous). We show that $V_n$ satisfies the two desired conditions. Concretely, $V_n$ maps $v\in \mathcal{E}_0$ to \[ (h\mapsto h(\chi) h(W_n) h(c) v) \in L^2(G, H_X\otimes B). \] Let $\phi\in C_c(X)$. Then, $[V_n, \phi]$ maps $v\in \mathcal{E}_0$ to \[ (h\mapsto h(\chi) [h(W_n), \phi] h(c) v) \in L^2(G, H_X\otimes B), \] and we note that $h(\chi) [h(W_n), \phi] h(c)=0$ for $h\in G\backslash K$ for some compact subset $K$ of $G$ which depends only on the support of $\phi$ (and $c$ and $\chi$) but not on $n$. Now, we have \[ [V_n, \phi]^\ast [V_n, \phi] = \int_{h\in K} h(c)[h(W_n), \phi]^\ast h(\chi^2) [h(W_n), \phi] h(c) d\mu_G(h). \] We note $(h \mapsto [h(W_n), \phi]) \in C_b(G, \mathfrak{K}(\mathcal{E}_0, H_X\otimes B))$ since we have $(h\mapsto [W_n, h^{-1}(\phi)]) \in C_b(G, \mathfrak{K}(\mathcal{E}_0, H_X\otimes B))$. We also have $\limn||[W_n, h^{-1}(\phi)]||= 0$ uniformly in $h \in K$.
From these, we see that \begin{itemize} \item $[V_n, \phi]^\ast [V_n, \phi] \in \mathfrak{K}(\mathcal{E}_0)$, \item $\limn||[V_n, \phi]^\ast [V_n, \phi] || = 0$. \end{itemize} We are done. \end{proof} Let $H_X$ be an $X$-$G$-module which is ample as an $X$-module. Let \[ \pi\colon C_0(X) \to \mathfrak{L}(\tilde H_X\otimes B) \] be the structure map for the $X$-$G$-Hilbert $B$-module $\tilde H_X\otimes B$. Note that we may view $\pi$ as a non-degenerate representation \[ \pi\colon C_0(X) \to M(\mathfrak{K}(\tilde H_X)\otimes B) \] and $B'=\mathfrak{K}(\tilde H_X)\otimes B$ is $G$-stable in the sense that $B'\otimes \mathfrak{K}(L^2(G)) \cong B'$ as $G$-$C^*$-algebras. In general, for any separable $G$-$C^*$-algebra $A$ and for any separable $G$-stable $G$-$C^*$-algebra $B'$, let us say that a non-degenerate, $G$-equivariant representation \[ \pi\colon A \to M(B') \] is non-degenerately $G$-equivariantly absorbing (c.f.\ \cite[Section 2]{Thomsen01}) if for any non-degenerate $G$-equivariant representation \[ \pi_0\colon A\to M(B'), \] there is a sequence $u_n$ of $G$-equivariant unitaries in $\mathfrak{L}(B'\oplus B', B')$ such that for any $a\in A$, \begin{itemize} \item $u_n(\pi_0(a)\oplus \pi(a)) - \pi(a)u_n \in \mathfrak{K}(B'\oplus B', B')$, \item $\limn|| u_n(\pi_0(a)\oplus \pi(a)) - \pi(a)u_n || = 0$. \end{itemize} It is routine to deduce from Lemma \ref{lem_absorbing} that the structure map $\pi$ for $\tilde H_X\otimes B$ is non-degenerately $G$-equivariantly absorbing. \begin{proposition}\label{prop_absorbing} For any $X$-$G$-module $H_X$ which is ample as an $X$-module, the structure map \[ \pi\colon C_0(X) \to M(B') \] for the $X$-$G$-Hilbert $B$-module $\tilde H_X\otimes B$, where $B'=\mathfrak{K}(\tilde H_X)\otimes B$, is non-degenerately $G$-equivariantly absorbing. \end{proposition} \begin{proof} We use a standard trick (see the proof of \cite[Corollary 2, p341]{Arveson77}).
Let $\pi_0\colon C_0(X) \to M(B')$ be a non-degenerate, $G$-equivariant representation and let $\pi_0^\infty \colon C_0(X) \to M(B')$ be its amplification, so that $\pi_0\oplus \pi_0^\infty$ is unitarily equivalent to $\pi_0^\infty$. Lemma \ref{lem_absorbing} gives a sequence $v_n$ of $G$-equivariant isometries in $\mathfrak{L}(B', B')$ such that for any $\phi \in C_0(X)$, \begin{itemize} \item $v_n\pi^\infty_0(\phi) - \pi(\phi)v_n \in \mathfrak{K}(B', B')$, \item $\limn|| v_n\pi^\infty_0(\phi) - \pi(\phi)v_n || = 0$. \end{itemize} Let $p_n=1-v_nv_n^\ast$ and let $\sigma_n$ be the $G$-equivariant c.c.p.\ map from $C_0(X)$ to $\mathfrak{L}(p_nB')$ defined by $\sigma_n(\phi)=p_n\pi(\phi)p_n$. We set $w_n$ to be the unitary in $\mathfrak{L}(p_nB' \oplus B', B')$ given by the direct sum of the identity map on $p_nB'$ and $v_n$. Then, $w_n$ is a $G$-equivariant unitary in $\mathfrak{L}(p_nB' \oplus B', B')$ such that for any $\phi \in C_0(X)$, \begin{itemize} \item $w_n(\sigma_n(\phi)\oplus \pi^\infty_0(\phi)) - \pi(\phi)w_n \in \mathfrak{K}(p_nB' \oplus B', B')$, \item $\limn|| w_n(\sigma_n(\phi)\oplus \pi^\infty_0(\phi)) - \pi(\phi)w_n || = 0$. \end{itemize} Define unitaries $u_n$ in $\mathfrak{L}(B' \oplus B', B')$ by the composition \[ \xymatrix{ B' \oplus B' \ar[r]^-{1_{B'}\oplus w_n^\ast} & B' \oplus (p_nB' \oplus B') \ar[r]^-{\cong} & p_nB' \oplus (B' \oplus B') \ar[r]^-{1_{p_nB'} \oplus w} & p_nB' \oplus B' \ar[r]^-{w_n} & B' } \] where $w\in \mathfrak{L}(B'\oplus B', B')$ is a unitary equivalence from $\pi_0\oplus \pi_0^\infty$ to $\pi_0^\infty$. Then, we have for any $\phi \in C_0(X)$, \begin{itemize} \item $u_n(\pi_0(\phi)\oplus \pi(\phi)) - \pi(\phi)u_n \in \mathfrak{K}(B'\oplus B', B')$, \item $\limn|| u_n(\pi_0(\phi)\oplus \pi(\phi)) - \pi(\phi)u_n || = 0$.
\end{itemize} \end{proof} For any $G$-equivariant representation \[ \pi\colon A \to M(B'), \] we define $\mathcal{D}(\pi)^G$ to be the $C^*$-algebra of $G$-equivariant elements $x$ in $M(B')$ such that $[x, \pi(a)] \in B'$ for any $a\in A$, and $\mathcal{C}(\pi)^G$ to be the $C^*$-algebra of $G$-equivariant elements $x$ in $M(B')$ such that $x\pi(a), \pi(a)x \in B'$ for any $a\in A$. We have a group homomorphism \[ \Theta\colon K_1(\mathcal{D}(\pi)^G/\mathcal{C}(\pi)^G) \to KK_0^G(A, B') \] which sends a unitary $u$ in $M_n(\mathcal{D}(\pi)^G/\mathcal{C}(\pi)^G)$ to the following even Kasparov triple for $KK_0^G(A, B')$, \[ ( B'^{\oplus n}\oplus B'^{\oplus n}, \pi^{\oplus n } \oplus \pi^{\oplus n }, \, \begin{bmatrix} 0 & v\\ v^\ast & 0 \end{bmatrix} ) \] where $v$ is any lift of $u$ in $M_n(\mathcal{D}(\pi)^G)$. Similarly, we have a group homomorphism \[ \Theta \colon K_0(\mathcal{D}(\pi)^G/\mathcal{C}(\pi)^G) \to KK_1^G(A, B') \] which sends a projection $p$ in $M_n(\mathcal{D}(\pi)^G/\mathcal{C}(\pi)^G)$ to the following odd Kasparov triple for $KK_1^G(A, B')$, \[ ( B'^{\oplus n}, \pi^{\oplus n }, 2P-1) \] where $P$ is any (self-adjoint) lift of $p$ in $M_n(\mathcal{D}({\pi})^G)$. We recall some terminology from Section 3 of \cite{Thomsen05}. An even Kasparov triple $\mathcal{E}=(E, \phi, F)$ for $KK_0^G(A, B')$ is called elementary if $E=B'\oplus B'$ and it is called essential if $\phi(A)E=E$. Similarly, an odd Kasparov triple $\mathcal{E}=(E, \phi, P)$ for $KK_1^G(A, B')$ is called elementary if $E=B'$ and it is called essential if $\phi(A)E=E$. \begin{proposition}(\cite[Theorem 3.9, Theorem 4.2]{Thomsen05}) Let $A$ be a separable proper $G$-$C^*$-algebra and $B'$ be a separable, $G$-stable $G$-$C^*$-algebra. \begin{enumerate} \item[(1a)] Every element of $KK^G_0 (A, B')$ is represented by an even Kasparov triple which is both elementary and essential.
\item [(1b)] Two elementary and essential even Kasparov triples $\mathcal{E}_1$ and $\mathcal{E}_2$ define the same element of $KK^G_0 (A, B')$ if and only if there are degenerate even Kasparov triples $\mathcal{D}_1$ and $\mathcal{D}_2$ for $KK^G_0 (A, B')$ which are both elementary and essential, such that $\mathcal{E}_1\oplus \mathcal{D}_1$ is operator homotopic to $\mathcal{E}_2\oplus \mathcal{D}_2$. \item [(2a)] Every element of $KK^G_1 (A, B')$ is represented by an odd Kasparov triple which is both elementary and essential. \item [(2b)] Two elementary and essential odd Kasparov triples $\mathcal{E}_1$ and $\mathcal{E}_2$ define the same element of $KK^G_1 (A, B')$ if and only if there are degenerate odd Kasparov triples $\mathcal{D}_1$ and $\mathcal{D}_2$ for $KK^G_1 (A, B')$ which are both elementary and essential, such that $\mathcal{E}_1\oplus \mathcal{D}_1$ is operator homotopic to $\mathcal{E}_2\oplus \mathcal{D}_2$. \end{enumerate} \end{proposition} \begin{proof} In \cite{Thomsen05}, the proof was given for $G$-stable $A$, but the only property of $A$ used in the proof is that for any $G$-equivariant representation $\phi\colon A \to \mathfrak{L}(E)$ on a $G$-Hilbert $B'$-module $E$ with $\phi(A)E=E$, we have $E\oplus B'\cong B'$ (see \cite[Theorem 2.8, Corollary 2.9]{Thomsen05}). Any proper $G$-$C^*$-algebra has this property (see \cite[Proposition 8.6]{Meyer00}). \end{proof} \begin{theorem}(c.f.\ \cite[Theorem 3.2]{Thomsen01}) \label{thm_KKduality} Let $A$ be a separable proper $G$-$C^*$-algebra and $B'$ be a separable, $G$-stable $G$-$C^*$-algebra. Suppose a non-degenerate $G$-equivariant representation $\pi\colon A \to M(B')$ is non-degenerately $G$-equivariantly absorbing. Then, the group homomorphisms $\Theta$ induce isomorphisms \[ K_1(\mathcal{D}(\pi)^G/\mathcal{C}(\pi)^G)\cong KK_0^G(A, B') \,\,\, \text{and} \,\,\, K_0(\mathcal{D}(\pi)^G/\mathcal{C}(\pi)^G)\cong KK_1^G(A, B').
\] \end{theorem} \begin{proof} The proof of \cite[Theorem 3.2]{Thomsen01} works with minor changes, so we shall be brief. We give a proof for the isomorphism $K_1(\mathcal{D}(\pi)^G/\mathcal{C}(\pi)^G)\cong KK_0^G(A, B')$. The other one is parallel. Surjectivity: Take any even elementary, essential Kasparov triple $\mathcal{E}=(B'\oplus B', \pi_0\oplus \pi_1, \begin{bmatrix} 0 & x\\ x^\ast & 0 \end{bmatrix} )$ for $KK^G_0 (A, B')$. By adding an essential, degenerate Kasparov triple \[ \mathcal{D}= (B'^\infty \oplus B'^\infty, (\pi^\infty_0\oplus\pi^\infty_1)\oplus (\pi^\infty_0\oplus \pi^\infty_1), \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix} ) \oplus (B'\oplus B', \pi\oplus \pi, \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix} ) \] to $\mathcal{E}$, where $\pi^\infty_0\oplus \pi^\infty_1$ is the infinite direct sum of $\pi_0 \oplus \pi_1$, we see that the triple $\mathcal{E}\oplus \mathcal{D}$ is isomorphic to an essential triple $\mathcal{E}_1$ of the form $(B'\oplus B', \pi'\oplus \pi', \begin{bmatrix} 0 & v\\ v^\ast & 0 \end{bmatrix} )$ where $\pi'(a)-\pi(a) \in B'$ for any $a\in A$, i.e.\ $\mathcal{E}_1$ is a compact perturbation of the triple $(B'\oplus B', \pi\oplus \pi, \begin{bmatrix} 0 & v\\ v^\ast & 0 \end{bmatrix})$. Using a cut-off function (as usual), the latter triple is a compact perturbation of a triple of the same form where $v$ is $G$-equivariant. Such a triple is in the image of $\Theta$. Injectivity: We may assume $\pi$ is of infinite multiplicity. We take a unitary $u\in \mathcal{D}(\pi)^G/\mathcal{C}(\pi)^G$ and let $v \in \mathcal{D}(\pi)^G$ be any lift of $u$ (it is enough to consider $1\times 1$-matrices since $\mathcal{D}(\pi)^G/\mathcal{C}(\pi)^G$ is properly infinite). Suppose $\Theta(u)=0$ in $KK^G_0(A, B')$.
Then, there are essential, degenerate triples $\mathcal{D}_1, \mathcal{D}_2$ such that $(B'\oplus B', \pi\oplus \pi, \begin{bmatrix} 0 & v\\ v^\ast & 0 \end{bmatrix} )\oplus \mathcal{D}_1$ is operator homotopic to $(B'\oplus B', \pi\oplus \pi, \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix} )\oplus \mathcal{D}_2$. By adding infinitely many copies of $\mathcal{D}_1\oplus \mathcal{D}_2$ to both, we may assume that $\mathcal{D}_1=\mathcal{D}_2$. We can further arrange that $\mathcal{D}_1=\mathcal{D}_2$ are of the form $(B'\oplus B', \lambda\oplus \lambda, \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix} )$ with $\lambda$ of infinite multiplicity. By adding $(B'\oplus B', \pi\oplus \pi, \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix} )$, we can assume that there is a $G$-equivariant unitary $w\in M(B')$ such that $w\lambda(a)w^\ast - \pi(a) \in B'$ for any $a\in A$. From here, using the same reasoning as in the proof of \cite[Theorem 3.2]{Thomsen01}, we get a $G$-equivariant norm-continuous path $G_t$ in the matrix algebra $M_n(M(B'))$ such that $G_0=1_n$ and $G_1= v\oplus 1_{n-1}$ and \[ (G_t^\ast G_t - 1_n)\pi(a), (G_tG_t^\ast - 1_n)\pi(a), [\pi(a), G_t] \in M_n(B') \] for all $t$ and for all $a \in A$. The $G$-equivariance can be arranged by perturbing the operator homotopy into a $G$-equivariant one using a cut-off function $c$ for a proper $G$-space $X$ for which $A$ is a $G$-$C_0(X)$-algebra. More precisely, $G_t$ would be a homotopy between $G_0=1_n$ and $G_1= v'\oplus 1_{n-1}$ where $v'=\int_{g\in G}g(c)g(v)g(c)d\mu_G(g)=\int_{g\in G}g(c)vg(c)d\mu_G(g)$, but $v'$ is still a lift of $u$. This gives a path of unitaries in $M_n(\mathcal{D}(\pi)^G/\mathcal{C}(\pi)^G)$ connecting $u\oplus 1_{n-1}$ to $1_n$. \end{proof} \begin{corollary}\label{cor_canonical_isom} Let $G$ be a locally compact group, $X$ be a $G$-compact proper $G$-space and $B$ be a separable $G$-$C^*$-algebra. Let $H_X$ be an $X$-$G$-module which is ample as an $X$-module.
Let $\pi\colon C_0(X) \to \mathfrak{L}(\tilde H_X\otimes B)$ be the structure map for the $X$-$G$-Hilbert $B$-module $\tilde H_X\otimes B$. Then, there are canonical isomorphisms \[ \Theta\colon K_1(D^*(\pi)^G/C^*(\pi)^G) \cong KK_0^G(C_0(X), B), \] \[ \Theta\colon K_0(D^*(\pi)^G/C^*(\pi)^G) \cong KK_1^G(C_0(X), B). \] \end{corollary} \begin{corollary}\label{cor_sequence_isom} Let $G$ be a locally compact group, $X$ be a $G$-compact proper $G$-space and $B$ be a separable $G$-$C^*$-algebra. Let $H_X$ be a universal $X$-$G$-module. Assume that $H_X$ is of infinite multiplicity. Let $\pi\colon C_0(X) \to \mathfrak{L}(\tilde H_X\otimes B)$ be the structure map for the $X$-$G$-Hilbert $B$-module $\tilde H_X\otimes B$. We have the following sequence of isomorphisms. \[ \xymatrix{ KK_\ast^G(C_0(X), B) & \ar[l]^-{\cong}_-{\Theta} K_{\ast+1}(D^*(\pi)^G/C^*(\pi)^G ) \ar[d]^-{\cong}_-{\eta_0^{-1}} \\ & K_{\ast+1}(D^*(\tilde H_X\otimes B)^G/C^*(\tilde H_X\otimes B)^G ) \ar[d]^-{\cong}_-{\iota} \\ & K_{\ast+1}(\mathcal{D}_T(\tilde H_X\otimes B)^G_{\mathrm{proper}}/ \mathcal{C}_T(\tilde H_X \otimes B)^G_{\mathrm{proper}} ) \ar[d]^-{\cong}_{\eta^{-1}} \\ & K_{\ast+1}(\mathcal{D}_L(\tilde H_X\otimes B)^G_{\mathrm{proper}}/ \mathcal{C}_L(\tilde H_X\otimes B)^G_{\mathrm{proper}}) \ar[d]^-{\cong}_-{\partial} \\ K_\ast(RL^*_u(H_X\otimes B)\rtimes_rG) \ar[r]^-{\cong}_-{\rho_\ast} & K_{\ast}(\mathcal{C}_L(\tilde H_X\otimes B)^G_{\mathrm{proper}}) } \] In particular, for any universal $X$-$G$-module $H_X$, there is a canonical isomorphism \[ K_\ast(RL^*_u(H_X\otimes B)\rtimes_rG) \cong KK_\ast^G(C_0(X), B). \] \end{corollary} \begin{proof} Combine Theorem \ref{thm_magic}, Proposition \ref{prop_sequence_isom} and Corollary \ref{cor_canonical_isom}. 
\end{proof} \section{$\rho_X$ is an isomorphism, part IV} Now we are ready to show that the group homomorphism \[ \rho_X\colon K_\ast(RL^*_u(H_X\otimes B)\rtimes_rG) \to E_\ast^G(C_0(X), B) \cong KK_\ast^G(C_0(X), B) \] is an isomorphism for any $G$-compact proper $G$-space $X$, where $H_X$ is a universal $X$-$G$-module. For any $X$-$G$-module $H_X$, we have a group homomorphism \[ \iota_X\colon K_\ast(C_{L, u}^*(H_X\otimes B)^G ) \to E_\ast^G(C_0(X), B) \cong KK_\ast^G(C_0(X), B) \] defined in the same way as $\rho_X$ using a canonical asymptotic morphism \[ \pi_X\otimes \iota \colon C_0(X) \otimes C_{L, u}^*(H_X\otimes B)^G \to \mathfrak{A}(\mathfrak{K}(H_X\otimes B)) \] such that the image of $\phi\otimes T$ is represented by \[ \phi T \in C_b([1, \infty), \mathfrak{K}(H_X\otimes B)). \] It is clear that we have $\rho_X= \iota_X\circ \rho_\ast$, that is, $\rho_X$ factors through the map \[ \rho_\ast \colon K_\ast(RL^*_u(H_X\otimes B)\rtimes_rG) \to K_\ast(C_{L, u}^*(\tilde H_X\otimes B)^G) \] induced by the right-regular representation, which is an isomorphism whenever $H_X$ is universal. \begin{lemma}(c.f.\ \cite[Theorem 5.1]{DWW18}) \label{lem_isom_commutes} Let $G$ be a locally compact group, $X$ be a $G$-compact proper $G$-space and $B$ be a separable $G$-$C^*$-algebra. Let $H_X$ be any $X$-$G$-module. Let $\pi\colon C_0(X) \to \mathfrak{L}(H_X\otimes B)$ be the structure map for the $X$-$G$-Hilbert $B$-module $H_X\otimes B$. Then, the following diagram commutes.
\[ \xymatrix{ K_{\ast+1}(D^*(H_X\otimes B)^G/C^*(H_X\otimes B)^G ) \ar[d]^-{\cong}_-{\eta_0} \ar[r]^-{\iota} & K_{\ast+1}(\mathcal{D}_T(H_X\otimes B)^G_{\mathrm{proper}}/ \mathcal{C}_T(H_X \otimes B)^G_{\mathrm{proper}}) \ar[d]^-{\cong}_{\eta^{-1}} \\ K_{\ast+1}(D^*(\pi)^G/C^*(\pi)^G ) \ar[d]^-{}_-{\Theta} & K_{\ast+1}(\mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}}/ \mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}}) \ar[d]^-{}_-{\partial} \\ KK_\ast^G(C_0(X), B) \ar[d]^-{\cong} & K_{\ast}(\mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}}) \ar[d]^-{}_-{=} \\ E_\ast^G(C_0(X), B) & \ar[l]_-{\iota_X} K_{\ast}(C^\ast_{L, u}(H_X\otimes B)^G). } \] \end{lemma} \begin{proof} The argument of the proof of \cite[Theorem 5.1]{DWW18} works with minor changes. We give a proof for the case $\ast=0$. The case $\ast=1$ is simpler, and it is the one of the two cases that is explained carefully in \cite{DWW18}. Take a unitary $\dot u$ in $D^*(H_X\otimes B)^G/C^*(H_X\otimes B)^G$ (for simplicity, we are taking a unitary in the $1\times 1$-matrix algebra). We first compute the image of $\dot u$ in $E_0^G(C_0(X), B)$ under the clockwise composition. The functional calculus for $\dot u$ gives a $\ast$-homomorphism \[ \Sigma \ni f \mapsto f(\dot u) \in D^*(H_X\otimes B)^G/C^*(H_X\otimes B)^G \] where $\Sigma \cong C_0(0, 1)$ is identified with $C_0(S^1\setminus\{1\})$. We let \[ \Sigma\ni f\mapsto f(u) \in D^*(H_X\otimes B)^G \] be its (not necessarily linear) continuous, bounded lift to $D^*(H_X\otimes B)^G$ (which exists by the Bartle--Graves theorem \cite[Theorem 4]{BartleGraves}). This is an abuse of notation: $f(u)$ is not the functional calculus applied to some element $u$. We may compose this map $f\mapsto f(u)$ with the c.c.p.\ map \[ x \mapsto \int_{g\in G}g(c)xg(c)d\mu_G(g) \] on $D^*(H_X\otimes B)^G$ to assume that the image of $\Sigma$ is properly supported as a set. Note that such a composition is still a lift of $f \mapsto f(\dot u)$.
Let $x \in \mathcal{C}_T(H_X \otimes B)^G_{\mathrm{proper}}$ be as given by Lemma \ref{lem_ex} for \[ D=\{ f(u) \in D^*(H_X\otimes B)^G \subset \mathcal{D}_T(H_X\otimes B)^G_{\mathrm{proper}} \mid f\in \Sigma \}. \] Then, \[ \Sigma\ni f\mapsto (1-x)f(u) \in \mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}} \] is a lift of the $\ast$-homomorphism from $\Sigma$ to $\mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}}/\mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}}$ associated to the unitary $\dot u$, or more precisely the unitary $\iota(\dot u) \in \mathcal{D}_T(H_X\otimes B)^G_{\mathrm{proper}}/ \mathcal{C}_T(H_X \otimes B)^G_{\mathrm{proper}} = \mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}}/ \mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}}$. The boundary map $\partial$ sends the element $[\dot u]$ in $K_1(\mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}}/\mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}})$ to the element in $K_0(\mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}})$ represented by an asymptotic morphism \[ h\otimes f \mapsto h(v_t)(1-x)f(u) \] from $\Sigma^2$ to $\mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}}$. Here, we let $M$ be a separable $C^*$-subalgebra of $\mathcal{D}_L(H_X\otimes B)^G_{\mathrm{proper}}$ generated by $(1-x)f(u)$ and $v_t$ is a continuous approximate unit for $M\cap \mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}}$, quasi-central with respect to $M$. The morphism $\iota_X$ sends this element in $K_0(\mathcal{C}_L(H_X\otimes B)^G_{\mathrm{proper}})$ to the element in $E_0^G(C_0(X), B)$ represented by an asymptotic morphism \[ h\otimes f\otimes \phi \mapsto h(v_t(s(t)))(1-x(s(t)))f(u) \phi \] from $\Sigma^2\otimes C_0(X)$ to $\mathfrak{K}(H_X\otimes B)$ where $t\mapsto s(t)$ is a continuous, increasing map on $[1 ,\infty)$ with $s(t)\to \infty$ sufficiently fast as $t\to \infty$. 
On the other hand, the downward composition sends the element $[\dot u]$ in $K_1(D^*(H_X\otimes B)^G/C^*(H_X\otimes B)^G)$ to the element in $E_0^G(C_0(X), B)$ represented by an asymptotic morphism \[ h\otimes f\otimes \phi \mapsto h(u_t) f(u)\phi \] from $\Sigma^2\otimes C_0(X)$ to $\mathfrak{K}(H_X\otimes B)$ where $u_t$ is an asymptotically $G$-equivariant, continuous approximate unit (see \cite[Definition 6.2]{HigsonKasparov}) in $\mathfrak{K}(H_X\otimes B)$ which is quasi-central with respect to $C_0(X)$ and $f(u)$ for $f\in \Sigma$. We can take $u_t$ so that we have \[ ||(1-u_t)x(s(t))\phi || \to 0 \] as $t\to \infty$ for any $\phi \in C_0(X)$. Then, the above asymptotic morphism is asymptotic to the one \[ h\otimes f\otimes \phi \mapsto h(u_t)(1-x(s(t)))f(u)\phi. \] The two asymptotic morphisms \[ h\otimes f\otimes \phi \mapsto h(u_t)(1-x(s(t)))f(u)\phi, \,\,\, h\otimes f\otimes \phi \mapsto h(v_t(s(t)))(1-x(s(t)))f(u) \phi \] are homotopic through an asymptotic morphism \[ h\otimes f\otimes \phi \mapsto h(w^{r}_t)(1-x(s(t)))f(u)\phi, \] from $\Sigma^2\otimes C_0(X)$ to $\mathfrak{K}(H_X\otimes B)\otimes C[0 ,1]$ where \[ w^{r}_t = (1-r)v_t(s(t)) + ru_t \] for $0\leq r \leq 1$. That this is well-defined can be seen from the fact that \[ f \mapsto (1-x(s(t)))f(u) \] is asymptotically a $\ast$-homomorphism modulo the product with $\phi\in C_0(X)$ and $h(w^{r}_t)$, and from the fact that \[ || [h(w^{r}_t) (1-x(s(t)))f(u), \phi]|| \to 0, \,\,\, ||[h(w^{r}_t), (1-x(s(t)))f(u)]|| \to 0 \] as $t\to \infty$. \end{proof} \begin{theorem} \label{thm_main_isom} Let $G$ be a locally compact group. The natural transformation \[ \rho_X\colon \bD_\ast^{B, G}(X) \to \varinjlim_{Y\subset X, \mathrm{Gcpt}}E_\ast^G(C_0(Y), B) \cong \varinjlim_{Y\subset X, \mathrm{Gcpt}}KK_\ast^G(C_0(Y), B) \] is an isomorphism for any proper $G$-space $X$ and for any separable $G$-$C^*$-algebra $B$.
\end{theorem} \begin{proof} The case when $X$ is $G$-compact follows from Proposition \ref{prop_ucsameB}, Corollary \ref{cor_sequence_isom} and Lemma \ref{lem_isom_commutes}. The general case follows since the transformation is natural with respect to $X$ and both sides are representable by $G$-compact subsets. \end{proof} \begin{theorem} \label{thm_main_equivalent} Let $G$ be a locally compact group, $X$ be a proper $G$-space and $B$ be a separable $G$-$C^*$-algebra. The forget-control map \[ \mathcal{F}\colon \bD_\ast^{B, G}(X) \to K_\ast(B\rtimes_rG) \] is naturally equivalent to the Baum--Connes assembly map \[ \mu_{X}^{B, G}\colon \varinjlim_{Y\subset X, \mathrm{Gcpt}}KK_\ast^G(C_0(Y), B) \to K_\ast(B\rtimes_rG). \] That is, there is a natural isomorphism \[ \rho_X\colon \bD_\ast^{B, G}(X) \to \varinjlim_{Y\subset X, \mathrm{Gcpt}}KK_\ast^G(C_0(Y), B) \] of the functors from $\mathcal{PR}^G$ to $\mathcal{GA}$ and the following diagram commutes \begin{equation*} \xymatrix{ \bD_\ast^{B, G}(X) \ar[dr]^{\rho_X}_-{\cong} \ar[rr]^{\mathcal{F}} & & K_\ast(B\rtimes_rG) \\ & \varinjlim_{Y\subset X, \mathrm{Gcpt}}KK_\ast^G(C_0(Y), B). \ar[ur]^{\mu^{B, G}_X} & } \end{equation*} \end{theorem} \begin{proof} Combine Theorem \ref{thm_forget_factor} and Theorem \ref{thm_main_isom}. \end{proof} As before, let $RL^0_c(H_X\otimes B)$ be the kernel of the evaluation map $\mathrm{ev}_1$ on $RL_c^\ast(H_X\otimes B)$ (see the end of Section \ref{sec_forget}). \begin{corollary} \label{cor_N} Let $G$ be a locally compact group and $B$ be a separable $G$-$C^*$-algebra. The Baum--Connes assembly map $\mu^{B, G}_r$ is an isomorphism if and only if \[ K_\ast(RL^0_c(H_X\otimes B)\rtimes_rG)=0 \] for a universal $X$-$G$-module $H_X$ for $X=\underline{E}G$. \end{corollary} \section{$X$-$G$-localized element in $KK^G(\C, \C)$} We recall some material on Kasparov's $G$-equivariant $KK$-theory from \cite{Kasparov88} (see also \cite{Blackadar}).
The graded (minimal) tensor product is denoted by $\hat\otimes$. Let $G$ be a second countable, locally compact group. For any (not necessarily separable) graded $G$-$C^*$-algebras $A$ and $B$, Kasparov defines the abelian group $KK^G(A, B)=KK_0^G(A, B)$ (\cite[Definition 2.2]{Kasparov88}) as the group of homotopy classes (up to isomorphisms) of even Kasparov triples $(E, \pi, T)$ where $E$ is a countably generated, graded $G$-Hilbert $B$-module which is equipped with a (graded, $G$-equivariant) $\ast$-homomorphism $\pi\colon A\to \mathfrak{L}(E)$ and where $T$ is an odd, $G$-continuous operator in $\mathfrak{L}(E)$ such that for any $a\in A$ and $g\in G$, (we write $a=\pi(a)$) \[ a(1-T^2), \,\,[a, T], \,\,a(T-T^\ast),\,\, a(g(T)-T) \in \mathfrak{K}(E). \] We often assume that $T$ is self-adjoint without loss of generality. The homotopy is defined by even Kasparov triples for $KK^G(A, B[0 ,1])$ for $B[0, 1]=B\otimes C[0 ,1]$ and the group structure is given by the direct sum operation. For any $\ast$-homomorphisms $\phi\colon A_1 \to A_2$, $\psi\colon B_1 \to B_2$, we have canonically defined homomorphisms (\cite[Definition 2.5]{Kasparov88}) \[ \phi^\ast\colon KK^G(A_2, B) \to KK^G(A_1, B), \,\,\, \psi_\ast\colon KK^G(A, B_1) \to KK^G(A, B_2) \] and the group $KK^G(A, B)$ is homotopy invariant in both variables. If $D$ is $\sigma$-unital, we have a canonically defined homomorphism \[ \sigma_D\colon KK^G(A, B) \to KK^G(A\hat\otimes D, B\hat\otimes D) \] which sends $(E, \pi, T)$ to $(E\hat\otimes D, \pi\hat\otimes \mathrm{id}_D, T\hat\otimes 1)$. For separable $A$, Kasparov defines a well-defined, bilinear product (the Kasparov product, see \cite[Definition 2.10, Theorem 2.11]{Kasparov88}) \[ KK^G(A, B_1) \otimes KK^G(B_1, B_2) \to KK^G(A, B_2), \,\,\, (x_1, x_2) \mapsto x_1\otimes_{B_1}x_2 \] for any $B_1, B_2$.
The Kasparov product satisfies several functorial properties, assuming separability or $\sigma$-unitality for the relevant slots (see \cite[Theorem 2.14]{Kasparov88}). The descent homomorphism \cite[Section 3.11]{Kasparov88} \[ j^G_r\colon KK^G(A, B) \to KK(A\rtimes_rG, B\rtimes_rG) \] is defined for any $A, B$ and it satisfies functorial properties, assuming separability or $\sigma$-unitality for the relevant slots (see \cite[Theorem 3.11]{Kasparov88}). The abelian group $KK^G_1(A, B)$ is defined to be \[ KK^G_1(A, B) = KK^G(A\hat\otimes \C_1, B)= KK^G(A, B\hat\otimes \C_1) \] where $\C_1$ is the first Clifford algebra. We define \[ K_\ast(A) = KK_\ast(\C, A) \] for any graded $C^*$-algebra $A$ and when $A$ is ungraded, this group is canonically isomorphic to the topological K-theory of $A$. In particular, a cycle for the commutative ring $R(G)=KK^G(\C, \C)$ is a pair $(H, T)$ of a (separable) graded $G$-Hilbert space $H$ and an odd, self-adjoint, $G$-continuous operator $T$ on $H$ satisfying for any $g\in G$, \[ 1-T^2, g(T)-T \in \mathfrak{K}(H). \] We call such a pair a Kasparov cycle for $KK^G(\C, \C)$. The cycle $(\C, 0)$ defines the unit of the ring $R(G)$, denoted by $1_G$. For a proper $G$-space $X$, a graded $X$-$G$-module $H_X$ is a graded $G$-Hilbert space equipped with a non-degenerate (graded, $G$-equivariant) representation of $C_0(X)$. It is just a pair $H_X=H_X^{(0)}\oplus H_X^{(1)}$ of $X$-$G$-modules. For any graded $X$-$G$-module $H_X$, the representable localization algebra $RL^*_c(H_X)$ is naturally a graded $G$-$C^*$-algebra. \begin{definition}\label{def_XGlocalized} An $X$-$G$-localized Kasparov cycle for $KK^G(\C, \C)$ is a pair $(H_X, T)$ of a graded $X$-$G$-module $H_X$ and an odd, self-adjoint, $G$-continuous element $T$ in the multiplier algebra $M(RL_c^*(H_X))$ satisfying for any $g\in G$, \[ 1-T^2, \,\,\, g(T)-T \in RL^*_c(H_X).
\] \end{definition} \begin{remark} Although $RL^*_c(H_X)$ is not $\sigma$-unital, one may think of $(RL_c^*(H_X), T)$ as a cycle for $KK^G(\C, RL^*_c(H_X))$ (see \cite[Section 3]{Skandalis85}). \end{remark} The evaluation \[ \mathrm{ev}_{t=1}\colon RL^*_c(H_X) \to \mathfrak{K}(H_X) \] extends to \[ \mathrm{ev}_{t=1}\colon M(RL^*_c(H_X)) \to \mathfrak{L}(H_X). \] For any $T \in M(RL^*_c(H_X))$, we write $T_1 \in \mathfrak{L}(H_X)$ for its image under $\mathrm{ev}_{t=1}$. \begin{lemma} Let $X$ be a proper $G$-space and $(H_X, T)$ be an $X$-$G$-localized Kasparov cycle for $KK^G(\C, \C)$. Then, the pair $(H_X, T_1)$ is a cycle for $KK^G(\C, \C)$. \end{lemma} \begin{proof} The operator $T_1$ is odd, self-adjoint and $G$-continuous, and since $\mathrm{ev}_{t=1}$ maps $RL^*_c(H_X)$ into $\mathfrak{K}(H_X)$, we have $1-T_1^2, g(T_1)-T_1 \in \mathfrak{K}(H_X)$ for any $g\in G$. \end{proof} \begin{definition}\label{def_XGlocalized_element} Let $X$ be a proper $G$-space. We say that an element $x$ in $KK^G(\C, \C)$ is $X$-$G$-localized if there is an $X$-$G$-localized Kasparov cycle $(H_X, T)$ for $KK^G(\C, \C)$ such that \[ [H_X, T_1] = x \,\,\text{in}\,\,\, KK^G(\C, \C). \] \end{definition} We say that $x\in KK^G(\C, \C)$ factors through a $G$-$C^*$-algebra $B$ if there are $y \in KK^G(\C, B)$ and $z\in KK^G(B, \C)$ such that $x=y\otimes_Bz$. By definition, $x\in KK^G(\C, \C)$ is the gamma element if it factors through a (separable) proper $G$-$C^*$-algebra $A$ and it satisfies $x=1_K$ in $R(K)$ for any compact subgroup $K$ of $G$. \begin{proposition}\label{prop_XGlocalize} Suppose that an element $x \in KK^G(\C, \C)$ factors through a separable $G$-$C_0(X)$-algebra $A$ for a proper $G$-space $X$. Then $x$ is $X$-$G$-localized. \end{proposition} The following is an immediate corollary. \begin{theorem}\label{thm_XGgamma0} The gamma element $\gamma$ for $G$, if it exists, is $X$-$G$-localized for $X=\underline{E}G$. \end{theorem} Before giving a proof of Proposition \ref{prop_XGlocalize}, we prove two lemmas. Let $A$ be a graded $G$-$C_0(X)$-algebra and $(H, \pi, F)$ be a Kasparov triple for $KK^G(A, \C)$. If $\pi$ is non-degenerate, $\pi$ induces a natural non-degenerate representation of $C_0(X)$ on $H$.
We view $H$ naturally as a (graded) $X$-$G$-module through this representation and set $H_X=H$. Any element $a$ in $M(A)$ commutes with $C_0(X)$ and hence $a$ (as a constant function) is naturally an element in $M(RL^*_c(H_X))\subset \mathfrak{L}(H_X\otimes C_0[1, \infty))$. \begin{lemma}\label{lem_XG1} Let $X$ be a proper $G$-space and $A$ be a graded, separable $G$-$C_0(X)$-algebra. Let $(H, \pi, F_0)$ be a Kasparov triple for $KK^G(A, \C)$ with $\pi$ non-degenerate. We view $H$ naturally as a graded $X$-$G$-module through this representation and set $H_X=H$. Then, there is an odd, $G$-continuous, self-adjoint element $F$ in $M(RL^*_c(H_X))\subset \mathfrak{L}(H_X\otimes C_0[1, \infty))$ such that \begin{enumerate}[(I)] \item $a(F - F_0) \in C_b([1, \infty), \mathfrak{K}(H_X))$ for any $a\in A$, where $F_0\in \mathfrak{L}(H_X)$ is regarded as a constant function in $\mathfrak{L}(H_X\otimes C_0[1, \infty))$, \item for any $a\in A$ and $g\in G$, \[ a(1-F^2), [a, F], a(g(F)-F) \in RL^*_c(H_X). \] \end{enumerate} \end{lemma} \begin{proof} We may assume that $F_0$ is self-adjoint. We may also assume that $F_0$ is $G$-equivariant, using the standard trick: Let $c\in C_b(X)$ be a cut-off function on $X$. Then, \[ F_0'=\int_{g\in G} g(c)g(F_0)g(c) d\mu_G(g) \] is $G$-equivariant and satisfies \[ a(F_0' -F_0) \in \mathfrak{K}(H_X) \] for any $a\in A$. Here, for the isometry $V_c\colon H_X\to H_X\otimes L^2(G)$ defined by \[ V_c\colon v\mapsto (g\mapsto g(c)v ), \] we have $F_0'=V_c^\ast\tilde FV_c$ where $\tilde F$ on $H_X\otimes L^2(G)$ is the diagonal operator $(g\mapsto g(F_0)) \in C_b(G, \mathfrak{L}(H_X))$. That $a(F_0' -F_0) \in \mathfrak{K}(H_X)$ can be checked by \[ a(F_0' -F_0) = \int_{g\in G} a g(c)(g(F_0)-F_0)g(c) + ag(c)[F_0, g(c)] d\mu_{G}(g) \] which is norm convergent in $\mathfrak{K}(H_X)$ for compactly supported $a$. We remark that a cut-off function $c$ may not be $G$-continuous in general unless $X$ is $G$-compact.
Nonetheless, the maps \[ g \mapsto g(c)a, \,\,\, g\mapsto g(c)S \] are norm-continuous for any $a\in A$ and for any $S\in \mathfrak{K}(H_X)$. An analogous remark applies to some of the arguments in the subsequent proof. Let $X^+$ be the one-point compactification of $X$ and fix any metric on $X^+$ which is compatible with its topology. Let $A$ be the support of a cut-off function $c$ on $X$. The closed set $A\subset X$ is not compact (unless $X$ is $G$-compact) but it satisfies that for any compact subset $X_0$ of $X$, $A\cap gX_0=\emptyset$ for $g$ outside a compact set of $G$. We have $GA=X$. For $n\geq1$, let $\mathcal{U}_n$ be a finite open cover of $X^+$ such that each $U\in \mathcal{U}_n$ is contained in a ball of radius $1/n$. Let $(\phi^0_{k, n})_{k \in S_n}$ be a finite partition of unity in $C(X^+)=C_0(X) + \C1_{X^+}$ which is subordinate to the cover $\mathcal{U}_n$. We set $\phi_{k, n}=c^2\phi^0_{k, n} \in C_b(X)$. Thus, we have \[ \sum_{k \in S_n} \phi_{k, n}=c^2 \] and the support of $\phi_{k, n}$ is contained in a ball of radius $1/n$ in $A$. We have \[ \sum_{k\in S_n}\int_{g\in G} g(\phi_{k, n}) d\mu_G(g) = 1. \] Let \[ F_n=\sum_{k\in S_n}\int_{g\in G}g(\phi_{k, n})^{1/2} F_0 g(\phi_{k, n})^{1/2} d\mu_G(g). \] We see that $F_n$ is an odd, $G$-equivariant, self-adjoint operator on $H_X$ such that \[ ||F_n|| \leq ||F_0||. \] The norm bound can be seen from $F_n=V^\ast_n\tilde F_0V_n$ where \[ V_n\colon H_X\to \bigoplus_{k\in S_n}L^2(G)\otimes H_X \] is an isometry which sends $v\in H_X$ to \[ V_n(v)(k, g)=g(\phi_{k, n})^{1/2} v \,\,\, (k\in S_n, g\in G) \] and where $\tilde F_0$ on $\bigoplus_{k\in S_n}L^2(G)\otimes H_X$ is the diagonal operator $\left( (k, g)\mapsto F_0 \right) \in C_b(S_n\times G, \mathfrak{L}(H_X))$. We have \[ a(F_n-F_0) =\sum_{k\in S_n}\int_{G}ag(\phi_{k, n})^{1/2} [F_0, g(\phi_{k, n})^{1/2}] d\mu_G(g) \in \mathfrak{K}(H_X) \] for any $a\in A$. The integral is norm convergent in $\mathfrak{K}(H_X)$ for $a\in A$ with compact support.
For $t\in (n, n+1)$, we set \[ F_t= (n+1-t)F_n + (t-n)F_{n+1}. \] It is clear that $t\mapsto F_t\in \mathfrak{L}(H_X)$ is (uniformly) norm-continuous, $G$-equivariant, satisfies $||F_t||\leq ||F_0||$ and \[ a(F-F_0)\in C_b([1, \infty), \mathfrak{K}(H_X)) \] for any $a\in A$. We show that $F_t$ satisfies $\limt||[F_t, \phi]|| = 0$ for any $\phi \in C_0(X)$. Let $\phi \in C_c(X)$. We have \[ [F_n, \phi] = \sum_{k\in S_n}\int_{G}g(\phi_{k, n})^{1/2} [F_0, \phi] g(\phi_{k, n})^{1/2} d\mu_G(g). \] Recall that for any $n\geq1$ and $k\in S_n$, the support of $\phi_{k, n}$ is contained in a ball of radius $1/n$ in $A$. There is a compact subset $K$ of $G$ such that $g(\phi_{k, n})\phi=0$ for $g \notin K$, and so \[ g(\phi_{k, n})^{1/2} [F_0, \phi] g(\phi_{k, n})^{1/2} =0 \] for any $n\geq1$, $k\in S_n$ and for any $g\notin K$. Now, consider the family of functions $g^{-1}(\phi) \in C_0(X)$ for $g\in K$. This family is compact in $C_0(X)\subset C(X^+)$, so the functions $g^{-1}(\phi)$ $(g\in K)$ are uniformly equicontinuous on $X^+$. It follows that for any $\epsilon>0$, there is $N>0$ so that \[ |g^{-1}(\phi)(x) - g^{-1}(\phi)(x_0)| < \epsilon \] holds for any $n\geq N$, for any $g\in K$ and for any $x, x_0\in \mathrm{supp}(\phi_{k, n})$ for any $k\in S_n$. In other words, \[ |\phi(x) - \phi(x_0)| < \epsilon \] holds for any $n\geq N$, for any $x, x_0\in \mathrm{supp}(g(\phi_{k, n}))$ for any $g\in K$ and $k\in S_n$. Using this, we see that \[ ||[F_n, \phi]|| = \vert \vert \sum_{k\in S_n}\int_{K}g(\phi_{k, n})^{1/2} [F_0, \phi-\phi(gx_{k, n})] g(\phi_{k, n})^{1/2} d\mu_G(g) \vert \vert < 2\epsilon||F_0|| \] for $n\geq N$ where $x_{k, n}$ is any point in $\mathrm{supp}(\phi_{k, n})$. This shows $\limt||[F_t, \phi]|| = 0$. It follows that $(t\mapsto F_t) \in \mathfrak{L}(H_X\otimes C_0[1, \infty))$ defines an odd, $G$-equivariant, self-adjoint element $F$ in $M(RL^*_c(H_X))$ such that \[ a(F - F_0) \in C_b([1, \infty), \mathfrak{K}(H_X)) \] for any $a\in A$.
It follows from this and from the properties of $F_0$ that \[ a(1-F^2), [a, F], a(g(F)-F) \in C_b([1, \infty), \mathfrak{K}(H_X)) \] for any $a\in A$ and $g\in G$. Moreover, if $x$ is any of the elements $a(1-F^2), [a, F], a(g(F)-F)$, we can see that for any $\phi \in C_0(X)$, \[ [x, \phi] \in C_0([1, \infty), \mathfrak{K}(H_X)) \] because $[F, \phi]\in C_0([1, \infty), \mathfrak{L}(H_X))$. We can also see that for any $\epsilon>0$, there is $\phi \in C_0(X)$ such that \[ ||(1-\phi)x || < \epsilon. \] These imply $x\in RL^*_c(H_X)$ as desired. For example, to see that the last assertion holds for $x=[a, F]$, we can assume that $a$ has compact support so that $a=\chi a$ for some $\chi \in C_c(X)$. Then, we have \[ (1-\phi)[a, F] = (1-\phi)aF - (1-\phi)F\chi a = (1-\phi)aF - (1-\phi)[F, \chi] a - (1-\phi)\chi Fa \] which can be made arbitrarily small for some $\phi\in C_0(X)$ since $[F, \chi]a \in C_0([1, \infty), \mathfrak{K}(H_X))$. This completes the proof. \end{proof} \begin{lemma}\label{lem_XG2} Let $X$ be a proper $G$-space and $A$ be a separable, graded $G$-$C_0(X)$-algebra which is non-degenerately represented on a graded $G$-Hilbert space $H$. We view $H$ naturally as an $X$-$G$-module through this representation and set $H_X=H$. Let $D_1\subset M(A)$ and $D_2\subset M(RL^*_c(H_X))$ be separable $G$-$C^*$-subalgebras such that $[A, D_2]\subset RL^*_c(H_X)$. Let $D_3$ be the subalgebra of $D_2$ consisting of $d\in D_2$ such that $ad \in RL^*_c(H_X)$ for all $a\in A$. There are even, $G$-continuous elements $M, N=1-M$ in $M(RL^*_c(H_X))$ such that \begin{enumerate}[(I)] \item $[M, D_1] \subset RL^*_c(H_X)$, \item $[M, D_2] \subset RL^*_c(H_X)$, \item $M (D_1\cap A) \subset RL^*_c(H_X)$, \item $ND_3 \subset RL^*_c(H_X)$, \item $g(M)-M \in RL^*_c(H_X)$. \end{enumerate} \end{lemma} \begin{proof} We construct $M, N=1-M \in M(RL^*_c(H_X))$ following the standard argument (see \cite[Theorem 12.4.2]{Blackadar}, \cite{Higson87}, \cite[Section 1.4]{Kasparov88}, \cite[Section 3.8]{HigsonRoe}).
Let $U_n$ be an increasing sequence of relatively compact, open subsets of $G$ such that $\cup U_n=G$ and let $K_n$ be the closure of $U_n$. Let $Y$ be a compact total subset of $C_0(X)$. For $i=1, 2, 3$, let $Y_i$ be a compact total subset of $D_i$ such that $Y_1\cap A$ is total in $D_1\cap A$. Let $Z$ be a compact total subset of $C_0([1, \infty), \mathfrak{K}(H_X))$. First, let $(a_n)_{n\geq 0}$ ($a_0=0$) be an even, quasi-central approximate unit in $A\subset M(A)$ so that $d_n=(a_{n+1}-a_n)^{1/2}$ satisfies for $n\geq1$, \begin{enumerate}[({1}.1)] \item $||[d_n, x]||<2^{-n}$ for $x\in Y_1$, \item $||d_nx||<2^{-n}$ for $x\in Y_1\cap A$, \item $||g(d_n)-d_n||<2^{-n}$ for $g\in K_n \subset G$, \item $||d_nx|| < 2^{-n}$ for $x\in Z$. \end{enumerate} Secondly, let $J$ be a separable $G$-$C^*$-subalgebra of $RL^*_c(H_X)$ which contains the ideal $RL^*_0(H_X)$, $[A, D_2]$ and $AD_3$. We arrange $J$ so that elements in $D_1$, $D_2$ and $C_0(X)$ multiply (and hence derive) $J$. This is possible because $D_1$, $D_2$ and $C_0(X)$ multiply $RL^*_c(H_X)$. Thirdly, let $u_n$ be an even, quasi-central approximate unit in $J$ so that $u_n$ satisfies \begin{enumerate}[({2}.1)] \item $||[u_n, x]||<2^{-n}$ for $x \in Y\cup Y_1 \cup Y_2$, \item $||(1-u_n)x||<2^{-n}$ for $x\in d_nY_3 \cup [d_n, Y_2]\cap [d_n, Y_2]^\ast$, \item $||g(u_n)-u_n||<2^{-n}$ for $g\in K_n \subset G$. \end{enumerate} Now, we consider \[ M=\sum_{n\geq0} d_nu_nd_n, \,\,\, N=1-M=\sum_{n\geq0}d_n(1-u_n)d_n. \] By (1.4), these infinite sums converge in the strict topology in $M(\mathfrak{K}(H_X)\otimes C_0[1, \infty))$ to define an element in $\mathfrak{L}(H_X\otimes C_0[1, \infty))$ (see \cite[Proposition 12.1.2]{Blackadar}) but they may not converge in the strict topology in $M(RL^*_c(H_X))$. We claim that $M \in M(RL^*_c(H_X))$ and so is $N=1-M$.
This is because for any $\phi$ in $C_0(X)$, \[ [M, \phi] = \sum_{n\geq0} d_n[u_n, \phi]d_n \in C_0([1, \infty), \mathfrak{K}(H_X)) \] which converges absolutely in $C_0([1, \infty), \mathfrak{K}(H_X))$ for $\phi \in Y$ by (2.1). Note that $[u_n, \phi] \in C_0([1, \infty), \mathfrak{K}(H_X))$ since $u_n\in J \subset RL^*_c(H_X)$. We check that $M, N$ satisfy the conditions (I)--(V). \begin{enumerate}[(I)] \item $[M, D_1] \subset RL^*_c(H_X)$ follows from (1.1) and (2.1). \item $[M, D_2]=[N, D_2] \subset RL^*_c(H_X)$ follows from (2.1) and (2.2). \item $M (D_1\cap A) \subset RL^*_c(H_X)$ follows from (1.2). \item $ND_3 \subset RL^*_c(H_X)$ follows from (2.2). \item $g(M)-M \in RL^*_c(H_X)$ follows from (1.3) and (2.3). \end{enumerate} That $M$ is $G$-continuous follows from (1.3) and (2.3). \end{proof} \begin{proof}[Proof of Proposition \ref{prop_XGlocalize}] Let $y \in KK^G(\C, A)$ and $z\in KK^G(A, \C)$ be such that $x=y\otimes_A z$. By stabilizing $A$ by $\mathfrak{K}(H_0)$ for a graded Hilbert space $H_0$ if necessary, we can assume that $y$ is represented by an odd, $G$-continuous, self-adjoint element $b \in M(A)$ such that $1-b^2 \in A$ and $g(b)-b\in A$ for $g\in G$. We can also assume that $z$ is represented by a cycle $(H, \pi, F_0)$ with $\pi$ non-degenerate. As in Lemma \ref{lem_XG1}, we set $H_X=H$ and let $F\in M(RL^*_c(H_X))$ be as given by the lemma. Now, let $M, N=1-M$ in $M(RL^*_c(H_X))$ be as given by Lemma \ref{lem_XG2} for unital separable $G$-$C^*$-algebras $D_1 \subset M(A)$ containing $b$ and $D_2 \subset M(RL^*_c(H_X))$ containing $F$ and $[F, b]$. Note that we have \[ 1-b^2, g(b)-b \in D_1\cap A, \,\,\, 1-F^2, g(F)-F, [F, b] \in D_3 \] for $g\in G$. Let \[ T= M^{1/4}bM^{1/4} + N^{1/4}FN^{1/4} \in M(RL^*_c(H_X)). \] It is routine to check that this odd, $G$-continuous, self-adjoint element $T$ satisfies $1-T^2 \in RL^*_c(H_X)$ and $g(T)-T \in RL^*_c(H_X)$ for any $g$ in $G$. Thus, $(H_X, T)$ is an $X$-$G$-localized Kasparov cycle for $KK^G(\C, \C)$.
By construction, and since $a(F_1-F_0) \in \mathfrak{K}(H_X)$ for any $a \in A$, the cycle $(H_X, T_1)$ for $KK^G(\C, \C)$ represents the Kasparov product of $y$ and $z$, that is, $[H_X, T_1]=x$. \end{proof} \section{Controlled algebraic aspect of the gamma element method} We recall that for any $x\in R(G)=KK^G(\C, \C)$ and for any separable (graded) $G$-$C^*$-algebra $B$, we have the following canonical ring homomorphism \[ \xymatrix{ KK^G(\C, \C) \ar[r]^-{\sigma_B} & KK^G(B, B) \ar[r]^-{j^G_r} & KK(B\rtimes_rG, B\rtimes_rG) \ar[r] & \mathrm{End}(K_\ast(B\rtimes_rG)) } \] where the last map is defined by the Kasparov product. We write \[ x^{B\rtimes_rG}_\ast \in \mathrm{End}(K_\ast(B\rtimes_rG)) \] for the image of $x$. In this section, we prove the following. \begin{theorem}\label{thm_XGfactor} Let $X$ be a proper $G$-space. Suppose that an element $x\in R(G)$ is $X$-$G$-localized, that is, $x=[H_X, T_1]$ for an $X$-$G$-localized Kasparov cycle $(H_X, T)$ for $KK^G(\C, \C)$. Then, there is a natural group homomorphism \[ \nu^{B, T}\colon K_\ast(B\rtimes_rG) \to \bD^{B, G}_\ast(X) \] for any separable $G$-$C^*$-algebra $B$ such that \[ x^{B\rtimes_rG}_\ast = \mathcal{F} \circ \nu^{B, T} \colon K_\ast(B\rtimes_rG) \to \bD^{B, G}_\ast(X) \to K_\ast(B\rtimes_rG). \] In particular, $x^{B\rtimes_rG}_\ast$ factors through the Baum--Connes assembly map $\mu_r^{B ,G}$. Here, $\nu^{B, T}$ is natural with respect to a $G$-equivariant $\ast$-homomorphism $B_1\to B_2$ in the sense that the following diagram commutes \[ \xymatrix{ K_\ast(B_1\rtimes_rG) \ar[r]^-{\nu^{B_1, T}} \ar[d]^-{\pi\rtimes_r1_\ast} & \bD^{B_1, G}_\ast(X) \ar[d]^-{\pi_\ast} \\ K_\ast(B_2\rtimes_rG) \ar[r]^-{\nu^{B_2, T}} & \bD^{B_2, G}_\ast(X). } \] \end{theorem} \begin{theorem}\label{thm_XGgamma} Let $X$ be a proper $G$-space.
If there is an $X$-$G$-localized element $x\in R(G)$ such that $x=1_K$ in $R(K)$ for any compact subgroup $K$, the Baum--Connes assembly map $\mu_r^{B, G}$ is split-injective for any $B$, and in this case the image of $\mu_r^{B, G}$ coincides with the image of $x^{B\rtimes_rG}_\ast$. In particular, if $x=1_G$, the Baum--Connes conjecture with coefficients holds for $G$. \end{theorem} The proof of these theorems will be quite simple and formal. We start with the construction of $\nu^{B, T}$. We should think of the map $\nu^{B, T}$ as obtained by sending an $X$-$G$-localized cycle $(H_X, T)$ through \[ KK^G(\C, RL^*_c(H_X)) \to KK^G(B, RL^*_c(H_X\otimes B)) \to KK(B\rtimes_rG, RL^*_c(H_X\otimes B)\rtimes_rG) \] \[ \to \mathrm{Hom}(K_\ast(B\rtimes_rG), K_\ast(RL^*_c(H_X\otimes B)\rtimes_rG)) \] where the first map is $\sigma_B$ followed by the inclusion $RL^*_c(H_X)\otimes B \to RL^*_c(H_X\otimes B)$, the second map is Kasparov's descent $j^G_r$ and the last map is given by the Kasparov product. Although we could take this as the definition of $\nu^{B, T}$, we reduce it to the setting of separable $G$-$C^*$-algebras. Recall that for any $C^*$-subalgebra $A$ of $RL^*_c(H_X)$ containing the ideal $RL^*_0(H_X)$, the multiplier algebra $M(A)$ is naturally a subalgebra of $M(RL^*_0(H_X))$. Thus, it makes sense to ask whether an operator $T$ in $M(RL^*_c(H_X)) \subset M(RL^*_0(H_X))$ belongs to $M(A)$. \begin{lemma} Let $(H_X, T)$ be an $X$-$G$-localized Kasparov cycle for $KK^G(\C, \C)$. Then, there is a separable, graded $G$-$C^*$-subalgebra $A_X$ of $RL^*_c(H_X)$ containing the ideal $RL^*_0(H_X)$ such that \[ T \in M(A_X) \subset M(RL^*_0(H_X)) \] and for any $g\in G$, \[ 1-T^2, g(T) - T \in A_X. \] Note that $T$ is automatically a $G$-continuous element in $M(A_X)$. \end{lemma} \begin{proof} Just take any separable, graded $G$-$C^*$-subalgebra $A_X$ of $RL^*_c(H_X)$ containing the ideal $RL^*_0(H_X)$, such that it contains $1-T^2$ and $T-g(T)$ for all $g\in G$ and such that $T$ multiplies $A_X$.
\end{proof} Now, for any $X$-$G$-localized cycle $(H_X, T)$ for $KK^G(\C, \C)$, let $A_X$ be a separable, graded $G$-$C^*$-subalgebra of $RL^*_c(H_X)$ as in the previous lemma. Then, the pair $(A_X, T)$ is a cycle for $KK^G(\C, A_X)$ and hence, by applying the usual homomorphism \[ \xymatrix{ KK^G(\C, A_X) \ar[r]^-{j^G_r\circ \sigma_B} & \mathrm{Hom}(K_\ast(B\rtimes_rG), K_\ast((A_X\otimes B)\rtimes_rG)), } \] we obtain a group homomorphism \[ (j^G_r\circ\sigma_B)[A_X, T]_\ast \colon K_\ast(B\rtimes_rG) \to K_\ast((A_X\otimes B)\rtimes_rG) \] which is natural with respect to a $G$-equivariant $\ast$-homomorphism $B_1\to B_2$. The desired group homomorphism \[ \nu^{B, T} \colon K_\ast(B\rtimes_rG) \to K_\ast(RL^*_c(H_X\otimes B)\rtimes_rG) \] is obtained as the composition of $(j^G_r\circ\sigma_B)[A_X, T]_\ast$ with the $\ast$-homomorphism \[ (A_X\otimes B)\rtimes_rG \to (RL^*_c(H_X)\otimes B)\rtimes_rG \to RL^*_c(H_X\otimes B)\rtimes_rG. \] It is clear, from the naturality of the Kasparov product and the descent map, that $\nu^{B, T}$ is natural with respect to a $G$-equivariant $\ast$-homomorphism $B_1\to B_2$. Note that for a graded $X$-$G$-module $H_X$, the K-theory group $K_\ast(RL^*_c(H_X\otimes B)\rtimes_rG)$ of the graded $C^*$-algebra $RL^*_c(H_X\otimes B)\rtimes_rG$ is functorial with respect to a graded, $G$-equivariant continuous cover $(V_t\colon H_X\to H_Y)_{t\in [1,\infty)}$ for a $G$-equivariant map $f\colon X\to Y$. Such a cover exists for any $H_X$ whenever $H_Y$ is universal, that is, if both the even space $H_Y^{(0)}$ and the odd space $H_Y^{(1)}$ are universal. Moreover, if $H^0_X$ is an (ungraded) $X$-$G$-module and $H_X$ is the graded $X$-$G$-module given by $H_X^{(0)}= H_X^{(1)} =H^0_X$, the inclusion \[ RL^*_c(H^{0}_X\otimes B) \to RL^*_c(H_X\otimes B) \] induces an isomorphism \[ K_\ast( RL^*_c(H^{0}_X\otimes B)\rtimes_rG ) \cong K_\ast( RL^*_c(H_X\otimes B)\rtimes_rG ).
\] Therefore, we may redefine the functor $\bD^{B, G}_\ast(X)$ by using a graded universal $X$-$G$-module $H_X$ instead, and this functor is naturally equivalent to the original one. Moreover, in this case, for any graded $X$-$G$-module $H_X$, we have a natural group homomorphism \[ K_\ast(RL^*_c(H_X\otimes B)\rtimes_rG) \to \bD^{B, G}_\ast(X). \] The forget-control map \[ \mathcal{F}\colon \bD^{B, G}_\ast(X) \to K_\ast(B\rtimes_rG) \] is defined as before. With these in mind, the natural group homomorphism $\nu^{B, T}\colon K_\ast(B\rtimes_rG) \to K_\ast(RL^*_c(H_X\otimes B)\rtimes_rG) $ extends to \[ \nu^{B, T}\colon K_\ast(B\rtimes_rG) \to \bD^{B, G}_\ast(X). \] \begin{lemma}\label{lem_XGfactor} Let $(H_X, T)$ be an $X$-$G$-localized cycle for $KK^G(\C, \C)$ and $x=[H_X, T_1]$ in $R(G)$. Then, we have \[ \mathcal{F} \circ \nu^{B, T} = x^{B\rtimes_rG}_\ast \] on $K_\ast(B\rtimes_rG)$. \end{lemma} \begin{proof} The evaluation $\mathrm{ev}_1\colon RL^*_c(H_X)\to \mathfrak{K}(H_X)$ restricts to $A_X$ and it extends to the map \[ \mathrm{ev}_1\colon M(A_X) \to \mathfrak{L}(H_X) \] which sends $T$ to $T_1$. The evaluation map $\mathrm{ev}_1\colon A_X\to \mathfrak{K}(H_X)$ defines an element $[H_X, \mathrm{ev}_1, 0] \in KK^G(A_X, \C)$ whose Kasparov product with $[A_X, T]\in KK^G(\C, A_X)$ is just $x=[H_X, T_1]$ in $KK^G(\C, \C)$. It is easy to see that the composition $\mathcal{F} \circ \nu^{B, T}$ coincides with the composition of \[ (j^G_r\circ\sigma_B)[A_X, T]_\ast\colon K_\ast(B\rtimes_rG) \to K_\ast((A_X\otimes B)\rtimes_rG) \] and \[ (j^G_r\circ\sigma_B)[H_X, \mathrm{ev}_1, 0]_\ast \colon K_\ast((A_X\otimes B)\rtimes_rG) \to K_\ast(B\rtimes_rG) \] defined by the element $[H_X, \mathrm{ev}_1, 0] \in KK^G(A_X, \C)$ through \[ \xymatrix{ KK^G(A_X, \C) \ar[r]^-{j^G_r\circ \sigma_B} & \mathrm{Hom}(K_\ast((A_X\otimes B)\rtimes_rG), K_\ast(B\rtimes_rG) ). } \] From this, we can see that $\mathcal{F} \circ \nu^{B, T}= x^{B\rtimes_rG}_\ast$.
\end{proof} \begin{proof}[Proof of Theorem~\ref{thm_XGfactor}] The first part of the theorem follows from Lemma \ref{lem_XGfactor}. The second part follows from the first part and from Theorem \ref{thm_forget_factor}. Note that we do not need the isomorphism Theorem \ref{thm_main_isom}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm_XGgamma}] Thanks to Theorem \ref{thm_XGfactor}, there are group homomorphisms \[ \nu^{B, T}\colon K_\ast(B\rtimes_rG) \to \varinjlim_{Y\subset \underline{E}G, \mathrm{Gcpt}}KK_\ast^G(C_0(Y), B) \] which are natural with respect to $B$ and we have \[ \mu_r^{B, G}\circ \nu^{B, T} = x^{B\rtimes_rG}_\ast \colon K_\ast(B\rtimes_rG) \to K_\ast(B\rtimes_rG). \] Under the assumption that $x=1_K$ in $R(K)$ for any compact subgroup $K$ of $G$, we can show that the other composition $\nu^{B, T} \circ \mu_r^{B, G}$ is the identity for all $B$ just as in the proof of \cite[Proposition 5.3]{Nishikawa19}, using the naturality of $\mu_r^{B, G}$ and of $\nu^{B, T}$ with respect to a $G$-equivariant $\ast$-homomorphism $\pi \colon B_1\to B_2$. It follows that $\mu_r^{B, G}$ is split-injective for all $B$ and the image of $\mu_r^{B, G}$ coincides with the image of the idempotent $\mu_r^{B, G}\circ \nu^{B, T} = x^{B\rtimes_rG}_\ast$ on $K_\ast(B\rtimes_rG)$. \end{proof} \noindent \textbf{Description of the splitting of the forget-control map} Let $(H_X, T)$ be an $X$-$G$-localized Kasparov cycle for $KK^G(\C, \C)$. We give an explicit description of the map \[ \nu^{B, T}\colon K_0(B\rtimes_rG) \to \bD^{B, G}_0(X) \] which is a splitting of the forget-control map $\mathcal{F}$ if $[H_X, T_1]=1_K$ in $R(K)$ for any compact subgroup $K$ of $G$. Let $\mathcal{H}_0=l^2(\mathbb{N})^{(0)}\oplus l^2(\mathbb{N})^{(1)}$ be the standard graded Hilbert space.
For any (not necessarily separable) graded $C^*$-algebra $A$, the K-theory group $K_0(A)=KK(\C, A)$ can be identified (see \cite[Section 3]{Skandalis85}) with the set of homotopy equivalence classes of odd, self-adjoint operators $F \in M(\mathfrak{K}(\mathcal{H}_0)\hat\otimes A)\cong \mathfrak{L}(\mathcal{H}_0\hat\otimes A)$ such that $1-F^2 \in \mathfrak{K}(\mathcal{H}_0)\hat\otimes A \cong \mathfrak{K}(\mathcal{H}_0\hat\otimes A)$. We describe the image $\nu^{B, T}([F]) \in \bD^{B, G}_0(X)$ of $[F] \in K_0(B\rtimes_rG)$ for $F\in M(\mathfrak{K}(\mathcal{H}_0)\hat\otimes B\rtimes_rG)$. We have canonical representations \[ M(RL^*_c(H_X)) \to M(\mathfrak{K}(\mathcal{H}_0)\hat\otimes (RL^*_c(H_X)\otimes B)\rtimes_rG ) \] and \[ M(\mathfrak{K}(\mathcal{H}_0)\hat\otimes B\rtimes_rG) \to M(\mathfrak{K}(\mathcal{H}_0)\hat\otimes (RL^*_c(H_X)\otimes B)\rtimes_rG ). \] We still denote by $F$, resp. by $T$, the image of $F \in M(\mathfrak{K}(\mathcal{H}_0)\hat\otimes B\rtimes_rG)$, resp. of $T \in M(RL^*_c(H_X))$, in $M(\mathfrak{K}(\mathcal{H}_0)\hat\otimes (RL^*_c(H_X)\otimes B)\rtimes_rG )$ under these representations. Let \[ T \sharp F = F + (1-F^2)^{1/4} T (1-F^2)^{1/4} \in M(\mathfrak{K}(\mathcal{H}_0)\hat\otimes (RL^*_c(H_X)\otimes B)\rtimes_rG ). \] It satisfies \[ 1- (T \sharp F )^2 \in \mathfrak{K}(\mathcal{H}_0)\hat\otimes (RL^*_c(H_X)\otimes B)\rtimes_rG. \] Hence, the odd, self-adjoint element $T \sharp F$ defines an element $[T \sharp F]$ of $K_0((RL^*_c(H_X)\otimes B)\rtimes_rG)$; we still write $[T \sharp F]$ for its image under the natural maps \[ K_0((RL^*_c(H_X)\otimes B)\rtimes_rG) \to K_0(RL^*_c(H_X\otimes B)\rtimes_rG) \to \bD_0^{B, G}(X). \] We have \[ \nu^{B, T}[F] = [T \sharp F] \in \bD^{B, G}_0(X). \] In this sense, $\nu^{B, T}$ is just given by the Kasparov product by $T$.
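The stated membership of $1-(T\sharp F)^2$ in the ideal can be checked directly; here is a sketch (ours), using that $F$ and $T$ act on different graded tensor factors and hence graded-commute: since both are odd, $FT+TF=0$, while $T$ commutes with the even element $(1-F^2)^{1/2}$. Then
\[
(T \sharp F)^2 = F^2 + (1-F^2)^{1/4}(FT+TF)(1-F^2)^{1/4} + (1-F^2)T^2 = F^2 + (1-F^2)T^2,
\]
so that
\[
1-(T \sharp F)^2 = (1-F^2)(1-T^2),
\]
which lies in $\mathfrak{K}(\mathcal{H}_0)\hat\otimes (RL^*_c(H_X)\otimes B)\rtimes_rG$ because $1-F^2 \in \mathfrak{K}(\mathcal{H}_0)\hat\otimes B\rtimes_rG$ and $1-T^2 \in RL^*_c(H_X)$.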
Of course, the forget-control map $\mathcal{F}\colon \bD^{B, G}_0(X) \to K_0(B\rtimes_rG)$ sends this element $[T \sharp F]$ to \[ [T_1 \sharp F] \in K_0(B\rtimes_rG) \] represented by an odd, self-adjoint operator \[ T_1 \sharp F = F + (1-F^2)^{1/4} T_1 (1-F^2)^{1/4} \] in $M(\mathfrak{K}(\mathcal{H}_0)\hat\otimes (\mathfrak{K}(H_X)\otimes B)\rtimes_rG ) \cong \mathfrak{L}(\mathcal{H}_0\hat\otimes H_X \otimes B\rtimes_rG)$, which represents the Kasparov product $[F]\otimes_{B\rtimes_rG}j^G_r(\sigma_B [H_X, T_1] )$ in $KK(\C, B\rtimes_rG)$. \section{Examples} In this section, we give some examples of $X$-$G$-localized Kasparov cycles for $KK^G(\C, \C)$. Recall from \cite{BaajJulg} that an unbounded cycle for $KK^G(\C, \C)$ is a pair $(H, D)$ of a separable, graded $G$-Hilbert space $H$ and an odd, self-adjoint unbounded operator $D$ on $H$ such that \begin{itemize} \item $(D\pm i)^{-1}\in \mathfrak{K}(H)$ ($D$ has compact resolvent) and \item the $G$-action on $H$ preserves the domain of $D$, and $g(D)-D$ extends to a strongly continuous, locally bounded, $\mathfrak{L}(H)$-valued function of $g \in G$. \end{itemize} For any unbounded cycle $(H, D)$, the pair $(H, T)$ with $T=D(1+D^2)^{-1/2}$ is a (bounded) Kasparov cycle for $KK^G(\C, \C)$ \cite{BaajJulg}. Now suppose we have a family $\{D_t\}_{t\geq1}$ of odd, self-adjoint unbounded operators on a graded $G$-Hilbert space $H$ which defines an odd, regular, self-adjoint unbounded operator $D$ (\cite[Chapter 10]{Lance}) on a Hilbert $C_0[1, \infty)$-module $H\otimes C_0[1, \infty)$. Then, the bounded transform $T=D(1+D^2)^{-1/2}$ of $D$ is an odd, self-adjoint operator in $\mathfrak{L}(H\otimes C_0[1, \infty))$.
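We record here a standard identity for the bounded transform $T=D(1+D^2)^{-1/2}$ (not spelled out in the text, but used repeatedly below):
\[
1-T^2 = 1 - D^2(1+D^2)^{-1} = (1+D^2)^{-1} = (D+i)^{-1}(D-i)^{-1},
\]
so $1-T^2$ is compact, resp. lies in $RL^*_c(H_X)$ in the localized setting below, whenever the resolvents $(D\pm i)^{-1}$ are compact, resp. lie in $RL^*_c(H_X)$.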
\begin{proposition}\label{prop_unboundedXG} Let $H_X$ be a graded $X$-$G$-module and $D$ be an odd, regular, self-adjoint unbounded operator on a Hilbert $C_0[1, \infty)$-module $H_X\otimes C_0[1, \infty)$ such that \begin{enumerate} \item there is a dense subalgebra $B$ of $C_0(X)$ whose elements preserve the domain of $D$ and such that for any $\phi \in B$, $[D, \phi]$ extends to a bounded operator $S \in \mathfrak{L}(H_X\otimes C_0[1, \infty))$ with $||S_t||\to 0$ as $t \to \infty$, \item the $G$-action on $H_X$ preserves the domain of $D$, and $g(D)-D$ extends to a norm-continuous $\mathfrak{L}(H_X\otimes C_0[1, \infty))$-valued function of $g \in G$, and \item $(D\pm i)^{-1}\in RL^*_c(H_X)$. \end{enumerate} Then, the pair $(H_X, T)$ with $T=D(1+D^2)^{-1/2}$ is an $X$-$G$-localized Kasparov cycle for $KK^G(\C, \C)$. If $D$ satisfies the conditions (1), (2), then the condition (3) is equivalent to the following \begin{enumerate} \item [(3.1)] $(D\pm i)^{-1} \in C_b([1,\infty), \mathfrak{K}(H_X))$ and \item [(3.2)] for any $\epsilon>0$, there is $\phi \in C_0(X)$ such that $||(1-\phi)(D\pm i)^{-1}||<\epsilon$. \end{enumerate} \end{proposition} \begin{proof} The condition (3) implies $1-T^2 = (1+D^2)^{-1} \in RL^*_c(H_X)$. We have the following Baaj--Julg formula \cite{BaajJulg} \[ T = \frac{2}{\pi} \int_{0}^\infty D(1+D^2+\lambda^2)^{-1}d\lambda = \frac{1}{\pi} \int_{0}^\infty (D+\sqrt{1+\lambda^2} i)^{-1} + (D-\sqrt{1+\lambda^2} i)^{-1} d\lambda. \] Here, the integral converges in the strong topology on $\mathfrak{L}(H_X\otimes C_0[1, \infty))$. For $\phi \in B$, we have \begin{equation}\label{eq_Tphi} [T, \phi] = \frac{1}{\pi} \int_{0}^\infty [(D+\sqrt{1+\lambda^2} i)^{-1}, \phi] + [(D-\sqrt{1+\lambda^2} i)^{-1}, \phi] d\lambda \end{equation} and \[ [(D\pm \sqrt{1+ \lambda^2} i)^{-1}, \phi] = (D\pm \sqrt{1+ \lambda^2} i)^{-1}[\phi, D](D\pm\sqrt{1+ \lambda^2} i)^{-1}. \] From the condition (1), we see $[T_t, \phi] \to 0$ as $t\to \infty$.
For $g\in G$, we have \begin{equation}\label{eq_gT1} g(T) - T = \frac{1}{\pi} \int_{0}^\infty A_g^{+} +A_g^{-} d\lambda \end{equation} where \begin{equation}\label{eq_gT2} A_g^{\pm} = (g(D)\pm \sqrt{1+\lambda^2} i)^{-1} (D-g(D)) (D\pm \sqrt{1+\lambda^2} i)^{-1}. \end{equation} Using the conditions (2), (3) and $\lim_{t\to \infty}||[T_t, \phi]||= 0$ for $\phi \in C_0(X)$, we see that $T$ is a $G$-continuous element in $M(RL^*_c(H_X))$ and that \[ g(T) - T \in C_b([1 ,\infty), \mathfrak{K}(H_X)). \] To see $g(T) - T \in RL^*_c(H_X)$, since we have $\lim_{t\to \infty}[T_t, \phi] = 0$, it is enough to show that for any $\epsilon>0$, there is $\phi \in C_0(X)$ such that $||(g(T)-T)(1-\phi)|| < \epsilon$. This follows from \eqref{eq_gT1}, \eqref{eq_gT2}, using $ (D\pm \sqrt{1+\lambda^2} i)^{-1} \in RL^*_c(H_X)$ for any $\lambda\geq0$ (which follows from the condition (3) and the resolvent identity). We may also use $g(D)-D \in M(RL^*_c(H_X))$ (which follows from the conditions (1), (2)) to see $g(T) - T \in RL^*_c(H_X)$. We explain that for $D$ satisfying (1) and (2), the condition (3) is equivalent to (3.1) and (3.2). That (3) implies (3.1) and (3.2) is obvious. On the other hand, the conditions (1) and (3.1) imply $[(D\pm i)^{-1}, \phi]\in C_0([1,\infty), \mathfrak{K}(H_X))$ and the condition (2) implies that $(D\pm i)^{-1}$ is $G$-continuous. Together with (3.2), these imply (3). \end{proof} \begin{example}(cf.\ \cite[Section 5]{Kasparov88} \cite[Section 3]{Lafforgue2002} \cite[Example 9.6]{Valette02}) Let $X$ be a complete, simply connected Riemannian manifold of non-positive sectional curvature, with curvature bounded below, on which a locally compact group $G$ acts properly and isometrically. Let $x_0 \in X$, $d_X$ be the Riemannian distance function on $X$ and $\rho\in C^\infty(X)$ be defined by \[ \rho(x)=\sqrt{d_X(x_0, x)^2+ 1}. \] Let $\xi = d\rho$ be the exterior derivative of $\rho$.
The norm $||\xi||_x$ of the co-vector $\xi$ at $x$ satisfies $||\xi||_x=\frac{d_X(x_0, x)}{\sqrt{d_X(x_0, x)^2+ 1}}\to 1$ as $x$ goes to infinity. Let $H_X=\Omega^*_{L^2}(X)$ be the $L^2$-space of the de-Rham complex on $X$ and for $t>0$, let \[ D_t = t^{-1}(d+d^\ast) + \mathrm{ext}(\rho d\rho) + \mathrm{int}(\rho d\rho) \] where $d$ is the de-Rham differential operator and $\mathrm{ext}(v)$, resp. $\mathrm{int}(v)$, denotes the exterior, resp. interior, multiplication by the co-vector $v$. The odd, unbounded operators $D_t$ defined on the space $\mathcal{E}_c^\infty$ of compactly supported differential forms are essentially self-adjoint and have compact resolvent. We have \[ D_t^2= t^{-2}\Delta + \rho^2||\xi||^2 + t^{-1}A \] where $\Delta=dd^\ast + d^\ast d$ and $A= [(d+d^\ast), \mathrm{ext}(\rho d\rho) + \mathrm{int}(\rho d\rho)]$ is an order-zero, smooth bundle-endomorphism. There are constants $C_0, C_1>0$ so that we have for $x\in X$, \[ ||A_x|| \leq C_0 + C_1\rho(x). \] Using this, together with $\rho^2||\xi||^2 = \rho^2 - 1$ and the elementary bound $C_1\rho \leq \rho^2/2 + C_1^2/2$, we see that any $N>0$ with $N^2 \geq 1 + C_0 + C_1^2/2$ satisfies, for $t\geq1$, \begin{equation}\label{eq_N} D_t^2 + N^2 \geq t^{-2}\Delta + \rho^2/2. \end{equation} The family $\{D_t\}_{t\geq1}$ defines an odd, essentially self-adjoint, regular operator $D$ on $H_X\otimes C_0[1, \infty)$ with domain $C_c([1, \infty), \mathcal{E}_c^\infty)$. We have \[ (D\pm i)^{-1} \in C_b([1, \infty), \mathfrak{K}(H_X)). \] Let $T=D(1+D^2)^{-1/2}$ in $\mathfrak{L}(H_X\otimes C_0[1, \infty))$. \begin{proposition} The pair $(H_X, T)$ is an $X$-$G$-localized Kasparov cycle for $KK^G(\C, \C)$. \end{proposition} \begin{proof} We check that $D$ satisfies the conditions (1)--(3) of Proposition \ref{prop_unboundedXG}. (1): Let $B=C_c^\infty(X) \subset C_0(X)$. We have for any $\phi \in B$, \[ [D_t, \phi] = t^{-1} ( \mathrm{ext}(d\phi) - \mathrm{int}(d\phi)) \] from which the condition (1) holds for $D$.
(2): We have \[ g(D_t) - D_t = \mathrm{ext}(\rho' d\rho' - \rho d\rho) + \mathrm{int}(\rho' d\rho' - \rho d\rho) \] where $\rho'(x) =\sqrt{d_X(g(x_0), x)^2+ 1}$. We have $||d\rho'-d\rho||_x \leq 6d_X(g(x_0) ,x_0)(\rho(x)+\rho'(x))^{-1}$ \cite[Lemma 5.3]{Kasparov88}. Using this, we see that the condition (2) holds for $D$. (3): Since $(D\pm i)^{-1} \in C_b([1, \infty), \mathfrak{K}(H_X))$, it is enough to show that for any $\epsilon>0$, there is $\phi \in C_0(X)$ such that $||(1-\phi)(D_t\pm i)^{-1}|| < \epsilon$ for all $t\geq1$. For this, we may instead prove the same claim not for $(D_t\pm i)^{-1}$ but for $(D_t\pm Ni)^{-1}$ for the constant $N>0$ as in \eqref{eq_N}. Take any $v=(D_t \pm Ni)v_0$ for $v_0 \in \mathcal{E}_c^\infty$. Then, we have \[ (1-\phi) (D_t\pm Ni)^{-1}v = (1-\phi)v_0 \] and \[ ||v||^2 = ||(D_t \pm Ni)v_0||^2 = \s{(D_t^2 + N^2)v_0, v_0} \geq \frac{1}{2} \s{\rho^2v_0, v_0}. \] Using these, it is not hard to see that for any $\epsilon>0$, there is $\phi\in C_0(X)$ with $||(1-\phi)(D_t\pm Ni)^{-1}|| < \epsilon$ for all $t\geq1$. Hence, the condition (3) holds for $D$. \end{proof} One can show that $[H_X, T_1]=1_K$ in $R(K)$ for any compact subgroup $K$ of $G$. By Theorem \ref{thm_XGgamma}, we see that the Baum--Connes assembly map $\mu^{B, G}_r$ is split-injective for any $G$ which acts properly and isometrically on a complete, simply connected Riemannian manifold of non-positive sectional curvature, with curvature bounded below. Of course, $[H_X, T_1]$ in $KK^G(\C, \C)$ is nothing but the gamma element for $G$ constructed by Kasparov \cite{Kasparov88}. \end{example}
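To illustrate the construction in the simplest case (the following computation is our own illustration and is not taken from the cited sources), take $X=\mathbb{R}$ with the trivial group action and $x_0=0$, so that $\rho(x)=\sqrt{x^2+1}$ and $\rho\, d\rho = x\, dx$. On $\Omega^\ast_{L^2}(\mathbb{R}) = L^2(\mathbb{R})\oplus L^2(\mathbb{R})dx$ we get
\[
D_t f = (t^{-1}f' + xf)\,dx, \qquad D_t(g\,dx) = -t^{-1}g' + xg,
\]
so that
\[
D_t^2 = -t^{-2}\partial_x^2 + x^2 - t^{-1} \,\,\,\text{on $0$-forms}, \qquad D_t^2 = -t^{-2}\partial_x^2 + x^2 + t^{-1} \,\,\,\text{on $1$-forms},
\]
a rescaled harmonic oscillator. Its kernel is spanned by the Gaussian $e^{-tx^2/2}$ in degree $0$, so $\ker D_t$ is one-dimensional and even for every $t\geq1$, and $[H_X, T_1]=1$ in $KK(\C, \C)\cong \mathbb{Z}$, as it must be.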
More precisely, we define $l^2(X^1)$ as the quotient of the $l^2$-space of oriented edges by the closed subspace spanned by $[x, y] + [y, x]$ for oriented edges $[x, y]$ of $X$. The Hilbert space $H^{\mathrm{JV}}_X$ is naturally a graded $X$-$G$-module. Fix a vertex $x_0\in X^0$ and let $T_{\mathrm{JV}}$ be the associated Julg--Valette operator on $H^{\mathrm{JV}}_X$. The operator $T_{\mathrm{JV}}$ can be defined as follows. For any $x\in X^0\backslash \{x_0\}$, let $e_x=[x', x]$ be the unique (oriented) edge which is closest to $x_0$ among all the edges containing $x$. Then, the Hilbert space $H^{\mathrm{JV}}_X$ has the following direct sum decomposition \[ H^{\mathrm{JV}}_X = \C\delta_{x_0} \oplus \bigoplus_{x\in X^0 \backslash \{x_0\}} \C\delta_{x} \oplus \C\delta_{e_x}. \] Under this decomposition of $H^{\mathrm{JV}}_X$, the Julg--Valette operator $T_{\mathrm{JV}}$ is expressed as the block-diagonal operator \[ 0 + \sum_{x\in X^0\backslash \{x_0\}} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}. \] The Julg--Valette operator $T_{\mathrm{JV}}$ is an odd, self-adjoint, bounded, $G$-continuous operator on $H^{\mathrm{JV}}_X$ such that for any $g\in G$, \[ 1-T_{\mathrm{JV}}^2, \,\,\, g(T_{\mathrm{JV}})- T_{\mathrm{JV}} \in \mathfrak{K}(H^{\mathrm{JV}}_X) \] (in fact they are of finite rank). Thus, $(H^{\mathrm{JV}}_X, T_{\mathrm{JV}})$ is a (bounded) Kasparov cycle for $KK^G(\C, \C)$ and we have $[H^{\mathrm{JV}}_X, T_{\mathrm{JV}}]=1_G$ in $KK^G(\C, \C)$ \cite[Proposition 1.6]{JulgValette84} (for this, the $G$-action does not have to be proper). Now, we recall an idea from \cite{BGHN}. Instead of the $X$-$G$-module $H^{\mathrm{JV}}_X$ which is rather ``discrete'', we consider the Hilbert space $H^{\mathrm{dR}}_X$ of the de-Rham complex on $X$ which is more ``continuous''.
The Hilbert space $H^{\mathrm{dR}}_X$ is defined as \[ H^{\mathrm{dR}}_X = \Omega^\ast_{L^2}(X) = \bigoplus_{i=0, 1} \bigoplus_{\sigma \in X^i} \Omega^\ast_{L^2}(\sigma) \] where for a vertex $\sigma\in X^0$, $\Omega^\ast_{L^2}(\sigma)$ is just the one-dimensional space $\C\delta_{\sigma}$ and for a (non-oriented) edge $\sigma\in X^1$, $\Omega^\ast_{L^2}(\sigma)$ is the space of $L^2$-sections of the de-Rham complex on $\sigma$ equipped with the standard metric so that $\sigma \cong [0 ,1]$ isometrically. The space $H^{\mathrm{dR}}_X$ is naturally a graded $X$-$G$-module and we have an embedding \[ H^{\mathrm{JV}}_X \to H^{\mathrm{dR}}_X \] of $X$-$G$-modules which sends $\delta_x$ for $x\in X^0$ to $\delta_x \in \Omega^0_{L^2}(x)$ and $\delta_e$ for an oriented edge $e=[x, y]$ to the constant 1-form in $\Omega^1_{L^2}(e)$ which corresponds to $ds$ under the isomorphism \[ \Omega^1_{L^2}(e) \cong \Omega^1_{L^2}[0 ,1] = L^2[0, 1]ds \] under which the vertex $x$ is identified with $0$ and $y$ with $1$ in $[0, 1]$. Fix $x_0\in X^0$; we have the following direct sum decomposition \begin{equation}\label{eq_decomp_dR} H^{\mathrm{dR}}_X = \C\delta_{x_0} \oplus \bigoplus_{x\in X^0\backslash \{x_0\}} \C\delta_{x} \oplus \Omega^\ast_{L^2}(e_x) \end{equation} where $e_x=[x', x]$ as before. Furthermore, we have an isomorphism \begin{equation}\label{eq_identify} \Omega^\ast_{L^2}(e_x) \cong \Omega^\ast_{L^2}[0 ,1] = L^2[0, 1] \oplus L^2[0, 1]ds \end{equation} under which $x'$ is identified with $0$ and $x$ with $1$ in $[0, 1]$. For any $t>0$, let \[ d_t = t^{-1}d + \mathrm{ext}(ds)\colon \Omega^0_{0}[0 ,1] \to \Omega^1[0 ,1] \] be the Witten-type perturbation $t^{-1}(e^{-ts}de^{ts})$ of the de-Rham differential operator $d$ on $\Omega^\ast_{L^2}[0 ,1]$ and consider it as an unbounded operator on $\Omega^\ast_{L^2}(e_x)$. Here, $\Omega^\ast[0 ,1]$ is the space of (smooth) forms and \[ \Omega_0^0[0 ,1] = \{ f \in C^\infty[0 ,1]\mid f(0)=f(1)=0 \}.
\] Let \[ D_t = d_t + d_t^\ast \colon \Omega^\ast_{L^2}[0 ,1] \to \Omega^\ast_{L^2}[0 ,1] \] with domain $\Omega_0^0[0 ,1] \oplus \Omega^1[0 ,1]$. Then, $D_t$ is an odd, essentially self-adjoint, unbounded operator on $\Omega^\ast_{L^2}[0 ,1]$ with compact resolvent. For each integer $k>0$, the span of the forms \[ \sin(\pi k s), \,\,\, d_t\sin(\pi k s) = t^{-1}\pi k \cos(\pi k s)ds + \sin(\pi k s)ds \] is invariant under $D_t$ and we have \[ D_t^2 = t^{-2}\pi^2k^2 + 1 \] on the span. These spans are mutually orthogonal in $\Omega^\ast_{L^2}[0 ,1]$ and the orthogonal complement of their sum in $\Omega^\ast_{L^2}[0 ,1]$ is spanned by \[ \tilde e_t = e^{ts}ds \in \Omega^1[0 ,1] \] which lies in the kernel of $D_t$. Now, let \[ S_t = \chi(D_t) \] where $\chi\in C_b(\mathbb{R})$ is any odd, continuous, non-decreasing function on $\mathbb{R}$ such that $\chi(x)=1$ for $x\geq1$. Then, we have \[ 1-S_t^2 = P_t \] where $P_t$ is the rank one projection onto the span of $\tilde e_t$ in $\Omega^\ast_{L^2}[0 ,1]$. Under the identification \eqref{eq_identify}, $S_t$ defines an odd, self-adjoint operator $S_{t, x}$ on each $\Omega^\ast_{L^2}(e_x)$ for $x\in X^0 \backslash \{x_0\}$. Now, let \[ e_t = \frac{\tilde e_t}{||\tilde e_t || } = \sqrt{\frac{2t}{e^{2t}-1}} e^{ts}ds = \sqrt{\frac{2t}{1-e^{-2t}}} e^{t(s-1)}ds \in\Omega^1[0 ,1] \] and let $e_{t, x}$ be the corresponding unit vector in $\Omega^\ast_{L^2}(e_x)$ under the identification \eqref{eq_identify}. Then, $1-S^2_{t, x}$ on $\Omega^\ast_{L^2}(e_x)$ is the rank one projection onto the span of $e_{t, x}$. We let \[ R_{t, x} = \theta_{e_{t, x}, \delta_x} + \theta_{\delta_x, e_{t, x}} \] be the partial isometry on $\C\delta_{x} \oplus \Omega^\ast_{L^2}(e_x)$ which sends $e_{t, x}$ to $\delta_x$ and $\delta_x$ to $e_{t, x}$ and is zero on their orthogonal complement. We define the block-diagonal operator \[ T_t = 0 + \sum_{x\in X^0\backslash \{x_0\}} S_{t, x} + R_{t, x} \] on $H^{\mathrm{dR}}_X$ with respect to the decomposition \eqref{eq_decomp_dR}.
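Two elementary checks behind the formulas above, recorded here for convenience: the form $\tilde e_t$ indeed lies in the kernel of $D_t$, since it is of top degree and $d_t^\ast = t^{-1}d^\ast + \mathrm{int}(ds)$, and its norm produces the constant in $e_t$:
\[
D_t \tilde e_t = d_t^\ast (e^{ts}ds) = -t^{-1}\tfrac{d}{ds}(e^{ts}) + e^{ts} = 0, \qquad ||\tilde e_t||^2 = \int_0^1 e^{2ts}\, ds = \frac{e^{2t}-1}{2t}.
\]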
Then, the family $\{T_t\}_{t\geq1}$ defines an odd, self-adjoint operator $T$ in $\mathfrak{L}(H^{\mathrm{dR}}_X\otimes C_0[1 ,\infty))$. \begin{proposition} The pair $(H^{\mathrm{dR}}_X, T)$ is an $X$-$G$-localized Kasparov cycle for $KK^G(\C, \C)$. \end{proposition} \begin{proof} It is clear that $1-T_t^2$ is the rank-one projection onto the span of $\delta_{x_0}$. Moreover, as in the case of the Julg--Valette operator, for any $g\in G$, the difference $g(T_t)-T_t$ receives contributions from only finitely many blocks, namely those corresponding to the vertices $x$ on the geodesic from $x_0$ to $g(x_0)$. After noticing this, it is not hard to see that $T$ is $G$-continuous and that for any $g\in G$, \[ g(T) - T \in C_b([1, \infty), \mathfrak{K}(H^{\mathrm{dR}}_X)) \] and $g(T)-T$ has uniform compact support with respect to $X$. Finally, we have $\lim_{t\to \infty}||[T_t, \phi]||=0$ for any $\phi$ in $C_0(X)$. To see this, it suffices to take $\phi$ to be a compactly supported function on $X$ which is smooth on each edge, since such functions are dense in $C_0(X)$. Then, we have \[ [D_t, \phi] = t^{-1}(\mathrm{ext}(d\phi) - \mathrm{int}(d\phi) ) \] on each block $\Omega^\ast_{L^2}(e_x)\cong \Omega^\ast_{L^2}[0 ,1]$ and this implies $\lim_{t\to \infty}||[S_{t, x}, \phi]||=0$ for each $x$. To see $\lim_{t\to \infty}||[R_{t, x}, \phi]||=0$, we compute \[ [\theta_{e_{t, x}, \delta_x} , \phi] = (\phi - \phi(x))\theta_{e_{t, x}, \delta_x}. \] We have $||(\phi - \phi(x))e_{t, x}|| \to 0$ as $t\to \infty$, uniformly in $x\in X^0\backslash \{x_0\}$. This follows from the following elementary estimate: \[ \sqrt{\frac{2t}{1-e^{-2t}}} ||(1-s) e^{t(s-1)}||_{L^2[0, 1]} = \sqrt{\frac{2t}{1-e^{-2t}}}||se^{-ts}||_{L^2[0 ,1]} \] \[ =\frac{1}{\sqrt{t}} \sqrt{\frac{2}{1-e^{-2t}}}||tse^{-ts}||_{L^2[0 ,1]} \leq \frac{1}{\sqrt{t}} \sqrt{\frac{2}{1-e^{-2t}}} \sup \{ se^{-s} \mid s\geq 0\} \] \[ \leq \frac{1}{\sqrt{t}} \sqrt{\frac{2}{1-e^{-2t}}} \to 0 \] as $t\to \infty$. It follows that $\lim_{t\to \infty}||[T_t, \phi]||=0$ holds for any $\phi \in C_0(X)$.
Hence, $T\in M(RL^*_c(H^{\mathrm{dR}}_X))$ is an odd, self-adjoint, $G$-continuous operator and we have for any $g\in G$, \[ 1-T^2, \,\,\, g(T)-T \in RL^*_c(H^{\mathrm{dR}}_X). \] \end{proof} One can show (see \cite[Section 5]{BGHN}) that $[H^{\mathrm{dR}}_X, T_1]=[H^{\mathrm{JV}}_X, T_{\mathrm{JV}}]=1_G$ in $KK^G(\C, \C)$. In fact, $(H^{\mathrm{dR}}_X, T_t)$ for $t\in [0, 1]$ is a homotopy from $(H^{\mathrm{dR}}_X, T_1)$ to $(H^{\mathrm{JV}}_X, T_{\mathrm{JV}})$. By Theorem \ref{thm_XGgamma}, we see that the Baum--Connes assembly map $\mu^{B, G}_r$ is an isomorphism for any $G$ which acts properly (and continuously) on a locally finite tree by automorphisms. See \cite{KasparovSkandalis91} and \cite{Tu1999} for the proof by the gamma element method. \end{example} \section{BCC for groups acting on finite-dimensional CAT(0)-cubical spaces} In this final section, we extend the new proof, recently obtained in \cite{BGHN}, of the Baum--Connes conjecture with coefficients for groups $G$ which act properly (and continuously) and co-compactly on a finite-dimensional CAT(0)-cubical space with bounded geometry, to the not-necessarily co-compact setting. The groundwork for this is already done in \cite{BGHN} (almost all of the results there are proven in the general, not-necessarily co-compact setting). We first recall some materials and established results from \cite{BGHN} (see also \cite{BGH}). For the rest of this section, $X$ will be a bounded geometry CAT(0)-cubical space, as in \cite[Section 2.2]{NibloReeves98}. In particular, $X$ is obtained by identifying cubes, each of which is isometric to the standard Euclidean cube $[0, 1]^q$ of the appropriate dimension, by isometries of their faces. We shall refer to $0$-cubes as vertices, $1$-cubes as edges, etc. The bounded geometry condition asserts that there is a uniform bound on the number of edges that contain a given vertex, and this implies that $X$ is finite dimensional.
Further, we shall assume all group actions on $X$ are by automorphisms of its structure as a CAT(0)-cubical space. In particular, a group acting on $X$ permutes the vertices, the edges, and the $q$-cubes for each fixed $q$. The midplanes of a $q$-cube $C$ in $X$ correspond to the intersections of the standard cube with the coordinate hyperplanes $x_j=1/2$ ($j=1, \cdots, q$). We generate an equivalence relation on the collection of all midplanes of all cubes in $X$ by declaring as equivalent any two midplanes whose intersection is itself a midplane. The union of all midplanes in a given class is called a hyperplane, and each hyperplane $H$ satisfies the following important properties: (i) $H$ is connected, and (ii) the complement $X \backslash H$ has precisely two path components. We shall say that a hyperplane $H$ separates two subsets of $X$ if these subsets lie in distinct components of $X \backslash H$. A group acting on $X$ necessarily permutes the hyperplanes. A hyperplane $H$ is adjacent to a cube $C$ if it is disjoint from $C$, but intersects a cube that includes $C$ as a codimension-one face. On the cube $[0, 1]^q$, the smooth $p$-forms can be written as $\alpha=\sum_{I}f_Idx_I$ where $I=\{i_1 < \cdots < i_p \}$ is a multi-index, and $f_I$ is a smooth function on the cube $[0, 1]^q$. For each $q$-cube $C$, the space $\Omega^{p}(C)$ of $p$-forms carries a natural inner product and we let $\Omega_{L^2}^{p}(C)$ be the Hilbert space completion. We fix a base vertex ($0$-cube) $P$ of $X$. Let $C$ be a cube in $X$ and $H$ be a hyperplane that intersects $C$. The coordinate function $x_H\colon C\to \mathbb{R}$ is defined as the affine function on $C$ which takes the value $1$ on the codimension-one face of $C$ that is disjoint from $H$ and separated by $H$ from the base vertex, and the value $0$ on the opposite codimension-one face, which is disjoint from $H$ but not separated by $H$ from the base vertex. We define $\alpha_H=dx_H$. This is a constant-coefficient one-form on $C$.
A weight function $w$ for $X$ is a positive function on the set of hyperplanes. A weight function $w$ is called proper if the set $\{H \mid w(H)\leq M \}$ is finite for any $M>0$. If a locally compact group $G$ acts on $X$ by automorphisms, the weight function $w$ is called $G$-adapted if $\sup_{H}|w(H) - w(gH)| < \infty$ for any $g\in G$ and if, for some open subgroup of $G$, $w(gH)=w(H)$ for all $H$ and for all $g$ in the subgroup. We take $w(H)=\mathrm{distance}(P, H)$ as our preferred weight function, which is both proper and $G$-adapted, but it is helpful for us not to fix a weight function in what follows. For each cube $C$, and for each $p\geq0$, we denote by $\Omega_0^p(C)\subset \Omega^p(C)$ the space of those smooth $p$-forms on $C$ that pull back to zero on each open face of $C$. We consider the de-Rham differential $d$ as an unbounded operator from $\Omega_{L^2}^p(C)$ to $\Omega_{L^2}^{p+1}(C)$ with domain $\Omega_{0}^p(C)$. We consider its formal adjoint $d^\diamond$ as an unbounded operator with domain $\Omega^{p+1}(C)$. We form the direct sum \[ \Omega_{0}^{\ast}(C) = \bigoplus_p \Omega_{0}^{p}(C), \,\,\, \Omega_{L^2}^{\ast}(C) = \bigoplus_p \Omega_{L^2}^{p}(C), \] and then form the unbounded operator \[ D = d+ d^{\diamond} \colon \Omega_{L^2}^{\ast}(C) \to \Omega_{L^2}^{\ast}(C). \] The operator $D$ on $\Omega_{L^2}^{\ast}(C)$ is essentially self-adjoint on the domain $\Omega_{0}^{\ast}(C)$, and the self-adjoint closure of $D$ has compact resolvent. The kernel of the closure is precisely the one-dimensional space of top-degree constant coefficient forms (see \cite[Lemma 4.2]{BGHN}). Let $w$ be a weight function for $X$. For each cube $C$ in $X$, an affine function $w_C\colon C\to \mathbb{R}$ is defined by the formula \[ w_C= \sum_{H\in \mathrm{Mid}(C)} w(H)x_H \] where $\mathrm{Mid}(C)$ is the set of hyperplanes that contain a midplane of $C$. We form the Witten-type perturbation \begin{equation}\label{eq_dw} d_w= e^{-w_C} d e^{w_C}\colon \Omega_{0}^{\ast}(C) \to \Omega^{\ast}(C).
\end{equation} We have \[ d_w\colon \beta \mapsto d\beta + \sum_{H\in \mathrm{Mid}(C)} w(H)\alpha_H\wedge \beta, \] so that $d_w$ is a bounded perturbation of $d$. Let $d_w^\diamond$ be its formal adjoint considered as an unbounded operator with domain $\Omega^{\ast}(C)$; then the unbounded operator \[ D_w = d_w+ d_w^{\diamond} \colon \Omega_{L^2}^{\ast}(C) \to \Omega_{L^2}^{\ast}(C) \] with domain $\Omega_{0}^{\ast}(C)$ is essentially self-adjoint and has compact resolvent. Let $C$ be a cube in $X$ and $H$ be a hyperplane that intersects $C$. We let \[ y_H= x_H-1 \colon C\to \mathbb{R}. \] Note that $dy_H=\alpha_H=dx_H$. We define a linear operator (see \cite[Definition 4.4]{BGHN}) \[ e_w\colon \Omega^p(X) \to \Omega^{p+1}(X) \] by the formula \[ e_w\colon \beta \mapsto \sum_{H\in \mathrm{SAH}(C)} w(H)e^{w(H)y_H}\alpha_H \wedge \beta \] for $\beta \in \Omega^\ast(C)$ where: \begin{itemize} \item $\mathrm{SAH}(C)$ is the set of all hyperplanes that are adjacent to $C$ and separate $C$ from the base vertex $P$. \item For a hyperplane $H\in \mathrm{SAH}(C)$, if $D_H$ is the cube that is intersected by $H$ and that includes $C$ as a codimension-one face, then we view $\alpha_H\wedge \beta$ as a differential form on $D_H$ by pulling back $\beta$ along the orthogonal projection from $D_H$ to $C$. \end{itemize} We denote by $D^{\mathrm{dR}}_w$ (see \cite[Definition 4.5]{BGHN}, where it was denoted by $D_{\mathrm{dR}}$) the following symmetric operator on $\Omega^\ast_{L^2}(X)$ with domain $\Omega^\ast_{0}(X)$ \[ D^{\mathrm{dR}}_w = d_w + e_w + d_w^\diamond + e_w^\diamond. \] By $d_w$, we mean here the direct sum of all the Witten-type operators \eqref{eq_dw} over all cubes of $X$ and the same for the formal adjoint $d_w^\diamond$. The operator $D^{\mathrm{dR}}_w$ is essentially self-adjoint and it has compact resolvent \cite[Theorem 4.6]{BGHN} if the weight function $w$ is proper.
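As a sanity check (ours, in the spirit of the tree example of the previous section), consider a single $1$-cube $C\cong[0, 1]$ with $\mathrm{Mid}(C)=\{H\}$, so that $w_C = w(H)x_H$. On the span of $\sin(\pi k x_H)$ and $d_w\sin(\pi k x_H)$ one computes, exactly as for the tree,
\[
(d_w + d_w^\diamond)^2 = \pi^2 k^2 + w(H)^2,
\]
while the kernel of the closure of $d_w + d_w^\diamond$ is spanned by the top-degree form $e^{w(H)x_H}dx_H$. Moreover, the squared $L^2$-norm of the summand $w(H)e^{w(H)y_H}\alpha_H$ of $e_w$ is
\[
w(H)^2\int_0^1 e^{2w(H)(x-1)}dx = \frac{1}{2}w(H)(1-e^{-2w(H)}),
\]
which is exactly the other quantity appearing in the estimate of Lemma \ref{lem_estimate} below.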
For any vertex $Q$ of $X$, a cube $C$ leads $Q$ to the base vertex $P$ if (i) the cube $C$ includes the vertex $Q$ and (ii) all the hyperplanes that intersect $C$ also separate $Q$ from $P$. That is, $\mathrm{Mid}(C) \subset \mathrm{SAH}(Q)$. We denote by $\Omega^\ast_{L^2}(X)_Q \subset \Omega^\ast_{L^2}(X)$ the direct sum of the spaces $\Omega^\ast_{L^2}(C)$ over those cubes $C$ that lead $Q$ to $P$. We define $\Omega^\ast_{0}(X)_Q$ and $\Omega^\ast(X)_Q$ similarly (see \cite[Definition 4.7]{BGHN}). We have the following Hilbert space direct sum decomposition: \begin{equation}\label{eq_dR_decomposition} \Omega^\ast_{L^2}(X) = \bigoplus_Q \Omega^\ast_{L^2}(X)_Q. \end{equation} The operator $D^{\mathrm{dR}}_w$ is block-diagonal with respect to this decomposition \cite[Lemma 4.9]{BGHN}. We have the following useful estimate. \begin{lemma}(See the proof of \cite[Theorem 4.6]{BGHN})\label{lem_estimate} For any $\beta \in \Omega^\ast_{0}(X)_Q$, \[ || D^{\mathrm{dR}}_w\beta ||^2 \geq \sum_{H \in \mathrm{SAH}(Q)} \min\{\frac12 w(H)(1-e^{-2w(H)}), \pi^2+ w(H)^2 \} ||\beta||^2 \] \[ = \sum_{H \in \mathrm{SAH}(Q)} \frac12 w(H)(1-e^{-2w(H)}) ||\beta||^2, \] where the equality holds since $\frac12 w(1-e^{-2w}) \leq \frac{w}{2} \leq \pi^2 + w^2$ for all $w>0$, so the minimum is always attained by the first term. \end{lemma} \begin{lemma}(See \cite[Lemma 4.11]{BGHN}) If the weight function $w$ is $G$-adapted, then for any $g$ in $G$, the difference $g(D^{\mathrm{dR}}_w) - D^{\mathrm{dR}}_w$ is a bounded operator. Moreover, $||g(D^{\mathrm{dR}}_w) - D^{\mathrm{dR}}_w||$ is bounded by a constant times \[ \sup \{ |gw(H) - w(H)| \mid \text{ $H$ is a hyperplane of $X$} \} + \max \{ w(H) \mid \text{$H$ separates $P$ and $gP$} \}. \] The constant depends only on the dimension of $X$. \end{lemma} If $G$ acts on $X$ by automorphisms, for any proper, $G$-adapted weight function $w$, the pair $(\Omega^\ast_{L^2}(X), D^{\mathrm{dR}}_w)$ is an unbounded Kasparov cycle for $KK^G(\C, \C)$ and we denote by $[D^{\mathrm{dR}}_w]$ its associated class in $KK^G(\C, \C)$ (see \cite[Theorem 4.12, Definition 4.13]{BGHN}).
\begin{theorem}(See \cite[Theorem 5.1]{BGHN} and \cite[Theorem 9.14]{BGH}) \label{thm_dR1G} If $G$ acts on $X$ by automorphisms, the element $[D^{\mathrm{dR}}_w]\in KK^G(\C, \C)$ satisfies \[ [D^{\mathrm{dR}}_w] = 1_G \,\,\, \text{in $KK^G(\C, \C)$}. \] \end{theorem} Now, suppose the $G$-action on $X$ is proper. Let $H_X=\Omega^\ast_{L^2}(X)$, which is naturally a graded $X$-$G$-module. We shall construct an $X$-$G$-localized Kasparov cycle $(H_X, T)$ for $KK^G(\C, \C)$ such that $[H_X, T_1]=[D^{\mathrm{dR}}_w] =1_G$ in $KK^G(\C, \C)$. In what follows, for simplicity, we assume $w\geq1/2$. For example, we can use $w(H)=\mathrm{distance}(P, H)$. For $t\geq1$, we define an odd, symmetric unbounded operator \begin{equation}\label{eq_formulaDt} \mathcal{D}_t= t^{-1}(d_{tw} + d_{tw}^\diamond) + (\sqrt{t})^{-1}(e_{tw} + e_{tw}^\diamond) \colon \Omega^\ast_{L^2}(X) \to \Omega^\ast_{L^2}(X) \end{equation} with domain $\Omega^\ast_{0}(X)$. Note that $\mathcal{D}_1=D^{\mathrm{dR}}_{w}$ but $\mathcal{D}_t$ is slightly different from $t^{-1}D^{\mathrm{dR}}_{tw}$. The operators $\mathcal{D}_t$ are block-diagonal with respect to the decomposition \eqref{eq_dR_decomposition} and are essentially self-adjoint with compact resolvent on each block $\Omega^\ast_{L^2}(X)_Q$. From these and from the formula \eqref{eq_formulaDt} for $\mathcal{D}_t$, we see that the family $\{\mathcal{D}_t\}_{t\geq1}$ defines an odd, regular, symmetric unbounded operator $\mathcal{D}$ on $H_X\otimes C_0[1, \infty)$ with domain $C_c([1,\infty), \Omega^\ast_{0}(X))$ and that its closure is self-adjoint. We have the following estimate. \begin{lemma}\label{lem_testimate} For $t\geq1$ and for any $\beta \in \Omega^\ast_{0}(X)_Q$, \[ || \mathcal{D}_t\beta ||^2 \geq \sum_{H \in \mathrm{SAH}(Q)} \min\{\frac12 w(H)(1-e^{-2tw(H)}), t^{-2}\pi^2+ w(H)^2 \} ||\beta||^2. \] \end{lemma} \begin{proof} This can be proven as in the proof of \cite[Theorem 4.6]{BGHN}.
\end{proof} \begin{lemma}\label{lem_Xlem} If the weight function $w$ is proper, the odd, regular, self-adjoint unbounded operator $\mathcal{D}$ on $H_X\otimes C_0[1, \infty)$ satisfies \[ (\mathcal{D} \pm i)^{-1} \in C_b([1, \infty), \mathfrak{K}(H_X)). \] Moreover, for any $\epsilon>0$, there is $\phi\in C_0(X)$ such that $||(1-\phi)(\mathcal{D} \pm i)^{-1} ||< \epsilon$. \end{lemma} \begin{proof} The unbounded operator $\mathcal{D}$ is block diagonal with respect to \eqref{eq_dR_decomposition} and it is not hard to see that its restriction $\mathcal{D}_Q$ on each block $\Omega^\ast_{L^2}(X)_Q$ satisfies \[ (\mathcal{D}_Q \pm i)^{-1} \in C_b([1, \infty), \mathfrak{K}(\Omega^\ast_{L^2}(X)_Q)). \] Lemma \ref{lem_testimate} implies that the sum \[ (\mathcal{D} \pm i)^{-1} = \sum_{Q} (\mathcal{D}_Q \pm i)^{-1} \] is absolutely norm-convergent in $C_b([1, \infty), \mathfrak{K}(H_X))$. The second claim also follows from Lemma \ref{lem_testimate}. \end{proof} Note that the $G$-action preserves the domain $C_c([1,\infty), \Omega^\ast_{0}(X))$ of $\mathcal{D}$. \begin{lemma}\label{lem_Glem} If the weight function $w$ is $G$-adapted, then for any $g$ in $G$, the difference $g(\mathcal{D}) - \mathcal{D}$ is a bounded operator. Moreover, $||g(\mathcal{D}) - \mathcal{D}||$ is bounded by a constant times \[ \sup \{ |gw(H) - w(H)| \mid \text{ $H$ is a hyperplane of $X$} \} + \max \{ w(H) \mid \text{$H$ separates $P$ and $gP$} \}. \] The constant depends only on the dimension of $X$. \end{lemma} \begin{proof} This can be proven in the same way as the proof of \cite[Lemma 4.11]{BGHN}, with slightly more refined estimates. We write $\mathcal{D}_t=\mathcal{D}_{t, w, P}$ to make clear its dependence on the weight function $w$ and on the base vertex $P$, so that $g(\mathcal{D}_t)=\mathcal{D}_{t, gw, gP}$.
We bound the differences \[ \mathcal{D}_{t, gw, gP} - \mathcal{D}_{t, w, gP}\,\,\, \text{and} \,\,\, \mathcal{D}_{t, w, gP} - \mathcal{D}_{t, w, P} \] separately, uniformly in $t\geq1$ to obtain the estimate in the statement. We have \[ \mathcal{D}_{t, gw, gP} - \mathcal{D}_{t, w, gP}= t^{-1}(d_{tgw, gP} - d_{tw, gP}) + t^{-1}(d_{tgw, gP}^\diamond - d_{tw, gP}^\diamond) \] \[ + (\sqrt{t})^{-1}(e_{tgw, gP} - e_{tw, gP}) + (\sqrt{t})^{-1}(e_{tgw, gP}^\diamond - e_{tw, gP}^\diamond). \] Since $w$ is $G$-adapted, there is $M\geq0$ such that $|gw(H)-w(H)| \leq M$ for all $H$. Using the formula \[ t^{-1}d_{tw, gP}\colon \beta \mapsto t^{-1}d\beta + \sum_{H\in \mathrm{Mid}(C)} w(H)x_{H, gP}\wedge \beta, \] we see that \[ || t^{-1}(d_{tgw, gP} - d_{tw, gP})|| \leq \dim(X)\cdot M \] for all $t\geq1$ and thus $|| t^{-1}(d_{tgw, gP}^\diamond - d_{tw, gP}^\diamond)|| \leq \dim(X)\cdot M$ for all $t\geq1$. For $w\geq1/2$ and for $M\geq m \geq0$, we have \begin{equation}\label{eq_elementary} \sqrt{t}|| we^{-twx}- (w+m)e^{-t(w+m)x} ||_{L^2[0 ,1]} \leq 2M \end{equation} for all $t\geq1$ since for instance we can write \[ |we^{-twx}- (w+m)e^{-t(w+m)x}| \leq w(tmx)e^{-twx}\frac{(1-e^{-tmx})}{tmx} + me^{-t(w+m)x} \] \[ \leq m(twx)e^{-twx} + me^{-t(w+m)x} \] and since (we use $w\geq1/2$) \[ ||twxe^{-twx}||^2_{L^2[0,1]} \leq (tw)^{-1} \int_0^\infty y^2e^{-2y}dy \leq \frac{1}{4wt} \leq \frac{1}{2t}, \] and \[ ||e^{-t(w+m)x}||^2_{L^2[0, 1]} \leq \frac{1}{t(w+m)}\int_0^\infty e^{-2y} dy \leq \frac{1}{2tw} \leq \frac{1}{t} \] so that $\sqrt{t}|| we^{-twx}- (w+m)e^{-t(w+m)x} ||_{L^2[0 ,1]}$ is bounded by \[ \sqrt{t}m (||twxe^{-twx}||_{L^2[0,1]} + ||e^{-t(w+m)x}||_{L^2[0, 1]} ) \leq \sqrt{t}m (\frac{1}{\sqrt{2t}} + \frac{1}{\sqrt t} ) \leq 2M. 
\] Using \eqref{eq_elementary} and the formula \[ (\sqrt{t})^{-1}e_{tw, gP}\colon \beta \mapsto \sum_{H\in \mathrm{SAH}(C)} \sqrt{t}w(H)e^{tw(H)y_{H, gP}}\alpha_{H, gP} \wedge \beta \] we obtain \[ ||(\sqrt{t})^{-1}(e_{tgw, gP} - e_{tw, gP})|| \leq 2\dim(X) \cdot M \] for all $t\geq1$ and thus $||(\sqrt{t})^{-1}(e_{tgw, gP}^\diamond - e_{tw, gP}^\diamond)|| \leq 2\dim(X) \cdot M$ for all $t\geq1$. Putting everything together, we obtain \[ ||\mathcal{D}_{t, gw, gP} - \mathcal{D}_{t, w, gP}|| \leq 6\dim(X)\cdot M. \] Next, we consider the difference $\mathcal{D}_{t, w, gP} - \mathcal{D}_{t, w, P}$. As in the proof of \cite[Lemma 4.11]{BGHN}, \[ (t^{-1}d_{tw, gP} - t^{-1}d_{tw, P})\beta = \sum_{H \in \mathrm{Mid}} w(H)(\alpha_{H, gP} - \alpha_{H, P}) \wedge \beta \] where the difference $\alpha_{H, gP} - \alpha_{H, P}$ is zero unless the hyperplane $H$ separates $P$ and $gP$, in which case the difference is $2\alpha_{H, gP}$. Thus, we have \[ ||t^{-1}d_{tw, gP} - t^{-1}d_{tw, P}|| \leq 2\dim(X) \cdot \max\{w(H) \mid \text{$H$ separates $P$ and $gP$}\}. \] The same is true for $||t^{-1}d_{tw, gP}^\diamond - t^{-1}d_{tw, P}^\diamond||$. Lastly, we have \[ (\sqrt{t})^{-1}(e_{tw, gP} - e_{tw, P})\beta = \sum_{H} \sqrt{t}w(H)(e^{tw(H)y_{H, gP}}\alpha_{H, gP} - e^{tw(H)y_{H, P}} \alpha_{H, P}) \wedge \beta \] where again the summands are zero except when $H$ separates $P$ and $gP$, and we can bound the sum by \[ 2\dim (X) \cdot \max\{ \sqrt{t}w(H)||e^{-tw(H)x}||_{L^2[0 ,1]} \mid \text{$H$ separates $P$ and $gP$} \} \] \[ \leq 2\dim (X) \cdot \max\{ w(H) \mid \text{$H$ separates $P$ and $gP$} \}. \] The same is true for $||(\sqrt{t})^{-1}(e_{tw, gP}^\diamond - e_{tw, P}^\diamond)||$. Putting everything together, we obtain \[ ||\mathcal{D}_{t, w, gP} - \mathcal{D}_{t, w, P}|| \leq 8\dim(X)\cdot \max\{ w(H) \mid \text{$H$ separates $P$ and $gP$} \} \] so we are done.
\end{proof} \begin{theorem}\label{thm_cubeXG} Let $w$ be a weight function for $X$ which is proper and $G$-adapted. Let $H_X=\Omega^\ast_{L^2}(X)$ and $T=\mathcal{D}(1+\mathcal{D}^2)^{-1/2}$. Then, the pair $(H_X, T)$ is an $X$-$G$-localized Kasparov cycle for $KK^G(\C, \C)$ with $[H_X, T_1]=[D^{\mathrm{dR}}_w] =1_G$ in $KK^G(\C, \C)$. \end{theorem} \begin{proof} We check that $\mathcal{D}$ satisfies the conditions (1)--(3) of Proposition \ref{prop_unboundedXG}. (1): Let $B\subset C_0(X)$ be the subalgebra of compactly supported functions that are smooth on each cube. For any $\phi \in B$, we compute \begin{align*} [\mathcal{D}_t, \phi] & = [t^{-1}(d_{tw} + d_{tw}^\diamond) + (\sqrt{t})^{-1}(e_{tw} + e_{tw}^\diamond), \phi] \\ &= t^{-1}[d_{tw} + d_{tw}^\diamond, \phi] + (\sqrt{t})^{-1}[e_{tw} + e_{tw}^\diamond, \phi] \\ & = t^{-1}c(\phi) + (\sqrt{t})^{-1}[e_{tw} + e_{tw}^\diamond, \phi] \end{align*} where $c(\phi)$ denotes the Clifford multiplication by the gradient of $\phi$ in each cube, which is a bounded operator. Let $C>0$ be such that \[ |\phi(x)-\phi(y)|\leq C\cdot \mathrm{distance}(x, y)\,\,\, \text{for any $x, y \in X$}. \] We have (see the proof of \cite[Theorem 5.2]{BGHN}) for all $\beta \in \Omega^\ast(C)$ \begin{align*} ||[e_{tw}, \phi]\beta ||^2 & \leq \sum_{H \in \mathrm{SAH}(C)}C^2 ||xtw(H)e^{-tw(H)x}||^2_{L^2[0, 1]}\cdot ||\beta||^2 \\ & \leq \dim(X) \cdot C^2 \cdot \max\{ (xe^{-x})^2 \mid x\geq0 \} \cdot ||\beta||^2 \\ & \leq \dim(X) \cdot C^2 \cdot ||\beta||^2 \end{align*} for all $t\geq1$. From these, we see that $[\mathcal{D}, \phi]$ extends to a bounded operator $S \in \mathfrak{L}(H_X\otimes C_0[1, \infty))$ with $||S_t||\to 0$ as $t\to \infty$. (2): This follows from Lemma \ref{lem_Glem}. (3): This follows from Lemma \ref{lem_Xlem}. Thus, by Proposition \ref{prop_unboundedXG}, $(H_X, T)$ is an $X$-$G$-localized Kasparov cycle for $KK^G(\C, \C)$. That $[H_X, T_1]=[D^{\mathrm{dR}}_w]$ follows from the definition of $\mathcal{D}$.
We have $[D^{\mathrm{dR}}_w]=1_G$ by Theorem \ref{thm_dR1G}. \end{proof} \begin{theorem}\label{thm_cube} Let $G$ be a second countable, locally compact group which acts properly and continuously on a finite-dimensional CAT(0)-cubical space with bounded geometry by automorphisms. Then, the Baum--Connes assembly map $\mu^{B, G}_r$ is an isomorphism for any separable $G$-$C^*$-algebra $B$, i.e. BCC holds for $G$. \end{theorem} \begin{proof} Combine Theorem \ref{thm_XGgamma} and Theorem \ref{thm_cubeXG}. \end{proof} \begin{remark} Any group $G$ which acts properly on a CAT(0)-cubical space has the Haagerup approximation property \cite[Section 1.2.7]{CCJJV}, so $G$ is a-T-menable. Thus, BCC for these groups is already known by the Higson--Kasparov Theorem \cite{HK97}, \cite{HigsonKasparov}. \end{remark}
\section{$\text{UPOC}^\text{2}$: A Unified Pre-training and Fine-tuning Framework for PMT} In this section, we introduce our \upoc~~model and the proposed cross-modal cross-lingual pre-training tasks for the product description translation. Figure~\ref{fig:model} illustrates the architecture of our \upoc~~model. It is stacked with multiple multi-layer transformer encoders and follows a unified pre-training and fine-tuning scheme \cite{devlin2019bert,zhou2020unified,li2020oscar}. We first pre-train our model with three cross-modal cross-lingual pre-training tasks to effectively learn semantic alignments between images and bilingual texts, including multimodal translation language modeling (MTLM), image source-sentence matching (ISM) and product attribute prediction (ATTP). Then, we fine-tune the pre-trained model for the PMT task. \subsection{Model Architecture} \noindent\textbf{Input Representation.} The input sequence is the concatenation of Image-Source-Target triplet ($V$, $X$, $Y$), where $V$ is the global embedding sequence of product images, $X$ is the word embedding sequence of the source language product description and $Y$ is the word embedding sequence of the target language translation. For each $v_i$ in $V=\{v_0,\cdots,v_i,\cdots,v_N\}$, we extract the global visual feature for the $i$-th image via ResNet-101 \cite{he2016resnet}. Then, a linear layer is employed to map the visual feature to the same dimensionality as the word embedding. For the source and target sentences, we add a special start token ([SOS]) and a special end token ([EOS]) to the start and end of each sentence, and represent each token in the sentence with a word embedding vector learned from scratch. To distinguish different modalities and languages of each element in the whole input sequence ($V$, $X$, $Y$), we add a learnable modality embedding to each element indicating whether it belongs to the image modality, source language sentence or the target language sentence. 
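The assembly of the concatenated input sequence described above can be sketched as follows (a minimal NumPy sketch; all names and shapes are illustrative assumptions, and the positional embeddings added to the text segments are omitted for brevity):

```python
import numpy as np

def build_input_sequence(img_feats, src_ids, tgt_ids,
                         word_emb, img_proj, modality_emb):
    """Concatenate the (V, X, Y) triplet into one input sequence.

    img_feats:    (N, d_img) global image features (e.g. from ResNet-101)
    src_ids,
    tgt_ids:      token-id lists of the source / target sentences
    word_emb:     (vocab, H) word embedding table shared across languages
    img_proj:     (d_img, H) linear layer mapping image features to size H
    modality_emb: (3, H) learnable embeddings marking image / source / target
    """
    v = img_feats @ img_proj + modality_emb[0]   # image segment
    x = word_emb[src_ids] + modality_emb[1]      # source-language segment
    y = word_emb[tgt_ids] + modality_emb[2]      # target-language segment
    return np.concatenate([v, x, y], axis=0)     # (N + |X| + |Y|, H)
```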
Additionally, we add positional embeddings to the tokens of source and target language sentences. The illustration of input representations is shown in the right part of Figure~\ref{fig:model}. \vspace{2pt} \noindent\textbf{Multimodal Transformer.} Our \upoc~~model contains four multi-layer transformer \cite{vaswani:transformer} encoders, including three independent encoders and a cross encoder. We first employ the independent encoders to encode the image sequence, source sentence and target sentence to capture their intra-context information; they have $L_v$, $L_s$ and $L_t$ layers respectively. The outputs of the three encoders are then concatenated as a whole input sequence to the cross encoder. The cross encoder is also a multi-layer transformer encoder, which is used to encode the inter-context across different modalities and languages. We denote the number of cross encoder layers as $L_c$, the hidden size as $H$, and the number of self-attention heads as $A$. We share the parameters of the source sentence encoder and the target sentence encoder in pre-training, and separate them when adapting to the PMT task, because the target sentence encoder is learned as the translation generator in the fine-tuning stage. \subsection{Pre-training Tasks} To learn the cross-lingual correspondence between source and target language sentences and enhance the cross-modal fusion between images and texts for better translation, we pre-train our model with three pre-training tasks, described in this section. \noindent\textbf{Task \#1: Multimodal Translation Language Modeling (MTLM)} Cross-lingual alignment is important to machine translation.
Inspired by the multilingual pre-training task (translation language modeling (TLM) proposed in XLM \cite{conneau2019xlm}) and the multimodal pre-training task (masked language modeling (MLM) generally used in V+L pre-training models \cite{chen2019uniter,li2020unicoder,lu2019vilbert}), we propose to combine them for the multimodal multilingual scenario as the multimodal translation language modeling (MTLM) task for PMT. The goal of MTLM is to predict the masked words in both languages with the context information of images, the surrounding words in the same language, and all the words in the other language. We randomly choose 15\% of the word tokens in both languages for prediction. Each chosen word is replaced with a special [MASK] token 80\% of the time, another random word 10\% of the time and the original word 10\% of the time. Note that the random words could be foreign words. The MTLM task takes the fused features from the cross encoder as input and predicts the original words with an output softmax layer, which is tied with the input embeddings. We share the vocabulary and softmax prediction layer across languages. The training objective of MTLM can be expressed as follows: \begin{equation} \mathcal{L}_{MTLM} = - \mathbb{E}_{(V,X,Y)\sim\mathcal{D}} \log p(x_m,y_m|x_{\setminus m},y_{\setminus m},V;\Theta) \end{equation} where $\mathcal{D}$ denotes the whole training set, $x_m$ and $y_m$ denote the masked words in $X$ and $Y$, and $\Theta$ denotes all learnable parameters of the pre-training model. Note that $x_m$ and $y_m$ are not necessarily semantically aligned words. With the MTLM pre-training task, the model learns the semantic alignments between source and target language words. However, since the translation guidance from the images is much weaker than that from the source words, with only the MTLM task the model tends to ignore the visual modality when generating the target translation.
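The 80/10/10 corruption scheme used by MTLM can be sketched as follows (a minimal sketch; the special-token id, vocabulary size and function name are illustrative assumptions):

```python
import random

MASK_ID, VOCAB_SIZE = 3, 10000  # hypothetical special-token id / vocab size

def mask_for_mtlm(tokens, mask_prob=0.15, seed=None):
    """Choose ~15% of positions; replace each chosen token with [MASK]
    80% of the time, a random word 10% of the time, or keep the original
    10% of the time.  Returns the corrupted sequence and a dict of
    {position: original token} prediction targets."""
    rng = random.Random(seed)
    corrupted, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok
            r = rng.random()
            if r < 0.8:
                corrupted[i] = MASK_ID
            elif r < 0.9:
                corrupted[i] = rng.randrange(VOCAB_SIZE)
            # else: keep the original token as a prediction target
    return corrupted, targets
```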
Therefore, we further propose to enhance the cross-modal fusion between images and texts via cross-modal pre-training tasks. \vspace{2pt} \noindent\textbf{Task \#2: Image Source-Sentence Matching (ISM)} The cross-modal matching task has been widely used in vision-language pre-training models, and is helpful for learning the semantic alignment between the visual and textual modalities. Considering that effective fusion between the image and the source sentence is important for the PMT/MMT task, we conduct semantic matching between the image and the source sentence. We pad the target language sentence $Y$ except for the start [SOS] token, which is further used to predict whether the images $V$ and source sentence $X$ are semantically matched. Specifically, the output of the [SOS] token is fed to a linear layer with the sigmoid function to predict a matching score $s(V, X)$ between 0 and 1. We construct a negative pair by replacing the source sentence in a matched pair with another one. Since different types of products are significantly different, to avoid an oversimplified ISM task, we choose hard negatives by randomly sampling negative source sentences from the set which describes products in the same category as the original sentence. In this way, the model will focus more on the product details rather than the product category to determine whether the image and description are matched. The training objective of the ISM task can be expressed as follows: \begin{equation} \mathcal{L}_{ISM} = - \mathbb{E}_{(V,X)\sim\mathcal{D}}[l \log s(V, X; \Theta) + (1-l) \log (1-s(V, X; \Theta))] \end{equation} where $l\in\{0,1\}$ indicates whether the input image-source pair is a positive ($l=1$) or negative ($l=0$) sample. With the ISM pre-training task, the start [SOS] token of the target language sentence will encode rich fused information from the images and the source language sentence to benefit the target language translation.
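The ISM objective and the in-category hard-negative sampling can be sketched as follows (a minimal sketch with hypothetical names; the scoring network producing $s(V, X)$ is abstracted away):

```python
import math
import random

def ism_loss(scores, labels):
    """Binary cross-entropy over matching scores s(V, X) in (0, 1)."""
    eps = 1e-9
    return -sum(l * math.log(s + eps) + (1 - l) * math.log(1 - s + eps)
                for s, l in zip(scores, labels)) / len(scores)

def sample_hard_negative(idx, category, by_category, rng=random):
    """Replace the source sentence of example `idx` with one describing a
    different product from the *same* category, so the model must rely on
    product details rather than the category to detect the mismatch."""
    candidates = [j for j in by_category[category] if j != idx]
    return rng.choice(candidates)
```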
\vspace{2pt} \noindent\textbf{Task \#3: Product Attribute Prediction (ATTP)} The product attributes describe important information about the commercial products, including the decorations, shapes, colors or styles of the product, as shown in Figure~\ref{fig:data}. To help the model extract this information from images for better translation, we propose to further enhance the semantic encoding of the [SOS] token by predicting the product attributes according to the images. Specifically, we extend the binary classification of the ISM task to a multi-class classification task for attribute prediction. We employ another linear layer with the softmax function to predict the product attributes based on the output of the [SOS] token. Since the ground-truth attributes in the FACAD dataset \cite{yang2020facad} are extracted from the source sentence (the nouns and adjectives in the source sentence), it is easy to predict attributes with the source sentence as context. Therefore, for the ATTP task, we mask the source words that express product attributes, and force the model to predict attributes relying on the images. We adopt the categorical cross-entropy as the training objective as follows: \begin{equation} \mathcal{L}_{ATTP} = - \mathbb{E}_{(V,X)\sim\mathcal{D}} \sum_{c\in C} \log p_c(V,X;\Theta) \end{equation} where $C$ denotes the ground-truth attribute set. \subsection{Fine-tuning in PMT task} After pre-training with the above three pre-training tasks, we adapt our pre-trained model to the PMT task. The PMT/MMT task generates the target translation with the context of the source sentence and images. Since it is a generation task, the currently generated word cannot ``see'' future words. Therefore, we adapt our bi-directional pre-trained model to a uni-directional generator by constraining the self-attention mask of the target language sentence.
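The constrained self-attention mask described above can be sketched as follows (a NumPy sketch under the stated assumptions: image and source positions attend to each other bidirectionally but never to the target, while target positions attend to the full context and causally within the target):

```python
import numpy as np

def pmt_attention_mask(n_img, n_src, n_tgt):
    """Mask over the concatenated (V, X, Y) sequence; entry (i, j) = 1
    means position i may attend to position j."""
    n_ctx = n_img + n_src
    n = n_ctx + n_tgt
    mask = np.zeros((n, n), dtype=int)
    mask[:n_ctx, :n_ctx] = 1          # bidirectional within image + source
    mask[n_ctx:, :n_ctx] = 1          # target reads the full context
    mask[n_ctx:, n_ctx:] = np.tril(np.ones((n_tgt, n_tgt), dtype=int))
    return mask                       # causal (lower-triangular) within target
```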
Similar to the MTLM pre-training task, we randomly choose 15\% of the target sentence words and replace them with the [MASK] token 80\% of the time, another random word 10\% of the time and the original word 10\% of the time. We predict the masked target word with the context information from all the images, all the source sentence words and the target sentence words before its position. The training objective can be expressed as follows: \begin{equation} \mathcal{L}_{PMT} = - \mathbb{E}_{(V,X,Y)\sim\mathcal{D}} \log p(y_m|y_{<m},X,V;\Theta) \end{equation} where $y_m$ denotes the masked words in $Y$, and $\Theta$ denotes all learnable parameters of the pre-trained model. At inference time, we first encode the images, the source sentence and the start [SOS] token of the target translation as the input. The [SOS] token encodes rich fused information from the source sentence and images due to the ISM and ATTP pre-training tasks. Then, we generate the target language translation word by word by feeding a [MASK] token and sampling the predicted token from the softmax output layer. At the next step, the previous [MASK] token is replaced by the generated token, and a new [MASK] token is fed to the model for the next word generation, until an [EOS] token is generated. \section{Conclusion} In this paper, we propose a large-scale product-oriented machine translation (PMT) dataset Fashion-MMT with two versions to support PMT research. Considering that the relevance between product images and descriptions is more complex than that in image captioning datasets, we propose a unified pre-training and fine-tuning framework called \upoc~~to enhance the cross-modal fusion ability. We propose three cross-modal cross-lingual pre-training tasks to improve the semantic alignments between images and texts, and demonstrate their effectiveness on both the PMT and the conventional MMT tasks.
Experimental results on the proposed Fashion-MMT dataset and the Multi30k benchmark dataset show that our model outperforms the state-of-the-art models even when pre-trained on the same dataset without any extra data. Furthermore, our model also shows a good ability to exploit additional noisy triplet data to improve the translation quality and alleviate the demand for expensive manual annotation. \section{Fashion-MMT Dataset} \begin{figure}[t] \centering \begin{overpic}[scale=0.48]{figures/data.png} \end{overpic} \caption{Examples from Fashion-MMT(L) and Fashion-MMT(C) datasets.} \label{fig:data} \end{figure} \vspace{2pt} \subsection{Data Collection} \label{sec:data_collection} We build the Fashion-MMT dataset based on the fashion captioning dataset FACAD \cite{yang2020facad}. The FACAD dataset contains 126,753 English product descriptions with corresponding product images crawled from the Nordstrom website. Each description is aligned with 6\textasciitilde7 product images of different colors and poses on average. There are also product category and attribute labels for each product. To extend the FACAD dataset for the PMT task, we first automatically translate the English product descriptions into Chinese with the Google English$\rightarrow$Chinese translation system. We remove redundant examples and the examples whose English description is shorter than 6 words or longer than 35 words, or whose Chinese translation is shorter than 15 characters. Finally, we obtain 114,257 English-Chinese parallel description pairs with 885,244 product images, denoted as the Fashion-MMT(L) dataset. Since the automatically translated Chinese translations contain noise, we manually clean a subset of the Fashion-MMT(L) dataset to further benefit the field. We sample 40,000 triplets from the Fashion-MMT(L) dataset and conduct manual annotation on the Alibaba Data Service platform \footnote{https://smartdata.taobao.com}.
All workers are native Chinese speakers with good English skills, and are asked to modify the automatically translated Chinese description given the images and English description. To ensure annotation quality, each translation is further reviewed and accepted by another two independent workers. The whole data collection window is around one month. More than 98.5\% of the automatic Chinese translations have been post-edited, and the average Levenshtein distance between each sentence before and after manual editing is 20.2. This suggests that the state-of-the-art text-based machine translation system is far from perfect for translating product descriptions. We denote the cleaned version as the Fashion-MMT(C) dataset. Examples from the Fashion-MMT(L) and Fashion-MMT(C) datasets are presented in Figure~\ref{fig:data}. \begin{table} \centering \caption{Data splits of Fashion-MMT datasets.} \label{tab:data_split} \small \begin{tabular}{l| c| c c c} \toprule Dataset & Split & Train & Validation & Test \\ \midrule \multirow{2}{*}[-0.5ex]{Fashion-MMT(L)} & \#Triplets & 110,257 & 2,000 & 2,000 \\ & \#Images & 853,503 & 15,974 & 15,767 \\ \midrule \multirow{2}{*}[-0.5ex]{Fashion-MMT(C)} & \#Triplets & 36,000 & 2,000 & 2,000 \\ & \#Images & 280,915 & 15,974 & 15,767 \\ \bottomrule \end{tabular} \end{table} \begin{table} \centering \caption{Comparison with other MMT datasets. Avg\_len denotes the average English sentence length.
Avg\_img denotes the average number of images for each parallel text pair.} \label{tab:data_compare} \small \begin{tabular}{l|cccc} \toprule Dataset & \#Image & \#Triplet & Avg\_len & Avg\_img \\ \midrule Multi30k \cite{elliott2016Multi30k} & 31,014 & 31,014 & 13.1 & 1.00 \\ IKEA \cite{zhou2018ikea} & 3,600 & 3,600 & 71.4 & 1.00 \\ \midrule Fashion-MMT(C) & 312,656 & 40,000 & 20.8 & 7.82 \\ Fashion-MMT(L) & 885,244 & 114,257 & 22.6 & 7.75 \\ \bottomrule \end{tabular} \end{table} \subsection{Dataset Statistics} We split each version of Fashion-MMT into three sets, as shown in Table~\ref{tab:data_split}. Since the validation and test sets should be clean to correctly evaluate different models, for the Fashion-MMT(L) dataset we use the same clean validation and test sets as the Fashion-MMT(C) dataset and exclude the corresponding 4,000 triplets from the training set. In Table~\ref{tab:data_compare}, we compare our Fashion-MMT datasets with other existing MMT datasets. Both of our Fashion-MMT(C) and Fashion-MMT(L) datasets have more triplets than the Multi30k \cite{elliott2016Multi30k} benchmark and the commercial-domain IKEA \cite{zhou2018ikea} dataset. Compared with the IKEA dataset, our Fashion-MMT datasets cover more diverse fashion product categories, while the IKEA dataset only focuses on furnishings. Furthermore, each parallel text pair in Fashion-MMT is associated with multiple images, while previous MMT datasets only contain one image for each parallel text. Multiple product images are more common in the real-world scenario, as products on the e-commerce platform are usually associated with more than one image with different poses or colors, showing various product details. \section{Related Works} \noindent\textbf{Multimodal Machine Translation.} The common visual world is shown to be beneficial for bridging different languages \cite{chen:bilingual,sigurdsson:visual_grounding,chen2019fromwordstosents,huang2020ummt,su2019ummt}.
Therefore, recent research works have paid more attention to the Multimodal Machine Translation (MMT) task \cite{elliott2016Multi30k}, which aims to leverage visual context to aid textual machine translation. Calixto and Liu \cite{calixto-liu:Init} explore integrating the global image feature into the encoder or decoder of the NMT model, while works in \cite{caglayan:AttConcat, calixto:doublyAtt, helcl:attentionStrategy, delbrouck:gating, yao2020mmt} employ more fine-grained image context such as spatial image regions via attention mechanisms. Caglayan et al. \cite{caglayan:AttConcat} and Calixto et al. \cite{calixto:doublyAtt} apply concatenation or mean pooling to two independently attended contexts, the textual context of source words and the visual context of image regions, when predicting each target word. To avoid noise from irrelevant information in the image, Delbrouck and Dupont \cite{delbrouck:gating} propose a gating mechanism to weight the importance of the textual and image contexts. Recently, more works \cite{Ive2019distilling, yang2020visual_agreement, yin2020graph, lin2020capsule} propose to represent the image with a set of object features via object detection, considering the strong correspondences between image objects and noun entities in the source sentence. Yang et al. \cite{yang2020visual_agreement} propose to jointly train source-to-target and target-to-source translation models to encourage the models to share the same focus on object regions via visual agreement regularization. Yin et al. \cite{yin2020graph} propose to construct a multi-modal graph with image objects and source words according to an external visual grounding model for the alignment of multi-modal nodes. However, the visual relevance between images and descriptions in PMT is more complex than in MMT: it not only includes explicit correspondences such as products, but also implicit relevance such as shape, texture or even subjective style.
\vspace{2pt} \noindent\textbf{E-commerce Related Tasks.} Fashion-oriented tasks \cite{yang2020facad,zhang2020poet,zhang2020video_titling,guo2019fashion,han2017fashion_compatibility} have been studied recently due to the rapid development of e-commerce platforms. Han et al. \cite{han2017fashion_compatibility} propose to learn compatibility relationships among fashion items for fashion recommendation. Zhang et al. \cite{zhang2020video_titling} propose to generate a short title for user-generated videos. Yang et al. \cite{yang2020facad} and Zhang et al. \cite{zhang2020poet} propose to generate detailed descriptions for product images/videos. Guo et al. \cite{guo2019fashion} propose to retrieve expected products given a similar product image and a modifying text description. However, no previous studies focus on fashion product description translation. \vspace{2pt} \noindent\textbf{Transformer-based Pre-training.} Recently, self-supervised pre-training has achieved great success on multiple downstream tasks. Devlin et al. \cite{devlin2019bert} propose a monolingual pre-trained language model named BERT and achieve state-of-the-art results on eleven natural language processing tasks by simply pre-training with two self-supervised tasks. Afterwards, multilingual pre-trained language models \cite{devlin2019bert, conneau2019xlm, conneau2020xlmr} and multimodal pre-trained models \cite{lu2019vilbert, li2019visualbert, chen2019uniter, li2020oscar, zhou2020unified, li2020unicoder} were further proposed to support multilingual or multimodal scenarios. The multilingual pre-training models show strong abilities in zero-shot cross-lingual transfer, while the vision-language pre-training models outperform the state-of-the-art task-specific models on both vision-language understanding tasks, e.g., image retrieval \cite{faghri2018vsepp}, and generation tasks, e.g., image captioning \cite{vinyals2015showtell}.
However, there are few works \cite{ni2021m3p,zhou2021uc2} studying multimodal multilingual pre-training models, and none of them verify their effectiveness on MMT or PMT tasks. \section{Introduction} With the rapid development of e-commerce, more and more people shop online because of its convenience and efficiency. In order to better serve e-shoppers all over the world, it is necessary to translate e-commercial product descriptions into various languages. Therefore, the product-oriented machine translation (PMT) task \cite{zhou2018ikea, calixto2017ecommerce, calixto2017casestudy} has received growing research attention recently. \begin{figure}[t] \centering \begin{overpic}[scale=0.33]{figures/intro.png} \end{overpic} \caption{Examples of MMT and PMT tasks. The colored text indicates visually relevant phrases. The underlined words are specialized jargons which are ambiguous and translated incorrectly by the current machine translation system.} \label{fig:intro} \end{figure} The domain specialty makes the PMT task more challenging than traditional machine translation problems. Firstly, product descriptions contain many specialized jargons, which could be ambiguous in different contexts. It is hard to understand and translate these descriptions without the corresponding product images. For example, in Figure~\ref{fig:intro}, the word ``checks'' in the source product description means ``grids'' and the word ``tank'' means ``vest''. The meanings of these two words are different from their common meanings. Therefore, the current text-based machine translation system cannot translate them correctly. Secondly, although the visual context is beneficial to the translation, the relevance between product image and text description is more complex than that in the conventional multimodal machine translation (MMT) task. As shown in Figure~\ref{fig:intro}, in the MMT task, the text description explicitly describes the major objects in the image.
However, in the PMT task, text descriptions are related to images in very different aspects, such as products, shapes, colors or even more subjective styles. Therefore, it requires PMT models to dynamically extract different types of information from images to help the text translation. Last but not least, existing resources for PMT research are rather limited, for example, the latest PMT dataset IKEA \cite{zhou2018ikea} contains only 3,600 data samples in the <image, source sentence, target translation> triplet format. It is extremely expensive to manually annotate translations for product descriptions as it demands the annotator to master multiple languages as well as knowledge about the products. The scarcity of PMT datasets further constrains the automatic translation quality of product descriptions. In this paper, we construct a large-scale bilingual product description dataset, Fashion-MMT, which is based on automatically crawled product images and English descriptions from e-commerce website \cite{yang2020facad}. The dataset covers various fashion products from 78 categories, including dresses, shoes, pants, sunglasses, earrings and so on. We create two types of translation annotations. The first type, called Fashion-MMT(L), contains 114,257 automatic Chinese translations of original English product descriptions via a state-of-the-art text-based machine translation system. Although it is easy to achieve large-scale, such translations are noisy. The second type, called Fashion-MMT(C), is a cleaned subset of Fashion-MMT(L) and contains 40,000 <image, English description, Chinese translation> triplets with the manually annotated Chinese translations. In order to take advantage of the large-scale Fashion-MMT dataset to learn semantic alignments among product images and bilingual texts, we propose a Unified Product-Oriented Cross-lingual Cross-modal (\upoc~) model for pre-training and fine-tuning. 
The \upoc~~model is based on multimodal transformer \cite{lu2019vilbert, li2019visualbert, chen2019uniter} and we design three proxy tasks to effectively learn the inter relationships among images and bilingual texts in pre-training, including multi-modal translation language modeling (MTLM), image source-sentence matching (ISM), and product attribute prediction (ATTP) tasks. Experimental results on Fashion-MMT(C) and Multi30k benchmark datasets show that even pre-trained on the same dataset without any extra data, our model outperforms the state-of-the-art model by a large margin due to its better abilities in cross-modal fusion, i.e., +5.9\% BLEU@4 score on Multi30k and +2.1\% on Fashion-MMT(C). When augmented with large-scale noisy triplet data in Fashion-MMT(L), our model achieves more gains by +1.4 BLEU@4 score and +17.6 CIDEr score. The main contributions of our work are summarized as follows: \parskip=0.1em \begin{itemize}[itemsep=1pt,partopsep=0pt,parsep=\parskip,topsep=0.5pt] \item We propose the first large-scale PMT dataset Fashion-MMT to support the PMT research and e-commerce applications. \item We design a unified pre-training and fine-tuning model \upoc~ and three cross-modal cross-lingual proxy tasks to enhance product machine translation. \item Our model achieves the state-of-the-art results on both the Multi30k and Fashion-MMT datasets, and demonstrates its robustness to benefit from large-scale noisy data. \end{itemize} \section{Experiments} We evaluate our \upoc~~model on the Multi30k \cite{elliott2016Multi30k} benchmark dataset and the new proposed Fashion-MMT(C/L) datasets. To demonstrate the effectiveness of our cross-modal cross-lingual pre-training scheme for the PMT/MMT task, we experiment with two settings: 1) pre-training on the same downstream dataset without extra data and 2) pre-training with additional noisy triplet data. 
\begin{table*}[ht] \caption{PMT results with different pre-training tasks and pre-training data on Fashion-MMT(C) dataset.} \vspace{-5pt} \label{tab:aba_task} \centering \small \begin{tabular}{c |c |c c c| c c c|c c c} \toprule \multirow{2}{*}[-0.5ex]{Row} & \multirow{2}{*}[-0.5ex]{Extra data} & \multicolumn{3}{c|}{Pre-training tasks} & \multicolumn{3}{c|}{Validation} & \multicolumn{3}{c}{Test} \\ \cmidrule{3-11} & & MTLM & ISM & ATTP & BLEU@4 & METEOR & CIDEr & BLEU@4 & METEOR & CIDEr \\ \midrule 1 & - & & & & 39.39 & 34.69 & 285.42 & 39.41 & 34.59 & 281.94 \\ \specialrule{0.05em}{1.5pt}{1.5pt} 2 & - & \checkmark & & & 41.45 & 35.79 & 307.61 & 41.17 & 35.55 & 303.58 \\ 3 & - & \checkmark & \checkmark & & 41.79 & 35.84 & 308.59 & 41.38 & 35.68 & 305.21 \\ 4 & - & \checkmark & \checkmark & \checkmark & \textbf{41.93} & \textbf{36.01} & \textbf{313.46} & \textbf{41.56} & \textbf{35.87} & \textbf{306.82} \\ \specialrule{0.05em}{1.5pt}{1.5pt} 5 & Fashion-MMT(L) & \checkmark & \checkmark & \checkmark & \textbf{43.06} & \textbf{36.64} & \textbf{324.36} & \textbf{43.00} & \textbf{36.68} & \textbf{324.46} \\ \bottomrule \end{tabular} \end{table*} \subsection{Experimental Settings} \noindent\textbf{Datasets.} Multi30k \cite{elliott2016Multi30k} is the benchmark dataset for the conventional MMT task, where the images are about human daily activities. It contains 29,000 images for training, 1,014 for validation and 1,000 for testing (aka Test2016). Each image is annotated with an English description and its human translation to German (DE). Besides the official testing set of Multi30k, we also evaluate on another testing set (aka Test2017) released for the multimodal translation shared task \cite{elliott:ambcoco}, which is more difficult to translate. We adopt the byte pair encoding \cite{sennrich:bpe} with 10,000 merge operations to get both the source and target language vocabularies. 
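The merge-based vocabulary construction can be illustrated with a toy pure-Python BPE learner. This is a minimal sketch of the algorithm, not the actual tooling used in the experiments, and it runs a handful of merges on a toy corpus rather than the 10,000 merge operations reported above.

```python
from collections import Counter

def get_pair_counts(vocab):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace every occurrence of the pair with its concatenation."""
    a, b = pair
    out = {}
    for word, freq in vocab.items():
        symbols = word.split()
        new_syms, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and symbols[i] == a and symbols[i + 1] == b:
                new_syms.append(a + b)
                i += 2
            else:
                new_syms.append(symbols[i])
                i += 1
        out[" ".join(new_syms)] = freq
    return out

def learn_bpe(corpus, num_merges):
    """Learn `num_merges` merge operations from a list of words."""
    vocab = Counter(" ".join(w) for w in corpus)  # words as space-joined chars
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_counts(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        vocab = merge_pair(best, vocab)
        merges.append(best)
    return merges

corpus = ["low", "lower", "lowest", "low"]
merges = learn_bpe(corpus, 3)
```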
For the Fashion-MMT(L/C), we segment the Chinese translations into raw Chinese characters to get the target language vocabulary. \noindent\textbf{Metrics.} We follow previous works \cite{Ive2019distilling, yin2020graph} in using BLEU \cite{papineni:bleu} and METEOR \cite{michael:meteor} to evaluate translation quality. The BLEU metric computes the precision of $n$-grams, while the METEOR metric employs more flexible matching, including exact, stem, synonym, and paraphrase matches between words and phrases. Furthermore, we also report the CIDEr \cite{vedantam2015cider} score, which is commonly used in image/video captioning tasks and pays more attention to visually relevant words based on term frequency and inverse document frequency (tf-idf). \noindent\textbf{Implementation Details.} We pre-train two model variants for different experimental settings. For the setting with no extra pre-training data, we set the layer number of the independent encoders to $L_v=L_s=L_t=1$, the layer number of the cross encoder to $L_c=3$, the hidden size to $H=512$, and the head number to $A=8$. For the setting of data augmentation with noisy data, we drop the independent encoders to avoid the influence of noise from target translations, and set the layer number of the cross encoder to $L_c=6$. An ablation study of the layer number settings can be found in Section~\ref{sec:ablation}. We initialize all the model parameters from scratch. When pre-training the model, we sample the batch of each pre-training task with the proportion MTLM:ISM:ATTP=9:2:1, except for Multi30k. Since there are no attribute labels in the Multi30k dataset, we pre-train the model without the ATTP task and sample the batches of the other two tasks with the proportion MTLM:ISM=3:1. We pre-train our model for at most 250K iterations with a warm-up strategy of 10,000 steps. When fine-tuning the pre-trained model on the PMT/MMT task, we set the learning rate to 6e-5.
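The batch sampling proportion and warm-up schedule above can be sketched as follows. This is a minimal illustration, not the authors' code; in particular, holding the learning rate at its peak after warm-up is our assumption, since the text states only the warm-up length and the fine-tuning learning rate.

```python
import random

TASKS, WEIGHTS = ["MTLM", "ISM", "ATTP"], [9, 2, 1]

def sample_task(rng):
    # Draw the task for the next batch with probability 9/12, 2/12, 1/12.
    return rng.choices(TASKS, weights=WEIGHTS, k=1)[0]

def warmup_lr(step, peak_lr=6e-5, warmup_steps=10_000):
    # Linear warm-up over the first 10,000 steps; holding the peak rate
    # afterwards is an assumption on our part.
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr

rng = random.Random(0)
draws = [sample_task(rng) for _ in range(12_000)]
```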
For the images in Fashion-MMT(C/L), we use the ResNet-101 \cite{he2016resnet} pre-trained on ImageNet to encode the global feature for each image. To narrow the large domain gap between ImageNet and our Fashion-MMT dataset, we fine-tune the conv4\_x and the conv5\_x layers of ResNet-101 with the objectives of product category classification and attribute prediction. For the images in Multi30k, we follow previous works \cite{yin2020graph,lin2020capsule} to represent each image with object region features. We detect up to 20 object regions and extract their corresponding visual features by Faster R-CNN \cite{ren2015fastrcnn}. \begin{figure}[t] \centering \begin{overpic}[scale=0.27]{figures/table.png} \end{overpic} \vspace{-5pt} \caption{Results variation with different layer numbers of encoders on Fashion-MMT test set.} \vspace{-10pt} \label{fig:aba_layer} \end{figure} \begin{table*}[ht] \caption{Experimental results on EN$\Rightarrow$ZH translation task on the Fashion-MMT(C) dataset.} \vspace{-4pt} \label{tab:FAMMT_results} \small \centering \begin{tabular}{c |l| c| c c c|c c c} \toprule \multirow{2}{*}[-0.5ex]{Row} & \multirow{2}{*}[-0.5ex]{Method} & \multirow{2}{*}[-0.5ex]{Extra data} & \multicolumn{3}{c|}{Validation} & \multicolumn{3}{c}{Test} \\ \cmidrule{4-9} & & & BLEU@4 & METEOR & CIDEr & BLEU@4 & METEOR & CIDEr \\ \midrule 1 & Transformer \cite{vaswani:transformer} & - & 40.58 & 35.84 & 303.69 & 40.61 & 35.77 & 302.3 \\ 2 & Multimodal Graph \cite{yin2020graph} & - & 41.07 & 35.55 & 307.38 & 40.70 & 35.45 & 305.08 \\ 3 & \textbf{$\text{UPOC}^\text{2}$ (ours)} & - & \textbf{41.93} & \textbf{36.01} & \textbf{313.46} & \textbf{41.56} & \textbf{35.87} & \textbf{306.82} \\ \midrule 4 & Transformer \cite{vaswani:transformer} & Fashion-MMT(L) & 41.28 & 36.06 & 315.15 & 41.21 & 35.91 & 312.34 \\ 5 & Multimodal Graph \cite{yin2020graph} & Fashion-MMT(L) & 41.39 & 35.96 & 316.21 & 41.49 & 35.95 & 312.68 \\ 6 & \textbf{$\text{UPOC}^\text{2}$ (ours)} & Fashion-MMT(L) & 
\textbf{43.06} & \textbf{36.64} & \textbf{324.36} & \textbf{43.00} & \textbf{36.68} & \textbf{324.46} \\ \bottomrule \end{tabular} \vspace{-4pt} \end{table*} \begin{table}[t] \caption{EN$\Rightarrow$DE translation results on Multi30k dataset, where B denotes BLEU, and M denotes METEOR.} \vspace{-4pt} \label{tab:multi30k_results} \small \centering \begin{tabular}{l |c c| c c } \toprule \multirow{2}{*}[-0.5ex]{Model} & \multicolumn{2}{c|}{Test2016} & \multicolumn{2}{c}{Test2017} \\ \cmidrule{2-5} & B@4 & M & B@4 & M \\ \midrule INIT \cite{calixto-liu:Init} & 37.3 & 55.1 & - & - \\ DATTN \cite{calixto:doublyAtt} & 36.5 & 55.0 & - & - \\ SATTN \cite{delbrouck:gating} & 38.2 & 55.4 & - & - \\ Imagination \cite{elliott:Imagination} & 36.8 & 55.8 & - & - \\ $\mathrm{\text{VMMT}_\text{F}}$ \cite{calixto:latent} & 37.7 & 56.0 & 30.1 & 49.9 \\ Deliberation \cite{Ive2019distilling} & 38.0 & 55.6 & - & - \\ VGR \cite{yang2020visual_agreement} & - & - & 29.5 & 50.3 \\ UVR \cite{zhang:universal} & 36.9 & - & 28.6 & - \\ Multimodal Transformer \cite{yao2020mmt} & 38.7 & 55.7 & - & - \\ Multimodal Graph \cite{yin2020graph} & 39.8 & 57.6 & 32.2 & 51.9 \\ DCCN \cite{lin2020capsule} & 39.7 & 56.8 & 31.0 & 49.9 \\ \midrule \textbf{$\text{UPOC}^\text{2}$ (ours)} & \textbf{40.8} & \textbf{58.9} & \textbf{34.1} & \textbf{53.4} \\ \bottomrule \end{tabular} \vspace{-6pt} \end{table} \begin{table*}[t] \caption{EN$\Rightarrow$ZH translation results on Fashion-MMT dataset with different numbers of clean triplets for fine-tuning.} \label{tab:few_shot} \small \centering \begin{tabular}{c| l| c| c c c|c c c} \toprule \multirow{2}{*}[-0.5ex]{Row} & \multirow{2}{*}[-0.5ex]{Method} & \multirow{2}{*}[-0.5ex]{\begin{tabular}[c]{@{}c@{}}Fine-tuning \\ data (\#Triplets)\end{tabular}} & \multicolumn{3}{c|}{Validation} & \multicolumn{3}{c}{Test} \\ \cmidrule{4-9} & & & BLEU@4 & METEOR & CIDEr & BLEU@4 & METEOR & CIDEr \\ \midrule 1 & $\text{UPOC}^\text{2}$ & 5,000 & 38.33 & 34.06 & 281.18 & 38.13 & 33.92 &
276.74 \\ 2 & $\text{UPOC}^\text{2}$ & 10,000 & 40.32 & 34.92 & 298.31 & 39.95 & 35.01 & 293.69\\ 3 & $\text{UPOC}^\text{2}$ & 15,000 & 41.28 & 35.63 & 307.15 & 40.89 & 35.48 & 306.28\\ \midrule 4 & $\text{UPOC}^\text{2}$ & 36,000 (full) & 42.44 & 36.21 & 323.22 & 42.43 & 36.25 & 320.25 \\ 5 & Multimodal Graph \cite{yin2020graph} & 36,000 (full) & 41.07 & 35.55 & 307.38 & 40.70 & 35.45 & 305.08 \\ \bottomrule \end{tabular} \end{table*} \subsection{Pre-training for PMT} \label{sec:ablation} \noindent\textbf{Pre-training Tasks and Data.} In Table~\ref{tab:aba_task}, we experiment with different pre-training tasks and data to evaluate the effectiveness of our \upoc~~model for the PMT task. Row~1 is a baseline without pre-training, directly trained with the fine-tuning objective on the Fashion-MMT(C) dataset. When first pre-trained with the MTLM task on the same dataset and then fine-tuned for the PMT evaluation, the model improves significantly by over 2 BLEU@4 points, as shown in row~2, which demonstrates the effectiveness of our pre-training and fine-tuning scheme on the PMT task. Gradually combining the ISM and ATTP tasks with the MTLM task improves the BLEU@4 score from 41.45 to 41.93 and the CIDEr score from 307.61 to 313.46, which indicates the effectiveness of the ISM and ATTP tasks. Pre-training with the three tasks together achieves the best translation results on the Fashion-MMT(C) dataset. In row~5, we add the noisy triplet data from the Fashion-MMT(L) dataset in the pre-training stage and fine-tune the model on the clean Fashion-MMT(C) dataset, which achieves significant additional gains compared with row~4. It shows that our model can benefit from machine-generated noisy data, which is easy to acquire by simply translating the extensive existing mono-lingual product descriptions through a machine translation system.
We also evaluate our \upoc~~model in a lower-resource setting in Section~\ref{sec:few_shot}, where the model is pre-trained on the noisy Fashion-MMT(L) dataset and fine-tuned with a smaller amount of clean PMT data. \vspace{3pt} \noindent\textbf{Different Layer Numbers of Encoders.} In our \upoc~~model, there are three independent encoders to encode the intra-context information for each modality and language, and a cross encoder to capture the cross-modal and cross-lingual information. We set the layer number of the independent encoders to 1 and that of the cross encoder to 3 when only using clean data in the pre-training. However, when adding more noisy data for pre-training, we study the translation performance with respect to different layer numbers in Figure~\ref{fig:aba_layer}. The left figure shows that increasing the layer number of the independent encoders leads to performance degradation, because the noise in the machine-generated target translations influences the language modeling of the target sentence encoder. Therefore, we drop the independent encoders in this setting and directly encode the concatenated input sequence with a unified cross encoder. In the right figure, we experiment with different numbers of cross encoder layers. It shows that as the number of cross encoder layers increases, the translation performance improves until the layer number exceeds 6. It is also found that when the number of layers is less than 4, the model achieves even worse results than without additional pre-training data, which implies that an overly simple model architecture hinders the performance gains from data augmentation.
\subsection{Comparison with the State-of-the-arts} To demonstrate the effectiveness of our proposed \upoc~~model on the PMT and MMT tasks, we compare our model with the following state-of-the-art NMT and MMT baselines: \parskip=0.1em \begin{itemize}[itemsep=1.5pt,partopsep=0pt,parsep=\parskip,topsep=1.5pt] \item Transformer \cite{vaswani:transformer}: The state-of-the-art pure-text NMT model. \item Multimodal Graph \cite{yin2020graph}: The state-of-the-art MMT model, which constructs a multimodal graph with the source sentence and image. \end{itemize} Table~\ref{tab:FAMMT_results} and Table~\ref{tab:multi30k_results} report the translation results of different models on the Fashion-MMT(C) and Multi30k datasets, respectively. Even without any extra data for pre-training, our \upoc~~model outperforms the state-of-the-art MMT model on both datasets, which demonstrates the effectiveness of our pre-training and fine-tuning strategies for the PMT and MMT tasks. Comparing rows~1 and 2 in Table~\ref{tab:FAMMT_results}, the conventional state-of-the-art MMT model shows limited improvements over the pure-text NMT model when adapted to the PMT task. It further validates the challenge of the PMT task: the visual relevance between the product image and description is more complex than that in image captioning data, and it is difficult to capture by simply connecting the source entities with visual objects. However, our \upoc~~model achieves the best performance on both the PMT and conventional MMT tasks even without any extra data for pre-training, owing to its good cross-modal fusion abilities learned through the multiple pre-training tasks. When pre-training with more noisy triplet data annotated by a machine translation system, our model achieves additional gains, while the NMT and state-of-the-art MMT models receive limited improvement due to the influence of noise.
However, with the proposed multiple cross-modal cross-lingual pre-training tasks, our model is better able to alleviate the influence of noise in the training data than models trained solely with the maximum likelihood estimation (MLE) objective over the ground-truth translation. \subsection{Lower-resource Learning Results} \label{sec:few_shot} Considering that manually annotated PMT triplets are rather rare in reality, while abundant mono-lingual product descriptions are available on e-commerce platforms, we conduct experiments in a lower-resource setting where our \upoc~~model is pre-trained only with machine-generated noisy triplets in Fashion-MMT(L) and fine-tuned with fewer clean triplets from Fashion-MMT(C). Table~\ref{tab:few_shot} shows the translation results with different numbers of triplets used in the fine-tuning stage. With only 15,000 annotated triplets (\textasciitilde40\% of the dataset) used for fine-tuning, our model achieves results comparable to the state-of-the-art MMT model (row~5), which is trained on the full set of Fashion-MMT(C). It demonstrates that our model can greatly alleviate the demand for annotated triplets and effectively benefit from the noisy data, which can be easily acquired from the existing large-scale mono-lingual product descriptions. The result of our model fine-tuned on the full set in row~4 is slightly inferior to our final result in row~6 of Table~\ref{tab:FAMMT_results}. This is because our final model is pre-trained on both the Fashion-MMT(L) and Fashion-MMT(C) datasets, while in the lower-resource setting, we pre-train the model only on the noisy Fashion-MMT(L) dataset.
\begin{figure}[t] \centering \begin{overpic}[scale=0.49]{figures/results.png} \end{overpic} \caption{Yellow marks incorrect translations of the underlined source words, while blue marks the correct translations produced by our model.} \label{fig:results} \end{figure} \begin{figure}[t] \centering \begin{overpic}[scale=0.43]{figures/attn.png} \end{overpic} \caption{Visualization of the attention map on the images and source sentence at each EN$\Rightarrow$ZH translation step.} \label{fig:attn} \end{figure} \subsection{Qualitative Results} In Figure~\ref{fig:results}, we visualize the EN$\Rightarrow$ZH translation results of our \upoc~~model and the compared baselines on the Fashion-MMT(C) dataset. Our model is pre-trained and fine-tuned on the same dataset without any extra data. In the first example, the text-only transformer and the state-of-the-art MMT model both translate the source words ``edge'' and ``top'' literally, whereas in this example the word ``edge'' means ``style'' and the word ``top'' refers to the blouse. With the help of the cross-modal information, our model correctly understands and translates these words. In the second example, the NMT model and the MMT model both misunderstand the source word ``grounded'' and translate it as ``foundation'', while our model correctly translates it with the meaning of the sole. In Figure~\ref{fig:attn}, we visualize the attention map on the source sentence and images at each translation generation step. Each generated target word is shown to align well with the corresponding source word, which demonstrates that our model learns a good bilingual semantic alignment for accurate translation. For the visual attention, the model attends more to the detailed partial-view images (IMG5-6) when translating the source words ``pearly button'', while it attends more to the full-body images when translating the word ``dress''.
It shows that our model can focus on the related images when translating different source words. \section{Conclusion} In this paper, we propose a large-scale product-oriented machine translation (PMT) dataset, Fashion-MMT, with two versions to support PMT research. Considering that the relevance between product images and descriptions is more complex than that in image captioning datasets, we propose a unified pre-training and fine-tuning framework called \upoc~~to enhance the cross-modal fusion ability. We propose three cross-modal cross-lingual pre-training tasks to improve the semantic alignments between images and texts, and demonstrate their effectiveness on both the PMT and the conventional MMT tasks. Experimental results on the proposed Fashion-MMT dataset and the Multi30k benchmark dataset show that our model outperforms the state-of-the-art models even when pre-trained on the same dataset without any extra data. Furthermore, our model also shows a good ability to exploit additional noisy triplet data to improve the translation quality and alleviate the demand for expensive manual annotation. \section{Fashion-MMT Dataset} \begin{figure}[t] \centering \begin{overpic}[scale=0.48]{figures/data.png} \end{overpic} \caption{Examples from the Fashion-MMT(L) and Fashion-MMT(C) datasets.} \label{fig:data} \end{figure} \vspace{2pt} \subsection{Data Collection} \label{sec:data_collection} We build the Fashion-MMT dataset based on the fashion captioning dataset FACAD \cite{yang2020facad}. The FACAD dataset contains 126,753 English product descriptions with corresponding product images crawled from the Nordstrom website. Each description is aligned with an average of 6\textasciitilde7 product images of different colors and poses. There are also product category and attribute labels for each product. To extend the FACAD dataset for the PMT task, we first automatically translate the English product descriptions into Chinese with the Google English$\rightarrow$Chinese translation system.
We remove redundant examples, as well as examples whose English description is shorter than 6 words or longer than 35 words, or whose Chinese translation is shorter than 15 characters. Finally, we obtain 114,257 English-Chinese parallel description pairs with 885,244 product images, denoted as the Fashion-MMT(L) dataset. Since the automatically translated Chinese translations contain noise, we manually clean a subset of the Fashion-MMT(L) dataset to further benefit the field. We sample 40,000 triplets from the Fashion-MMT(L) dataset and conduct manual annotation on the Alibaba Data Service platform\footnote{https://smartdata.taobao.com}. All workers are native Chinese speakers with good English skills, and are asked to correct the automatically translated Chinese description given the images and the English description. To ensure annotation quality, each translation is further reviewed and accepted by another two independent workers. The whole data collection window is around one month. More than 98.5\% of the automatic Chinese translations have been post-edited, and the average Levenshtein distance per sentence between the automatic translation and its manual edition is 20.2. This suggests that the state-of-the-art text-based machine translation system is far from adequate for translating product descriptions. We denote the cleaned version as the Fashion-MMT(C) dataset. Examples from the Fashion-MMT(L) and Fashion-MMT(C) datasets are presented in Figure~\ref{fig:data}.
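The length filters and the edit-distance statistic above can be sketched in a few lines of Python. This is an illustration, not the authors' pipeline; in particular, the inclusive boundary handling of the length thresholds is our assumption.

```python
def keep_example(en_desc: str, zh_trans: str) -> bool:
    """Apply the length filters: 6-35 English words and at least
    15 Chinese characters (boundary inclusiveness is assumed)."""
    n_words = len(en_desc.split())
    return 6 <= n_words <= 35 and len(zh_trans) >= 15

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance over characters,
    as used to quantify post-editing effort (reported average: 20.2)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]
```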
\begin{table} \centering \caption{Data splits of Fashion-MMT datasets.} \label{tab:data_split} \small \begin{tabular}{l| c| c c c} \toprule Dataset & Split & Train & Validation & Test \\ \midrule \multirow{2}{*}[-0.5ex]{Fashion-MMT(L)} & \#Triplets & 110,257 & 2,000 & 2,000 \\ & \#Images & 853,503 & 15,974 & 15,767 \\ \midrule \multirow{2}{*}[-0.5ex]{Fashion-MMT(C)} & \#Triplets & 36,000 & 2,000 & 2,000 \\ & \#Images & 280,915 & 15,974 & 15,767 \\ \bottomrule \end{tabular} \end{table} \begin{table} \centering \caption{Comparison with other MMT datasets. Avg\_len denotes the average English sentence length. Avg\_img denotes the average number of images for each parallel text pair.} \label{tab:data_compare} \small \begin{tabular}{l|cccc} \toprule Dataset & \#Image & \#Triplet & Avg\_len & Avg\_img \\ \midrule Multi30k \cite{elliott2016Multi30k} & 31,014 & 31,014 & 13.1 & 1.00 \\ IKEA \cite{zhou2018ikea} & 3,600 & 3,600 & 71.4 & 1.00 \\ \midrule Fashion-MMT(C) & 312,656 & 40,000 & 20.8 & 7.82 \\ Fashion-MMT(L) & 885,244 & 114,257 & 22.6 & 7.75 \\ \bottomrule \end{tabular} \end{table} \subsection{Dataset Statistics} We split each version of Fashion-MMT into three sets as shown in Table~\ref{tab:data_split}. Since the validation and test sets should be clean to correctly evaluate different models, for the Fashion-MMT(L) dataset we use the same clean validation and test sets as the Fashion-MMT(C) dataset and exclude the corresponding 4,000 triplets from the training set. In Table~\ref{tab:data_compare}, we compare our Fashion-MMT datasets with other existing MMT datasets. Both the Fashion-MMT(C) and Fashion-MMT(L) datasets have more triplets than the Multi30k \cite{elliott2016Multi30k} benchmark and the commercial-domain IKEA \cite{zhou2018ikea} dataset. Compared with the IKEA dataset, our Fashion-MMT datasets cover more diverse fashion product categories, while the IKEA dataset focuses only on furnishings.
Furthermore, for each parallel text pair in the Fashion-MMT, there are multiple images associated with it, while previous MMT datasets only contain one image for each parallel text. Multiple product images are more common in the real-world scenario as products on the e-commerce platform are usually associated with more than one image with different poses or colors, showing various product details.
\subsection{Experimental Settings} \noindent\textbf{Datasets.} Multi30k \cite{elliott2016Multi30k} is the benchmark dataset for the conventional MMT task, where the images are about human daily activities. It contains 29,000 images for training, 1,014 for validation and 1,000 for testing (aka Test2016). Each image is annotated with an English description and its human translation to German (DE). Besides the official testing set of Multi30k, we also evaluate on another testing set (aka Test2017) released for the multimodal translation shared task \cite{elliott:ambcoco}, which is more difficult to translate. We adopt the byte pair encoding \cite{sennrich:bpe} with 10,000 merge operations to get both the source and target language vocabularies. For the Fashion-MMT(L/C), we segment the Chinese translation with raw Chinese characters to get the target language vocabulary. \noindent\textbf{Metrics.} We follow previous works \cite{Ive2019distilling, yin2020graph} to use BLEU \cite{papineni:bleu} and METEOR \cite{michael:meteor} to evaluate the translation qualities. The BLEU metric computes the precision of $n$-grams and the METEOR metric employs more flexible matching, including exact, stem, synonym, and paraphrase matches between words and phrases. Furthermore, we also report the CIDEr \cite{vedantam2015cider} score, which is commonly used in the image/video captioning tasks and pays more attention to visually relevant words based on the term frequency and inverse document frequency (tf-idf). \noindent\textbf{Implementation Details.} We pre-train two model variants for different experimental settings. For the setting with no extra pre-training data, we set the layer number of independent encoders $L_v=L_s=L_t=1$, the layer number of cross encoder $L_c=3$, the hidden size $H=512$, and the head number $A=8$. 
For the setting of data augmentation with noisy data, we drop the independent encoders to avoid the influence of noise from target translations, and set the the layer number of cross encoder $L_c=6$. An ablation study of the layer number settings can be found in Section~\ref{sec:ablation}. We initialize all the model parameters from scratch. When pre-training the model, we sample the batch of each pre-trained task with the proportion of MTLM:ISM:ATTP=9:2:1 except for Multi30k. Since there is no attribute in the Multi30k dataset, we pre-train the model without the ATTP task, and sample the batch of the two other tasks with the the proportion of MTLM:ISM=3:1. We pre-train our model with at most 250K iterations and a warm-up strategy of 10,000 steps. When fine-tuning the pre-trained model on PMT/MMT task, we set the learning rate as 6e-5. For the images in Fashion-MMT(C/L), we use the ResNet-101 \cite{he2016resnet} pre-trained on ImageNet to encode the global feature for each image. To narrow the large domain gap between ImageNet and our Fashion-MMT dataset, we fine-tune the conv4\_x and the conv5\_x layers of ResNet-101 with the objectives of product category classification and attribute prediction. For the images in Multi30k, we follow previous works \cite{yin2020graph,lin2020capsule} to represent each image with object region features. We detect up to 20 object regions and extract their corresponding visual features by Faster R-CNN \cite{ren2015fastrcnn}. 
\begin{figure}[t] \centering \begin{overpic}[scale=0.27]{figures/table.png} \end{overpic} \vspace{-5pt} \caption{Results variation with different layer numbers of encoders on Fashion-MMT test set.} \vspace{-10pt} \label{fig:aba_layer} \end{figure} \begin{table*}[ht] \caption{Experimental results on EN$\Rightarrow$ZH translation task on the Fashion-MMT(C) dataset.} \vspace{-4pt} \label{tab:FAMMT_results} \small \centering \begin{tabular}{c |l| c| c c c|c c c} \toprule \multirow{2}{*}[-0.5ex]{Row} & \multirow{2}{*}[-0.5ex]{Method} & \multirow{2}{*}[-0.5ex]{Extra data} & \multicolumn{3}{c|}{Validation} & \multicolumn{3}{c}{Test} \\ \cmidrule{4-9} & & & BLEU@4 & METEOR & CIDEr & BLEU@4 & METEOR & CIDEr \\ \midrule 1 & Transformer \cite{vaswani:transformer} & - & 40.58 & 35.84 & 303.69 & 40.61 & 35.77 & 302.3 \\ 2 & Multimodal Graph \cite{yin2020graph} & - & 41.07 & 35.55 & 307.38 & 40.70 & 35.45 & 305.08 \\ 3 & \textbf{$\text{UPOC}^\text{2}$ (ours)} & - & \textbf{41.93} & \textbf{36.01} & \textbf{313.46} & \textbf{41.56} & \textbf{35.87} & \textbf{306.82} \\ \midrule 4 & Transformer \cite{vaswani:transformer} & Fashion-MMT(L) & 41.28 & 36.06 & 315.15 & 41.21 & 35.91 & 312.34 \\ 5 & Multimodal Graph \cite{yin2020graph} & Fashion-MMT(L) & 41.39 & 35.96 & 316.21 & 41.49 & 35.95 & 312.68 \\ 6 & \textbf{$\text{UPOC}^\text{2}$ (ours)} & Fashion-MMT(L) & \textbf{43.06} & \textbf{36.64} & \textbf{324.36} & \textbf{43.00} & \textbf{36.68} & \textbf{324.46} \\ \bottomrule \end{tabular} \vspace{-4pt} \end{table*} \begin{table}[t] \caption{EN$\Rightarrow$DE translation results on Multi30k dataset, where B denotes BLEU, and M denotes METEOR.} \vspace{-4pt} \label{tab:multi30k_results} \small \centering \begin{tabular}{l |c c| c c } \toprule \multirow{2}{*}[-0.5ex]{Model} & \multicolumn{2}{c|}{Test2016} & \multicolumn{2}{c}{Test2017} \\ \cmidrule{2-5} & B@4 & M & B@4 & M \\ \midrule INIT \cite{calixto-liu:Init} & 37.3 & 55.1 & - & - \\ DATTN \cite{calixto:doublyAtt} & 36.5 & 55.0 & 
- & - \\ SATTN \cite{delbrouck:gating} & 38.2 & 55.4 & - & - \\ Imagination \cite{elliott:Imagination} & 36.8 & 55.8 & - & - \\ $\mathrm{\text{VMMT}_\text{F}}$ \cite{calixto:latent} & 37.7 & 56.0 & 30.1 & 49.9 \\ Deliberation \cite{Ive2019distilling} & 38.0 & 55.6 & - & - \\ VGR \cite{yang2020visual_agreement} & - & - & 29.5 & 50.3 \\ UVR \cite{zhang:universal} & 36.9 & - & 28.6 & - \\ Multimodal Transformer \cite{yao2020mmt} & 38.7 & 55.7 & - \\ Multimodal Graph \cite{yin2020graph} & 39.8 & 57.6 & 32.2 & 51.9 \\ DCCN \cite{lin2020capsule} & 39.7 & 56.8 & 31.0 & 49.9 \\ \midrule \textbf{$\text{UPOC}^\text{2}$ (ours)} & \textbf{40.8} & \textbf{58.9} & \textbf{34.1} & \textbf{53.4} \\ \bottomrule \end{tabular} \vspace{-6pt} \end{table} \begin{table*}[t] \caption{EN$\Rightarrow$ZH translation results on Fashion-MMT dataset with different number of clean triplets for fine-tuning.} \label{tab:few_shot} \small \centering \begin{tabular}{c| l| c| c c c|c c c} \toprule \multirow{2}{*}[-0.5ex]{Row} & \multirow{2}{*}[-0.5ex]{Method} & \multirow{2}{*}[-0.5ex]{\begin{tabular}[c]{@{}c@{}}Fine-tuning \\ data (\#Triplets)\end{tabular}} & \multicolumn{3}{c|}{Validation} & \multicolumn{3}{c}{Test} \\ \cmidrule{4-9} & & & BLEU@4 & METEOR & CIDEr & BLEU@4 & METEOR & CIDEr \\ \midrule 1 & $\text{UPOC}^\text{2}$ & 5,000 & 38.33 & 34.06 & 281.18 & 38.13 & 33.92 & 276.74 \\ 2 & $\text{UPOC}^\text{2}$ & 10,000 & 40.32 & 34.92 & 298.31 & 39.95 & 35.01 & 293.69\\ 3 & $\text{UPOC}^\text{2}$ & 15,000 & 41.28 & 35.63 & 307.15 & 40.89 & 35.48 & 306.28\\ \midrule 4 & $\text{UPOC}^\text{2}$ & 36,000 (full) & 42.44 & 36.21 & 323.22 & 42.43 & 36.25 & 320.25 \\ 5 & Multimodal Graph \cite{yin2020graph} & 36,000 (full) & 41.07 & 35.55 & 307.38 & 40.70 & 35.45 & 305.08 \\ \bottomrule \end{tabular} \end{table*} \subsection{Pre-training for PMT} \label{sec:ablation} \noindent\textbf{Pre-training Tasks and Data.} In Table~\ref{tab:aba_task}, we experiment with different pre-training tasks and data to 
evaluate the effectiveness of our \upoc~~model for the PMT task. The row~1 stands for a non-pretrain baseline, which is directly trained with the fine-tuning task on Fashion-MMT(C) dataset. When first pre-trained with the MTLM task on the same dataset, and then fine-tuned for the PMT evaluation, the model is improved significantly by over 2 points on BLEU@4 as shown in row~2, which demonstrates the effectiveness of our pre-training and fine-tuning schemes on the PMT task. Gradually combining the ISM and ATTP tasks with the MTLM task improves the BLEU@4 score from 41.45 to 41.93, and the CIDEr score from 307.61 to 313.46, which indicates the effectiveness of ISM and ATTP tasks. Pre-training with the three tasks together finally achieves the best translation results on the Fashion-MMT(C) dataset. In row~5, we add the noisy triplet data from Fashion-MMT(L) dataset in the pre-training stage, and fine-tune the model on the clean Fashion-MMT(C) dataset, which achieves significant additional gains compared with row~4. It shows that our model can benefit from machine generated noisy data which are easy to acquire by simply translating the existing extensive mono-lingual product descriptions through machine translation system. We also evaluate our \upoc~~model in a lower-resource setting in Section~\ref{sec:few_shot}, where the model is pre-trained on the noisy Fashion-MMT(L) dataset and fine-tuned with less clean PMT data. \vspace{3pt} \noindent\textbf{Different Layer Numbers of Encoders.} In our \upoc~~model, there are three independent encoders to encode the intra-context information for each modality and language, and a cross encoder to capture the cross-modal and cross-lingual information. We set the layer number of independent encoders as 1 and that of the cross encoder as 3 when only using clean data in the pre-training. 
However, when adding more noisy data for pre-training, we study the translation performance with respect to different layer numbers in Figure~\ref{fig:aba_layer}. The left figure shows that increasing the layer number of the independent encoders leads to performance degradation, because the noise in the machine-generated target translations influences the language modeling of the target sentence encoder. Therefore, we drop the independent encoders in this setting and directly encode the concatenated input sequence with a unified cross encoder. In the right figure, we experiment with different numbers of cross encoder layers. It shows that the translation performance improves as the number of cross encoder layers increases, until the layer number exceeds 6. We also find that when the number of layers is less than 4, the model achieves even worse results than without additional pre-training data, which implies that an overly simple model architecture hinders the performance gains from data augmentation. \subsection{Comparison with the State-of-the-arts} To demonstrate the effectiveness of our proposed \upoc~~model in the PMT and MMT tasks, we compare our model with the following state-of-the-art NMT and MMT baselines: \parskip=0.1em \begin{itemize}[itemsep=1.5pt,partopsep=0pt,parsep=\parskip,topsep=1.5pt] \item Transformer \cite{vaswani:transformer}: The state-of-the-art text-only NMT model. \item Multimodal Graph \cite{yin2020graph}: The state-of-the-art MMT model, which constructs a multimodal graph with the source sentence and image. \end{itemize} Table~\ref{tab:FAMMT_results} and Table~\ref{tab:multi30k_results} report the translation results of different models on the Fashion-MMT(C) and Multi30k datasets, respectively. Even without any extra data for pre-training, our \upoc~~model outperforms the state-of-the-art MMT model on both datasets, which demonstrates the effectiveness of our pre-training and fine-tuning strategies for the PMT and MMT tasks.
Comparing rows~1 and 2 in Table~\ref{tab:FAMMT_results} shows that the conventional state-of-the-art MMT model yields limited improvements over the text-only NMT model when adapted to the PMT task. This further validates the challenge of the PMT task: the visual relevance between the product image and description is more complex than that in image captioning data, and it is difficult to capture by simply connecting the source entities with visual objects. However, our \upoc~~model achieves the best performance on both the PMT and conventional MMT tasks even without any extra data for pre-training, owing to the good cross-modal fusion abilities learned through multiple pre-training tasks. When pre-trained with additional noisy triplet data annotated by a machine translation system, our model achieves further gains, while the NMT and state-of-the-art MMT models receive limited improvement due to the influence of the noise. With the proposed multiple cross-modal cross-lingual pre-training tasks, our model is better able to mitigate the influence of noise in the training data than models based solely on the maximum likelihood estimation (MLE) objective over the ground-truth translation. \subsection{Lower-resource Learning Results} \label{sec:few_shot} Considering that manually annotated PMT triplets are rather rare in reality, while plenty of mono-lingual product descriptions are available on e-commerce platforms, we conduct experiments in a lower-resource setting where our \upoc~~model is pre-trained only with machine-generated noisy triplets in Fashion-MMT(L) and fine-tuned with fewer clean triplets in Fashion-MMT(C). Table~\ref{tab:few_shot} shows the translation results with different numbers of triplets used in the fine-tuning stage.
With only 15,000 annotated triplets (\textasciitilde40\% of the dataset) used for fine-tuning, our model achieves results comparable to the state-of-the-art MMT model (row~5), which is trained on the full set of Fashion-MMT(C). This demonstrates that our model can greatly alleviate the demand for annotated triplets and effectively benefit from the noisy data, which can be easily acquired from the existing large-scale mono-lingual product descriptions. The result of our model fine-tuned on the full set in row~4 is slightly inferior to our final result in row~6 of Table~\ref{tab:FAMMT_results}. This is because our final model is pre-trained on both the Fashion-MMT(L) and Fashion-MMT(C) datasets, while in the lower-resource setting, we pre-train the model only on the noisy Fashion-MMT(L) dataset. \begin{figure}[t] \centering \begin{overpic}[scale=0.49]{figures/results.png} \end{overpic} \caption{Yellow marks incorrect translations of the underlined source words; the correct translations produced by our model are colored in blue.} \label{fig:results} \end{figure} \begin{figure}[t] \centering \begin{overpic}[scale=0.43]{figures/attn.png} \end{overpic} \caption{Visualization of the attention map on the images and source sentence at each EN$\Rightarrow$ZH translation step.} \label{fig:attn} \end{figure} \subsection{Qualitative Results} In Figure~\ref{fig:results}, we visualize the EN$\Rightarrow$ZH translation results of our \upoc~~model and the compared baselines on the Fashion-MMT(C) dataset. Our model is pre-trained and fine-tuned on the same dataset without any extra data. In the first example, the text-only transformer and the state-of-the-art MMT model both literally translate the source words ``edge'' and ``top'', while in this example the word ``edge'' means ``style'' and the word ``top'' refers to the blouse. With the help of the cross-modal information, our model correctly understands and translates these words.
In the second example, the NMT model and the MMT model both misunderstand the source word ``grounded'' and translate it as ``foundation'', whereas our model correctly translates it with the meaning of ``sole''. In Figure~\ref{fig:attn}, we visualize the attention map on the source sentence and images at each translation generation step. Each generated target word aligns well with the corresponding source word, which demonstrates that our model learns a good bi-lingual semantic alignment for accurate translation. For the visual attention, the model attends more to the detailed partial images (IMG5-6) when translating the source words ``pearly button'', while it attends more to the full-body images when translating the word ``dress''. This shows that our model can focus on the related images when translating different source words. \section{Introduction} With the rapid development of e-commerce, more and more people shop online because of its convenience and efficiency. In order to better serve e-shoppers all over the world, it is necessary to translate e-commerce product descriptions into various languages. Therefore, the product-oriented machine translation (PMT) task \cite{zhou2018ikea, calixto2017ecommerce, calixto2017casestudy} has received growing research attention recently. \begin{figure}[t] \centering \begin{overpic}[scale=0.33]{figures/intro.png} \end{overpic} \caption{Examples of MMT and PMT tasks. The colored text marks visually relevant phrases. The underlined words are specialized jargon that is ambiguous and translated incorrectly by a current machine translation system.} \label{fig:intro} \end{figure} The domain specialty makes the PMT task more challenging than traditional machine translation problems. Firstly, product descriptions contain much specialized jargon, which can be ambiguous in different contexts.
It is hard to understand and translate these descriptions without the corresponding product images. For example, in Figure~\ref{fig:intro}, the word ``checks'' in the source product description means ``grids'' and the word ``tank'' means ``vest''. The meanings of these two words are different from their common meanings. Therefore, a current text-based machine translation system cannot translate them correctly. Secondly, although the visual context is beneficial to the translation, the relevance between the product image and the text description is more complex than that in the conventional multimodal machine translation (MMT) task. As shown in Figure~\ref{fig:intro}, in the MMT task, the text description explicitly describes the major objects in the image. However, in the PMT task, text descriptions are related to images in very different aspects, such as products, shapes, colors or even more subjective styles. Therefore, PMT models are required to dynamically extract different types of information from images to help the text translation. Last but not least, existing resources for PMT research are rather limited; for example, the latest PMT dataset IKEA \cite{zhou2018ikea} contains only 3,600 data samples in the $\langle$image, source sentence, target translation$\rangle$ triplet format. It is extremely expensive to manually annotate translations for product descriptions, as it requires the annotator to master multiple languages as well as knowledge about the products. The scarcity of PMT datasets further constrains the automatic translation quality of product descriptions. In this paper, we construct a large-scale bilingual product description dataset, Fashion-MMT, which is based on automatically crawled product images and English descriptions from an e-commerce website \cite{yang2020facad}. The dataset covers various fashion products from 78 categories, including dresses, shoes, pants, sunglasses, earrings and so on. We create two types of translation annotations.
The first type, called Fashion-MMT(L), contains 114,257 automatic Chinese translations of the original English product descriptions produced by a state-of-the-art text-based machine translation system. Although such annotations are easy to obtain at large scale, the translations are noisy. The second type, called Fashion-MMT(C), is a cleaned subset of Fashion-MMT(L) and contains 40,000 $\langle$image, English description, Chinese translation$\rangle$ triplets with manually annotated Chinese translations. In order to take advantage of the large-scale Fashion-MMT dataset to learn semantic alignments between product images and bilingual texts, we propose a Unified Product-Oriented Cross-lingual Cross-modal (\upoc~) model for pre-training and fine-tuning. The \upoc~~model is based on the multimodal transformer \cite{lu2019vilbert, li2019visualbert, chen2019uniter}, and we design three proxy tasks to effectively learn the interrelationships among images and bilingual texts in pre-training: the multi-modal translation language modeling (MTLM), image source-sentence matching (ISM), and product attribute prediction (ATTP) tasks. Experimental results on the Fashion-MMT(C) and Multi30k benchmark datasets show that even when pre-trained on the same dataset without any extra data, our model outperforms the state-of-the-art model by a large margin due to its better cross-modal fusion abilities, i.e., +5.9\% BLEU@4 score on Multi30k and +2.1\% on Fashion-MMT(C). When augmented with the large-scale noisy triplet data in Fashion-MMT(L), our model achieves further gains of +1.4 BLEU@4 and +17.6 CIDEr. The main contributions of our work are summarized as follows: \parskip=0.1em \begin{itemize}[itemsep=1pt,partopsep=0pt,parsep=\parskip,topsep=0.5pt] \item We propose the first large-scale PMT dataset, Fashion-MMT, to support PMT research and e-commerce applications. \item We design a unified pre-training and fine-tuning model \upoc~ and three cross-modal cross-lingual proxy tasks to enhance product machine translation.
\item Our model achieves the state-of-the-art results on both the Multi30k and Fashion-MMT datasets, and demonstrates its robustness to benefit from large-scale noisy data. \end{itemize} \section{$\text{UPOC}^\text{2}$: A Unified Pre-training and Fine-tuning Framework for PMT} In this section, we introduce our \upoc~~model and the proposed cross-modal cross-lingual pre-training tasks for the product description translation. Figure~\ref{fig:model} illustrates the architecture of our \upoc~~model. It is stacked with multiple multi-layer transformer encoders and follows a unified pre-training and fine-tuning scheme \cite{devlin2019bert,zhou2020unified,li2020oscar}. We first pre-train our model with three cross-modal cross-lingual pre-training tasks to effectively learn semantic alignments between images and bilingual texts, including multimodal translation language modeling (MTLM), image source-sentence matching (ISM) and product attribute prediction (ATTP). Then, we fine-tune the pre-trained model for the PMT task. \subsection{Model Architecture} \noindent\textbf{Input Representation.} The input sequence is the concatenation of Image-Source-Target triplet ($V$, $X$, $Y$), where $V$ is the global embedding sequence of product images, $X$ is the word embedding sequence of the source language product description and $Y$ is the word embedding sequence of the target language translation. For each $v_i$ in $V=\{v_0,\cdots,v_i,\cdots,v_N\}$, we extract the global visual feature for the $i$-th image via ResNet-101 \cite{he2016resnet}. Then, a linear layer is employed to map the visual feature to the same dimensionality as the word embedding. For the source and target sentences, we add a special start token ([SOS]) and a special end token ([EOS]) to the start and end of each sentence, and represent each token in the sentence with a word embedding vector learned from scratch. 
To distinguish the modality and language of each element in the whole input sequence ($V$, $X$, $Y$), we add a learnable modality embedding to each element indicating whether it belongs to the image modality, the source language sentence or the target language sentence. Additionally, we add positional embeddings to the tokens of the source and target language sentences. The input representations are illustrated in the right part of Figure~\ref{fig:model}. \vspace{2pt} \noindent\textbf{Multimodal Transformer.} Our \upoc~~model contains four multi-layer transformer \cite{vaswani:transformer} encoders, including three independent encoders and a cross encoder. We first employ the independent encoders, with $L_v$, $L_s$ and $L_t$ layers respectively, to encode the image sequence, source sentence and target sentence and capture their intra-context information. The outputs of the three encoders are then concatenated into a whole input sequence for the cross encoder. The cross encoder is also a multi-layer transformer encoder, which encodes the inter-context across different modalities and languages. We denote the number of cross encoder layers as $L_c$, the hidden size as $H$, and the number of self-attention heads as $A$. We share the parameters of the source sentence encoder and the target sentence encoder in pre-training, and separate them when adapting to the PMT task, because the target sentence encoder is learned as the translation generator in the fine-tuning stage. \subsection{Pre-training Tasks} To learn the cross-lingual correspondence between source and target language sentences and enhance the cross-modal fusion between images and texts for better translation, we pre-train our model with the three pre-training tasks described in this section. \noindent\textbf{Task \#1: Multimodal Translation Language Modeling (MTLM)} Cross-lingual alignment is important for machine translation.
Inspired by the multilingual translation language modeling (TLM) task proposed in XLM \cite{conneau2019xlm} and the multimodal masked language modeling (MLM) task generally used in V+L pre-training models \cite{chen2019uniter,li2020unicoder,lu2019vilbert}, we combine them for the multimodal multilingual scenario as the multimodal translation language modeling (MTLM) task for PMT. The goal of MTLM is to predict the masked words in both languages with the context information of the images, the surrounding words in the same language, and all the words in the other language. We randomly choose 15\% of the word tokens in both languages for prediction. Each chosen word is replaced with a special [MASK] token 80\% of the time, another random word 10\% of the time and the original word 10\% of the time. Note that the random words could be foreign words. The MTLM task takes the fused feature from the cross encoder as input and predicts the original word with an output softmax layer, which is tied with the input embeddings. We share the vocabulary and softmax prediction layer across languages. The training objective of MTLM can be expressed as follows: \begin{equation} \mathcal{L}_{MTLM} = - \mathbb{E}_{(V,X,Y)\sim\mathcal{D}} \log p(x_m,y_m|x_{\setminus m},y_{\setminus m},V;\Theta) \end{equation} where $\mathcal{D}$ denotes the whole training set, $x_m$ and $y_m$ denote the masked words in $X$ and $Y$, and $\Theta$ denotes all learnable parameters of the pre-training model. Note that $x_m$ and $y_m$ are not semantically aligned words. With the MTLM pre-training task, the model learns the semantic alignments between source and target language words. However, since the translation guidance from the images is much weaker than that from the source words, the model tends to ignore the visual modality with only the MTLM task for the target translation.
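The 15\% selection with 80/10/10 replacement described above follows BERT-style corruption. A minimal Python sketch (the function name and token handling are our own illustration, not the authors' code):

```python
import random

MASK = "[MASK]"

def mtlm_corrupt(tokens, vocab, mask_prob=0.15, seed=0):
    """Select ~15% of tokens for prediction; replace each chosen token
    with [MASK] 80% of the time, a random vocabulary word 10% of the
    time, and keep the original word the remaining 10% of the time."""
    rng = random.Random(seed)
    corrupted, target_positions = list(tokens), []
    for i, tok in enumerate(tokens):
        if rng.random() >= mask_prob:
            continue
        target_positions.append(i)  # the model must recover `tok` here
        r = rng.random()
        if r < 0.8:
            corrupted[i] = MASK
        elif r < 0.9:
            corrupted[i] = rng.choice(vocab)  # may be a foreign word
        # else: leave the original token in place
    return corrupted, target_positions
```

In MTLM this corruption is applied jointly to the source sentence $X$ and target sentence $Y$, and the prediction of each selected position is additionally conditioned on the images $V$.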
Therefore, we further propose to enhance the cross-modal fusion between images and texts via cross-modal pre-training tasks. \vspace{2pt} \noindent\textbf{Task \#2: Image Source-Sentence Matching (ISM)} The cross-modal matching task has been widely used in vision-language pre-training models and helps to learn the semantic alignment between the visual and textual modalities. Considering that the effective fusion between image and source sentence is important for the PMT/MMT task, we conduct semantic matching between the image and the source sentence. We pad the target language sentence $Y$ except for the start [SOS] token, which is further used to predict whether the images $V$ and source sentence $X$ are semantically matched. Specifically, the output of the [SOS] token is fed to a linear layer with the sigmoid function to predict a matching score $s(V, X)$ between 0 and 1. We construct a negative pair by replacing the source sentence in a matched pair with another one. Since different types of products are significantly different, to avoid an oversimplified ISM task, we choose hard negatives by randomly sampling negative source sentences from the set of sentences describing products in the same category as the original sentence. In this way, the model will focus more on the product details rather than the product category to determine whether the image and description are matched. The training objective of the ISM task can be expressed as follows: \begin{equation} \mathcal{L}_{ISM} = - \mathbb{E}_{(V,X)\sim\mathcal{D}}[l \log s(V, X; \Theta) + (1-l) \log (1-s(V, X; \Theta))] \end{equation} where $l\in\{0,1\}$ indicates whether the input image-source pair is a positive ($l=1$) or negative ($l=0$) sample. With the ISM pre-training task, the start [SOS] token of the target language sentence will encode rich fused information from the images and the source language sentence to benefit the target language translation.
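The ISM objective above reduces to a binary cross-entropy on the sigmoid-scored matching head, with negatives drawn from the same product category. A hedged sketch in plain Python (function and field names are hypothetical):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ism_loss(match_logit, label):
    """Binary cross-entropy on the matching score s(V, X).
    label = 1 for a matched image/source pair, 0 for a negative."""
    s = sigmoid(match_logit)
    return -(label * math.log(s) + (1 - label) * math.log(1.0 - s))

def hard_negative(sentence, sentences_by_category, rng=random):
    """Sample a negative source sentence describing a product from the
    same category, so the task cannot be solved by category alone."""
    pool = [x for x in sentences_by_category[sentence["category"]]
            if x["text"] != sentence["text"]]
    return rng.choice(pool)
```

Restricting the negative pool to the positive's category is what forces the model to compare product details (color, shape, decoration) rather than coarse category cues.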
\vspace{2pt} \noindent\textbf{Task \#3: Product Attribute Prediction (ATTP)} The product attributes describe important information about the commercial products, including the decorations, shapes, colors and styles of the product, as shown in Figure~\ref{fig:data}. To help the model extract this information from images for better translation, we propose to further enhance the semantic encoding of the [SOS] token by predicting the product attributes according to the images. Specifically, we extend the binary classification of the ISM task to a multi-class classification task for attribute prediction. We employ another linear layer with a softmax function to predict the product attributes based on the output of the [SOS] token. Since the ground-truth attributes in the FACAD dataset \cite{yang2020facad} are extracted from the source sentence (its nouns and adjectives), it is easy to predict the attributes with the source sentence as context. Therefore, for the ATTP task, we mask the source words that express product attributes, and force the model to predict the attributes relying on the images. We adopt the categorical cross-entropy as the training objective as follows: \begin{equation} \mathcal{L}_{ATTP} = - \mathbb{E}_{(V,X)\sim\mathcal{D}} \sum_{c\in C} \log p_c(V,X;\Theta) \end{equation} where $C$ denotes the ground-truth attribute set. \subsection{Fine-tuning in PMT task} After pre-training with the above three pre-training tasks, we adapt our pre-trained model to the PMT task. The PMT/MMT task generates the target translation with the context of the source sentence and images. It is a generation task; hence, the currently generated word cannot ``see'' future words. Therefore, we adapt our bi-directional pre-trained model to a uni-directional generator by constraining the self-attention mask of the target language sentence.
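One common way to realize the constrained self-attention mask just described (a UniLM-style seq2seq mask; the exact layout is our assumption, not spelled out in the text) is a boolean visibility matrix over the sequence [images | source | target]: image and source positions attend bidirectionally, while each target position sees only earlier target positions besides the full context.

```python
def pmt_attention_mask(n_img, n_src, n_tgt):
    """Boolean mask: visible[i][j] is True iff position i may attend to j.
    Sequence layout: [images | source tokens | target tokens].
    Images and source are mutually visible; target tokens are causal."""
    n_ctx = n_img + n_src
    n = n_ctx + n_tgt
    visible = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if j < n_ctx:
                visible[i][j] = True   # everyone sees images + source
            elif i >= n_ctx and j <= i:
                visible[i][j] = True   # causal within the target sentence
    return visible
```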
Similar to the MTLM pre-training task, we randomly choose 15\% of the target sentence words and replace them with the [MASK] token 80\% of the time, another random word 10\% of the time and the original word 10\% of the time. We predict the masked target word with the context information from all the images, all the source sentence words and the target sentence words before its position. The training objective can be expressed as follows: \begin{equation} \mathcal{L}_{PMT} = - \mathbb{E}_{(V,X,Y)\sim\mathcal{D}} \log p(y_m|y_{<m},X,V;\Theta) \end{equation} where $y_m$ denotes the masked words in $Y$, and $\Theta$ denotes all learnable parameters of the pre-trained model. At inference time, we first encode the images, the source sentence and the start [SOS] token of the target translation as the input. The [SOS] token encodes rich fused information from the source sentence and images due to the ISM and ATTP pre-training tasks. Then, we generate the target language translation word by word by feeding a [MASK] token and sampling the predicted token from the softmax output layer. At the next step, the previous [MASK] token is replaced by the generated token, and a new [MASK] token is fed to the model for the next word generation, until an [EOS] token is generated. \section{Related Works} \noindent\textbf{Multimodal Machine Translation.} The common visual world has been shown to be beneficial in bridging different languages \cite{chen:bilingual,sigurdsson:visual_grounding,chen2019fromwordstosents,huang2020ummt,su2019ummt}. Therefore, recent research works have paid more attention to the Multimodal Machine Translation (MMT) task \cite{elliott2016Multi30k}, which aims to leverage visual context to aid textual machine translation.
Calixto and Liu \cite{calixto-liu:Init} explore integrating the global image feature into the encoder or decoder of the NMT model, while works in \cite{caglayan:AttConcat, calixto:doublyAtt, helcl:attentionStrategy, delbrouck:gating, yao2020mmt} employ more fine-grained image context such as spatial image regions via an attention mechanism. Caglayan et al. \cite{caglayan:AttConcat} and Calixto et al. \cite{calixto:doublyAtt} apply concatenation or mean pooling on two independently attended contexts, the textual context of source words and the visual context of image regions, when predicting each target word. To avoid the noise of irrelevant information in the image, Delbrouck and Dupont \cite{delbrouck:gating} propose a gating mechanism to weight the importance of the textual and image contexts. Recently, more works \cite{Ive2019distilling, yang2020visual_agreement, yin2020graph, lin2020capsule} propose to represent the image with a set of object features via object detection, considering the strong correspondences between image objects and noun entities in the source sentence. Yang et al. \cite{yang2020visual_agreement} propose to jointly train source-to-target and target-to-source translation models to encourage the two models to share the same focus on object regions through visual agreement regularization. Yin et al. \cite{yin2020graph} propose to construct a multi-modal graph with image objects and source words according to an external visual grounding model for the alignment of multi-modal nodes. However, the visual relevance between images and descriptions in PMT is more complex than in MMT: it includes not only explicit correspondence such as products, but also implicit relevance such as shape, texture or even subjective style.
\vspace{2pt} \noindent\textbf{E-commerce Related Tasks.} Fashion-oriented tasks \cite{yang2020facad,zhang2020poet,zhang2020video_titling,guo2019fashion,han2017fashion_compatibility} have been recently studied due to the rapid development of e-commerce platforms. Han et al. \cite{han2017fashion_compatibility} propose to learn compatibility relationships among fashion items for fashion recommendation. Zhang et al. \cite{zhang2020video_titling} propose to generate a short title for user-generated videos. Yang et al. \cite{yang2020facad} and Zhang et al. \cite{zhang2020poet} propose to generate detailed descriptions for product images/videos. Guo et al. \cite{guo2019fashion} propose to retrieve expected products given a similar product image and a modifying text description. However, no previous studies focus on fashion product description translation. \vspace{2pt} \noindent\textbf{Transformer-based Pre-training.} Recently, self-supervised pre-training has witnessed great success on multiple down-stream tasks. Devlin et al. \cite{devlin2019bert} propose a monolingual pre-trained language model named BERT, which achieves the state-of-the-art results on eleven natural language processing tasks by simply pre-training with two self-supervised tasks. Afterwards, multilingual pre-trained language models \cite{devlin2019bert, conneau2019xlm, conneau2020xlmr} and multimodal pre-trained models \cite{lu2019vilbert, li2019visualbert, chen2019uniter, li2020oscar, zhou2020unified, li2020unicoder} are further proposed to support multilingual or multimodal scenarios. The multilingual pre-training models show strong abilities on zero-shot cross-lingual transfer, while the vision-language pre-training models outperform the state-of-the-art task-specific models on both vision-language understanding tasks, e.g., image retrieval \cite{faghri2018vsepp}, and generation tasks, e.g., image captioning \cite{vinyals2015showtell}.
However, there are few works \cite{ni2021m3p,zhou2021uc2} studying multimodal multilingual pre-training models, and none of them verifies the effectiveness on MMT or PMT tasks.
\section{Introduction} \subsection{Physical properties of naturally layered manganites} The tri-layer La$_{3-3x}$Sr$_{1+3x}$Mn$_{3}$O$_{10}$ manganites are members of the Ruddlesden-Popper (RP) series (La, Sr)$_{m+1}$Mn$_{m}$O$_{3m+1}$ of manganite perovskites, where m = 1, 2, 3... $\infty$\cite{fawcett1998structure}. The RP series manganites have a naturally layered structure with alternate stacking of m MnO$_2$ planes and rock-salt-type block layers (La, Sr)$_2$O$_2$ along the c-axis\cite{fawcett1998structure}. In the RP series manganites, the dimensionality depends on the number of perovskite layers and significantly affects the magnetic and transport properties of the system. In manganites, the introduction of a divalent atom in place of a trivalent atom causes the coexistence of Mn$ ^{3+} $ and Mn$ ^{4+} $ ions, which alters the Mn$ - $O bond length due to the Jahn-Teller (JT) effect\cite{dagotto2013nanoscale, o1971molecular, van1932theory, jahn1937stability}. The 3d orbital of Mn splits into two energy levels, t$ _{2g} $ and e$ _{g} $, in the presence of the crystal field and the JT effect. The doping of a divalent atom results in some empty e$ _{g} $-orbital energy levels, which facilitates the hopping of electrons responsible for the transport properties of tri-layer La$_{3-3x}$Sr$_{1+3x}$Mn$_{3}$O$_{10}$. The ferromagnetism (FM) and the metal-to-insulator transition in manganites are governed by the hopping of e$ _{g} $-orbital electrons between adjoining Mn$ ^{3+} $ and Mn$ ^{4+} $ ions through O. This mechanism is called the double exchange (DE) interaction\cite{zener1951interaction}. The most studied oxides of the RP series manganites are the bi-layer (m = 2) La$_{2-2x}$Sr$_{1+2x}$Mn$_{2}$O$_{7}$ and infinite-layer (m = $\infty$) La$_{1-x}$Sr$_{x}$MnO$_{3}$ compounds.
In particular, the three-dimensional (3D) infinite-layer La$_{1-x}$Sr$_{x}$MnO$_{3}$ is the most widely studied manganite perovskite due to its extraordinary thermal, electronic and magnetic properties\cite{paraskevopoulos2000magnetic, hemberger2002structural, urushibara1995insulator, rao1998colossal, szewczyk2000magnetocaloric, coey1999mixed, dagotto2013nanoscale, von1993giant, jin1994thousandfold, tokura1994giant, chmaissem2003structural}. The 3D La$_{1-x}$Sr$_{x}$MnO$_{3}$ manganite perovskites have a continuous stacking of the perovskite structure. The bi-layer La$_{2-2x}$Sr$_{1+2x}$Mn$_{2}$O$_{7}$ manganites consist of quasi-two-dimensional (Q2D) MnO$_2$ bi-layers separated by an insulating (La, Sr)$_2$O$_2$ layer\cite{kimura1996interplane} and have received growing interest due to their intriguing physical properties\cite{von1993giant, jonker1950ferromagnetic, chahara1993magnetoresistance, jin1994thousandfold, rao1997charge, kimura1996interplane, wang2004magnetic, moritomo1996giant, asano1997two, seshadri1997study, hirota2002spin}. Apart from these extraordinary magnetic and transport properties, the recently observed skyrmionic bubbles in manganite perovskites\cite{yu2014biskyrmion, nagai2012formation, yu2017variation, morikawa2015lorentz} have triggered renewed attention from researchers. A magnetic skyrmion is a topological particle having a local whirl of the spins\cite{nagaosa2013topological, fert2013skyrmions, chauhan2019multiple, yu2011near}. Topological skyrmion formation occurs due to the competition among different interactions, such as the Heisenberg (HI) interaction, the Dzyaloshinskii-Moriya (DM) interaction, the long-range dipole interaction and anisotropy\cite{yu2014biskyrmion, nagai2012formation, yu2017variation, morikawa2015lorentz, nagaosa2013topological, fert2013skyrmions, chauhan2019multiple, yu2011near}. In non-centrosymmetric magnetic materials, the DM and HI interactions are responsible for skyrmion formation\cite{nagaosa2013topological}.
On the other hand, in centrosymmetric magnetic materials, the long-range dipole interaction and anisotropy have been proposed to be responsible for the formation of the skyrmions\cite{yu2014biskyrmion}. However, it is not yet fully understood how the absence of the DM interaction can give rise to skyrmions in manganites. The tri-layer La$_{3-3x}$Sr$_{1+3x}$Mn$_{3}$O$_{10}$ manganites have Q2D MnO$_2$ tri-layers separated by a (La, Sr)$_2$O$_2$ layer, as shown in Fig. {\ref{XRD}(b)}. However, there are remarkably few studies on the transport and magnetic properties of tri-layer La$_{3-3x}$Sr$_{1+3x}$Mn$_{3}$O$_{10}$ manganites; indeed, there are only two reports on them\cite{mahesh1996effect, jung1999electrical}. Both reports contain only a very limited discussion of the structural, magnetic and transport properties of La$_{2.1}$Sr$_{1.9}$Mn$_{3}$O$_{10}$. The scarcity of studies on tri-layer La$_{3-3x}$Sr$_{1+3x}$Mn$_{3}$O$_{10}$ manganites stems from the inherent difficulty in synthesizing high-quality samples of the tri-layer compounds. The preparation of tri-layer manganite samples is challenging in comparison to the bi-layer La$_{2-2x}$Sr$_{1+2x}$Mn$_{2}$O$_{7}$ and infinite-layer La$_{1-x}$Sr$_{x}$MnO$_{3}$ manganites due to the difficulty in achieving a stable phase. A slight mismatch in the stoichiometric ratio of the precursors or a slight deviation from the required temperature cycle may result in the formation of the 3D infinite-layer or Q2D bi-layer manganite perovskites as impurities in the matrix of the tri-layer manganite. Hence, careful synthesis of the tri-layer La$_{3-3x}$Sr$_{1+3x}$Mn$_{3}$O$_{10}$ manganite is required to obtain a high-quality sample without impurities.
As it stands presently, the study of the tri-layer La$_{3-3x}$Sr$_{1+3x}$Mn$_{3}$O$_{10}$ manganite is important due to the following issues: (i) the magnetic and transport properties have not been explored rigorously, (ii) the exchange mechanism responsible for the spin-spin interaction underlying the ferromagnetism (FM) is not known and (iii) the recently observed skyrmionic bubbles in manganites\cite{yu2014biskyrmion, nagai2012formation, yu2017variation, morikawa2015lorentz} indicate that the tri-layer La$_{3-3x}$Sr$_{1+3x}$Mn$_{3}$O$_{10}$ may also be a potential skyrmion host material. These issues emphasize that a thorough magnetic analysis of the tri-layer La$_{3-3x}$Sr$_{1+3x}$Mn$_{3}$O$_{10}$ is required to establish a basic understanding of the magnetism and the exchange interactions involved in the tri-layer manganite. Here, the magnetic and electrical properties are explored with the help of high-precision magnetic and electrical measurements. In order to investigate the exchange mechanism responsible for the spin-spin interaction, a detailed critical analysis of the second-order phase transition has been carried out. \subsection{Theoretical background and methodology} The non-equilibrium dynamics of magnetic systems near the magnetic phase transition temperature (T$_C$) have received increasing attention\cite{halperin1967generalization, halperin1969scaling, halperin1972calculation, halperin1974renormalization, halperin1976renormalization, hohenberg1977theory, mazenko2008nonequilibrium}. At time t = 0, the system is quenched into the vicinity of T$_C$ from an equilibrium state away from T$_C$. This results in a critical slowing down due to the slow relaxation towards the new equilibrium state, driven by the sudden quench in the vicinity of T$_C$.
Generally, a Langevin-type equation is used to define the theoretical models for the critical dynamics of a system, governed by the Ginzburg-Landau theory for conserved or non-conserved order parameters\cite{halperin1972calculation, halperin1974renormalization, halperin1976renormalization, hohenberg1977theory, mazenko2008nonequilibrium}. Different universality classes are deduced from different theoretical models, which depend on the associated conservation laws and the model parameters n and d (n $-$ spin dimensionality and d $-$ space dimensionality). The critical analysis of a magnetic system yields vital information such as the universality class, the nature of the phase transition and the spin interactions of the system. According to the Landau (mean-field) theory, the magnetic free energy F$_M$ of a second-order magnetic system can be expressed as a power series in the order parameter M in the vicinity of T$_C$ as \begin{equation} F(M, T) = F(0)+\frac{a(T)}{2}M^2 +\frac{b(T)}{4}M^4 \\ +...-\mu_0{HM}, \end{equation} where a(T) and b(T) are Landau coefficients and $\mu{_0}$, H and M are the vacuum permeability, magnetic field and magnetization, respectively. The equilibrium condition is defined by the minimization of F$_M$, $\partial$F(M, T)/$\partial$M = 0, which gives the following equation of state for a magnetic system near T$_C$ \begin{equation} \mu_0{H} = {a(T)}M +{b(T)}M^3. \label{LT2} \end{equation} This mean-field approach fails in a critical region around T$_C$ characterized by the Ginzburg criterion\cite{ginzburg1961some}. The critical behavior of a magnetic system which undergoes a second-order phase transition can be investigated in detail through a series of correlated critical exponents\cite{stanley1971introduction}.
The divergence of the correlation length $\xi = \xi_0|(T-T_C)/T_C|^{-\nu}$ ($\nu$ $ - $ critical exponent) results in universal scaling laws for the spontaneous magnetization (M$_S$) and the inverse initial susceptibility ($\chi_0^{-1}$(T)) in the vicinity of the second-order phase transition at T$_C$. The M$_S$(T) is defined for T $<$ T$_C$ and characterized by the exponent $\beta$. The $\chi_0^{-1}$(T) is defined for T $>$ T$_C$ and characterized by the exponent $\gamma$. The isothermal magnetization (M-H) at T$_C$ is characterized by the exponent $\delta$. The M$_S$(T) and $\chi_0^{-1}$(T) show a power-law dependence on the reduced temperature $\epsilon$ = (T-T$ _{C} $)/T$ _{C} $, while the critical isotherm depends on $\mu_{0}$H\cite{stanley1971introduction, Fishertheory, stanley1999scaling}. The power laws below, above and at T$ _{C} $ are given by \begin{equation} {M_S(T) \propto (-\epsilon)^\beta}; \ {\epsilon<0},\ {T< T_C}, \label{Ms} \end{equation} \begin{equation} {\chi_0^{-1}(T) \propto (\epsilon)^\gamma}; \ {\epsilon>0}, \ {T>T_C} \label{chi} \end{equation} and \begin{equation} {M \propto (\mu_0H)^{1/\delta}; \ {\epsilon = 0}}, \ {T = T_C}. \label{Del} \end{equation} Generally, the critical exponents associated with M$_S$ and $\chi_0^{-1}$(T) should follow the Arrott-Noakes equation of state\cite{arrott1967approximate} in the asymptotic region $|\epsilon|<0.1$ \begin{equation} (\mu_0H/M)^{1/\gamma} = a\epsilon + bM^{1/\beta}, \label{Arrott Plot} \end{equation} where a and b are material constants. Using the scaling hypothesis, the magnetic equation of state, i.e., the relationship among the variables M($\mu_0$H, $\epsilon$), $\mu_0$H and T in the asymptotic critical region, is expressed as\cite{stanley1999scaling, kaul1985static} \begin{equation} M(\mu_0H,\epsilon) = {\epsilon^{\beta}}f_{\pm}(\mu_0H/\epsilon^{(\beta+\gamma)}), \label{scal} \end{equation} where $f_{+}$ is defined for $T > T_C$ and $f_{-}$ for $T < T_C$.
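As a practical illustration of how exponents of the form of Eqs. (\ref{Ms})-(\ref{Del}) are extracted, each power law becomes a straight line on a log-log scale, so a linear fit of log M$_S$ against log$(-\epsilon)$ yields $\beta$ directly. A minimal sketch on synthetic, noise-free data (the amplitude, temperature grid and $\beta$ value below are illustrative, not the measured TL-LSMO-0.3 values):

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = A * x**p on a log-log scale; return (A, p)."""
    p, log_A = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_A), p

# Synthetic spontaneous magnetization below T_C: M_S proportional to (-eps)^beta
T_C, beta_true = 103.0, 0.118            # illustrative values
T = np.linspace(94.0, 102.0, 30)         # temperatures below T_C
eps = (T - T_C) / T_C                    # reduced temperature (negative here)
M_S = 50.0 * (-eps) ** beta_true         # arbitrary amplitude

A, beta_fit = fit_power_law(-eps, M_S)
print(f"fitted beta = {beta_fit:.3f}")
```

The same linear fit applied to log $\chi_0^{-1}$ vs. log $\epsilon$ above T$_C$ yields $\gamma$, and to log M vs. log $\mu_0$H on the critical isotherm yields $1/\delta$.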
The magnetic equation of state emphasizes that if the choice of the values of the critical exponents is correct, then all the M-H curves should collapse onto two separate branches, one below and one above T$_C$. The Eq. (\ref{scal}) can be rewritten in terms of the renormalized magnetization m [m = M($\mu_0$H, $\epsilon$)${\epsilon}^{-\beta}$] and renormalized field h [h = $\mu_0$H$\epsilon^{-(\beta+\gamma)}$] as \begin{equation} m = f_\pm(h). \end{equation} Furthermore, according to statistical theory, the relations between the critical exponents that limit the number of independent exponents to two are given as\cite{stanley1971introduction, huang1987statistical, kadanoff1966scaling, widom1965surface}: \begin{equation} \alpha + 2\beta + \gamma = 2~~~~~~~~~{(Rushbrooke~scaling~relation)}~ \label{Rushbrooke} \end{equation} and \begin{equation} \delta = 1+\frac{\gamma}{\beta}.~~~~~~~~~~~~(Widom~scaling~relation)~ \label{Widom} \end{equation} In the present study, we have chosen the tri-layer compound La$_{2.1}$Sr$_{1.9}$Mn$_{3}$O$_{10}$ with x = 0.3 (hereafter referred to as TL-LSMO-0.3, where TL stands for tri-layer). The low-dimensional magnetism in TL-LSMO-0.3 is examined through critical analysis using different methods, which include the Kouvel-Fisher (KF) method, modified Arrott plot (MAP) scaling and critical isotherm analysis. Further confirmation of the low dimensionality of the magnetism in TL-LSMO-0.3 is obtained from renormalization group theory. We show that the layered manganite TL-LSMO-0.3 has special characteristics that cannot be explained by the 3D universality classes. \section{Experimental Details} A high-quality tri-layer La$_{2.1}$Sr$_{1.9}$Mn$_{3}$O$_{10}$ manganite sample was synthesized through the standard solid-state reaction technique. Stoichiometric amounts of the high-purity precursors La$_2$O$_3$, SrCO$_3$ and MnO$_2$ were ground together to achieve a homogeneous mixture.
The final mixture was then calcined at 1050 $^\circ$C for 48 h and sintered at 1400 $^\circ$C for 72 h after being pressed into pellets. The sample was reground after each calcination and sintering step, and the final sintering process was repeated to achieve a single phase. The room-temperature crystal structure and phase purity were determined by powder X-ray diffraction (PXRD) (Rigaku Miniflex 600 X-ray diffractometer with Cu-K$_{\alpha}$ radiation) followed by Rietveld refinement. The sample was found to have a tetragonal (\textit{I4/mmm}) structure with no impurity peak. The temperature- and field-dependent high-precision magnetic data were collected using a physical property measurement system (PPMS). The temperature-dependent zero-field-cooled (ZFC) and field-cooled (FC) magnetization data were obtained under a constant magnetic field of 10 mT in the temperature range 5 $ - $ 300 K. First-quadrant field-dependent M-H curves were obtained under a varying magnetic field of 0 $ - $ 7 T (field steps: $\Delta${$\mu_0$H} = 20 mT from 0 to 500 mT and $\Delta${$\mu_0$H} = 200 mT from 500 mT to 7 T) in the temperature range of 90 to 120 K with $\Delta$T = 1 K. The resistivity of TL-LSMO-0.3 was collected in the temperature range of 10 to 300 K using the PPMS. \begin{figure}[htb!] \centering \includegraphics[trim=0.1mm 0.1mm 0.2mm 0.2mm,clip,width=\linewidth]{Figure1-a.jpg} \includegraphics[trim=0.1mm 0.1mm 0.2mm 0.2mm,clip,width=\linewidth]{Figure1-bc.jpg} \caption{(a) The lower panel shows the PXRD of the tri-layer TL-LSMO-0.3 synthesized under different conditions. The PXRD pattern in green color (pattern 4) corresponds to the pure-phase TL-LSMO-0.3. The upper panel and inset show the Rietveld refinement of the single-phase TL-LSMO-0.3 and La$ _{0.7} $Sr$ _{0.3} $MnO$_{ 3} $, respectively. (b) and (c) Unit-cell crystal structures of the tri-layer La$_{2.1}$Sr$_{1.9}$Mn$_{3}$O$_{10}$ and infinite-layer La$ _{0.7} $Sr$ _{0.3} $MnO$_{ 3} $ manganites.
The MnO$_6$ octahedra in the crystalline bulk are denoted in cyan color. Different atoms are shown as spheres of different colors [(La, Sr): light green, Mn: blue, O: red]. The perovskite structure ABO$_3$ [A $\rightarrow$ (La, Sr) and B $\rightarrow$ Mn] consists of a MnO$_6$ octahedron at the center, with the (La, Sr) atoms lying at the corners of the perovskite unit.} \label{XRD} \end{figure} The lower panel of Fig. \ref{XRD}(a) shows the PXRD patterns of samples prepared under different conditions, labeled 1, 2, 3 and 4. The upper panel and inset of Fig. \ref{XRD}(a) show the Rietveld refinement of the pure-phase TL-LSMO-0.3 (pattern 4) and of the infinite-layer La$ _{0.7} $Sr$ _{0.3} $MnO$_{ 3} $, respectively. Among all the PXRD patterns, patterns 1, 2 and 3 were not single phase but showed impurity peaks corresponding to the bi-layer and infinite-layer phases. The impurity formation is due either to a mismatch of the stoichiometric ratio and/or to the lack of a controlled heating rate and heating cycle. The $\star$ and $\triangledown$ symbols correspond to the bi-layer and infinite-layer impurities, respectively. The low dimensionality of TL-LSMO-0.3 can be understood by comparing its crystal structure with that of the infinite-layer La$ _{0.7} $Sr$ _{0.3} $MnO$_{ 3} $ manganite. Figures \ref{XRD}(b) and (c) show the crystal structures of TL-LSMO-0.3 and La$ _{0.7} $Sr$ _{0.3} $MnO$_{ 3} $, respectively. The crystal structure of TL-LSMO-0.3 shows a layered characteristic in which a rock-salt-type structure separates three consecutive perovskite layers; these perovskite layers are made of a two-dimensional (2D) network of Mn$ - $O bonds and form Q2D MnO$ _{2} $ planes\cite{asano1997two}.
In contrast, the crystal structure of the infinite-layer La$ _{0.7} $Sr$ _{0.3} $MnO$_{ 3} $ has a continuous stacking of the 3D perovskite layers, which is made of a 3D network of Mn$ - $O bonds and is noticeably different from that of TL-LSMO-0.3\cite{asano1997two}. Hence, the dimensionality of the RP-series manganites can be modified by altering the number of perovskite layers\cite{mahesh1996effect}. \section{Results and analysis} The FC magnetization and resistivity curves for TL-LSMO-0.3 are shown in Fig. \ref{FC_ZFC}(a). The T$_C$ $\approx$ 103 K of the sample is determined from the minimum of the derivative of the FC curve, shown in inset (1) of Fig. \ref{FC_ZFC}(a)\cite{mahesh1996effect, jung1999electrical}. Another transition, T$ ^{*} $, appears at $\approx$ 263 K in addition to the first transition at 103 K. Generally, in the case of infinite-layer manganites, the magnetization above T$ _{C} $ is zero, but in the present sample the magnetization is non-zero above T$_C$. Similar results have been observed in bi-layer La$_{2-2x}$Sr$_{1+2x}$Mn$_{2}$O$_{7}$ manganites\cite{kimura1996interplane, wang2005magnetic, argyriou1997unconventional}. \begin{figure}[htb!] \centering \includegraphics[trim=0.1mm 0.1mm 0.2mm 0.2mm,clip,width=\linewidth]{Figure2-ab.jpg} \caption{(a) FC magnetization of TL-LSMO-0.3 in a constant magnetic field of 10 mT. The T$_C$ $\approx103$ K is determined from the minimum of the derivative of the magnetization (dM/dT). The resistivity of TL-LSMO-0.3 shows a metal-insulator transition (T$_{MI}$) at $\approx{230}$ K, a step at $\approx{103}$ K corresponding to the magnetic phase transition T$_{C}$ and an anomaly at $\approx{263}$ K represented by T$ ^{*} $.
(b) M-H curves for TL-LSMO-0.3 in the temperature range 90 to 120 K with $\Delta$T = 1 K and a maximum magnetic field of 7 T.} \label{FC_ZFC} \end{figure} The non-zero magnetization above T$_C$ in bi-layer La$_{2-2x}$Sr$_{1+2x}$Mn$_{2}$O$_{7}$ manganites has been explained by 2D short-range FM ordering in their PM state\cite{kimura1996interplane, wang2005magnetic, argyriou1997unconventional}. The temperature-dependent resistivity curve for TL-LSMO-0.3 shows a broad peak at $\approx{230}$ K corresponding to the metal-insulator transition (T$_{MI}$), along with a step-like behavior at T$ _{C} $ $\approx{103}$ K [inset (2)] and a small anomaly at $\approx{263}$ K corresponding to the second transition T$ ^{*} $. Generally, in manganites, T$_{MI}$ coincides with the T$ _{C} $ of the system. In contrast, TL-LSMO-0.3 shows a significant difference between the metal-insulator transition T$_{MI}$ and the magnetic transition T$ _{C} $. The metallic behavior of TL-LSMO-0.3 below T$ _{C} $ can be explained by the double-exchange (DE) mechanism, in which a large number of carriers are available\cite{zener1951interaction, jonker1950ferromagnetic}. On the other hand, the metallic behavior above T$ _{C} $ is due to the formation of FM clusters and can be explained by a percolation mechanism, which describes metallic behavior even in the absence of long-range magnetic ordering. The transition from the metallic to the insulating phase occurs due to the formation of polarons caused by distortions of the MnO$ _{6} $ octahedra\cite{millis1995double}. Figure \ref{FC_ZFC}(b) presents the M-H data of TL-LSMO-0.3 in the critical region 90 K $\leq$ T $\leq$ 120 K, where $\Delta$T = 1 K. \section{Entropy analysis} \subsection{Universal curve for second-order phase transition} This section presents a systematic study of the universal curve for the magnetic entropy change ({$\Delta{S_M}$}) to confirm the order of the magnetic phase transition in TL-LSMO-0.3.
A universal curve can be constructed for the field-dependent $\Delta{S_M}$ only in the case of a second-order phase transition\cite{VFranco, Romero-Muniz, franco2007field, Bonilla, guetari2016structure}. The existence of a universal curve is based on the formalism that the equivalent points of the different {$\Delta{S_M}$} curves calculated for different magnetic fields should collapse onto a single curve\cite{VFranco, Romero-Muniz, franco2007field, Bonilla, guetari2016structure}. If TL-LSMO-0.3 shows a second-order magnetic phase transition, all the $\Delta{S_M}$ curves will collapse onto a single universal curve. Before starting the analysis of the scaling behavior of {$\Delta{S_M}$}, we have to calculate the temperature variation of {$\Delta{S_M}$}. The {$\Delta{S_M}$} can be calculated by using the M-H data and Maxwell's thermodynamic relation given below\cite{Romero-Muniz, bingham2009magnetocaloric} \begin{eqnarray} \Delta{S_M}(\mu{_0}H, T) = S(\mu{_0}H, T) - S(0, T),\nonumber\\ \Delta{S_M}(\mu{_0}H, T) = \int_{0}^{\mu{_0}H_{max}}\Bigg(\frac{\partial{S(\mu{_0}H, T)}}{\partial{(\mu{_0}H)}}\Bigg)_{T}d({\mu{_0}H}).~~~~~ \end{eqnarray} Using Maxwell's thermodynamic relation \begin{equation} \Bigg(\frac{\partial{S(\mu{_0}H, T)}}{\partial{(\mu{_0}H)}}\Bigg)_{T} = \Bigg(\frac{\partial{M}(\mu{_0}H,T)}{\partial{T}}\Bigg)_{\mu_{0}H}.
\end{equation} Now, using the above relation, {$\Delta{S_M}$} can be expressed as \begin{eqnarray} \Delta{S_M}(\mu{_0}H, T) = \int_{0}^{\mu{_0}H_{max}}\Bigg(\frac{\partial{M}(\mu{_0}H,T)}{\partial{T}}\Bigg)_{\mu_{0}H}d({\mu{_0}H}).\nonumber\\ \label{Entropy} \end{eqnarray} Using the isothermal M-H curves, the $\Delta{S_M}$ in the presence of a magnetic field can be calculated numerically from the following equation \begin{eqnarray} \Delta{S_M}\Big(\frac{T_{1} + T_{2}}{2}\Big) = \frac{1}{(T_{2} - T_{1})} \Bigg[ \int_{0}^{\mu{_0}H_{max}}\Big({M}(\mu{_0}H,T_{2})\nonumber\\ - {M}(\mu{_0}H,T_{1})\Big)_{\mu_{0}H}d({\mu{_0}H})\Bigg].~~~~~ \label{Entr} \end{eqnarray} Figure \ref{Entropy1}(a) shows the variation of {$\Delta{S_M}$} with temperature; all the curves show a maximum at T$_C$, and the value of the {$\Delta{S_M}$} peak increases with the magnetic field. In order to construct the universal curve, all the {$\Delta{S_M}$} curves were first normalized by their respective maximum entropy change, $\Delta{S_M}(T, \mu_{0}H)$/$\Delta{S_M^{peak}}$(T, $\mu_{0}$H). Next, the temperature axis is rescaled by choosing reference temperatures such that {$\Delta{S_M}(T_{r})$/$\Delta{S_M^{peak}}$} $ \geq {l}$, where T$_r$ is the reference temperature and $ l $ (0 $ < $ $ l $ $\leq$ 1) is an arbitrary constant. Although $ l $ can take any value between 0 and 1, a large value of $ l $, i.e., a reference temperature chosen very close to $\Delta{S_M^{peak}}$, may result in large numerical errors due to the limited number of points. We define the new rescaled temperature axis ($\theta$) as \begin{equation} \theta = \begin{cases} -(T-T{_C})/(T_{r_1}-T_{C}), \ T \leq T_{C} \\ (T-T{_C})/(T_{r_2}-T_{C}), \ T > T_{C}, \end{cases} \label{Entropyscaling} \end{equation} where T$_{r_1}$ and T$_{r_2}$ are the two reference temperatures for T $\leq$ T$_{C}$ and T $ > $ T$_{C}$, respectively.
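The finite-difference evaluation of the entropy change and the rescaled temperature axis can be sketched as follows; this is an illustrative implementation on made-up isotherms, not the analysis code used for the TL-LSMO-0.3 data:

```python
import numpy as np

def delta_SM(H, M_T1, M_T2, T1, T2):
    """Discrete entropy change at (T1 + T2)/2: approximate dM/dT by the
    finite difference of two isotherms and integrate over the field with
    the trapezoidal rule."""
    dM_dT = (M_T2 - M_T1) / (T2 - T1)
    return float(np.sum(0.5 * (dM_dT[1:] + dM_dT[:-1]) * np.diff(H)))

def rescaled_theta(T, T_C, Tr1, Tr2):
    """Rescaled temperature axis theta; equals -1 at Tr1 and +1 at Tr2."""
    T = np.asarray(T, dtype=float)
    return np.where(T <= T_C,
                    -(T - T_C) / (Tr1 - T_C),
                    (T - T_C) / (Tr2 - T_C))

# Two made-up isotherms of a ferromagnet (moment decreases with T)
H = np.linspace(0.0, 7.0, 50)            # field (T)
M_T1 = 40.0 * np.tanh(H + 0.5)           # M at T1 (arb. units)
M_T2 = 35.0 * np.tanh(H + 0.3)           # M at T2 > T1
dS = delta_SM(H, M_T1, M_T2, T1=100.0, T2=102.0)
print(f"Delta S_M at 101 K = {dS:.2f} (arb. units)")  # negative for an FM
```

Applying `rescaled_theta` to each normalized entropy curve with the reference temperatures chosen at a fixed fraction of the peak is what produces the universal curve.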
The reference temperatures for T $\leq$ T$_{C}$ and T $ > $ T$_{C}$ are selected such that $\Delta{S_M(T_{r_1})}$/$\Delta{S_M^{peak}}$ = $\Delta{S_M}(T_{r_2})$/$\Delta{S_M^{peak}}$ = 0.7. The universal curve for TL-LSMO-0.3 is plotted in Fig. \ref{Entropy1}(b), which shows the collapse of all the $\Delta{S_M}$ curves calculated at different fields onto a single curve. The formation of a single curve for TL-LSMO-0.3 confirms the second-order phase transition around T$ _{C} $. \begin{figure}[htb!] \centering \includegraphics[trim=0.3mm 0.3mm 0.3mm 0.3mm,clip,width=\linewidth]{Figure3-ab.jpg} \caption{(a) The evolution of $\Delta{S_M}$ vs. T at different fields (0.5 $\rightarrow$ 7 T) determined from the M-H curves [Fig. \ref{FC_ZFC}(b)], which shows a continuous nonmonotonic change of $\Delta{S_M}$ around T$_C$. (b) The universal curve for TL-LSMO-0.3 shows the collapse of all $\Delta{S_M}$ curves onto a single curve, which is a characteristic of a second-order phase transition.} \label{Entropy1} \end{figure} \begin{figure*}[htb!] \centering \begin{tabular}{cccc} \includegraphics[trim=0.1mm 0.1mm 0.1mm 0.1mm,clip,width=85mm]{Figure4-a.jpg} \includegraphics[trim=0.1mm 0.1mm 0.1mm 0.1mm,clip,width=85mm]{Figure4-b.jpg}\\ \includegraphics[trim=0.1mm 0.1mm 0.1mm 0.1mm,clip,width=85mm]{Figure4-c.jpg} \includegraphics[trim=0.1mm 0.1mm 0.1mm 0.1mm,clip,width=85mm]{Figure4-d.jpg} \end{tabular} \caption{Isotherms of M$^2$ vs. ${\mu{_0}}$H/M for 94 K $\leq$ T $\leq$ 111 K ($|\epsilon| \leq 0.1$) with (a) the Arrott plot (mean-field theory), which shows non-linear, i.e., non-mean-field-type, behavior even at higher magnetic fields, with positive slopes corresponding to a second-order phase transition, (b) MAPs for the 3D-Ising model with short-range interaction, (c) MAPs for the 2D-Ising model with long-range interaction and (d) MAPs of the M-H curves with $\beta$ = 0.118 and $\gamma$ = 1.681 for TL-LSMO-0.3. Solid lines correspond to fits to Eq.
(\ref{Arrott Plot}).} \label{MAP} \end{figure*} \section{Critical analysis} One way to determine the T$ _{C} $ and the critical exponents ($\beta$ and $\gamma$) is the Arrott analysis of the data\cite{domb2000phase}. Generally, if a magnetic system belongs to the mean-field ordering, i.e., $\beta$ = 0.5, $\gamma$ = 1, then the Arrott plot (M$^2$ vs. ${\mu{_0}}$H/M) should result in parallel lines, and the isothermal curve at T$_C$ should pass through the origin\cite{domb2000phase}. The Arrott plot (M$^2$ vs. ${\mu{_0}}$H/M) for TL-LSMO-0.3 does not yield parallel lines around T$_C$, as shown in Fig. \ref{MAP}(a), which implies that the interaction in TL-LSMO-0.3 is not of the mean-field type. The positive slope in the Arrott plot confirms the second-order magnetic phase transition in TL-LSMO-0.3\cite{banerjee1964generalised}. Neither the short-range 3D-Ising model ($\beta$ = 0.325, $\gamma$ = 1.24) nor the long-range 2D-Ising model ($\beta$ = 0.289, $\gamma$ = 1.49) produces parallel lines, as shown in Figs. \ref{MAP}(b) and \ref{MAP}(c). Therefore, one can conclude that these two models cannot describe the critical behavior of TL-LSMO-0.3. Hence, we reanalyzed the magnetization isotherms of TL-LSMO-0.3 by using the Arrott-Noakes equation of state defined in the critical region, Eq. (\ref{Arrott Plot})\cite{arrott1967approximate}. The modified Arrott plots (MAPs) M$^{1/\beta}$ vs. (${\mu{_0}}$H/M)$^{1/\gamma}$ for the M-H isotherms of TL-LSMO-0.3 in the asymptotic region ($|\epsilon|<0.1$) are shown in Fig. \ref{MAP}(d). The values of the exponents $\beta$ and $\gamma$ are chosen such that the isotherms of the MAPs are as close to parallel lines as possible. The best fit of Eq. (\ref{Arrott Plot}) to the MAPs for TL-LSMO-0.3 in the temperature range 95 K $\leq$ T $\leq$ 111 K and field range 0.5 T $\leq$ $\mu_0$H $\leq$ 7 T yields the exponents $\beta$ = 0.118 $\pm$ 0.004, $\gamma$ = 1.681 $\pm$ 0.006 and T$_C$ = 103.54 $\pm$ 0.03 K.
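The exponent search behind the MAPs can be automated by measuring how parallel the isotherms become for a trial ($\beta$, $\gamma$) pair. A minimal sketch on synthetic isotherms generated from the Arrott-Noakes form (all numbers below are illustrative, not the TL-LSMO-0.3 data):

```python
import numpy as np

def map_slope_spread(H_rows, M_rows, beta, gamma):
    """Relative spread of the isotherm slopes in the modified Arrott plot
    (mu0 H / M)^(1/gamma) vs. M^(1/beta); it is minimal for the correct
    exponents, where all isotherms become parallel straight lines."""
    slopes = []
    for H, M in zip(H_rows, M_rows):
        x = M ** (1.0 / beta)
        y = (H / M) ** (1.0 / gamma)
        slopes.append(np.polyfit(x, y, 1)[0])
    slopes = np.asarray(slopes)
    return float(np.std(slopes) / abs(np.mean(slopes)))

# Synthetic isotherms obeying the Arrott-Noakes equation of state
# (H/M)^(1/gamma) = a*eps + b*M^(1/beta) with beta = 0.5, gamma = 1.0
a, b = 1.0, 0.1
M = np.linspace(1.0, 5.0, 40)
eps_list = [-0.05, 0.0, 0.05]
H_rows = [M * (a * e + b * M ** 2) for e in eps_list]
M_rows = [M] * len(eps_list)

good = map_slope_spread(H_rows, M_rows, 0.5, 1.0)   # correct exponents
bad = map_slope_spread(H_rows, M_rows, 0.25, 1.5)   # wrong trial exponents
print(good < bad)
```

In practice such a parallelism measure is minimized over ($\beta$, $\gamma$) iteratively, starting from trial values, until the MAP isotherms form a set of parallel straight lines.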
Next, we determine the value of the exponent $\delta$ using the M-H curve at T$_C$ and Eq. (\ref{Del}), as shown in Fig. \ref{Delta}. The value $\delta$ = 14.668 $\pm$ 0.002 is obtained for TL-LSMO-0.3 by fitting the isotherm at T$_C$ to Eq. (\ref{Del}). This value of $\delta$ is larger than the $\delta$ values of the 3D universality classes defined for short-range interactions, see Table \ref{Theory}. The exponents $\beta$, $\gamma$ and $\delta$ for TL-LSMO-0.3 should satisfy Eq. (\ref{Widom})\cite{widom1965surface}. The value $\delta$ = 15.245 is obtained by using the values of $\beta$ and $\gamma$ determined from the MAPs in Eq. (\ref{Widom}). The $\delta$ value obtained from Eq. (\ref{Widom}) is consistent with the $\delta$ value determined from the critical isotherm. Hence, the exponents $\beta$ and $\gamma$ are found to satisfy the Widom scaling relation defined in Eq. (\ref{Widom}). \begin{figure}[htb!] \centering \includegraphics[trim=0.3mm 0.3mm 0.3mm 0.3mm,clip,width=\linewidth]{Figure5.jpg} \caption{ M-H curve at T${_C}$ = 103 K; the inset shows the log-log plot of the curve. The solid line represents the fit to Eq. (\ref{Del}), which yields $\delta$ = 14.668 $\pm$ 0.002. } \label{Delta} \end{figure} Further, the exponents $\beta$ and $\gamma$ have been determined more accurately by the Kouvel-Fisher (KF) method\cite{kouvel1964detailed}. The M$_S$ and $\chi_0^{-1}$ are determined from the intersections with the axes M$^{1/\beta}$ and (${\mu{_0}}$H/M)$^{1/\gamma}$, respectively. The intercepts are obtained from linear extrapolation in the MAPs plotted for the short-range 2D-Ising model because of the nearly parallel behavior of the isotherms, as displayed in Fig. \ref{2DSR}. \begin{figure}[htb!] \centering \includegraphics[trim=0.3mm 0.3mm 0.3mm 0.3mm,clip,width=\linewidth]{Figure6.jpg} \caption{Isotherms of M$^{1/\beta}$ vs.
(${\mu{_0}}$H/M)$^{1/\gamma}$ for 94 K $\leq$ T $\leq$ 111 K ($|\epsilon| \leq 0.1$) with the short-range 2D-Ising model.} \label{2DSR} \end{figure} The variation of M$_S$ and $\chi_0^{-1}$ with temperature for TL-LSMO-0.3 is shown in Fig. \ref{KFplot}(a). The solid lines in Fig. \ref{KFplot}(a) represent the fits to M$_S$ and $\chi_0^{-1}$ using Eqs. (\ref{Ms}) and (\ref{chi}), respectively. The KF method has the following form, which is obtained from Eq. (\ref{Arrott Plot}) in the limit H $\rightarrow$ 0 for T $ < $ T$_C$ and T $ > $ T$_C$ \begin{equation} \frac{M_S(T)}{dM_S(T)/dT} = \frac{T-T_C}{\beta} \label{KFB} \end{equation} and \begin{equation} \frac{\chi_0^{-1}(T)}{d\chi_0^{-1}(T)/d(T)} = \frac{T-T_C}{\gamma}. \label{KFG} \end{equation} The values of the exponents $\beta$ and $\gamma$ can be determined from the slopes 1/$\beta$ and 1/$\gamma$ obtained from the linear variation of ${M_S}({dM_S(T)/dT})^{-1}$ vs. T and ${\chi_0^{-1}}({d\chi_0^{-1}/dT})^{-1}$ vs. T, respectively. The intersection with the temperature axis yields T$_C$, as shown in Fig. \ref{KFplot}(b). Solid lines in Fig. \ref{KFplot}(b) represent the fits to ${M_S}({dM_S(T)/dT})^{-1}$ vs. T and ${\chi_0^{-1}}({d\chi_0^{-1}/dT})^{-1}$ vs. T using Eqs. (\ref{KFB}) and (\ref{KFG}), respectively. The KF method yields the exponents $\beta$ = 0.120 $\pm$ 0.003 with T$_C$ = 103.24 $\pm$ 0.01 K and $\gamma$ = 1.710 $\pm$ 0.005 with T$_C$ = 103.12 $\pm$ 0.02 K. These results are consistent with the values of the exponents obtained from the MAPs. \begin{figure}[htb!] \centering \includegraphics[trim=0.3mm 0.3mm 0.3mm 0.3mm,clip,width=\linewidth]{Figure7.jpg} \caption{(a) Temperature variation of the spontaneous magnetization M$_S$(T, 0) (left) and the inverse susceptibility ${\chi}_0^{-1}$(T, 0) (right). (b) KF plots for ${M_S}({dM_S(T)/dT})^{-1}$ (left) and ${\chi_0^{-1}}({d\chi_0^{-1}/dT})^{-1}$ (right).} \label{KFplot} \end{figure} Further, we confirm that the obtained exponents are consistent above and below T$_C$.
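The KF linearization of Eq. (\ref{KFB}) can be sketched as follows; the synthetic M$_S$(T) below uses assumed values of $\beta$ and T$_C$ purely for illustration:

```python
import numpy as np

def kouvel_fisher(T, Y):
    """KF ordinate Y/(dY/dT): for a power law Y ~ |T - T_C|^p this equals
    (T - T_C)/p exactly, so a linear fit gives slope 1/p and a
    temperature-axis intercept at T_C."""
    return Y / np.gradient(Y, T)

# Synthetic spontaneous magnetization below T_C
T_C, beta = 103.0, 0.120
T = np.linspace(94.0, 102.5, 200)
M_S = 55.0 * ((T_C - T) / T_C) ** beta

kf = kouvel_fisher(T, M_S)
slope, intercept = np.polyfit(T, kf, 1)
beta_est = 1.0 / slope
T_C_est = -intercept / slope
print(f"beta ~ {beta_est:.3f}, T_C ~ {T_C_est:.2f} K")
```

The same construction applied to $\chi_0^{-1}$(T) above T$_C$, Eq. (\ref{KFG}), yields $\gamma$ together with an independent estimate of T$_C$.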
One can also deduce the critical exponents of a magnetic sample using scaling theory, which states that, for appropriate values of the critical exponents ($\beta$ and $\gamma$), the plot of the scaled magnetization (M${\epsilon}^{-\beta}$) vs. the renormalized field ($\mu_0$H$\epsilon^{-(\beta+\gamma)}$) should fall onto two separate curves: one for T $ < $ T$_C$ and the other for T $ > $ T$_C$. Figure \ref{Scaling} presents M${\epsilon}^{-\beta}$ as a function of $\mu_0$H$\epsilon^{-(\beta+\gamma)}$ below and above T$_C$ for TL-LSMO-0.3. One can see that all the magnetization curves fall onto two separate curves below and above T$_C$ when the values of T$_C$ and the exponents in Eq. (\ref{scal}) are chosen as T$_C$ = 103.17 $\pm$ 0.01 K, $\beta$ = 0.121 $\pm$ 0.001, {$\gamma$}$^{'}$ = 1.710 $\pm$ 0.005 for T $ < $ T$_C$ and {$\gamma$} = 1.702 $\pm$ 0.002 for T $ > $ T$_C$. Further, we have plotted m$^2$ vs. h/m and again found that all the data collapse onto two separate curves below and above T$ _{C} $, respectively. This confirms that the critical exponents are reliable and unambiguous and that the interactions get renormalized appropriately following the scaling equation of state in the critical regime. \begin{figure}[htb!] \centering \includegraphics [trim=0.3mm 0.3mm 0.3mm 0.3mm,clip,width=\linewidth]{Figure8.jpg} \caption{(a) Scaling of the M-H curves below and above T$_C$ for TL-LSMO-0.3 in the critical region $\epsilon$ $\leq$ 0.1 using Eq. (\ref{scal}). Solid lines show the best-fit polynomials. The collapse of all the M-H curves onto single curves below and above T$_C$ separately confirms the validity of the obtained exponents. (b) m$^2$ vs. h/m below and above $T_C$ also shows the collapse of the curves, again validating the obtained results.} \label{Scaling} \end{figure} \begin{table*}[htb] \caption{Comparison of the critical exponents $\beta$, $\gamma$ and $\delta$ of TL-LSMO-0.3 with those of various theoretical models in three and two dimensions.
RG: Renormalization group, CI: Critical isotherm. } \begin{ruledtabular} \begin{tabular}{ccccccc} {} & Method & {$T_C$}(K)& $\alpha$ & {$\beta$} & {$\gamma$} & {$\delta$}\\ \hline (Theory) \\ Mean Field~\cite{kim2002critical, kadanoff1966scaling} & {}&{}&0& 0.5 & 1 & 3 \\ Tricritical Mean Field~\cite{kim2002critical, huang1987statistical}&{}&{}&0&0.25&1&5\\ 3D-Ising (d=3, n=1) ~\cite{kim2002critical, kadanoff1966scaling} &{RG-$\phi$$^{4}$}&{}& 0.11&0.325 & 1.241 & 4.82 \\ 3D-XY (d=3, n=2)~ \cite{kim2002critical, kadanoff1966scaling} & {RG-$\phi$$^{4}$} &{}&{-0.007}& 0.346 & 1.316 & 4.81 \\ 3D-Heisenberg (d=3, n=3)~\cite{kim2002critical, kadanoff1966scaling}&{RG-$\phi$$^{4}$} &{} &{-0.115}&0.365 & 1.386 & 4.8 \\ Short-range 2D-Ising~\cite{domb2000phase, fisher1974renormalization} &{Onsager solution}&{}&{0}& 0.125 & 1.75 & 15 \\ Long-range 2D-Ising~\cite{fisher1974renormalization} &{RG-$\epsilon$$^{'}$}&{}&{}& 0.289 & 1.49 & 6 \\ \hline (Experiment)\\ {La$_{2.1}$Sr$_{1.9}$Mn$_{3}$O$_{10}$}[This work] &MAPs& 103.54 $\pm$ 0.03 &{}&0.118 $\pm$ 0.004&{1.681 $\pm$ 0.006}&\\ {}&CI&{}&{}&{}&{}&14.668 $\pm$ 0.002\\ {}&KF&103.24 $\pm$ 0.01&{}&0.120 $\pm$ 0.003&\\ {}&{}&{103.12 $\pm$ 0.02}&{}&{}&1.710 $\pm$ 0.005&\\ {}&{RG}&{}&{}&{0.145}&{1.91}&{14.172}\\ {}&{Scaling}&{103.17 $\pm$ 0.01}&{}&{0.121 $\pm$ 0.001}&{1.710 $\pm$ 0.005}&{} \end{tabular} \end{ruledtabular} \label{Theory} \end{table*} \section{Spin interaction} Finally, we discuss the range and dimensionality of the spin interactions in TL-LSMO-0.3 with the help of renormalization group theory. For a homogeneous magnet, the universality class of the magnetic phase transition is defined by the exchange interaction J(r). Fisher \textit{et al.}\cite{fisher1974renormalization} used renormalization group theory and suggested that the exchange interaction decays with distance r as J(r) $\sim {r^{-(d+\sigma)}}$, where d is the dimensionality and $\sigma$ is the range of the interaction.
They also discussed the validity of such a model for $\sigma$ $ < $ 2, i.e., for long-range interactions. Further, the critical exponent $\gamma$ associated with the susceptibility can be given as \begin{equation} \begin{split} \gamma = 1 &+ \frac{4}{d}\Bigg(\frac{n+2}{n+8}\Bigg)\Delta{\sigma}+\Bigg(\frac{8(n+2)(n-4)}{d^2(n+8)^2}\Bigg) \\ &\times \Bigg[{1+\frac{2G(\frac{d}{2})(7n+20)}{(n-4)(n+8)}}\Bigg]\Delta{\sigma^2}, \end{split} \label{spin} \end{equation} where $\Delta{\sigma}$ = $\Big(\sigma - {\frac{d}{2}}\Big)$ and $G(\frac{d}{2}) = 3$ - $\frac{1}{4}$$\big(\frac{d}{2}\big)^2$. The range of interaction $\sigma$ and the dimensionalities of both space and spin are determined by the procedure defined in Ref. \cite{fischer2002critical}. The value of $\sigma$ is chosen for a particular set of $\left\{d:n\right\}$ such that Eq. (\ref{spin}) yields a value of the exponent $\gamma$ close to the experimentally determined one, $\gamma$ = 1.71. Further, the remaining exponents can be determined with the help of Eqs. (\ref{Rushbrooke}) and (\ref{Widom}) and the $\sigma$ value, using the following expressions: $\alpha$ = 2 $-$ $\nu$d, $\nu$ = $\gamma$/$\sigma$, $\gamma$ = $\nu$(2 $-$ $\eta$) and $\eta$ = 2 $-$ $\sigma$. We found that $\left\{d:n\right\}$ = $\left\{2:1\right\}$ yields $\sigma$ = 1.69. The value $\sigma$ = 1.69 is then used to determine the remaining exponents, $\beta$ = 0.135, $\gamma$ = 1.91 and $\delta$ = 14.172, which are close to the values of the exponents obtained from the previous methods, i.e., the MAPs, KF and scaling analyses (Table \ref{Theory}). We have also examined the remaining 3D and 2D models, but they cannot describe the experimental results obtained for TL-LSMO-0.3.
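The RG expression for $\gamma$ is straightforward to evaluate for any $\left\{d:n\right\}$ pair; the sketch below is a direct transcription of the formula, with $\Delta\sigma = \sigma - d/2$ and $G(d/2) = 3 - \frac{1}{4}(d/2)^2$ (the result depends sensitively on the value taken for $\sigma$):

```python
def gamma_rg(d, n, sigma):
    """Evaluate the renormalization-group expression for the critical
    exponent gamma of a system with interaction range sigma, space
    dimensionality d and spin dimensionality n."""
    d_sig = sigma - d / 2.0
    G = 3.0 - 0.25 * (d / 2.0) ** 2
    term1 = (4.0 / d) * (n + 2.0) / (n + 8.0) * d_sig
    prefac = 8.0 * (n + 2.0) * (n - 4.0) / (d ** 2 * (n + 8.0) ** 2)
    bracket = 1.0 + 2.0 * G * (7.0 * n + 20.0) / ((n - 4.0) * (n + 8.0))
    return 1.0 + term1 + prefac * bracket * d_sig ** 2

# {d:n} = {2:1} with sigma = 1.69, the case identified for TL-LSMO-0.3
print(round(gamma_rg(2, 1, 1.69), 3))
# sigma = d/2 gives Delta_sigma = 0 and recovers the mean-field gamma = 1
print(gamma_rg(3, 3, 1.5))
```

Scanning `sigma` for each candidate $\left\{d:n\right\}$ until the computed $\gamma$ matches the measured one is exactly the selection procedure described in the text.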
For example, the 3D-Heisenberg [$\left\{d:n\right\}$ = $\left\{3:3\right\}$], 3D-XY [$\left\{d:n\right\}$ = $\left\{3:2\right\}$] and 3D-Ising [$\left\{d:n\right\}$ = $\left\{3:1\right\}$] models with short-range exchange interaction yield $\gamma$ = 1.25, 1.27 and 1.23, respectively. Similarly, the 2D-Heisenberg [$\left\{d:n\right\}$ = $\left\{2:3\right\}$] and 2D-XY [$\left\{d:n\right\}$ = $\left\{2:2\right\}$] models defined for short-range exchange interaction yield $\gamma$ = 2.56 and 2.30, respectively. The other exponents ($\beta$ and $\delta$) calculated by using the respective $\sigma$ values for the different models $\left\{d:n\right\}$ = $\left\{2:2, 3\right\}$ and $\left\{d:n\right\}$ = $\left\{3:1, 2, 3\right\}$ also show significant differences from the experimental results for $\beta$ and $\delta$. Hence, all the other 2D and 3D models can be discarded. The long-range mean-field model is valid for $\sigma$ $\leq$ 3/2, where J(r) decreases as J(r) $\sim$ r$^{-4.5}$. For $\sigma$ $\geq$ 2, only the short-range 3D-Heisenberg model is valid, with J(r) varying as J(r) $\sim$ r$^{-5}$. The other 3D universality classes for short-range interactions lie between 3/2 $<$ $\sigma$ $<$ 2, where J(r) decreases as J(r) $\sim$ r$^{-d-\sigma}$. All the theoretical models with short-range exchange interaction vary with distance r as J(r) $\sim$ e$^{-(r/b)}$ (where b is the correlation length). The renormalization group analysis suggests that the spin interaction in TL-LSMO-0.3 is of the short-range 2D-Ising type [$\left\{d:n\right\}$ = $\left\{2:1\right\}$] with $\sigma$ = 1.69 and decays as $\sim$ r$^{-3.69}$. \section{Discussion} All the findings in the above sections for TL-LSMO-0.3 yield values of the critical exponents close to those of the short-range 2D-Ising model. A graphical comparison of the critical exponents $\beta$, $\gamma$ and $\delta$ of TL-LSMO-0.3 with various theoretical models is presented in Fig. \ref{Models}.
The obtained exponents are consistent with the Q2D-layered structural character of TL-LSMO-0.3 and emphasize that magnetic anisotropy plays a crucial role in the magnetism of TL-LSMO-0.3. \begin{figure}[htb!] \centering \includegraphics[trim=0.3mm 0.3mm 0.3mm 0.3mm,clip,width=\linewidth]{Figure9.pdf} \caption{Comparison of different exponents of TL-LSMO-0.3 (denoted as closed circles filled with yellow color) with those of the standard universality classes. The vertical bars represent different models for short-range and long-range exchange interactions. Short-range $\rightarrow$ 2D-Ising (green bar), 3D-Ising (sky blue bar), 3D-XY (sky blue bar) and 3D-Heisenberg (sky blue bar); long-range $\rightarrow$ 2D-Ising model (pink bar).} \label{Models} \end{figure} The 2D magnetism in TL-LSMO-0.3 also implies that the inter-layer interaction weakens around T$_C$ while the intra-layer interaction becomes stronger, which leads to 2D FM order in TL-LSMO-0.3. Our results for TL-LSMO-0.3 are consistent with the criterion of Taroni \textit{et al.}\cite{taroni2008universal}, according to which the critical exponent $\beta$ for 2D magnets should lie in the range 0.1 $\leq$ $\beta$ $\leq$ 0.25. Similar results have been reported in bi-layer La$_{2-2x}$Sr$_{1+2x}$Mn$_{2}$O$_{7}$, in which short-range 2D FM ordering occurs around T$_C$\cite{osborn1998neutron, gordon1999specific}. Osborn \textit{et al.}\cite{osborn1998neutron} performed neutron scattering measurements on bi-layer La$_{2-2x}$Sr$_{1+2x}$Mn$_{2}$O$_{7}$ for x = 0.4 and reported a short-range 2D-Ising interaction with $\beta$ = 0.13 $\pm$ 0.01. Gordon \textit{et al.}\cite{gordon1999specific} performed specific heat measurements on La$_{2-2x}$Sr$_{1+2x}$Mn$_{2}$O$_{7}$ for x = 0.4 and found the result consistent with 2D-XY or 2D-Ising critical fluctuations.
There is no neutron diffraction data on tri-layer La$ _{3-3x} $Sr$ _{1+3x} $Mn$ _{3} $O$ _{10} $; however, one can get an idea about the spin structure and spin-spin interaction from the neutron diffraction data for bi-layer manganites. Next, we discuss the unconventional behavior of the temperature-dependent magnetization and the magnetic spin structure in different temperature regions. Conventionally, when an FM material undergoes a magnetic phase transition from the FM to the PM state, the magnetic moment of the system becomes zero above T$ _{C} $. As shown in Fig. \ref{FC_ZFC}, the magnetic moment of the FC curve for TL-LSMO-0.3 is non-zero even above T$ _{C} $, and another transition appears at a higher temperature $\approx$ 263 K, denoted T$ ^{*} $. The non-zero magnetization above T$ _{C} $ indicates that the phase transition in TL-LSMO-0.3 is not a simple FM-to-PM transition. \begin{figure}[htb!] \centering \includegraphics[trim=0.3mm 0.3mm 0.3mm 0.3mm,clip,width=\linewidth]{Figure10.jpg} \caption{ Spin structure in different temperature ranges of TL-LSMO-0.3. Below T$ _{C} $, i.e., for T $ < $ T$ _{C} $, the spins have FM alignment. The first type of spin canting occurs in the range T$ _{C} $ $ < $ T $ < $ T$ ^{*} $, and the second type of canting is shown above T$ ^{*} $. The inset shows the FC curve for infinite-layer La$ _{0.7} $Sr$ _{0.3} $MnO$ _{3} $\cite{shi1999electrical}. The solid red line represents the fit to Eq. \ref{Ms}, yielding the exponent $\beta$ = 0.3. } \label{canted} \end{figure} A similar magnetic phase transition has been observed in bi-layer La$_{2-2x}$Sr$_{1+2x}$Mn$_{2}$O$_{7}$ RP-series manganites, and extensive studies have been conducted to explore the magnetic structure of the region between T$ _{C} $ and T$ ^{*} $.
Kimura \textit{et al.}\cite{kimura1996interplane} studied La$_{1.4}$Sr$_{1.6}$Mn$_{2}$O$_{7}$ and claimed that there is short-range 2D FM ordering between T$ _{C} $ and T$ ^{*} $, which gives rise to the finite magnetic moment in this region, and that above T$ ^{*} $ the system goes to the PM state. The 2D FM character above T$ _{C} $ was disputed by Heffner \textit{et al.}\cite{heffner1998effects}, whose muon spin rotation study of bi-layer La$_{1.4}$Sr$_{1.6}$Mn$_{2}$O$_{7}$ found no evidence of 2D magnetic ordering above T$ _{C} $. Later, a rigorous neutron scattering study of bi-layer La$_{2-2x}$Sr$_{1+2x}$Mn$_{2}$O$_{7}$ for x = 0.4 by Osborn \textit{et al.}\cite{osborn1998neutron} revealed a strong canting of the Mn spins in adjacent MnO$ _{2} $ layers within each MnO$ _{2} $ bi-layer above T$ _{C} $, with the canting angle depending on both temperature and magnetic field. Their neutron scattering study also revealed that the FM and antiferromagnetic (AFM) orderings are inhomogeneously distributed in approximately equal volumes above T$ _{C} $. Therefore, a non-collinear spin correlation, or canting of the spins, arises from the competition between the FM DE and the AFM superexchange (SE) interaction. The intra-bi-layer or inter-planar interaction is substantially weaker than the intra-planar interaction, which produces a large magnetic anisotropy in the exchange interactions. The FM in manganites is governed by the DE mechanism, based on the hopping of electrons: the kinetic energy of the mobile electrons is lowered by polarizing the Mn spins, which are localized in the MnO$ _{2} $ plane. Hence, there is a larger free-energy gain within the plane due to the energy acquired from the delocalized electrons\cite{osborn1998neutron}. A comparatively larger number of Mn spins can take part in an FM cluster within a plane than between adjacent planes, where only two Mn spin sites are available.
Therefore, the SE interaction affects the spin interactions along the c-axis much more strongly than those in the ab-plane\cite{osborn1998neutron}. This is why the intra-planar interaction is stronger than the intra-bi-layer interaction. Based on the above discussion of the non-zero magnetization in bi-layer La$_{2-2x}$Sr$_{1+2x}$Mn$_{2}$O$_{7}$, we propose that a similar inhomogeneous distribution of FM and AFM clusters gives rise to the canted AFM-I (CAFM-I) spin structure above T$ _{C} $, which is responsible for the non-zero magnetization in TL-LSMO-0.3. The magnetic moment of the system decreases continuously with increasing temperature, and the system undergoes another transition at T$ ^{*} $. Notably, the magnetic moment is still non-zero above the second transition T$ ^{*} $, which indicates that the system goes into another canted AFM-II (CAFM-II) state with a canting angle greater than that of the CAFM-I state, responsible for the finite magnetic moment above T$ ^{*} $. The layered crystal structure of TL-LSMO-0.3 suggests that different types of interactions are present in the system: the inter-tri-layer ($ J' $), the inter-layer or intra-tri-layer ($ J_{c} $), and the intra-planar ($ J_{ab} $) interaction. The two manganese ions Mn$ ^{3+} $ and Mn$ ^{4+} $ are distributed in the ratio Mn$ ^{3+} $:Mn$ ^{4+} $ = 2.33:1. Depending upon the distribution of, and distances between, the Mn spins in different directions, the interaction strengths can be ordered as $ J_{ab} $ $ > $ $ J_{c} $ $ >> $ $ J' $. The intra-planar interaction $ J_{ab} $ is the DE interaction, i.e., the spins are coupled ferromagnetically, while the intra-tri-layer interaction $ J_{c} $ is an SE interaction, which implies that the spins are coupled antiferromagnetically. The inter-tri-layer interaction $ J' $, on the other hand, is a direct exchange interaction.
Although the DE interaction is strong in the ab-plane, a minor AFM interaction also coexists there. Similarly, the intra-tri-layer interaction $ J_{c} $ is an SE interaction along the c-axis, but a weak DE interaction can also coexist with it. The intra-planar interaction $ J_{ab} $ is the strongest and dominates over the other interactions ($ J_{c} $ and $ J' $); combined with the anisotropy, $ J_{ab} $ gives rise to the 2D-Ising-like spin structure below T$ _{C} $. With increasing temperature, the weakest interaction $ J' $ breaks first, at T$ _{C} $, giving rise to the phase transition from the 2D-Ising state to the CAFM-I state. The relative strength of the intra-planar interaction $ J_{ab} $ decreases with increasing temperature, so the intra-tri-layer interaction $ J_{c} $ starts competing with $ J_{ab} $; the competition between these two magnetic interactions results in the CAFM-I state. The minor intra-tri-layer FM interaction along the c-axis and the minor intra-layer AFM interaction in the ab-plane weaken with further increase in temperature, resulting in a phase transition from CAFM-I to CAFM-II at T$ ^{*} $. The CAFM-II state arises from the competition between the SE interaction along the c-axis and the DE interaction in the ab-plane. Figure \ref{canted} shows the possible magnetic structures above and below T$ _{C} $ in TL-LSMO-0.3, based on the bi-layer studies\cite{sonomura2013neutron}. The inset of Fig. \ref{canted} shows the FC curve for infinite-layer La$ _{0.7} $Sr$ _{0.3} $MnO$ _{3} $\cite{shi1999electrical}. The solid red line represents the fit to Eq. \ref{Ms} and yields the exponent $\beta$ = 0.3, which is close to the 3D-Ising universality class.
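The power-law fit quoted above can be sketched numerically. The snippet below is a hedged illustration on synthetic data, assuming the usual spontaneous-magnetization form $M_s(T) = M_0(1-T/T_C)^{\beta}$ for Eq. \ref{Ms}; the values of $M_0$, $T_C$ and the temperature grid are made up for the example and are not the measured data.

```python
import math

# Hedged illustration (synthetic data, not the authors' fit): extract the
# exponent beta from M_s(T) = M0 * (1 - T/Tc)^beta by linear regression of
# log M_s against log(1 - T/Tc). M0, Tc and the temperature grid are made up.
def fit_beta(temps, mags, tc):
    xs = [math.log(1 - t / tc) for t in temps]
    ys = [math.log(m) for m in mags]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # least-squares slope = beta
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

tc, beta = 360.0, 0.3                       # assumed Curie temperature (K)
temps = [200 + 10 * i for i in range(15)]   # temperatures below Tc
mags = [5.0 * (1 - t / tc) ** beta for t in temps]
print(round(fit_beta(temps, mags, tc), 3))  # -> 0.3
```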
Martin \textit{et al.}\cite{martin1996magnetism} and Vasiliu \textit{et al.}\cite{vasiliu1998spin} performed neutron scattering on infinite-layer La$ _{0.7} $Sr$ _{0.3} $MnO$ _{3} $ and showed that it belongs to the short-range 3D-Ising universality class, with $\beta$ = 0.295 and 0.3, respectively. One can see that the magnetic moment of infinite-layer La$ _{0.7} $Sr$ _{0.3} $MnO$ _{3} $ above T$ _{C} $ is zero, in contrast to TL-LSMO-0.3. With the dimensionality reduced from 3D to Q2D, the system changes from 3D-Ising to 2D-Ising-like spin-spin interaction, and a canted AFM structure exists between the FM and PM states due to the presence of different exchange interactions. It is well known that different interactions are responsible for different spin structures: the exchange interaction aligns the spins parallel to each other, the long-range dipolar interaction favors closed loops of spins, and the anisotropy energy favors alignment of the spins perpendicular to the plane. Hence, anisotropy in a magnetic system results in an Ising spin structure, and the system behaves as a uniaxial magnet. The Ising-type interaction below T$ _{C} $ in TL-LSMO-0.3 emphasizes that magnetic anisotropy plays a crucial role in the magnetism of TL-LSMO-0.3. It is believed that skyrmions in manganite perovskites result from the competition between different energies, such as the exchange interaction, the long-range dipolar interaction and the anisotropy energy. In view of the observation of bi-skyrmions in bi-layer La$_{1.37}$Sr$_{1.63}$Mn$_{2}$O$_{7}$, which has magnetic properties similar to those of TL-LSMO-0.3, we contemplate that TL-LSMO-0.3 should also host skyrmions. All the above discussions and experimental observations imply that much more experimental and theoretical work is needed to thoroughly understand the magnetism in tri-layer La$_{3-3x}$Sr$_{1+3x}$Mn$_{3}$O$_{10}$ manganite perovskite.
The magnetic and transport properties of tri-layer La$_{3-3x}$Sr$_{1+3x}$Mn$_{3}$O$_{10}$ manganite perovskite for different Sr concentrations are not yet explored. It is therefore highly desirable to establish the structural, magnetic and electronic phase diagram of tri-layer La$_{3-3x}$Sr$_{1+3x}$Mn$_{3}$O$_{10}$, as it may be a potential candidate for future spintronics. We hope the present study will prompt further investigations of the magnetic phase transitions and the different types of exchange interaction in low-dimensional RP-series manganite perovskites. \section{Conclusion} In summary, we have established an understanding of the phase transition in the novel quasi-two-dimensional ferromagnetic tri-layer La$_{2.1}$Sr$_{1.9}$Mn$_{3}$O$_{10}$ RP-series manganite and discussed the low dimensionality of its magnetic properties. A comprehensive experimental study of the critical properties was performed using isothermal magnetization in the vicinity of the Curie temperature T$_C$. We used various techniques, including the modified Arrott plots (MAPs), the Kouvel-Fisher (KF) method, scaling and critical isotherm analysis, to determine the critical exponents of La$_{2.1}$Sr$_{1.9}$Mn$_{3}$O$_{10}$. The obtained critical exponents for La$_{2.1}$Sr$_{1.9}$Mn$_{3}$O$_{10}$ are close to the theoretical values of the 2D-Ising model with short-range interactions. The critical exponents were also determined using the renormalization-group approach for a two-dimensional (2D) Ising system with short-range interactions decaying as $J(r)$ $\sim$ r$^{-(d+\sigma)}$ with $\sigma$ = 1.69. We suggest that the strong anisotropy and the layered structure play a crucial role, resulting in the Ising-like interaction in La$_{2.1}$Sr$_{1.9}$Mn$_{3}$O$_{10}$.
Based on the results obtained for La$_{2.1}$Sr$_{1.9}$Mn$_{3}$O$_{10}$ in the present study, we propose that La$_{3-3x}$Sr$_{1+3x}$Mn$_{3}$O$_{10}$ is a potential skyrmion host material. Finally, we propose that the non-zero magnetic moment above T$ _{C} $ is due to a canted antiferromagnetic spin orientation. \bibliographystyle{apsrev4-2}
\section{Statement of Results} The fundamental objective in the theory of Diophantine approximation is to answer the question: \emph{how well can an irrational number be approximated by rational numbers}? In the one-dimensional setting this question is well understood, as the theory of continued fractions provides a quick and efficient way of finding good rational approximations to irrational numbers. The continued fraction expansion can be computed by the Gauss transformation $T:[0, 1)\to [0, 1)$ defined as \begin{equation*} T(0) =0\quad \quad {\rm and} \quad T(x) =\frac{1}{x}({\rm mod} \ 1) \quad {\rm if} \quad x\in(0,1). \end{equation*} Then every irrational $x\in[0,1)$ admits a unique continued fraction expansion \begin{equation}\label{cf1} x=\frac{1}{a_{1}(x)+\displaystyle{\frac{1}{a_{2}(x)+\displaystyle{\frac{1}{ a_{3}(x)+_{\ddots }}}}}},\end{equation} where the $a_{n}(x)$ are called the partial quotients of $x$, with $$a_{1}(x)=\floor*{\frac{1}{x}} \quad \text{and}\quad a_{n}(x)=\floor*{ \frac{1} {T^{n-1}(x)}}=a_1\left(T^{n-1}(x)\right)\in\mathbb N$$ for each $n\geq 1$ (where $\lfloor \cdot\rfloor$ stands for the integral part). Equation \eqref{cf1} can also be written as $$x=[a_{1}(x),a_{2}(x),a_{3}(x),\ldots,a_{n}(x)+T^{n}x ]= [a_{1}(x),a_{2}(x),a_{3}(x),\ldots ].$$ Studying the growth properties of the partial quotients valid for almost all (or almost no) $x\in[0, 1)$ is a major area of investigation within the theory of continued fractions and is referred to as the metrical theory of continued fractions. Since the partial quotients can be obtained through the Gauss map, the theory has close connections with dynamical systems, ergodic theory and Diophantine approximation. Historically, the focus has been on the metrical theory of the sets \begin{equation*}\label{BB} \mathcal E(\Psi):=\left\{x\in[0, 1): a_{n}(x)\geq \Psi(n) \ \text{for infinitely many} \ n\in \mathbb N \right\} \end{equation*} for a given function $\Psi:\mathbb N\to\mathbb R_{\ge 1}$.
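The construction of the partial quotients above is easy to sketch in code. The following illustration (a hedged sketch, using exact rational arithmetic to avoid floating-point drift) recovers the partial quotients of $16/113 = [0;7,16]$ by iterating the Gauss map:

```python
from fractions import Fraction
from math import floor

def gauss_quotients(x, n):
    """First n partial quotients a_1, ..., a_n of x in [0, 1) obtained by
    iterating the Gauss map T(x) = 1/x mod 1 (stops early if x hits 0)."""
    a = []
    for _ in range(n):
        if x == 0:
            break
        a.append(floor(1 / x))       # a_k = floor(1 / T^{k-1} x)
        x = 1 / x - floor(1 / x)     # apply T once
    return a

# exact rational input avoids floating-point drift: 16/113 = [0; 7, 16]
print(gauss_quotients(Fraction(16, 113), 5))  # -> [7, 16]
```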
Borel-Bernstein's theorem \cite{Be_12, Bo_12} is a fundamental result that describes the size of the set $\mathcal E(\Psi)$ in terms of Lebesgue measure. \begin{thm}[Borel-Bernstein, 1911-1912]\label{Bor} Let $\Psi:\mathbb N\to\mathbb R_{\ge 1}$. Then \begin{equation*} \mathcal L\big(\mathcal E(\Psi)\big)= \begin{cases} 0\ & \mathrm{if}\quad \sum_{n=1}^{\infty }\frac{1}{\Psi (n)} \,<\,\infty, \\[2ex] 1 \ & \mathrm{if}\quad \sum_{n=1}^{\infty}\frac{1}{\Psi (n)} \, =\,\infty . \end{cases} \end{equation*} \end{thm} Good \cite{Good} and {\L}uczak \cite{Luczak} were the main contributors to the study of the Hausdorff dimension of this set, for $\Psi(n)$ tending to infinity at polynomial speed $n^{a}$ and at super-exponential speed $a^{b^n}$, respectively; see also \cite{MR1464376, Hir, Moor} and references therein. The dimension of $\mathcal{E}(\Psi)$ was then computed by Wang-Wu \cite{WaWu08} for arbitrary $\Psi$. In what follows, $P(T, \phi)$ will stand for the pressure function for the dynamics of the Gauss map $T$ with potential $\phi$; see \S\ref{Pressure Functions} for a precise definition. \begin{thm}[Wang-Wu, 2008] \label{WaWu} Let $\Psi:\mathbb N\to\mathbb R_{\ge 1}$. Denote \begin{equation}\label{Bb}\log B:=\liminf\limits_{n\rightarrow \infty }\frac{\log \Psi (n)}{n} \ \text{ and }\ \log b:=\liminf\limits_{n\rightarrow \infty }\frac{\log \log \Psi (n)}{n}.\end{equation} Then \begin{equation*} \hdim \mathcal E(\Psi)=\left\{ \begin{array}{ll}1& \mathrm{if}\ \ B=1, \\ [2ex] \inf \left\{s\geq 0: P\big(T, -s(\log B+\log |T^{\prime }|)\big)\le 0\right\}& \mathrm{if}\ \ 1<B< \infty, \\ [2ex] \frac{1}{1+b} & \mathrm{if}\ \ B=\infty. \end{array} \right. \end{equation*} {In particular, $\hdim \mathcal E(\Psi) > 1/2$ if $B<\infty$.} \end{thm} In this paper, we study a generalized form of the set $\mathcal E(\Psi)$ which has close connections with improvements to Dirichlet's theorem (1842).
Namely, in \cite{KleinbockWadleigh} Kleinbock-Wadleigh considered the set \begin{equation}\label{E2} \mathcal E_2(\Psi):=\left\{x\in[0, 1): {a_{n}(x)a_{n+1}(x)}\geq \Psi(n) \ \text{for infinitely many} \ n\in \mathbb N \right\}, \end{equation} and found a zero-one law for $\mathcal L\big(\mathcal E_2(\Psi)\big)$, see \cite[Theorem 3.6]{KleinbockWadleigh}. This result was used to establish a zero-one law for the sets of $\psi$-Dirichlet improvable real numbers \cite[Theorem 1.8]{KleinbockWadleigh}, where $\psi$ is a positive non-increasing function. See \cite[\S2]{KleinbockWadleigh} for a connection between \eqref{E2} and the improvements to Dirichlet's theorem, and {\cite{HKWW, BHS, Ba_20}} for further results in that direction. The work of Kleinbock-Wadleigh was followed by Huang-Wu-Xu \cite{HuWuXu} with both Lebesgue measure and Hausdorff dimension results for a natural generalization of the set \eqref{E2}. Namely, for $m\in\mathbb N$ they considered \begin{equation}\label{em} \mathcal E_m(\Psi):=\left\{x\in[0, 1): {a_{n}(x)\cdots a_{n+m-1}(x)}\geq \Psi(n) \ \text{for infinitely many} \ n\in \mathbb N \right\}, \end{equation} and proved the following \pagebreak \begin{thm}[Huang-Wu-Xu, 2019]\label{HWXLeb} Given {$\Psi:\mathbb N\to\mathbb R_{\ge 1}$}, \begin{itemize} \item[\rm (a)] \cite[Theorem 1.5]{HuWuXu} \begin{equation*} \mathcal L\big(\mathcal E_m(\Psi)\big)= \begin{cases} 0\ & \mathrm{if}\quad \sum\limits_{n=1}^{\infty}\frac {\log^{m-1}\Psi(n)}{\Psi(n)} \,<\,\infty ,\\[2ex] 1 \ & \mathrm{if}\quad \sum\limits_{n=1}^{\infty}\frac {\log^{m-1}\Psi(n)}{\Psi(n)}\,=\,\infty ; \end{cases} \end{equation*} \item[\rm (b)] \cite[Theorem 1.7]{HuWuXu} \begin{equation*} \dim _{\mathrm{H}}\mathcal{E} _m(\Psi )=\left\{ \begin{array}{ll}1,& \mathrm{if}\ \ B=1, \\ [3ex] \inf \big\{s\geq 0: {P}\left( T,-f_m(s)\log B-s\log|T^{\prime }|\right) \leq 0\big\}& \mathrm{if}\ \ 1<B< \infty; \\ [3ex] \frac{1}{1+b} & \mathrm{if}\ \ B=\infty, \end{array} \right.
\end{equation*} where $B,b$ are as in \eqref{Bb}, and $f_m$ is given by the following iterative formula: \begin{equation}\label{fi}{f_1(s)=s, \quad f_{k+1}(s)=\frac{sf_k(s)}{1-s+f_k(s)}, \ k\geq 1.}\end{equation} \end{itemize} \end{thm} In this paper we consider a weighted generalization of \eqref{em}: take ${\mathbf t} = {(t_0,\dots,t_{m-1})}\in\mathbb R_+^m$ and {$\Psi:\mathbb N\to\mathbb R_{\ge 1}$}, and define \begin{equation*}\label{mainset} {\mathcal E_{\mathbf t}}(\Psi):=\left\{x\in[0, 1): {\prod_{i=0}^{m-1}}a^{t_i}_{n+i}(x)\ge \Psi(n) \ {\text{for infinitely many}} \ n\in \mathbb N \right\}. \end{equation*} Clearly {$\mathcal E_m (\Psi)=\mathcal E_{{\mathbf 1}_m}(\Psi)$, where ${\mathbf 1}_m = (\underbrace{1,\dots,1}_{m})$}. Generalizing Theorem \ref{HWXLeb}(a), we prove the following dichotomy statement for the Lebesgue measure of ${\mathcal E_{\mathbf t}}(\Psi)$: \begin{thm}\label{mainLeb} Let {$\Psi:\mathbb N\to\mathbb R_{\ge 1}$}. Then \begin{equation*} \mathcal L\big({\mathcal E_{\mathbf t}}(\Psi )\big)=\left\{ \begin{array}{ll}0,& \mathrm{if}\ \ \sum\limits_{n=1}^{\infty}\frac{(\log \Psi(n))^{\ell-1}}{\Psi(n)^{1/t_{\max}}} <\infty, \\ [2ex] 1 & \mathrm{if}\ \ \sum\limits_{n=1}^{\infty}\frac{(\log \Psi(n))^{\ell-1}}{\Psi(n)^{1/t_{\max}}} =\infty,\end{array} \right. \end{equation*}where \begin{equation}\label{tmax} t_{\max}=\max\{t_i: {0\le i\le m-1}\}, \ \ell=\# \{i: t_i=t_{\max}\}. \end{equation} \end{thm} \ignore{\begin{cor}\label{KKW} Let $\Psi:\mathbb N\to [1, \infty)$ be an increasing function. Define $$ F_m^{\mathbf t} (\Psi):=\left\{x\in[0, 1): \prod_{i=1}^ma_{n+i}^{t_i}(x) \ge \Psi(q_n(x)) \ {\text{for i.m.}} \ n\in \mathbb N\right\}. $$ Then \begin{equation*} \mathcal L\left(F^\mathbf t_m(\Psi)\right)= \begin{cases} 0\ & \mathrm{if}\quad \sum\limits_{n=1}^{\infty}\frac{(\log \Psi(n))^{\ell-1}}{n\Psi(n)^{1/t_{\max}}}<\infty ; \\[2ex] 1 \ & \mathrm{if}\quad \sum\limits_{n=1}^{\infty}\frac{(\log \Psi(n))^{\ell-1}}{n\Psi(n)^{1/t_{\max}}}=\infty . 
\end{cases} \end{equation*} \end{cor} Corollary \ref{KKW} recovers not only Khintchine's classical theorem (1924) by setting $m=1$, and $t_1=t_2=\ldots=1$ but also Kleinbock-Wadleigh theorem \cite{KleinbockWadleigh} by setting $m=2$, and $t_1=t_2=\ldots=1$. See the next section for further details.} A weighted generalization of Theorem \ref{HWXLeb}(b) {is straightforward in} the {case when $B$ is either infinite or equal to $1$: \begin{thm}\label{BHWthmeasy} Let $\Psi:\mathbb N\to\mathbb R_{\ge 1}$, and let $B,b$ be as in \eqref{Bb}. Then \begin{equation*} \hdim {\mathcal E_{\mathbf t}}(\Psi )=\left\{ \begin{array}{ll} \ \, 1& \mathrm{if}\ \ B=1, \\ [3ex] \frac{1}{1+b} & \mathrm{if}\ \ B=\infty. \\ [3ex] \end{array} \right. \end{equation*} \end{thm}} As for the remaining intermediate case $1<B< \infty$, we are only able to treat the $m=2$ case, characterizing the Hausdorff dimension of sets ${\mathcal E_{\mathbf t}}(\Psi )$ for ${\mathbf t} = ({t_0,t_1})\in\mathbb R_+^2$. \begin{thm}\label{BHWthm} Let $\Psi:\mathbb N\to\mathbb R_{\ge 1}$ be such that $1<B< \infty$, and let ${\mathbf t} = ({t_0,t_1})\in\mathbb R_+^2$. Then \begin{equation*} \hdim {\mathcal E_{\mathbf t}}(\Psi )= \inf \big\{ s\geq 0: P(T, -s\log |T'|-f_{{t_0,t_1}}(s)\log B)\le 0\big\} , \end{equation*} where \begin{equation}\label{ft1t2} f_{{t_0,t_1}}(s):=\frac{s^2}{{t_0t_1}\cdot \max\left\{{\frac{s}{t_1}+\frac{1-s}{t_0}, \frac{s}{t_0}}\right\}}.\end{equation} \end{thm} Note that $f_{1, 1}(s) = s^2$ {for all $0 \le s \le 1$}, which agrees with the $k=2$ case of \eqref{fi}. See \S\ref{final} for an explanation of why the case $m > 2$ is much more involved. \begin{rem} It is worth highlighting an interesting phenomenon here. The Lebesgue measure of the set ${\mathcal E_{{\mathbf t}}(\Psi )}$ is independent of the ordering of the exponents, whereas the Hausdorff dimension depends on it.
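As a side illustration (not part of the proof), the weight function \eqref{ft1t2} can be evaluated numerically to see the dependence on the ordering of the exponents:

```python
def f_weight(t0, t1, s):
    # f_{t0,t1}(s) = s^2 / (t0*t1*max(s/t1 + (1-s)/t0, s/t0)); see Eq. (ft1t2)
    return s ** 2 / (t0 * t1 * max(s / t1 + (1 - s) / t0, s / t0))

s = 0.75
# for (t0,t1) = (2,1) this reduces to s^2/(1+s); for (1,2) and s > 2/3, to s/2
print(f_weight(2, 1, s), f_weight(1, 2, s))
assert f_weight(2, 1, s) < f_weight(1, 2, s)   # ordering of exponents matters
```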
For instance \begin{equation*}\label{f12} f_{2, 1}(s)=\frac {s^2}{1+s},\ {\text{and}}\ \ f_{1, 2}(s)=\begin{cases}\frac{s^2}{2-s}&\text{\rm if }s \le \frac23;\\ \frac{s}{2}&\text{\rm if }s > \frac23.\end{cases}\end{equation*} It is easy to see that $f_{2,1}(s)<f_{1,2}(s)$ for any $1/2<s<1$. Since $\hdim \mathcal E_{\mathbf (1,2)}(\Psi ) \ge \hdim \mathcal E(\Psi ) > 1/2$ whenever $B<\infty$ (see Theorem \ref{WaWu}), it follows that in Theorem \ref{BHWthm} one always {has} $${\hdim \mathcal E_{\mathbf (2,1)}(\Psi )>\hdim \mathcal E_{\mathbf (1,2)}(\Psi )}.$$ \end{rem} \noindent{\bf Acknowledgements.} {The research of A.\ Bakhtawar is supported by the ARC grant DP180100201, of M.\ Hussain by the ARC grant DP200100994, of D.\ Kleinbock by the NSF grant DMS-1900560, and of B.\ Wang by NSFC (11831007).} Part of this work was carried out during the workshop ``Ergodic Theory, Diophantine approximation and related topics'' sponsored by the MATRIX Research Institute. \ignore{ \section{ Background and literature survey} At its most fundamental level, the theory of Diophantine approximation is concerned with the question of how well a real number can be approximated by rationals. Dirichlet's theorem (1842) is the starting point in this theory. \begin{thm}[Dirichlet, 1842]\label{Dir} \label{Dirichletsv} \noindent Given $x\in \mathbb R$ and $t>1 $, there exist integers $p,q$ such that \begin{equation*}\label{eqdir} \left\vert qx-p\right\vert\leq 1/t \quad{\rm and} \quad 1\leq{q}<{t}. \end{equation*} \end{thm} A consequence of this theorem is the following global statement concerning the `rate' of rational approximation to any real number. \begin{cor} \noindent For any $x\in \mathbb R$, there exist infinitely many integers $p$ and $q > 0 $ such that \begin{equation*}\label{side1} \left\vert qx-p\right\vert<1/q. 
\end{equation*} \end{cor} Generalising this corollary leads to the landmark theorems of Khintchine (1924) and Jarn\'ik (1928), the former about the Lebesgue measure and latter about the Hausdorff measure of the set, \begin{equation}\label{eqWPsi} W(\psi):=\left\{x\in[0, 1): \left|x-\frac pq\right|<\psi(q) \ \text{for infinitely many} \ (p, q)\in\mathbb Z\times \mathbb N \right\}. \end{equation} Where $\psi:\mathbb N\to \mathbb R_+$ is a decreasing function such that $\psi(q)\to 0$ as $q\to \infty$, referred to as an ``approximating function''. We combine both the theorems in one statement below. \begin{thm}[Khintchine-Jarn\'ik, 1924-1931]\label{KJthm}Let $\psi$ be an approximating function. Then, for any $s\in(0, 1]$, $$ \mathcal H^s\left(W(\psi)\right)= \left\{ \begin{array}{cl} 0& {\rm \ if} \qquad\displaystyle \sum_{q=1}^{\infty} \ q\psi^s(q) < \infty, \, \\ [2ex] \mathcal H^s\left([0, 1)\right) & { \rm \ if} \qquad \displaystyle \sum_{q=1}^{\infty} \ q\psi^s(q) = \infty. \end{array} \right. $$ \end{thm} Note that $\mathcal H^1$ is comparable to the one-dimensional Lebesgue measure (Khintchine's theorem). In fact both, Khintchine and Jarn\'ik theorems were proved by considering the alternative form of the set $W(\psi)$ in terms of the growth of the partial quotients in the continued fractions. Let \begin{equation}\label{eqWPsicon} \mathcal K(\psi):=\left\{x\in[0, 1): a_{n+1}(x)\geq \frac{1}{q_n^2\psi(q_n)} \ \text{for infinitely many} \ n\in \mathbb N \right\}, \end{equation} where $q_n= q_n(x)$ is the denominator of the $n$th convergent $p_n/q_n = [a_1,...,a_n]$ of $x$. The equivalence of both of the sets \eqref{eqWPsi} and \eqref{eqWPsicon} readily follows from the relation \begin{equation*}\label{CF1} \frac{1}{(a_{n+1}+2)q_{n}^{2}}\,<\,\Big|x-\frac{p_{n}}{q_{n}}\Big|<\,\frac{1}{a_{n+1}q_{n}^{2}}. \end{equation*} along with the well-known Lagrange's and Legendre's theorems. 
Building on a work of Davenport-Schmidt \cite{DavenportSchmidt3}, Kleinbock-Wadleigh \cite{KleinbockWadleigh} considered the set \begin{equation*} D(\psi):=\left\{x\in\mathbb R: \begin{array}{ll}\exists\, N \ {\rm such\ that\ the\ system}\ |qx-p|\, <\, \psi(t), \ |q|<t\ \\ \text{has a nontrivial integer solution for all }t>N\quad \end{array} \right\}, \end{equation*} calling it the set of $\psi$-Dirichlet improvable numbers. A simple calculation shows the following simple yet extremely important criterion. \begin{align*} x\in D(\psi) &\Longleftrightarrow |q_{n-1}x-p_{n-1}| \,<\, \psi(q_n) \ {\text{for all}}\ n\gg 1 \\ &\Longleftrightarrow [a_{n+1}, a_{n+2},\dots]\cdot [a_n, a_{n-1},\dots, a_1]\, < \, \frac1{\Phi(q_n)} \ {\text{for all}}\ n\gg 1,\end{align*} where\begin{equation}\label{twopsis}{ \Phi(t):=\frac{t\psi(t)}{1-t\psi(t)} = \frac{1}{1-t\psi(t)} - 1. }\end{equation} In what follows, $\psi$ and $\Phi$ will always be related by \eqref{twopsis}. This leads to the following criterion for Dirichlet improvability. \begin{lem}[Kleinbock-Wadleigh, 2018]\label{kwlem}Let $x\in [0, 1)\smallsetminus\mathbb Q$. Then \begin{itemize} \item [{\rm (i)}] $x\in D(\psi)$ if $a_{n+1}(x)a_n(x)\, \le\,\Phi(q_n)/4$ for all sufficiently large $n$. \item [{\rm (ii)}] $x\in D^c(\psi)$ if $a_{n+1}(x)a_n(x)\, >\, \Phi(q_n)$ for infinitely many~$n$. \end{itemize} \end{lem} As a consequence of this lemma, we have \begin{equation*}\label{l1.2} \mathcal K(3\psi)\subset G(\Phi) \subset D^c(\psi)\subset G(\Phi/4),\end{equation*} where \begin{equation*}\label{gpsi} G(\Phi):=\Big\{x\in [0,1): a_n(x)a_{n+1}(x)\,>\, \Phi\big(q_n(x)\big) {\text{ for infinitely many}}\ n\in \mathbb N\Big\}. \end{equation*} The Lebesgue measure and Hausdorff measure of the set $D^c(\psi)$ were computed by Kleinbock-Wadleigh in \cite[Theorem 1.8]{KleinbockWadleigh} and Hussain-Kleinbock-Wadleigh-Wang \cite{HKWW}, respectively.
\begin{thm}[Kleinbock-Wadleigh, 2018] \label{KW} Let $\psi$ be any non-increasing positive function and $\Phi$ as in \eqref{twopsis} non-decreasing such that $t\psi(t)<1$ for all $t$ large enough. Then \begin{equation*}\label{hdsum} \mathcal H^1(D^c(\psi))=\begin {cases} 0 \ & {\rm if } \quad\sum\limits_{t}\frac{\log{\Phi}(t)}{t{\Phi}(t)} \, < \, \infty, \\[2ex] 1 \ & {\rm if } \quad \sum\limits_{t}\frac{\log{\Phi}(t)}{t{\Phi}(t)} \, = \, \infty. \end {cases}\end{equation*} \end{thm} \begin{thm}[HKWW, 2018]\label{dicor} Let $\psi$ be a non-increasing positive function with $t\psi(t)<1$ for all large $t$. Then for any $0\leq s<1$ \begin{equation*}\label{hdsum} \mathcal H^s(D^c(\psi))=\begin {cases} 0 \ & {\rm if } \quad \sum\limits_{t} {t}\left(\frac{1}{{t^2\Phi({t})}} \right)^s \, < \, \infty; \\[2ex] \infty \ & {\rm if } \quad \sum\limits_{t} {t}\left(\frac{1}{{t^2\Phi({t})}} \right)^s \, = \, \infty. \end {cases}\end{equation*} Consequently, the Hausdorff dimension of the set $D^c(\psi)$ is given by $$ \hdim D^c(\psi)=\frac{2}{2+\tau}, \ {\text{where}}\ \tau=\liminf_{t\to \infty}\frac{\log \Psi(t)} {\log t}. $$ \end{thm} We refer the reader to \cite{BHS} for a generalized Hausdorff measure criterion for the set of Dirichlet non-improvable numbers for a large class of dimension functions. Bearing in mind the close connection of studying the growth of consecutive partial quotients, natural question is to consider the growth of the product of an arbitrary block of consecutive partial quotients. To this end, consider the set \begin{equation*}\label{mainset} {\mathcal E_{\mathbf t}}(\Psi):=\left\{x\in[0, 1): \prod_{i=1}^ma^{t_i}_{n+i}(x)\ge \Psi(n) \ {\text{for infinitely many}} \ n\in \mathbb N \right\}. \end{equation*} Huang-Wu-Xu \cite{HuWuXu} developed the metrical theory for the set $\mathcal E_m^{\mathbf 1} (\Psi)$, that is, when $t_1=t_2=\cdots=t_m=1$. They proved the Lebesgue measure and Hausdorff dimension results. 
\begin{thm}[Huang-Wu-Xu, 2019]\label{HWXLeb} Let $\Psi:\mathbb N\to[2, \infty)$ be a positive function. Then \begin{equation*} \mathcal L\left(\mathcal D^\mathbf 1_m(\Psi)\right)= \begin{cases} 0\ & \mathrm{if}\quad \sum\limits_{n}^\infty\frac {\log^{m-1}\Psi(n)}{\Psi(n)} \,<\,\infty ; \\[2ex] 1 \ & \mathrm{if}\quad \sum\limits_{n}^\infty\frac {\log^{m-1}\Psi(n)}{\Psi(n)}\,=\,\infty . \end{cases} \end{equation*} \end{thm} Note that Theorem \ref{HWXLeb} follows from our Theorem \ref{mainLeb} (by choosing $t_1=t_2=\ldots=1$). Note also that Theorems \ref{KW} (by choosing $m=2, t_1=t_2=1$) is also a consequence of our theorem. This consequence becomes apparent by combining our result with a result of Khintchine that there exists a constant $C>1$ such that for almost all $x\in[0, 1)$, $q_n(x)\leq C^n$ for all $n\gg1$. The Hausdorff dimension of the set $\mathcal D^\mathbf1_m(\Psi)$ for any $m$ was established by Huang-Wu-Xu \cite{HuWuXu}. \begin{thm}[Huang-Wu-Xu, 2019]\label{HWXthm} For any function $\Psi :\mathbb{N}\rightarrow (1,\infty)$ with ${\displaystyle \lim_{n\to \infty}} \Psi(n)=\infty$ and $$\log B=\liminf\limits_{n\rightarrow \infty }\frac{\log \Psi (n)}{n} \ {\rm and} \ \log b=\liminf\limits_{n\rightarrow \infty }\frac{\log \log \Psi (n)}{n}.$$ Then \begin{equation*} \dim _{\mathrm{H}}\mathcal{D}^\mathbf 1_m(\Psi )=\left\{ \begin{array}{ll}1,& \mathrm{if}\ \ B=1, \\ [3ex] \inf \{s\geq 0:\mathsf{P}\left( T,-f_m(s)\log B-s\log|T^{\prime }|\right) \leq 0\}& \mathrm{if}\ \ 1<B< \infty; \\ [3ex] \frac{1}{1+b} & \mathrm{if}\ \ B=\infty, \end{array} \right. \end{equation*} where $f_m$ is given by the following iterative formula $$f_1(s)=s, \quad f_{k+1}(s)=\frac{sf_k(s)}{1-s+f_k(s)}, \ k\geq 1.$$ \end{thm}} \section{Preliminaries and auxiliary results}\label{S2} For completeness we give a brief introduction to Hausdorff measures and dimension. For further details we refer to the beautiful texts \cite{BernikDodson, Falconer_book}. 
\subsection{Hausdorff measure and dimension}\label{HM}\ Let $s>0$ and let $E\subset \mathbb R^n$. Then, for any $\rho>0$, a countable collection $\{B_i\}$ of balls in $\mathbb R^n$ with diameters $\mathrm{diam} (B_i)\le \rho$ such that $E\subset \bigcup_i B_i$ is called a $\rho$-cover of $E$. Let \[ \mathcal H_\rho^s(E)=\inf \sum_i \mathrm{diam}(B_i)^s, \] where the infimum is taken over all possible $\rho$-covers $\{B_i\}$ of $E$. It is easy to see that $\mathcal H_\rho^s(E)$ increases as $\rho$ decreases and so approaches a limit as $\rho \rightarrow 0$. This limit could be zero or infinity, or take a finite positive value. Accordingly, the \textit{$s$-Hausdorff measure $\mathcal H^s$} of $E$ is defined to be \[ \mathcal H^s(E)=\lim_{\rho\to 0}\mathcal H_\rho^s(E). \] It is easily verified that Hausdorff measure is monotonic and countably sub-additive, and that $\mathcal H^s(\varnothing)=0$. Thus it is an outer measure on $\mathbb R^n$. When $s=n$, $\mathcal H^n$ coincides with standard Lebesgue measure on $\mathbb R^n$. For any subset $E$ one can verify that there exists a unique critical value of $s$ at which $\mathcal H^s(E)$ `jumps' from infinity to zero. The value taken by $s$ at this discontinuity is referred to as the \textit{Hausdorff dimension of $E$} and is denoted by $\hdim E $; i.e., \[ \hdim E :=\inf\left\{s\in \mathbb R_+\;:\; \mathcal H^s(E)=0\right\}. \] Computing the Hausdorff dimension of a set is typically accomplished in two steps: obtaining the upper and lower bounds separately. Upper bounds can often be obtained by finding appropriate coverings.
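The `jump' of $\mathcal H^s$ at the critical exponent can be observed numerically on a standard example independent of the sets studied in this paper: the middle-third Cantor set is covered at level $n$ by $2^n$ intervals of diameter $3^{-n}$, so the cover sums $\sum_i \mathrm{diam}(B_i)^s=2^n3^{-ns}$ blow up for $s<\log 2/\log 3$ and vanish for $s>\log 2/\log 3$. A minimal Python sketch (all names are illustrative):

```python
import math

def cantor_cover_sum(s: float, n: int) -> float:
    # Sum of diam(B_i)^s over the level-n natural cover of the
    # middle-third Cantor set: 2^n intervals of diameter 3^(-n).
    return (2 ** n) * (3.0 ** (-n)) ** s

s_crit = math.log(2) / math.log(3)  # critical exponent = Hausdorff dimension

# Below the critical exponent the cover sums diverge; above it they decay,
# mirroring the jump of H^s from infinity to zero.
assert cantor_cover_sum(s_crit - 0.1, 100) > 1e3
assert cantor_cover_sum(s_crit + 0.1, 100) < 1e-3
```

Of course, such cover sums only certify upper bounds for the Hausdorff dimension; lower bounds require tools such as the mass distribution principle used later in this paper.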
When dealing with a limsup set, one {usually applies} the Hausdorff measure version of the famous Borel-Cantelli lemma (see Lemma 3.10 of \cite{BernikDodson}): \begin{pro}\label{bclem} Let $\{B_i\}_{i\ge 1}$ be a sequence of measurable sets in $\mathbb R$ and suppose that $$\sum_i \mathrm{diam}(B_i)^s \, < \, \infty.$$ Then $$\mathcal H^s({\limsup_{i\to\infty}B_i})=0.$$ \end{pro} \subsection{Continued fractions and Diophantine approximation} \ Suppose that $x \in [0,1)\smallsetminus \mathbb Q$ has continued fraction expansion $x= [a_1, a_2,\dots]$, where $a_n(x)=\lfloor 1/T^{n-1}(x)\rfloor$ for each $n\ge 1$. Recall the sequences $p_n= p_n(x)$, $q_n= q_n(x)$ defined by the recursive relation $(p_{-1},q_{-1}) = (1,0)$, $(p_{0},q_{0}) = (0,1)$, and \begin {equation}\label{recu} p_{n+1}=a_{n+1}(x)p_n+p_{n-1}, \ \ q_{n+1}=a_{n+1}(x)q_n+q_{n-1},\ \ n\geq 0. \end {equation} Thus $p_n=p_n(x), q_n=q_n(x)$ are determined by the partial quotients $a_1,\dots,a_n$, so we may write \linebreak $p_n=p_n(a_1,\dots, a_n)$, $q_n=q_n(a_1,\dots,a_n)$. When it is clear which partial quotients are involved, we denote them by $p_n, q_n$ for simplicity. For any integer vector $(a_1,\dots,a_n)\in \mathbb N^n$ with $n\geq 1$, write \begin{equation*}\label{cyl} I_n(a_1,\dots,a_n):=\left\{x\in [0, 1): a_1(x)=a_1, \dots, a_n(x)=a_n\right\} \end{equation*} for the corresponding `cylinder of order $n$', i.e.\ the set of all real numbers in $[0,1)$ whose continued fraction expansions begin with $(a_1, \dots, a_n).$ We will use $I_n(x)$ to denote the $n$th order cylinder containing $x$. We will frequently use the following well-known properties of continued fraction expansions. They are explained in the standard texts \cite{IosKra_book, Khi_63}. \begin{pro}\label{pp3} For any {positive} integers $a_1,\dots,a_n$, let $p_n=p_n(a_1,\dots,a_n)$ and $q_n=q_n(a_1,\dots,a_n)$ be defined recursively by \eqref{recu}.
{Then:} \begin{enumerate}[label={\rm (\subscript{\rm P}{\arabic*})}] \item \begin{eqnarray*} I_n(a_1,a_2,\dots,a_n)= \left\{ \begin{array}{ll} \left[\frac{p_n}{q_n}, \frac{p_n+p_{n-1}}{q_n+q_{n-1}}\right) & {\rm if }\ \ n\ {\rm{is\ even}};\\ \left(\frac{p_n+p_{n-1}}{q_n+q_{n-1}}, \frac{p_n}{q_n}\right] & {\rm if }\ \ n\ {\rm{is\ odd}}. \end{array} \right. \end{eqnarray*} {\rm Thus, its length is given by} \begin{equation*}\label{lencyl} \frac{1}{2q_n^2}\leq |I_n(a_1,\ldots,a_n)|=\frac{1}{q_n(q_n+q_{n-1})}\leq \frac1{q_n^2}, \end{equation*} {\rm since} $$ p_{n-1}q_n-p_nq_{n-1}=(-1)^n, \ {\rm for \ all }\ n\ge 1. $$ \item For any $n\geq 1$, $q_n\geq 2^{(n-1)/2}$ and $$ 1\le \frac{q_{n+m}(a_1,\dots, a_n, b_1,\dots, b_m)}{q_n(a_1,\dots, a_n)\cdot q_m(b_1,\dots,b_m)}\le 2. $$ \item $$\prod_{i=1}^na_i\leq q_n\leq \prod_{i=1}^n(a_i+1)\leq 2^n\prod_{i=1}^na_i.$$ \item there exists a constant $K>1$ such that for almost all $x\in [0,1)$, $$ q_n(x)\le K^n, \ {\text{for all $n$ sufficiently large}}. $$ \end{enumerate} \end{pro} Let $\mu_G$ be the Gauss measure given by $$ d\mu_G=\frac{1}{\log 2}\cdot \frac{1}{(1+x)}\,dx. $$ It is known that $\mu_G$ is $T$-invariant; clearly it is equivalent to Lebesgue measure $\mathcal{L}$. The next proposition concerns the position of a cylinder in $[0,1)$. \begin{pro}[Khintchine, 1963]\label{pp2} Let $I_n=I_n(a_1,\dots, a_n)$ be a cylinder of order $n$, which is partitioned into sub-cylinders $\{I_{n+1}(a_1,\dots,a_n, a_{n+1}): a_{n+1}\in \mathbb N\}$. When $n$ is odd, these sub-cylinders are positioned from left to right, as $a_{n+1}$ increases from 1 to $\infty$; when $n$ is even, they are positioned from right to left. \end{pro} The following result is due to {\L}uczak \cite{Luczak}. 
\begin{lem}[{\L}uczak, 1997]\label{lemb}For any $b, c>1$, the sets \begin{align*} &\left\{x\in[0, 1): a_{n}(x)\ge c^{b^n}\ {\text{for infinitely many}} \ n\in \mathbb N \right\},\\ &\left\{x\in[0, 1): a_{n}(x)\ge c^{b^n}\ {\text{for all }} \ n\geq 1 \right\}, \end{align*} have the same Hausdorff dimension $\frac1{b+1}$. \end{lem} \subsection{Pressure function and Hausdorff dimension} \label{Pressure Functions}\ In this section, we recall the fact that, in the continued fraction system, the pressure function with a continuous potential can be approximated by the pressure functions restricted to sub-systems. For more thorough results on pressure functions in infinite conformal iterated function systems, see Hanus-Mauldin-Urba\'{n}ski \cite{H02}, Mauldin-Urba\'{n}ski \cite{M96,MU99}, or their monograph \cite{MU03}. Let $\mathbb A$ be a finite or infinite subset of $\mathbb N$. Define $$X_{\mathbb A}=\{x\in [0,1):\ a_n(x)\in \mathbb A,\ \text{for all} \ n\geq 1\}.$$ Then $(X_{\mathbb A},T)$ is a sub-system of $([0,1),T).$ Let $\phi: [0,1)\rightarrow \mathbb R$ be a real function. The pressure function restricted to the system $(X_{\mathbb A},T)$ with the potential $\phi$ is defined by \begin{equation}\label{pre}P_{\mathbb A}(T, \phi)=\lim_{n\to \infty}\frac{1}{n}\log \sum_{(a_1,\ldots,a_n)\in \mathbb A^n}\sup_{x\in X_{\mathbb A}}e^{S_n\phi([a_1,\ldots,a_n+x])} \ ,\end{equation} where $S_n\phi(x)$ denotes the ergodic sum $\phi(x)+\cdots+\phi(T^{n-1}x)$. When $\mathbb A=\mathbb N$, we denote $P_{\mathbb N}(T,\phi)$ by $P(T,\phi)$, which is the pressure function that appeared in the introduction. We will also use the notation $$ {\text{Var}}_n(\phi):=\sup\big\{|\phi(x)-\phi(y)|:\ I_n(x)=I_n(y)\big\} $$ for the $n$th variation of $\phi.$ The existence of the limit in the definition of the pressure function \eqref{pre} is guaranteed by the following proposition \cite{M96}.
\begin{pro}[Mauldin-Urba\'nski, 1999]\label{ppp3} Let $\phi: [0,1)\rightarrow\mathbb R$ be a real function with ${\text{Var}}_1(\phi)<\infty$ and ${\text{Var}}_n(\phi)\rightarrow 0$ as $n\rightarrow \infty$. Then the limit defining $P_{\mathbb A}(T,\phi)$ exists, and the value of $P_{\mathbb A}(T,\phi)$ remains the same even without taking the supremum over $x\in X_{\mathbb A}$ in \eqref{pre}. \end{pro} Henceforth, without causing any confusion, when we need to take a point $y$ from a cylinder $I_n(a_1,\ldots,a_n),$ we always take it as $y=p_n/q_n=[a_1,\ldots,a_n].$ Because all the potentials in the sequel satisfy the condition in Proposition \ref{ppp3}, the pressure function can be expressed as $$P_\mathbb A(T, \phi)=\lim_{n\to \infty}\frac{1}{n}\log \sum_{(a_1,\ldots,a_n)\in \mathbb A^n}e^{S_n\phi([a_1,\ldots,a_n])} .$$ The following proposition states that in the system of continued fractions the pressure function has a continuity property when the system $([0,1),T)$ is approximated by its sub-systems $(X_{\mathbb A},T).$ For the proof, see \cite{H02} or \cite{LWWX}. \begin{pro}[Hanus-Mauldin-Urba\'nski, 2002]\label{pp4}~Let $\phi:[0,1)\rightarrow \mathbb{R}$ be a real function with ${\text{Var}}_1(\phi)<\infty$ and ${\text{Var}}_n(\phi)\rightarrow 0$ as $n\rightarrow \infty.$ Then \begin{enumerate} \item[\rm (1)] for any $a\in \mathbb{R}$ and $\mathbb A\subset \mathbb{N}, P_{\mathbb A}(T,\phi+a)=P_{\mathbb A}(T,\phi)+a;$ \smallskip \item[\rm (2)] $P(T,\phi)=P_{\mathbb{N}}(T,\phi)=\sup\{P_{\mathbb A}(T,\phi): \mathbb A ~\text{is a finite subset of}~ \mathbb{N}\}.$ \end{enumerate} \end{pro} Now we specify the potential which will be related to the dimension of the set ${\mathcal E_{\mathbf t}}(\Psi)$ when $\Psi(n)=B^n$ for all $n\ge 1$. Let the function $f_{{t_0,t_1}}$ be as in \eqref{ft1t2}. Then for any $s\ge 0$, take the potential to be $$ \psi(x)=-s\log |T'(x)|-f_{{t_0,t_{1}}}(s)\log B.
$$ For any subset $\mathbb A\subset \mathbb N$, define \begin{align*} & s^{(2)}(\mathbb A, B)=\inf\Big\{s\geq 0:P_{\mathbb A}(T,-s\log |T'(x)|-f_{{t_0, t_{1}}}(s)\log B)\leq 0\Big\}, \label{pre4}\\ &s^{(2)}_{n}(\mathbb A, B)=\inf\Big\{s\geq 0:\sum\limits_{a_1,\ldots,a_{n}\in \mathbb A} \left(\frac{1}{B^{nf_{{t_0,t_{1}}}(s)}}\right)\left(\frac{1}{q_n^{2}(y)}\right)^s\leq 1\Big\} \end{align*} where $y\in I_n(a_1,\ldots,a_n).$ If $\mathbb A$ is a finite subset of $\mathbb N$, then substituting $s^{(2)}(\mathbb A, B)$ for $s$ in the pressure function $P_{\mathbb A}$ above (respectively, $s^{(2)}_{n}(\mathbb A, B)$ in the summation), we get an equality. For simplicity, \begin{itemize}\item when $\mathbb A=\mathbb N$, write $s^{(2)}(B)$ for $s^{(2)}(\mathbb N, B)$ and $s^{(2)}_{n}(B)$ for $s^{(2)}_{n}(\mathbb N, B)$; \item when $\mathbb A=\{1, 2, \dots, M\}$ for some integer $M\ge 1$, write them as ~$s^{(2)}(M, B)$ and $s^{(2)}_{n}(M, B)$ respectively. \end{itemize} Applying Proposition \ref{pp4}(2) to the potential $\psi,$ one has \begin{cor}\label{cor5} $$s^{(2)}(B)=s^{(2)}(\mathbb N, B)=\sup\{s^{(2)}(\mathbb A, B): \mathbb A ~\text{is a finite subset of}~ \mathbb{N}\}.$$ \end{cor} Then it follows from the definition of the pressure function and Corollary \ref{cor5} that \begin{pro}\label{pp6}~For any $M\in \mathbb{N}$, we have $$\lim\limits_{n\rightarrow \infty}s^{(2)}_{n}(M, B)=s^{(2)}(M, B),~~\lim\limits_{n\rightarrow \infty}s^{(2)}_{n}(B)=s^{(2)}(B), ~~\lim\limits_{M\rightarrow \infty}s^{(2)}(M, B)=s^{(2)}(B).$$ \end{pro} \begin{pro} \label{tb} As a function of $B\in (1, \infty)$, $s^{(2)}(B)$ is continuous and $$\lim_{B\to 1}s^{(2)}(B)=1, \quad \lim_{B\to \infty}s^{(2)}(B)=\frac12.$$ \end{pro} \begin{proof}The proof is almost identical to that in \cite{WaWu08}, so we omit the details.
\end{proof} \section{Proof of Theorem \ref{mainLeb}} We first recall a dynamical Borel-Cantelli lemma from the paper of Kleinbock-Wadleigh \cite[Lemma 3.5]{KleinbockWadleigh}, which is essentially taken from the work of Philipp \cite{Philipp} and follows from the effective mixing property of $T$. \begin{lem}\label{KWlemma} Fix $k\in \mathbb N$. Suppose $\{A_n: n\ge 1\}$ is a sequence of sets such that for each $n\ge 1$, the set $A_n$ is a {countable} union of sets of the form $$ E_{\bold{r}}=\left\{x\in [0, 1]\smallsetminus \mathbb Q: a_1(x)=r_1, \ldots, a_k(x)=r_k\right\}.$$ Then, for almost all $x$ or for almost no $x$, respectively, one has $T^nx\in A_n$ for infinitely many $n\in \mathbb N$, according to whether the series ${\sum_{n= 1}^\infty}\mu_G(A_n)$ diverges or converges. \end{lem} For each $n\geq 1$ and fixed $m\geq 1$, define \begin{equation}\label{an} A_n=\left\{x\in [0,1): \prod_{i=1}^ma_{i}^{t_{{i-1}}}(x)\ge \Psi(n)\right\}. \end{equation} The set $A_n$ can further be written as the union over a collection of $m$-th order cylinders as $$ A_n=\bigcup_{(a_1,\ldots, a_{m})\in \mathbb N^{m}: \ {a_1^{t_0}\cdots a_{m}^{t_{m-1}}}\ \geq \Psi(n)}I_m(a_1, \ldots, a_m). $$ To apply Lemma \ref{KWlemma}, we only need to estimate the Lebesgue measure $\mathcal{L}$ of $A_n$, which is equivalent to its Gauss measure $\mu_G$. It follows from Proposition \ref{pp3} that $$\mathcal L (A_n)\asymp \sum_{{a_1^{t_0}\cdots a_{m}^{t_{m-1}}}\ge \Psi(n)}\prod_{i=1}^m\frac{1}{a_i(a_i+1)},$$ where the constant involved in $\asymp$ depends only on $m$. \begin{lem}\label{LebesgueAn} Let ${t_0, \dots, t_{m-1} }$ be an $m$-tuple of positive real numbers, and define $$ t_{\max}=\max\{t_i: {0\le i\le m-1}\}, \ \ell=\# \{i: t_i=t_{\max}\}.
$$ Then for any $m\ge 1$ and $g\ge 1$, we have \begin{equation}\label{f3} \sum_{{a_1^{t_0}\cdots a_{m}^{t_{m-1}}}\ge g}\ \prod_{i=1}^m\frac{1}{a_i(a_i+1)}\asymp \frac{(\log g)^{\ell-1}}{g^{\frac1{t_{\max}}}}, \end{equation} where the constant implied in $\asymp$ depends on $m$ but not on $g$. \end{lem} \begin{proof} The summation in (\ref{f3}) does not depend upon the ordering of the partial quotients, therefore without loss of generality we assume that ${t_0\ge \cdots \ge t_{m-1}} $ and then $$t_{\max}=t_{{0}}, \ {\text{and}}\ \ \ell=\# \{i: t_i=t_{{0}}\}.$$ We prove this lemma by induction on $\ell\ge 1$. \begin{itemize} \item[(I)] When $\ell=1$, we show that (\ref{f3}) holds for all $m\ge 1$. Write $d=m-\ell$. Then it suffices to show (\ref{f3}) holds for all $d\ge 0$. This is done by induction on $d$. \begin{itemize} \item [(Ia)] When $d=0$, i.e. $m=1$, it is {easy} to see that (\ref{f3}) holds. \item[(Ib)] Assume that the result holds for $d-1$; we show that (\ref{f3}) still holds for $d$. Notice that \begin{align*} \sum_{{a_1^{t_0}\cdots a_{m}^{t_{m-1}}}\ge g}\ \prod_{i=1}^m\frac{1}{a_i(a_i+1)} &\asymp \sum_{a_m^{t_{{m-1}}}\ge g}\frac{1}{a_m(a_m+1)}+\sum_{1\le a_m^{t_{{m-1}}}\le g}\frac{1}{a_m^2}\sum_{a_1^{t_0}\cdots a_{m-1}^{t_{{m-2}}}\ge g/a_m^{t_{{m-1}}}}\prod_{i=1}^{m-1}\frac{1}{a_i(a_i+1)}\\ & \asymp g^{-\frac{1}{t_{{m-1}}}}+\sum_{1\le a_m^{t_{{m-1}}}\le g}\frac{1}{a_m^2}\cdot \left(\frac{a_m^{t_{{m-1}}}}{g}\right)^{\frac{1}{t_{{0}}}}\ {\text{(by induction on {the} inner summation)}}\\ &\asymp g^{-\frac{1}{t_{{m-1}}}}+g^{-\frac{1}{t_{{0}}}}\asymp g^{-\frac{1}{t_{{0}}}}, \end{align*} where the second last quantity is obtained by noticing that ${t_{m-1}/t_0}<1$, so the summation over $a_m$ converges. \end{itemize} \medskip \item[(II)] Assume that $\ell\ge 2$. As for (I) above, we use induction on $d=m-\ell$. \begin{itemize} \item [(IIa)]When $d=0$, i.e. 
$m=\ell$ and $t_i=t$ for all ${0\le i\le m-1}$, {we have} \begin{align*} &\sum_{{a_1^{t_0}\cdots a_{m}^{t_{m-1}}}\ge g}\ \prod_{i=1}^m\frac{1}{a_i(a_i+1)}\\ &\asymp \sum_{a_m^{t}\ge g}\frac{1}{a_m(a_m+1)}+\sum_{1\le a_m^{t}\le g}\frac{1}{a_m^2}\sum_{a_1^{t}\cdots a_{m-1}^{t}\ge g/a_m^{t}}\prod_{i=1}^{m-1}\frac{1}{a_i(a_i+1)}\\ & \asymp g^{-1/t}+\sum_{1\le a_m^{t}\le g}\frac{1}{a_m^2}\cdot \frac{(\log \frac{g}{a_m^t})^{\ell-2}}{(\frac{g}{a_m^t})^{1/t}},\ \ \ \ \ \ \ {\text{(by induction on inner summation)}}\\ &\asymp \frac{1}{g^{1/t}}+\int_1^{g^{1/t}} \frac{1}{x^2} \cdot \frac{(\log \frac{g^{1/t}}{x})^{\ell-2}}{\frac{g^{1/t}}{x}}\,dx,\ \ {\text{(change variable $y=\frac{g^{1/t}}{x}$)}} \\ &\asymp \frac{1}{g^{1/t}}+\int_1^{g^{1/t}}\frac{1}{g^{1/t}}\cdot \frac{(\log y)^{\ell-2}}{y}\,dy \asymp \frac{1}{g^{1/t}}+\frac{(\log g)^{\ell-1}}{g^{1/t}}. \end{align*} \item [(IIb)] Assume that the result holds for $d-1$. We show that (\ref{f3}) still holds for any $d$. Since $\ell$ is fixed, it means that $$ {t_0= \cdots=t_{\ell-1}>t_{\ell}\ge \cdots \ge t_{m-1}}. $$ So, \begin{equation*}\label{f4} \#\{i\ge 1: t_i=t_1\}=\ell-1, \ {\text{and}}\ t_0=t_1. \end{equation*}Notice that \begin{align*} I:= &\sum_{{a_1^{t_0}\cdots a_{m}^{t_{m-1}}}\ge g}\ \prod_{i=1}^m\frac{1}{a_i(a_i+1)}\\ &\asymp { \sum_{a_1^{t_0}\ge g}\frac{1}{a_1(a_1+1)}+\sum_{1\le a_1^{t_0}\le g}\frac{1}{a_1^2}\sum_{a_2^{t_1}\cdots a_{m}^{t_{m-1}}\ge g/a_1^{t_0}}\ \prod_{i=2}^{m}\frac{1}{a_i(a_i+1)}}.\end{align*} For the inner summation, the induction hypothesis is applied to give {\begin{align*} I & \asymp \frac{1}{g^{\frac{1}{t_0}}}+\sum_{1\le a_1^{t_0}\le g} \frac{1}{a_1^2}\cdot \frac{\left(\log \frac{g}{a_1^{t_0}}\right)^{\ell-2}}{\left(\frac{g}{a_1^{t_0}}\right)^{1/t_1}} \\ & \asymp \frac{1}{g^{\frac{1}{t_0}}}+\sum_{1\le a_1^{t_0}\le g}\frac{1}{a_1^2}\cdot \frac{\left(\log \frac{g^{1/t_0}}{a_1}\right)^{\ell-2}}{\left(\frac{g}{a_1^{t_0}}\right)^{1/t_0}}\ \ \ \ {\text{(by $t_0=t_1$)}}.
\end{align*}} So we get the same formula as in case (IIa). \end{itemize} \end{itemize} \end{proof} {Now observe that $${\mathcal E_{\mathbf t}} (\Psi) = \left\{x\in[0,1): T^{n-1}x\in A_n\text{ for infinitely many }n\in \mathbb N\right\},$$ where $A_n$ are as in \eqref{an}.} By combining Lemmas \ref{KWlemma} and \ref{LebesgueAn}, we conclude that $\mathcal L \big({\mathcal E_{\mathbf t}} (\Psi)\big)$ is zero or full according to the convergence or divergence of the series $$\sum\limits_{n=1}^\infty\mathcal L(A_n)\asymp\sum\limits_{n=1}^\infty\frac {\big(\log\Psi(n)\big)^{\ell-1}}{\Psi(n)^{1/t_{\max}}},$$ where $ t_{\max}$ and $ \ell$ are as in \eqref{tmax}. {This finishes the proof of Theorem \ref{mainLeb}.} \ignore{\begin{proof}[Proof of the Corollary \ref{KKW}] Since $q_n(x)\ge b^n$ for all $x\in [0,1]$ and all $n\ge 2$ if we choose $1<b<2^{1/4}$. Then, we have $$ F_m^{\mathbf t} (\Psi)\subset \left\{x\in [0, 1]: \prod_{i=1}^ma_{n+i}^{t_i}(x) \ge \Psi(b^{n}), \ {\text{for i.m. }}n\in \mathbb N\right\}.$$ Thus $$ \mathcal{L}\big(F_m^{\mathbf t} (\Psi)\big)=0, \ {\text{if}}\ \sum_{n=1}^{\infty}\frac{\big(\log \Psi(b^n)\big)^{\ell-1}}{\Psi(b^n)^{1/t_{\max}}}\asymp\sum_{n=1}^{\infty}\frac{\big(\log \Psi(n)\big)^{\ell-1}}{n\Psi(n)^{1/t_{\max}}}<\infty. $$ Where in the last assertion we have used the Cauchy condensation principle. For the divergence case, using the last item in Proposition \ref{pp3}, one has \begin{align*} \mathcal{L}\big(F_m^{\mathbf t} (\Psi)\big)&=\mathcal{L}\left(F_m^{\mathbf t} (\Psi)\cap \{x\in[0, 1]: q_{n+1}(x)\le K^n, \ {\text{for all }} n\gg 1\}\right)\\ &\geq \mathcal{L}\Big( \left\{x\in [0, 1]: \prod_{i=1}^ma_{n+i}^{t_i}(x)\ge \Psi(K^{n}), \ {\text{for i.m. }}n\in \mathbb N\right\}\Big).
\end{align*} So, we have $$ \mathcal{L}\big(F_m^{\mathbf t} (\Psi)\big)=1, \ {\text{if}}\ \sum_{n=1}^{\infty}\frac{\big(\log \Psi(K^n)\big)^{\ell-1}}{\Psi(K^n)^{1/t_{\max}}}\asymp \sum_{n=1}^{\infty}\frac{\big(\log \Psi(n)\big)^{\ell-1}}{n\Psi(n)^{1/t_{\max}}}=\infty. $$ \end{proof}} \section{Hausdorff dimension for $B=1$ or $B=\infty$} In this section we prove Theorem \ref{BHWthmeasy} by considering the two cases: \begin{itemize} \item $B=1$; \item $B=\infty$ (for this case, there are three subcases $b=1$, $1<b<\infty$, and $b=\infty$). \end{itemize} We start off with the easier case. \subsection{$B=1$} It is trivial that $$ {\mathcal E_{\mathbf t}}(\Psi)\supset \left\{x\in[0, 1): {a^{t_0}_{n}}(x)\ge \Psi(n)\ {\text{for infinitely many}} \ n\in \mathbb N \right\}.$$ It follows from Theorem \ref{WaWu} that the set on the right-hand side has full Hausdorff dimension. Hence $\hdim {\mathcal E_{\mathbf t}}(\Psi)=1$ when $B=1$. \subsection{$B=\infty$} There are three subcases. \subsubsection{$1<b<\infty$}\ By the definition of $b$, for any $c<b$, $$\frac{\log\log \Psi(n)}{n}\geq \log c, \ \text{i.e.} \ \Psi(n)\geq e^{c^n}$$ for all sufficiently large $n$, which we write as $n\gg1$.
Thus for any $x\in {\mathcal E_{\mathbf t}}(\Psi)$, there are infinitely many $n$ such that $$ {\prod_{i=0}^{m-1}a^{t_i}_{n+i}(x)}\ge e^{c^n},$$ and hence for at least one index ${0\leq i\leq m-1}$ one has $a^{t_i}_{n+i}(x)\ge e^{\frac{1}{{m}}\cdot c^n}.$ Thus $$ {\mathcal E_{\mathbf t}}(\Psi)\subset { \bigcup_{i=0}^{m-1} \left\{x\in[0, 1): a^{t_i}_{n+i}(x)\ge e^{\frac{1}{m}\cdot c^n}\ {\text{for i.m.}} \ n\in \mathbb N \right\}}. $$ It follows from Lemma \ref{lemb} that each of the sets on the right-hand side has Hausdorff dimension $\frac{1}{1+c}$, irrespective of the $t_i$'s. Hence $\hdim {\mathcal E_{\mathbf t}}(\Psi)\leq \frac{1}{1+b}$ by the arbitrariness of $c<b$. On the other hand, by the definition of $b$, it follows that for any $d>b$, $$ \Psi(n)\le e^{d^n}, \ {\text{for infinitely many}}\ n\in \mathbb N. $$ Thus one has $$ {\mathcal E_{\mathbf t}}(\Psi)\supset \left\{x\in[0, 1): {a^{t_{0}}_{n}}(x)\ge e^{d^n}\ {\text{for all}} \ n\in \mathbb N \right\}, $$ and from Lemma \ref{lemb} we {conclude that} the Hausdorff dimension of the set on the right-hand side is $1/(1+d)$. Since $d>b$ was arbitrary, this gives $\hdim {\mathcal E_{\mathbf t}}(\Psi)\ge \frac{1}{1+b}$. \subsubsection{$b=\infty$} This case follows readily from the upper bound argument above, which now gives $$\hdim{\mathcal E_{\mathbf t}}(\Psi)\leq \lim_{c\to\infty}\frac1{c+1}= 0.$$ \subsubsection{$b=1$} In this case, for any $\epsilon>0$, $\Psi(n)\leq e^{(1+\epsilon)^n}$ for infinitely many $n$.
Then \begin{align*} {\mathcal E_{\mathbf t}}(\Psi)&\supset \left\{x\in[0, 1): {a^{t_0}_{n}}(x)\ge \Psi(n)\ {\text{for infinitely many}} \ n\in \mathbb N \right\}\\ & \supset \left\{x\in[0, 1): {a^{t_0}_{n}}(x)\ge e^{(1+\epsilon)^n}\ {\text{for all }} \ n\in \mathbb N \right\} .\end{align*} Hence by using Lemma \ref{lemb}, we have $$\hdim {\mathcal E_{\mathbf t}}(\Psi)\geq \lim_{\epsilon\to 0}\frac{1}{1+1+\epsilon}=\frac12.$$ For the upper bound, we note that $$ {\prod_{i=0}^{m-1}a^{t_i}_{n+i}(x)}\ge \Psi(n) \Longrightarrow {a^{t_i}_{n+i}(x)\geq \Psi(n)^{\frac1{m}}\quad \text{for some} \ 0\leq i\leq m-1}.$$ Hence $${\mathcal E_{\mathbf t}}(\Psi)\subseteq {\bigcup_{i=0}^{m-1}\left\{x\in [0, 1): a^{t_i}_{n+i}(x)\geq \Psi(n)^{\frac1{m}}, \ \text{for i.m.} \ n\in \mathbb N\right\}} .$$ Since $B=\infty$, for any ${A}>1$ one has $${\mathcal E_{\mathbf t}}(\Psi)\subseteq \left\{x\in [0, 1): a_{n}(x)\geq {A}^n, \ \text{for i.m.} \ n\in \mathbb N\right\}.$$ Hence by letting ${A}\to\infty$ and appealing to Proposition \ref{tb}, it follows that $\hdim {\mathcal E_{\mathbf t}}(\Psi)\leq 1/2$. \section{$\hdim {\mathcal E_{\mathbf t}}(B)$ for {$m=2$ and} $1<B<\infty$: an upper bound} In the next three sections we specialize to the case $m=2$, that is, take ${\mathbf t = (t_0,t_1)}$, and assume that $1 < B < \infty$. To prove Theorem \ref{BHWthm}, we first show that the Hausdorff dimension of the set \begin{equation}\label{d2b} {\mathcal E_{\mathbf t}}(B):=\left\{x\in[0, 1): a_{n}^{t_0}(x)a_{n+1}^{t_1}(x) \ge B^n \ {\text{for i.m.}} \ n\in \mathbb N\right\} \end{equation} is equal to $$\inf \left\{s\ge 0: P\left(T, -s\log |T'|-f_{t_0, t_{1}}(s)\log B\right)\le 0\right\},$$ where $f_{t_0, t_{1}}$ is as in \eqref{ft1t2}.
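Since the exponent function $f_{t_0,t_1}$ drives all the dimension formulas that follow, it may help to have it in executable form. The Python sketch below (function names and sample parameters are illustrative, not from the paper) evaluates $f_{t_0}(s)=s/t_0$ and $f_{t_0,t_1}(s)$ as in \eqref{ft1t2}, and checks two facts used later: $f_{t_0,t_1}(s)<f_{t_0}(s)$ for $1/2<s<1$, and, when $t_0=t_1=1$, agreement with the value $f_2(s)=s^2$ produced by the iterative formula of Huang-Wu-Xu.

```python
def f_t0(s: float, t0: float) -> float:
    # One-parameter exponent function: f_{t0}(s) = s / t0.
    return s / t0

def f_t0_t1(s: float, t0: float, t1: float) -> float:
    # Two-parameter exponent function:
    # f_{t0,t1}(s) = s f_{t0}(s) / ( t1 [ f_{t0}(s) + max(0, s/t1 - (2s-1)/t0) ] ).
    base = f_t0(s, t0)
    return s * base / (t1 * (base + max(0.0, s / t1 - (2 * s - 1) / t0)))

for s in (0.55, 0.7, 0.9):
    # f_{t0,t1}(s) < f_{t0}(s); equivalently A < B for the auxiliary parameter A.
    assert f_t0_t1(s, 2.0, 3.0) < f_t0(s, 2.0)
    # For t0 = t1 = 1 the formula collapses to s^2, matching the iterative
    # rule f_{k+1}(s) = s f_k(s) / (1 - s + f_k(s)) at k = 1.
    assert abs(f_t0_t1(s, 1.0, 1.0) - s * s) < 1e-12
```

Such a numerical check is of course no substitute for Lemma \ref{lemrec}, but it makes the reduction to the equal-parameter case easy to experiment with.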
We recall that according to Theorem \ref{WaWu}, the Hausdorff dimension of the one-parameter version of \eqref{d2b}, namely the set $$ \left\{x\in[0, 1): a_{n}(x)^{t_0}\ge B^n \ {\text{for i.m.}} \ n\in \mathbb N \right\}, $$ is given by $$ \inf \left\{s\ge 0: P\left(T, -s\log |T'|-\frac{s}{t_0}\log B\right)\le 0\right\}. $$ This gives the first function $f_{t_0}$ defined by $$ f_{t_0}(s)=\frac{s}{t_0}. $$ {Now take a positive number $A$ with $1<A<B$ and} define \begin{align*} {\mathcal E'_{\mathbf t}}({A})&:= \left\{x\in[0, 1): {a_{n}^{t_0}(x) \le A^n, \ a_{n+1}^{t_1}(x)\ge \frac{B^n}{a_{n}^{t_0}(x)}} \ {\text{for i.m.}} \ n\in \mathbb N \right\},\end{align*} and \begin{align*} {\mathcal E''_{\mathbf t}}({A})&:=\left\{x\in[0, 1): {a_{n}^{t_0}}(x) \ge A^n\ {\text{for i.m.}} \ n\in \mathbb N \right\}.\end{align*} Then \begin{equation*}\label{equpper} {\mathcal E_{\mathbf t}}(B)\,\subset\, {\mathcal E'_{\mathbf t}}({A})\cup {\mathcal E''_{\mathbf t}}({A}). \end{equation*} From {the} $m=1$ case above, the Hausdorff dimension of the set ${\mathcal E''_{\mathbf t}}({A})$ is given by \begin{equation}\label{f1} \hdim {\mathcal E''_{\mathbf t}}({A})=\inf\left\{s\ge 0: P\left(T, -s\log |T'|-f_{t_{{0}}}(s)\log A\right)\le 0\right\}:=\delta_1. \end{equation} Now we focus on the Hausdorff dimension of ${\mathcal E'_{\mathbf t}}({A})$. {Since it} readily follows from Theorem \ref{WaWu} that $ 1/2<\hdim{\mathcal E_{\mathbf t}}(B)<1$ for $1<B<\infty$, we consider the $s$-Hausdorff measure of ${\mathcal E'_{\mathbf t}}({A})$ only for $1/2<s<1$. Because of the limsup nature of ${\mathcal E'_{\mathbf t}}(A)$, there is a natural cover of it. For any integers $a_1,\dots, a_{n}$, define $$ J_{n}(a_1,\dots, a_{n}) :=\bigcup_{a_{n+1}: \ a_{n+1}^{t_{{1}}}\ge \frac{B^n}{a_{n}^{t_{{0}}}}}I_{n+1}(a_1,\dots, a_{n+1}).
$$ Then $$ {\mathcal E'_{\mathbf t}}({A})=\bigcap_{N=1}^{\infty}\bigcup_{n=N}^{\infty}\bigcup_{a_1,\dots, a_{n-1}\in \mathbb N}\ \bigcup_{a_{n}^{t_{{0}}}\le A^n}J_{n}(a_1,\dots, a_{n}). $$ By Proposition \ref{pp3}, one has \begin{align*} |J_{n}(a_1,\dots, a_{n})|\asymp & \left[{q_{n-1}^2a_{n}^2\left(\frac{B^n}{a_{n}^{{{t_0}}}}\right)^{\frac{1}{t_1}}}\right]^{-1}=\left[{q_{n-1}^2a_{n}^{2-\frac{{{t_0}}}{{{t_1}}}} B^{\frac{n}{t_{1}}}}\right]^{-1}, \end{align*} where the constant implied in $\asymp$ can be chosen as $4$. Thus the $s$-Hausdorff measure of ${\mathcal E'_{\mathbf t}}({A})$ can be estimated as \begin{align*} \mathcal{H}^s\big({\mathcal E'_{\mathbf t}}({A})\big)&\le \liminf_{N\to \infty}\sum_{n=N}^{\infty}\sum_{a_1,\dots, a_{n-1}\in \mathbb N}\ \sum_{a_{n}^{{{t_0}}}\le A^n} |J_{n}(a_1,\dots,a_{n})|^s\\ &\ll \liminf_{N\to \infty}\sum_{n=N}^{\infty}\sum_{a_1,\dots, a_{n-1}\in \mathbb N}\ \sum_{a_{n}^{{{t_0}}}\le A^n} \left[{q_{n-1}^2a_{n}^{2-\frac{{{t_0}}}{{{t_1}}}} B^{\frac{n}{t_{1}}}}\right]^{-s}\\ & \asymp \liminf_{N\to \infty}\sum_{n=N}^{\infty}\sum_{a_1,\dots, a_{n-1}\in \mathbb N}\ \sum_{a_{n}^{{{t_0}}}\le A^n}a_{n}^{-(2-\frac{{{t_0}}}{{{t_1}}})s} \left({q_{n-1}^2B^{\frac{n}{{{t_1}}}}}\right)^{-s}. \end{align*} Calculating the summation over $a_{n}$ gives that \begin{align*} \sum_{a_{n}^{{{t_0}}}\le A^n}a_{n}^{-\big(2-\frac{{{t_0}}}{{{t_1}}}\big)s} \ll \max\Big\{1, A^{\frac{n}{{{t_0}}}\cdot \big(1-s(2-\frac{{{t_0}}}{{{t_1}}})\big)}\Big\}=\max\Big\{1, A^{n \big(\frac{1-2s}{{{t_0}}}+\frac{s}{{{t_1}}}\big)}\Big\}. \end{align*} Thus $$ \mathcal{H}^s\big({\mathcal E'_{\mathbf t}}({A})\big)\le \liminf_{N\to \infty}\sum_{n=N}^{\infty}\sum_{a_1,\dots, a_{n-1}\in \mathbb N}\max\Big\{1, A^{n \big(\frac{1-2s}{{{t_0}}}+\frac{s}{{{t_1}}}\big)}\Big\}\cdot \left({q_{n-1}^2B^{\frac{n}{{{t_1}}}}}\right)^{-s}. 
$$ This gives an upper bound of the Hausdorff dimension of the set ${\mathcal E'_{\mathbf t}}({A})$ to be \begin{equation}\label{f2} \inf\left\{s\ge 0: P\bigg(T, -s\log |T'|+\max\left\{0, \frac{1-2s}{{{t_0}}}+\frac{s}{{{t_1}}}\right\}\log A-\frac{s}{t_{1}}\log B\bigg)\le 0\right\}:=\delta_2.\end{equation} Combining (\ref{f1}) and (\ref{f2}), one gets $$ \hdim {\mathcal E_{\mathbf t}}(B)\le \max\{\delta_1,\delta_2\}. $$ It would be reasonable to choose $A$ such that $\delta_1=\delta_2$, which would give the optimal upper bound of $\hdim {\mathcal E_{\mathbf t}}(B)$. Choose $A$ such that the potentials in $\delta_1$ and $\delta_2$ are equal, namely, $$ -f_{{{t_0}}}(s)\log A=\max\left\{0, \frac{1-2s}{{{t_0}}}+\frac{s}{{{t_1}}}\right\}\log A-\frac{s}{{{t_1}}}\log B, $$ or, equivalently, \begin{equation*}\label{f6} \log A=\frac{s}{{{t_1}}f_{{{t_0}}}(s)+\max\left\{0, s-\left(2s-1\right)\frac{{{t_1}}}{{{t_0}}} \right\}}\log B. \end{equation*} Then define $f_{{t_0, t_{1}}}$ such that $$ -f_{{t_0,t_{1}}}(s)\cdot \log B=-f_{{{t_0}}}(s)\cdot \log A, $$ giving that (note $s>1/2$)\begin{align}\label{f5} f_{{t_0, t_{1}}}(s)&=\frac{sf_{{{t_0}}} (s)}{{{t_1}}f_{{{t_0}}}(s)+\max\left\{0, s-(2s-1)\frac{{{t_1}}}{{{t_0}}}\right\}}=\frac{sf_{{{t_0}}}(s)}{{{t_1}}\left[f_{{{t_0}}}(s)+\max\{0, \frac{s}{{{t_1}}}-\frac{2s-1}{{{t_0}}}\}\right]}. \end{align} Note that \eqref{f5} is the same as \eqref{ft1t2} given in the statement of Theorem \ref{BHWthm}. As a result, once we can check that the chosen $A$ is less than $B$, we {will} arrive at the final conclusion $$ \hdim {\mathcal E_{\mathbf t}}(B) \le \inf \{s\ge 0: P(T, -s\log |T'|-f_{{t_0, t_{1}}}(s)\log B)\le 0\}. $$ We show that $A<B$ in the following lemma. \begin{lem}\label{lemrec} For any $0<s<1$, $$f_{{t_0, t_{1}}}(s)< f_{{{t_0}}}(s), \ \ {\text{or, equivalently,}} \ A<B.$$ \end{lem} \begin{proof}Recall (\ref{f5}).
Then \begin{align*} {f_{{t_0,{{t_1}}}}}(s)< f_{{{t_0}}}(s) &\Longleftrightarrow s<{{t_1}}f_{{{t_0}}}(s)+\max\left\{0, s-(2s-1)\frac{{{t_1}}}{{{t_0}}}\right\}\\ &\Longleftarrow s<{{t_1}}f_{{{t_0}}}(s)+s-(2s-1)\frac{{{t_1}}}{{{t_0}}}\\ &\Longleftrightarrow (2s-1)\frac{{{t_1}}}{{{t_0}}}<{{t_1}}f_{{{t_0}}}(s)={{t_1}}\cdot \frac{s}{{{t_0}}}. \end{align*} The last inequality simply says that $2s-1<s$, which is clearly true since $s<1$. \end{proof} \section{$\hdim {\mathcal E_{\mathbf t}}(B)$ for $1<B<\infty$: a lower bound}\label{s6} To obtain the lower bound, we will construct an appropriate Cantor subset of ${\mathcal E_{\mathbf t}}(B)$ and then apply the following mass distribution principle \cite{Falconer_book}. \begin{pro}[Mass Distribution Principle]\label{p1} Let $\mu$ be a probability measure supported on a measurable set $F$. Suppose there are positive constants $c$ and $r_0$ such that $$\mu\big(B(x,r)\big)\le c r^s$$ for any ball $B(x,r)$ with radius $r\le r_0$ and center $x\in F$. Then $\hdim F\ge s$. \end{pro} \subsection{Preliminaries on the dimension estimate}\ Recall that $$ f_{{{t_0}}}(s)=\frac{s}{{{t_0}}}, \ \ f_{{t_0, t_1}}(s)=\frac{sf_{{{t_0}}}(s)}{{{t_1}}\left[f_{{{t_0}}}(s)+\max\{0, \frac{s}{{{t_1}}}-\frac{2s-1}{{{t_0}}}\}\right]}, $$ and {write $s_o$ for $s^{(2)}(B)$, i.e.} $$ s_o=\inf\left\{s\ge 0: P(T, -s\log |T'|-f_{{t_0,t_1}}(s)\log B)\le 0\right\}. $$ We present some facts about this dimension estimate. The following may be trivial; nevertheless, we give a rigorous proof to avoid any ambiguity. Define $$ s_o'=\inf\Big\{s\ge 0: P(T, -s\log |T'|-\frac{s}{{{t_1}}}\log B)\le 0\Big\}. $$ \begin{lem}\label{l6.2} When $\frac{s_o}{{{t_1}}}-\frac{2s_o-1}{{{t_0}}}\le 0$, one has $s_o=s_o'$.\end{lem} \begin{proof} First, recall that the pressure function $P(T, \cdot)$ is non-decreasing with respect to the potential, i.e.
$$P(T, \psi_1)\le P(T, \psi_2), \ \ {\text{if}}\ \psi_1\le \psi_2.$$ Note that we always have $$ f_{{t_0,t_1}}(s)\le \frac{sf_{{{t_0}}}(s)}{{{t_1}}[f_{{{t_0}}}(s)+0]}=\frac{s}{{{t_1}}}. $$ Thus $$ -s\log |T'|-f_{{t_0,t_1}}(s)\log B\ge -s\log |T'|-\frac{s}{{{t_1}}}\log B, $$ which implies that $$ s_o'\le s_o. $$ For the other direction of the inequality, we distinguish two cases. \begin{itemize} \item When $\frac{s_o}{{{t_1}}}-\frac{2s_o-1}{{{t_0}}}<0$. Let $\epsilon>0$ be so small that for any $s_o-\epsilon<s<s_o+\epsilon$, we always have \begin{equation*}\label{ff3} \frac{s}{{{t_1}}}-\frac{2s-1}{{{t_0}}}<0, \end{equation*} and so \begin{equation}\label{ff4} f_{{t_0, t_1}}(s)=\frac{s}{{{t_1}}}. \end{equation} For any $s_o-\epsilon<s<s_o$, by the definition of $s_o$ we have $$ P\big(T, -s\log |T'|-f_{{t_0,t_1}}(s)\log B\big)>0, $$ so by (\ref{ff4}), it follows {that} $$ P\left(T, -s\log |T'|-\frac{s}{{{t_1}}}\log B\right)>0. $$ This implies $s_o'\ge s$. By the arbitrariness of $s$, one has $s_o'\ge s_o$. \item When $\frac{s_o}{{{t_1}}}-\frac{2s_o-1}{{{t_0}}}=0$. In this case, one has $$f_{{t_0, t_1}}(s_o)=\frac{s_o}{{{t_1}}}.$$ By the continuity of $f_{{t_0,t_1}}$ with respect to $s$, for any $\epsilon>0$, choose $0<\delta\le \epsilon$ such that for any $s_o-\delta<s<s_o$, $$ f_{{t_0,t_1}}(s)>\frac{s_o-\epsilon}{{{t_1}}}. $$ On the one hand, by the definition of $s_o$, for any $s_o-\delta<s<s_o$, $$ P(T, -s\log |T'|-f_{{t_0,t_1}}(s)\log B)>0; $$ on the other hand, since $s>s_o-\epsilon$, $$ -s\log |T'|-f_{{t_0,t_1}}(s)\log B<-(s_o-\epsilon)\log |T'|-\frac{s_o-\epsilon}{{{t_1}}}\log B, $$ which implies that $$ 0<P\Big(T, -s\log |T'|-f_{{t_0,t_1}}(s)\log B\Big)\le P\Big(T, -(s_o-\epsilon)\log |T'|-\frac{s_o-\epsilon}{{{t_1}}}\log B\Big). $$ Thus $s_o'\ge s_o-\epsilon$. Since $\epsilon>0$ is arbitrary, $s_o'\ge s_o$.
\end{itemize} \end{proof}

As a result, when $$ \frac{s_o}{{{t_1}}}-\frac{2s_o-1}{{{t_0}}}\le 0, $$ we consider the following subset of $\mathcal{E}_{\bold{t}}(B)$: $$ \Big\{x\in [0,1): a_{n+1}^{{{t_1}}}(x)\ge B^n, \ {\text{i.m.}}\ n\in \mathbb N\Big\}, $$ which, by Theorem \ref{WaWu}, is of dimension $$ s_o'=\inf\{s\ge 0: P(T, -s\log |T'|-\frac{s}{{{t_1}}}\log B)\le 0\}. $$ Thus $$ \hdim \mathcal{E}_{\bold{t}}(B)\ge s_o'=s_o. $$ So in the following, we always assume that $$ \frac{s_o}{{{t_1}}}-\frac{2s_o-1}{{{t_0}}}> 0, $$ and then in a small neighborhood of $s_o$, we always have \begin{equation}\label{ff5} f_{{t_0,t_1}}(s)=\frac{sf_{{{t_0}}}(s)}{{{t_1}}\big(f_{{{t_0}}}(s)+\frac{s}{{{t_1}}}-\frac{2s-1}{{{t_0}}}\big)}. \end{equation}

\subsection{A subset of ${\mathcal E_{\mathbf t}}(B)$}\label{s7}\ Fix integers $M,N$ sufficiently large that $s:={s^{(2)}_N(M,B)}$ lies in the small neighborhood of $s_o$ above, so that $1/2<s<1$ and (\ref{ff5}) holds. Then define a real number $A$ by \begin{equation}\label{fff1} f_{{{t_0}}}(s)\log A=f_{{t_0, t_1}}(s)\log B. \end{equation} It is straightforward to check that $f_{{{t_0}}}(x)>f_{{t_0, t_1}}(x)$ for any $0<x<1$, so $1<A<B$. Fix a rapidly growing sequence of integers $\{\ell_k\}_{k\ge 1}$, say $$ \ell_k\gg e^{\ell_1+\cdots+\ell_{k-1}},\ {\text{and take}}\ n_1=\ell_1 N+1, \ n_{k+1}-n_{k}=\ell_{k+1} N+2, \ \forall\,k\ge 1, $$ so that the number of integers in the interval $(n_k+1, n_{k+1})$ is a multiple of $N$. Then define a subset of ${\mathcal E_{\mathbf t}}(B)$ as \begin{align*} E=\Bigg\{x\in [0,1): A^{\frac{n_k}{{{t_0}}}}\le a_{n_k}(x)&<2 {A^{\frac{n_k}{{{t_0}}}}}, \ \left(\frac{B^{n_k}}{A^{n_k}}\right)^{1/{{t_1}}}\le a_{n_k+1}(x)<2\left(\frac{B^{n_k}}{A^{n_k}}\right)^{1/{{t_1}}} {\text{for all}} \ k\ge 1;\\ &{\text{and}}\ a_n(x)\in \{1,\dots, M\} \ {\text{for other $n\in \mathbb N$}}\Bigg\}.
\end{align*}
For ease of notation:
\begin{itemize}
\item write $$ \alpha_0=A^{1/{{t_0}}}, \ \ \alpha_1=\left(\frac{B}{A}\right)^{1/{{t_1}}}; $$
\item write $q_n(a_1,\dots, a_n)$ as $q_n$ when the partial quotients $a_1,\dots, a_n$ are clear. Recall (\ref{fff1}). Then $$ 1=\sum_{1\le a_1,\dots, a_N\le M}\frac{1}{q_N^{2s}(a_1,\dots, a_N)\cdot B^{N\cdot f_{t_0, t_1}(s)}}=\sum_{1\le a_1,\dots, a_N\le M}\frac{1}{q_N^{2s}\cdot \alpha_0^{Ns}}; $$
\item use a symbolic space defined as $D_0=\{\varnothing\}$, and for any $n\ge 1$, \begin{align*} D_n=\Bigg\{(a_1,\dots, a_n)\in \mathbb N^n: \alpha_i^{n_k}\le a_{n_k+i}&<2\alpha_i^{n_k} \ {\text{for}} \ 0\le i\le 1, \ k\ge 1 \ {\text{with}} \ {n_k+i}\le n;\\ &{\text{and}}\ a_j\in \{1,\dots, M\} \ {\text{for other $j\le n$}}\Bigg\}, \end{align*} which is just the collection of the prefixes of the points in $E$;
\item whenever an integer $n$ is given by a real value $\xi$, we mean $n=\lfloor \xi\rfloor$; with this convention, in $D_n$ the term $a_{n_k+i}$ has $\alpha_i^{n_k}$ choices;
\item use $\mathcal U$ to denote the following collection of finite words: $$ \mathcal U=\{w=(\sigma_1,\dots, \sigma_N): 1\le \sigma_i\le M, \ 1\le i\le N\}. $$ In the following, we always use $w$ to denote a word of length $N$ in $\mathcal U$.
\end{itemize}

\subsection{Cantor structure of $E$}\ For any $(a_1,\dots, a_n)\in D_n$, define $$ J_n(a_1,\dots, a_n)=\bigcup_{a_{n+1}: (a_1,\dots, a_n, a_{n+1})\in D_{n+1}}I_{n+1}(a_1,\dots, a_n, a_{n+1}) $$ and call it a {\em basic cylinder} of order $n$.
More precisely, for each $k\ge 0$ (with the convention $n_0=0$):
\begin{itemize}
\item when $n_k+1\le n<n_{k+1}-1$, $$ J_n(a_1,\dots, a_n)=\bigcup_{1\le a_{n+1}\le M}I_{n+1}(a_1,\dots, a_n, a_{n+1}); $$
\item when $n=n_{k+1}-1+i$ for $i=0,1$, $$ J_n(a_1,\dots, a_n)=\bigcup_{\alpha_i^{n_{k+1}}\le a_{n+1}< 2 \alpha_i^{n_{k+1}}}I_{n+1}(a_1,\dots, a_n, a_{n+1}). $$
\end{itemize}
Then define $$ \mathcal{F}_n=\bigcup_{(a_1,\dots, a_n)\in D_n}J_n(a_1,\dots, a_n) $$ and call it the $n$th level of the Cantor set $E$. It is clear that $$ E=\bigcap_{n=1}^{\infty}\mathcal{F}_n=\bigcap_{n=1}^{\infty}\bigcup_{(a_1,\dots, a_n)\in D_n}J_n(a_1,\dots, a_n). $$ We have the following observations about the lengths and gaps of the basic cylinders.

\begin{lem}[Gap estimation]\label{l3} Denote by $G_n(a_1,\dots, a_n)$ the gap between $J_n(a_1,\dots, a_n)$ and the other basic cylinders of order $n$. Then $$ G_n(a_1,\dots, a_n)\ge \frac{1}{M}\cdot |J_n(a_1,\dots, a_n)|. $$ \end{lem} \begin{proof} This lemma can be read off from the positions of the cylinders in Proposition \ref{pp2}. A detailed proof can be found in \cite{HuWuXu}. \end{proof}

Recall the definition of $\mathcal U$. Every element $x\in E$ can be written as \begin{align*} x=[w_1^{(1)},\dots, w_{\ell_1}^{(1)}, a_{n_1}, a_{n_1+1}, & w_1^{(2)},\dots, w_{\ell_2}^{(2)}, a_{n_2},a_{n_2+1},\\ \dots, & w_1^{(k)},\dots, w_{\ell_k}^{(k)}, a_{n_k},a_{n_k+1},\dots], \end{align*} where each $w_i^{(k)}\in \mathcal U$ and $$\alpha_0^{n_k}\le a_{n_k}< 2\alpha_0^{n_k}, \ \alpha_1^{n_k}\le a_{n_k+1}< 2\alpha_1^{n_k},\ \ {\text{for all}}\ k\ge 1.$$

\begin{lem}[Estimation on $q_n(x)$]\label{l6.4} Let $n_k+1< n\le n_{k+1}+1$.
\begin{itemize}
\item For $n=n_k+1+\ell N$ with some $1\le \ell\le \ell_{k+1}$, $$ q_{n_k+1+\ell N}(x)\le \left(2^{\ell}\cdot \prod_{i=1}^{\ell}q_N(w_i^{(k+1)})\right)\cdot \prod_{t=1}^{k} \left(2^{\ell_t+4}\alpha_0^{n_t} \alpha_1^{n_t}\prod_{l=1}^{\ell_t}q_N(w_l^{(t)})\right).
$$
\item For $n=n_{k+1}$, $$ q_{n_{k+1}}(x)\le \left(2^{\ell_{k+1}+2}\alpha_0^{n_{k+1}}\cdot \prod_{i=1}^{\ell_{k+1}}q_N(w_i^{(k+1)})\right)\cdot \prod_{t=1}^{k} \left(2^{\ell_t+4}\alpha_0^{n_t} \alpha_1^{n_t}\prod_{l=1}^{\ell_t}q_N(w_l^{(t)})\right). $$
\item For $n=n_{k+1}+1$, $$ q_{n_{k+1}+1}(x)\le \prod_{t=1}^{k+1} \left(2^{\ell_t+4}\alpha_0^{n_t} \alpha_1^{n_t}\prod_{l=1}^{\ell_t}q_N(w_l^{(t)})\right). $$
\item For any $n$ with $n_k+1+(\ell-1)N<n<n_k+1+\ell N$, $$ \frac{1}{(M+1)^N}\cdot q_{n_k+1+\ell N}(x)\le q_n(x)\le (M+1)^{N}\cdot q_{n_k+1+(\ell-1)N}(x). $$
\end{itemize} \end{lem}
\begin{proof} Use the second item in Proposition \ref{pp3} recursively to get the first estimate. More precisely, \begin{align*} q_{n_k+1+\ell N}(x)&\le \left(2^{\ell}\cdot \prod_{i=1}^{\ell}q_N(w_i^{(k+1)})\right)\cdot q_{n_k+1}(x)\\ &\le \left(2^{\ell}\cdot \prod_{i=1}^{\ell}q_N(w_i^{(k+1)})\right)\cdot \left(2^{\ell_k+4}\alpha_0^{n_k} \alpha_1^{n_k}\prod_{l=1}^{\ell_k}q_N(w_l^{(k)})\right)\cdot q_{n_{k-1}+1}(x). \end{align*} For the next two items, one just uses $$ q_{n+1}=a_{n+1}q_n+q_{n-1}\le (a_{n+1}+1)q_n. $$ For the last item, note that the partial quotients satisfy $$ 1\le a_n\le M, \ {\text{for all}}\ n_k+1+(\ell-1)N<n<n_k+1+\ell N. $$ \end{proof}

We now estimate the lengths of the basic cylinders $J_n(x)$ for all $n\ge 1$. For $n_k+1\le n<n_{k+1}-1$, we have \begin{align*} |J_n(x)|=\left|\frac{p_n+p_{n-1}}{q_n+q_{n-1}}-\frac{(M+1)p_n+p_{n-1}}{(M+1)q_n+q_{n-1}}\right|=\frac{M}{(q_n+q_{n-1})((M+1)q_n+q_{n-1})}\ge \frac{1}{8q_n^2}, \end{align*} and similarly, $$ \frac{1}{\alpha_0^{n_{k}}q_{n_{k}-1}^2(x)}>|J_{n_{k}-1}(x)|\ge \frac{1}{8\alpha_0^{n_{k}}q_{n_{k}-1}^2(x)},\ \ \ \ \frac{1}{\alpha_1^{n_{k}}q_{n_{k}}^2(x)}>|J_{n_{k}}(x)|\ge \frac{1}{8\alpha_1^{n_{k}}q_{n_{k}}^2(x)}. $$ Consequently, we have the following.

\begin{lem}[Length estimation]\label{l6.5} Let $n_k-1\le n<n_{k+1}-1$.
\begin{itemize}
\item For $n=n_{k}-1=n_{k-1}+1+\ell_k N$, \begin{equation*}\label{ff8}|J_{n_{k}-1}(x)|\ge \frac{1}{2^3\alpha_0^{n_{k}}}\cdot \left(\frac{1}{2^{2\ell_{k}}}\cdot \prod_{i=1}^{\ell_{k}}\frac{1}{q_N^2(w_i^{(k)})}\right)\cdot \left[\prod_{t=1}^{k-1}\left(\frac{1}{2^{2\ell_t+8}}\cdot\frac{1}{\alpha_0^{2n_t}\alpha_1^{2n_t}} \cdot\prod_{l=1}^{\ell_t}\frac{1}{q_N^2(w_l^{(t)})}\right)\right].\end{equation*}
\item For $n=n_{k}$, \begin{equation*}\label{ff9}|J_{n_{k}}(x)|\ge \frac{1}{2^8}\cdot \left(\frac{1}{\alpha_0^{n_{k}}\alpha_1^{n_{k}}}\cdot |J_{n_{k}-1}(x)|\right).\end{equation*}
\item For $n=n_{k}+1$, \begin{equation*}\label{ff10}|J_{n_{k}+1}(x)|\ge \frac{1}{2^8}\cdot \frac{1}{\alpha_0^{n_{k}}\alpha_1^{2n_{k}}}\cdot |J_{n_{k}-1}(x)|. \end{equation*}
\item For each $1\le \ell<\ell_{k+1}$, \begin{equation*}\label{ff12}|J_{n_k+1+\ell N}(x)|\ge \frac{1}{2^3}\cdot \left(\frac{1}{2^{2\ell}}\cdot \prod_{i=1}^{\ell}\frac{1}{q_N^2(w_i^{(k+1)})}\right)\cdot \left[\prod_{t=1}^{k}\left(\frac{1}{2^{2\ell_t+4}}\cdot \frac{1}{\alpha_0^{2n_t}\alpha_1^{2n_t}}\cdot\prod_{l=1}^{\ell_t}\frac{1}{q_N^2(w_l^{(t)})}\right)\right].\end{equation*}
\item For $n_k+1+(\ell-1)N<n<n_k+1+\ell N$ with $1\le \ell\le \ell_{k+1}$, \begin{equation*}\label{ff13} |J_{n}(x)|\ge c\cdot |J_{n_k+1+(\ell-1)N}(x)|, \end{equation*} where $c=c(M, N)$ is a constant depending only on $M$ and $N$.
\end{itemize} \end{lem}

\subsection{Mass distribution}\ We now define a probability measure supported on the Cantor set $E$. As before, express an element $x\in E$ as \begin{align*} x=[w_1^{(1)},\dots, w_{\ell_1}^{(1)}, &a_{n_1}, a_{n_1+1}, w_1^{(2)},\dots, w_{\ell_2}^{(2)}, a_{n_2},a_{n_2+1},\\ &\dots, w_1^{(k)},\dots, w_{\ell_k}^{(k)}, a_{n_k},a_{n_k+1},\dots], \end{align*} where each $w_i^{(k)}\in \mathcal U$ and $$ \ \ \alpha_0^{n_k}\le a_{n_k}< 2\alpha_0^{n_k},\ \ \alpha_1^{n_k}\le a_{n_k+1}< 2\alpha_1^{n_k}\ \ {\text{for all}}\ k\ge 1. $$ We define a measure $\mu$ along the basic cylinders $J_n(x)$ containing $x$ as follows.
\begin{itemize}
\item Let $n\le n_1+1$.
\begin{itemize}
\item For each $1\le \ell\le \ell_1$, define $$ \mu\big(J_{N\ell}(x)\big)=\prod_{i=1}^{\ell}\frac{1}{q_N(w_i^{(1)})^{2s}\cdot \alpha_0^{sN}}. $$ Because of the arbitrariness of $x$, this defines the measure on all basic cylinders of order $\ell N$.
\item For each integer $n$ with $(\ell-1)N<n<\ell N$ for some $1\le \ell\le \ell_1$, define $$ \mu\big(J_n(x)\big)=\sum_{J_{\ell N}\subset J_n(x)}\mu\big(J_{\ell N}\big), $$ where the summation is over all basic cylinders of order $\ell N$ contained in $J_{n}(x)$. This is designed to ensure the consistency of a measure and defines the measure on the basic cylinders of order up to $n_1-1$.
\item For each $0\le i\le 1$, define $$ \mu\big(J_{n_1+i}(x)\big)=\prod_{j=0}^i\frac{1}{\alpha_j^{n_1}}\cdot \mu\big(J_{n_1-1}(x)\big)= \prod_{l=1}^{\ell_1}\frac{1}{q_N(w_l^{(1)})^{2s}\cdot \alpha_0^{sN}}\cdot\prod_{j=0}^i\frac{1}{\alpha_j^{n_1}}. $$
\end{itemize}
\item Let $n_{k}+1<n\le n_{k+1}+1$, and assume that the measure of all basic cylinders of order $n_{k}+1$ has been defined.
\begin{itemize}
\item For each $1\le \ell\le \ell_{k+1}$, define \begin{align}\label{ff7} \mu\big(J_{n_{k}+1+N\ell}(x)\big)&=\prod_{i=1}^{\ell}\frac{1}{q_N(w_i^{(k+1)})^{2s}\cdot \alpha_0^{sN}}\cdot \mu\big(J_{n_{k}+1}(x)\big)\nonumber\\ &=\left[\prod_{i=1}^{\ell}\frac{1}{q_N(w_i^{(k+1)})^{2s}\cdot \alpha_0^{sN}}\right]\cdot \left[\prod_{t=1}^k\left(\frac{1}{\alpha_0^{n_t}\alpha_1^{n_t}}\prod_{l=1}^{\ell_t}\frac{1}{q_N^{2s}(w_l^{(t)})\cdot \alpha_0^{sN}}\right)\right]. \end{align}
\item For each integer $n$ with $n_k+1+(\ell-1)N<n<n_k+1+\ell N$ for some $1\le \ell\le \ell_{k+1}$, define $$ \mu\big(J_n(x)\big)=\sum_{J_{n_k+1+\ell N}\subset J_n(x)}\mu(J_{n_k+1+\ell N}).
$$
This defines the measure on the basic cylinders of order up to $n_{k+1}-1$.
\item For each $0\le i\le 1$, define \begin{align}\label{ff11} \mu\big(J_{n_{k+1}+i}(x)\big)&=\prod_{j=0}^i\frac{1}{\alpha_j^{n_{k+1}}}\cdot \mu\big(J_{n_{k+1}-1}(x)\big). \end{align}
\end{itemize}
\end{itemize}
Look at (\ref{ff7}) for the measure of a basic cylinder of order $n_k+1+\ell N$ and of its predecessor of order $n_k+1+(\ell-1)N$: the former has one more factor than the latter, namely $$ \frac{1}{q_N^{2s}(w_{\ell}^{(k+1)})\alpha_0^{sN}}, $$ which is uniformly bounded by some constant depending on $M,N,B$. Thus there is a constant $c=c(M,N,B)>0$ such that for each integer $n$:
\begin{itemize}
\item when $n_k+1+(\ell-1)N\le n\le n_k+1+\ell N$, \begin{equation*}\label{ff1}\mu\big(J_n(x)\big)\ge c\cdot \mu\big(J_{n_k+1+(\ell-1)N}(x)\big);\end{equation*}
\item when $n_k+1\le n<n_{k+1}-1$, \begin{equation}\label{ff2} \mu\big(J_{n+1}(x)\big)\ge c\cdot \mu\big(J_n(x)\big). \end{equation}
\end{itemize}

\subsection{H\"{o}lder exponent of $\mu$ for basic cylinders}\ We begin with some simple relations between $A$ and $B$ and beyond.

\begin{lem}\label{l2} Recall the real number $A$ defined in \eqref{fff1} and the integer $N$. Then
\begin{itemize}
\item $\displaystyle \left(\frac{1}{\alpha_1\alpha_0^2}\right)^s=\frac{1}{\alpha_0}\cdot \frac{1}{\alpha_0^s}, \ \ {\text{equivalently}}\ \frac{1}{\alpha_0}=\left(\frac{1}{\alpha_0\alpha_1}\right)^{s}; $ \smallskip
\item $\displaystyle \frac{1}{\alpha_0\alpha_1}\cdot \frac{1}{\alpha_0^s}\le \left(\frac{1}{\alpha_0^2\alpha_1^2}\right)^s, \ \ {\text{equivalently}}\ \frac{1}{\alpha_0\alpha_1}\le \left(\frac{1}{\alpha_0\alpha_1^2}\right)^s; $ \smallskip
\item Let $\epsilon>0$. Then we can choose an integer $N$ so large and $\{\ell_k\}$ so sparse that $$ 2^{2\ell_k+8}\le \left(2^{(N-1)\ell_k}\right)^{\epsilon}\ \text{ and }\ \ell_kN\ge (1-\epsilon)n_k\ \ {\text{for all}}\ k\ge 1.
$$
\end{itemize}
\end{lem}
\begin{proof} Recall that we are in the case when $$ f_{{t_0,t_1}}(s)=\frac{sf_{{{t_0}}}(s)}{{{t_1}}\left[f_{{{t_0}}}(s)+\frac{s}{{{t_1}}}-\frac{2s-1}{{{t_0}}}\right]} =\frac{sf_{{{t_0}}}(s)}{{{t_1}}\left[\frac{s}{{{t_1}}}+\frac{1-s}{{{t_0}}}\right]}. $$ Thus, by recalling the choice of $A$, it follows that \begin{align*} \left(\frac{1}{\alpha_1\alpha_0^2}\right)^s=\frac{1}{\alpha_0}\cdot \frac{1}{\alpha_0^s}&\Longleftrightarrow \alpha_1^s=\alpha_0^{1-s} \Longleftrightarrow \left(\frac{B}{A}\right)^{\frac{s}{{{t_1}}}}=A^{\frac{1-s}{{{t_0}}}}\\ &\Longleftrightarrow {\frac{s}{{{t_1}}}}\log B=\left({\frac{s}{{{t_1}}}+\frac{1-s}{{{t_0}}}}\right)\log A\\ & \Longleftrightarrow {\frac{s}{{{t_1}}}}\log B=\frac{sf_{{{t_0}}}(s)}{{{t_1}}f_{{t_0,t_1}}(s)}\log A\\ &\Longleftrightarrow f_{{t_0,t_1}}(s)\log B=f_{t_0}(s)\log A, \end{align*} where the last equality is precisely the defining property \eqref{fff1} of $A$. Substituting the first equality into the second claim, the latter reduces to $$ \alpha_1^s\le \alpha_1, $$ which holds since $\alpha_1>1$ and $s<1$. The last claim is trivial. \end{proof}

We now compare the measure and the length of $J_{n}(x)$.

(1) Let $n={n_{k}-1}$, which equals $n_{k-1}+1+\ell_{k}N$. Recall (\ref{ff7}) with $k$ replaced by $k-1$, and take $\ell=\ell_k$.
Collecting the powers of $\alpha_0$ and using the third item in Lemma \ref{l2} (which gives $\ell_k N\ge (1-\epsilon)n_k$), it follows that \begin{align*} \mu\big(J_{n_{k}-1}(x)\big)&\le \left(\frac{1}{\alpha_0^{n_{k}s}}\right)^{1-\epsilon}\left[\prod_{i=1}^{\ell_{k}}\frac{1}{q_N(w_i^{(k)})^{2s}}\right]\cdot \left[\prod_{t=1}^{k-1}\left(\Big(\frac{1}{\alpha_0^{n_t}\alpha_1^{n_t}\alpha_0^{n_t s}}\Big)^{1-\epsilon}\prod_{l=1}^{\ell_t}\frac{1}{q_N^{2s}(w_l^{(t)})}\right)\right].\end{align*} Then, using the second item in Lemma \ref{l2} to replace $\alpha_0^{1+s}\alpha_1$ by $(\alpha_0^2\alpha_1^2)^s$, one has \begin{align*} \mu\big(J_{n_{k}-1}(x)\big)&\le \left(\frac{1}{\alpha_0^{n_{k}s}}\right)^{1-\epsilon}\left[\prod_{i=1}^{\ell_{k}}\frac{1}{q_N(w_i^{(k)})^{2s}}\right]\cdot \left[\prod_{t=1}^{k-1}\left(\Big(\frac{1}{\alpha_0^{2n_t}\alpha_1^{2n_t}}\Big)^{s(1-\epsilon)}\prod_{l=1}^{\ell_t}\frac{1}{q_N^{2s}(w_l^{(t)})}\right)\right]. \end{align*} Next, by the third item in Lemma \ref{l2}, we have $$ \prod_{l=1}^{\ell_t}\frac{1}{q_N^{2s}(w_l^{(t)})}\le \left(\frac{1}{2^{2\ell_t+8}}\cdot \prod_{l=1}^{\ell_t}\frac{1}{q_N^{2}(w_l^{(t)})}\right)^{s(1-\epsilon)}. $$ Finally, by comparing with the length of $J_{n_{k}-1}(x)$ (Lemma \ref{l6.5}), we arrive at $$ \mu\big(J_{n_{k}-1}(x)\big)\le 8\cdot |J_{n_{k}-1}(x)|^{s(1-\epsilon)}. $$

(2) Let $n={n_{k}}$. Recall (\ref{ff11}). By the first item in Lemma \ref{l2}, \begin{align*} \mu\big(J_{n_{k}}(x)\big)&=\frac{1}{\alpha_0^{n_{k}}}\cdot \mu\big(J_{n_{k}-1}(x)\big)\le 8\cdot \frac{1}{\alpha_0^{n_{k}}}\cdot |J_{n_{k}-1}(x)|^{s(1-\epsilon)}\\ &\le 8\cdot \left(\frac{1}{\alpha_0^{n_{k}}\alpha_1^{n_{k}}}\cdot \big|J_{n_{k}-1}(x)\big|\right)^{s(1-\epsilon)}.\end{align*} By comparing with the length of $J_{n_{k}}(x)$ (Lemma \ref{l6.5}), we arrive at $$ \mu\big(J_{n_{k}}(x)\big)\le 2^{11}\cdot \big|J_{n_{k}}(x)\big|^{s(1-\epsilon)}. $$

(3) Let $n=n_{k}+1$. Recall (\ref{ff11}).
By the second item in Lemma \ref{l2}, \begin{align*} \mu\big(J_{n_{k}+1}(x)\big)&=\frac{1}{\alpha_0^{n_{k}}\alpha_1^{n_{k}}}\cdot \mu\big(J_{n_{k}-1}(x)\big)\le 8\cdot \frac{1}{\alpha_0^{n_{k}}\alpha_1^{n_{k}}}\cdot \big|J_{n_{k}-1}(x)\big|^{s(1-\epsilon)}\\ &\le 8\cdot \left(\frac{1}{\alpha_0^{n_{k}}\alpha_1^{2n_{k}}}\cdot \big|J_{n_{k}-1}(x)\big|\right)^{s(1-\epsilon)}.\end{align*} By comparing with the length of $J_{n_{k}+1}(x)$ (Lemma \ref{l6.5}), we arrive at $$ \mu\big(J_{n_{k}+1}(x)\big)\le 2^{11}\cdot \big|J_{n_{k}+1}(x)\big|^{s(1-\epsilon)}. $$

(4) Let $n=n_k+1+\ell N$ for some $1\le \ell<\ell_{k+1}$. Compare Lemma \ref{l6.5} and the formula (\ref{ff7}). In (\ref{ff7}), after deleting the term $\alpha_0^{sN}$ in the first product and replacing $\alpha_0^{1+s}\alpha_1$ by $(\alpha_0^2\alpha_1^2)^s$ in the second product, we arrive at $$ \mu\big(J_{n_k+1+\ell N}(x)\big)\le 2^{11}\cdot \big|J_{n_k+1+\ell N}(x)\big|^{s(1-\epsilon)}. $$

(5) For other $n$, let $1\le \ell\le \ell_{k+1}$ be the integer such that $$ n_k+1+(\ell-1)N\le n<n_k+1+\ell N. $$ Then \begin{align*} \mu\big(J_n(x)\big)\le \mu\big(J_{n_k+1+(\ell-1)N}(x)\big)\le 2^{11}\cdot \big|J_{n_k+1+(\ell-1)N}(x)\big|^{s(1-\epsilon)}\le 2^{11}\cdot c^{-1}\cdot \big|J_{n}(x)\big|^{s(1-\epsilon)}, \end{align*} where for the last inequality we have used Lemma \ref{l6.5} for the comparability of the lengths of the two basic cylinders.

In summary, we have shown that for some constant $c_1$, for any $n\ge 1$ and $x\in E$, \begin{align}\label{g1} \mu\big(J_n(x)\big)\le c_1\cdot \big|J_n(x)\big|^{s(1-\epsilon)}. \end{align}

\subsection{H\"{o}lder exponent of $\mu$ for a general ball} \ Recall Lemma \ref{l3} about the relation between the gap and the length of the basic cylinders: $$ G_n(x)\ge \frac{1}{M}\cdot |J_n(x)|. $$ We consider the measure of a general ball $B(x,r)$ with $x\in E$ and $r$ small. Let $n$ be the integer such that $$ G_{n+1}(x)\le r<G_{n}(x).
$$
Then the ball $B(x,r)$ can intersect only one basic cylinder of order $n$, namely $J_n(x)$, and so all the basic cylinders of order $n+1$ that $B(x,r)$ can intersect are contained in $J_n(x)$. Let $k$ be the integer such that $$ n_{k-1}+1\le n\le n_{k}. $$

(1) Let $n_{k-1}+1\le n<n_{k}-1$. By (\ref{ff2}) and (\ref{g1}), it follows that \begin{align*} \mu\big(B(x,r)\big)&\le \mu\big(J_n(x)\big)\le c^{-1}\cdot \mu\big(J_{n+1}(x)\big)\le c^{-1}\cdot c_1\cdot \big|J_{n+1}(x)\big|^{s(1-\epsilon)}\\&\le c^{-1}\cdot c_1\cdot M\cdot \big(G_{n+1}(x)\big)^{s(1-\epsilon)}\le c^{-1}\cdot c_1\cdot M\cdot r^{s(1-\epsilon)}. \end{align*}

(2) Let $n=n_{k}-1$. The ball $B(x,r)$ can intersect only the basic cylinder $J_{n_k-1}(x)$ of order $n_k-1$. We now count the basic cylinders of order $n_k$ that are contained in $J_{n_k-1}(x)$ and have non-empty intersection with the ball $B(x,r)$. Write a basic cylinder of order $n_k$ contained in $J_{n_k-1}(x)$ as $$ J_{n_k}(u, a), \ {\text{for some}}\ \alpha_0^{n_k}\le a<2\alpha_0^{n_k}. $$ For each $a$, the basic cylinder $J_{n_{k}}(u,a)$ is contained in the cylinder $I_{n_k}(u,a)$, and the length of the latter interval satisfies $$ \frac{1}{q_{n_k}(q_{n_k}+q_{n_k-1})}\ge \frac{1}{8}\cdot \frac{1}{q_{n_k-1}^2(u)\alpha_0^{2n_k}}. $$
\begin{itemize}
\item Suppose $$ r<\frac{1}{8}\cdot\frac{1}{q_{n_k-1}^2(u)\alpha_0^{2n_k}}. $$ Then the ball $B(x,r)$ can intersect at most three cylinders $I_{n_k}(u,a)$, and so at most three basic cylinders $J_{n_k}(u,a)$. Note that all these basic cylinders have the same $\mu$-measure; thus \begin{align*} \mu\big(B(x,r)\big)&\le 3\mu\big(J_{n_k}(x)\big)\le 3 \cdot c_1\cdot |J_{n_k}(x)|^{s(1-\epsilon)}\\ &\le 3\cdot c_1\cdot M\cdot G_{n+1}(x)^{s(1-\epsilon)}\le 3\cdot c_1\cdot M\cdot r^{s(1-\epsilon)}. \end{align*}
\item Suppose $$ r\ge \frac{1}{8}\cdot\frac{1}{q_{n_k-1}^2(u)\alpha_0^{2n_k}}.
$$
The number of cylinders $I_{n_k}(u,a)$ that the ball $B(x,r)$ can intersect is at most $$ {16r}\cdot q_{n_k-1}^{2}(u)\alpha_0^{2n_{k}}+2\le 2^5\cdot {r}\cdot q_{n_k-1}^{2}(u)\alpha_0^{2n_{k}}, $$ so $B(x,r)$ can intersect at most this number of basic cylinders of order $n_k$. Thus \begin{align*} \mu\big(B(x,r)\big)&\le \min\Big\{\mu\big(J_{n_k-1}(x)\big),\ \ 2^5\cdot {r}\cdot q_{n_k-1}^{2}(u)\alpha_0^{2n_{k}}\cdot \frac{1}{\alpha_0^{n_k}}\cdot \mu\big(J_{n_k-1}(x)\big)\Big\}\\ &\le c_1\cdot |J_{n_k-1}(x)|^{s(1-\epsilon)}\cdot \min\Big\{1, 2^5\cdot {r}\cdot q_{n_k-1}^{2}(u)\alpha_0^{n_{k}}\Big\}\\ &\le c_1\cdot \left(\frac{1}{q_{n_k-1}(u)^2 \alpha_0^{n_k}}\right)^{s(1-\epsilon)}\cdot 1^{1-s(1-\epsilon)}\cdot \Big(2^5\cdot {r}\cdot q_{n_k-1}^{2}(u)\alpha_0^{n_{k}}\Big)^{s(1-\epsilon)}\\ &=c_2 \cdot r^{s(1-\epsilon)}. \end{align*}
\end{itemize}

(3) Let $n=n_k$. Changing $n_k-1$ and $\alpha_0$ in case (2) to $n_k$ and $\alpha_1$ respectively, and following the same argument, we arrive at the same conclusion.

\subsection{Conclusion}\ Thus, applying the mass distribution principle (Proposition \ref{p1}) yields $$ \hdim E\ge s(1-\epsilon). $$ Since $E\subseteq {\mathcal{E}_{\bold{t}}(B)}$, $\epsilon>0$ is arbitrary, and $s$ can be taken arbitrarily close to $s_o$, we conclude that $$ \hdim {\mathcal{E}_{\bold{t}}(B)}\ge s_o. $$

\section{Completing the proof of Theorem \ref{BHWthm}}

{\sc Upper bound}. For any $\epsilon>0$, one has $$ \Psi(n)\ge (B-\epsilon)^n \ \ {\text{for all}}\ n\gg 1. $$ Thus $$ {\mathcal{E}_{\bold{t}}(\Psi)}\subset \Big\{x\in [0,1): a_{n}^{{{t_0}}}(x)a_{n+1}^{{{t_1}}}(x)\ge (B-\epsilon)^n, \ {\text{i.m.}}\ n\in \mathbb N\Big\}. $$ Therefore, $$ \hdim \mathcal{E}_{\bold{t}}(\Psi)\le s_o(B-\epsilon). $$ Recall Proposition \ref{tb} for the continuity of $s_o=s_o(B)$ with respect to $B$. Then, letting $\epsilon\to 0$, the upper bound for $\hdim \mathcal{E}_{\bold{t}}(\Psi)$ follows.

{{\sc Lower bound}.
The argument for the lower bound of $\mathcal{E}_{\bold{t}}(\Psi)$ is almost the same as that for $\mathcal{E}_{\bold{t}}(B)$ given in the last section, so we only outline the proof and point out the minor differences. Recall the definition of $s_o(B)$ and Lemma \ref{l6.2}. If $\frac{s_o(B)}{{{t_1}}}-\frac{2s_o(B)-1}{{{t_0}}}\le 0$, then by Theorem \ref{WaWu} and Lemma \ref{l6.2} it follows that $$ \hdim \mathcal{E}_{\bold{t}}(\Psi)\ge \hdim \Big\{x\in [0,1): a_{n+1}^{t_1}(x) \ge \Psi(n), \ {\text{i.m.}}\ n\in \mathbb N\Big\}=s_o(B). $$ So we are left with the case when \begin{equation}\label{1} \frac{s_o(B)}{{{t_1}}}-\frac{2s_o(B)-1}{{{t_0}}}>0. \end{equation}} {First, choose a real number $\widetilde{B}>B$, close enough to $B$ that (\ref{1}) remains true when $B$ is replaced by $\widetilde{B}$. Second, fix integers $M,N$ sufficiently large that $s:=s^{(2)}_N\big(M,\widetilde{B}\big)$ lies in a small enough neighborhood of $s_o(\widetilde{B})$ that $1/2<s<1$ and (\ref{ff5}) holds. Finally, define a real number $\widetilde{A}$ by \begin{equation*} f_{{{t_0}}}(s)\log \widetilde{A}=f_{{t_0, t_1}}(s)\log \widetilde{B}. \end{equation*} By the definition of $B$, one can choose a sufficiently sparse sequence of integers $\{n_k\}_{k\ge 1}$ such that $$ \Psi(n_k)\le \widetilde{B}^{n_k} \ {\text{for all}}\ k\ge 1. $$ Thus $$ \mathcal{E}_{\bold{t}}(\Psi)\supset \Big\{x\in [0,1): a_{n_k}^{{{t_0}}}(x)a_{n_k+1}^{{{t_1}}}(x)\ge \widetilde{B}^{n_k} \ {\text{for all}}\ {k\ge 1}\Big\}. $$} {So we are almost in the same situation as when proving the lower bound for $\hdim \mathcal{E}_{\bold{t}}(B)$. The only difference, besides the notational changes $(A,B) \mapsto (\widetilde{A}, \widetilde{B})$, is that the number of integers in the interval $(n_k+1, n_{k+1})$ may not be a multiple of $N$.
} {Therefore, for all $k\ge 1$, write (shifting the indices from $n_0+1$ to $0$) $$ (n_{k}-1)-(n_{k-1}+1)=\ell_{k}N+i_{k} \ {\text{for some}}\ 0\le i_{k}<N, $$ and define a Cantor subset of $\mathcal{E}_{\bold{t}}(\Psi)$ as \begin{align*} \widetilde{E} =\Bigg\{x\in [0,1): \widetilde{A}^{\frac{n_k}{{{t_0}}}}\le a_{n_k}(x)&<2 {\widetilde{A}^{\frac{n_k}{{{t_0}}}}}, \ \left(\frac{\widetilde{B}^{n_k}}{\widetilde{A}^{n_k}}\right)^{1/{{t_1}}}\le a_{n_k+1}(x)<2\left(\frac{\widetilde{B}^{n_k}}{\widetilde{A}^{n_k}}\right)^{1/{{t_1}}} {\text{for all}} \ k\ge 1; \\ & a_{n_k+2}(x)=\cdots =a_{n_k+1+i_{k+1}}(x)=2 \ {\text{for all}} \ k\ge 0; \\ &{\text{and}}\ a_n(x)\in \{1,\dots, M\} \ {\text{for other $n\in \mathbb N$}}\Bigg\}. \end{align*} Use the same notation as in Section \ref{s6}: $$ \mathcal U=\{w=(\sigma_1,\dots, \sigma_N): 1\le \sigma_i\le M, \ 1\le i\le N\} $$ and $$ \alpha_0=\widetilde{A}^{1/{{t_0}}}, \ \ \alpha_1=\left(\frac{\widetilde{B}}{\widetilde{A}}\right)^{1/{{t_1}}}, $$ and define $J_n(x)$ in the same way. A generic element $x\in \widetilde{E}$ can be written as \begin{align*} x=\big[\eta^{(1)}, w_1^{(1)},\dots, w_{\ell_1}^{(1)}, a_{n_1}, a_{n_1+1},\ & \eta^{(2)}, w_1^{(2)},\dots, w_{\ell_2}^{(2)}, a_{n_2},a_{n_2+1},\\ \dots, \ & \eta^{(k)}, w_1^{(k)},\dots, w_{\ell_k}^{(k)}, a_{n_k},a_{n_k+1},\dots\big], \end{align*} where $\eta^{(k)}=(\underbrace{2, \dots, 2}_{i_k})$, each $w_i^{(k)}\in \mathcal U$, and $$ \alpha_0^{n_k}\le a_{n_k}< 2\alpha_0^{n_k}, \ \alpha_1^{n_k}\le a_{n_k+1}< 2\alpha_1^{n_k}\ \ {\text{for all}}\ k\ge 1. $$ Recall that $s=s^{(2)}_N(M, \widetilde{B})$. We define the measure on the basic cylinders $J_n(x)$ containing $x$ as follows. Note that for every $x\in \widetilde{E}$, the partial quotient $a_n(x)$ has only one choice for all $$ n_k+1<n\le n_k+1+i_{k+1}, \ {\text{with}}\ k\ge 0.
$$
Hence, when defining a mass distribution $\mu$ on $\widetilde{E}$, one must have $$ \mu\big(J_n(x)\big)=\mu\big(J_{n_k+1}(x)\big), \ {\text{for all}}\ n_k+1<n\le n_{k}+1+i_{k+1}. $$ Apart from this restriction, we define the measure on $\widetilde{E}$ as in Section \ref{s6}: Let $n_{k}+1<n\le n_{k+1}+1$, and assume that the measure of all basic cylinders of order $n_{k}+1$ has been defined.
\begin{itemize}
\item For each $n_k+1<n\le n_k+1+i_{k+1}$, define $$ \mu\big(J_n(x)\big)=\mu\big(J_{n_k+1}(x)\big). $$
\item For each $1\le \ell\le \ell_{k+1}$, define \begin{align*} \mu\big(J_{n_{k}+1+i_{k+1}+N\ell}(x)\big)&=\prod_{i=1}^{\ell}\frac{1}{q_N(w_i^{(k+1)})^{2s}\cdot \alpha_0^{sN}}\cdot \mu\big(J_{n_{k}+1+i_{k+1}}(x)\big). \end{align*}
\item For each integer $n$ with $n_k+1+i_{k+1}+(\ell-1)N<n<n_k+1+i_{k+1}+\ell N$ for some $1\le \ell\le \ell_{k+1}$, define $$ \mu\big(J_n(x)\big)=\sum_{J_{n_k+1+i_{k+1}+\ell N}\subset J_n(x)}\mu(J_{n_k+1+i_{k+1}+\ell N}). $$
\item For each $0\le i\le 1$, define \begin{align*} \mu\big(J_{n_{k+1}+i}(x)\big)&=\prod_{j=0}^i\frac{1}{\alpha_j^{n_{k+1}}}\cdot \mu\big(J_{n_{k+1}-1}(x)\big). \end{align*}
\end{itemize}
We then use the mass distribution principle (Proposition \ref{p1}) to reach the conclusion that $$\hdim \mathcal{E}_{\bold{t}}(\Psi)\ge s_o(B).$$ So the remaining task is to compare the $\mu$-measure of a ball $B(x,r)$ with $r$. The gap estimation (Lemma \ref{l3}) remains true without any change, and the estimation on $q_n(x)$ (Lemma \ref{l6.4}) is similar, up to some additional powers of $2$. The remaining argument then proceeds as in Section \ref{s6} with some obvious modifications. We omit the details.}

\section{Final Comments}\label{final}

One might wonder whether Theorem \ref{BHWthm} can be extended to all $m\geq 2$.
Our methods for the upper bound extend easily to any $m$, but the major difficulty lies in establishing the lower bound and proving that it equals the upper bound estimate. To be precise, it is possible to prove the following. \begin{thm}\label{BHWthm2} Let $\Psi:\mathbb N\to\mathbb R_{\ge 1}$ be such that $1<B< \infty$. Then \begin{equation*} \begin{array}{ll} \hdim {\mathcal E_{\mathbf t}}(\Psi ) & \leq \inf \{s\ge 0: P(T, -s\log |T'|-f_{{{t_0,\dots, t_{m-1}}}}(s)\log B)\le 0\} \ \text{for all}\ m, \ \text{but} \\ [3ex] \hdim {\mathcal E_{\mathbf t}}(\Psi )& \ge \inf \{s\ge 0: P(T, -s\log |T'|-f_{{t_0,t_1}}(s)\log B)\le 0\}, \end{array} \end{equation*} where $f_{{{t_0,\dots, t_{m-1}}}}(s)$ is given by the following iterative procedure with the starting value $f_{t_0}(s)=\frac{s}{t_0}$ and \begin{align*} f_{t_0,\dots,t_{\ell}}(s)&=\frac{sf_{t_0,\dots, t_{\ell-1}}(s)}{t_{\ell}f_{t_0,\dots,t_{\ell-1}}(s)+\max\left\{0,\ s-(2s-1)\frac{t_{\ell}}{t_i}:\ 0\le i\le \ell-1\right\}}\\ &=\frac{s{f_{t_0,\dots, t_{\ell-1}}(s)}}{t_{\ell}{f_{t_0,\dots,t_{\ell-1}}}(s)+\max\left\{0,\ s-(2s-1)\frac{t_{\ell}}{\max_{0\le i\le \ell-1}t_i}\right\}}. \end{align*} \end{thm} We believe that, for $1<B<\infty$, $$\hdim {\mathcal E_{\mathbf t}}(\Psi ) \geq \inf \{s\ge 0: P(T, -s\log |T'|-f_{{t_0,\dots, t_{m-1}}}(s)\log B)\le 0\}$$ should hold. From the definition of the functions $f_{{t_0,\dots, t_{\ell-1}}}$, the appearance of the expression $\max_{0\le i\le \ell-1}t_i$ means that the partial quotients corresponding to some of the exponents $t_i$ do not contribute to the dimension. Thus the major difficulty is to figure out which partial quotients contribute essentially to the dimension and which do not.
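As a quick consistency check (a direct computation, included here for the reader's convenience), taking $\ell=1$ in the iterative procedure recovers the two-term function used throughout the preceding sections:
\begin{align*}
f_{t_0,t_1}(s)&=\frac{sf_{t_0}(s)}{t_1 f_{t_0}(s)+\max\left\{0,\ s-(2s-1)\frac{t_1}{t_0}\right\}}\\
&=\frac{sf_{t_0}(s)}{t_1\left[f_{t_0}(s)+\max\left\{0,\ \frac{s}{t_1}-\frac{2s-1}{t_0}\right\}\right]},
\end{align*}
which is exactly the function $f_{t_0,t_1}$ recalled at the beginning of Section \ref{s6}.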
\section{Introduction} \label{sec:intro} With intelligence marching from the cloud to the edge, autonomous robots are increasingly being deployed in the real world. They require data to be processed from sensing to actuation in a closed-loop manner. The onboard circuits typically draw power from a battery, which limits the supported operation time. Recently, with neural networks being deployed at the edge, the demanding real-time performance and energy constraints are becoming increasingly difficult to meet. Aerial platforms have an additional payload constraint, which limits the weight of the onboard sensors and compute. For tiny terrestrial robots, the form factor becomes a crucial requirement. Therefore, there is a strong need to develop energy-efficient compute platforms and processing frameworks to enable robotic edge intelligence.

Several hardware platforms have been used for robotic applications. CPUs and GPUs are designed to handle a wide range of robotic tasks and algorithm development. However, they usually consume 10--100~W of power, orders of magnitude more than the resources available on edge robotic systems. Edge processors like the Jetson Nano and the Google TPU have been used for programmability in a smaller form factor and for faster system-level prototyping~\cite{chinchali2021network}. FPGAs are attracting attention due to their reconfigurability and hardware efficiency, and have been presented for robotic perception~\cite{gao2021ielas,wan2021energy}, localization~\cite{liu2021archytas,liu2022energyefficient}, and planning~\cite{murray2016microarchitecture,neuman2021robomorphic}. The partial reconfiguration technique takes this flexibility one step further: part of the FPGA resources can be reconfigured at runtime without compromising the other parts of the application~\cite{wan2021survey,liu2021robotic}. ASICs specialized for edge robotics boost energy efficiency and customization even further.
The constrained power budget on edge robots requires these chips to consume only a few mW of power. For robotic perception, Jeon et al.~\cite{jeon_uav} present a feature extraction accelerator with 2.7~mW power consumption for micro aerial vehicles. For robotic autonomous navigation, Navion~\cite{suleiman2019navion} accelerates the visual-inertial odometry of nano drones, Li et al.~\cite{li2020high} speed up mutual information computation, and Li et al.~\cite{li2019879gops} combine neural networks with a physical localization model using proposed specialized accelerators. For end-to-end learning-based robotic control, Kim et al.~\cite{rl_acc} develop a reinforcement learning accelerator for micro drones with 1.1~mW power consumption. For multi-robot scenarios, Honkote et al.~\cite{search_rescue} design a low-power SoC for distributed and collaborative swarm robot systems. Techniques ranging from voltage-mode circuits~\cite{voltage_mode_circuits} and quantized neural networks~\cite{binary_nn,lam2019quantized,tambe2020algorithm} to sparse coding~\cite{sparse_coding} have been utilized to restrict the power consumption. Several neural network accelerators have been proposed for general vision applications~\cite{cnn_acc1}, and these can be readily applied to robotic tasks.

Apart from energy consumption, memory bandwidth availability also becomes critical for high-speed edge robotic applications. A newer type of visual sensor, the event camera, is being explored for such scenarios. Event cameras provide a stream of asynchronous events in which only the changing parts of a frame are processed at every time instance, allowing high bandwidth, high speed, and high dynamic range~\cite{dvs_survey}. System-level applications of this include a robotic goalie with 3~ms latency~\cite{robotic_golie} and a looming-obstacle-avoiding drone~\cite{dvs_drone}.
The event-based processing modality of event cameras also lends itself naturally to bio-inspired spiking neural networks, which benefit from data sparsity and low power. Tasks like lane detection and prey capturing~\cite{looming_object} have been demonstrated using pairs of spiking neural networks and event cameras. In this paper, we present a series of our proposed ultra-low-power accelerators and system demonstrations for edge robotics, with an emphasis on circuit and system technologies. These demonstrations augment the large landscape of edge robotic systems in reinforcement learning, swarm learning, and neuro-inspired computing technologies. Section~\ref{sec:RL} presents a time-domain mixed-signal accelerator that supports embedded reinforcement learning control. Section~\ref{sec:swarm} introduces a hybrid-digital mixed-signal hardware design to enable edge swarm intelligence. Section~\ref{sec:slam} presents an oscillator-based hardware platform for neuro-inspired mapping, localization, and control. Section~\ref{sec:benchmarking} further provides benchmarking and simulation frameworks for cross-layer robotic workload evaluation. The paper concludes with challenges and opportunities for next-generation hardware-efficient robotic computing (Section~\ref{sec:challenges}). \section{Reinforcement Learning on the Edge} \label{sec:RL} This section presents our reinforcement learning neuromorphic accelerator for robotic autonomous navigation and exploration~\cite{amravati201855nm,amaravati201855}. We propose an energy-efficient time-domain mixed-signal computational framework to enable ultra-low-power operation. \subsection{Reinforcement Learning Algorithm} \label{subsec:algo_RL} Achieving true autonomy requires robots to learn continuously through frequent interaction with their environments. Reinforcement learning (RL) is such a paradigm, where robots take actions in an environment to maximize a notion of cumulative reward.
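As a concrete illustration of this paradigm, a minimal tabular Q-learning loop can be sketched as follows. The table layout, learning rate, and discount factor are illustrative assumptions, not the on-chip implementation.

```python
import random

def select_action(Q, s, epsilon=0.1):
    """Epsilon-greedy policy: explore with probability epsilon, else exploit."""
    if random.random() < epsilon:
        return random.randrange(len(Q[s]))
    return max(range(len(Q[s])), key=lambda a: Q[s][a])

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One Bellman update on the action-value table Q[state][action]:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * max(Q[s_next])
    Q[s][a] += alpha * (td_target - Q[s][a])
```

In deployment, the state would be derived from the sensor readings, the action would drive the motors, and the reward would encode progress (e.g., distance covered without collision).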
Among all RL algorithms, Q-learning is one of the most well-studied techniques and is the one implemented in this design. Q-learning seeks to find the optimal action policy ($A_t$) given the current state ($S_t$) to maximize the reward ($R_t$). It works on the principle of the action-value function $Q(S_t, A_t)$, where $Q$ is iteratively updated via the Bellman equation. For a detailed overview of RL algorithms and applications, interested readers are referred to~\cite{sutton2018reinforcement}. \subsection{Circuit and System of RL Neuromorphic Accelerator} \label{subsec:arch_RL} \subsubsection{Overview} \ \newline \indent Fig.~\ref{fig:RL_system} shows the system diagram of our RL neuromorphic accelerator. It consists of ultrasonic sensors, an RL test chip, a Raspberry Pi-based micro-controller, and motor drivers. The ultrasonic sensors feed depth information to the input layer of the neural network through an array of stochastic synapses. A three-layer neural network is implemented to process the sensor data and generate the actions that the robot will follow. The micro-controller stores $(S_t, A_t, R_t, S_{t+1})$ in a scratchpad memory during training and sends action commands from the chip to the motor controllers. \begin{figure}[h] \vspace{-0.1in} \centering\includegraphics[width=\columnwidth]{figs/Sec2/RL_system_diagram_v2.png} \caption{\small The system architecture of our RL neuromorphic accelerator, including different circuit blocks and the interface to the external micro-controllers and motor drivers.} \label{fig:RL_system} \vspace{-0.1in} \end{figure} \subsubsection{Time-Domain Mixed-Signal Circuits} \ \newline \indent To enable low-power edge intelligence on the robotic platform at low bit precision, we propose a time-domain mixed-signal (TD-MS) computational framework for energy-efficient and accurate operation. Fig.~\ref{fig:TDMS}\textcolor{blue}{a} illustrates the TD-MS multiply-and-accumulate (MAC) unit.
A pulse input from the sensor or hidden layers enables the up-down counter, which is triggered by a digitally controlled oscillator (DCO) (Fig.~\ref{fig:TDMS}\textcolor{blue}{b}). The output of the counter is the product of the weight $W_i$ (digital) and the pulse width $T_{pi}$ of the gating signal (analog). The proposed TD-MS design demonstrates unique advantages: (1) improved energy efficiency at lower bit width compared to a digital MAC operation (Fig.~\ref{fig:TDMS}\textcolor{blue}{c}); (2) the energy to compute is proportional to the significance of the computation, a feature of the human brain that is missing in digital logic (Fig.~\ref{fig:TDMS}\textcolor{blue}{d-}\ref{fig:TDMS}\textcolor{blue}{e}); (3) 45\% lower system area, 47\% lower interconnect power, and 16\% lower leakage power compared to a digital design. \begin{figure}[h] \centering\includegraphics[width=\columnwidth]{figs/Sec2/TDMS_v2.pdf} \caption{\small (a) Proposed Time-Domain Mixed-Signal (TD-MS) MAC unit. (b) Digitally controlled oscillator (DCO) in the TD-MS unit. (c) Computational energy per MAC for digital and TD-MS designs with different bit precision at 0.6~V. (d)(e) 2-D energy per MAC surface for digital and TD-MS designs with 6-bit inputs at 0.6~V.} \label{fig:TDMS} \end{figure} \subsubsection{Enabling Regularization via Stochasticity} \ \newline \indent To help generalize the learned neural network model to unknown environments, we introduce stochasticity to the synapses with drop-connect. As shown in Fig.~\ref{fig:synapse}\textcolor{blue}{a}, the stochasticity is implemented with a buffer chain whose delays are randomly altered by a local high-speed linear-feedback shift register (LFSR). Compared to the deterministic model, the stochastic network achieves a 1.7$\times$ speedup in convergence (Fig.~\ref{fig:synapse}\textcolor{blue}{b}).
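The TD-MS MAC above can be modeled behaviorally: an up-down counter clocked at a weight-dependent DCO rate runs for the duration of the input pulse, so each term contributes roughly $W_i \times T_{pi}$ counts. The following Python model is a simplified functional sketch of that idea under an assumed unit DCO period, not the fabricated circuit.

```python
def tdms_mac(weights, pulse_widths, dco_period=1.0):
    """Behavioral sketch of a time-domain MAC.

    For each input, a counter clocked at a rate proportional to |W_i|
    runs for a time T_pi, contributing about W_i * T_pi / dco_period
    counts; the weight's sign selects count-up vs. count-down.
    """
    acc = 0
    for w, tp in zip(weights, pulse_widths):
        ticks = int(abs(w) * tp / dco_period)  # DCO cycles during the gating pulse
        acc += ticks if w >= 0 else -ticks     # up-down counter direction
    return acc
```

Note how a short pulse (a low-significance input) consumes few counter cycles, which mirrors the property that compute energy tracks the significance of the computation.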
\begin{figure}[h] \centering\includegraphics[width=\columnwidth]{figs/Sec2/synapse.pdf} \caption{\small (a) Stochasticity is implemented by introducing varying delays between bit transitions using the linear-feedback shift register (LFSR). (b) Stochasticity accelerates policy convergence.} \label{fig:synapse} \end{figure} \subsection{Evaluation} \label{subsec:eval_RL} The RL neuromorphic test chip is implemented and taped out in a 55~nm CMOS process (Fig.~\ref{fig:RL_result}\textcolor{blue}{a}). By exploring the design space of the proposed TD-MS circuits, we show that the chip ensures correct functionality and voltage scalability from 1.0~V down to 0.4~V (Fig.~\ref{fig:RL_result}\textcolor{blue}{b}). The chip is mounted on a tiny mobile robot for autonomous exploration and learning. Fig.~\ref{fig:RL_result}\textcolor{blue}{c} illustrates the increasing distance covered as the robot continuously explores the space in the presence of obstacles. \begin{figure}[h] \centering\includegraphics[width=\columnwidth]{figs/Sec2/RL_result.pdf} \caption{\small (a) RL neuromorphic chip die photo. (b) Design space showing scalability down to 0.4~V. (c) Mobile robot system and its covered distance as a function of the number of clock cycles or iterations. } \label{fig:RL_result} \vspace{-0.15in} \end{figure} \section{Swarm Intelligence on the Edge} \label{sec:swarm} Going beyond single tasks, swarms of robots usually collaborate with each other to solve a variety of problems. This section presents our energy-efficient hardware accelerator that enables swarm robotic applications~\cite{cao201914,cao201965}. \subsection{Swarm Intelligence Algorithms} \label{subsec:swarm_algo} Swarm algorithms can be broadly classified into two categories: those based on physical and mathematical models (model-based) and those based on learning (model-free).
Among model-based algorithms, the artificial potential field (APF) is a commonly used approach for collaborative path planning, where the motion force is obtained by aggregating the attractive and repulsive potential fields. Model-free algorithms usually allow each robot to learn continuously and establish a model of real-world knowledge without human intervention. RL-based cooperative algorithms have shown great promise. Interested readers are directed to~\cite{la2014multirobot} for more details. Interestingly, we observe that the model-based and model-free autonomy paradigms have similar mathematical structures (Fig.~\ref{fig:unified_paradigm}). Therefore, we identify the commonalities and develop a unified architecture to support real-time swarm intelligence. \begin{figure}[h] \centering\includegraphics[width=\columnwidth]{figs/Sec3/unified_compute_paradigm.png} \caption{\small Common unified compute paradigm that supports both model-based and model-free swarm algorithms.} \label{fig:unified_paradigm} \vspace{-0.1in} \end{figure} \subsection{Circuit and System of Swarm Intelligence Accelerator} \subsubsection{Overview} \ \newline \indent The system architecture of our swarm intelligence accelerator is shown in Fig.~\ref{fig:swarm_arch}. Noticing that both model-based and model-free algorithms are combinations of nonlinear and linear operations, we design a nonlinear function evaluator (NFE) and a linear processing unit (LPU). The NFE supports nonlinear operations using piecewise linear approximations of nonlinear functions. The LPU supports all linear operations (addition and multiplication). Most operations are implemented in the digital domain, except for the MAC, which leverages mixed-signal computing. The datapath between the NFE and LPU is bi-directional, so data can move between the two units seamlessly and preserve locality. We also observe that several of the required functions exhibit symmetry and periodicity, which provides a further opportunity to reduce the number of computations and comparisons.
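To illustrate how a piecewise linear evaluator can exploit symmetry and periodicity, the sketch below approximates sine with breakpoints covering only one quarter period, folding every argument into $[0, \pi/2]$ first. The segment count and folding logic are illustrative assumptions, not the silicon NFE.

```python
import math

def pwl_sin(x, segments=16):
    """Piecewise-linear sine evaluator sketch.

    Only [0, pi/2] is tabulated; periodicity and symmetry fold any
    argument into that quarter period, shrinking the lookup range 4x,
    analogous to how an NFE can cut computations and comparisons.
    """
    x = math.fmod(x, 2 * math.pi)          # periodicity: sin(x) = sin(x + 2*pi)
    if x < 0:
        x += 2 * math.pi
    sign = 1.0
    if x > math.pi:                        # half-wave symmetry: sin(x) = -sin(x - pi)
        sign, x = -1.0, x - math.pi
    if x > math.pi / 2:                    # quarter-wave symmetry: sin(x) = sin(pi - x)
        x = math.pi - x
    # linear interpolation between breakpoints on [0, pi/2]
    step = (math.pi / 2) / segments
    i = min(int(x / step), segments - 1)
    x0 = i * step
    y0, y1 = math.sin(x0), math.sin(x0 + step)
    return sign * (y0 + (y1 - y0) * (x - x0) / step)
```

With 16 segments the worst-case error is on the order of $10^{-3}$, while a hardware table only needs 17 breakpoint values.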
\begin{figure}[h] \centering\includegraphics[width=.7\columnwidth]{figs/Sec3/swarm_arch.png} \caption{\small The system architecture of our swarm intelligence accelerator, supporting both model-based and model-free autonomy paradigms with a nonlinear function evaluator and a linear processing unit. } \label{fig:swarm_arch} \end{figure} \subsubsection{Hybrid-Digital Mixed-Signal Circuit Architecture} \ \newline \indent Swarm algorithms need to support various swarm sizes in dynamic environments. The required bit precision increases from 3-bit to 8-bit as the swarm size grows from 2 to 20 agents. TD-MS MACs show energy advantages over their digital counterparts at low bit widths, but exhibit higher energy with increasing operand size (Fig.~\ref{fig:TDMS}\textcolor{blue}{c}). To address this issue, we propose a hybrid-digital mixed-signal (HD-MS) MAC kernel, where computation is purely TD-MS for bit-width$\leq$5, and a hybrid of TD-MS and digital for 6$\leq$bit-width$\leq$8. Fig.~\ref{fig:swarm_hdms}\textcolor{blue}{a} shows the circuit schematic of the HD-MS design. The HD-MS MAC kernel consists of a conventional TD-MS multiplier, a 5-8-bit TD-MS controller, and a 5-8-bit digital adder-shifter. The TD-MS multiplier computes $\leq$5-bit operations. The TD-MS controller and digital adder-shifter reconfigure the multiplier to a higher bit width with seamless shift-and-add operations. Fig.~\ref{fig:swarm_hdms}\textcolor{blue}{b} demonstrates that HD-MS has energy benefits at higher precision compared with TD-MS. Compared with a digital implementation, HD-MS exhibits an 81\% (for 3-bit) to 31\% (for 8-bit) reduction in energy per MAC. \begin{figure}[h] \centering\includegraphics[width=\columnwidth]{figs/Sec3/HDMS_v2.pdf} \caption{\small (a) Circuit schematic of the HD-MS design, including the 5-bit TD-MS kernel and the digital peripherals to enable efficient scaling to 8-bit. (b) Energy/MAC (normalized to a digital implementation) for TD-MS and HD-MS implementations.
We observe that HD-MS outperforms both TD-MS and digital designs for large swarm sizes.} \label{fig:swarm_hdms} \vspace{-0.1in} \end{figure} \subsection{Evaluation} \label{subsec:swarm_testchip} Our swarm intelligence accelerator is fabricated in a 65~nm CMOS process (Fig.~\ref{fig:swarm_testchip}\textcolor{blue}{a}). Fig.~\ref{fig:swarm_testchip}\textcolor{blue}{b} demonstrates its scalability with bit precision, indicating an energy of 0.22~pJ/MAC (for 3-bit) to 1.76~pJ/MAC (for 8-bit). The average energy efficiency varies from 9.1~TOPS/W (for 3-bit) to 1.1~TOPS/W (for 8-bit). This bit-precision scalability allows efficient computation for various swarm sizes. \begin{figure}[h] \centering\includegraphics[width=\columnwidth]{figs/Sec3/swarm_testchip.pdf} \caption{\small (a) Swarm intelligence chip die photo. (b) Measured energy per MAC across different bit widths at VCC = 0.4, 0.6, and 0.8~V.} \label{fig:swarm_testchip} \vspace{-0.15in} \end{figure} We mounted the chip on a robotic car (Fig.~\ref{fig:swarm_demo}\textcolor{blue}{a} and Fig.~\ref{fig:swarm_demo}\textcolor{blue}{b}). The platform interfaces with a Raspberry Pi, sensors, motor controllers, and radios. We implement four example swarm intelligence algorithms, namely path planning, pattern formation, predator-prey, and joint exploration, where the first two are model-based and the last two are model-free. Fig.~\ref{fig:swarm_demo}\textcolor{blue}{c} shows large variations in energy consumption and the number of actions across tasks, illustrating that future robotic platforms need to support a wide variety of algorithms, as the complexity of environments and tasks can change dramatically. \begin{figure}[h] \vspace{-0.1in} \centering\includegraphics[width=\columnwidth]{figs/Sec3/swarm_demo.pdf} \caption{\small (a) Our swarm intelligence accelerator mounted on a robotic car with peripheral circuits. (b) Experimental setup.
(c) Energy and performance for different template swarm algorithms.} \label{fig:swarm_demo} \vspace{-0.15in} \end{figure} \section{Neuro-inspired Computing on the Edge} \label{sec:slam} Spiking neural networks (SNNs), with their bio-inspired sparse, spike-based, temporally coded computation, offer an energy-efficient alternative for edge robotic tasks. This section presents our incorporation of SNNs into SLAM and prey-capture tasks for ultra-low-power robotic applications~\cite{yoon202031,yoon2020neuroslam}. \subsection{Simultaneous Localization and Mapping Algorithms} \label{subsec:slam_algo} Simultaneous localization and mapping (SLAM) forms an essential component of many autonomous navigation applications. SLAM algorithms can be broadly classified into two categories: visual-based and neuro-based. Visual-based SLAM requires the robot to identify its position from the beginning of motion and generate a map of the movement using only the images captured along the way. Previous approaches like probabilistic SLAM and keyframe-based SLAM remain inadequate given the constrained power budget of edge systems. Neuro-based SLAM performs the computation in a more energy-efficient way and is applicable to ultra-low-power edge robotics. Neuroscientific studies of rodent brains have revealed their phenomenal capacity to efficiently localize themselves (Fig.~\ref{fig:ratslam_mapping}). Place cells and head-direction cells have been identified as neuronal circuits tuned to respond at a particular position and direction of motion, respectively. Our NeuroSLAM accelerator takes inspiration from the RatSLAM~\cite{ratslam} algorithm, mimicking this neuromorphic connectivity and incorporating bio-inspired hardware to achieve ultra-low-power SLAM.
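To give a flavor of pose-cell dynamics, the following toy model implements one update step of a 1-D ring attractor: local excitation through a circular Gaussian kernel, global inhibition via normalization, and a shift of the activity bump for path integration. It is a deliberately simplified, rate-based stand-in for the spiking attractor network in the chip; all parameters are illustrative.

```python
import numpy as np

def attractor_step(activity, velocity, kernel_width=2.0):
    """One update of a 1-D ring attractor (toy pose-cell stand-in).

    Local recurrent excitation uses a circular Gaussian kernel, global
    inhibition is modeled by normalization, and path integration shifts
    the activity bump by the (integer) velocity input.
    """
    n = len(activity)
    idx = np.arange(n)
    gap = np.abs(idx[:, None] - idx[None, :])
    dist = np.minimum(gap, n - gap)                    # circular distance on the ring
    weights = np.exp(-(dist ** 2) / (2 * kernel_width ** 2))
    activity = np.maximum(weights @ activity, 0.0)     # local excitation + rectification
    activity /= activity.sum()                         # global inhibition (normalize)
    return np.roll(activity, velocity)                 # path integration shift
```

Repeated application keeps a single stable bump of activity whose position tracks the integrated velocity, which is the mechanism the pose-cell array exploits.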
\begin{figure}[h] \vspace{-0.1in} \centering\includegraphics[width=.75\columnwidth]{figs/Sec4/ratslam_mapping.png} \caption{\small Mapping between the position of a rodent and the excited place cells in the rodent's brain.} \label{fig:ratslam_mapping} \vspace{-0.1in} \end{figure} \subsection{Circuit and System of NeuroSLAM Accelerator} \subsubsection{Architecture and Circuit Block} \ \newline \indent The flow of data in the NeuroSLAM accelerator is shown in Fig.~\ref{fig:neuroslam_intro}\textcolor{blue}{a}. The captured image is first compared with the previous image for visual odometry, estimating the displacement since the previous capture. This is followed by template matching with previous images to detect any possibility of loop closure. The translation is added to the previous position to find the current direction of motion with digital head-direction cells, and the path integration is injected into the pose cell array. The pose cell array is built from an oscillator-based continuous attractor network in a spiking-neural-network fashion, as shown in Fig.~\ref{fig:neuroslam_intro}\textcolor{blue}{b}. The output is extracted to calculate the experience map. \begin{figure}[h] \centering\includegraphics[width=.92\columnwidth]{figs/Sec4/neuroslam.pdf} \caption{\small (a) Overview of our NeuroSLAM accelerator. (b) Spiking continuous attractor network mimicking the rodent's pose cell behavior.} \label{fig:neuroslam_intro} \vspace{-0.15in} \end{figure} \subsubsection{Test Chip Measurement} \ \newline \indent The NeuroSLAM test chip is fabricated in 65~nm technology (Fig.~\ref{fig:neuroslam_results}\textcolor{blue}{a}). The template matching, odometry, and path integration are carried out at low bit resolution with an efficient attractor network. The chip achieves SLAM with only 23.82~mW. The dependence of power consumption on input voltage is shown in Fig.~\ref{fig:neuroslam_results}\textcolor{blue}{b}.
The attractor network shows a high compute efficiency of 8.79~TOPS/W. Our NeuroSLAM accelerator demonstrates the incorporation of neuro-inspired analog hardware into a digital pipeline to enable edge intelligence in severely energy-constrained systems. \begin{figure}[h] \centering\includegraphics[width=.9\columnwidth]{figs/Sec4/neuroslam_results.pdf} \caption{\small(a) NeuroSLAM chip die photo. (b) Measured operational frequency and power consumption of the NeuroSLAM accelerator.} \label{fig:neuroslam_results} \vspace{-0.2in} \end{figure} \subsection{Neuro-Inspired End-to-End Spike-Only Processing} Another neuro-inspired system exploration is the first autonomous sensing-to-actuation, end-to-end, spike-only processing pipeline for hexapod robots~\cite{lele2021end}. The goal is to demonstrate the functionality of spike-only processing and evaluate the potential of event-driven processing modalities. As shown in Fig.~\ref{fig:snn_cl}, an event camera/dynamic vision sensor (DVS) is used as the sensory input to generate an asynchronous event stream. The information is processed through an SNN to activate one of three gait selection neurons. The central pattern generator (CPG) is trained such that each gait selection neuron activates a gait in a different direction to allow controlled movement. A task of identifying and approaching the nearest target is demonstrated on this platform. The ultra-low energy consumption (2.55~mJ/step) of this system with dedicated spiking hardware (e.g., Loihi) highlights the potential of neuro-inspired event-driven systems for edge applications. \begin{figure}[t!] \centering\includegraphics[width=.75\columnwidth]{figs/Sec4/snn_cl_v2.pdf} \vspace{-0.07in} \caption{\small Bio-inspired closed-loop end-to-end spike-only robot. (a) Event camera mounted on a hexapod provides (b) an input stream to (c) an SNN for processing and (d) selecting a gait to approach the nearest object.
(e) The gait is executed by a spiking CPG causing leg movements.} \label{fig:snn_cl} \vspace{-0.25in} \end{figure} \section{Benchmarking and Software Infrastructure} \label{sec:benchmarking} Edge robotics is a cross-layer research field, spanning environment modeling, autonomy algorithms, and runtime systems down to onboard compute architecture and circuits. The interaction between the layers impacts the efficacy and performance of the system~\cite{krishnan2020sky}, requiring software infrastructure for interdisciplinary research. Specifically, we need a platform that can systematically benchmark each of these individual layers and also capture end-to-end cross-layer execution characteristics. Recently, several platforms for aerial robots have been proposed. MAVBench~\cite{boroujerdian2018mavbench} is a platform for evaluating physical-model-based aerial robots. It comprises a closed-loop simulator and a benchmark suite of computational kernels spanning perception, planning, and control. AirLearning~\cite{krishnan2019air} and PEDRA~\cite{anwar2018navren} are simulation suites and benchmarks for learning-based aerial edge robots. They provide a rich set of virtual worlds, including indoor and outdoor environments, to enable autonomy generalization. In addition, PEDRA supports swarm intelligence with different collaboration paradigms. These benchmarking and software infrastructures have inspired a proliferation of studies on edge robotic systems, such as reliability, system design methodology, and memory hierarchy optimization. Based on MAVBench, RoboRun~\cite{boroujerdian2021roborun} presents a robot runtime that leverages spatial-aware computing to dynamically improve performance and energy in heterogeneous operating environments. MAVFI~\cite{hsiao2021mavfi} proposes a fault injection framework for end-to-end reliability characterization of robotic workloads, portable to robot operating system (ROS)-based applications.
Based on AirLearning, Skyline~\cite{wan2021roofline}, Autopilot~\cite{krishnan2021machine}, and AutoSoC~\cite{krishnan2021autosoc} propose a visual performance model and automated design space exploration frameworks to design optimal onboard compute for aerial robots. Based on PEDRA, Anwar et al.~\cite{anwar2020autonomous} present a transfer-learning-based approach to reduce the onboard computation required to train a neural network for autonomous navigation. Wan et al.~\cite{wan2021analyzing} and FRL-FI~\cite{wan2022frlfi} propose application-aware lightweight fault detection and mitigation techniques to enable reliable autonomy under hardware faults, for both learning-based and swarm-based intelligence. Zeng et al.~\cite{zeng2021decentralized} develop a mathematical framework for solving multi-task reinforcement learning problems based on the policy gradient method. Anwar et al.~\cite{anwar2021multi} evaluate the robustness of swarm robotic systems under adversaries. Yoon et al.~\cite{yoon2019hierarchical} present a novel hierarchical memory system with STT-MRAM and SRAM to support real-time learning-based robotic exploration. We believe a holistic benchmarking and simulation infrastructure will uncover further cross-layer research findings across the various fields of edge robotics. \section{Challenges and Opportunities} \label{sec:challenges} Robotic computing is a rising area that opens many research challenges and opportunities. At the device and circuit level, embedded non-volatile memory (e.g., RRAM, ferroelectric memories) and monolithic 3D integration will provide opportunities for high-performance and energy-efficient robotic computing. At the architecture level, the robotic computing platform needs to be adaptive and reconfigurable for various scenarios and support a diverse set of applications, including both DNNs and SNNs.
At the system level, a holistic benchmarking suite and a generic framework for mapping autonomy algorithms to heterogeneous hardware platforms will benefit the robotic computing development process. At the algorithm level, robots need the ability to learn lifelong and to learn from sparse, limited data. The boom in swarm intelligence requires more effective and robust distributed learning across multiple agents. \section{Conclusion} \label{sec:conclusion} This paper investigates circuit and system technologies for energy-efficient edge robotics. We present three robotic accelerators for edge reinforcement learning, swarm intelligence, and neuro-inspired mapping and localization, with an emphasis on novel mixed-signal circuit technologies. We summarize robotic benchmarking and simulation frameworks along with how they foster research efforts on cross-layer robotic systems. Finally, we discuss the challenges and opportunities for next-generation energy-efficient robotic computing platforms. \section*{Acknowledgements} The authors would like to thank Anvesha Amaravati, Saad Bin Nasir, Insik Yoon, Ningyuan Cao, Muya Chang, Jong-Hyeok Yoon, Aqeel Anwar Malik, and Justin Ting for their technical support. This work was supported in part by C-BRIC, one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA. \bibliographystyle{ieeetr}
\section{Introduction} One can view the large literature on higher-moment portfolio optimization as a spectrum of assumptions, varying from the most general and weakest to the most restrictive and strongest. These articles can be classified in different contexts. An important context is whether parametric or nonparametric modeling is used. Interestingly, these two paradigms have also been discussed in other fields of research such as machine learning, where kernel methods such as the kernel support vector machine (kernel-SVM) or nonlinear kernel dimensionality reduction are nonparametric approaches, while methods like the variational auto-encoder (VAE) are examples of parametric approaches. These two paradigms have their roots in statistics, where both parametric and nonparametric approaches are used to solve statistical problems. An abstract way of explaining the two paradigms is to consider the parametric approach as a way to build a finite-dimensional approximation of the probability distribution of returns, while in the nonparametric approach no explicit form of distribution is assumed, and our ignorance of reality prevents us from making strong assumptions about it. In reality, the space of distributions of asset returns is extremely complex. Thus, using parametric distributions imposes a finite-dimensional structure on the space of return distributions and therefore on the moments of the returns, which enter the objective function of portfolio optimization. The importance of the utility function is well understood, especially if a Taylor series expansion of utility around the expected return is written to reveal the connection of utility with higher moments such as skewness and kurtosis.
Thus,
\begin{equation}
E(u(R))=u(E(R))+\frac{u''(E(R))}{2!}\,\sigma^2_R+\frac{u'''(E(R))}{3!}\,M^3_R+\cdots
\end{equation}
where the first-order term $u'(E(R))\,E[R-E(R)]$ vanishes in expectation, $\sigma^2_R$ is the variance of the return, and $M^3_R$ is the third moment about the mean, which captures skewness. The following utility functions are used in the literature:
\begin{equation}
\begin{split}
u(R)&=\ln(\lambda R) \\
u(R)&=\sqrt{\lambda R} \\
u(R)&=-e^{-\lambda R}
\end{split}
\end{equation}
However, the third utility function is used most often for its nice properties. Taking the expectation of the utility after Taylor expansion produces:
\begin{equation}
\begin{split}
E(u(R))&=-e^{-\lambda E(R)}\Big(1-\frac{\lambda}{1!}E[R-E(R)]+\frac{\lambda^2}{2!}E\big[(R-E(R))^2\big]-\frac{\lambda^3}{3!}E\big[(R-E(R))^3\big] \\
&\quad+\frac{\lambda^4}{4!}E\big[(R-E(R))^4\big]+O(R^5)\Big)
\end{split}
\end{equation}
Thus, the following optimization problem is obtained:
\begin{equation}
\min_{\omega}\; -\omega'\mu+\lambda_1 \omega' \Sigma \omega-\lambda_2\,\omega'M_3 (\omega \otimes \omega) +\lambda_3\, \omega' M_4(\omega \otimes \omega \otimes \omega)
\end{equation}
where $\otimes$ stands for the Kronecker product and $\lambda_1=\frac{\lambda^2}{2!}$, $\lambda_2=\frac{\lambda^3}{3!}$, $\lambda_3=\frac{\lambda^4}{4!}$. \citet{Glawischnig2013} use an iterative approach to solve this objective function without using polynomial goal programming (PGP). They start with $\lambda$ equal to twenty and reduce it at each step down to two, to avoid short selling of more than one hundred percent. The algorithm is multi-start to obtain robust results. The trick \citet{Glawischnig2013} use is that in the first iteration the skewness and kurtosis are calculated and then held fixed, turning the full problem into a valid quadratic optimization that quadratic solvers can handle; however, this iterative approach changes the nature of the optimization problem and neglects the third- and fourth-order polynomial terms, which could result in a large difference.
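Assuming $M_3$ and $M_4$ are the co-skewness and co-kurtosis matrices laid out in Kronecker form ($n \times n^2$ and $n \times n^3$), the four-moment objective above can be evaluated directly. The sketch below uses plain sample estimators for the moment matrices, which may differ from the estimators used in the cited papers.

```python
import numpy as np

def four_moment_objective(w, mu, sigma, m3, m4, lam1, lam2, lam3):
    """Objective: -w'mu + lam1*w'Sigma w - lam2*w'M3(w (x) w)
    + lam3*w'M4(w (x) w (x) w), with (x) the Kronecker product."""
    ww = np.kron(w, w)
    www = np.kron(ww, w)
    return (-w @ mu
            + lam1 * (w @ sigma @ w)
            - lam2 * (w @ m3 @ ww)
            + lam3 * (w @ m4 @ www))

def moment_matrices(returns):
    """Sample Sigma (n x n), M3 (n x n^2), M4 (n x n^3) from a
    T x n matrix of returns, using centered sample averages."""
    x = returns - returns.mean(axis=0)
    T = x.shape[0]
    sigma = x.T @ x / T
    m3 = sum(np.outer(r, np.kron(r, r)) for r in x) / T
    m4 = sum(np.outer(r, np.kron(np.kron(r, r), r)) for r in x) / T
    return sigma, m3, m4
```

A useful sanity check is the identity $\omega' M_3 (\omega \otimes \omega) = E[(\omega' x)^3]$ for centered returns $x$, and analogously for $\Sigma$ and $M_4$, which ties the Kronecker layout back to the central moments of the portfolio return.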
\citet{Glawischnig2013} and \citet{Jondeau2006} use broadly similar ideas, and their approach has an advantage over PGP since the weights of the optimization arise naturally from the Taylor approximation of utility rather than from the arbitrary selection used in PGP. \section{Parametric and nonparametric modeling} The motivation comes from several intuitions, such as constraining the structure of the return distributions of financial securities and approximating the complex nature of asset returns. \citet{Adcock2014} uses a coherent multivariate probability distribution assumption for asset returns in order to apply Stein's lemma and construct the mean-variance-skewness efficient hyper-surface. The assumption is that asset returns are given by the convolution of a multivariate elliptically symmetric distribution and a multivariate distribution of non-negative random variables, so that efficient portfolios can be computed using quadratic programming on the efficient surface. The multivariate skew-normal (MSN) distribution, first introduced by \citet{azzalini1996} and applied in finance by \citet{Adcock1999}, lies at the center of many portfolio selection methods. One motivation for using the MSN is the simplicity of maximizing utility, especially an exponential utility of return, as explained in \citet{Adcock2005} and \citet{Landsman2019}; a drawback of this approach, explained in \citet{Adcock2005}, arises when the preference parameter is too high or very small. A bigger disadvantage is that choosing an exponential utility function directly shapes the mean-variance-skewness efficient surface. Another drawback is that the generalization of this approach to kurtosis and higher moments is not straightforward. A generalization of the MSN can be seen in \citet{SAHU2003}, where analytic forms of the densities are obtained and their distributional properties are studied.
These parametric approaches to modeling return distributions are very diverse: for instance, \citet{Jondeaua2003} use a generalized Student-t distribution, and \citet{Mencia2009} use a location-scale mixture of normals with maximum likelihood to infer the parameters. The log-normal distribution has also been used in the literature to model asset returns, but its skewness is a function of the mean and the variance, not a separate skewness parameter. An interesting example that combines parametric (log-normal) modeling of the portfolio distribution with goal programming is \citet{Changetal2008}. \citet{Glawischnig2013} and \citet{Jondeau2006} are examples of the nonparametric approach to higher-moment portfolio optimization, since no distribution is assumed. The problem with the skewness term is that it makes the optimization non-convex and therefore intractable. This is the motivation of \citet{Konno1993}, who, without any assumption on the form of the probability distribution of returns, approximate the third-order skewness term by a piecewise linear function; however, this approximation should be tested on more experimental data to see to what extent it is valid. \citet{Konno1998} uses a similar approach and resolves the third-order nonlinearity of skewness by representing it as a difference of two convex functions and then using branch and bound to solve the mean-variance-skewness portfolio optimization problem. A more detailed and visual derivation of \citet{Konno1998} can be found in \citet{Konno1995} and \citet{Konno2005}. The problem with the ideas in \citet{Konno1993}, \citet{Konno1998}, \citet{Konno1995}, and \citet{Konno2005} is that they cannot easily be generalized to higher moments, and their complexity grows quickly.
Another example of nonparametric higher-moment portfolio optimization is based on the concept of the shortage function; the geometric representation of the mean-variance-skewness portfolio is illustrated in \citep{Kerstens2011}. \section{A priori, interactive and posteriori} The literature on higher-moment portfolio optimization can also be classified by how preferences enter the optimization process; three paradigms are discussed in the literature, namely a priori, interactive, and posteriori methods. A priori methods, such as goal programming and the utility function method, are used when the preferences of the investor are known beforehand. In goal programming, each single-objective optimization problem is first solved regardless of the other objectives. Then a final scalar optimization problem is solved with weights that need to be fixed. Even if all possible weights are checked, some Pareto efficient solutions may still be missed. Examples of using goal programming for higher-moment portfolio optimization are \citep{Aksarayli2018} and \citep{Bergh2008}. Although it is a simple algorithm, the choice of appropriate weights is debatable. It can also produce solutions that are not Pareto efficient, and an additional algorithm is then needed to project them back onto the Pareto efficient set. Goal programming has many variants, as explained in \citep{Aouni2014} and \citep{Tamiz2013}: \begin{enumerate} \item lexicographic; \item weighted, see for example \citep{Chang2011}; \item polynomial, see for example \citep{Chang2008},\citep{Chunhachinda1997},\citep{Davies2009},\citep{Lai1991},\citep{Mhiri2010},\citep{Proelss2014}, and also \citep{Ghahtarani2013} if robustness with respect to some coefficients of the optimization is a concern as well; \item stochastic; \item fuzzy. \end{enumerate} Another paradigm is to use posteriori methods, where the target is finding the entire Pareto efficient frontier.
Such methods include the linear weighting method, the weighted geometric mean, Normal Boundary Intersection (NBI) \citep{Audet2008}, Modified Normal Boundary Intersection (MNBI), Normal Constraint, multiple-objective branch-and-bound, the epsilon-constraint method, and Pascoletti-Serafini scalarization. Finally, there is the interactive paradigm, where the investor iteratively solves the optimization and uses feedback from the solutions to find Pareto optimal ones. An example of using epsilon-constraints for portfolio optimization can be seen in \citep{Xidonas2010},\citep{Xidonasetal2010},\citep{Xidonas2011}, which also include an interactive filtering process to account for investor preferences. Methods like NBI are computationally expensive, since an optimization problem must be solved at each iteration, but on the other hand they have a geometric intuition that can also be used for other methods, as explained in \citep{Kanafi2015}. \section{Pascoletti Serafini Scalarization (SP)} SP has two parameters, a reference point $a$ and a direction $r$. The method originates in \citep{Pascoletti1984} and is generalized in \citep{Eichfelder2008}. The novelty of their approach lies in the conditions that bound the parameter $a$ to a restricted hyperplane, but it can only be used in two dimensions and is therefore not applicable to our four-dimensional objective function that covers skewness and kurtosis as well. \citep{Khorram2014} attempts to find the restricted set for the parameter $a$, but only the trivial point zero for $a$ and the simple EP cone are considered. \begin{figure} \includegraphics[width=\linewidth]{equivalence.png} \caption{Equivalence of multiobjective optimization algorithms} \label{fig:bigPicture} \end{figure} \subsection{Shortage Function (SF)} \citep{Briec2004} introduced the shortage function as a portfolio performance measure in the traditional mean-variance portfolio framework.
The original shortage function is computed by solving the following problem: \begin{equation} \label{eq-SF} \begin{split} \max \ \delta \\ E[R(y^{k})]+\delta g_{E}&\leq E[R(x)] , \\ Var[R(y^{k})]-\delta g_{V} &\geq Var[R(x)] , \\ Sk[R(y^{k})]+\delta g_{S}&\leq Sk[R(x)] , \\ \sum x_i&=1 \ ,x_{i}\geq 0 , \ i=1,\hdots,n \end{split} \end{equation} where $g$ is the direction vector; the effect of choosing $g$ is well described in \citep{Kerstens2012}. A good explanation of how the shortage function can be useful in the mean-variance-skewness portfolio framework is given in \citep{Briec2007}. The connection between the shortage function and the NBI method is explained in the present paper, whereas the connection between the shortage function and polynomial goal programming (PGP) is well described in \citep{Briec2013}. Now the following special case of the shortage function, called the modified shortage function (MSF), is defined; it is useful for showing the connection with the NBI method. \renewcommand{\thesubsection}{\arabic{subsection}} \newtheorem{MSFdefinition}{Definition}[subsection] \begin{MSFdefinition}{The modified shortage function (MSF) is defined as the solution of the following optimization problem}\label{MSFdefinition} \begin{equation} \label{eq-MSF} \begin{split} \max \ \delta \\ E[R(y^{k})]+\delta g_{E}&= E[R(x)] , \\ Var[R(y^{k})]-\delta g_{V} &= Var[R(x)] , \\ Sk[R(y^{k})]+\delta g_{S}&= Sk[R(x)] , \\ \sum x_i&=1 \ ,x_{i}\geq 0 , \ i=1,\hdots,n \end{split} \end{equation} \end{MSFdefinition} \subsection{NBI and SP} In this section, the equivalence of NBI and the MSF is proved. One of the most important scalarization techniques is the normal boundary intersection (NBI) method, which reads \begin{equation} \label{eq-NBI} \begin{split} \max \ s \\ \Phi\beta+s\bar{n}&=f(x)-f^{*} \\ s \in R ,& \ x\in\Omega \end{split} \end{equation} The optimization problems in \eqref{eq-NBI} are solved for different $\beta\in R^{m}_{+}$ with $\sum_{i=1}^{m} \beta_{i}=1$.
Here $f^{*}$ denotes the so-called ideal point, the matrix $\Phi\in R^{m\times m}$ consists of the columns $f(x^i)-f^{*} \ ,i=1,\hdots,m$, and the vector $\bar{n}$ is defined as the unit normal to the hyperplane pointing toward the negative orthant. The drawback of this method is that not all minimal points can be found as solutions of NBI. The following lemma shows the direct connection between NBI and modified SP. \renewcommand{\thesubsection}{\arabic{subsection}} \newtheorem{NBISPLemma}{Lemma}[subsection] \begin{NBISPLemma}{\citep{Eichfelder2008}}\label{NBISP} A point ($\bar{s},\bar{x}$) is a maximal solution of NBI with $\beta \in R^{m}$ , $\sum_{i=1}^{m}\beta_{i}=1$, if and only if $(-\bar{s},\bar{x})$ is a minimal solution of $\overline{SP}(a,r)$ with $a=f^{*}+\Phi\beta$ and $r=-\bar{n}$. \end{NBISPLemma} \subsection{NBI and MSF} Another connection, between NBI and the modified shortage function, is proved in the next proposition: \renewcommand{\thesubsection}{\arabic{subsection}} \newtheorem{NBIMSFtheorem}{Proposition}[subsection] \begin{NBIMSFtheorem}{}\label{NBIMSFtheorem} The modified shortage function scalarization is equivalent to NBI under the following substitutions: \begin{equation} \begin{split} s&=\delta \\ f_{1}(x)&=E[R(x)] \\ f_{2}(x)&=V[R(x)] \\ f_{3}(x)&=Sk[R(x)] \\ \Phi \beta+f^{*}&=c \\ \bar{n}&=g \end{split} \end{equation} where $c=\begin{bmatrix} E[R(y^{k})] \\ Var[R(y^{k})] \\ Sk[R(y^{k})] \end{bmatrix}$ \begin{proof} The proof is a straightforward substitution.
\end{proof} \end{NBIMSFtheorem} \subsection{SP and SF} \renewcommand{\thesubsection}{\arabic{subsection}} \newtheorem{equivalencetheorem}{Proposition}[subsection] \begin{equivalencetheorem}{}\label{equivalence} The shortage function scalarization is equivalent to the Pascoletti-Serafini scalarization under the following substitutions: \begin{equation} \begin{split} \delta&=-t \\ a_{1}&=E(R(y^{k'}))=2E(R(x))-E(R(y^{k})) \\ a_{2}&=V(R(y^{k'}))=2V(R(x))-V(R(y^{k})) \\ a_{3}&=S(R(y^{k'}))=2S(R(x))-S(R(y^{k})) \\ r_{1}&=g_{E} \\ r_{2}&=g_{V} \\ r_{3}&=g_{S} \end{split} \end{equation} \begin{proof} Since $\max \ -t=\min \ t$ and the reference point can be set to any point in $R^3$, substituting the reference point $a$ and the direction $r$ in the SP constraint, which is $a+tr-f(x)\in K$, and choosing the cone $K$ to be the trivial cone $R^{3}_{+}$, the proof is complete. \end{proof} \end{equivalencetheorem} \subsection{NBI and goal programming} It is shown here that the polynomial goal programming (PGP) method, which is widely used in the portfolio optimization literature, has a close connection with NBI. PGP is defined as follows: \renewcommand{\thesubsection}{\arabic{subsection}} \newtheorem{PGPdefinition}{Definition}[subsection] \begin{PGPdefinition}{}\label{PGPdefinition} Polynomial goal programming (PGP) is defined as: \begin{equation} \begin{split} PGP(\alpha,\beta)&=\min \{d_{1}^\alpha+d_{3}^{\beta};d_{1}=z_{1}^{*}-z_{1},z_{2}=1,d_{3}=z_{3}^{*}-z_{3} \} \\ z_{1}^{*}&=\max \{z_{1};z_{2}=1 \} \\ z_{3}^{*}&=\max \{z_{3};z_{2}=1 \} \end{split} \end{equation} \end{PGPdefinition} \renewcommand{\thesubsection}{\arabic{subsection}} \newtheorem{NBIPGPtheorem}{Proposition}[subsection] \begin{NBIPGPtheorem}{}\label{NBIPGPtheorem} A solution to the NBI problem is also a solution to the PGP portfolio optimization problem.
\end{NBIPGPtheorem} \begin{proof} Since $(x^{*},s^{*},\lambda^{*})$ is the solution of NBI, it satisfies the first-order KKT conditions: \begin{equation} \label{eq-stationarityForNBI} \begin{split} \nabla_{X}F(x^{*})\lambda^{*}&=0 \\ 1+\hat{n}\lambda^{*}&=0 \end{split} \end{equation} where $\lambda\in R^{3}$ collects the multipliers of the three equality constraints, namely the return, variance, and skewness constraints. On the other hand, the first-order KKT conditions for PGP can be decomposed over two sets of coordinates. Stationarity with respect to the first set of coordinates $(\omega_{1},\omega_{2},\omega_{3})$ gives: \begin{equation} \label{eq-stationarityForFirstSet} \nabla_{\omega}Z \ \mu^{\star}=0 \end{equation} Stationarity with respect to the second set of coordinates $(d_{1},d_{3})$ yields: \begin{equation} \label{eq-stationarityForSecondSet} \begin{split} \alpha d_{1}^{\alpha -1}+\mu^{(1)}&=0 \\ \mu^{(2)}&=0 \\ \beta d_{3}^{\beta -1}+\mu^{(3)}&=0 \end{split} \end{equation} Expanding $1+\hat{n}\lambda^{*}=0$ in \eqref{eq-stationarityForNBI} gives: \begin{equation} \label{expandedInnerProduct} 1+\hat{n}_{1}\lambda_{1}^{*}+\hat{n}_{2}\lambda_{2}^{*}+\hat{n}_{3}\lambda_{3}^{*}=0 \end{equation} Simplifying \eqref{eq-stationarityForSecondSet} produces: \begin{equation} \label{simplified} 1+\alpha \frac{d_{1}^{\alpha -1}}{\mu^{(1)}}=0 \end{equation} Combining \eqref{simplified} and \eqref{expandedInnerProduct} and solving for $\alpha$ and $\beta$ gives: \begin{equation} \label{alphabeta} \begin{split} \alpha&=\frac{\hat{n}_{1}\lambda_{1}^{*}+\hat{n}_{2}\lambda_{2}^{*}+\hat{n}_{3} \lambda_{3}^{*}}{d_{1}^{\alpha-1}} \\ \beta&=-\frac{\mu^{(3)}}{d_{3}^{\beta-1}} \end{split} \end{equation} Equivalently, $(\omega^{*},d^{*},\mu^{*})$ is the solution of the PGP problem, and since $F$ and $X$ in the NBI problem coincide with $Z$ and $\omega$ in the PGP problem, for any solution and Lagrange multipliers there exist equivalent ones under suitable substitutions
for $\alpha$ and $\beta$, as in \eqref{alphabeta}, and the proof is complete. \end{proof} \section{Methodology} There are quality measures that can be used to decide which algorithm is better than another. These measures are well explained in \citep{Eichfelder2008}, namely the coverage error ($\epsilon$), the uniformity level ($\delta$), and the cardinality (number of points). \\ These three measures are conflicting in nature, and for a rigorous analysis another multiobjective optimization on top of the main optimization problem could be formed. Satisfying these types of measures is what most researchers refer to as adaptive considerations. In another context, there are two important aspects in any multiobjective optimization, namely accuracy and diversity. The former forces the solutions to converge to the Pareto frontier, while the latter makes the efficient set as equidistant as possible. There are many other measures in the literature, such as the hypervolume indicator, but implementing some of them makes the algorithm very slow, and convergence is not even proved for all of them. \begin{figure} \includegraphics[width=\linewidth]{methods.png} \caption{Multiobjective optimization methods} \label{fig:methods} \end{figure} \\ So far, mostly Pascoletti-Serafini methods have been discussed in the present paper, but there are other ideas in the literature, as depicted in Figure~\ref{fig:methods}. Set-oriented methods steer a set of solutions at each iteration, such as \citep{Hernandez2013} for the cell mapping method or \citep{Dellnitz2005} for subdivision algorithms. In the present paper, two adaptive algorithms for higher-moment multiobjective portfolio optimization are given. The first one is an adaptive epsilon-constraint method, while the second one is not based on scalarization and has its roots in \citep{Hillermeier2001}, recently developed further by many researchers, as in \citep{Martin2018} and \citep{Schutze2020}, and generalized for problems with inequality constraints in \citep{Beltran2020}.
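For concreteness, two of the quality measures named above, the uniformity level and the cardinality, can be computed for any approximation set as in the sketch below (synthetic points; the coverage error would additionally require knowledge of the true front and is therefore omitted):

```python
import numpy as np

def uniformity_level(points: np.ndarray) -> float:
    """Smallest pairwise distance among approximation points.

    A larger value means the points do not cluster; together with the
    cardinality it describes how evenly the front is covered.
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)   # ignore zero self-distances
    return float(d.min())

# Synthetic approximation of a 3-objective (mean, variance, skewness) front
rng = np.random.default_rng(1)
front = rng.random((50, 3))
print("cardinality:", len(front), "uniformity:", uniformity_level(front))
```

The conflict between the measures is visible here: adding points raises the cardinality but typically lowers the uniformity level.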
Both of the proposed algorithms in this section are based on KKT conditions, but the approaches are slightly different. Both methods are designed to produce equidistant Pareto frontier points. The equidistance parameter in the first algorithm is $\alpha$, while in the second algorithm it is called $\tau$, to mimic the variables in the related historical articles. So both methods are adaptive in the sense of producing equally spaced points on the efficient frontier, but no other provisions are made for the uniformity and cardinality quality measures, since they are expected to produce good results; otherwise they would add to the complexity of the algorithms. The first algorithm is based on the epsilon-constraint method, which is a special case of Pascoletti-Serafini, while the second is based on continuation methods; the connections are shown in Figure~\ref{fig:methods}. \subsection{Adaptive Epsilon Constraint} The adaptive epsilon-constraint scalarization (ECS) is \begin{equation} \label{PKepsilon} \begin{split} \min &\ f_{k}(x) \\ f_{i}(x) &\leq \epsilon_{i} , \ i \in \{1,\hdots,m\} \setminus \{k\} \\ x &\in \Omega \end{split} \end{equation} The scalar optimization problem \eqref{PKepsilon} can be reformulated as \\ \begin{equation} \label{eq-PKepsilon} \begin{split} \min \ t \\ \epsilon_{i}-f_{i}(x) &\geq 0 , \ i\in \{1,\hdots,m\} \setminus \{k\} \\ t-f_{k}(x)&\geq 0 \\ g_j(x)&\geq 0 , \ j\in \{1,\hdots,p\} \\ h_{l}(x)&=0 , \ l\in \{1,\hdots,q\} \\ t \in R &, \ x \in R^{n} \\ \end{split} \end{equation} Theorem 2.27 in \citep{Eichfelder2008} shows how SP can be related to the epsilon-constraint method via Lagrange multipliers.
Thus, \eqref{eq-PKepsilon} is equivalent to SP(a,r) with the following substitutions for $a_i$ and $r$: \begin{equation} \begin{split} a_{i}&=\epsilon_i \ \forall i \in \{1,\hdots,m\} \setminus \{k\} \\ a_{k}&=0 \\ r&=e_{k} \end{split} \end{equation} The full procedure is shown in Algorithm~\ref{alg:algorithm-label}. \begin{algorithm}[H] \caption{Adaptive $\epsilon$-constraint method for mean-variance-skewness } \label{alg:algorithm-label} Input: Choose the desired number $N_1$ of discretization points for the range of the function $f_1$ (i.e. in direction $v^1=(1,0,0)^T$) and $N_2$ for the range of the function $f_2$ (i.e. in direction $v^2=(0,1,0)^T$) \\ Step 1: Solve the optimization problems $\min_{x \in \Omega} f_{i}(x)$ with minimal solution $x^{min,i}$ and minimal value $f_i(x^{min,i})=\epsilon_{i}^{min}$ for $i=1,2$, as well as $\max_{x \in \Omega} f_{i}(x)$ with maximal solution $x^{max,i}$ and maximal value $f_{i}(x^{max,i})=\epsilon_{i}^{max}$ for $i=1,2$ \\ Step 2: Set $L_i=\frac{\epsilon_{i}^{max}-\epsilon_{i}^{min}}{N_i}$ and solve the problem $P_3(\epsilon)$ \\ for all parameters $\epsilon \in E$ with $E:=\{ \epsilon=(\epsilon_1,\epsilon_2)\in R^2 \mid \epsilon_i=\epsilon_{i}^{min}+\frac{L_{i}}{2}+l_i L_i$ \\ for $l_i=0,\hdots,N_{i}-1 , \ i=1,2 \}$ \\ Determine the set $A^{E}=\{ (\epsilon,\bar{x},\bar{\mu}) \} $ where $\bar{x}$ is a minimal solution of $(P_{3}(\epsilon))$ with parameter $\epsilon$ and Lagrange multiplier $\bar{\mu}$ for the constraints $f_i(x) \leq \epsilon_{i}$ \\ $,i=1,2$, for $\epsilon \in E $ \\ Step 3: Determine the set $D^{H^0,f}:=\{f(x) \mid \exists \epsilon \in R^{2}, \mu \in R^{2}_{+} \text{ with } (\epsilon,x,\mu)\in A^{E} \} $ \\ Input: Choose $y\in D^{H^{0},f}$ \ with $y=f(x^{\epsilon})$ and $(\epsilon ,x^{\epsilon},\mu^{\epsilon}) \in A^{E}$; if $y$ is a sufficiently good solution, then stop.
Otherwise, if additional points in the neighborhood of $y$ are desired, choose a distance $\alpha \in R$, $\alpha>0$, in the image space and the number of desired new points $\bar{n}=(2k+1)^2-1$ (for a $k \in N$) and go to Step 4. \\ Step 4: Set $\epsilon^{ij}:=\epsilon+i \cdot \frac{\alpha}{1+(\mu_{1}^{\epsilon})^{2}} \begin{bmatrix} 1 \\ 0 \\ \end{bmatrix} + j \cdot \frac{\alpha}{1+(\mu_{2}^{\epsilon})^{2}} \begin{bmatrix} 0 \\ 1 \\ \end{bmatrix} $ \\ for all \\ $(i,j) \in \{ (i,j)\in Z^{2} \mid i,j \in \{-k,\hdots,k\},(i,j) \neq (0,0) \} $ and solve problem $(P_{3}(\epsilon^{i,j}))$ \\ If there exists a solution $x^{i,j}$ with Lagrange multiplier $\mu^{i,j}$, then set $A^{E}:=A^{E} \cup \{ (\epsilon^{i,j}, x^{i,j} , \mu^{i,j}) \}$. \\ Go to Step 3. \\ Output: The set $D^{H_{0},f}$ is an approximation of the set of weakly efficient points \end{algorithm} \begin{figure}[H] \includegraphics[width=\linewidth]{EC2zoom.png} \caption{Pareto front using the adaptive epsilon-constraint method with 2500 points} \label{fig:EC2zoom} \end{figure} The simulation results for the adaptive epsilon-constraint method are illustrated in Figure~\ref{fig:EC2zoom}. As explained in the previous section, the epsilon-constraint method is just a special case of SP, and the algorithm above is used for multiobjective portfolio optimization with three objectives, namely return, variance, and skewness. \subsection{Adaptive Multi-start Pareto Tracer} There are two main ideas in this approach: (1) the KKT conditions for a single objective are generalized to the multiobjective case, as described in \citep{Hillermeier2001}; (2) a predictor-corrector scheme first predicts the next move in decision space and then corrects it by a multiobjective gradient descent. Algorithm~\ref{alg:AMPT} is a modification of \citep{Martin2018} that runs in a multi-start fashion, shapes the objective-space distribution, and customizes the method for portfolio optimization with three objectives, namely mean, variance, and skewness.
Consider the multiobjective optimization problem defined below: \begin{equation} \begin{split} \label{eq-PT-MOP} \min \ &F(x) \\ s.t. \ h(x)&=0 \end{split} \end{equation} where $F:R^{n}\rightarrow R^{k}$ is a vector of objectives. In \eqref{eq-PT-MOP}, $F$ is a three-dimensional vector of mean, variance, and skewness, and the decision space has dimension $n$, the number of assets or factors in a multifactor investment framework. The constraint $h$ in \eqref{eq-PT-MOP} requires that the allocations to the different assets sum to one. A predictor-corrector method is developed in \citep{Hillermeier2001} by considering \begin{equation} \label{eq-Hiler} \tilde{F}(x,\alpha)=\begin{bmatrix} \sum_{i=1}^{k}\alpha_{i}\nabla f_{i}(x) \\ \sum_{i=1}^{k}\alpha_{i}-1 \end{bmatrix}=0 \end{equation} The set of KKT points of \eqref{eq-PT-MOP} is contained in the zero set of $\tilde{F}$, which is the idea behind many continuation methods along $\tilde{F}^{-1}(0)$ as written in \eqref{eq-Hiler}. A simple characterization of the tangent vectors to the Pareto set is then obtained from the Jacobian of $\tilde{F}$: \begin{equation} \label{eq-Hiler-prime} \tilde{F}'(x,\alpha)\begin{bmatrix} \nu \\ \mu \end{bmatrix} =\begin{bmatrix} \sum_{i=1}^{k}\alpha_{i}\nabla^{2} f_{i}(x) & \nabla f_{1}(x) & \hdots & \nabla f_{k}(x) \\ 0 & 1 & \hdots & 1 \end{bmatrix} \begin{bmatrix} \nu \\ \mu \end{bmatrix}=\begin{bmatrix} 0 \\ 0 \end{bmatrix} \end{equation} Thus the tangent vector in \eqref{eq-Hiler-prime} can be expressed as \begin{equation} \label{eq-vmu} \nu_{\mu}=-W^{-1}_{\alpha}J^{T}\mu \end{equation} where $W_\alpha$ in \eqref{eq-vmu} is defined as \begin{equation} W_\alpha:=\sum_{i=1}^{k}\alpha_{i}\nabla^{2}f_{i}(x) \in R^{n\times n} \end{equation} and $J$ is defined by \begin{equation} J=J(x)=\begin{bmatrix} \nabla f_{1}(x)^{T} \\ \vdots \\ \nabla f_{k}(x)^{T} \end{bmatrix} \in R^{k \times n} \end{equation} Now $d$ has a special meaning: it expresses the first-order approximation of the movement in objective space for
infinitesimal step sizes, and is defined as \begin{equation} \label{eq-d} d:=J\nu \end{equation} Using \eqref{eq-vmu} and the definition of $d$ in \eqref{eq-d} gives \begin{equation} \label{eq-jmuIsD} J\nu_{\mu_{d}}=-JW^{-1}_{\alpha}J^{T}\mu=d \end{equation} Each possible choice of $d$ generates a different distribution of points on the Pareto front. Since the resulting $\nu_{\mu}$ are tangent to the Pareto set and the aim is an even spread of points on the Pareto front, the directions $d$ should be selected so that they form an orthonormal basis of the tangent space to the Pareto front at $F(x)$. A natural way to do this is to use the QR factorization of $JW^{-1}_{\alpha}J^{T}$ and to select \begin{equation} d_{i}:=q_{i+1} , \ i=1,\hdots,k-1 \end{equation} after which $\mu$ can be solved for. It is now possible to form the predictor $p:=x+t\nu$; to obtain an evenly distributed set of solutions along the Pareto front, the following approximation for $t$ can be chosen \begin{equation} \begin{split} \norm{F(x_{i})-F(x_{i+1})} &\approx \tau \\ t&=\frac{\tau}{\norm{J\nu_{\mu}}} \end{split} \end{equation} The predictor part explained so far moves along the tangent direction of the Pareto set to create equidistant Pareto front points. The next part is the corrector, which addresses convergence and is explained in \citep{Fliege2009}; the present paper implements it for the corrector step of the algorithm. Other methods can be used for this step, such as \citep{Povalej2014}, which approximates the second-derivative matrices instead of evaluating them, although both methods have a superlinear rate of convergence. The corrector implemented in the present paper is based on the minimization of the following optimization problem.
\begin{equation} \label{eq-corrector} \begin{split} \min \ g(t,s)&=t \\ s.t. \ \nabla F_{j}(x)^{T}s+\frac{1}{2}s^{T}\nabla^{2}F_{j}(x)s-t&\leq 0 , \ (t,s)\in R \times R^{n} \end{split} \end{equation} Thus, the full predictor-corrector algorithm is shown in Algorithm~\ref{alg:AMPT}. \begin{algorithm}[H] \caption{Adaptive Multi-start Pareto Tracer for mean-variance-skewness } \label{alg:AMPT} Input: create bundles of starting points to prepare for the multi-start algorithm \\ Repeat the predictor-corrector loop: \\ Predictor part: \\ Step 1: calculate $\mu$ by solving \eqref{eq-jmuIsD} \\ Step 2: calculate the direction $\nu_{\mu}$ from \eqref{eq-vmu} \\ Step 3: update the predictor position in decision space by $p:=x+t\nu_{\mu}$ \\ Corrector part: \\ Step 4: solve \eqref{eq-corrector} and update $x$ \end{algorithm} \begin{figure} \includegraphics[width=\linewidth]{PT1.png} \caption{Adaptive multi-start Pareto Tracer} \label{fig:PT1} \end{figure} Simulation results are shown in Figure~\ref{fig:PT1}. Since the current algorithm has many parameters to tune, an evolutionary algorithm could be combined with it into a hybrid algorithm in order to reach all parts of the Pareto front faster. \section{Conclusions} The paradigms in higher-moment multiobjective optimization are critically reviewed, and the connections between some of them are explained in the present paper. It has been proved that the shortage function method can be seen as a Pascoletti-Serafini scalarization. Finally, two algorithms for portfolio optimization are suggested. The first is based on the scalarization paradigm and is called the adaptive epsilon-constraint method, while the second is a type of continuation method, called the adaptive multi-start Pareto Tracer, which bundles different local solutions to provide a global Pareto front through both exploration and exploitation.
\section{Future Works} The first suggested algorithm can be modified to handle variable ordering structures, as explained in \citep{Eichfelder2014} and \citep{Eichfelder2012}: instead of a fixed ordering cone $K$, a variable cone is considered, and each point can have a different ordering corresponding to a different cone. The second suggested algorithm can be further developed by hybridizing it with evolutionary algorithms, such as a genetic algorithm, to make it faster. Another theoretically interesting line of research is to determine whether the continuation method of the second algorithm can be seen as a generalization of Pascoletti-Serafini, since the continuation framework works directly on the KKT conditions of a multiobjective optimization problem, while scalarization exploits the KKT conditions of a single-objective problem. \bibliographystyle{agsm}
\section{Gaussian Mutations in Genetic Algorithms} \begin{itemize} \item {\bf Introduction to genetic algorithms} \item {\bf Self-adaptation} \item {\bf The Gaussian mutation operator} \item {\bf Implementation of Gaussian mutation results} \item {\bf Conclusion} \end{itemize} \section{Introduction to genetic algorithms} A genetic algorithm is a type of stochastic search algorithm that operates on the fundamental principles of natural selection. The algorithm works on given string structures, typically representing organisms, which gradually evolve; their fitness is characterized by survival and reproduction. The genetic algorithm implements the survival-of-the-fittest rule, with a randomized but structured exchange of information (Gholizadeh, 2013). With every succeeding generation, new strings are created using the fittest strings of the previous generations as a reference. The ``winners'' are the individuals whose genes enable them to survive and reproduce. With each successive selection, more resources are retained. The algorithm simulates this process and uses it to locate the optimum of objective functions, making it suitable for solving optimization problems. In summary, the genetic algorithm (GA) uses neuro-evolutionary concepts and artificial intelligence to design better neural networks (Roetzel et al., 2020). Stripped of its intricacies, it essentially takes in a set of candidates and evolves it toward its optimal state. Figure 1 shows the abbreviated process for a cluster of carbon atoms being evolved into the lowest-energy carbon structure. \begin{figure}[ht] \centering \includegraphics[height=3in]{figure1.eps} \caption{The GA's process of determining the buckminsterfullerene as the lowest-energy configuration of 60 carbon atoms. It has a carbon population and knows it wants to solve for the lowest-energy structure.
It then generates a set of random configurations, which are analyzed for their fitness based on their energy levels. The best ones mate and create a new generation, and this process repeats until an ultra-low-energy generation is created (the buckminsterfullerene structure). It took scientists over 3 years to solve this, while the GA can complete the process in about 4 hours (self-made).} \end{figure} GAs have been applied to a variety of fields, including inverse kinematic problems for advanced robotic designs for transhumeral prosthetics (Števo et al., 2019) and tumor tracking in combination with intermittent Kalman predictors, where genetic algorithms solve combinatorial optimization problems to infer measurements of tumor motion (Aspeel et al., 2019). Currently, the popular usage of GAs falls in the category of generating an ANN capable of determining a very precise solution from an infinite set of possibilities, more specifically for a search or optimization problem. Genetic algorithms are considered a subset of artificial intelligence and are part of the broader scope of evolutionary computing (Eiben, 2003). \section{Self-adaptation} What is currently considered one of the most exciting fields of research for evolutionary computing and genetic algorithms (Hinterding et al., 2005) is the concept of self-adaptation in GAs, especially for numeric functions. Self-adaptation is the property of a genetic algorithm to adapt its own algorithm to a specific problem while calculating the solution for it. This can be done for one or more facets of evolution, and approaches for self-adapting both mutation strength and population size, for example, have been successful in initial testing (Hinterding et al., 2005). The mutation genetic operator introduces genetic diversity within the genetic algorithm's population of chromosomes. It can be likened to biological mutation.
There are many types of mutation operators, including bit-string, boundary, and Gaussian mutation, most of which introduce a random value into a specific gene, influencing the expression of the offspring. The Gaussian mutation operator has proven to be an optimal and popular choice for self-adaptation in genetic algorithms (Zhang et al., 2015). In Gaussian mutation, a random unit Gaussian distribution value is added to a selected gene by the operator. The value is added to each element of the gene carrier's vector, producing a new offspring (Figure 2). \begin{figure}[ht] \centering \includegraphics[height=0.25in]{figure2.eps} \caption{Gaussian mutation of the a) parent to form the b) offspring, causing variance (Dorronsoro, 2004).} \end{figure} Aside from Gaussian mutation, there are other popular operators, including the one-point crossover reproduction operator (Figure 3) and the bit-flipping mutation operator (Figure 4). \begin{figure}[ht] \centering \includegraphics[height=1in]{figure3.eps} \caption{One-point crossover of parents a) and b) to form the c) and d) offspring from reproduction. A subsequence of the parents, represented as strings, is swapped to create two new offspring (Dorronsoro, 2005). } \ \centering \includegraphics[height=0.25in]{figure4.eps} \caption{The bit-flipping mutation of the a) parent to form the b) offspring. A single binary bit in the original parent string is flipped to create a new string for a different offspring (Dorronsoro, 2005).} \end{figure} Nevertheless, Gaussian mutation operators remain widely used in genetic algorithms, in part because many object-oriented programming languages provide Gaussian distribution values, and because of the Gaussian curve's (Figure 5) role in probability density functions and standard deviation.
The use of Gaussian values is promising for mutation simulations and for introducing new information into the population of the GA. \begin{figure} \centering \includegraphics[height=2in]{figure5.png} \caption{A selection of normal distribution probability density functions (PDFs). Both the mean, $\mu$, and variance, $\sigma^2$, are varied. The red curve on the graph represents the standard normal distribution. The key is given on the graph (Inductive Load, 2008).} \end{figure} \section{The Gaussian mutation operator} Take $x \in [a, b]$ as a real variable. The Gaussian mutation operator $M_g$ changes the variable $x$ to \begin{center} $M_g(x) := \min(\max(N(x, \sigma),a),b)$ \end{center} \noindent The operator also leverages the Gauss error function. Recall that the solutions in a genetic algorithm are the chromosomes. These chromosomes go through mutation (random gene alterations), crossover (swapping genetic material), and selection (only the fittest survive). Mutation operators take a single parent chromosome and induce mutation on some of the selected genes. The offspring produced by the Gaussian mutation operator is given by \begin{center} $x^{'}_{i} = \sqrt{2} \, \sigma (b_i - a_i)\,\mathrm{erf}^{-1}(u^{'}_{i})$ \end{center} \noindent where $x^{'}_{i}$ is the offspring, $\sigma$ is a fixed parameter for all variables, and $[a_i, b_i]$ is the range of the random gene $x_i$. $\mathrm{erf}^{-1}$ is the inverse of $\mathrm{erf}$, the Gauss error function, defined by $\mathrm{erf}(y) = \frac{2}{\sqrt{\pi}}\int_{0}^{y} e^{-t^{2}}\,dt$. The inverse Gauss error function is applied to $u^{'}_{i}$, which is computed from a random value $u_i$ drawn from the range $(0,1)$ ("Mutation Algorithms for Real-Valued Parameters (GA)", 2018); this formula can be defined using IFTTT (if this then that) notation. With evolutionary programming, the manipulation of parameters is done through self-adaptation.
This learning format allows the GA to continually tweak the strategy parameters as it iterates through generations in search of the optimum, essentially searching for the parameter values themselves to increase performance. Parametric optimization values are subject to change; their fluctuations depend on the function being solved. With Gaussian mutation, convergence is reached much more efficiently, and domain searching and solution tuning improve substantially (Hartfield, 2010), thereby increasing the performance of the GA. The shape of the Gaussian mutation can also be controlled by applying a parameter to it. This mutation operator is called the $\emph{q}$-Gaussian mutation, where the shape of the mutation distribution is controlled via a real parameter $\emph{q}$ (Tinós et al., 2011). The $\emph{q}$-Gaussian therefore operates on a different distribution than the traditional Gaussian mutation. This is known as the q-analog of the normal curve, and its distribution is symmetric about 0. Figure 6 shows its probability density function. \begin{figure}[ht] \centering \includegraphics[height=2.5in]{figure6.png} \caption{Probability density plots of $\emph{q}$-Gaussian distributions, mapping the probability density function of the q-Gaussian distribution itself (IkamusumeFan, 2014).} \end{figure} $\emph{q}$-Gaussians make the mutation distribution shape of the GA self-adaptive, which makes them effective in solving dynamic optimization problems (Yang et al., 2010), where variable values change over time. \section{Implementation of Gaussian mutation results} The implementation of Gaussian mutation has been successful in solving a variety of optimization problem types.
A notable instance of this is in $\emph{An Improved Real-Coded Genetic Algorithm Using the Heuristical Normal Distribution and Direction-Based Crossover}$, where a multi-offspring improved real-coded genetic algorithm (MOIRCGA) using the Gaussian distribution (in combination with direction-based crossover) was shown to solve constrained optimization problems. The MOIRCGA outperformed a real-coded genetic algorithm (RCGA) that did not leverage the Gaussian distribution in finding a globally optimal solution to the problem (Wang et al., 2019), in part due to its much faster convergence speed than the RCGA (Figure 7). \begin{figure}[ht] \centering \includegraphics[height=3.8in]{figure7.png} \caption{Optimization results for the MOIRCGA as compared to the RCGA in the An Improved Real-Coded Genetic Algorithm Using the Heuristical Normal Distribution and Direction-Based Crossover experiment (Wang et al., 2019). The MOIRCGA was superior to the RCGA and some of the other benchmarked algorithms.} \end{figure} \noindent Within this experiment, the algorithms were tested and compared with other literature, and the problem was used in the parameter optimization of the cantilever beam design with a discrete cross-sectional area to determine the validity of the MOIRCGA; sixteen test functions were evaluated as well. The experiment concludes that "optimization results show that the function value obtained with MOIRCGA is superior to that obtained with RCGA'' (Wang et al., 2019).
\subsection {Robotic appendages.} Genetic algorithms have been cited as one of the most useful forms of artificial intelligence (Cheng, 2011). The algorithms have extensive implications in robotics, especially within movement or functionality (Davidor, 1991). The robotics inverse kinematics problem (IKP) describes "finding a vector of joint variables which produce [the] desired end-effector location" (DeMers et al., 1997). The problem has an infinite number of solutions, making it difficult to solve using classical computation. However, genetic algorithms have been presented to create the best robotic arm (Sekaj et al., 2014) based on trajectory control. In $\emph {Optimization of Robotic Arm Trajectory Using Genetic Algorithm}$, a genetic algorithm that could optimize for energy consumption, operating time, rotation changes, and trajectory was proposed for the industrial robot ABB IRB 6400FHD (Števo et al., 2014). In $\emph {Genetic Algorithm Based Approach for Autonomous Mobile Robot Path Planning}$, a GA with an improved crossover operator was applied to the path planning problem of a computer-vision autonomous robot for movement and navigation. \subsection {Molecular geometry.} The introductory section of this independent study gave the example of determining carbon fullerenes as the lowest energy configuration of sixty-carbon-atom clusters. This example of a genetic algorithm was drawn from $\emph{Molecular Geometry Optimization with a Genetic Algorithm}$, in which a genetic algorithm was used to determine the lowest energy geometry for carbon compounds ranging up to $C_{60}$. The method was not limited to this case, however, as it was capable of determining the lowest energy structure of an atomic cluster in any arbitrary model potential. The original candidate solutions generated were used as a fitness reference to produce a new generation of lower-energy solutions.
The genetic model itself was cited as more accurate than simulated annealing, was stated to have "dramatically outperformed" it, and decreased the computation time for finding carbon fullerenes by over 200$\%$ when compared to a human biochemist (Deaven et al., 1995). In $\emph{First-Principles Molecular Structure Search with a Genetic Algorithm}$, a genetic algorithm was built with the RDKit library in Python to accelerate computational chemistry. The algorithm focused on identifying low-energy conformers for a given molecule using first-principles searches (Supady et al., 2015), and it was successful (Figure 8) in doing so. \begin{figure}[ht] \centering \includegraphics[height=3in]{figure8.png} \caption{"Evaluation of the results for the subset of the Astex Diverse Set. (A) The relative energy of all found conformers as a function of the RMSD to the reference ligand (blue circles). The green squares depict the reference ligand structures after DFT optimization. (B) An overlay between the reference ligand (green) and the best match (blue) is presented together with the corresponding RMSD value" (Supady et al., 2015).} \end{figure} \subsection{Sentiment Lexicon Optimization.} Another use of genetic algorithms is in analyzing emotions, also known as opinion mining or sentiment analysis. In $\emph{ALGA: Adaptive lexicon learning using genetic algorithm for sentiment analysis of microblogs}$, the common problem in sentiment analysis, "improving polarity classification of sentiments in microblogs by building adaptive sentiment lexicons'' (Keshvarz et al., 2017), was solved using a novel genetic algorithm. Across the six datasets on which it was tested, the algorithm achieved over 80$\%$ accuracy. In summary, the algorithm was able to build a highly accurate collection of words and of how their semantics relate to their sentiment orientation (Keshvarz et al., 2017).
\section{Conclusion} \section*{Acknowledgements} This research was conducted independently and presented at a UPenn SEAS functional computing paper competition. Author Okezue Bell is affiliated with both Harvard-MIT and Moravian Academy. \bibliographystyle{unsrt}
\section{Introduction} \label{sec:intro} In intelligent speech processing applications, the Keyword Spotting (KWS) system, including wake-up word detection, plays an important role in human-computer interaction. KWS aims to detect a predefined keyword or a set of keywords in a continuous audio stream. Many studies have proposed robust approaches with high detection accuracy. Alan and Robert adopted dynamic time warping (DTW) for keyword spotting back in 1985 \cite{1}; since then, hidden Markov models (HMM)\cite{2,3,4}, deep neural networks (DNN)\cite{Sun2017CompressedTD, Panchapagesan2016MultiTaskLA,5} and various other neural network structures, including convolutional neural networks (CNN) \cite{Sainath2015ConvolutionalNN}, temporal convolutional neural networks \cite{TemporalCnn1,TemporalCnn2}, recurrent neural networks (RNN)\cite{10.1007/978-3-540-74695-9_23,WOLLMER2013252} and Transformers\cite{kwsTransformer}, have also been proposed for this task. However, the probability of false alarm becomes higher under complex acoustic environments and ambiguous content. Without further adaptation, the KWS system may misclassify fillers as keywords, since some fillers actually sound close to the keywords. These adversarial samples are called confusing words (CW). Moreover, it is expensive to acquire human-recorded adversarial samples for training a KWS system that can accurately classify confusing words, especially when the keywords are customized by the users. In this paper, we discuss the concept of confusing words. We also release a supplemental database, HI-MIA-CW, recorded with the same setup as the HI-MIA\cite{9054423} database. It contains about 16k utterances from 30 new speakers, covering the 12 confusion word patterns in Table \ref{confusion_word} for the keyword of the HI-MIA database. This data is included in our evaluation set.
We then propose several methods to generate adversarial samples simulating real confusing words, employed in an end-to-end approach to address the aforementioned issue. The idea is motivated by the maximum mutual information (MMI) criterion to improve the discriminative power of the model\cite{Povey+2016}. The first technique concatenates the waveforms of real subword audio. The second generates adversarial samples with a text-to-speech (TTS) system. The use of synthesized speech for data augmentation is not new\cite{rygaard2015using,mimura2018leveraging,TurkishSyn}; \cite{lin2020training,huang2021synth2aug} show that synthetic data can help train the keyword spotting model. The third method applies random masking to speech signals to simulate confusing words, i.e., keyword audio interrupted by silence in the middle. Moreover, we use domain embeddings extracted from a pre-trained LSTM domain classifier to help overcome domain shift problems. To the best of our knowledge, we are the first to discuss the concept of confusing words in KWS scenarios and to explore augmentation methods that generate adversarial samples of confusing words to improve the performance of the wake-up word detection system. These augmentation methods achieve significant improvement on the end-to-end KWS model. In particular, with TTS augmentation the false rejection rate drops from 68.60\% to 9.81\% at twenty false alarms per hour, compared with the system trained without the aforementioned confusing words augmentation approaches. The rest of the paper is organized as follows. Section 2 discusses confusing words and presents the released database. Section 3 describes the framework of the CNN-based KWS system. Section 4 presents our augmentation methods. Section 5 discusses the experimental results, and the conclusion is provided in Section 6.
\section{Confusion words} The definition of confusion words changes with the application scenario. In natural language processing (NLP) scenarios, some confusion words with similar meanings but different spellings and pronunciations appear in similar contexts, while other confusion words are misspellings. \cite{wang2013automatic} present a system named Automatic Confusion words Extraction (ACE), which takes a Chinese word as input and automatically outputs its easily misspelled confusion words. Unlike confusion words in the NLP domain, confusion words in the speech domain are words that sound similar to predefined keywords or are part of the keywords. Take the keyword "ni hao, mi ya" (Hello Mia) as an example. Based on the idea of similar pronunciation and fragmentation, we came up with the confusing words in Table \ref{confusion_word}, which shows the phoneme sequences of the keyword and confusion words, where the subscript of each phoneme represents its tone. Confusion words act as adversarial samples that attack the acoustic model in the wake-up word system, causing false alarms that severely disrupt the user experience. In order to compare the performance of models in practical applications, we used the same setup as the HI-MIA database \cite{9054423} to further record 16k utterances with the same 12 confusion word patterns in Table \ref{confusion_word} from 30 new speakers.
The supplemental database HI-MIA-CW is released.\footnote{https://github.com/Mashiro009/HI-MIA-CW}\footnote{http://openslr.org} \begin{table}[h] \small \setlength{\tabcolsep}{1.0mm}{ \centering \caption{Phoneme sequences of the keyword and confusion words} \label{confusion_word} \begin{tabular}{ccc} \toprule Data Type & Words & Phoneme Sequence \\ \midrule Keyword & ni hao, mi ya & N $\text{I}_2$ H $\text{A}_3$ $\text{U}_3$ M $\text{I}_3$ YH $\text{I}_4$ $\text{A}_4$ \\ \midrule \multirow{12}*{Confusion Words} & ni hao mi & N $\text{I}_2$ H $\text{A}_3$ $\text{U}_3$ M $\text{I}_3$ \\ & ni hao, ni hao & N $\text{I}_2$ H $\text{A}_3$ $\text{U}_3$ N $\text{I}_2$ H $\text{A}_3$ $\text{U}_3$\\ & ni hao ya & N $\text{I}_2$ H $\text{A}_3$ YH $\text{I}_4$ $\text{A}_4$\\ & hao mi ya & H $\text{A}_3$ $\text{U}_3$ M $\text{I}_3$ YH $\text{I}_4$ $\text{A}_4$\\ & ni mi ya & N $\text{I}_2$ M $\text{I}_3$ YH $\text{I}_4$ $\text{A}_4$\\ & ni hao & N $\text{I}_2$ H $\text{A}_3$ $\text{U}_3$\\ & mi ya, mi ya & M $\text{I}_3$ YH $\text{I}_4$ $\text{A}_4$ M $\text{I}_3$ YH $\text{I}_4$ $\text{A}_4$ \\ & hao mi, hao mi & H $\text{A}_3$ $\text{U}_3$ M $\text{I}_3$ H $\text{A}_3$ $\text{U}_3$ M $\text{I}_3$\\ & ni hao mi & N $\text{I}_2$ H $\text{A}_3$ $\text{U}_3$ M $\text{I}_1$\\ & hao mi ya & H $\text{A}_3$ $\text{U}_3$ M $\text{I}_1$ YH $\text{I}_4$ $\text{A}_4$\\ & mi ya, mi ya & M $\text{I}_1$ YH $\text{I}_4$ $\text{A}_4$ M $\text{I}_1$ YH $\text{I}_4$ $\text{A}_4$\\ & hao mi, hao mi & H $\text{A}_3$ $\text{U}_3$ M $\text{I}_1$ H $\text{A}_3$ $\text{U}_3$ M $\text{I}_1$\\ \bottomrule \end{tabular} } \vspace*{-0.4cm} \end{table} \section{MODEL ARCHITECTURE} \label{sec:format} In this section, we present our baseline system, which is modified from the CNN-based KWS system \cite{Sainath2015ConvolutionalNN}. 
As shown in Figure \ref{CNN_framework}, our baseline system consists of three modules: (i) a feature extraction module, (ii) a convolutional neural network and (iii) a posterior processing module. The feature extraction module converts the audio signals into acoustic features. 80-dimensional log-mel filterbank features are extracted from speech frames 50 ms long with a 12.5 ms shift. Then we apply a segmental window of 121 frames to generate training samples that contain enough context information as the input of the model. \begin{figure}[th] \centering \includegraphics[width=0.4\textwidth]{model.png} \caption{Framework of the baseline system.} \label{CNN_framework} \end{figure} Our backbone network consists of three convolutional layers, each followed by a max-pooling layer. For all three CNN layers, the kernel size is set to (3,3), the stride is (1,1), and the pooling size is set to (2,2). Two fully connected layers and a final softmax activation layer are applied as the back-end prediction module to obtain the keyword occurrence probability. The acoustic feature sequence is transformed into a posterior probability sequence of selected keywords by the model. We perform the keyword detection algorithm over a sliding window with length $T_s$. Here we use $\textbf{x}^{(i)} = \{x_i, x_{i+1}, \dots , x_{i+T_s} \}$ to denote one input window over the segment $X$ that contains $N$ frames. Then the keyword confidence score is calculated as follows: \begin{equation} conf(X) = \max_{1\leq t\leq N-T_s} P_{keyword}(\textbf{x}^{(t)}) \end{equation} where $P_{keyword}(\textbf{x}^{(t)})$ is the posterior probability of the keyword appearing in the window starting at frame $t$. The KWS system triggers when the confidence score exceeds a predefined threshold. \section{Adversarial Samples} \label{sec:pagestyle} Models that perform well on the test data set might fail in real-life applications where many testing samples are confusion words.
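As a concrete illustration of the sliding-window confidence rule $conf(X)$ defined in the previous section, the posterior processing can be sketched as follows (the posterior track is a stub standing in for the CNN's per-window output; the window length and threshold are illustrative):

```python
import numpy as np

def keyword_confidence(posteriors, window):
    """conf(X) = max over start frames t of P_keyword(x^(t)).

    Here posteriors[t] stands in for the network's keyword posterior for the
    window starting at frame t, so the score is a max over valid start frames.
    """
    n = len(posteriors)
    valid = posteriors[: n - window + 1]           # windows fully inside X
    return float(np.max(valid))

def triggers(posteriors, window=121, threshold=0.8):
    """The KWS system fires once the confidence exceeds the threshold."""
    return keyword_confidence(posteriors, window) > threshold

# Toy posterior track: keyword probability spikes mid-utterance.
post = np.concatenate([np.full(200, 0.05), np.full(5, 0.95), np.full(200, 0.05)])
print(keyword_confidence(post, 121), triggers(post))
```

In the real system the posteriors come from the CNN's softmax output; only the max-and-threshold logic is shown here.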
This problem becomes more important in the case of customized wake-up words defined by the users. In this case, to reduce the performance degradation when applying KWS in unmatched scenarios and to improve the robustness of KWS, we propose three methods to generate adversarial samples for confusion words. \subsection{Waveform Concatenation} To obtain training samples of confusion words, it is natural to use unit selection and waveform concatenation. \cite{concatenate} shows the difference between concatenative and neural TTS systems. We use a Large Vocabulary Conversational Speech Recognition (LVCSR) system to align the audio and the text in a labeled public speech dataset, then truncate the audio to obtain the waveform of each subword of the keyword. Truncated audio may come from different speakers. We simply concatenate the waveforms according to the order of the subwords in keywords and confusion words to generate the adversarial samples. \subsection{Text-to-speech Augmentation} We obtain synthesized data from a Mandarin multi-speaker TTS system \cite{Cai2020}. In this setup, 7k speakers from publicly available datasets and internal datasets are collected and used for synthesis. For each speaker, we first extract the speaker embedding from one utterance using the TTS system. Then 3 kinds of synthesized samples are generated by the multi-speaker TTS system conditioned on the speaker embedding: (i) positive samples whose content is the keyword, (ii) negative samples that do not contain keywords, and (iii) adversarial negative samples that are confusion words whose content is close to the keywords. \subsection{Masked Audio} We apply random masking on keyword samples and use them as adversarial negative data in training to improve the robustness of our KWS model. The KWS model should yield no detection for these masked samples, since they simulate confusion words, i.e., keyword audio interrupted by silence in the middle.
For each positive sample, we generate corresponding masked samples online by replacing 40\%-60\% of the audio signal with Gaussian white noise, unlike SpecAug\cite{specaugment}, which uses the mean value. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{lstmconcat.png} \caption{Framework of the domain embedding system.} \label{lstmconcat_figure} \end{figure} \subsection{Domain Adaptation} Based on the assumption that the distributions of synthetic data and real data are different, we incorporate the domain adaptation method of environmental domain embeddings with TTS augmentation in the training step, in order to improve the robustness of the model and make it fit the distribution of real data better. As shown in Figure \ref{lstmconcat_figure}, we train the KWS system with the domain embedding derived from a pre-trained domain classifier. The method is inspired by \cite{embASR} and \cite{2005.03633}, which applied the domain embedding to incorporate domain knowledge into network training and improved the performance of the keyword classifier in far-field conditions. The domain classifier is trained with samples from different domains, including the real, synthetic and concatenated domains. It consists of two stacked LSTM layers followed by an average pooling layer and a final fully-connected linear layer. Domain embeddings are extracted from the output of the pooling layer, and their dimension is fixed to 128. The domain classifier is trained before the CNN. When we train the CNN model, we extract the domain embedding from the pre-trained domain classifier and concatenate it to the output of the first fully-connected layer. The concatenated features are then fed into two linear layers to predict the posterior probability of the keyword.
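The masked-audio augmentation described above can be sketched as follows (a minimal illustration; treating the mask as one contiguous span and using unit-variance noise are assumptions, while the 40\%-60\% Gaussian-noise recipe follows the text):

```python
import numpy as np

def mask_keyword_audio(signal, rng, lo=0.4, hi=0.6, noise_std=1.0):
    """Replace a random contiguous 40%-60% span of the waveform with Gaussian
    white noise, turning a positive sample into an adversarial negative
    (keyword audio "interrupted" in the middle)."""
    n = len(signal)
    span = int(n * rng.uniform(lo, hi))            # masked length
    start = rng.integers(0, n - span + 1)          # random mask position
    masked = signal.copy()
    masked[start:start + span] = rng.normal(0.0, noise_std, size=span)
    return masked

rng = np.random.default_rng(0)
wave = np.sin(np.linspace(0, 40 * np.pi, 16000))   # stand-in keyword waveform
neg = mask_keyword_audio(wave, rng)
print(len(neg) == len(wave))
```

Generating the mask online, as here, means each epoch sees a freshly masked copy of every positive sample.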
\section{EXPERIMENTAL RESULTS} \label{sec:typestyle} \subsection{Dataset} Natural speech recorded by native speakers and generated adversarial samples are both used for training in our experiments. For natural speech data, the HI-MIA dataset \cite{9054423} is used as the positive samples. The HI-MIA dataset includes speech data recorded by one close-talking microphone and six 16-channel circular microphone arrays. Each utterance contains content with four Chinese characters ``ni hao, mi ya" (Hello, Mia). We only use the recordings from the single-channel close-talking microphone. Samples from 300 randomly selected speakers are used as the training set, and samples from 30 speakers are used as the HI-MIA test set. The Aishell-1 \cite{8384449} dataset is used as the negative samples of real speech data. Utterances from 300 speakers are selected for training, and utterances from 30 speakers are used as the Aishell-test set. For concatenated data, each subword in the keyword has about 3k samples, and we concatenate the waveforms online to synthesize keywords (concat-wake) and confusion words (concat-cw) as a training set. For synthetic data, we have 7k different utterance samples from all speakers synthesized according to the keyword text; they are used as the synthetic positive keyword (synt-wake) training set. We also have samples covering the 12 confusion word patterns with 90k different voices, where utterances from all speakers are used as the synthetic confusion words (synt-cw) training set. We also mask the positive samples online as negative samples as described in Section 4 (mask). In addition, 188k negative sample audios are synthesized with text from Aishell-2 \cite{1808.10583} (synt-neg). In order to compare the performance of models in practical applications, we also used the HI-MIA-CW database as a test set (real-cw).
The statistics of the data we used for training and testing are shown in Table \ref{dataset}, where the term `Real' denotes natural speech, including utterances from the HI-MIA database (Real Positive), Aishell-1 (Real Negative) and HI-MIA-CW (Real Confusion Words). \begin{table}[h] \centering \caption{Dataset statistics (P: positive, N: negative)} \label{dataset} \begin{tabular}{cccc} \toprule Samples & Label & Train & Test \\ \midrule HI-MIA & P & 23k & 2k \\ Aishell-1 & N & 105k & 10k \\ HI-MIA-CW & N & - & 16k \\ Concatenated Keywords & P & 23k & - \\ Concatenated Confusion Words & N & 23k & - \\ Synthesized Keywords & P & 7k & - \\ Synthesized Confusion Words & N & 90k & - \\ Synthetic Negative & N & 188k & - \\ Masked & N & 23k & - \\ \bottomrule \end{tabular} \vspace*{-0.4cm} \end{table} \subsection{Experimental Setup} We preprocess the HI-MIA training set by trimming the beginning silence and force-align the audio with a speech recognition system trained on the AISHELL-2 dataset. For each sample, we obtain the start time of pronouncing the word ``ni" and use the following 121 frames as the final input, where 121 frames are enough for speaking the keyword according to the alignment information. Our models are trained for 100 epochs with a stochastic gradient descent optimizer with Nesterov momentum. The initial learning rate of the optimizer is set to 0.1 and decays when the training loss has not decreased for several rounds. During evaluation, we slide a window with a frame length of 121 over each utterance and detect the occurrence of the expected keyword. Six KWS systems are trained and evaluated regarding different training setups in our experiments: (i) \textbf{baseline}: uses all real samples (the Real Positive set and the Real Negative set), which are shown in Table \ref{dataset}, for training. (ii) \textbf{real+concat-*}: uses all real samples and all concatenated samples (concat-wake and concat-cw) for training.
(iii) \textbf{real+syn-*}: uses all real samples and all synthetic samples (synt-wake, synt-cw and synt-neg) for training. (iv) \textbf{real+mask}: uses all real samples and the masked samples for training. (v) \textbf{real+concat-*+syn-*+mask}: uses all real samples, all concatenated samples, all synthetic samples, and the masked samples for training. (vi) \textbf{real+concat-*+syn-*+mask+EMB}: uses all real samples, all concatenated samples, all synthetic samples, and the masked samples for training; the pre-trained domain classifier is incorporated in this setup. There are two combination sets for evaluation: (i) \textbf{real}: uses the test sets from the Real Positive set and Real Negative set for evaluation. (ii) \textbf{real + real-cw}: in addition to the test sets mentioned above, the natural samples of confusion words (real-cw) are also included. \subsection{Results} \begin{figure}[h] \centering \includegraphics[width=0.35\textwidth]{fa1_1.jpg} \caption{Performances of models on the real test sets} \label{fig:real_test} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.35\textwidth]{fa20_1.jpg} \caption{Performances of models on the real+real-cw test sets} \label{fig:realcw_test} \end{figure} Results are shown in Figures \ref{fig:real_test} and \ref{fig:realcw_test} and Tables \ref{result_baseline1} and \ref{result_baseline2}. Figure \ref{fig:real_test} and Table \ref{result_baseline1} show the performance of models on the real test sets (HI-MIA + Aishell-1) without confusing word testing samples; Figure \ref{fig:realcw_test} and Table \ref{result_baseline2} show the performance of models on the real + real-cw test set. For the real test sets, we choose the false rejection rate under one false alarm per hour as each model's performance criterion.
Table \ref{result_baseline2} presents the KWS performance of the models in terms of the false rejection rate when the false alarm rate per hour is 20, as adding confusion words to the test set makes the task more challenging. \begin{table}[h] \centering \caption{Performances of models trained with different methods on the real test sets (the false rejection (FR) rate (\%) under one false alarm (FA) per hour)} \label{result_baseline1} \begin{tabular}{cc} \toprule Training set & real \\ \midrule baseline & 0.417 \\ \midrule real + concat-* & 1.67 \\ real + syn-* & 1.37 \\ real + mask & \textbf{0.334} \\ real + concat-* + synt-* + mask & 3.29 \\ real + concat-* + synt-* + mask + EMB & 0.523 \\ \bottomrule \end{tabular} \vspace*{-0.4cm} \end{table} \begin{table}[h] \centering \caption{Performances of models trained with different methods on the real+real-cw test sets (the false rejection (FR) rate (\%) under twenty false alarms per hour)} \label{result_baseline2} \begin{tabular}{cc} \toprule Training set & real+real-cw \\ \midrule baseline & 68.60 \\ \midrule real + concat-* & 46.05 \\ real + syn-* & 38.54 \\ real + mask & 63.63 \\ real + concat-* + synt-* + mask & 16.87 \\ real + concat-* + synt-* + mask + EMB & \textbf{9.81} \\ \bottomrule \end{tabular} \vspace*{-0.4cm} \end{table} From Tables \ref{result_baseline1} and \ref{result_baseline2} we can make the following observations. First, the baseline system performs well on the real test sets without confusing word samples. However, its performance degrades dramatically on confusion word examples, which frequently occur in real-life applications. Second, directly adding adversarial samples leads to performance degradation on the real test set, but masked samples help train the system, achieving the best result of 0.334\% on the real test set.
Moreover, after the domain embedding algorithm is applied, the system also maintains its performance on the real test set, achieving 0.523\%. This is because domain adaptation helps the system learn to fit the distribution of real data better and overcome the degradation in performance due to domain shift. Third, the accuracy on the confusion word test set is significantly improved by adding adversarial synthetic samples. It can also be found that adding concatenated samples or masked samples does not improve the performance as much as TTS-synthesized ones. This may be due to insufficient simulation of confusion words when masked samples are added separately, which can mislead the system into basing its judgment on the presence of Gaussian noise. Also, the concatenated samples show steep changes at the splicing breakpoints in the spectral features and do not simulate the confusion words well enough. Fourth, adding synthetic samples along with concatenated and masked samples helps the system better learn the difference between confusing words and keywords, achieving 16.87\% on the real+real-cw testing set. Finally, compared to the baseline without any augmentation, this augmentation setup with the domain adaptation method achieves the best performance on the real+real-cw testing set and shows great robustness in confusing word scenarios, as the false rejection rate under twenty false alarms per hour decreases from 68.60\% to 9.81\%. \section{CONCLUSIONS} \label{sec:majhead} In this paper, we discuss the concept of confusing words, focus on the task of small-footprint keyword spotting in this scenario, and show the effectiveness of generating adversarial samples to train a keyword recognition system. Confusing words that sound very similar to the keywords lead to a significant degradation in system performance.
We release the supplemental database HI-MIA-CW and adopt three augmentation strategies to enhance robustness: concatenated samples, masked samples and synthesized samples, combined with a domain adaptation method. Experimental results show that our proposed methods effectively maintain the accuracy on general real test data and, at the same time, achieve significant improvement under the test condition with confusing word samples. \section{ACKNOWLEDGMENTS} \label{sec:copyright} This research is funded in part by the National Natural Science Foundation of China (62171207), the Fundamental Research Funds for the Central Universities (2042021kf0039), the Key Research and Development Program of Jiangsu Province (BE2019054), the Science and Technology Program of Guangzhou City (201903010040, 202007030011) and Lenovo. \bibliographystyle{IEEEbib}
\section{Supplemental Material} \section{Analysis Details} \textbf{\textit{Gravitational Wave Mismatch Calculation.}} The mismatch between two observed waveforms $h^1(t)$ and $h^2(t)$ is defined as one minus the maximum overlap $\mathcal{O}(h^1,h^2)$, \begin{equation} \mathcal{M}(h^1,h^2) = 1 - \max_{ \{\chi_i\}}\mathcal{O}(h^1,h^2)\,, \end{equation} where the overlap is given by \begin{equation} \mathcal{O}(h^1,h^2) = \frac{\langle h^1|h^2\rangle}{\sqrt{ \langle h^1|h^1 \rangle \langle h^2|h^2 \rangle}}\,\,. \label{eq:overlap} \end{equation} Here, $\langle \cdot | \cdot \rangle$ is a detector-noise weighted inner product and optimization is carried out over a set $\{\chi_i\}$ of parameters impacting the overlap (e.g., shifts in waveform phases, polarization angles, arrival times) \cite{damour:98}. In the simplest case, we can choose $\langle \cdot | \cdot \rangle$ as the frequency-domain noise weighted inner product \cite{cutler:94}, \begin{equation} \langle a|b \rangle_f = 4 \mathrm{Re} \int_0^\infty \frac{\tilde{a}(f) \tilde{b}^*(f)}{S_n(f)} df\,. \label{eq:scalar} \end{equation} Here, $S_n(f)$ is the detector noise power spectral density and $\tilde{a}(f)$ is the Fourier transform of $a(t)$. The real gravitational wave signal $h(t)$ observed by a single detector is given by \begin{equation} h(t) = F_+ h_+ + F_\times h_\times\,\,, \label{eq:antenna} \end{equation} where $F_+$ and $F_\times$ are the detector antenna pattern functions that depend on the sky location of the source and polarization basis (see, e.g., \cite{harry:11}). We now consider two scenarios: (1) A best case in which both $h_+$ and $h_\times$ are measured by two optimally oriented GW detectors at Advanced LIGO design sensitivity (``ZDHP'' for zero-detuning, high-power \cite{LIGO-sens-2010}). (2) The realistic scenario of the two Advanced LIGO interferometers with the sensitivity at the time of GW150914. 
For both cases, we need the two-detector inner product for two detectors $\alpha$ and $\beta$, which is defined \cite{harry:11} as the sum of the single-detector contributions, \begin{equation} \langle h^1|h^2 \rangle_\mathrm{2 det} = \langle h^{1,\alpha}|h^{2,\alpha} \rangle_s + \langle h^{1,\beta}|h^{2,\beta} \rangle_s\,. \label{eq:twodetector} \end{equation} Here, $h^{1,\alpha}$ is waveform 1 as seen by detector $\alpha$ through Eq.~\ref{eq:antenna} and so forth. The single-detector inner product $\langle \cdot | \cdot \rangle_s$ used is that given by Eq.~\ref{eq:scalar} with the exception that we integrate over some frequency interval defined by $[f_\mathrm{min},f_\mathrm{max}]$. In practice, we obtain the necessary Fourier transforms by using the Fast Fourier Transform algorithm after tapering the ends of the time domain signal and padding with zeros so that all waveforms have the same length in the time domain. For scenario (1), we follow \cite{blackman:17} and define an optimal two-detector overlap $\mathcal{O}_\mathrm{opt}$ by choosing detectors oriented so that one detector is maximally sensitive to $h_+$ (and insensitive to $h_\times$) while the opposite is true for the other detector. We then have \begin{equation} \langle h^1|h^2 \rangle_\mathrm{opt} = \langle h_+^1| h_+^2 \rangle_s + \langle h_\times^1| h_\times^2 \rangle_s\,, \end{equation} with $S_n(|f|)$ in Eq.~\ref{eq:scalar} chosen as the Advanced LIGO ZDHP noise power spectral density. $\mathcal{O}_\mathrm{opt}$ is then given by Eq.~\ref{eq:overlap} with $\langle \cdot | \cdot \rangle_\mathrm{opt}$ and the mismatch is obtained as $\mathcal{M}_\mathrm{ZDHP} = 1 - \max \mathcal{O}_\mathrm{opt}$. We optimize over time shifts and polarization angle shifts of the waveforms. Since we consider only the $(2,2)$ GW mode, we simply assume a face-on direction of GW propagation, and orbital phase shifts are identical to polarization phase shifts. See \cite{blackman:17} for further details. 
For scenario (2), we use the inner product of Eq.~\ref{eq:twodetector} with the Advanced LIGO Hanford and Livingston antenna patterns \cite{ligo-antenna} for GW150914 and the parameters given in \cite{ligosys:17}. We employ the actual Hanford and Livingston noise power spectral densities at the time of GW150914 provided at \url{https://losc.ligo.org/events/GW150914/}. We obtain $\mathcal{M}_\mathrm{GW150914} = 1 - \max \mathcal{O}_\mathrm{GW150914}$ for the $(2,2)$ GW mode by optimizing over time shifts, polarization angle shifts, and orbital phase shifts. We neglect contributions from other GW modes. \textbf{\textit{Reduction in Log-Likelihood due to Mismatch.}} In GW parameter estimation, the posterior probability of a BBH parameter vector $\vec{\vartheta}$ is determined from the prior and likelihood. The GW likelihood function (e.g., \cite{LIGO_properties:2016}) is given by \begin{equation} \label{loglikelihood} \mathcal{L}(d|\vec{\vartheta}) \propto \, \mathrm{exp} \left[ -\frac{1}{2} \left \langle h^\mathrm{M}(\vec{\vartheta})-d | h^\mathrm{M}(\vec{\vartheta})-d \right \rangle \right] \, . \end{equation} Here, $d = h^\mathrm{GR} + n$ is the data observed in the detectors consisting of the GR signal (we use ``GR'' as a synonym for ``true'') and detector noise $n$. $h^\mathrm{M}$ is the template waveform generated by some waveform model. The log-likelihood is then \begin{equation} \begin{split} \log \mathcal{L} = \mathrm{C} -\left[ \frac{1}{2} \left \langle h^\mathrm{M}|h^\mathrm{M} \right \rangle + \frac{1}{2} \langle h^{\mathrm{GR}}|h^{\mathrm{GR}}\rangle \right. \\ \left. + \frac{1}{2} \langle n|n \rangle - \langle n | h^\mathrm{M}-h^\mathrm{GR}\rangle - \langle h^\mathrm{M} |h^{\mathrm{GR}} \rangle \right] \, , \end{split} \end{equation} where $C$ is a constant of proportionality. 
Suppose that $h^\mathrm{M}$ is different from the true signal, \begin{equation} h^\mathrm{M} = (1+\epsilon_1)h^\mathrm{GR} + \epsilon_2 h^\bot \, , \end{equation} where $\langle h^\bot | h^\mathrm{GR}\rangle = 0$. Here $\epsilon_1$ and $\epsilon_2$ are numbers and we consider the limit $\epsilon_{1,2} \ll 1$. Any $h^\mathrm{M}$ can be decomposed in this way. The log-likelihood becomes \begin{equation} \begin{split} \log \mathcal{L} = \log \mathcal{L}_0 - \frac{1}{2}\epsilon_1^2 \langle h^{\mathrm{GR}}|h^{\mathrm{GR}}\rangle - \frac{1}{2} \epsilon^2_2 \langle h^{\mathrm{\bot}}|h^{\mathrm{\bot}}\rangle \\ +\epsilon_1 \langle n|h^{\mathrm{GR}}\rangle + \epsilon_2 \langle n|h^\bot\rangle \, , \end{split} \end{equation} where $\log \mathcal{L}_0$ is the log-likelihood when $h^{M}=h^{\mathrm{GR}}$. The expected reduction in the log-likelihood is then \begin{equation} \mathrm{E}[\delta \log\mathcal{L} ] = \frac{1}{2}\epsilon_1^2\langle h^\mathrm{GR}|h^\mathrm{GR}\rangle + \frac{1}{2}\epsilon_2^2\langle h^\bot|h^\bot\rangle \, . \end{equation} We now allow a small bias in the distance to the source by rescaling $h^\mathrm{M}$ by $(1+\epsilon_1)^{-1}$ with which we obtain the convenient expression \begin{equation} \mathrm{E}[\delta \log \mathcal{L} ] = \frac{1}{2} \epsilon_2^2 \langle h^\bot | h^\bot \rangle + \mathcal{O}(\epsilon^3) \, . \end{equation} The mismatch between $h^{\mathrm{GR}}$ and $h^\mathrm{M}$ is \begin{align} \mathcal{M}(h^{\mathrm{GR}},h^\mathrm{M}) &= 1- \frac{ \langle h^\mathrm{GR}|h^\mathrm{M}\rangle}{\sqrt{\langle h^\mathrm{GR}|h^\mathrm{GR}\rangle \langle h^\mathrm{M}|h^\mathrm{M} \rangle }} \\ &= \frac{1}{2}\epsilon_2^2 \frac{\langle h^\bot|h^\bot\rangle}{\langle h^{\mathrm{GR}}|h^{\mathrm{GR}}\rangle} + \mathcal{O}(\epsilon^3) \, , \end{align} where optimization over phase shifts, time shifts, etc. is implicit. The signal-to-noise ratio $\varrho$ is given by $\varrho^2 = \langle h^\mathrm{GR}|h^\mathrm{GR}\rangle$. 
With this, we find \begin{equation} \mathrm{E}[\delta \log\mathcal{L}] \approx \varrho^2 \mathcal{M}\,\,. \end{equation} The posterior probability will be changed by a factor of $e$ when $\delta \log\mathcal{L} = 1$, which can be considered a mild observational inconsistency. Hence, the mismatch $\mathcal{M}$ will begin to have an effect on GW data analysis when \begin{equation} \mathcal{M} \gtrsim \frac{1}{\varrho^2} \, . \end{equation} \section{Numerical Convergence} We carry out additional simulations at coarse-grid resolutions $\Delta x_1=1.00 \, M$ and $\Delta x_3=1.60 \, M$, in addition to our standard-resolution simulations of $\Delta x_2=1.25 \, M$. For our convergence analysis, we choose the vacuum (G0) and the highest density (G4) as two extremes of the simulations we carry out. We focus our analysis on the gravitational waveforms since these are the most important output of our simulations. In Fig.~\ref{fig:vac_convergence}, we show numerical convergence in the Newman-Penrose scalar $\psi_4$ between the different resolutions for the G0 vacuum simulation. We consider phase and amplitude differences separately. The amplitude is defined as \begin{equation} A(t) = \sqrt{\mathrm{Re}[\psi_{4}(t)]^2 + \mathrm{Im}[\psi_{4}(t)]^2}\, , \end{equation} while the phase is defined as \begin{equation} \phi(t) = \mathrm{tan}^{-1}\left( \frac{\mathrm{Im}[\psi_{4}(t)]}{\mathrm{Re}[\psi_{4}(t)]} \right)\, , \end{equation} where $\mathrm{Re}[\psi_4]$ and $\mathrm{Im}[\psi_4]$ are the real and imaginary parts of $\psi_4$, respectively. Our numerical scheme is fourth-order; hence, we expect fourth-order convergence and a self-convergence factor of \begin{equation} Q_s = \frac{\Delta x_2^n-\Delta x_1^n}{\Delta x_3^n-\Delta x_2^n} = 0.3505\,, \end{equation} where $n$ is the order of convergence. In Fig.~\ref{fig:vac_convergence}, we rescale the differences between highest resolution and second-highest (i.e.\ standard) resolution by $1/Q_s$. 
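The quoted self-convergence factors (and the first-order value used below for G4) can be reproduced directly; here $Q_s$ is written as the ratio of the higher-resolution error difference to the lower-resolution one, so that the quoted values 0.3505 and 0.7143 emerge:

```python
def q_s(n, dx1=1.00, dx2=1.25, dx3=1.60):
    """Self-convergence factor for resolutions dx1 < dx2 < dx3 (in M),
    assuming the truncation error scales as dx**n."""
    return (dx2 ** n - dx1 ** n) / (dx3 ** n - dx2 ** n)

print(round(q_s(4), 4))  # fourth order (G0): 0.3505, so 1/Q_s = 2.85
print(round(q_s(1), 4))  # first order (G4):  0.7143, so 1/Q_s = 1.4
```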
These rescaled curves lie essentially on top of the curves for the differences between second-highest and lowest resolution, demonstrating approximate fourth-order convergence. In Fig.~\ref{fig:neg7_convergence} we perform the same analysis for the highest-density simulation G4. In this case, the hydrodynamics plays an important role in driving the coalescence. If our finite-volume implementation dominates the numerical error, we expect second-order convergence when the flow is smooth. However, soon after the start of the simulation, steep density gradients and shocks develop for which our numerical scheme (as any high-resolution shock capturing scheme) is only first-order convergent. Hence, we can only expect first-order convergence. We compute a first-order self-convergence factor $Q_s = 0.7143$, with $1/Q_s=1.4$. Figure~\ref{fig:neg7_convergence} shows that we obtain roughly first-order convergence in GW amplitude and phase. \begin{figure}[t] \hspace{-0.26cm}\includegraphics[width=\linewidth]{figX4} \caption{Fourth-order convergence for the G0 vacuum simulation. The dashed line at $0 \,M$ corresponds to merger, which we define as the maximum of the $h_{22}$ amplitude, and the time is given relative to merger. \textit{Top:} Amplitude differences between our lowest $(\Delta x_3)$, standard $(\Delta x_2)$, and highest-resolution $(\Delta x_1)$ simulations. We scale the differences using the self-convergence factor $1/Q_s=2.85$ corresponding to fourth-order convergence for this choice of resolutions. \textit{Bottom:} Phase angle differences also exhibiting fourth-order convergence.} \label{fig:vac_convergence} \end{figure} \begin{figure}[t] \hspace{-0.26cm}\includegraphics[width=1.0286\linewidth]{figX5} \caption{Convergence of the G4 simulation. The waveforms are aligned at merger, as defined in Figure 1, and all times are given relative to merger. The merger time is $0\, M$, marked with a dashed vertical line. \textit{Top:} Difference in waveform amplitude. 
We scale the difference between $\Delta x_2$ and $\Delta x_1$ by a self-convergence factor of $1/Q_s=1.4$, corresponding to first-order convergence. These are the simulations with the highest gas density and the evolution shows steep density gradients and shocks. Hence, we expect first-order convergence. \textit{Bottom:} Phase angle differences between the different resolution pairs, also exhibiting approximate first-order convergence.} \label{fig:neg7_convergence} \end{figure} In order to clarify how numerical resolution affects the main results of our paper, we have calculated mismatches between various resolutions for the G0 and G4 cases. For the G0 case we find the mismatches to be $1.6 \times 10^{-3}$ between high and medium resolution, $2.9\times 10^{-3}$ between medium and low resolution, and $3.5\times 10^{-3}$ between high and low resolution. For the G4 case the mismatches are $3.5 \times 10^{-5}$ between high and medium resolution, $1.0 \times 10^{-4}$ between medium and low resolution, and $2.3 \times 10^{-4}$ between high and low resolution. Comparing these results with the mismatches listed in Table 1 (in the main paper), we conclude that our main results are independent of numerical resolution.
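Combining these resolution-induced mismatches with the criterion $\mathcal{M} \gtrsim 1/\varrho^2$ derived earlier gives a quick sense of when they could matter. The translation into a critical signal-to-noise ratio below is our own illustration, not a number from the text:

```python
# A mismatch M begins to affect the likelihood once rho^2 * M >~ 1,
# i.e. for signal-to-noise ratios rho >~ 1/sqrt(M).
resolution_mismatches = {
    "G0 high-low": 3.5e-3,
    "G4 high-low": 2.3e-4,
}
for label, m in resolution_mismatches.items():
    rho_crit = m ** -0.5
    print(f"{label}: M = {m:.1e}, relevant only for rho > {rho_crit:.0f}")
```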
\section{Introduction} Dynamic Treatment Regimes (DTRs), also known as adaptive treatment strategies or treatment policies, are a key tool for providing data-driven sequential decision-making support. A DTR is a sequence of decision functions that take up-to-date patient information as input and produce a recommended treatment. Thus, a DTR is a mathematical representation of the sequential decision-making process. This representation allows us to use previously collected decision-making data to estimate an ``optimal'' DTR, where optimality is most often defined in terms of expected outcome. That is, a DTR is optimal if it produces the best outcome, on average, over a patient population. We will use this definition of optimality throughout our work. Each decision in an optimal DTR is made in the service of achieving maximal expected outcome. However, the outcome of any particular individual under an optimal regime may vary widely from this expectation. Indeed, DTRs have been applied in many very challenging areas of medicine, including psychiatry, cancer, and HIV, where patient outcomes are known to be highly variable, or, equivalently from our perspective, difficult to predict. It is with this variability in mind that we consider different methods for assessing the variability in {\em individual outcomes} under a given DTR. Our objective is to quantify for the decision-maker not our certainty about the expectation of outcomes, but rather our uncertainty about what the observed outcome might be {\em for a particular patient.} We begin by formally defining DTRs, and we review point and interval estimation techniques for relevant parameters of the optimal DTR. We then review definitions and existing methods for confidence intervals, prediction intervals, and tolerance intervals. Following this background, we formally describe our problem of interest in the context of using DTRs to provide decision support. 
We will see that the main technical challenge associated with constructing tolerance intervals for DTRs stems from not having a sample drawn from the correct distribution. Thus, our methods will use re-weighting and re-sampling to allow us to apply existing tolerance interval methods in this setting. To help illustrate the technical challenge, we first describe a na\"{i}ve strategy for constructing tolerance intervals whose performance is poor, and we then present two novel strategies for constructing valid tolerance intervals for the response under a given dynamic treatment regime. We present an empirical evaluation of the methods, and we conclude by discussing their implications and directions for future work. \section{Background}\label{ss:background} In the following, we review basic concepts pertaining to DTRs, the estimation of optimal regimes, and concepts and issues surrounding interval estimation and prediction. \subsection{Dynamic Treatment Regimes} DTRs are a mathematical formalism meant to capture the decision-making cycle of information gathering, followed by treatment choice, followed by outcome evaluation. They have been defined at different levels of generality by many authors \citep{Schulte2014,laber14dynamic,laber14setvalued,lizotte12linear,Nahum-Shani2012,Nahum-Shani2012a,lizotte10multiple,shortreed11mlj}. Here, we focus on regimes with two decision points; thus for this work we consider a DTR to be a sequence of two functions $(\pi_1,\pi_2)$ which map up-to-date patient information at the first and second decision points, respectively, to distributions over the space of available treatments at each decision point. We represent the information (covariates) about a given patient at decision point $t$ by $s_t$, which we view as a realization of a random variable $S_t$. 
Similarly, we denote the chosen treatment (action) by $a_t$, which is a realization of $A_t$. For a patient who follows a DTR $(\pi_1,\pi_2)$, we will have $A_1 \sim \pi_1(s_1)$ and $A_2 \sim \pi_2(s_1,a_1,s_2)$. We let $y$ be the observed outcome or {\em reward} attained by a patient after following a regime, and we follow the convention that larger values of $y$ are preferable. For a patient following a given regime, we observe $(s_1, a_1, s_2, a_2, y)$, the {\em trajectory} for that patient. Trajectory data may come from various observational and experimental sources, for example from Sequential Multiple Assignment Randomized Trials (SMARTs) \citep{Nahum-Shani2012,Nahum-Shani2012a,Collins2014}. A SMART is an experimental design under which patients follow a DTR that applies randomly assigned treatments. We will call such a DTR an {\em exploration} DTR or {\em exploration policy.} The goal of running a SMART is analogous to that of running a pragmatic randomized controlled trial---to evaluate the comparative effectiveness of different treatment options in an unbiased way. This comparative effectiveness information can then be used to estimate an {\em optimal} DTR. An optimal DTR is a pair of decision functions $(\pi_1,\pi_2)$ that maximize $\mathrm{E}[Y|S_1,A_1,S_2,A_2;\pi_1,\pi_2]$ where $A_1 \sim \pi_1(S_1)$ and $A_2 \sim \pi_2(S_1,A_1,S_2)$. Thus, an optimal DTR produces maximal expected outcome when applied to a population of patients. In this work, we focus on the setting where the {\em exploration} DTR is stochastic, but the candidate optimal DTRs under consideration are deterministic. \subsection{$Q$-learning} Several methods are available for estimating an optimal DTR from data collected under an exploration DTR. Here, we review one such method called {\em $Q$-learning} \citep{Schulte2014,Huang2015}. 
$Q$-learning works by estimating $Q$ functions ($Q$ for ``quality'') that predict expected outcome given current covariates and treatment choice. In our 2-decision point setting, we have \begin{equation*} Q_2(s_1,a_1,s_2,a_2) = \mathrm{E}[Y|S_1\mathord{=} s_1, A_1\mathord{=} a_1, S_2\mathord{=} s_2,A_2\mathord{=} a_2]. \end{equation*} Note that unlike the expectation in the previous section which averages over patients, $Q_2$ gives the expectation of $Y$ {\em conditioned on particular patient observations and treatment choices.} The definition of $Q_2$ implies an optimal decision function $\pi^*_2(s_1,a_1,s_2) = \arg\max_{a'_2} Q_2(s_1,a_1,s_2,a'_2)$. $Q_2$ can be estimated using any regression method. Having obtained an estimate $\hat{Q}_2$ of $Q_2$, our estimate of the optimal second decision function is $\hat{\pi}^*_2(s_1,a_1,s_2) = \arg\max_{a'_2} \hat{Q}_2(s_1,a_1,s_2,a'_2)$. The optimal $Q$-function for the first decision point produces the conditional mean of $Y$ given $S_1$ and $A_1$ and {\em given that the optimal decision function $\pi^*_2$ is used at the second decision point.} In $Q$-learning, we estimate $Q_1$ by \begin{equation*} \hat{Q}_1(s_1,a_1) \approx \mathrm{E}[\max_{a'_2} \hat{Q}_2(s_1,a_1,S_2,a'_2) |S_1\mathord{=} s_1, A_1\mathord{=} a_1] \end{equation*} where the expectation is over $S_2$ conditioned on $S_1$ and $A_1$. The quantity $\max_{a'_2} \hat{Q}_2(s_1,a_1,S_2,a'_2)$ is sometimes called the {\em pseudooutcome}, and is denoted $\tilde y$. In order to estimate $Q_1$, we compute the pseudooutcome for each trajectory in our dataset, and then regress them on $S_1$ and $A_1$ to estimate $Q_1$. Again, any regression method can be used to estimate $Q_1$, in principle. Our corresponding estimate of the optimal first decision function is then $\hat{\pi}^*_1(s_1) = \arg\max_{a'_1} \hat{Q}_1(s_1,a'_1)$, and our estimate of the optimal DTR is $(\hat{\pi}^*_1,\hat{\pi}^*_2)$. Note that this DTR is deterministic. 
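The two-stage recursion above can be sketched with linear working models. Everything below (the synthetic SMART-like data-generating process, the linear working model, and the coefficient names) is an illustrative assumption, not the paper's analysis; for brevity the stage-2 model conditions only on $(s_2, a_2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic SMART-like trajectories: both treatments randomized (exploration DTR).
s1 = rng.normal(size=n)
a1 = rng.integers(0, 2, size=n).astype(float)
s2 = 0.5 * s1 + 0.3 * a1 + rng.normal(size=n)
a2 = rng.integers(0, 2, size=n).astype(float)
y = s2 + a2 * (1.0 - s2) + rng.normal(scale=0.5, size=n)

def design(s, a):
    # Working model Q(s, a) = b0 + b1*s + a*(b2 + b3*s).
    return np.column_stack([np.ones_like(s), s, a, a * s])

def fit(X, target):
    return np.linalg.lstsq(X, target, rcond=None)[0]

# Stage 2: regress Y on (s2, a2); pi*_2 takes the argmax over a2.
beta2 = fit(design(s2, a2), y)
q2 = lambda s, a: design(s, np.full_like(s, a)) @ beta2
pseudo = np.maximum(q2(s2, 0.0), q2(s2, 1.0))   # pseudooutcome max_a' Q2

# Stage 1: regress the pseudooutcome on (s1, a1).
beta1 = fit(design(s1, a1), pseudo)
q1 = lambda s, a: design(s, np.full_like(s, a)) @ beta1
pi1_hat = (q1(s1, 1.0) > q1(s1, 0.0)).astype(int)  # deterministic estimated rule
```

Here the true stage-2 treatment effect is $a_2(1 - s_2)$, so the estimated rule should recommend $a_2 = 1$ roughly when $s_2 < 1$.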
We focus on $Q$-learning in this work, but several other methods are available for estimating optimal DTRs, including $A$-learning \citep{Blatt2004,Schulte2014}, the closely-related $g$-estimation \citep{Moodie2009,Orellana2010,Barrett2014}, and direct policy search \citep{Zhao2014,Zhao2015}. \subsection{Interval Estimation} For consistency, in the following we use $y$s to represent observed outcomes, and $x$s to represent covariates, even in non-regression settings. \subsubsection{Confidence Intervals} A {\em confidence interval} $({\ell_c},{u_c})$ with level $1 - \alpha$ for a parameter $\theta$ is a functional of a dataset $\mathcal{Y} = \{y_1, ..., y_n\}$ of realizations of a random variable $Y$, with the property that \begin{equation}\label{eq:CIprob} \Pr[\theta \in ({\ell_c},{u_c})] \ge 1 - \alpha. \end{equation} The probability statement (\ref{eq:CIprob}) is over datasets containing i.i.d.\ samples of $Y$. The goal of a confidence interval is to provide confidence information about the estimated location of an underlying distributional parameter. Though not our main focus, confidence intervals are by far the most well-known class of interval estimates, and they are closely related to the prediction and tolerance intervals we will develop and investigate. \subsubsection{Prediction Intervals} A {\em prediction interval} $({\ell_p},{u_p})$ with level $1 - \alpha$ is a functional of a dataset $\mathcal{Y} = \{y_1, ..., y_n\}$ of realizations of a random variable $Y$, with the property that \begin{equation}\label{eq:PIprob} \Pr[Y_{\mathrm{new}} \in ({\ell_p},{u_p})] \ge 1 - \alpha. \end{equation} Here, $Y_{\mathrm{new}}$ represents a single future observation that was not contained in the original data $\mathcal{Y}$. 
The goal of a prediction interval is to provide confidence information about where this new observation might fall. However, we note, as others have \citep{Vardeman2012}, that there is often confusion surrounding the probability statement (\ref{eq:PIprob}). In particular, the statement is over the joint distribution of $Y_1,...,Y_{n},Y_\mathrm{new}$. A prediction interval formed from a dataset traps {\em one additional observation} with probability $1 - \alpha$. It offers no guarantees about trapping more than one additional observation, and indeed no guarantees regarding our confidence in the {\em content} of an interval, that is, of the quantity $F_Y({u_p}) - F_Y({\ell_p})$ where $F_Y$ is the cumulative distribution function of $Y$. (For example, a prediction interval that has content $1.0$ half the time and content $0.9$ half the time has property (\ref{eq:PIprob}) for $\alpha=0.05$, as does an interval that always has content $0.95$.) The well-known normal-theory prediction interval for $Y$ \citep{NeterJohnandWassermanWilliamandKutner1989} is given by \begin{equation} ({\ell_p},{u_p})_{\mathcal{N}} = \bar{y} \pm t_{\alpha/2;n-1}\hat{\sigma}_Y \sqrt{1 + \frac{1}{n}} \label{eq:normPI} \end{equation} where $\bar{y}$ is the sample mean, $\hat{\sigma}_Y$ the sample standard deviation, and $t_{\alpha/2;n-1}$ is the $\alpha/2$ quantile of a $t$-distribution with $n - 1$ degrees of freedom. Note that the validity of (\ref{eq:normPI}) is predicated on normality of $Y$, regardless of sample size. 
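The interval (\ref{eq:normPI}) is straightforward to implement; a minimal sketch using `scipy` (the simulated normal data are illustrative):

```python
import numpy as np
from scipy import stats

def normal_prediction_interval(y, alpha=0.05):
    """bar{y} +/- t_{alpha/2; n-1} * sigma_hat_Y * sqrt(1 + 1/n)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    half = (stats.t.ppf(1.0 - alpha / 2.0, df=n - 1)
            * np.std(y, ddof=1) * np.sqrt(1.0 + 1.0 / n))
    return np.mean(y) - half, np.mean(y) + half

rng = np.random.default_rng(1)
lo, hi = normal_prediction_interval(rng.normal(10.0, 2.0, size=50))
```

Under normality, the interval traps one additional draw with probability $1 - \alpha$, which is easy to confirm by simulation.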
The corresponding prediction interval for $Y|X\mathord{=} x$ in the linear regression setting on $p$ parameters is \begin{equation}\label{eq:normRegPI} ({\ell_p},{u_p})_\mathcal{N} = \hat{y} \pm t_{\alpha/2;n-p} \hat{\sigma}_{Y|X\mathord{=} x} \sqrt{1 + x^\mathsf{T} (\mathrm{X}^\mathsf{T}\mathrm{X})^{-1} x} \end{equation} where $x$ represents the location of a new sample, $\hat{y}$ is the prediction of $\mathrm{E}[Y|X\mathord{=} x]$, $\hat{\sigma}_{Y|X\mathord{=} x}$ is the sample standard deviation of the residuals, $\mathrm{X}$ is the design matrix for the regression, and $t_{\alpha/2;n-p}$ is the $\alpha/2$ quantile of a $t$-distribution with $n - p$ degrees of freedom. Equation (\ref{eq:normRegPI}) is predicated on the normality of $Y|X=x$ and on homoscedasticity of the residuals. \subsubsection{Tolerance Intervals} A {\em tolerance interval} $({\ell_t},{u_t})$ with level $1 - \alpha$ and content $\gamma$ is also a functional of a dataset $\mathcal{Y} = \{y_1, ..., y_n\}$. It has the property that \begin{equation}\label{eq:TIprob} \Pr[F_Y({u_t}) - F_Y({\ell_t}) \ge \gamma] \ge 1 - \alpha. \end{equation} where $F_Y$ is the cumulative distribution function of $Y$. Thus, a tolerance interval formed from a dataset traps at least $\gamma$ of the probability content of $Y$ with probability $1 - \alpha$, where the $1 - \alpha$ probability is over datasets. 
One well-known normal theory approximate tolerance interval for $Y$ with confidence $1 - \alpha$ and content $\gamma$ is given by \cite{KrishnamoorthyKalimuthuandMathew2009} as \begin{equation}\label{eq:normTI} ({\ell_t},{u_t})_\mathcal{N} = \bar{y} \mp \hat{\sigma}_Y \sqrt{\frac{(n - 1) \chi^2_{\gamma;1,1/n}}{\chi^2_{\alpha;n - 1}}} \end{equation} where $\bar{y}$ is the sample mean, $\chi^2_{\gamma;1,1/n}$ is the $\gamma$ quantile of a non-central $\chi^2$ distribution with 1 degree of freedom and noncentrality parameter $1/n$, and $\chi^2_{\alpha;n-1}$ is the $\alpha$ quantile of a $\chi^2$ with $n-1$ degrees of freedom. The corresponding tolerance interval for $Y|X\mathord{=} x$ in the linear regression setting on $p$ parameters is \citep{Young2013} \begin{equation}\label{eq:normRegTI} ({\ell_t},{u_t})_\mathcal{N} = \hat{y} \mp \hat{\sigma}_{Y|X\mathord{=} x} \sqrt{\frac{(n - p) \chi^2_{\gamma;1,1/n^*}}{\chi^2_{\alpha;n - p}}} \end{equation} where $\hat{y}$ is the prediction of $\mathrm{E}[Y|X\mathord{=} x]$, $\hat{\sigma}_{Y|X\mathord{=} x}$ is the sample standard deviation of the residuals, $n^* = \hat{\sigma}^2_{Y|X\mathord{=} x} /\hat{\sigma}^2_{\hat{y}}$ is Wallis' ``effective number of observations'' (\citeyear{Wallis1951}), and $\hat{\sigma}_{\hat{y}}$ is the standard error of $\hat{y}|X\mathord{=} x$. Again, validity of (\ref{eq:normTI}) and (\ref{eq:normRegTI}) is predicated on the normality of $Y$ and $Y|X\mathord{=} x$, respectively; (\ref{eq:normRegTI}) is also predicated on homoscedasticity. \cite{Wilks1941} proposed a non-parametric tolerance interval that assumes only continuity of $F_Y$. The interval is given by the sample values corresponding to the minimum and maximum ranks $r$ for which \begin{equation} (1 - F_{\mathrm{Beta}}(\gamma; n - 2r + 1, 2r)) > 1 - \alpha \label{eq:wilksRanks} \end{equation} where $F_{\mathrm{Beta}}$ is the beta cumulative distribution function. 
Thus, the interval is constructed simply by truncating the sample to the ranks satisfying (\ref{eq:wilksRanks}), and then taking the minimum and maximum of the truncated sample to be the lower and upper limits of the tolerance interval, respectively. \section{DTRs for Decision Support} DTRs are an ideal formalism for providing data-driven {\em decision support}. The most basic approach to providing decision support would be to estimate an optimal DTR from SMART data, and then provide the estimated DTR $(\hat{\pi}^*_1, \hat{\pi}^*_2)$ to a decision maker, perhaps as a computer-based tool that produces the estimated optimal treatment by using current patient information as input to the previously estimated DTR. Early in the development of DTRs it was recognized that this approach is problematic because it provides no confidence information about our recommendations. Just as we would not recommend one treatment over another if no statistically significant difference were obtained from a standard randomized controlled trial (RCT), neither should we recommend a single treatment in a DTR if in fact the alternatives are not known to be inferior with high confidence. This led to the development of confidence interval methods for the difference in mean expected outcome under different treatment choices within a regime \citep{Chakraborty2010,Chakraborty2013,Chakraborty2013a,laber14dynamic,Chakraborty2014}. Such intervals can give us confidence that if we do recommend a single treatment, that treatment will provide a better outcome, in expectation over patients. However, they do not provide any information about what the range of possible outcomes might actually be for an {\em individual} patient. In particular, large SMARTs with 100s to 1000s of patients may discover statistically significant differences in mean outcome even when the effect sizes are small to moderate and variance in outcomes is still substantial. 
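Returning briefly to the Background, the nonparametric construction of \cite{Wilks1941}, Eq.~(\ref{eq:wilksRanks}), can be sketched as follows (the simulated sample is illustrative):

```python
import numpy as np
from scipy import stats

def wilks_tolerance_interval(y, content=0.9, alpha=0.05):
    """Two-sided nonparametric tolerance interval via symmetric rank truncation.

    Keeps the largest symmetric truncation rank r satisfying
    1 - F_Beta(content; n - 2r + 1, 2r) > 1 - alpha, then returns the min
    and max of the truncated sample, i.e. the order statistics
    (y_(r), y_(n-r+1)).
    """
    y = np.sort(np.asarray(y, dtype=float))
    n = len(y)
    ok = [r for r in range(1, n // 2 + 1)
          if 1.0 - stats.beta.cdf(content, n - 2 * r + 1, 2 * r) > 1.0 - alpha]
    if not ok:                      # sample too small to truncate at all
        return y[0], y[-1]
    r = max(ok)
    return y[r - 1], y[n - r]

rng = np.random.default_rng(3)
lo, hi = wilks_tolerance_interval(rng.normal(size=200))
```

Because the guarantee is one-sided in content, the resulting interval is typically slightly wider than the central $\gamma$ region of the underlying distribution.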
If this is the case, it may be better to avoid recommending a single treatment, or at least to provide more nuanced information about what the patient's experience is likely to be under the different treatment options. In this work, we consider tolerance intervals as one method for providing this information. For a patient with $S_1 = s_1$ at the first decision point, rather than recommending treatment $\hat{\pi}^*_1(s_1)$ (even if it is statistically significantly better than the alternative in terms of mean outcome) we would present tolerance intervals for the outcome $Y$ under each possible action, and allow the decision-maker (or the patient-clinician dyad, in the context of patient-centred care \citep{Barry2012}) to decide on treatment based on the range of probable outcomes indicated by the intervals. For each interval, we condition on the observed $s_1$, the hypothetical $a_1$, and the estimated optimal regime $\hat{\pi}^*_2$ for the second stage. Thus, we will construct tolerance intervals for $Y|S_1=s_1,A_1=a_1;\pi_2 = \hat{\pi}^*_2$, marginal over $S_2$ (whose distribution is governed by $S_1$ and $A_1$) and $A_2$ (whose distribution is governed by $A_1$, $S_1$, $S_2$ and $\pi_2$.) To do so, we will adapt several standard methods because typically {\em we do not have observations drawn from this distribution.} This is because, as we noted above, data from SMART studies and similar sources are generated according to an exploration DTR $(\pi^0_1,\pi^0_2)$, rather than according to an estimated optimal DTR $(\hat{\pi}^*_1,\hat{\pi}^*_2)$. \subsection{Aside: Non-regularity} It is well-known that many kinds of inference on the parameters of an estimated optimal dynamic treatment regime, including confidence intervals, are plagued by issues of {\em non-regularity} \citep{laber14dynamic}. Briefly, non-regularity is a result of the sampling distributions of corresponding estimators changing abruptly as a function of the true underlying parameters. 
It can lead to bias in estimates and anti-conservatism in inference. In dynamic treatment regimes, non-regularity occurs and inference is problematic when two or more treatments produce (nearly) the same mean optimal outcome. In this work, we will not specifically develop methods that are robust to non-regularity. This is because even in the absence of non-regularity, i.e.\ when optimal $Q$ values are well-separated from sub-optimal ones, there is significant variability in the performance of ``standard'' tolerance interval methods that is worthy of exploration and analysis. We will return to this point in the Discussion. \section{Methods}\label{ss:methods} We now detail our strategies for constructing tolerance intervals for $Y|S_1\mathord{=} s_1,A_1\mathord{=} a_1;\pi_2\mathord{=} \hat{\pi}^*_2$. As we mentioned above, the fundamental challenge of constructing intervals for this quantity is that in general we do not have samples drawn from this distribution---otherwise, we could use off-the-shelf tolerance interval methods. Note that we {\em can} use off-the-shelf methods for tolerance intervals for $Y|S_2\mathord{=} s_2, A_2\mathord{=} a_2$, because there is no need to account for future decision-making in that case; thus our work focuses on the first decision point. We begin by presenting a na\"{i}ve approach to constructing tolerance intervals that helps illustrate the main technical challenge to be addressed, and then we present our two proposed strategies: {\em inverse probability weighting}, and {\em residual borrowing}. \subsection{Na\"{i}ve $Q$-Learning Tolerance Intervals} Standard $Q$-learning involves estimating $Q_1(s_1,a_1)$, which predicts the expected $Y$ under the optimal regime. However, it does so using the pseudooutcome $\tilde{Y} = \max_{a'_2} \hat{Q}_2(s_1,a_1,s_2,a'_2)$ as the regression target, rather than the observed $Y$. 
Since the pseudooutcome targets are themselves predicted conditional means of $Y$, they carry no variance information about $Y|S_2,S_1,A_1$ under the estimated optimal policy, even among trajectories that (by chance) followed the estimated optimal policy. To see this, suppose that we had several trajectories, all of which had the same $s_1,a_1,s_2,a_2$, and all of which happened to follow the estimated optimal policy. Even though their {\em observed} outcomes $y$ might have all been different, simply due to unexplained (but still important) variation in $Y$, they would all be assigned the same pseudooutcome value, and the sample variance of the pseudooutcomes in this group is zero. This observation highlights the key aspect of $Q$-learning and related methods that precludes direct estimation of variability in $Y$. Dynamic programming methods for estimating conditional means of sequential outcomes can ``throw away'' residual variance without negative repercussions when backing up values, essentially because of the law of total expectation. The benefit of this approach is a reduction in the variance of $Q$ estimates by allowing the use of the entire dataset of trajectories for estimating $Q$-functions for earlier decision points. The drawback is that such methods cannot directly estimate other distributional properties of $Y$, including variance and higher-order moments, quantiles, and so on. If most of the variability in $Y$ were explained by $S_2$ and $A_2$---that is, if the variance of $Y|S_2,A_2$ were nearly zero---we might be able to construct approximate tolerance intervals for $Y$ by constructing parametric tolerance intervals for the pseudooutcome, for example using (\ref{eq:normRegTI}). In the case of a saturated model with discrete $S_1$ and $A_1$, we could construct non-parametric tolerance intervals for each pattern of $(s_1,a_1)$ using the pseudooutcome with (\ref{eq:wilksRanks}). 
However, as we will see in our empirical results, this approach is not very effective if in fact the variance of $Y|S_2,A_2$ is not near zero. \subsection{Inverse Probability Weighting} One approach to obtaining variance information about $Y$ under $\hat{\pi}_2^*$ is to select from our dataset only those trajectories whose second-stage treatment matches what $\hat{\pi}_2^*$ would have assigned, i.e., the trajectories $(s_1,a_1,s_2,a_2,y)$ for which $a_2 = \hat{\pi}_2^*(s_1,a_1,s_2)$. This subset contains all of the trajectories that have positive probability under the estimated DTR. Consider a joint distribution over $S_2,A_2,\Pi,A_2^0,A_2^*,M,Y$ conditioned on $S_1$ and $A_1$. (All statements in the remainder of this subsection are implicitly conditioned on $S_1$ and $A_1$; maintaining this conditioning explicitly in the notation would be too cumbersome.) Here, $A_2^0$ is the action chosen by $\pi_2^0$, and $A_2^*$ is the action chosen by $\hat{\pi}^*_2$, which is assumed to be deterministic given $S_2$. Let $M$ (for {\em match}\footnote{Note we are not matching trajectories with other trajectories---we are identifying trajectories whose action matches a DTR of interest.}) be $1$ if $A_2^* = A_2^0$, or $0$ otherwise. Let $\Pi$ be binary, and define $A_2$ such that $A_2 = A_2^0$ if $\Pi = 0$ and $A_2 = A_2^*$ if $\Pi = 1$. The dependencies among all of these variables are illustrated in Figure~\ref{fig:deps} using a directed graphical model \citep{Koller2009}. The distribution of $Y$ among matched trajectories is governed by $Y|\Pi=0,M=1$. The distribution of $Y$ among trajectories gathered using $\hat{\pi}^*_2$ is $Y|\Pi = 1$. Note that while the distribution of $Y|S_2,A_2,\Pi\mathord{=} 0,M\mathord{=} 1$ is identical to the distribution of $Y|S_2,A_2,\Pi\mathord{=} 1$ due to the conditional independence structure, the distribution of $Y|\Pi=0,M=1$ may be different from $Y|\Pi = 1$ if there is dependence of $M$ on $S_2$.
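This phenomenon is easy to reproduce in a small Monte Carlo sketch (Python, with a hypothetical binary $S_2$ rather than the models considered later):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
s2 = rng.binomial(1, 0.5, n)                       # binary second-stage state
m = rng.binomial(1, np.where(s2 == 1, 0.9, 0.1))   # match probability depends on S2
y = s2 + rng.normal(0.0, 0.1, n)                   # Y depends on S2

# Marginally E[Y] = 0.5, but Pr(S2 = 1 | M = 1) = 0.9, so E[Y | M = 1] = 0.9:
# conditioning on M shifts the distribution of Y through S2.
print(round(y.mean(), 2), round(y[m == 1].mean(), 2))
```

Restricting to matched trajectories changes the distribution of $S_2$, and hence of $Y$, even though $M$ has no direct effect on $Y$.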
We describe this phenomenon using the following lemma. \clearpage \begin{figure}[!h] \centering \begin{tikzpicture}[>=stealth'] \node [events] (state) {$S_2$}; \node [events, above right = 0.1cm and 1cm of state](a0){$A_2^0$}; \node [events, below right = 0.1cm and 1.5cm of state](astar){$A_2^*$}; \node [events, below right = 0.1cm and 1cm of a0] (match) {$M$}; \node [events, below right = 0.7cm and 0.1cm of state](a){$A_2$}; \node [events, below = 2cm of state] (y) {$Y$}; \node [events, below = 0.5cm of astar] (pi) {$\Pi$}; \draw [->] (state) -- (a0); \draw [->] (state) -- (astar); \draw [->] (a0) -- (match); \draw [->] (astar) -- (match); \draw [->] (state) -- (y); \draw [->] (a0) -- (a); \draw [->] (astar) -- (a); \draw [->] (pi) -- (a); \draw [->] (a) -- (y); \end{tikzpicture} \caption{Graphical model depicting the dependence structure of $S_2,A_2^0,A_2^*,A_2,\Pi,M,Y|S_1,A_1$. Note that the structure is the same for all values of $S_1$ and $A_1$.} \label{fig:deps} \end{figure} \newtheorem{lemma}{Lemma} \begin{lemma}\label{lem:ratio} Let $S_2,A_2^0,A_2^*,A_2,\Pi,M,Y$ be defined as above, and assume $\Pr(S_2) > 0 \implies \Pr(S_2|M\mathord{=} 1) > 0$. Then \begin{equation*} \Pr(Y|\Pi\mathord{=} 1) = \sum_{S_2} \frac{\Pr(S_2)}{\Pr(S_2|M\mathord{=} 1)} \Pr(Y,S_2|\Pi \mathord{=} 0, M\mathord{=} 1). \end{equation*} \end{lemma} \begin{proof} In the following, we abuse notation by allowing $\Pr$ to represent a probability or a density, as appropriate, and we allow $\sum$ to indicate a sum or an integral. The message in any case remains the same. 
First we note that \begin{equation} \Pr(Y|\Pi\mathord{=} 0,M\mathord{=} 1) = \sum_{S_2,A_2} \Pr(Y,S_2,A_2|\Pi\mathord{=} 0,M\mathord{=} 1) = \sum_{S_2,A_2} \left\{ \begin{aligned} \Pr(Y|S_2,A_2,\Pi\mathord{=} 0,M\mathord{=} 1)\cdot \\ \Pr(A_2|S_2,\Pi \mathord{=} 0, M\mathord{=} 1)\cdot \\ \Pr(S_2|\Pi\mathord{=} 0, M\mathord{=} 1) \end{aligned} \right\}.\label{eq:YgP0M1} \end{equation} The data generating distribution under $\hat{\pi}^*_2$ is \begin{align} \Pr&(Y|\Pi\mathord{=} 1) \nonumber \\ &= \sum_{S_2,A_2} \Pr(Y,S_2,A_2|\Pi\mathord{=} 1)\nonumber \\ &= \sum_{S_2,A_2} \Pr(Y|S_2,A_2,\Pi\mathord{=} 1) \Pr(S_2,A_2|\Pi\mathord{=} 1)\nonumber \\ &= \sum_{S_2,A_2} \Pr(Y|S_2,A_2,\Pi\mathord{=} 0,M\mathord{=} 1) \Pr(S_2,A_2|\Pi\mathord{=} 1)\label{eq:YgP1} \end{align} where the last step follows from conditional independence of $Y$ and $(\Pi,M)$ given $S_2$ and $A_2$. Furthermore, \begin{align} \Pr&(S_2,A_2|\Pi\mathord{=} 1) \nonumber \\ &= \Pr(A_2|S_2,\Pi\mathord{=} 1) \Pr(S_2|\Pi\mathord{=} 1)\nonumber \\ &= \Pr(A_2|S_2,\Pi \mathord{=} 0, M\mathord{=} 1) \Pr(S_2|\Pi\mathord{=} 1)\nonumber \\ &= \Pr(A_2|S_2,\Pi \mathord{=} 0, M\mathord{=} 1) \Pr(S_2|\Pi\mathord{=} 0) \label{eq:S2A2gP1} \end{align} where the second step follows because $A_2^*$ is deterministic given $S_2$\footnote{This assumption is critical: if $A^*_2|S_2$ is not deterministic, the relationship between $Y|\Pi\mathord{=} 1$ and $Y|\Pi\mathord{=} 0,M\mathord{=} 1$ is more complicated.} and from the definition of $\Pi$ and $M$, and the third step follows from independence of $S_2$ and $\Pi$.
By combining (\ref{eq:YgP1}) and (\ref{eq:S2A2gP1}) and comparing with (\ref{eq:YgP0M1}), we obtain \begin{align*} \Pr(Y|\Pi\mathord{=} 1) & = \sum_{S_2,A_2} \left\{ \begin{aligned} \Pr(Y|S_2,A_2,\Pi\mathord{=} 0,M\mathord{=} 1)\cdot \\ \Pr(A_2|S_2,\Pi \mathord{=} 0, M\mathord{=} 1)\cdot \\ \Pr(S_2|\Pi\mathord{=} 0) \end{aligned} \right\} \\ = & \sum_{S_2} \left\{ \begin{aligned} \Pr(Y|S_2,\Pi\mathord{=} 0,M\mathord{=} 1)\cdot \\ \Pr(S_2|\Pi\mathord{=} 0) \end{aligned} \right\} \\ = & \sum_{S_2} \frac{\Pr(S_2|\Pi\mathord{=} 0)}{\Pr(S_2|\Pi\mathord{=} 0,M\mathord{=} 1)} \Pr(Y,S_2|\Pi \mathord{=} 0, M\mathord{=} 1)\\ = & \sum_{S_2} \frac{\Pr(S_2)}{\Pr(S_2|M\mathord{=} 1)} \Pr(Y,S_2|\Pi \mathord{=} 0, M\mathord{=} 1) \end{align*} where the final step is by independence of $S_2$ and $\Pi$. \end{proof} \newtheorem{corollary}{Corollary} \begin{corollary} If $S_2$ and $M$ are independent, then $Y|\Pi\mathord{=} 0,M\mathord{=} 1$ has the same distribution as $Y|\Pi\mathord{=} 1$. \end{corollary} \begin{proof} Follows immediately from Lemma~\ref{lem:ratio}. \end{proof} To achieve independence of $S_2$ and $M$, we could ensure during data collection that $A_2^0$ is independent of $S_2$, which in turn can be achieved by equal randomization independent of $S_2$. This is common, but not universal, in SMART designs \citep{Collins2014}. If $A_2^0 | S_2\mathord{=} s_2 \sim \mathrm{Bernoulli}(\theta^0)$ and $A_2^* | S_2\mathord{=} s_2 \sim \mathrm{Bernoulli}(\theta^*_{s_2})$, then \begin{align*} \Pr(M\mathord{=} 1|S_2\mathord{=} s_2) & = \theta^0 \theta^*_{s_2} + (1 - \theta^0)(1 - \theta^*_{s_2})\\ & = 1 - \theta^0 + \theta^*_{s_2}(2\theta^0 - 1). \end{align*} Hence, if $\theta^0 = 0.5$, then $\Pr(M\mathord{=} 1|S_2) = \Pr(M\mathord{=} 1) = 0.5$, and $\Pr(S_2|M\mathord{=} 1) = \Pr(S_2)$. 
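This calculation is easy to check numerically; in the sketch below, `p_match` is our own illustrative helper, not part of our released code:

```python
import numpy as np

def p_match(theta0, theta_star):
    """Pr(M = 1 | S2 = s2) when A2^0 ~ Bernoulli(theta0) and
    A2^* ~ Bernoulli(theta_star), independently given S2."""
    return theta0 * theta_star + (1.0 - theta0) * (1.0 - theta_star)

# Under equal randomization (theta0 = 0.5) the match probability is 0.5
# regardless of theta_star, i.e. regardless of S2, so M and S2 are independent.
for theta_star in np.linspace(0.0, 1.0, 11):
    assert abs(p_match(0.5, theta_star) - 0.5) < 1e-12

# With unequal randomization the match probability varies with S2.
print(p_match(0.7, 0.0), p_match(0.7, 1.0))  # 0.3 vs 0.7
```

With $\theta^0 \neq 0.5$, $\Pr(M\mathord{=} 1|S_2)$ varies with $\theta^*_{s_2}$, so matching induces dependence on $S_2$.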
Using this subset of trajectories whose $a_2$ matches $\hat{\pi}^*_2(s_1,a_1,s_2)$, we can regress $Y$ on $S_1$ and $A_1$ to construct tolerance intervals using (\ref{eq:normRegTI}), or, as above, we can construct non-parametric tolerance intervals for each pattern of $(s_1,a_1)$ using (\ref{eq:wilksRanks}). \begin{figure*} \includegraphics[width=\textwidth]{figures/meanTIs} \caption{Comparison of coverage and width of inverse probability weighted tolerance interval methods. Axes represent a space of simple generative models. Lighter colouration indicates better performance.\label{fig:meanTIs}} \end{figure*} Dependence of $M$ on $S_2$ is problematic because of the effect of $S_2$ on $Y$. When $M$ depends on $S_2$, conditioning on $M$ can affect the distribution of $Y$ through $S_2$, meaning that the distribution of $Y|S_1,A_1,\Pi\mathord{=} 0,M \mathord{=} 1$ we estimate by collecting data under $\pi^0_2$ is not what we would have obtained had we collected data under $\hat{\pi}^*_2$ and ignored (i.e.\ marginalized over) $M$. To correct the distribution of $S_2|S_1,A_1$ among the matched trajectories, we employ {\em inverse probability weighting.} To do so, we construct a propensity score model, not for the probability of treatment, but for the {\em probability of following the estimated optimal DTR}, i.e.\ $\Pr(M\mathord{=} 1|S_2,S_1,A_1)$. Using this model, we can then re-weight the trajectories so that the distribution of $S_2|M\mathord{=} 1,S_1,A_1$ matches the distribution of $S_2|S_1,A_1$ as well as possible. The weight function is therefore \begin{equation}\label{eq:impweights} w(s_1,a_1,s_2) = \frac{\Pr(S_2\mathord{=} s_2|S_1\mathord{=} s_1,A_1\mathord{=} a_1)} {\Pr(S_2\mathord{=} s_2|S_1\mathord{=} s_1,A_1\mathord{=} a_1,M\mathord{=} 1)}. \end{equation} These are sometimes known as {\em importance weights}.
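As an illustration, the weight function (\ref{eq:impweights}) can be estimated as a ratio of kernel density estimates; the following Python sketch uses synthetic draws of $S_2$ with hypothetical distribution parameters, not our data:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Hypothetical draws of S2 within one (s1, a1) cell: the full sample,
# and the matched (M = 1) subset, whose S2 distribution is shifted.
s2_all = rng.normal(0.0, 1.0, size=2000)
s2_matched = rng.normal(0.5, 1.0, size=800)

f_all = gaussian_kde(s2_all)          # estimates Pr(S2 | S1, A1)
f_matched = gaussian_kde(s2_matched)  # estimates Pr(S2 | S1, A1, M = 1)

def w(s2):
    """Importance weight: density ratio of the two kernel estimates."""
    return f_all(s2) / f_matched(s2)

weights = w(s2_matched)
# Averaged over the matched sample, the weights integrate the numerator
# density, so the mean should be near 1 when both estimates are reasonable.
print(weights.mean())
```

Down-weighting over-represented $s_2$ values and up-weighting under-represented ones moves the matched sample toward the target distribution of $S_2|S_1,A_1$.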
We note that in causal inference, importance weights are sometimes used to adjust for an association between the probability of receiving treatment and the observed outcome. Here, they are used to adjust for an association between the probability of following the estimated optimal policy and the observed outcome through the variable $S_2$. Note that estimating the two densities in (\ref{eq:impweights}) separately is not necessary to estimate the function $w$; it can be estimated using any density ratio estimation method. Logistic regression is one common approach, but many others are available. In related weighting methods for causal inference, practitioners have found that a flexible model for $w$ is often preferable to a simpler one \citep{Ghosh2011}. To use the weighted data for building tolerance intervals, we must adapt existing methods for use with the weights. To build normal-theory regression tolerance intervals using the weighted data, we first estimate $\hat{y}|X\mathord{=} x$ using weighted least squares. We then use the resulting mean estimate, together with a weight-based sandwich estimate of $\hat{\sigma}_{\hat y}$, to construct the tolerance interval as per (\ref{eq:normRegTI}). To build non-parametric tolerance intervals, we estimate the ranks by linear interpolation of the weighted empirical distribution \citep{Harrell2015}. We then construct the Wilks interval as per (\ref{eq:wilksRanks}). Figure~\ref{fig:meanTIs} shows the empirical results of applying weighted tolerance intervals in a simple scenario. Our goal here is to verify that the weighting scheme can counteract some of the dependence on $M$. (We will evaluate the methods more fully in the next section.) The data are drawn from a two-variable generative model with $M \sim \mathrm{Bernoulli} (0.5)$ and $Y|M\mathord{=} m \sim \mathcal{N}(\mu_m,\sigma_m)$.
Our goal is to produce a tolerance interval for $Y$, marginal over $M$, using only data for which $M\mathord{=} 1$. The sample size for $M=1$ was $n = 500$, and the weights were computed analytically. Parameters for $Y|M \mathord{=} 0$ were fixed at $\mu_0 = 0$ and $\sigma_0 = 1$. Parameters for $Y|M \mathord{=} 1$ were varied to illustrate how performance of the weighted tolerance intervals changed as the distribution of $Y|M\mathord{=} 1$ deviated from the marginal distribution of $Y$. The top row of heatmaps shows the coverage of each method, that is, the proportion of times out of 1000 Monte Carlo replicates for which the computed tolerance interval had at least $\gamma = 0.9$ probability content. The confidence level $1-\alpha$ was set to $0.95$; in the plot, Monte Carlo coverages that are not statistically significantly different from 0.95 are coloured pure white. Over-coverage is coloured blue, and under-coverage is coloured orange. The second row plots the average width of the tolerance intervals, normalized by the width of the optimal tolerance interval constructed from the true quantiles of $Y$, with unit relative width coloured white. Methods beginning with U are unweighted, and methods beginning with W are weighted. Methods containing NP are nonparametric, and those without NP are normal-theory. (Table~\ref{tab:legend} gives the complete key to the method names.) Note that except when $\mu_1 = 0$ and $\sigma_1 = 1$, $Y$ is nonnormal. As one would expect, performance when $\mu_1=0,\sigma_1=1$ is very good across all methods; in this case, $\Pr(Y|M=1) = \Pr(Y)$, and weighting is not needed. When $\mu_1$ is near zero and/or $\sigma_1$ is larger than $\sigma_0$, most of the mass of $\Pr(Y|M=1)$ overlaps the mass of $\Pr(Y)$, and all intervals tend to over-cover. This is indicated by the blue regions in the upper-left corner of the coverage plots, and the over-coverage is larger in the unweighted methods than in the weighted methods.
Conversely, when $\Pr(Y|M=1)$ does not adequately overlap the mass of $\Pr(Y)$ because $\mu_1$ is farther from $0$ and/or $\sigma_1$ is less than $\sigma_0$, we see undercoverage, indicated by the orange in the lower-right of the plots. Again, this is mitigated by weighting. The non-parametric methods provide better coverage than the normal-theory methods; this is not surprising since $Y$ is not normal in most cases. The width plots verify that the weighted methods bring the extreme widths observed from the unweighted methods closer to optimal. This example verifies that the weighted methods we propose can substantially reduce over- and under-coverage in cases where there is mismatch between the observed distribution and the distribution of interest. However, they cannot eliminate it entirely when the distributions of $Y$ and $Y|M\mathord{=} 1$ are very different. This is to be expected; estimating, say, the mean of one distribution using an importance-weighted sample from another is challenging in practice. Estimating the {\em tails} of that distribution is even more challenging. Nonetheless, there is value in the weighted approach, and we will explore it further in the DTR setting in the next section. \subsection{Residual Borrowing} We now present a different approach to ensuring that our analysis captures the joint distribution $Y,S_2|S_1,A_1$ correctly, and hence captures variability in $Y|S_1,A_1$ correctly when we marginalize over $S_2$. To do so, we return to the $Q$-learning approach, which estimates $\mathrm{E}[Y|S_2,A_2]$ using regression. As discussed above, the pseudooutcome $\tilde y$ for each trajectory represents our best estimate of $\mathrm{E}[Y|S_1 \mathord{=} s_1, A_1\mathord{=} a_1, S_2\mathord{=} s_2, A_2]$ when $A_2 \sim \hat{\pi}^*_2(s_1,a_1,s_2)$. This estimate is available for all trajectories in our dataset, including those for which $M\mathord{=} 0$.
Rather than na\"ively constructing tolerance intervals based on the regression of $\tilde{Y}$ on $S_1$ and $A_1$, we create a new pseudooutcome $\check y$ for each point: For trajectories with $m\mathord{=} 1$, we set $\check y = y$. For trajectories with $m = 0$, we set $\check{y} = \tilde y + \epsilon$, where $\epsilon \sim \mathcal{E}$, and $\mathcal{E}$ is an estimate of the distribution of the residuals among trajectories with $M = 1$. We call this procedure {\em residual borrowing.} We then construct tolerance intervals using the regression of $\check y$ on $S_1$ and $A_1$. Unlike the $\tilde y$, the $\check y$ retain information about the distribution of $Y|S_2,A_2$. Furthermore, since we use all of the trajectories in our original dataset, our empirical distribution of $S_2|S_1,A_1$ is representative of the true generative model. The distribution $\mathcal{E}$ could be the empirical distribution of the appropriate residuals, or it could be a smoothed estimate, e.g., a kernel density estimate. In our simulations, we found that a smoothed estimate works better than sampling from the empirical distribution. \section{Empirical Results}\label{ss:empirical} \begin{table} \caption{Plot Acronyms for Tolerance Intervals\label{tab:legend}} \begin{tabular}{l|l} {\bf RBQNPTI} & Residual-Borrowing Non-parametric TI \\ {\bf RBQTI} & Residual-Borrowing Normal-theory TI \\ {\bf UNPTI} & Unweighted Non-parametric TI \\ {\bf UTI} & Unweighted Normal-theory TI \\ {\bf WNPTI} & Weighted Non-parametric TI \\ {\bf WTI} & Weighted Normal-theory TI \\ \end{tabular} \end{table} We now present results of six tolerance interval methods, which are listed in Table~\ref{tab:legend}, using a simulation study. Our goals are to: 1. verify that inverse probability weighted methods can succeed where the unweighted methods fail, and test their limits; and, 2. to assess the difference in performance between the inverse probability weighted methods and the residual borrowing methods. 
Note that we do not include results from the na\"ive method, as it performs very poorly. The generative model for our study is taken from \cite{Schulte2014}, with modifications. We begin by reviewing that model and discussing our modifications to it; we then present and discuss the performance of our methods. \subsection{Generative Model} The generative model has $2$ decision points. $S_1$ is binary, $A_1$ is binary, $S_2$ is continuous, $A_2$ is binary, and $Y$ is continuous. The generative model under the exploration DTR is given by { \begin{align*} S_{1} & \sim \text{Bernoulli}(0.5) \nonumber\\ A_{1}^0|S_{1}\mathord{=} s_{1} & \sim \text{Bernoulli}\{\text{expit}(\boxed{\xi_{\phi}{}}\{\phi_{10}^{0}+\phi_{11}^{0}s_{1}\})\}\\ S_{2}|S_{1}\mathord{=} s_{1},A_{1}\mathord{=} a_{1} & \sim \text{Normal}(\delta_{10}^{0}+\delta_{11}^{0}s_{1}+\delta_{12}^{0}a_{1}+\delta_{13}^{0}s_{1}a_{1},2)\\ A_{2}^0|S_{1}\mathord{=} s_{1}, S_{2}\mathord{=} s_{2}, A_{1}\mathord{=} a_{1} & \sim \text{Bernoulli}\{\text{expit}(\boxed{\xi_{\phi}{}}\{\phi_{20}^{0}+\phi_{21}^{0}s_{1}+\phi_{22}^{0}a_{1} +\phi_{23}^{0}s_{2}+\phi_{24}^{0}a_{1}s_{2}+\phi_{25}^{0}s_{2}^{2}\})\}\\ Y|S_{1}\mathord{=} s_{1},S_{2}\mathord{=} s_{2},A_{1}\mathord{=} a_{1},A_{2}\mathord{=} a_{2} & \sim \boxed{\mathrm{Ydist}{}}\{\mu_Y(s_{1},s_{2},a_{1},a_{2}),\boxed{\sigma^2_\varepsilon}\}\\ \mu_Y(s_{1},s_{2},a_{1},a_{2}) & = \beta_{20}^{0}+\beta_{21}^{0}s_{1}+\beta_{22}^{0}a_{1}+\beta_{23}^{0}s_{1}a_{1} +\beta_{24}^{0}s_{2}+\beta_{25}^{0}s_{2}^{2} +a_{2}\boxed{\xi_{\psi}}(\psi_{20}^{0}+\psi_{21}^{0}a_{1}+\psi_{22}^{0}s_{2}) \end{align*} } \noindent Here, $\text{expit}(x)=e^{x}/ (e^{x}+1)$.
The original model is indexed by \begin{equation*} \begin{array}{llrrrrr} \phi_{1}^{0} & = (\phi_{10}^{0},& \phi_{11}^{0})\\ & = (0.3,& -0.5)\\[1mm] \delta_{1}^{0} & = (\delta_{10}^{0},& \delta_{11}^{0},& \delta_{12}^{0},& \delta_{13}^{0}) \\ & = (0,& 0.5,& -0.75,& 0.25)\\[1mm] \phi_{2}^{0} & =(\phi_{20}^{0},& \phi_{21}^{0},& \phi_{22}^{0},& \phi_{23}^{0},& \phi_{24}^{0},& \phi_{25}^{0}) \\ & = (0,& 0.5,& 0.1,& -1,& -0.1,& 0)\\[1mm] \beta_{2}^{0} & =(\beta_{20}^{0},& \beta_{21}^{0},& \beta_{22}^{0},& \beta_{23}^{0},& \beta_{24}^{0},& \beta_{25}^{0}) \\ & = (3,& 0,& 0.1,& -0.5,& -0.5,& 0)\\[1mm] \psi_{2}^{0} & =(\psi_{20}^{0},& \psi_{21}^{0},& \psi_{22}^{0})\\ & = (1,& 0.25,& 0.5) \end{array} \end{equation*} to which we have added four parameters: $\xi_{\psi}$ is a factor multiplying $\psi_2^0$, with default 1; $\xi_{\phi}$ is a factor multiplying $\phi_1^0$ and $\phi_2^0$, with default 1; $\mathrm{Ydist}(\mu,\sigma_\varepsilon^2)$ gives the conditional distribution of $Y$ with given mean and variance; its default is the normal distribution, and the default $\sigma^2_\varepsilon$ is 10. We have emphasized these parameters by displaying them in boxes. Our parameter $\xi_{\phi}$ allows us to control the degree to which state information influences treatment selection under the exploration (data-gathering) DTR. For $\xi_{\phi} = 1$, we have the original exploration policy used by Schulte et al., and for $\xi_{\phi} = 0$, we have uniform randomization over treatments, independent of state and previous treatment. $\xi_{\psi}$ allows us to control the effect of treatment $A_2$ on $Y$. For $\xi_{\psi} = 1$ we have the treatment effect specified by Schulte et al., and for $\xi_{\psi} = 0$ we have no treatment effect at the second stage. $\mathrm{Ydist}$ allows us to control the shape of the error distribution to see its effect on the tolerance interval methods; Schulte et al.\ used a normal error, but we will explore heavier- and lighter-tailed errors while holding variance constant.
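The first-stage portion of this generative model can be sketched as follows (our released code is in R; this Python sketch additionally assumes the second argument of $\text{Normal}(\cdot,2)$ is a variance, and applies $\xi_{\phi}$ at the first stage as in the boxed model above):

```python
import numpy as np
from scipy.special import expit  # expit(x) = e^x / (e^x + 1)

def simulate_stage1(n, xi_phi=1.0, seed=0):
    """Draw (s1, a1, s2) under the exploration DTR with default parameters."""
    rng = np.random.default_rng(seed)
    phi1 = (0.3, -0.5)                # (phi_10^0, phi_11^0)
    delta1 = (0.0, 0.5, -0.75, 0.25)  # (delta_10^0, ..., delta_13^0)
    s1 = rng.binomial(1, 0.5, n)
    a1 = rng.binomial(1, expit(xi_phi * (phi1[0] + phi1[1] * s1)))
    mu2 = delta1[0] + delta1[1] * s1 + delta1[2] * a1 + delta1[3] * s1 * a1
    s2 = rng.normal(mu2, np.sqrt(2.0), n)  # variance-2 interpretation
    return s1, a1, s2

s1, a1, s2 = simulate_stage1(10_000)
# With xi_phi = 0 the exploration policy is a 50:50 coin flip regardless of s1.
_, a1_unif, _ = simulate_stage1(10_000, xi_phi=0.0)
print(abs(a1_unif.mean() - 0.5) < 0.02)
```

Setting $\xi_{\phi}=0$ recovers state-independent equal randomization, the regime in which matching does not distort $S_2$.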
This family of generative models allows us to explore what happens to the performance of tolerance interval methods when we have dependence of $A_2$ on $S_2$ during the generating process. While most of the SMART studies we are aware of use a simple randomization strategy where the distribution of $A_2$ does not depend on $S_2$ (which is the case here when e.g.\ $\xi_{\phi} = 0$, giving a simple 50:50 randomization strategy), we expect that more studies akin to ``adaptive trials'' with state-dependent randomization will become attractive in the future. Based on the function\footnote{Denoted $m$ by Schulte et al.} $\mu_Y$, which determines the expected value of $Y|S_1,A_1,S_2,A_2$, we can immediately see that the optimal second stage decision function is \begin{align*} \pi^*_2(a_1,s_2) & = \arg\max_{a_2'} a_2' \xi_{\psi} (\psi_{20}^{0}+ \psi_{21}^{0} a_{1}+\psi_{22}^{0} s_{2}) \\ & = I \{ \xi_{\psi} (\psi_{20}^{0}+ \psi_{21}^{0} a_{1}+\psi_{22}^{0} s_{2}) > 0\}. \end{align*} \subsection{Working Model} Our working model for $Q_2$ is \begin{align}\label{eq:Q2working} Q_2(s_1,a_1,s_2,a_2;\beta_2,\psi_2) = \beta_{20}+\beta_{21}s_{1}+\beta_{22}a_{1}+\beta_{23}s_{1}a_{1} +\beta_{24}s_{2}+\beta_{25}s_{2}^{2} + a_{2}(\psi_{20}+\psi_{21}a_{1}+\psi_{22}s_{2}) \end{align} Having computed least squares estimates $\hat\beta_2$ and $\hat\psi_2$, our estimate of the optimal second-stage decision function is \begin{equation}\label{eq:pi2working} \hat{\pi}_2^*(s_1,a_1,s_2) = I \{\hat\psi_{20}+ \hat\psi_{21} a_{1}+\hat\psi_{22} s_{2} > 0\} \end{equation} and the pseudooutcome for the $i$th trajectory is \begin{align*} \tilde{y}_i = \hat\beta_{20}+\hat\beta_{21}s_{1i}+\hat\beta_{22}a_{1i}+\hat\beta_{23}s_{1i}a_{1i} +\hat\beta_{24}s_{2i}+\hat\beta_{25}s_{2i}^{2} + |\hat\psi_{20}+ \hat\psi_{21} a_{1i}+\hat\psi_{22} s_{2i}|_+.
\end{align*} Our working model for $Q_1$ is the saturated model \begin{equation}\label{eq:Q1working} Q_1(s_1,a_1;\beta_1,\psi_1) = \beta_{10}+\beta_{11}s_{1}+a_1(\psi_{10} + \psi_{11}s_1). \end{equation} Having computed least squares estimates $\hat\beta_1$ and $\hat\psi_1$ by regressing the pseudooutcomes on $s_1$ and $a_1$, our estimate of the optimal first-stage decision function would be\footnote{\cite{Schulte2014} give the true optimal values of $\beta_1$ and $\psi_{1}$ as a function of the other model parameters.} \begin{equation} \hat{\pi}_1^*(s_1) = I \{\hat\psi_{10}+ \hat\psi_{11} s_{1} > 0\}. \end{equation} \subsection{Tolerance Intervals} In many studies of DTR methods, the focus is on point and interval estimates of the optimal stage 1 decision parameters \citep{Chakraborty2010,Chakraborty2013,Chakraborty2013a,laber14dynamic,Chakraborty2014}. In this work, we will investigate methods for constructing tolerance intervals for \begin{align*} Y|S_1\mathord{=} 0,A_1\mathord{=} 0;\hat{\pi}^*_2 ~~~~&~~~~ Y|S_1\mathord{=} 0,A_1\mathord{=} 1;\hat{\pi}^*_2\\ Y|S_1\mathord{=} 1,A_1\mathord{=} 0;\hat{\pi}^*_2 ~~~~&~~~~ Y|S_1\mathord{=} 1,A_1\mathord{=} 1;\hat{\pi}^*_2. \end{align*} Note that our goal is to construct tolerance intervals for $Y$ under the {\em estimated optimal regime} rather than under the optimal regime. The reason for this is pragmatic: we assume that it is the estimated optimal regime that would be deployed in future to support decision-making. We begin by estimating $\hat{\pi}^*_2$ using the working models (\ref{eq:Q2working},\ref{eq:pi2working}). We then compute the pseudooutcome $\tilde y_i$ for each trajectory, and the match indicator $m_i = I\{\hat{\pi}^*_2(s_{1i},a_{1i},s_{2i}) = a_{2i}\}$. \subsubsection{Unweighted Methods} To construct the unweighted normal-theory TIs, we regress $y$ on $s_1$ and $a_1$ according to working model (\ref{eq:Q1working}) but using only trajectories with $m=1$.
We then apply (\ref{eq:normRegTI}) to construct the four tolerance intervals. To construct the unweighted nonparametric TIs, we divide the trajectories with $m = 1$ into four mutually exclusive groups according to their $(s_1,a_1)$ values. We then construct the four tolerance intervals by applying the Wilks method (\ref{eq:wilksRanks}) to each group. \subsubsection{Weighted Methods} To construct the weights, we first form kernel density estimates $\hat f_\mathcal{E}(s_2;s_1,a_1,m\mathord{=} 1)$ for $S_2|S_1\mathord{=} s_1,A_1\mathord{=} a_1,M\mathord{=} 1$ and $\hat f_\mathcal{E}(s_2;s_1,a_1)$ for $S_2|S_1\mathord{=} s_1,A_1\mathord{=} a_1$. The weight for a trajectory with index $i$ that has $m = 1$ is then given by \begin{equation} w_i = \frac{\hat{f}_\mathcal{E} (s_{2i};s_{1i},a_{1i})}{\hat{f}_\mathcal{E} (s_{2i};s_{1i},a_{1i},m=1)}. \end{equation} While logistic regression might be viewed as a more obvious choice for this task, we found that its attendant monotonicity assumptions were often violated, and that the pair of kernel density estimates were the simplest way to produce a more flexible model in this low-dimensional setting. To construct the weighted normal-theory TIs, as above we compute a weighted regression of $y$ on $s_1$ and $a_1$ according to working model (\ref{eq:Q1working}) but using only trajectories with $m=1$. We then apply (\ref{eq:normRegTI}) to construct the four tolerance intervals; in this case, we use the sandwich estimate \citep{Huber1967,White1980} with the weights to compute $\hat\sigma_{Y|X=x}$. This makes the method somewhat more robust. To construct the weighted nonparametric TIs, we divide the trajectories with $m = 1$ into four mutually exclusive groups according to their $(s_1,a_1)$ values. We then construct the four tolerance intervals by applying our weighted modification of the Wilks method (\ref{eq:wilksRanks}) to each group.
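For reference, the symmetric order-statistic construction behind the unweighted Wilks method can be sketched as follows; this is a generic Python implementation of a Wilks-type interval under our reading of the method, not our R code:

```python
import numpy as np
from scipy.stats import beta

def wilks_ti(y, gamma=0.9, alpha=0.05):
    """Two-sided nonparametric tolerance interval (x_(r), x_(n+1-r)).

    The population content of that interval is Beta(n + 1 - 2r, 2r)
    distributed, so we take the largest r whose confidence
    Pr(content >= gamma) is at least 1 - alpha.
    """
    x = np.sort(np.asarray(y, dtype=float))
    n = x.size
    r = 0
    while r + 1 <= n // 2:
        conf = 1.0 - beta.cdf(gamma, n + 1 - 2 * (r + 1), 2 * (r + 1))
        if conf < 1.0 - alpha:
            break
        r += 1
    if r == 0:
        raise ValueError("sample too small for the requested gamma and alpha")
    return x[r - 1], x[n - r]

rng = np.random.default_rng(3)
lo, hi = wilks_ti(rng.normal(size=500))
print(lo < hi)
```

Trimming deeper (larger $r$) narrows the interval but lowers the confidence that it captures a $\gamma$ fraction of the population.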
\subsubsection{Residual Borrowing} For the residual borrowing methods, within each $(s_1,a_1)$ group, we first form a kernel density estimate $\hat{f}_R(r;s_1,a_1)$ using the residuals $y_i - \tilde{y}_i$ among the trajectories with $m = 1$. We then set $\check{y}_i = y_i$ for each trajectory with $m_i=1$, and set $\check{y}_i = \tilde{y}_i + \epsilon_i$, with $\epsilon_i$ sampled from the kernel density estimate, for each trajectory with $m_i=0$. We then either regress $\check{y}_i$ using the working models to create the regression tolerance intervals, or we again divide up the data according to $s_1$ and $a_1$ to construct non-parametric tolerance intervals. \subsection{Results} Using the foregoing generative model, working models, and tolerance interval methods, we ran a suite of simulations to investigate performance. Experiments varied by $\xi_{\phi},\xi_{\psi},\sigma^2_\varepsilon$, and $\mathrm{Ydist}$, for a total of $1,089$ different experimental settings. Both $\xi_{\phi}$ and $\xi_{\psi}$ were varied from $0$ to $1$ in $0.1$ increments, and $\sigma^2_\varepsilon$ took values in $\{10,1,0.1\}$. We examined settings with $\mathrm{Ydist}$ as normal, uniform, and $t$ with 3 degrees of freedom, each scaled to have the appropriate $\sigma^2_\varepsilon$. For each setting, we drew 1000 simulated datasets each of size $n=1000$, computed tolerance intervals using each of the six methods, and evaluated their {\em content}, that is, the proportion of $Y$ captured by each interval, and their {\em relative width}, given by $({u_t} - {\ell_t}) / h^*$, where $h^*$ is the width of the optimal tolerance interval computed using the $(1-\gamma)/2$ and $(1+\gamma)/2$ quantiles of the true distribution. For all experiments, we set $1 - \alpha = 0.95$ and $\gamma = 0.9$. All kernel density estimates were one-dimensional, and used the default optimal bandwidth. All experimental code was written in R \citep{R2015}, and is publicly available.
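The residual-borrowing construction can be sketched as follows (Python rather than our R code; all inputs are synthetic stand-ins):

```python
import numpy as np
from scipy.stats import gaussian_kde

def residual_borrow(y, y_tilde, m, seed=0):
    """Build the residual-borrowing pseudooutcome y_check.

    Matched trajectories (m == True) keep their observed y; unmatched ones
    get y_tilde plus a residual drawn from a KDE fitted to the matched
    residuals y - y_tilde.
    """
    resid_kde = gaussian_kde(y[m] - y_tilde[m])
    y_check = y.astype(float).copy()
    n0 = int((~m).sum())
    y_check[~m] = y_tilde[~m] + resid_kde.resample(n0, seed=seed)[0]
    return y_check

rng = np.random.default_rng(4)
n = 400
m = rng.random(n) < 0.5                # match indicator (synthetic)
y_tilde = rng.normal(2.0, 1.0, n)      # stand-in pseudooutcomes
y = y_tilde + rng.normal(0.0, 0.7, n)  # stand-in observed outcomes
y_check = residual_borrow(y, y_tilde, m)
print(np.allclose(y_check[m], y[m]))   # matched outcomes are untouched
```

Because every trajectory contributes a $\check y$, the empirical distribution of $S_2|S_1,A_1$ underlying the intervals is the full-sample one.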
Figures~\ref{fig:10_ALL}, \ref{fig:1_ALL}, and \ref{fig:.1_ALL} display the results of all of our experiments as heatmaps using the same approach as Figure~\ref{fig:meanTIs}. Monte Carlo coverages that are not statistically significantly different from 0.95 are coloured pure white; over-coverage is coloured blue, and under-coverage is coloured orange. The second row of each subplot gives the average width of the tolerance intervals. Figure~\ref{fig:10_NORM} contains the original model setting proposed by \cite{Schulte2014} in the upper-right corners of its heatmaps. In this setting, the weighted and unweighted normal-theory tolerance intervals undercover slightly, while the weighted and unweighted non-parametric methods overcover, and are much wider. The residual-borrowing methods perform best in this setting, with the normal-theory residual-borrowing intervals achieving near-nominal coverage with modest width. There is relatively little variation in coverage and width across $\xi_{\phi}$ and $\xi_{\psi}$ in this setting, we believe, because the noise level is quite high relative to the effect of $a_2$ even when $\xi_{\psi} = 1$. In Figure~\ref{fig:10_T3}, $\mathrm{Ydist}$ was chosen to be a $t$-distribution with 3 degrees of freedom, scaled to have variance $\sigma^2_\varepsilon = 10$ and shifted by $\mu_Y$. In this heavy-tailed setting, it is the non-parametric residual-borrowing method that slightly undercovers, while the other methods overcover somewhat. As in the normal case, the weighted and unweighted nonparametric methods are very wide. Figure~\ref{fig:10_UNIF} uses a scaled and shifted uniform distribution for $\mathrm{Ydist}$, again maintaining $\sigma^2_\varepsilon = 10$. In this light-tailed setting, in contrast to Figure~\ref{fig:10_T3}, it is the normal-theory intervals which tend to be wide, while the non-parametric ones are narrower. The residual-borrowing intervals are wide as well. All intervals achieve nominal or greater coverage in this setting.
\begin{figure} \begin{subfigure}{\columnwidth} \includegraphics[width=\columnwidth]{figures/SD_sqrt10/BIAS} \caption{$\sigma^2_\varepsilon ={10}$ ~~~~~~~~\\ ~\\} \end{subfigure} \begin{subfigure}{\columnwidth} \includegraphics[width=\columnwidth]{figures/SD_1/BIAS} \caption{$\sigma^2_\varepsilon=1$ ~~~~~~~~\\ ~\\} \end{subfigure} \begin{subfigure}{\columnwidth} \includegraphics[width=\columnwidth]{figures/SD_sqrt.1/BIAS} \caption{$\sigma^2_\varepsilon=0.1$ ~~~~~~~~\\ ~\\} \end{subfigure} \caption{\label{fig:BIAS}Bias in the estimated value of the optimal policy. This is a surrogate measure of non-regularity; note that maximal bias occurs when $\psi$, which controls the effect of $A_2$, is small. Phi Factor ($\xi_{\phi}$ in the text) controls the effect of covariates on the exploration DTR, and Psi Factor ($\xi_{\psi}$ in the text) controls the effect of $A_2$ on the conditional mean of $Y$.} \end{figure} We see a striking change as we examine the lower-noise settings in Figure~\ref{fig:1_ALL}, which have $\sigma^2_\varepsilon = 1$. Here, we start to see dependence of performance on $\xi_{\psi}$ and $\xi_{\phi}$. As in Figure~\ref{fig:10_NORM}, in Figure~\ref{fig:1_NORM} we see the normal-theory intervals undercovering, although we now see a definite trend that worsens as $\xi_{\phi}$ increases, and as $\xi_{\psi}$ decreases. We also see this trend among the non-parametric methods, which range from overcovering to undercovering as we move across $\xi_{\phi}$ and $\xi_{\psi}$. Overall, we see the greatest coverage when the effect of $A_2$ is quite strong (topmost rows), or if the dependence of $A_2^0$ on $S_2$ is weak (leftmost columns). As we discussed earlier, when $\xi_{\phi} = 0$ (leftmost columns) there is no dependence of $M$ on $S_2$, and thus weighting is unnecessary. Furthermore, we not only obtain a uniform probability of $M=1$ across $S_2$, but also a uniform probability of $A_2^0$ across $S_2$.
This uniformity likely leads to improved estimates of $Y|S_2,A_2$, and in turn to better coverage of the tolerance intervals. The decrease in performance for low $\xi_{\psi}$ may be due to non-regularity: when $\xi_{\psi}=0$, there is in fact no effect of $A_2$ on $Y$. However, assuming continuity of the appropriate distributions, our estimated $\hat\psi_2$ will be nonzero almost always, and our plug-in estimate of the value of $\hat{\pi}_2^*$ will be positive almost always. Defining $\hat{a}_{2i}^* = \hat{\pi}_2^*(s_{1i},a_{1i},s_{2i})$, the empirical bias in the value of $\hat{\pi}_2^*$ is \begin{multline*} \sum_i \hat{a}_{2i}^*(\hat\psi_{20} + \hat\psi_{21} a_{1i}+\hat\psi_{22} s_{2i}) - \\ \hat{a}_{2i}^*\xi_{\psi} (\psi_{20}^0 + \psi_{21}^0 a_{1i}+ \psi_{22}^{0} s_{2i}). \end{multline*} Figure~\ref{fig:BIAS} shows the average empirical bias in the estimated value of $\hat{\pi}_2^*$, as a function of $\xi_{\phi}$ and $\xi_{\psi}$. We can see that the bias is concentrated at the bottom of the plots, near $\xi_{\psi} = 0$. This is precisely where there is more than one nearly-optimal action and non-regularity is known to be a problem. We see the problems worsen in Figure~\ref{fig:.1_ALL}, where we set $\sigma^2_\varepsilon = 0.1$. We hypothesise that this is because proportionately even more of the variability in $Y$ is attributable to variability in $S_2$, and accurate estimation of $Y|S_2,A_2$ becomes that much more important. All of the matched-subset methods have severe undercoverage for large values of $\xi_{\phi}$ and low values of $\xi_{\psi}$. Weighted methods mitigate this. The residual-borrowing methods achieve much better coverage, but at the cost of much wider intervals. \subsection{Discussion}\label{ss:discussion} Based on our simulation study experiments, we believe that designing the exploration DTR to have uniform randomization over actions is highly beneficial for estimating tolerance intervals.
When this is the case, all methods gave reasonable results in almost all scenarios. Some knowledge of the error distribution may help choose a method that will result in reasonable widths. If uniform exploration is not possible, the residual-borrowing methods appear to be the most robust to undercoverage, followed by the weighted methods, followed by the unweighted methods. That said, it would be prudent to perform a simulation study under a scenario ``close'' to the analysis at hand if possible; to facilitate this we have released our R code \citep{R2015} so that researchers and practitioners can explore other scenarios. \section{Example: STAR*D} We present an example of the application of the TI methods we have described to real-world clinical trial data. The Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study followed an initial population of 4041 patients as they were treated using different antidepressant medications and cognitive behavioural therapy \citep{Rush2004}. There were a total of three decision points at which randomisation took place, with different treatment options available at each one. Outcomes were measured using the clinician-rated Quick Inventory of Depressive Symptomatology \citep{Rush2003}. We will examine two such decision points corresponding to Level 2 and Level 3 of the study; these serve as the first and second decision points in our analysis. We construct tolerance intervals for STAR*D at Level 2 (our decision point 1), having estimated a $Q$ function and estimated optimal policy for Level 3 (our decision point 2). We use exactly the same $Q$-learning working model and estimation procedure as \cite{Schulte2014} to develop $\hat{\pi}_2^*$ and the pseudo-outcomes; we refer the interested reader to their work for more details.
In summary, the state variables we use are up-to-date QIDS measures of patient symptoms, and the outcomes we use are based on later QIDS measurements that have been negated so that higher values are preferable. At decision point 1, we elect to use a binary state variable indicating whether the previous slope in QIDS score for a patient is greater than the median. Higher QIDS scores indicate worse symptom levels, so this state variable effectively identifies patients whose disease status is worsening most quickly. At both decision points, the treatment choice is whether to ``augment'' the current medication with another, or to ``switch'' to another medication altogether. We applied the six TI methods described previously to the data, using the choice to switch or augment treatment as $A_1$, and letting $S_1$ be an indicator variable for QIDS slope being greater than the median slope. We see in Figure~\ref{fig:stard} that the intervals are generally quite wide, and that there is severe overlap of TIs for different treatments. This reflects the high variance and low treatment effect we observe in this data. However, the intervals do capture prognostic information: the intervals for $S_1=\mbox{``Yes''}$ (indicating severely worsening symptoms) are wider, with a decreased lower bound indicating that such patients may have poorer outcomes relative to those with more stable symptoms prior to the decision point. The maximum attainable outcome in this problem is $0$, since QIDS cannot go below $0$. We note that the parametric TI methods can produce upper bounds greater than $0$ and lower bounds that appear to be a bit optimistic. Hence, we suggest that one of the non-parametric methods would be a sensible choice for STAR*D. \begin{figure*} \includegraphics[width=\textwidth]{figures/STARD_TIs} \caption{\label{fig:stard}Tolerance intervals for STAR*D at Level 2.
The six TI methods used previously are applied to the data, using the choice to switch or augment treatment as $A_1$, and letting $S_1$ be an indicator variable for previous QIDS slope being greater than the median. In this setting, higher outcomes are preferable, but higher QIDS scores (and slopes) indicate worse symptoms.} \end{figure*} \section{Conclusion}\label{ss:conclusion} We have developed and evaluated tolerance interval methods for dynamic treatment regimes that can provide more detailed prognostic information to patients who will follow an estimated optimal regime. We began by reviewing in detail different interval estimation and prediction methods and then adapting them to the DTR setting. We illustrated some of the challenges associated with tolerance interval estimation stemming from the fact that we do not typically have data that were generated from the estimated optimal regime. We gave an extensive empirical evaluation of the methods and discussed several practical aspects of method choice. We demonstrated the methods using data from a pragmatic clinical trial. We now take the opportunity to discuss future directions of research on tolerance intervals for dynamic treatment regimes. \subsection{Future Directions} Our work lays the foundation for extending tolerance interval methods for dynamic treatment regimes in several different directions. The normal theory TI methods we employed used an estimate of the residual distribution that is pooled over $S_1$ and $A_1$. The non-parametric methods estimated the residual distributions separately for the different discrete $S_1,A_1$. A compromise solution that partially shares residual information across different configurations of $(S_1,A_1)$, perhaps in a data-driven, adaptive fashion, may provide improved performance and wider applicability. (Note that the non-parametric methods we described are not applicable if $S_1$ is continuous.) 
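As a concrete illustration of the order-statistic machinery that underlies distribution-free tolerance intervals (a textbook construction, not necessarily the exact variant used in our methods), the confidence that the sample range $(x_{(1)}, x_{(n)})$ of an i.i.d. sample covers at least a fraction $\gamma$ of the population has a closed form, since the coverage of the sample range follows a Beta$(n-1,2)$ distribution:

```python
def range_ti_confidence(n, gamma):
    """Confidence that (min, max) of an i.i.d. sample of size n covers
    at least a fraction gamma of the population (Wilks' classical
    distribution-free tolerance interval result).  The coverage of the
    sample range is Beta(n - 1, 2), giving this closed form."""
    return 1.0 - n * gamma ** (n - 1) + (n - 1) * gamma ** n
```

For example, $n = 93$ observations give roughly $95\%$ confidence that $(x_{(1)}, x_{(n)})$ contains $95\%$ of the distribution; calculations of this kind could guide sample-size considerations when planning a tolerance-interval analysis.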
We have treated DTRs with two decision points, but in general we would like to have tolerance intervals for multiple decision points. Such methods would potentially have to address uncertainty stemming from ``parameter sharing'' across time points. It is known \citep{Chakraborty2016} that the effects of model misspecification and non-regularity can compound in the multiple decision point setting, and the impact of this on tolerance intervals is not yet known. While we assumed a single outcome measure $Y$ throughout our work, several methods have been described for estimating DTRs in the presence of multiple outcomes \citep{lizotte12linear,laber14setvalued,lizotte15momdps}. Joint tolerance intervals or tolerance regions would be just as important in this setting as they are in the standard, single-outcome setting. We observed some problems associated with biased estimates of the value of the estimated policy, which are caused by non-regularity. The problem of non-regularity in optimal DTR estimation has been addressed in the confidence interval setting using different approaches, including pre-testing \citep{laber14dynamic} and shrinkage \citep{Chakraborty2010,Chakraborty2013}. We have not explicitly incorporated either of these ideas in the methods we presented; doing so may lead to methods that are more robust to small or zero treatment effects at the second stage yet do not pay a high cost in terms of width. \cite{Fernholz2001} have presented a method to re-calibrate tolerance intervals using the bootstrap. They propose a bootstrap method to estimate the content $\gamma$ of a given tolerance interval: first they construct a tolerance interval with nominal (or ``requested'') content $\gamma$, and then they use the bootstrap to estimate the actual content. This could potentially be used to identify when tolerance methods fail on dynamic treatment regimes, or it may be used simply to give more accurate confidence information to the decision maker.
For example, we may attempt to construct a tolerance interval for $\gamma = 0.9$, but if it turns out that the actual content is $0.85$, the interval may still be useful if the decision-maker is made aware of this fact. Future work to adapt the calibration procedure could prove promising. Finally, a Bayesian approach to the predictive estimation problem may prove fruitful in some settings. \cite{Saarela2015} have laid groundwork for this direction of research. \subsection*{Acknowledgements} This work was supported by the Natural Sciences and Engineering Research Council of Canada. Data used in the preparation of this article were obtained from the limited access datasets distributed from the NIH-supported ``Sequenced Treatment Alternatives to Relieve Depression'' (STAR*D). STAR*D focused on non-psychotic major depressive disorder in adults seen in outpatient settings. The primary purpose of this research study was to determine which treatments work best if the first treatment with medication does not produce an acceptable response. The study was supported by NIMH Contract \#N01MH90003 to the University of Texas Southwestern Medical Center. The ClinicalTrials.gov identifier is NCT00021528. \bibliographystyle{plainnat}
\section*{Acknowledgments} We are grateful to T.~Brauner, D.~Fernandez-Fraile, Mei Huang, and T.~Koide for helpful discussions and to John W.~Clark for reading the manuscript and comments. This work was supported in part by the Helmholtz Alliance Program of the Helmholtz Association, contract HA216/EMMI ``Extremes of Density and Temperature: Cosmic Matter in the Laboratory'', the Helmholtz International Center for FAIR within the framework of the LOEWE program launched by the State of Hesse, by the Deutsche Forschungsgemeinschaft (Grant SE 1836/1-2) and CompStar, a research networking programme of the European Science Foundation.
\section{Introduction} For any positive integer $x$, the ``M\"obius function'' $\mu(x)$ is defined as:
\begin{itemize}
\item $0$, if $x$ is divisible by the square of any prime;
\item $1$, if the factorization of $x$ contains an even number of distinct primes;
\item $-1$, otherwise.
\end{itemize}
The Mertens function of a positive integer $n$ is defined as the sum of the values of the M\"obius function for 1 to $n$: $$M(n) = \sum_{x=1}^n \mu(x)$$ This function scales roughly as $n^{1/2}$ over the interval where it has been studied. An old hypothesis, the ``Mertens conjecture'' (Stieltjes, 1885), proposed that $|M(n)| < n^{1/2}$ for all $n$. This was disproved by Odlyzko and te Riele (1985)~\cite{1}, though no explicit counterexample is known. The largest known value of $|M(n)/n^{1/2}|$, 0.570591 at $n=7766842813$, was found in~\cite{2}. Values of $M(n)$ for all consecutive $n$ up to $10^{14}$ were computed in~\cite{3}. No larger values were found. A computer algorithm to calculate isolated values of $M(n)$ in $O(n^{2/3+\epsilon})$ time was described in~\cite{4} and used to calculate several values up to $10^{16}$. The algorithm described next is, in a sense, a variation of the algorithm of~\cite{4}. The running time is still $O(n^{2/3+\epsilon})$. A distinctive feature of this algorithm is that executing it at any value of $n$ allows one to simultaneously compute all values of $M(y)$ for integer $y=\lfloor n/c \rfloor$ for all integer $c>1$. \section{Algorithm} The starting point for the algorithm is a standard identity: $$\sum_{x=1}^n M(\lfloor n/x \rfloor) = 1, \quad n \geq 1$$ Or, equivalently, $$M(n) = 1 - \sum_{x=2}^n M(\lfloor n/x \rfloor) $$ It has been noted in~\cite{4} that the sequence $\lfloor n/x \rfloor$ takes on only $\lceil 2\sqrt{n} \rceil $ distinct values. Grouping the terms with equal quotients $q = \lfloor n/x \rfloor$, we can rewrite the right-hand side as follows $$M(n) = 1 - \sum_{x=2}^{\lfloor n/t \rfloor} M(\lfloor n/x \rfloor) - \sum_{q=1}^{T} \left(\left\lfloor \frac{n}{q} \right\rfloor - \max\left(\left\lfloor \frac{n}{t} \right\rfloor, \left\lfloor \frac{n}{q+1} \right\rfloor\right)\right) M(q), \quad T = \left\lfloor \frac{n}{\lfloor n/t \rfloor + 1} \right\rfloor,$$ where $t > \lceil \sqrt{n} \rceil$. Now notice that the right hand side can be calculated recursively.
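Before turning to the recursion, the definitions and the identity above can be checked numerically with a naive Python sketch (illustrative only; this is far from the $O(n^{2/3+\epsilon})$ algorithm developed below):

```python
def mobius_sieve(limit):
    """mu(1..limit) via a linear sieve: each composite is crossed out
    exactly once, as (smallest prime factor) * (cofactor)."""
    mu = [1] * (limit + 1)
    is_comp = [False] * (limit + 1)
    primes = []
    for i in range(2, limit + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > limit:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0      # p^2 divides i*p
                break
            mu[i * p] = -mu[i]     # one more distinct prime factor

    return mu

def mertens_table(limit):
    """M(x) for x = 0..limit as cumulative sums of mu."""
    mu = mobius_sieve(limit)
    M = [0] * (limit + 1)
    for x in range(1, limit + 1):
        M[x] = M[x - 1] + mu[x]
    return M
```

With such a table one can verify directly that $\sum_{x=1}^{n} M(\lfloor n/x \rfloor) = 1$ for every $n$ in range.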
In order to calculate M(n), we need to know M(n/2), M(n/3), M(n/4), .... In turn, M(n/2) requires M(n/4), M(n/6), etc. Suppose that we pick a value of $u>\sqrt{n}$ and create an array of $\lfloor n/u \rfloor$ elements, which would end up holding values of $M(\lfloor n/x \rfloor)$ for $1 \leq x \leq \lfloor n/u \rfloor$. We then calculate all $M(y)$ for $1 \leq y\leq u$ in order by direct sieving (this would take $O(u^{1+\epsilon})$ time). For each $y$, we update the array to account for contributions of $M(y)$ according to the formula above. It's not hard to see that updating takes $O((n/u^{1/2})^{1+\epsilon})$ time. The overall time for these two tasks is minimized at $u=O(n^{2/3+\epsilon})$ and it also scales as $O(n^{2/3+\epsilon})$. Finally, when we are done sieving, it is a simple task to compute final values of M, and it takes negligible $O(n/u \ln{n/u}) \approx O(n^{1/3 + \epsilon})$ time. Furthermore, it is easy to see that, if we want to compute $M(x)$ for multiple close values of $x$, we only need to do sieving once. Adjusting $u$ accordingly results in an optimal running time of $O((nN)^{2/3+\epsilon})$, where $N$ is the number of values $x$ for which we're trying to calculate $M(x)$. The structure of the algorithm, with many large elements in the array that can be computed in parallel and more or less independently from each other, suggests that the algorithm may benefit from optimization for a general-purpose GPU (e.g. NVIDIA Fermi).
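A single-value sketch of this recursive scheme in Python (sieve below a cutoff $u$, memoized recursion above it, grouping equal quotients $\lfloor m/x \rfloor$ into blocks; none of the blocking or GPU machinery described below is kept):

```python
def mertens(n):
    """M(n) via M(m) = 1 - sum_{x=2..m} M(m // x): values up to u come
    from a sieve; larger arguments recurse over O(sqrt(m)) blocks on
    which the quotient m // x is constant."""
    u = min(n, max(2, int(n ** (2.0 / 3))))
    # Mobius values via the divisor-sum recurrence sum_{d|m} mu(d) = [m == 1].
    mu = [0] * (u + 1)
    mu[1] = 1
    for i in range(1, u + 1):
        if mu[i]:
            for j in range(2 * i, u + 1, i):
                mu[j] -= mu[i]
    small = [0] * (u + 1)
    for i in range(1, u + 1):
        small[i] = small[i - 1] + mu[i]

    cache = {}

    def M(m):
        if m <= u:
            return small[m]
        if m in cache:
            return cache[m]
        res, x = 1, 2
        while x <= m:
            q = m // x
            nxt = m // q + 1        # first x' with a smaller quotient
            res -= (nxt - x) * M(q)
            x = nxt
        cache[m] = res
        return res

    return M(n)
```

All recursive arguments have the form $\lfloor n/k \rfloor$, so the cache stays small, and every cached value $M(\lfloor n/c \rfloor)$ comes out of the same run, which is the multi-value feature noted above.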
The nature of this computation is such that it involves a large number of random memory accesses across the entire block, and this is not a task that's particularly suitable for GPU optimization (among other reasons, the CPU has much larger L1 and L2 caches). Therefore, this task is done on the CPU. A naive algorithm that performs sieving in a block $[y_1..y_2]$ can be described as follows: 1. Create an array $A$ of integers with $y_2-y_1+1$ elements, corresponding to $y_1$, $y_1+1$, ..., $y_2$. Set these all to 1 initially. 2. For every prime number $p$ between 2 and $\lceil \sqrt{y_2} \rceil$, multiply elements in the array corresponding to multiples of $p$ by $-p$. 3. For every such prime, set all elements in the array corresponding to multiples of $p^2$ to 0. 4. Go through the array. For every element $y$, calculate the M\"obius function $\mu(y)$: - If $A[y-y_1]$ is 0, then $\mu(y)$ is 0 - Otherwise, if $|A[y-y_1]|$ is equal to $y$, then $\mu(y)$ is equal to the sign of $A[y-y_1]$ - Otherwise, $\mu(y)$ is equal to minus the sign of $A[y-y_1]$ 5. Add up all values of $\mu(y)$ to calculate $M(y)$. The number of operations, according to the prime number theorem, is $O((y_2-y_1) \ln{\ln{\sqrt{y_2}}})$. While there's not much that can be done about the overall growth rate (obviously, the task can't be done faster than $O(y_2-y_1)$, and the log factor only plays a minor role, since, even for $y_2 = 10^{20}$, $\ln{\ln{\sqrt{y_2}}} \approx 3.1$), we can improve the constant factor. One significant improvement can be achieved by modifying the temporary array to store approximate logarithms of prime products instead of exact values.
This serves two purposes: it replaces each integer multiplication with an integer addition (which is often substantially faster); and it reduces the memory throughput of the algorithm by a factor of 8, because the array $A$ in the range of values we're interested in has to contain 64-bit values, but base-2 logarithms can be stored in 8-bit variables. This is the modified algorithm: 0. (Preparation, has to be done once) Create an array of log-primes: For each prime $p_i$, the corresponding log-prime $l_i = \lfloor \log_2{p_i}\rfloor \operatorname{|} 1 $ (here `$|$' is binary ``OR''.) 1. Create an array $A$ of 8-bit integers with $y_2-y_1+1$ elements, corresponding to $y_1$, $y_1+1$, ..., $y_2$. Set these all to 0 initially. 2. For every prime number $p$ between 2 and $\lceil \sqrt{y_2} \rceil$, add $l_i$ to all elements in the array corresponding to multiples of $p$. 3. For every such prime, set the most significant bit (by performing binary OR with 0x80) in all elements corresponding to multiples of $p^2$ within the block. 4. Go through the array. For every element $y$, calculate the M\"obius function $\mu(y)$: - If the most significant bit of $A[y-y_1]$ is set, then $\mu(y)$ is 0 - Otherwise, if $A[y-y_1]$ is greater than $\lfloor \log_2{y} \rfloor - 1$, then $\mu(y)$ is equal to $1-2(A[y-y_1] \operatorname{\&} 1)$ (here ``\&'' is binary ``AND''). - Otherwise, $\mu(y)$ is equal to $-1+2(A[y-y_1] \operatorname{\&} 1)$ 5. Add up all values of $\mu(y)$ to calculate $M(y)$. It can be proved that sieving according to the algorithm will result in correct results, as long as $y_2$ is less than approximately $10^{27}$. Further, we can precompute results of sieving with the first several primes and square primes, storing them in a separate array, and copying this array instead of re-running the sieve. For example, the state of the temporary array after sieving with the first 5 primes and the first 2 square primes is periodic with the period of 2*2*3*3*5*7*11 = 13860.
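The 8-bit scheme above can be sketched in Python as follows (plain lists stand in for the packed byte array; the rounding and threshold details are delicate, and this sketch is validated here only on a small range). Note that each log-prime is forced odd, so the low bit of a cell tracks the parity of the number of distinct small prime factors, and for realistic block positions the running sum stays well below 0x80 and cannot spill into the square-factor flag bit:

```python
import math

def mobius_block_log(y1, y2):
    """Mobius values on [y1, y2] via the log-prime block sieve sketched
    in the text: cells accumulate odd 'log-primes' l_p = floor(log2 p) | 1;
    the top bit (0x80) flags a square factor."""
    size = y2 - y1 + 1
    A = [0] * size                                    # step 1
    limit = math.isqrt(y2)
    is_comp = [False] * (limit + 1)
    for p in range(2, limit + 1):
        if is_comp[p]:
            continue
        for q in range(p * p, limit + 1, p):
            is_comp[q] = True
        lp = int(math.log2(p)) | 1                    # step 0: odd log-prime
        first = ((y1 + p - 1) // p) * p
        for m in range(first, y2 + 1, p):             # step 2
            A[m - y1] = (A[m - y1] + lp) & 0xFF
        p2 = p * p
        first = ((y1 + p2 - 1) // p2) * p2
        for m in range(first, y2 + 1, p2):            # step 3
            A[m - y1] |= 0x80
    mu = [0] * size                                   # step 4
    for y in range(max(y1, 2), y2 + 1):
        a = A[y - y1]
        if a & 0x80:
            mu[y - y1] = 0                            # square factor
        elif a > int(math.log2(y)) - 1:               # all factors sieved
            mu[y - y1] = 1 - 2 * (a & 1)
        else:                                         # one large prime unseen
            mu[y - y1] = -1 + 2 * (a & 1)
    if y1 == 1:
        mu[0] = 1
    return mu
```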
Finally, the whole process can be modified to make use of multiple CPU cores, through the use of OpenMP. To perform the array updating in the minimal amount of time, we have to divide the (x,y) space into several distinct regions and process each region using a different procedure. \begin{figure} \centerline{\epsfig{file=zones.png,width=5in}} \caption{Regions of (x,y) space} \label{fig1} \end{figure} In regions 1 and 1' (defined as $y<c_1*(n/x)^{1/2}$ for some $c_1>1$), almost every (x,y) pair makes a nonzero contribution to one of the elements of the array. Adequate performance can be achieved by creating one GPU thread to handle a region with dimensions (1,512) and to add up contributions consecutively from every pair. An important factor that slows down computations in these regions is the necessity to perform at least one integer 64-bit division for every (x,y) pair. Integer division is an extremely slow operation that is not supported natively by the GPU and can take as much as a hundred clock cycles. (It's also quite slow on the CPU, but less so.) To improve performance, we can apply an algorithm such as ~\cite{5}, which accelerates integer division by replacing it with integer multiplication, addition, and shift. It requires that we precompute several constants for each value of the divisor, but, as long as each divisor is used multiple times, this results in net reduction of computation time. We can precompute all constants in advance for divisors up to some constant C, where C is only limited by the amount of available memory. This will result in substantial speedup as long as $C \gg u^{1/2}$. In regions 2 and 2' (defined as $c_1*(n/x)^{1/2} \leq y<c_2*(n/x)^{1/2}$ where $c_2 \approx 10*c_1$), there is a substantial number of (x,y) pairs which don't contribute anything, but the memory access pattern is relatively dense. Good performance is achieved when each GPU thread handles a region with dimensions (1,8192). 
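The division-by-invariant-multiplication trick referenced above can be sketched as follows (a simplified variant, not the exact scheme of~\cite{5}: with $m = \lceil 2^{64}/d \rceil$, the identity $\lfloor xm/2^{64} \rfloor = \lfloor x/d \rfloor$ holds whenever $x \cdot d < 2^{64}$):

```python
def magic_divider(d):
    """Multiplier replacing division by a fixed d: m = ceil(2**64 / d).
    Then (x * m) >> 64 == x // d for all x with x * d < 2**64, since the
    rounding error x * (m - 2**64 / d) / 2**64 stays below 1/d."""
    assert d >= 1
    return ((1 << 64) + d - 1) // d

def fast_div(x, m):
    """x // d computed as a multiply and a shift, with no division."""
    return (x * m) >> 64
```

On the GPU the same idea is implemented with fixed-width multiplies; Python's big integers merely make the arithmetic explicit here.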
In the region 3, nonzero contributing pairs are sparse and processing time is limited by random memory access bandwidth. In the region 4 (defined as $y>c_3*n^{1/2}$, where $c_3 \approx 2$), accesses are so sparse that it's not even justified to transfer results of the sieving onto the GPU any more: time spent transferring data over the PCI-e bus is larger than time that would be spent processing the same data with the CPU. At this point we no longer need the table of divisors used to speed up regions 1 and 2, so, if the amount of memory is a bottleneck, we can free that memory and increase the sieving block size up to several times $u^{1/2}$ (this is the only region where a ratio of block size to $u^{1/2}$ significantly greater than 1 is warranted). \section{Approximate algorithm} A different algorithm that can be employed in this task is based on the following remarkable formula: $$M(x) \approx 2 \operatorname{Re} \sum_{i=1}^{\infty} \frac{x^{\rho_i}}{\rho_i \zeta'(\rho_i)} $$ (1) or in a simple yet practical reformulation, $$M(x)/x^{1/2} \approx q_n(x) = 2\sum_{i=1}^n a_i \cos{(z_i \ln{x} + b_i)}$$ (2) where $$z_i = \operatorname{Im} \rho_i$$ $$a_i = 1/|\rho_i \zeta'(\rho_i)|$$ $$b_i = -\arg(\rho_i \zeta'(\rho_i))$$ and $\rho_i$ is the $i$-th zero of the Riemann zeta function. Somewhat more sophisticated reformulations, constructed by inserting a nontrivial kernel $f(n)$ inside the sum, are mentioned in the literature, but the formula above is particularly easy to calculate, and differences between different kernels for large $n$ are minor. Although this formula is only approximate, its usefulness here stems from the fact that we can get a good approximation - for example, two significant digits of $M(x)/x^{1/2}$ - with just 1000 terms, regardless of the magnitude of x. Calculating $q_n$ also lends itself nicely to GPU optimization. In this case, the implementation is relatively straightforward.
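Evaluating formula (2) is a direct translation (a sketch; the triples $(z_i, a_i, b_i)$ must be precomputed from the zeta zeros, e.g. with an arbitrary-precision package, and none are hardcoded here):

```python
import math

def q_approx(x, terms):
    """q_n(x) = 2 * sum_i a_i * cos(z_i * ln(x) + b_i), formula (2).
    `terms` is a sequence of (z_i, a_i, b_i) with z_i = Im(rho_i),
    a_i = 1/|rho_i * zeta'(rho_i)|, b_i = -arg(rho_i * zeta'(rho_i))."""
    lx = math.log(x)
    return 2.0 * sum(a * math.cos(z * lx + b) for z, a, b in terms)
```

In double precision the product $z_i \ln x$ is itself the accuracy bottleneck once $x$ is very large, which is exactly the precision issue discussed below.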
The only nuance that needs addressing involves the use of single-precision vs. double-precision floating point. Modern GPUs are generally faster when operating in single precision than in double precision. This is particularly true with regard to the calculation of the cosine. All modern GPUs have native hardware support of basic single-precision transcendental functions such as the cosine. GPUs in the NVIDIA Fermi family, for example, have a throughput of 4 or 8 single-precision cosines per clock cycle per multiprocessor: the NVIDIA GTX 560 can, in theory, exceed $10^{11}$ cosines per second (compare with modern desktop CPUs, which can only manage on the order of $10^8$ cosines/second). On the other hand, double-precision cosines are not natively supported by NVIDIA or AMD and they can be a factor of 20 or so slower. But the algorithm we have here can't be implemented completely in single precision, unless both n and x are small. \section{Results} To test the algorithm described in this article, a four-core Intel i5-2500 CPU running at 4.4 GHz with 16 GB of memory and a NVIDIA GeForce GTX 560 video card (384 processing cores at 1.7 GHz) with 1 GB of memory were used. Several choices of operating systems are available in this hardware configuration. Of these, x64 Linux (e.g. Ubuntu) is notable, because it comes with a C/C++ compiler (gcc/g++) that natively supports a 128-bit integer type. Since we're trying to compute M(n) for n above the 64-bit limit ($2^{64} \approx 1.8 * 10^{19}$), native 128-bit support substantially simplifies the code. All the various constants and thresholds involved in this algorithm (such as dimensions handled by each GPU thread, or boundaries between regions 1/1' and 2/2') were tuned for peak performance. To verify correctness, many small values in $10^8 .. 10^{12}$ range were computed and results were compared with results obtained by naive O(n) time sieving.
Values of the Mertens function for $10^{13}, 10^{14}, 10^{15}$, and $10^{16}$ were compared against values previously reported in the literature~\cite{4}. Calculating $M(10^{16})$ takes 1 minute: a substantial improvement, compared to the previously reported 1996 result where calculation for the same $n$ took two days. (It should be noted that only a small part of this improvement is due to an overall increase in clock frequencies: the 1996 analysis employed a CPU running at 0.3 GHz, only one order of magnitude slower than the hardware used here. Most of it is due to substantial parallelization and larger cache sizes.) For higher $n$, direct verification is problematic, but results can be compared with approximate values. In order to make use of the approximate algorithm, the list of the first 2,000,000 zeros of the Riemann zeta function was downloaded from~\cite{6}. Zeros were then refined to 30 digits and corresponding values of $\zeta'(\rho_i)$ were computed using Mathematica. To ensure correctness, these values for the first 200,000 zeros were independently computed to 30 digits using the arbitrary-precision floating point arithmetic library ``mpmath'' ver. 0.17 ~\cite{7}, and results were checked against each other to confirm the lack of major discrepancies. Approximations of $M(n)$ for many large values of $n$ were calculated. These turned out to be in agreement with exact calculations: the residual error between the exact and the approximate values of $M(n)/n^{1/2}$ for $n \gtrsim 10^8$ was approximately normally distributed with a standard deviation of $\approx 4.2*10^{-4}$. Finally, the code was used to compute values of $M(n)$ for powers of 10 from $10^{17}$ to $10^{22}$. As expected, execution time scaled roughly as $n^{2/3}$. The computation for $10^{22}$ took 10 days.
As described in the beginning of the article, computing $M(10^{22})$ simultaneously yields values of $M(y)$ for integer $y=\lfloor 10^{22}/c \rfloor$ for all integer $c$, including values such as $10^{19}$ and $10^{20}$; these agreed with their direct computations. Results are listed in Table 1. \begin{center} Table 1. Values of $M(n)$ for select values of $n$. \begin{tabular}[h]{|c|c|c|c|}\hline $n$ & $M(n)$ & $M(n)/n^{1/2}$ & time \\\hline $10^{16}$ & $-3195437$ & -0.032 & 1 m\\ $10^{17}$ & $-21830254$ & -0.069 & 4.5 m\\ $10^{18}$ & $-46758740$ & -0.047 & 20.7 m\\ $10^{19}$ & $899990187$ & 0.285 & \\ $10^{20}$ & $461113106$ & 0.046 & \\ $10^{21}$ & $3395895277$ & 0.107 & \\ $10^{22}$ & $-2061910120$ & -0.021 & 10 d \\ \hline \end{tabular} \end{center} \begin{figure} \centerline{\epsfig{file=m.png,width=7in}} \caption{ Values of M(x) for x between $10^{16}$ and $10^{22}$} \label{fig2} \end{figure} \section{New extreme value} As mentioned above, the largest value of $|M(n)/n^{1/2}|$ reported in the literature so far is 0.570591 at $n=7766842813$. \cite{3} analyzed approximate values for $n$ up to $10^{10^{10}}$ using the Riemann zeta algorithm and noticed several larger potential extremes, including one potentially within reach at the present level of technology. Table 4 in that article notes that the $q_{10^6}$ approximation (based on the first $10^6$ zeros) reaches the value of $-0.585$ at $n \approx 1.16*10^{19}$, indicating a new extreme in the neighborhood. To localize the extremum, a grid of values of $M(n)$ in the neighborhood was systematically calculated. The density of the grid was increased until it became practical to cover intervals between the points of the grid with direct sieving (for example, if values of $M(n)$ at $1.160986*10^{19}$ and $1.160987*10^{19}$ are known exactly, all values in the interval can be calculated, and the local extremum in this interval located, in a matter of hours). A plot with all computed points is shown in Figure~\ref{fig2}.
The largest value of $|M(n)/n^{1/2}|$ found by this method turns out to be 0.585767684 (M(n)=-1,995,900,927) at $n=11,609,864,264,058,592,345$. Given the approach and the density of the grid, it is not possible to state with absolute certainty that the value is largest for, say, $n<10^{20}$, or even in the neighborhood of $1.16*10^{19}$. However, a heuristic argument can be made for negligible probability of finding a larger value below $10^{40}$. Start by observing that a larger extreme value requires having a large value of $|q_{10^6}|$. Below $10^{40}$, the function $|q_{10^6}|$ only exceeds 0.575 inside the interval from $1.1600*10^{19}$ to $1.1612*10^{19}$ (the next such occurrence is in the vicinity of $2.26*10^{42}$). Since, as noted above, the residual error $M(n)/n^{1/2}-q_{10^6}(n)$ is approximately normally distributed with the standard deviation of $\approx 5.7*10^{-4}$, and the difference between 0.585767 and 0.575 comes out to 19 standard deviations, the probability for $|M(n)/n^{1/2}|$ at any randomly-picked value of n to exceed 0.585767 conditional on $q_{10^6}(n)$ being less than 0.575 is vanishingly small (less than $10^{-78}$) and therefore the probability for this to happen at any n below $10^{40}$ is still vanishingly small. (This, however, is subject to a caveat, which is mentioned in the next section.) So it remains to explore the interval of $1.1600*10^{19}$ to $1.1612*10^{19}$. Suppose that we know values of $M(n)$ for $n=a$ and $n=b$. What can be said about the probability for M(n) to exceed a certain $M_0$ in the interval between $a$ and $b$, assuming that $b>a$, $M_0>M(a)$, and $M_0>M(b)$ (or, conversely, to go below $M_0$ if $M_0<M(a)$ and $M_0<M(b)$)? Locally, $M(n)$ can be thought of as essentially a random walk, which moves either up or down with the probability of $\sigma/2$ or sideways with the probability of $1-\sigma$ (where $\sigma=6/\pi^2$ is the share of squarefree numbers among all integers).
The probability $P(M(n)>M_0)$ for such a random walk can be shown to be equal to $$P(M(n)>M_0) = \exp\left(-\frac{(M(a)-M_0)(M(b)-M_0)}{2\sigma(b-a)}\right)$$ to the first order in $b-a$ (which is sufficient for our needs). Of course, $M(n)$ is not a true random walk. It's not hard to see that a true random walk as described above would have values of $M(n)/n^{1/2}$ normally distributed with variance $6/\pi^2 \approx 0.608$ (standard deviation $\approx 0.78$). In reality, the standard deviation of $M(n)/n^{1/2}$ among computed values is only 0.11, suggesting that its dynamics involves a degree of regression towards the mean, and the probability estimate we have may be too high. Nevertheless, using this estimate and applying it to all intervals shown in Figure~\ref{fig2} yields that the total probability to find a larger extreme for $|M(n)/n^{1/2}|$ in the neighborhood (and, consequently, below $10^{40}$) is less than $0.05$. \section{Search for the first counterexample} As the discussion above shows, it is highly improbable that a counterexample to the Mertens conjecture will be found in the range where computation by the exact algorithm is tractable. Locating the first counterexample, therefore, is best approached using the approximate algorithm. Strictly speaking, the approximation (2) is only valid for all $x$ under the assumption that the Riemann hypothesis is valid. If it fails, $|M(x)|$ can be expected to exceed $x^{1/2}$ for $x \approx \min_i t_i^{\frac{1}{s_i-0.5}}$, where $\rho_i = s_i + i t_i$ are nontrivial roots of Riemann zeta. Current limits on $\rho_i$ make it unlikely to see a counterexample below $10^{26}$ or so. On the other hand, if the Riemann hypothesis holds, simple investigation of (2) leads to the conclusion that the conjecture is unlikely to be violated over a much wider range of values. A general sum of the form (2) can be treated as a sum of a large number of random variables. According to the central limit theorem, such a sum should be expected to be approximately normally distributed.
A sum of the first $10^6$ terms ($q_{10^6}$) would be normally distributed with a standard deviation of $(2\sum_{i=1}^{10^6} a_i^2)^{1/2} \approx 0.17$. In practice, the presence of several large non-normal terms in the sum means that the actual tail distribution of values of $q_{10^6}$ is even narrower. The rest of this section assumes that the Riemann hypothesis holds. The most straightforward algorithm that can be applied here is described in~\cite{3}. It involves computing values of $q_n$ over a grid of values for some small value of $n$, recording values where $|q_n|$ exceeds a certain predefined threshold, and then studying areas where this happens in more detail. Unfortunately, this kind of brute-force search is too slow to yield the counterexample. Numerical evaluation of the probability distribution function (PDF) of (2) indicates that the search might need to be extended as high as $\exp{(10^{18})}$. Covering this region does not seem feasible at the present level of desktop PC technology: it would take an optimized software program and a typical high-end desktop PC hundreds of computer-years to reach $10^{18}$. It follows that a more intelligent algorithm needs to be used. One useful observation that can be made is that the range of values that need to be checked can be constrained significantly just by examining the first several terms of (2). For example, the analysis of the PDF of (2) and its subsequences shows that any counterexample is unlikely to have $|q_7(x)|<0.425$; the complementary condition $|q_7(x)| \geq 0.425$ holds for fewer than 1 in 10,000 values of $x$. In addition, all subsequences of (2) are quasiperiodic. This is not particularly useful for long subsequences, but for short sequences, such as $q_4$ and $q_7-q_4$, we can find $n<10^9$ such that these functions are ``almost'' periodic with periods $2\pi n/z_1$.
For example, $$(z_2/z_1)\cdot274243136 = 407871396.0012$$ $$(z_3/z_1)\cdot274243136 = 485262780.0010$$ $$(z_4/z_1)\cdot274243136 = 590306027.0009$$ $$(z_5/z_1)\cdot242101728 = 564116758.0001$$ $$(z_6/z_1)\cdot242101728 = 643781791.9996$$ $$(z_7/z_1)\cdot242101728 = 700862060.0012$$ which means that $q_4$ is approximately periodic with the period $274243136(2\pi/z_1)$ and $q_7-q_4$ is approximately periodic with the period $242101728(2\pi/z_1)$. We can accelerate the algorithm by computing values of $q_4$ and $q_7-q_4$ over one such period and then reusing them for multiple consecutive periods. Furthermore, we do not need to save the actual values: it is much faster to compare each value against a predefined threshold and to save only boolean flags indicating whether the threshold is exceeded. This ensures compact packing of the results (8 flags per byte) and minimizes memory throughput. Unfortunately, these algorithmic improvements are still insufficient to ensure locating a counterexample, but they allow us to extend coverage by two orders of magnitude. To work with $x \gg \exp{(10^{10})}$, it is necessary to control the loss of precision at high $x$. For example, computing $p_{10,000}(\exp{(10^{11})})$ involves computing the cosine of $z_{10,000}\cdot10^{11} + b_{10,000} \approx 10^{15}$. Since these computations are done on the GPU, whose internal representation of double-precision floating-point values has machine precision $2^{-53} \approx 1.1\times10^{-16}$, the computed value of the cosine could be off by as much as 0.1. For much larger $x$ or $n$, the computed cosines could be essentially meaningless.
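This loss of precision can be countered by reducing each phase modulo $2\pi$ in software high-precision arithmetic before rounding to double, which is the essence of the shifted-table scheme described next. A sketch using exact rational arithmetic in place of 30-digit software arithmetic (the constants are illustrative: the ordinate of the first zeta zero and $2\pi$, each to 30 digits):

```python
import math
from fractions import Fraction

# 30-digit constants, standing in for entries of the reference table.
Z1 = Fraction("14.1347251417346937904572519836")      # first zeta-zero ordinate
TWO_PI = Fraction("6.28318530717958647692528676656")  # 2*pi

def reduced_phase(z, x0):
    """Phase (z*x0) mod 2*pi computed in exact rational arithmetic and
    rounded to double only at the end -- the analogue of regenerating the
    shifted table b'_i = (b_i + z_i*x0) mod 2*pi for each block."""
    return float((z * x0) % TWO_PI)

x0 = 10**11
# Naive double arithmetic: the phase is ~1.4e12, where one ulp is already
# ~2e-4, far too coarse for an accurate cosine.
naive = float(Z1) * x0
print(math.ulp(naive))        # spacing of representable doubles near 1.4e12
print(reduced_phase(Z1, x0))  # accurate phase in [0, 2*pi)
```

Near a phase of $\sim 1.4\times10^{12}$ a single rounding step shifts the cosine argument by $\sim 10^{-4}$, while the reduced phase retains full double precision.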
To address this, a reference table of the constants $a_i$, $b_i$ and $z_i$ to 30 significant digits was kept; for the actual calculations, the range of study was divided into blocks of length $\approx 2\times10^{11}$, and a new, ``shifted'' table was generated for every block using high-precision software according to the formula $b'_i = (b_i+z_i x_0) \bmod 2\pi$. The search was extended to $\exp{(10^{15})}$. Results up to $\exp{(10^{13})}$ were double-checked using brute-force search, and further verified using the \texttt{mpmath} arbitrary-precision computing package. The search up to $\exp{(10^{15})}$ took approximately 7 days using the hardware mentioned in section 5. In line with expectations, no counterexamples were found. The largest value of $|q_{10^6}(x)|$ observed was $0.9565$ for $x \approx \exp{(5.0586\times10^{14})}$. Some ``increasingly large positive/negative values'' (using the terminology of~\cite{3}) are listed in tables 2 and 3.
\begin{center}
Table 2. Increasingly large positive values.
\begin{tabular}[h]{|c|c|}\hline
$\ln{n}$ & $q_{10^6}(n)$ \\\hline
$22.773133$ & $+0.56959$\\
$97.526523$ & $+0.61863$\\
$984.282019$ & $+0.62512$\\
$1625.698493$ & $+0.62687$\\
$1957.803133$ & $+0.62849$\\
$2709.485814$ & $+0.65467$\\
$2794.384965$ & $+0.65955$\\
$12277.362671$ & $+0.79344$\\
$86458087.131684$ & $+0.80443$\\
$249548703.533702$ & $+0.81472$\\
$1467573228.501077$ & $+0.81702$\\
$1901564582.121964$ & $+0.82335$\\
$2500922487.505913$ & $+0.82884$\\
$3847517705.646364$ & $+0.83682$\\
$10407545552.85608$ & $+0.842485$\\
$21334043144.02927$ & $+0.86622$\\
$187114096628.77484$ & $+0.88636$\\
$1354137181464.62097$ & $+0.90578$\\
$6984497047106.74600$ & $+0.90744$\\
$84594507546024.46719$ & $+0.92208$\\
$117239588213313.90075$ & $+0.94102$\\
\hline
\end{tabular}
\end{center}
\begin{center}
Table 3. Increasingly large negative values.
\begin{tabular}[h]{|c|c|}\hline
$\ln{n}$ & $q_{10^6}(n)$ \\\hline
$43.898387$ & $-0.58478$\\
$140.373835$ & $-0.58940$\\
$853.851589$ & $-0.67715$\\
$3005.762748$ & $-0.68878$\\
$102494.024866$ & $-0.69580$\\
$150020.464414$ & $-0.70773$\\
$178259.151801$ & $-0.71541$\\
$203205.659988$ & $-0.74083$\\
$860440.495719$ & $-0.75254$\\
$1365643.292004$ & $-0.75927$\\
$2765781.628095$ & $-0.76041$\\
$7078384.260482$ & $-0.76879$\\
$13670267.747472$ & $-0.78505$\\
$19371574.223934$ & $-0.78747$\\
$57334128.09084$ & $-0.80765$\\
$167211796.14902$ & $-0.82488$\\
$405441986.398094$ & $-0.84497$\\
$4016980126.87193$ & $-0.85146$\\
$86339883457.03526$ & $-0.87296$\\
$264421251554.46918$ & $-0.89290$\\
$1278282683343.76520$ & $-0.89907$\\
$3680547202477.03623$ & $-0.91392$\\
$18747209824980.73961$ & $-0.91405$\\
$55714219637174.49540$ & $-0.92025$\\
$117892199597999.02070$ & $-0.92078$\\
$143697547951999.01914$ & $-0.93038$\\
$258592103887306.71643$ & $-0.94097$\\
$505863698785929.24318$ & $-0.95652$\\
\hline
\end{tabular}
\end{center}
\begin{figure} \centerline{\epsfig{file=trend.png}} \caption{Increasingly large positive and negative values of $q(x)$} \label{fig3} \end{figure}
The distribution of values found here is in general agreement with expectations. Even though some authors (e.g. \cite{1}) observed tentative evidence for sign asymmetry (namely, that large positive values seemed to occur more often than large negative values), that does not appear to be the case here: the distribution found in this study is largely symmetric (see table 4). Although no counterexamples were found in the area of study, further improvements to the algorithm and the computer hardware may bring this task within reach.
\begin{center}
Table 4. Tail ends of the frequency distribution of $|p_{7000}(x)|$ for $\ln{x} < 10^{15}$, sampled with the step $\pi/(3072 z_1) \approx 7.24\times10^{-5}$.
\begin{tabular}[h]{|c|c|c|}\hline
$|p_{7000}(x)|$ & positive values & negative values \\ \hline
$0.88 .. 0.89$ & $4499$ & $4399$ \\
$0.89 .. 0.90$ & $1472$ & $1518$ \\
$0.90 .. 0.91$ & $346$ & $444$ \\
$0.91 .. 0.92$ & $91$ & $112$ \\
$0.92 .. 0.93$ & $22$ & $42$ \\
$0.93 .. 0.94$ & $0$ & $6$ \\
$0.94 .. 0.95$ & $0$ & $6$\\
\hline
\end{tabular}
\end{center}
\bigskip
\section{Introduction} \label{sec:Introduction} The formation and evolution of early-type galaxies (ETGs) is a widely studied topic in present-day astrophysics, in particular due to a number of tight correlations between their observables, such as structural parameters, kinematics, colours and central supermassive black-hole masses \citep[e.g.][]{Illingworth1977, Sandage-Visvanathan1978, Djorgovski-Davis1987, Dressler1987, Ferrarese-Merritt2000, Gebhardt2000}. See \citet{Renzini2006} for a recent review. These correlations seem hard to explain at first sight in the hierarchical CDM formation paradigm \citep[e.g.][]{Binney1978, Blumenthal1984}, where ETGs are seen as the end products of the stochastic merging of smaller building blocks \citep[e.g.\ spiral galaxies;][]{Toomre-Toomre1972, Barnes1992}. Both observational and theoretical studies have progressed rapidly over the last decade, largely thanks to major observational programmes conducted on the ground and from space \citep[e.g.\ 2dF, SDSS, COSMOS, COMBO17;][]{Colless2001, Bernardi2003short, Scarlata2007short, Faber2007short} and to the large-scale numerical simulations nowadays affordable on supercomputers \citep[e.g.][]{Hopkins2006}. The SAURON and ATLAS\textsuperscript{3D} projects provide detailed analyses of substantial samples of nearby ETGs based on deep integral-field spectroscopic data and serve as a local benchmark for ETG studies \citep[e.g.][]{Cappellari2007,Emsellem2007,Emsellem2011short}. Despite the tremendous increase in data volume and computational power, many questions remain very difficult to answer.
This is due to several factors: (i)~the types of available data (usually only luminosity-weighted kinematics and imaging) and their quality (in terms of signal-to-noise ratio and spatial/spectral resolution) have remained limited compared to their volume, (ii)~mass modeling techniques often suffer from intrinsic degeneracies that are hard to overcome without very high quality data that can only be obtained for small samples of nearby ETGs. ETG studies that are now conducted \textsl{beyond} the local Universe with 8--10\,m-class and space-based telescopes suffer from many of the same issues that similar studies of the \textsl{local} Universe, with 4\,m-class telescopes, faced more than a decade ago. One of the prevailing and, even today, still not precisely answered questions is how important dark matter is in the inner baryon-dominated regions of ETGs \citep[e.g.][]{Saglia1992, Saglia1993, Bertola1993, Bertin1994, Carollo1995, Gerhard1998, Loewenstein-White1999, Gerhard2001, Keeton2001, Romanowsky2003, Mamon-Lokas2005a, deLorenzi2009}. Open questions include in particular the following: (i)~How much dark matter is there precisely inside the inner few effective radii of ETGs and how is it distributed? (ii)~How do the stellar and dark matter distributions of ETGs evolve with cosmic time and are there trends with galaxy mass? (iii)~Do these observations agree with theoretical predictions? Because dark matter is not directly observable but can only be inferred from other observations, these questions are particularly difficult to answer for more distant ETGs where data quality progressively deteriorates due to lack of signal-to-noise, even with the largest ground-based telescopes. 
To overcome a number of these hurdles, several systematic programmes were initiated over the last decade to combine the constraints of strong gravitational lensing with those of stellar kinematics \citep[LSD and SLACS; e.g.][]{Koopmans-Treu2002, Treu-Koopmans2004, Bolton2006, Treu2006, Koopmans2006, Koopmans2009, Bolton2008a, Auger2009}. This combination has turned out to be particularly powerful in breaking degeneracies in ETG mass models, despite the limited quality of the kinematic data \citep[e.g.][]{Treu-Koopmans2004, Koopmans2006, Koopmans2009, Auger2010b}. The reason is that the mass enclosed by the Einstein radius of a lens galaxy can be accurately determined and breaks the usual mass-anisotropy degeneracy of ETGs to a large extent \citep[see e.g.][for an explanation based on a simple power-law model]{Koopmans2004}. Whereas the LSD and SLACS programmes used predominantly luminosity-weighted stellar velocity dispersions and were modeled using the spherical Jeans equation \citep[e.g.][]{Koopmans-Treu2003, Treu-Koopmans2004}, one might argue that (slowly) rotating systems and/or systems with orbital anisotropy cannot be modeled properly that way \citep[see for example the discussion in][]{Kochanek2006}. To address these valid issues, a new and substantially larger observational programme of a sub-sample of SLACS lenses, using observations with the VLT and Keck, was started in 2006 \citep[][]{Czoske2008, Czoske2009short, Barnabe2010} to obtain two-dimensional kinematic fields (both first and second moments of the line-of-sight velocity distribution) out to their effective radii, complemented with a rigorous modeling effort based on two-integral axisymmetric models \citep[][]{Barnabe-Koopmans2007, Barnabe2009a}.
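For reference, the robustness of the Einstein-radius mass follows from the well-known thin-lens relations (a textbook result, quoted here for a circularly symmetric lens, not a formula derived in this paper): the projected mass enclosed by the Einstein radius is
\begin{equation}
M_{\mathrm{E}} = \pi R_{\mathrm{E}}^2\,\Sigma_{\mathrm{cr}}, \qquad
\Sigma_{\mathrm{cr}} = \frac{c^2}{4\pi G}\,\frac{D_{\mathrm{s}}}{D_{\mathrm{l}}\,D_{\mathrm{ls}}},
\end{equation}
where $R_{\mathrm{E}} = \theta_{\mathrm{Einst}} D_{\mathrm{l}}$ and $D_{\mathrm{l}}$, $D_{\mathrm{s}}$ and $D_{\mathrm{ls}}$ are the angular-diameter distances to the lens, to the source, and between lens and source. The enclosed mass thus depends only on the measured $\theta_{\mathrm{Einst}}$ and the assumed cosmology, not on the detailed form of the mass profile.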
The self-consistent combination of gravitational-lensing and stellar-kinematic data sets not only allows better constraints to be set on the mass distribution of ETGs \citep[][]{Czoske2008, Barnabe2009b, Barnabe2010,Barnabe2011}, but also allows less rigorous methods to be tested that might still have to be used at even higher redshifts (i.e.\ $z\ga 0.5$) until IFU observations from the next generation of telescopes (such as ESO's ELT) become available. Whereas the above-mentioned results show that these data provide more precise and accurate results on the \textsl{total} mass profile of ETGs in the inner few effective radii, several problems still remain that are critical for our understanding of galaxy formation. For example, what fraction of the mass inside several effective radii is due to dark matter and what are the relative mass distributions of the stellar and dark-matter components? This is much harder to answer with kinematic and lensing data alone, since there are many combinations of the stellar and dark-matter distributions that lead to similar lensing and kinematic data \citep[][]{Treu-Koopmans2004, Barnabe2009b}. If, on the other hand, the stellar mass-to-light ratio can be determined independently, these degeneracies can mostly be lifted. This assumes, however, that the broad-band colours provide sufficient information to break degeneracies in the IMF models, which in general they do not. Still, current constraints can already set limits on the IMF and stellar mass-to-light ratios that are becoming competitive \citep[][]{Grillo2008, Auger2009, Auger2010a, Spiniello2011pre}. To go beyond these broad-band inferred stellar mass-to-light ratios, one can also use the same (IFU) spectra to determine line indices from which more precise ages, metallicities and mass-to-light ratios of the ETG stellar population can be obtained \citep[e.g.][]{Trager2000a}.
Whereas this information is going to be used in forthcoming publications, here we concentrate on describing the data for kinematic purposes. In this paper, we present the full VLT/VIMOS-IFU data of our sample of 17 SLACS early-type lens galaxies, collected for these purposes. The sample is presented in Sect.~\ref{sec:Sample}. We describe the integral-field spectroscopic observations using VIMOS on the VLT in Sect.~\ref{ssec:obs:VIMOS} and summarize the HST observations as needed for this project in Sect.~\ref{ssec:obs:HST}. The data reduction of the IFU data is described in some detail in Sect.~\ref{sec:data_reduction}, and the kinematic analysis, i.e.~the derivation of two-dimensional maps of line-of-sight velocity and velocity dispersion, in Sect.~\ref{sec:kinematic_analysis}. In Sect.~\ref{sec:other_sources}, we provide a list of additional objects that are visible in the VIMOS/IFU field of view around the lens galaxies. Finally, we conclude in Sect.~\ref{sec:conclusions}. Appendix~\ref{sec:noise_estimation} gives a recipe for estimating the expected photon noise on the reduced spectra based on a simple model. The fully self-consistent combined lensing/dynamics analysis of the data described in this paper is presented in the companion paper by \citet{Barnabe2011}. \section{Sample} \label{sec:Sample} The sample described in this paper consists of seventeen early-type galaxies from the Sloan Digital Sky Survey that have been confirmed as secure gravitational-lens systems in the SLACS survey.\footnote{In fact, one system, J1250B, was downgraded to grade \textit{B} (``possible lens'') in \citet{Bolton2008a}.} The SLACS survey searches for gravitational lens systems in the spectroscopic data of a subsample of SDSS galaxies comprising the luminous red galaxy sample \citep{Eisenstein2001short} and passive members of the MAIN sample \citep{Strauss2002short}, defined as having rest-frame H$\alpha$ equivalent widths less than $1.5$\,\AA.
The presence of emission lines at a redshift higher than that of the target galaxy in the SDSS spectra indicates that there is a background galaxy within the fibre, which may have been lensed. Candidates were selected for follow-up snapshot imaging with HST, allowing in most cases a robust confirmation or rejection of the lensing hypothesis. See \citet{Bolton2006} for details on the selection procedure. \citet{Treu2006} conducted a number of tests to verify that the SLACS lens sample is statistically indistinguishable from the parent samples. Results on the structure of the early-type lens galaxies can therefore be taken as representative for the population of (massive) early-type galaxies as a whole. In order to be observable from Paranal, lens systems close to the equator ($\delta < 15\degr$) were selected from the (mostly northern) SLACS catalogue. The final sample of 17 systems chosen for VIMOS/IFU follow-up is evenly distributed in right ascension. The location of the 84 early-type SLACS galaxies studied by \citet{Auger2009} in the parameter space of redshift and velocity dispersion is shown in Fig.~\ref{fig:Sample_comparison}. The seventeen galaxies from the VIMOS/IFU subsample are marked; they are representative of the full SLACS early-type galaxy sample. Fig.~\ref{fig:Sample_comparison} also shows the 48~local early-type galaxies studied by the SAURON team \citep{Emsellem2004}, as well as 17~early-type galaxies in the Coma cluster studied by \citet{Thomas2007}. The VIMOS/IFU sample represents a major step out to cosmological distances compared to the SAURON and Coma samples. In terms of velocity dispersion, it overlaps with the more massive half of the SAURON sample and thus lends itself to comparison of the structure of early-type galaxies at $z\ga0.1$ to their local counterparts. \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{F01-Sample-comparison}} \caption{Distribution of the SLACS/IFU sample in redshift and velocity dispersion. 
The velocity dispersion measurements are taken from this paper, the error bars are statistical errors. For comparison, the SLACS early-type sample of \citet[grey points]{Auger2009}, the SAURON sample at redshift $z\approx 0$ \citep[triangles:][]{Emsellem2004} and a sample of Coma cluster early-type galaxies \citep[stars:][]{Mehlert2000, Corsini2008} are shown.} \label{fig:Sample_comparison} \end{figure} \begin{table*} \small \centering \caption{The VIMOS/IFU sample. Apparent brightness $m_{V}$, effective radius $R_{\mathrm{eff},V}$ and Einstein radius $\theta_{\mathrm{Einst}}$ are from \citet{Auger2009}. Redshifts and SDSS velocity dispersion are from \citet{Bolton2008a}. VIMOS velocity dispersions were measured from aperture-integrated spectra from the IFU data; errors include only the effect of noise, not of template mismatch (5--10\,\%, Sect.~\ref{ssec:kinematic:method}). The last two columns specify the grism used for each object and the number of observing blocks spent on each target.} \begin{tabular}{lcccccccr@{$\pm$}lclr} \hline\hline Galaxy & $\alpha_\mathrm{J2000}$ & $\delta_\mathrm{J2000}$ & $m_{V}$ & $R_{\mathrm{eff},V}$ & $\theta_\mathrm{Einst}$ & $z_\mathrm{lens}$ & $z_\mathrm{source}$ & \multicolumn{2}{c}{$\sigma_\mathrm{SDSS,B08}$} & $\sigma_{\mathrm{VIMOS}}$ & Grism & OBs \\ & & & & \multicolumn{1}{c}{(arcsec)} & \multicolumn{1}{c}{(arcsec)} & & & \multicolumn{2}{c}{($\mathrm{km\,s^{-1}}$)} & ($\mathrm{km\,s^{-1}}$) & & \\ \hline SDSS\,J0037 & $00$:$37$:$53.21$ & $-09$:$42$:$20.1$ & $16.90$ & $2.68$ & $1.53$ & $0.1955$ & $0.6322$ & $279$ & $10$ & $245.3^{+ 6.9}_{- 7.2}$ & HR\_Blue & 9 \\[0.75ex] SDSS\,J0216 & $02$:$16$:$52.54$ & $-08$:$13$:$45.3$ & $18.36$ & $2.97$ & $1.16$ & $0.3317$ & $0.5235$ & $333$ & $23$ & $340.7^{+ 7.8}_{- 7.7}$ & HR\_Orange & 14 \\[0.75ex] SDSS\,J0912 & $09$:$12$:$05.31$ & $+00$:$29$:$01.2$ & $16.56$ & $4.29$ & $1.63$ & $0.1642$ & $0.3239$ & $326$ & $12$ & $306.5^{+10.9}_{-11.4}$ & HR\_Blue & 4 \\[0.75ex] SDSS\,J0935 & 
$09$:$35$:$43.93$ & $-00$:$03$:$34.8$ & $17.71$ & $4.12$ & $0.87$ & $0.3475$ & $0.4670$ & $396$ & $35$ & $330.4^{+ 9.0}_{- 8.5}$ & HR\_Orange & 12 \\[0.75ex] SDSS\,J0959 & $09$:$59$:$44.07$ & $+04$:$10$:$17.0$ & $17.94$ & $1.51$ & $0.99$ & $0.1260$ & $0.5350$ & $197$ & $13$ & $244.2^{+15.2}_{-14.7}$ & HR\_Orange & 4 \\[0.75ex] SDSS\,J1204 & $12$:$04$:$44.07$ & $+03$:$58$:$06.4$ & $17.45$ & $1.65$ & $1.31$ & $0.1644$ & $0.6307$ & $267$ & $17$ & $240.8^{+ 9.3}_{- 9.5}$ & HR\_Orange & 5 \\[0.75ex] SDSS\,J1250A & $12$:$50$:$28.26$ & $+05$:$23$:$49.1$ & $17.77$ & $1.91$ & $1.13$ & $0.2318$ & $0.7953$ & $252$ & $14$ & $219.9^{+ 5.7}_{- 5.4}$ & HR\_Orange & 6 \\[0.75ex] SDSS\,J1250B & $12$:$50$:$50.52$ & $-01$:$35$:$31.7$ & $15.68$ & $4.01$ & -- & $0.0871$ & $0.3529$ & $246$ & $ 9$ & $274.4^{+ 6.9}_{- 6.7}$ & HR\_Orange & 6 \\[0.75ex] SDSS\,J1251 & $12$:$51$:$35.71$ & $-02$:$08$:$05.2$ & $17.71$ & $5.34$ & $0.84$ & $0.2243$ & $0.7843$ & $233$ & $23$ & $201.5^{+ 8.1}_{- 8.7}$ & HR\_Orange & 12 \\[0.75ex] SDSS\,J1330 & $13$:$30$:$45.53$ & $-01$:$48$:$41.6$ & $17.56$ & $1.36$ & $0.87$ & $0.0808$ & $0.7115$ & $185$ & $ 9$ & $191.8^{+ 7.8}_{- 7.5}$ & HR\_Orange & 3 \\[0.75ex] SDSS\,J1443 & $14$:$43$:$19.62$ & $+03$:$04$:$08.2$ & $17.62$ & $1.38$ & $0.81$ & $0.1338$ & $0.4187$ & $209$ & $11$ & $226.8^{+ 9.7}_{- 9.2}$ & HR\_Orange & 4 \\[0.75ex] SDSS\,J1451 & $14$:$51$:$28.19$ & $-02$:$39$:$36.4$ & $16.92$ & $2.64$ & $1.04$ & $0.1254$ & $0.5203$ & $223$ & $14$ & $203.9^{+ 5.4}_{- 5.3}$ & HR\_Orange & 7 \\[0.75ex] SDSS\,J1627 & $16$:$27$:$46.45$ & $-00$:$53$:$57.6$ & $17.87$ & $2.05$ & $1.23$ & $0.2076$ & $0.5241$ & $290$ & $14$ & $272.6^{+ 7.8}_{- 8.9}$ & HR\_Orange & 11 \\[0.75ex] SDSS\,J2238 & $22$:$38$:$40.20$ & $-07$:$54$:$56.0$ & $17.18$ & $2.41$ & $1.27$ & $0.1371$ & $0.7126$ & $198$ & $11$ & $226.8^{+ 6.2}_{- 6.2}$ & HR\_Orange & 6 \\[0.75ex] SDSS\,J2300 & $23$:$00$:$53.15$ & $+00$:$22$:$38.0$ & $18.19$ & $1.93$ & $1.24$ & $0.2285$ & $0.4635$ & $279$ & $17$ & $250.4^{+ 
6.4}_{- 7.0}$ & HR\_Orange & 10 \\[0.75ex] SDSS\,J2303 & $23$:$03$:$21.72$ & $+14$:$22$:$17.9$ & $16.77$ & $3.54$ & $1.62$ & $0.1553$ & $0.5170$ & $255$ & $16$ & $301.5^{+ 7.1}_{- 7.1}$ & HR\_Orange & 11 \\[0.75ex] SDSS\,J2321 & $23$:$21$:$20.93$ & $-09$:$39$:$10.3$ & $15.27$ & $4.79$ & $1.60$ & $0.0819$ & $0.5324$ & $249$ & $ 8$ & $248.1^{+ 4.4}_{- 4.4}$ & HR\_Blue & 5 \\[0.75ex] \hline \end{tabular} \label{tab:sample} \end{table*} \section{Observations} \label{sec:Observations} \subsection{Integral-field spectroscopy} \label{ssec:obs:VIMOS} We have obtained integral-field spectroscopic observations of 17~systems using the integral-field unit (IFU) of VIMOS \citep{LeFevre2001short}. VIMOS is a wide-field imager, multi-object and integral-field spectrograph mounted at the Nasmyth focus~B of the Very Large Telescope UT3 (Melipal) on Paranal, Chile. The integral-field unit samples the focal plane with a total of 6400~lenslets, each of which is coupled to an optical fibre. The opposite ends of the fibres are arranged to form pseudo-slits of 400 fibres each. The light emerging from the pseudo-slit is dispersed in the usual way using grisms and optionally order-separating filters, and the resulting spectra are recorded on four $2\mathrm{k}\times4\mathrm{k}$ EEV CCD chips. In low-resolution mode, the spectra from four pseudo-slits are stacked in the dispersion direction on each of the four CCDs. The arrangement is such that each CCD records the spectra from one quadrant of the field of the IFU head. In high-resolution mode, only the spectra from one pseudo-slit fit onto a CCD and only the central 1600 elements of the IFU head are used. The spatial scale in the focal plane can be chosen to be $0.67\,\mathrm{arcsec}$ or $0.33\,\mathrm{arcsec}$ per spectral element. We used the large spatial scale and high-resolution mode and thus work with a field of view of $27\arcsec\times27\arcsec$ covered by $40\times40$ spectra. 
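As a consistency check of the quoted geometry, the high-resolution mode numbers fit together: four pseudo-slits of 400 fibres give 1600 elements, arranged $40\times40$, and at $0.67\,\mathrm{arcsec}$ per element the field is $26.8\,\mathrm{arcsec}\approx27\,\mathrm{arcsec}$ on a side. In code form (a trivial arithmetic sketch):

```python
# Consistency check of the quoted VIMOS/IFU high-resolution-mode geometry.
fibres_per_pseudo_slit = 400
ccds = 4                       # one pseudo-slit per CCD in HR mode
elements = fibres_per_pseudo_slit * ccds
side = int(elements ** 0.5)    # square spatial layout of the IFU head
fov = side * 0.67              # arcsec, large spatial scale
print(elements, side, round(fov, 1))  # 1600 elements, 40 x 40, ~26.8"
```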
Three of the 17 systems of the sample were observed as ESO Programme 075.B-0226 (henceforth the ``pilot programme'') between June 2005 and January 2006. The remaining fourteen systems were the targets of an ESO Large Programme, 177.B-0682 (the ``main programme''), and were observed between April 2006 and March 2007. For the pilot programme, the high-resolution blue grism (HR-Blue) was used, which has a nominal resolution of $R=2550$ and covers the wavelength range $4000\,$\AA\ to 6200\,\AA; for the main programme the HR-Orange grism was used, which covers a somewhat redder wavelength range between 5000\,\AA\ and 7000\,\AA\ at comparable resolution ($R=2650$). All observations were carried out in service mode. For this, the total observing time was broken down into observing blocks (OBs), each with an execution time of one hour and comprising science and calibration exposures. For the pilot programme, three dithered science exposures of integration time 555~seconds on one target were taken during an OB. For the main programme, a single exposure per OB was taken, with integration time 2060~seconds. The main observational constraint for scheduling the OBs for this project was a maximum seeing of $0.8\,\mathrm{arcsec}$ FWHM (full width at half maximum). We followed ESO's standard calibration plan which comprises three flat-field exposures and one arc-lamp exposure per OB, taken immediately after the science exposures. Bias exposures were taken during day time in blocks of five exposures. Standard stars are observed at varying intervals by ESO staff and were therefore in most cases not available for the same nights that our science data were taken. They are useful for relative flux calibration but not for absolute spectro-photometric calibration. \subsection{HST imaging} \label{ssec:obs:HST} Modelling stellar dynamics and the lens effect requires photometric information on both the lens galaxy and the lensed background source.
We use various HST observations obtained for the SLACS project \citep[see][]{Bolton2008a, Auger2009}. Lens modelling aims at reconstructing the two-dimensional surface-brightness distribution of the background source. This is obtained from high-resolution ACS images from which elliptical B-spline models of the lens galaxies were subtracted \citep[see][for details]{Bolton2008a}. For eight systems, we have deep F814W images from Programmes 10494 and 10798 (Cycle 14, PI: L.~Koopmans) consisting of four exposures. For the remaining systems only single-exposure images from snapshot Programme 10174 (Cycle 13, PI: L.~Koopmans) are available. The pixel scale of these images is $0.05\,\mathrm{arcsec}$. The surface-brightness distributions of the lens galaxies are obtained from NICMOS F160W images from Programmes 10494, 10798 and 11202 (Cycle 16, PI: Koopmans). The full-resolution images have a pixel scale of $0.05\,\mathrm{arcsec}$ and include four exposures for a total exposure time of $2560\,\mathrm{s}$. In order to have proper flux weights for the kinematic maps we resample the NICMOS images to the VIMOS/IFU grid with $0.67\,\mathrm{arcsec}$ per spatial pixel after convolution to match the point spread function (PSF) of the VIMOS data, taken to be Gaussian with FWHM of $0.8\,\mathrm{arcsec}$. Three systems (J1251, J1627 and J2321) were not observed with NICMOS. For these systems we use the B-spline models to the F814W ACS images to estimate the surface brightness of the lens galaxy. We add random noise according to the ACS weight maps to mimic observational data. All images were astrometrically registered using the measured position of the lens galaxy and in some cases additional objects visible in the VIMOS field of view. The F814W images of the systems in the sample are shown in Fig.~\ref{fig:HST_Images}.
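The PSF-matching step amounts to convolving with a Gaussian whose width is the quadrature difference of the target and native widths, since Gaussian widths add in quadrature under convolution. A sketch (the native HST PSF FWHM of $0.15\,\mathrm{arcsec}$ is an assumed illustrative value, not taken from the paper):

```python
import math

# FWHM = 2*sqrt(2*ln 2) * sigma for a Gaussian profile.
FWHM_TO_SIGMA = 1.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))

def matching_kernel_fwhm(fwhm_target, fwhm_native):
    """FWHM of the Gaussian kernel that degrades a Gaussian PSF of
    fwhm_native to fwhm_target (widths add in quadrature)."""
    if fwhm_target <= fwhm_native:
        raise ValueError("target PSF must be broader than the native PSF")
    return math.sqrt(fwhm_target ** 2 - fwhm_native ** 2)

# Match an assumed 0.15" native PSF to the 0.8" VIMOS seeing disc.
k = matching_kernel_fwhm(0.8, 0.15)
print(round(k, 3), round(k * FWHM_TO_SIGMA, 3))  # kernel FWHM and sigma, arcsec
```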
\begin{figure*} \centering \resizebox{0.22\hsize}{!}{\includegraphics[clip]{F02-J0037-F814}}\hfill \resizebox{0.22\hsize}{!}{\includegraphics[clip]{F02-J0216-F814}} \hfill \resizebox{0.22\hsize}{!}{\includegraphics[clip]{F02-J0912-F814}}\hfill \resizebox{0.22\hsize}{!}{\includegraphics[clip]{F02-J0935-F814}}\\[2ex] \resizebox{0.22\hsize}{!}{\includegraphics[clip]{F02-J0959-F814}}\hfill \resizebox{0.22\hsize}{!}{\includegraphics[clip]{F02-J1204-F814}}\hfill \resizebox{0.22\hsize}{!}{\includegraphics[clip]{F02-J1250A-F814}}\hfill \resizebox{0.22\hsize}{!}{\includegraphics[clip]{F02-J1250B-F814}}\\[2ex] \resizebox{0.22\hsize}{!}{\includegraphics[clip]{F02-J1251-F814}}\hfill \resizebox{0.22\hsize}{!}{\includegraphics[clip]{F02-J1330-F814}}\hfill \resizebox{0.22\hsize}{!}{\includegraphics[clip]{F02-J1443-F814}}\hfill \resizebox{0.22\hsize}{!}{\includegraphics[clip]{F02-J1451-F814}}\\[2ex] \resizebox{0.22\hsize}{!}{\includegraphics[clip]{F02-J1627-F814}}\hfill \resizebox{0.22\hsize}{!}{\includegraphics[clip]{F02-J2238-F814}}\hfill \resizebox{0.22\hsize}{!}{\includegraphics[clip]{F02-J2300-F814}}\hfill \resizebox{0.22\hsize}{!}{\includegraphics[clip]{F02-J2303-F814}}\\[2ex] \resizebox{0.22\hsize}{!}{\includegraphics[clip]{F02-J2321-F814}} \caption{HST/F814W images of the systems in the sample. In images obtained in a snapshot programme cosmic ray hits were interpolated over for display purposes.} \label{fig:HST_Images} \end{figure*} \section{Data reduction} \label{sec:data_reduction} The VLT VIMOS/IFU data were reduced using the \texttt{vipgi} package which was developed within the framework of the VIRMOS consortium and the VVDS project. \texttt{vipgi}\footnote{http://cosmos.iasf-milano.inaf.it/pandora/vipgi.html} has been described in detail by \citet{Scodeggio2005short} and \citet{Zanichelli2005short}. 
In \citet{Czoske2008}, we have already given an overview of the data reduction procedure as applied to our data set, so that we restrict ourselves here to giving the main parameters. The wavelength calibration is based on about 20 helium and neon lines spread over the wavelength range. Fitting a third-order polynomial results in residuals with root-mean-square of $\sim 0.075\,$\AA, corresponding to $\sim 5\,\mathrm{km\,s^{-1}}$, significantly smaller than the spectral resolution and negligible in our analysis. After rectification of the two-dimensional spectra onto a linear wavelength grid with a dispersion of $0.644$\,\AA\ per pixel, one-dimensional spectra are extracted for each fibre in an optimal way using the algorithm of \citet{Horne1986}. We refer to the resulting collection of one-dimensional spectra as a ``data cube'' even though the data structure is, strictly speaking, not a three-dimensional cube. Spectra are labeled by the spatial coordinates $L$ and $M$ (or world coordinates $\alpha$ and $\delta$). A relative flux calibration (giving correct flux as a function of wavelength) is done using the standard observation nearest in time to the execution of a given science observing block. Since this does not necessarily come from the same night as the science exposures and no attempt is made to observe them at the same airmass, the standard observations can only provide a relative correction but no absolute spectro-photometric calibration. The varying transmissivity of the fibres is corrected by measuring and comparing the flux under a sky line in the different extracted spectra. We used [\ion{O}{i}]\,$\lambda 5577.4$ for this purpose, integrating over a range of seven pixels ($4.2~$\AA) and subtracting an estimate of the underlying continuum from a window of five pixels on one side of the line. The observed galaxies are smaller than the field of view of VIMOS/IFU, leaving a sufficient number of fibres pointing at blank sky to permit good sky subtraction.
These sky fibres are identified automatically from a histogram of the total fluxes recorded in each fibre, integrated over the entire wavelength range. The sky fibres are then grouped according to the shape of the [\ion{O}{i}]\,$\lambda 5577$ line, a mean over the sky spectra in each group is computed and subtracted from each row according to the group it is judged to belong to. The sky subtraction works well even redwards of 6200\,\AA, where the sky is dominated by the OH molecular bands. The data cubes from all the exposures on a field are finally combined into a final data cube by taking the median. Telescope offsets between the exposures are corrected through spatial shifting by an integer number of fibres. Subpixel shifts could in principle be corrected for by interpolation between adjacent fibres, but in practice it turns out that the centroids of the galaxies in image reconstructions of the individual exposures are hard to measure to subpixel accuracy, given the level of accuracy of the fibre transmission correction and the resulting noise in the image reconstructions. A shortcoming of \texttt{vipgi} is that it does not provide noise estimates on the reduced spectra. Since proper weighting is desirable for the determination of kinematic and stellar population parameters, we reconstruct noise spectra from a simple model including photon noise and read-out noise. We require noise spectra for each spectral element in the data cube and for the aperture-summed spectra. It turns out that at each step of the reduction procedure, the noise estimate can be written as a rescaled version of the current intermediate data product, including the sky background. The necessary noise spectra can thus easily be produced by creating and rescaling a second data set, reduced in the normal manner but with sky subtraction turned off.
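Two of the reduction steps described above are simple enough to sketch directly: the relative fibre-transmission correction from the flux under the [\ion{O}{i}]\,$\lambda 5577$ sky line (seven-pixel line window, five-pixel one-sided continuum window), and the median combination of dithered exposures under integer fibre offsets. Data layout and function names are illustrative, not taken from \texttt{vipgi}:

```python
import statistics

def skyline_flux(spectrum, line_idx, half_width=3, cont_width=5):
    """Flux under a sky line: sum over 2*half_width+1 pixels centred on
    the line, minus a continuum level estimated from a window of
    cont_width pixels on one side of the line."""
    line = spectrum[line_idx - half_width : line_idx + half_width + 1]
    cont = spectrum[line_idx - half_width - cont_width : line_idx - half_width]
    return sum(line) - (sum(cont) / len(cont)) * len(line)

def transmission_factors(spectra, line_idx):
    """Relative fibre throughputs, normalised to the mean sky-line flux;
    dividing each spectrum by its factor equalises the transmissivities."""
    fluxes = [skyline_flux(s, line_idx) for s in spectra]
    mean = sum(fluxes) / len(fluxes)
    return [f / mean for f in fluxes]

def combine_cubes(cubes, offsets):
    """Median-combine dithered exposures after shifting each cube by an
    integer number of fibres; cubes map (L, M) -> value (a scalar here
    for brevity, a full spectrum in the real data)."""
    shifted = [{(L - dL, M - dM): v for (L, M), v in cube.items()}
               for cube, (dL, dM) in zip(cubes, offsets)]
    common = set(shifted[0]).intersection(*shifted[1:])
    return {pos: statistics.median(s[pos] for s in shifted) for pos in common}
```

Only the spaxels common to all shifted exposures survive the combination, mirroring the integer-fibre shift-and-median described in the text.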
A detailed derivation of the noise estimation procedure is given in Appendix~\ref{sec:noise_estimation}, which can also be read as a walk-through of the data reduction process. Fig.~\ref{fig:spectra} shows integrated spectra obtained by summing the one-dimensional spectra from fibres within elliptical apertures following the shape and size of each lens galaxy. Typical diameters of these apertures are around $4$~arcsec, i.e.~a little larger than the 3~arcsec circular aperture used by the SDSS. \begin{figure*} \centering \resizebox{0.495\hsize}{!}{\includegraphics{F03-J0037-spec}} \hfill\resizebox{0.495\hsize}{!}{\includegraphics{F03-J1250B-spec}}\\[-2ex] \resizebox{0.495\hsize}{!}{\includegraphics{F03-J0912-spec}} \hfill\resizebox{0.495\hsize}{!}{\includegraphics{F03-J1251-spec}} \\[-2ex] \resizebox{0.495\hsize}{!}{\includegraphics{F03-J2321-spec}} \hfill\resizebox{0.495\hsize}{!}{\includegraphics{F03-J1330-spec}} \\[-2ex] \ \hfill\resizebox{0.495\hsize}{!}{\includegraphics{F03-J1443-spec}} \\[-2ex] \resizebox{0.495\hsize}{!}{\includegraphics{F03-J0216-spec}} \hfill\resizebox{0.495\hsize}{!}{\includegraphics{F03-J1451-spec}} \\[-2ex] \resizebox{0.495\hsize}{!}{\includegraphics{F03-J0935-spec}} \hfill\resizebox{0.495\hsize}{!}{\includegraphics{F03-J1627-spec}} \\[-2ex] \resizebox{0.495\hsize}{!}{\includegraphics{F03-J0959-spec}} \hfill\resizebox{0.495\hsize}{!}{\includegraphics{F03-J2238-spec}} \\[-2ex] \resizebox{0.495\hsize}{!}{\includegraphics{F03-J1204-spec}} \hfill\resizebox{0.495\hsize}{!}{\includegraphics{F03-J2300-spec}} \\[-2ex] \resizebox{0.495\hsize}{!}{\includegraphics{F03-J1250A-spec}}\hfill\resizebox{0.495\hsize}{!}{\includegraphics{F03-J2303-spec}} \caption{Global spectra obtained by adding spectra within elliptical apertures with diameters between 5 and 14 spaxels, matched to the surface brightness of the objects. Wavelengths are in \AA. 
The three spectra top left are from the pilot programme and cover a different wavelength range than the spectra from the large programme. Emission lines from the sources are marked. The narrow residuals of sky lines at 5577\,\AA, 5890\,\AA\ and 6300\,\AA\ have been interpolated over for display purposes.} \label{fig:spectra} \end{figure*} The spectra are typical for early-type galaxies, with strong metal absorption lines, in particular \ion{Ca}{ii} H~and K, the G~band and the Mg\,b band. With the exception of J2321 \citep{Czoske2008}, no significant emission lines from the lens galaxies are detected. This is true even for those galaxies that show clear disc and spiral structure in the HST images. Most of the spectra clearly show [\ion{O}{ii}]\,$\lambda 3727$ emission from the lensed source, although in some cases this line falls outside the wavelength range of the VIMOS spectra. In some cases, in particular J1250B and J1451, several Balmer emission lines from the source can be seen. These become more prominent in the residuals of the kinematic fits, described in Sect.~\ref{sec:kinematic_analysis}.
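The summation within elliptical apertures used for Fig.~\ref{fig:spectra} can be sketched as follows (a sketch only: the spaxel grid, axis lengths and position angle here are hypothetical, whereas the real apertures were matched to each galaxy's surface brightness):

```python
import numpy as np

def elliptical_aperture_sum(cube, x0, y0, a, b, pa_deg=0.0):
    """Sum the spectra of all spaxels lying inside an ellipse.

    cube   : array (ny, nx, nlam) of one-dimensional spectra per fibre
    x0, y0 : aperture centre in spaxel coordinates
    a, b   : semi-major/semi-minor axes in spaxels
    pa_deg : position angle of the major axis, measured from the +x axis
    """
    ny, nx, _ = cube.shape
    y, x = np.mgrid[0:ny, 0:nx]
    pa = np.deg2rad(pa_deg)
    dx, dy = x - x0, y - y0
    # rotate into the frame aligned with the ellipse axes
    u = dx * np.cos(pa) + dy * np.sin(pa)
    v = -dx * np.sin(pa) + dy * np.cos(pa)
    inside = (u / a) ** 2 + (v / b) ** 2 <= 1.0
    return cube[inside].sum(axis=0)

# toy cube of unit spectra, circular aperture of radius 2 spaxels
cube = np.ones((9, 9, 5))
aperture_spectrum = elliptical_aperture_sum(cube, 4, 4, 2.0, 2.0)
```

For the toy cube above the aperture contains the 13 spaxels whose centres fall within two spaxels of the centre, so each element of the summed spectrum equals 13.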
\section{Kinematic analysis} \label{sec:kinematic_analysis} \subsection{Method} \label{ssec:kinematic:method} \begin{table} \centering \caption{Stellar templates used for kinematic analysis of the VIMOS/IFU sample} \begin{tabular}{lll} \hline\hline Galaxy & Template star & Spectral type \\ \hline J\,0037 & HD\,249 & K1\,IV \\ J\,0216 & HD\,195506 & K2\,III \\ J\,0912 & HD\,121146 & K3\,IV \\ J\,0935 & HD\,195506 & K2\,III \\ J\,0959 & HD\,102328 & K3\,III \\ J\,1204 & HD\,221148 & K3\,III \\ J\,1250A & HD\,145328 & K1\,III \\ J\,1250B & HD\,216640 & K1\,III \\ J\,1251 & HD\,145328 & K1\,III \\ J\,1330 & HD\,25975 & K1\,III \\ J\,1443 & HD\,85503 & K2\,III \\ J\,1451 & HD\,114092 & K4\,III \\ J\,1627 & HD\,195506 & K2\,III \\ J\,2238 & HD\,195506 & K2\,III \\ J\,2300 & HD\,195506 & K2\,III \\ J\,2303 & HD\,121146 & K3\,IV \\ J\,2321 & HD\,77818 & K1\,IV \\ \hline \end{tabular} \label{tab:templates:IndoUS} \end{table} Many methods to determine the line-of-sight velocity distribution (LOSVD) of early-type galaxies from spectra have been proposed in the past \citep[e.g.][]{Rix-White1992, vanderMarel-Franx1993, Cappellari-Emsellem2004}. We use the conceptually simplest method, template-fitting in pixel space. Our method is implemented as a package (\textsc{slacR}) in the statistical language \textsc{R} \citep{R}\footnote{\texttt{http://www.r-project.org}. The \textsc{slacR} package can be obtained from the first author on request.}, following descriptions from several authors, in particular \citet{vanderMarel-Franx1993} and \citet{Kelson2000}. Compared to the version used in \citet{Czoske2008}, we have modified and extended our method sufficiently to warrant a full description of the method here. 
The model parameters $\bmath \theta$ are determined by minimizing the merit function \begin{equation} \label{eq:merit_function} S = \sum_{i} \frac{1}{\sigma_{i}^{2}} \left(s_{i} - \hat{s}(\lambda_{i}, {\bmath \theta})\right)^{2}\,, \end{equation} where the sum extends over all ``good'' pixels $\lambda_{i}$ in the observed spectrum $s_{i}$. The observed spectrum is transformed to the rest frame of the respective lens galaxy prior to the analysis by dividing observed wavelengths by a factor $1+z_{\mathrm{lens}}$. The effective resolution of the spectrum is increased to $R(1+z_{\mathrm{lens}})$ compared to the nominal resolution $R$ of the spectrograph (Sect.~\ref{ssec:obs:VIMOS}). The data-model $\hat{s}$ (in vector-matrix notation) is given by the equation \begin{equation} \label{eq:kinematic_model} \hat{s}_{i} = [ \bmath G *\bmath t ](\lambda_{i})\, p^{(m)}(\lambda_{i}) + q^{(n)}(\lambda_{i}) + \epsilon_{i}\,. \end{equation} Here, the vector $ \bmath G$ is the LOSVD, taken to be a Gaussian in velocity, with kinematic parameters $v$ (streaming motion) and $\sigma_{v}$ (velocity dispersion). The vector $\bmath t$ is a stellar template spectrum, taken from the Indo-US library of stellar spectra \citep{Valdes2004}. Templates were chosen by fitting a random sample of IndoUS spectra to the aperture-integrated VIMOS/IFU spectra and selecting one of the best-fitting (in the least-squares sense) template candidates. We did not always choose the template giving the absolute minimum $\chi^{2}$: It was noticed that HD\,195506 appeared among the best-fitting templates for several systems and was therefore chosen for all of these. Table~\ref{tab:templates:IndoUS} lists the templates used for the kinematic fits, as well as their spectral type and luminosity class. As expected, the selected template stars are predominantly late-type giants. The same template was used for fitting all the individual fibre spectra for each system. 
The effect of template mismatch on the derived kinematic parameters will be discussed below. \begin{figure} \centering \resizebox{\hsize}{!}{\includegraphics{F04-J0935_global_fit}}\\ \vspace*{2ex} \resizebox{\hsize}{!}{\includegraphics{F04-J0935-21-25-SN23}}\\ \vspace*{2ex} \resizebox{\hsize}{!}{\includegraphics{F04-J0935-20-26-SN12}} \caption{Kinematic fitting of spectra of system J0935. Top: global spectrum summed in an aperture of radius 2.3\,arcsec, $S/N \approx 45$ per wavelength bin. Centre: single spaxel with $S/N\approx 23$ per bin. Bottom: single spaxel with $S/N\approx 12$ per bin. The top panel of each pair shows the spectrum and the template fit in red. The bottom panel shows the residuals with the expected noise level in green (Appendix~\ref{sec:noise_estimation}). The grey areas mark regions that were excluded from the fits. Wavelengths are given in the rest frame of the lens ($z=0.3475$). The [\ion{O}{ii}] line from the source can be seen at 4057\,\AA\ in these plots.} \label{fig:kinematic_fits} \end{figure} For the kinematic fit, the template is first brought to the effective instrumental resolution of the observed spectrum by convolving it with a kernel that is a Gaussian in wavelength. The convolution with the LOSVD $\bmath G$ is performed in $\log \lambda$ (equivalently: velocity), and the result is then resampled to the same wavelength grid as the data $\bmath s$, thus avoiding having to resample the data beyond what has been done during the data reduction. The functions $p^{(m)}(\lambda)$ and $q^{(n)}(\lambda)$ are multiplicative and additive correction polynomials of order $m$ and $n$, respectively. These polynomials are needed to correct any low-frequency differences between the galaxy and template spectra due to insufficient flux calibration, contamination by the continuum of the lensed background galaxy and other effects which are not related to the kinematics of the lens galaxy.
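The forward model described above (LOSVD convolution on a grid uniform in $\log \lambda$, followed by a multiplicative correction polynomial) can be sketched as follows; the Legendre basis and the five-sigma kernel truncation are illustrative choices rather than the actual \textsc{slacR} implementation, and the additive polynomial is omitted for brevity:

```python
import numpy as np
from numpy.polynomial import legendre

C_KMS = 299792.458  # speed of light in km/s

def losvd_model(template, dloglam, v, sigma_v, mult_coeffs):
    """Convolve a template with a Gaussian LOSVD and apply a
    multiplicative Legendre correction polynomial.

    template    : template flux on a grid uniform in log10(lambda)
    dloglam     : log10(lambda) step of that grid
    v, sigma_v  : streaming velocity and velocity dispersion in km/s
    mult_coeffs : coefficients of the multiplicative polynomial
    """
    # constant velocity step per pixel on the log-lambda grid
    dv = dloglam * np.log(10.0) * C_KMS
    n = int(np.ceil(5 * sigma_v / dv))        # truncate kernel at 5 sigma
    x = np.arange(-n, n + 1) * dv
    kern = np.exp(-0.5 * ((x - v) / sigma_v) ** 2)
    kern /= kern.sum()                        # conserve flux
    conv = np.convolve(template, kern, mode="same")
    # low-order multiplicative correction over the fitted range
    u = np.linspace(-1.0, 1.0, template.size)
    return conv * legendre.legval(u, mult_coeffs)

# a flat template should stay flat in the interior of the grid
model = losvd_model(np.ones(200), 1e-4, 0.0, 200.0, [1.0])
```

In a full fit, the kinematic parameters $v$ and $\sigma_v$ would be varied by a non-linear optimizer while the polynomial coefficients are solved for linearly at each step, as described in the text.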
In contrast to \citet{Czoske2008}, where only a short wavelength range around a single spectral feature was used and a linear correction function was sufficient, we here use the full wavelength range. The coefficients are determined by a linear fit nested within the non-linear optimization for the kinematic parameters $v$ and $\sigma_{v}$. The orders of the polynomials are determined by inspecting large-scale systematics in the residuals from the fits; we obtain satisfactory corrections for $m=4$ and $n=6$. A number of features in the spectra were excluded from the kinematic fits. Balmer lines, where present in the lens spectrum, are due to younger stellar populations that are not adequately described by the late-type template stars. The strength of the Mg\,b band and the Na\,D lines is enhanced compared to that in the Galactic stars used as templates \citep{Barth2002}. Also masked were night-sky emission lines and atmospheric absorption features, as well as emission lines from the lensed background sources. Finally, a 3-$\sigma$ clipping algorithm (three iterations) was applied to detect possible outliers. The noise $\bmath \epsilon$ is assumed to be normally distributed with mean zero and wavelength-dependent dispersion $\bmath \sigma_{\epsilon}$. Since \textsc{vipgi} does not produce noise estimates that take the details of the reduction procedure into account, we have to estimate the noise spectra after the fact, using a model including readout noise and photon noise as described in Sect.~\ref{sec:data_reduction} and Appendix~\ref{sec:noise_estimation}. The average signal-to-noise ratio used to characterize a spectrum was determined by dividing the spectrum by its noise spectrum and taking the median. Signal-to-noise ratios are thus given per wavelength pixel of width 0.644\,\AA. The noise was propagated to error estimates on the kinematic parameters using a Monte-Carlo method.
Gaussian noise was added to the best-fit model with wavelength-dependent dispersion given by the corresponding noise spectrum and the resulting spectrum was fitted in the same way as the original spectrum. The quoted errors on the parameters of the original spectrum are the 16\% and 84\% percentiles of the kinematic parameters $v$ and $\sigma_{v}$ obtained from 300 such realizations. The errors due to template mismatch can be estimated from the distribution of best-fit values determined with 755 stellar spectra of widely varying stellar type. Reasonable candidate templates are defined for this purpose as those giving $\chi^{2} < \chi^{2}_{\mathrm{min}} + 1$, where $\chi^{2}_{\mathrm{min}}$ is the minimum value obtained with the set of stellar spectra. The rms values of $\sigma_{v}$ obtained with these template candidates lie between 5~and 10\,\% (maximum value is 10.6\,\% for J0959). For the two-dimensional kinematic maps, template mismatch will mostly have an effect on the overall level, but not on the structure if stellar populations across the galaxy are homogeneous. \subsection{Results} \label{ssec:kinematic:results} Example fits for an aperture-integrated spectrum and spectra from individual fibres are shown in Fig.~\ref{fig:kinematic_fits}. The template fits generally reproduce the observed spectra to the expected noise level. Fig.~\ref{fig:Kinematic_maps} shows the maps of velocity, velocity dispersion and spectral signal-to-noise. The kinematic maps are restricted to spectra with an average signal-to-noise ratio $S/N > 8$ (per pixel) for which kinematic parameters can be reliably measured. Two-thirds of the sample show little structure in the maps and display kinematics typical for nearly-isothermal pressure-supported slow rotators. Clear rotation patterns can be discerned in the velocity maps for the remaining third of the sample (e.g.~J0959, J1251 or J2238). 
Kinematic maps of local galaxies in general show a central peak of velocity dispersion; at the redshifts of our sample, the spatial resolution of the integral-field spectrograph is not sufficient to resolve this peak. Fig.~\ref{fig:sigma_comparison} compares our velocity dispersion measurements on the global (aperture-integrated) VIMOS spectra to measurements on the spectra from the Sloan Digital Sky Survey (Table~\ref{tab:sample}). The agreement between the different measurements is generally good, which gives us confidence in the quality of the spectra and the reliability of the analysis method. For the SDSS spectra we compare two independent measurements: the values listed by \citet{Bolton2008a} and new values determined with our code using the same wavelength ranges and template spectra as for the VIMOS spectra. The middle panel of Fig.~\ref{fig:sigma_comparison} compares these measurements, which differ only in the analysis method. The error bars on the slacR measurements include only the effect of noise in the spectra and do not take into account systematic effects such as choice of template and other parameters. The good agreement within the errors shows that systematic effects contribute only little to the uncertainties. A notable outlier is J0935 for which we measure $\sigma = 330\,\mathrm{km\,s^{-1}}$ compared to $396\,\mathrm{km\,s^{-1}}$ from \citet{Bolton2008a}. 
\begin{figure*} \centering \resizebox{0.48\hsize}{!}{\includegraphics{F05-J0037-kinmap}}\hfill \resizebox{0.48\hsize}{!}{\includegraphics{F05-J0216-kinmap}} \resizebox{0.48\hsize}{!}{\includegraphics{F05-J0912-kinmap}}\hfill \resizebox{0.48\hsize}{!}{\includegraphics{F05-J0935-kinmap}} \resizebox{0.48\hsize}{!}{\includegraphics{F05-J0959-kinmap}}\hfill \resizebox{0.48\hsize}{!}{\includegraphics{F05-J1204-kinmap}} \resizebox{0.48\hsize}{!}{\includegraphics{F05-J1250A-kinmap}}\hfill \resizebox{0.48\hsize}{!}{\includegraphics{F05-J1250B-kinmap}} \resizebox{0.48\hsize}{!}{\includegraphics{F05-J1251-kinmap}}\hfill \resizebox{0.48\hsize}{!}{\includegraphics{F05-J1330-kinmap}} \resizebox{0.48\hsize}{!}{\includegraphics{F05-J1443-kinmap}}\hfill \resizebox{0.48\hsize}{!}{\includegraphics{F05-J1451-kinmap}} \resizebox{0.48\hsize}{!}{\includegraphics{F05-J1627-kinmap}}\hfill \resizebox{0.48\hsize}{!}{\includegraphics{F05-J2238-kinmap}} \resizebox{0.48\hsize}{!}{\includegraphics{F05-J2300-kinmap}}\hfill \resizebox{0.48\hsize}{!}{\includegraphics{F05-J2303-kinmap}} \resizebox{0.48\hsize}{!}{\includegraphics{F05-J2321-kinmap}} \caption{Results of kinematic analysis of the SLACS/IFU sample. For each target, the left plot shows the map of systematic velocity in the lens galaxy and the middle plot shows the map of velocity dispersion. All velocities refer to the rest frame of the lens galaxy. The right plots show the mean signal-to-noise ratio (per pixel of width $0.65\,$\AA) of the spectra. The axis labels are in arcsec. The maps are oriented such that north is to the top and east to the left.} \label{fig:Kinematic_maps} \end{figure*} The right panel compares measurements on the SDSS spectra to those on the VIMOS spectra, using identical methods. The different instrumental resolution ($R=1800$ for SDSS, $R=2500$ for VIMOS) has been taken into account.
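The standard way to place dispersion measurements from two spectrographs on a common footing is to remove the instrumental broadening in quadrature. The sketch below assumes Gaussian instrumental profiles with the quoted resolving powers; the actual correction applied in the analysis may differ in detail:

```python
import numpy as np

C_KMS = 299792.458
FWHM_TO_SIGMA = 2.0 * np.sqrt(2.0 * np.log(2.0))  # approx. 2.3548

def sigma_inst(R):
    """1-sigma instrumental broadening (km/s) of a spectrograph with
    resolving power R = lambda / delta_lambda(FWHM), assuming a
    Gaussian line-spread function."""
    return C_KMS / R / FWHM_TO_SIGMA

def match_resolution(sigma_measured, R_from, R_to):
    """Rescale a velocity dispersion measured at resolution R_from to
    what would be measured at resolution R_to, with Gaussian
    broadenings adding in quadrature."""
    s2 = sigma_measured**2 - sigma_inst(R_from)**2 + sigma_inst(R_to)**2
    return np.sqrt(max(s2, 0.0))

# e.g. a dispersion measured on an SDSS (R=1800) spectrum, expressed
# as it would appear at the VIMOS resolution (R=2500)
sigma_vimos_equiv = match_resolution(250.0, 1800, 2500)
```

For the dispersions of this sample (roughly $200$ to $350\,\mathrm{km\,s^{-1}}$) the instrumental terms are small, so the quadrature correction shifts the values by only a few km/s.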
There appears to be a slight offset of the velocity dispersions obtained from the SDSS spectra compared to those from the VIMOS spectra. A similar trend is visible in the middle panel, while the comparison of the VIMOS measurements to the SDSS measurements from \citet{Bolton2008a} does not show any offset. \begin{figure*} \centering \includegraphics{F06-sigma_comparison} \caption{Comparison of velocity dispersion measurements. The VIMOS spectra used here are summed over all spaxels within the same aperture as the SDSS spectra and analysed using our method (\textsc{slacR}). For the SDSS spectra, values measured with \textsc{slacR} and values obtained from \citet[B08]{Bolton2008a} are used.} \label{fig:sigma_comparison} \end{figure*} \section{Secondary sources} \label{sec:other_sources} Some of the VIMOS/IFU fields in this sample contain objects in addition to the primary targets. For these fields we show larger-scale cut-outs from SDSS $r$-band images and label them by number. We extracted aperture spectra from the VIMOS data cubes and determined their redshifts by cross-correlation \citep{Tonry-Davis1979} with a set of SDSS template spectra. Coordinates relative to the primary target and redshifts are listed in Table~\ref{tab:other_sources}. The table also lists redshifts for the lens galaxies obtained by the same method in order to ensure that relative redshifts are accurate. Most of the additional objects are consistent with being at the same redshift as the lens galaxy. \begin{table} \centering \caption{Data for additional sources in VIMOS/IFU fields. Coordinates are in arcsec relative to the position of the lens galaxy (always labelled as ``1'') in each field. Redshifts marked with ``?'' are suggested by peaks in the cross-correlation but cannot be confirmed unambiguously by visual inspection. 
Objects with $z=0$ are stars.} \begin{tabular}{lrrl} \hline\hline Object & \multicolumn{1}{c}{$\Delta \alpha\,(\arcsec)$} & \multicolumn{1}{c}{$\Delta \delta\,(\arcsec)$} & \multicolumn{1}{c}{z} \\ \hline J0216-1 & $ 0 $ & $ 0 $ & 0.3311 \\ J0216-2 & $ +2.9$ & $+3.2$ & 0.3327 \\ J0216-3 & $ -0.9$ & $-9.2$ & 0.3339 \\ J0216-4 & $-13.1$ & $+3.5$ & 0.3327 \\ J0216-5 & $+12.4$ & $+4.8$ & 0 \\ \hline J0912-1 & 0 & 0 & 0.1638 \\ J0912-2 & $+4.3$ & $+6.4$ & 0.1609 \\ \hline J0935-1 & $ 0 $ & $ 0 $ & 0.3468 \\ J0935-2 & $+0.6$ & $+5.2$ & 0.3407 \\ J0935-3 & $+8.6$ & $-5.3$ & 0.3528 \\ \hline J1250B-1 & $ 0 $ & $ 0 $ & 0.0866 \\ J1250B-2 & $-5.1$ & $+11.1$ & 0.2621? \\ \hline J1251-1 & $ 0 $ & $ 0 $ & 0.2238 \\ J1251-2 & $+15.2$ & $+10.8$ & 0 \\ \hline J1451-1 & $ 0 $ & $ 0 $ & 0.1248 \\ J1451-2 & $-10.5$ & $-4.8$ & 0 \\ J1451-3 & $ +2.9$ & $+3.7$ & 0.5196? \\ \hline J2238-1 & $ 0 $ & $ 0 $ & 0.1365 \\ J2238-2 & $+4.4$ & $-4.6$ & 0.1358 \\ \hline J2300-1 & $ 0 $ & $ 0 $ & 0.2280 \\ J2300-2 & $-11.5$ & $-7.1$ & 0.2277? \\ \hline \end{tabular} \label{tab:other_sources} \end{table} \begin{figure*} \centering \resizebox{\hsize}{!}{ \includegraphics{F07-J0216-fc} \includegraphics{F07-J0912-fc} \includegraphics{F07-J0935-fc} \includegraphics{F07-J1250B-fc} } \resizebox{\hsize}{!}{ \includegraphics{F07-J1251-fc} \includegraphics{F07-J1451-fc} \includegraphics{F07-J2238-fc} \includegraphics{F07-J2300-fc} } \caption{Finding charts for the sources listed in Table~\ref{tab:other_sources}. Images are $r$-band images from the SDSS} \label{fig:other_sources:charts} \end{figure*} \section{Conclusions} \label{sec:conclusions} In this paper, we have presented integral-field spectroscopic data obtained with VIMOS/IFU on a sample of 17~early-type lens galaxies selected from the SLACS survey. The data permit spatially resolved reconstruction of the stellar kinematics in these galaxies, presented as maps of systematic velocity and velocity dispersion. 
The sample, which spans a redshift range of $0.08$ to $0.35$ for the lens galaxies, is well-suited for a comparison with local samples, such as SAURON \citep{Emsellem2004} and ATLAS\textsuperscript{3D} \citep{Cappellari2010short}. The distinction of slow-rotating and fast-rotating galaxies found in the SAURON sample \citep{Emsellem2007} and earlier work \citep[e.g.][]{Davies1983} is also evident from the kinematic maps for our sample. About a third of the galaxies in the sample show clear evidence for rotation in their velocity maps. While the spatial resolution of integral-field spectroscopic observations of galaxies at cosmological distances is necessarily much coarser than for local galaxies, the present sample offers the unique advantage that all the galaxies act as gravitational lenses and therefore offer the possibility to model their mass distribution using the two complementary methods of stellar dynamics and gravitational lensing. A Bayesian implementation of a fully self-consistent modelling algorithm for the joint analysis of stellar dynamics and gravitational lensing was developed by \citet{Barnabe-Koopmans2007} and applied to subsets of this sample by \citet{Czoske2008} and \citet{Barnabe2009b}. Modelling and analysis of the full data set is presented in \citet{Barnabe2011}. \section*{Acknowledgments} The data published in this paper have been reduced using \textsc{vipgi}, designed by the VIRMOS Consortium and developed by INAF Milano. M.B. acknowledges support from an NWO programme subsidy (project number 614.000.417) and from the Department of Energy contract DE-AC02-76SF00515. O.C. and L.V.E.K. were supported (in part) through an NWO-VIDI programme subsidy (project number 639.042.505). T.T. acknowledges support from the NSF through CAREER award NSF-0642621 and by the Packard Foundation through a Packard Fellowship.
\section{Astrophysical context} \label{sect:context} The brightest star in the sky, Sirius\,A, has all its main parameters, such as mass, luminosity and age, relatively well determined, and its abundances have now been studied in great detail using Space Telescope data by \citet{Landstreet2011}, where more background information may also be found. In previous work, the then available observations of Sirius were compared with results from a grid of models calculated with atomic diffusion and turbulence as a competing process by \citet{RicherMiTu2000}. Given the range of abundances observed for any given species, the agreement seemed satisfactory. In a recent paper \citep{VickMiRietal2010} similar results were obtained through a similar approach but with mass loss as the competing process. Because the observational error bars were too large, these authors were unable to determine which of mass loss or turbulence was responsible for its abundance anomalies. It is currently uncertain which process is most efficient in competing with atomic diffusion in A stars. For O and early B stars, mass loss is most likely the dominant process. It is clearly observed in those stars at a rate sufficient to obliterate the effects of atomic diffusion. However in \MS{} A stars the expected mass loss rate due to radiative accelerations is smaller than in O stars by several orders of magnitude. Its presence is likely only if it is started by another mechanism \citep{Abbott82}, and it might involve only metals \citep{Babel95}. On the other hand, since surface convection zones are very thin, one expects little corona-driven mass flux such as observed on the Sun. It is then \emph{a priori} quite uncertain whether A stars have any mass loss, and the claimed mass loss rate for Sirius is an important observation. It is thus important to verify as precisely as possible whether it is compatible with the currently observed abundance anomalies on the surface of Sirius.
The measurement of the mass loss rate of Sirius is a difficult observation and awaited Hubble telescope measurements of the Mg\,II resonance lines \citep{BertinLaVietal95}. Their analysis of this spectral feature with a wind model leads to an uncertainty of 0.5 dex on the mass loss rate if Mg is all once ionized. However there is an additional uncertainty related to the evaluation of the fraction of \Mg{} that is once ionized. Their more credible evaluation of Mg ionization is based on setting the ionization rate equal to the recombination rate and leads to\footnote{This is slightly different from their Eq.\,[17] for which they had neglected the error bar given with their Eq.\,[7] but included an evaluation of the Mg II ionization based on a \emph{corrected LTE} value coming from atmosphere models \citep{SnijdersLa75} which do not seem appropriate for the wind region of interest here.}: \begin{equation} 6 \times 10^{-14} < - \frac{dM}{dt} < 5 \times 10^{-13} \Msol/\mbox{yr}. \label{eq:rate} \end{equation} On the other hand, turbulence has often been used in stellar evolution calculations to explain observed abundance anomalies. In F stars of clusters, turbulence could be responsible for the destruction of surface \Li{} \citep{TalonCh98,TalonCh2005} thereby leading to the so called \Li{} gap. Turbulence could also be responsible for reducing abundance anomalies caused by atomic diffusion on Am and Fm stars \citep{TalonRiMi2006}. It can naturally explain the disappearance of abundance anomalies as rotation increases in those objects. It could also play a role for the \Li{} abundance evolution in solar type stars \citep{PinsonneaultKaSoetal89,ProffittMi91a}. It has however always been found necessary to use a number of adjustable parameters for its description by physical models of turbulent transport and its role is uncertain. 
In this series of papers on the role of atomic diffusion in stellar evolution, turbulence was only introduced when models with atomic diffusion led to anomalies larger than observed. Only one parameter was adjusted in order to control the influence of turbulence in limiting the size of anomalies: the surface mixed mass \citep{RicherMiTu2000,MichaudRiRi2011}. It is possible to improve what we learn from the accurate observations of Sirius by making more precise evolutionary calculations. Instead of the grid of solar metallicity models used in \citet{RicherMiTu2000} and in \citet{VickMiRietal2010}, this paper uses two new series of models which were respectively computed with turbulence and with mass loss as the process competing with atomic diffusion. These two series are precisely converged to the known properties of Sirius, $L$, \teff, $R$, $M$ and age. The original metallicity is determined and models with this metallicity are used to compare with observed abundances. This allows for a more precise and rigorous test than the calculations using grids of models. In this paper stellar evolution models with atomic diffusion as described in \citet{RicherMiTu2000} and in \citet{VickMiRietal2010} are used to determine the original metallicity of Sirius\,A using the age, radius and mass as constraints (Sect.\,\ref{sec:OriginalMetallicity}). Using this metallicity and the determined parameters, complete evolutionary models are then calculated (Sect.\,\ref{sec:Models}), and the surface abundances are compared with Landstreet's recent observations (Sect.\,\ref{sec:SurfaceAbundances}). The results are discussed in Sect.\,\ref{sec:Conclusion}. \section{Original metallicity} \label{sec:OriginalMetallicity} \begin{figure*} \centering \includegraphics[width=18cm]{Nsirius_main_pub.eps} \caption{ An HR diagram and the time evolution of \teff{}, $L$, $\log g$, $R$ and $Z_{\rm{surf}}$ are shown.
Observationally determined $\pm 1 \sigma$ intervals are shown for $L$, \teff{}, $g$ and $R$. For \teff{} the spectroscopically determined error bar is in black while that determined using luminosity and radius is in red (see the text). Each model is color coded and identified on the figure. The adopted acceptable age range is from 200 to 250 Myr (see the text). In the HR diagram, the part of the curves between 200 and 250 Myr is solid; it is dotted outside of that interval. All models were calculated with turbulence. Models with mass loss could not be distinguished, in the HR diagram, from those of the same original mass and composition calculated with turbulence. A subset of the curves is shown on Fig.\,\ref{fig:HR1}.} \label{fig:HR} \end{figure*} The fundamental parameters required to carry out stellar evolution calculations, except for the original chemical composition, have been relatively well determined for Sirius\,A. The Hipparcos parallax can be used to determine its distance and when coupled with interferometry \citep{KervellaThMoetal2003}, to determine its radius, 1.711\,$\pm$0.013\,\ensuremath{\,\mbox{\it R}_{\odot}}. These authors also use the Hipparcos parallax to determine the luminosity from the magnitude and to refine the mass determination to 2.12 $\pm$0.06 \msol{}. From the luminosity and radius one can obtain $\teff = 9900 \pm$140\,K while from spectroscopy \citet{Lemke89} obtained $\teff = 9900 \pm$200\,K. The age of Sirius was discussed in Sect.\,4.1 of \citet{RicherMiTu2000}; using evolutionary time scales of Sirius B, they suggested 250\,$\pm$50\,Myr. We adopt the slightly more restrictive range 225\,$\pm$25\,Myr suggested by \citet{KervellaThMoetal2003} in their Sect.\,2.2, where they argue that Sirius B would have been more massive on the \MS{} than assumed by \citet{RicherMiTu2000}. 
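These numbers can be cross-checked against the Stefan--Boltzmann law, $L = 4 \pi R^{2} \sigma T_{\mathrm{eff}}^{4}$. In the sketch below the luminosity of $25.4\,L_{\odot}$ is an assumed representative value for Sirius\,A (it is not quoted in the text), while the radius is the interferometric one:

```python
import numpy as np

# nominal solar and physical constants (SI)
L_SUN = 3.828e26            # W
R_SUN = 6.957e8             # m
SIGMA_SB = 5.670374419e-8   # W m^-2 K^-4

def teff_from_L_R(L_lsun, R_rsun):
    """Effective temperature from the Stefan-Boltzmann law,
    L = 4 pi R^2 sigma T_eff^4."""
    L = L_lsun * L_SUN
    R = R_rsun * R_SUN
    return (L / (4.0 * np.pi * R**2 * SIGMA_SB)) ** 0.25

# Sirius A: R = 1.711 R_sun (interferometric); L = 25.4 L_sun assumed
teff = teff_from_L_R(25.4, 1.711)  # close to the quoted 9900 K
```

With these inputs the inferred effective temperature agrees with the $9900 \pm 140$\,K derived from luminosity and radius in the text.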
Since the mass is well determined, we use a fixed mass of 2.1\msol{}, and then evolve models with a range of metallicities, using a scaled solar mix as given in Table\,1 of \citet{TurcotteRiMietal98}. Helium was adjusted to the fitted solar value for model H of \citet{TurcotteRiMietal98} as the starting homogeneous composition for some of the calculations. For most of the calculations however, the He mass fraction was corrected by $\Delta Y =-0.02$ because of the lower final metallicity\footnote{This 0.02 reduction of $Y$ for a 0.01 reduction of $Z$ is the same proportionality as used by \citealt{VandenBergSwRoetal2000} in building his Table\,2. LiBeB were taken from the meteoritic values of \citealt{GrevesseSa98}. }. In this paper a solar mass fraction, $\ensuremath{\,\mbox{\it X}_{\odot}}$, is often used in particular to normalize \emph{both} observed and calculated mass fractions. They are from Table\,1 of \citet{TurcotteRiMietal98} and correspond to the solar mass fractions at the birth of the Sun, more precisely the $Z_0(= 0.01999)$ for model H in Table 6 of that paper. The surface solar abundances of metals today are some 10\,\% smaller. Those normalizing factors are different from the solar abundances used by Landstreet for comparative purposes. Since, in this paper, the same normalizing factors are used for both observed and calculated quantities, they do not influence the comparison. Age, luminosity, \teff{} and radius are assumed well determined and are used as constraints to determine the original metallicity using models with turbulence. In Fig.\,\ref{fig:HR}, only models with $Z_0 = 0.009$, 0.010 and 0.011 are seen to satisfy all constraints within the predetermined error boxes. The radius is generally satisfied only for models younger than 250\,Myr. Furthermore only models with the He mass fraction reduced by 0.02 satisfy the constraint on $L$.
Models with higher $Z$ and a solar He mass fraction do not satisfy the constraint from the radius as may be seen from the trend of the models with the solar He mass fraction. Below the deepest surface convection zone, the turbulent diffusion coefficient has been assumed to obey a simple algebraic dependence on density given, in the calculations with turbulence presented in this paper, by \begin{equation} \Dturb= 10^4 D(\He)_0\left(\frac{\rho_0}{\rho}\right)^4 \label{eq:DT} \end{equation} where $D(\He)_0$ is the atomic diffusion coefficient\footnote{The values of $D(\He)_0$ actually used in this formula were always obtained --- for programming convenience --- from the simple analytical approximation \hbox{$D(\He)=3.3\times10^{-15}T^{2.5}/[4\rho\ln(1+1.125\times10^{-16}T^3/\rho)]$} (in cgs units) for He in trace amount in an ionized hydrogen plasma. These can differ significantly from the more accurate values used elsewhere in the code.} of He at some reference depth. Let $\Delta M \equiv \Mstar - M_r$ be the mass outside the sphere of radius $r$. For this paper, a series of models with different surface mixed masses (proportional to our parameter $\Delta M_0$) were calculated. More precisely, calculations were carried out with \begin{equation} \rho_0=\rho(\Delta M_0), \label{eq:Delta-M-0} \end{equation} where the run of $\rho(\Delta M)$ is given by the current stellar model. In words, in the calculations with turbulence reported in this paper, $\rho_0$ of Eq.~(\ref{eq:DT}) is the density found at depth $\Delta M = \Delta M_0$ in the evolutionary model. In practice the outer $\sim 3 \times \Delta M_0$ of the star is mixed by turbulence; for $\Delta M_0 = 10^{-6.0}$\,\Msol{} the concentration of most species is constant for $\Delta M \lta 10^{-5.5}$\,\Msol{}. As one increases $\Delta M_0$, one defines a one-parameter family of models. \section{The models of Sirius\,A} \label{sec:Models} Two series of models were evolved for Sirius\,A.
One in which the process competing with diffusion is turbulence, as described in \citet{RicherMiTu2000} and \citet{MichaudRiRi2011}, and one in which it is mass loss, as described in \citet{VickMiRietal2010}. In both, using opacity spectra from \citet{IglesiasRo96}, all aspects of atomic diffusion transport are treated in detail from first principles. \begin{figure*} \centering \includegraphics[width=18cm]{g_x_pub.eps} \caption{\emph{Upper row} Radiative accelerations for five atomic species, after 232\,Myr of evolution in a model with a mass loss rate of $10^{-13}$\,\Msol/yr (red curves) and in a model with turbulence (blue curves). The dotted lines represent gravity. \emph{Lower row} Corresponding mass fraction in the model with mass loss and in that with turbulence. The dashed lines are the original values. They correspond to an original metallicity of $Z_0 = 0.01$.} \label{fig:gR_x5} \end{figure*} The radiative accelerations of Mg, Si, Ca, Fe and Ni are shown in the upper row of Fig.\,\ref{fig:gR_x5} at $\sim$232\,Myr, the approximate age of Sirius\,A, in both a model calculated with mass loss (red curves) and one calculated with turbulence (blue curves). The two are practically superposed for $\DM > -5$ but are significantly different for many species closer to the surface (that is, $\DM < -5$). In the lower row the corresponding internal concentrations for the model with mass loss and for that with turbulence are shown. Differences between the \gr's of the two models are caused by saturation effects, as may be seen by comparing the abundances in the lower row. The $X(\Fe)$ and $X(\Ni)$ are larger in the model with turbulence for $\DM < -5$ than in the model with mass loss; the reduction of the photon flux at the wavelengths where \Fe{} and \Ni{} absorb the most is by a larger factor when the abundance is larger, and so the \gr's are smaller in the model with turbulence.
The large underabundance of \Ca{} at $\DM \sim -7.5$ in the wind model causes the larger \gr(\Ca) there, but the large underabundance is itself also caused by the maximum of \gr(\Ca), as discussed in Sec.\,5.1.1 of \citet{VickMiRietal2010}, to which the reader is referred for a detailed discussion of the interior wind solution. While the surface abundances of, say, \Fe{} and \Ni{} in the model with mass loss are within 0.3\,dex of those in the model with turbulence, their interior mass fractions differ by a factor of about 5 for $-7< \DM <-5$. Figures \ref{fig:internal_x} and \ref{fig:gR} contain results for all calculated species; they are shown in the online Appendix \ref{sec:Appendix}. \section{Surface abundances } \label{sec:SurfaceAbundances} \begin{figure*} \centering \includegraphics[width=9cm]{NSirius_Landstreet_etal_Fig3L_pub.eps} \includegraphics[width=9cm]{NSirius_Landstreet_etal_Fig3R_pub.eps} \caption{Observed and predicted abundances on Sirius\,A as a function of $Z$, the atomic number, \emph{left} in the model with turbulence calculated with an original metallicity of 0.009 for three slightly different values of turbulence and \emph{right} in the model with mass loss with an original metallicity of 0.010 calculated with four different mass loss rates. The model name Z0.009\_dY-0.02\_mtb1.0 stands for a model calculated with $Z_0= 0.009$, $\Delta Y = -0.02$, and $\Delta M_0 = 1.0 \times 10^{-6}$\,\msol.
The model name MassLoss\_Z0.010W2E-13 stands for a model calculated with \hbox{$dM/dt = -2\times 10^{-13}$\,\msol/yr.} All dotted lines represent models of about 200\,Myr.} \label{fig:surfAbun} \end{figure*} \begin{figure*} \centering \includegraphics[width=9cm]{NSirius_Landstreet_etal_Fig4L_pub.eps} \includegraphics[width=9cm]{NSirius_Landstreet_etal_Fig4R_pub.eps} \caption{Observed and predicted abundances on Sirius\,A as a function of $Z$, the atomic number, in the model with turbulence for two slightly different values of original metallicity, 0.010 and 0.011, and in the model with a mass loss rate of \hbox{10$^{-13}$\,\msol/yr} and a metallicity of 0.01 (see Fig.\,\ref{fig:surfAbun} for model labeling definitions). In the \emph{left} panel, the two sets of observations are from \citealt{Landstreet2011}: the black circles are from his observations while the pink triangles are an average over all recent observations (see the text). In the \emph{right} panel, the averaged values are replaced by the actual data points of each observer as given in Table\,I of \citet{Landstreet2011} where {\it circles}, \citet{Landstreet2011}; {\it inverted open triangles}, \citet{LambertRoBe82}; {\it inverted three-point stars}, \citet{Lemke90}; {\it blue squares}, \citet{QiuZhChetal2001}; {\it diamonds}, \citet{HillLa93}; {\it asterisks}, \citet{HuiBonHoaBuAl97}; {\it plus}, \citet{RentzschHolm97}; {\it upright open triangles}, \citet{HolwegerSt93}; {\it pink squares}, \citet{SadakaneUe89}. See the text for the explanation of the differences between the error bars in the right and left panels. } \label{fig:surfAbun_turb_twin} \end{figure*} Given the constraints imposed by age, \teff{}, $L$ and $R$, and that the mass is 2.1\,\msol{}, the original metallicity is fixed to \hbox{$Z_0 = 0.010 \pm 0.001$} (see Sect.\,\ref{sec:OriginalMetallicity}). 
Specifically, the luminosity (middle left hand panel of Fig.\,\ref{fig:HR}) determines $Z_0 = 0.010 \pm 0.001$ and then the radius determines the acceptable age to lie between 200 and $\lta 230$\,Myr (middle right hand panel). There remain only the mass loss rate \emph{or} the mass mixed by turbulence that may be varied to define a range of predicted abundances that can then be compared with observations. Evolutionary models were calculated for $\Delta M_0= 1.0 $, 1.4 and 2.1$\times 10^{-6}\msol$ (see Eq.\,[\ref{eq:Delta-M-0}]) and for mass loss rates of 0.5, 0.7, 1.0 and 2.0 $\times 10^{-13}$\,\msol/yr. In Fig.\,\ref{fig:surfAbun}, predictions from some of them are compared with observations from column 2 of Table\,1 of \citet{Landstreet2011}. In Fig.\,\ref{fig:surfAbun_turb_twin}, data from the other columns of his Table\,1 are also used in order to present a picture of the uncertainties of the observations, as briefly discussed below. In the left panel of Fig.\,\ref{fig:surfAbun}, results are shown for the case $Z_0= 0.009$ at 200\,Myr (dotted line segments) and at 220\,Myr (solid line segments), which is the age interval over which all constraints are satisfied according to Fig.\,\ref{fig:HR}. In practice, even though shown for all cases, the dotted and solid segments are barely distinguishable and merely widen the dots. Note the assumed original composition at age 0.0\,Myr in light blue on each panel of Figs.\,\ref{fig:surfAbun} and \ref{fig:surfAbun_turb_twin}. As the mass mixed by turbulence is decreased from 2.1 to 1.0$\times 10^{-6}\msol$, the surface abundance of elements supported by \gr{} (e.g. most \Fe{} peak elements) and the underabundance factor of sinking elements both increase. In the left panel of Fig.\,\ref{fig:surfAbun_turb_twin} similar results are shown for original metallicities of $Z_0 =0.010$ and 0.011 with $\Delta M_0= 1.4 \times 10^{-6}\msol$.
Within the range of original metallicities acceptable according to Sect.\,\ref{sec:OriginalMetallicity} (from $Z_0 = 0.009$ to 0.011), the level of agreement between predicted surface abundances and observed ones does not change much although the $Z_0= 0.010$ case is slightly favored. Given the number of observed species, there is in practice little room for adjustment: as one may see from the left panel of Fig.\,\ref{fig:surfAbun}, the \Fe{} peak abundances favor the lower value of the mixed mass but the abundances of He, O, S and Ca rather favor the larger value. Predictions for four mass loss rates and a metallicity of \hbox{$Z_0= 0.010$} are shown in the right panel of Fig.\,\ref{fig:surfAbun}. Only one metallicity is shown since the effects of changing metallicity over the acceptable range of $Z_0= 0.009$ to 0.011 are small as briefly mentioned above for the turbulence models. The iron peak elements would probably favor a mass loss rate of 0.5 $\times 10^{-13}$\,\msol/yr, but the lower mass species then disagree more strongly, and the better compromise appears to be the 1.0 $\times 10^{-13}$\,\msol/yr case. The iron peak elements show approximately the same sensitivity to the mass loss range as to the mixed mass range illustrated in Fig.\,\ref{fig:surfAbun} but He, O and S are more sensitive to the mass loss rate\footnote{As the mass loss rate is reduced, the settling velocity becomes closer to the wind velocity in magnitude. This tends to amplify differences in settling velocities caused, among other factors, by small mass differences. For instance the abundance variation of $^4\He$ is a factor of 1.8 larger than that of $^3\He$ for the 0.5 $\times 10^{-13}$\,\msol/yr case but only a factor of 1.3 larger in the $ 10^{-13}$\,\msol/yr case (see the right panel of Fig.\,\ref{fig:surfAbun}).}. 
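The scalings explored above are easy to reproduce numerically. As a minimal sketch of the turbulence parameterization of Eq.\,(\ref{eq:DT}), using the analytical approximation for $D(\He)$ quoted in its footnote (the temperature and density values below are arbitrary placeholders, not model output):

```python
import math

def d_he(T, rho):
    # Analytic cgs approximation for the He atomic diffusion coefficient
    # in an ionized hydrogen plasma, as quoted in the footnote to Eq. (1).
    return 3.3e-15 * T**2.5 / (4.0 * rho * math.log(1.0 + 1.125e-16 * T**3 / rho))

def d_turb(rho, rho0, d_he0):
    # Eq. (1): D_turb = 1e4 * D(He)_0 * (rho0 / rho)^4,
    # with rho0 taken at the depth Delta M = Delta M_0.
    return 1.0e4 * d_he0 * (rho0 / rho) ** 4

rho0 = 1.0e-7            # placeholder density at Delta M = Delta M_0 (g/cm^3)
d0 = d_he(2.0e5, rho0)   # placeholder reference temperature (K)

ratio_at_ref = d_turb(rho0, rho0, d0) / d0      # 1e4 at the anchoring depth
ratio_deeper = d_turb(10.0 * rho0, rho0, d0) / d0  # falls off as rho^-4
```

At the anchoring depth the turbulent coefficient exceeds atomic diffusion by the assumed factor of $10^4$, while the steep $\rho^{-4}$ dependence makes the mixing negligible well below $\Delta M_0$, consistent with the mixed region extending over roughly the outer $3\,\Delta M_0$.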
\begin{figure*} \centering \includegraphics[width=9cm]{eventail_Z0.010_dY-0.02_mtb1.4_SIRIUS_pub.eps} \includegraphics[width=9cm]{eventail_MassLoss_Z0.010W1E-13a_SIRIUS_pub.eps} \caption{Color--coded interior concentrations for the same $Z_0= 0.010$ models as in Fig.\,\ref{fig:surfAbun_turb_twin} at $\sim 233$\,Myr \emph{left} in the model with turbulence and \emph{right} in the model with mass loss. The radial coordinate is the radius and its scale is linear, but the logarithmic value of the mass coordinate above a number of points, \DM, is shown on the left of the horizontal black line. The concentration scale is given in the right insert. Small circles near the center of both models mark the central convection zone. While the surface abundances are very similar, as seen in Fig.\,\ref{fig:surfAbun_turb_twin}, the interior concentrations are quite different between $\DM = -3$ and the surface. } \label{fig:eventails} \end{figure*} In the left and right panels of Fig.\,\ref{fig:surfAbun_turb_twin} are shown the same three theoretical models (two calculated with turbulence and one with mass loss) compared to two different presentations of the results from Table 1 of \citet{Landstreet2011}. On the left panel are shown both his determinations (his column labeled \emph{L11}) and, as separate data points, his determinations averaged with those of the other observers he lists in his table, except for a few values he argues are erroneous or too uncertain to be worth including (his column labeled \emph{mean}). The error bars are standard deviations of the observations listed in his table. These do not include contributions from the error bars of the various authors. The actual values of the different observers listed in his Table 1 are shown in the right panel of Fig.\,\ref{fig:surfAbun_turb_twin}. The values which Landstreet excluded from his averages are not shown but all others are shown. 
One can argue that the difference in abundance values between the various observers is a better estimate of the uncertainty of the abundance determinations than the mean value with associated standard deviation shown in the left panel. Following a discussion with Landstreet (private communication), the error bars for his points (from his column \emph{L11}) were slightly increased for the right panel of Fig.\,\ref{fig:surfAbun_turb_twin} only. Thus, all cases where he had 0.1 in the \emph{L11} column of Table\,1 were increased to 0.15, since this is closer to the actual dispersion found for dominant ions with many lines. Noting that there is some additional uncertainty (about 0.1\,dex) due to imprecise fundamental parameters and microturbulence, another 0.1\,dex was added in quadrature to all sigmas, giving a minimum sigma of 0.18. The right panel of Fig.\,\ref{fig:surfAbun_turb_twin} gives our comparison between theory and observations. The agreement is not perfect for any value of turbulence or mass loss. In fact the difference between the mass loss and turbulence models is very small for atomic species lighter than Cr, and one may question whether abundances alone can really distinguish between the two given the error bars. Of the species included in our calculations, abundances were determined observationally for 17 atomic species and upper limits for 4. For the model $Z_0 =0.010$ with $\Delta M_0= 1.4 \times 10^{-6}\msol$ one counts 8 species within 1\,$\sigma$ and an additional 6 within 2\,$\sigma$. These numbers are respectively 8 and 7 for the $Z_0=0.011$ model with the same turbulence. One counts respectively 8 and 5 for the $Z_0 =0.010$ mass loss model with a mass loss rate of 1.0\,$\times 10^{-13}$\,\msol/yr. In addition, of the 4 upper limits, 3 are compatible with predictions. While the trend is right, the agreement is not so good for Ti, Cr and Mn.
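The adopted minimum uncertainty follows from the quadrature combination just described; a minimal sketch (the function name and argument defaults are ours, chosen for illustration):

```python
import math

def combined_sigma(sigma_line, floor=0.15, systematic=0.10):
    # Raise the per-line dispersion to at least `floor` (0.15 dex, the
    # dispersion found for dominant ions with many lines), then add the
    # ~0.1 dex systematic (fundamental parameters, microturbulence)
    # in quadrature.
    return math.sqrt(max(sigma_line, floor) ** 2 + systematic ** 2)

sigma_min = combined_sigma(0.10)   # the quoted minimum sigma, ~0.18 dex
sigma_big = combined_sigma(0.30)   # larger dispersions are barely inflated
```

For a raw dispersion of 0.1\,dex this gives $\sqrt{0.15^2 + 0.1^2} \approx 0.18$\,dex, the minimum sigma adopted above.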
The interior concentrations in the two $Z_0 = 0.01$ models of Fig.\,\ref{fig:surfAbun_turb_twin} are shown in Fig.\,\ref{fig:eventails}. While the surface abundances are quite similar in the two, the interior concentrations are quite different for $\DM < -3$ (see Sect.\,\ref{sec:Models}). If one compares these results with Fig.\,20 of \citet{VickMiRietal2010}, one notes that the same mass loss rate of 1.0\,$\times 10^{-13}$\,\msol/yr had been found to lead to the prediction closest to observed abundances. While the age assumed for Sirius is about the same in the two papers, the larger mass of the models used in \citet{VickMiRietal2010} causes them to be more evolved and, so, to have a smaller gravity at a similar age. This is an important difference when one compares two models with approximately the same \teff{} and thus the same \gr's. Another difference comes from the original $Z_0$, which is 0.010 for the mass loss models of Fig.\,\ref{fig:surfAbun_turb_twin} of this paper, but 0.02 for those of Fig.\,20 of \citet{VickMiRietal2010}. \section{Conclusion} \label{sec:Conclusion} Using observationally determined stellar parameters for Sirius\,A, we first fixed the metallicity and He mass fraction (Sect.\,\ref{sec:OriginalMetallicity}). Then, expected surface abundances were predicted as a function of either a surface mass mixed by turbulence or a mass loss rate. Of the 17 abundances determined observationally, up to 15 can be predicted within 2\,$\sigma$, and 3 of the 4 determined upper limits are satisfied. The three atomic species B, N, and Na show the strongest disagreement. While the origin of the assumed turbulence is not determined, it could be either shear induced by differential rotation \citep{TalonRiMi2006} or gravity waves \citep{TalonCh98}. If the main competing process is mass loss, however, it has the great advantage of having already been observed \citep{BertinLaVietal95}.
The mass loss rate of 1.0 $\times 10^{-13}$\,\msol/yr that best reproduces abundance observations is slightly larger than the lower limit of 6 $\times 10^{-14}$\,\msol/yr determined by these authors from asymmetries of Mg II lines using ST observations {(see Eq.\,[\ref{eq:rate}])}. Their estimate based on \emph{corrected LTE}, their Eq.\,[12], however gives a mass loss rate between 5.0 $\times 10^{-12}$ and 5.0 $\times 10^{-11}$\,\msol/yr, which would lead to practically no surface abundance variations during evolution, in contradiction with the observed Sirius abundances (see Figs.\,11 and 20 of \citealt{VickMiRietal2010} for a calculation with 1.0 $\times 10^{-12}$\,\msol/yr). Our results then support the arguments presented above in Sect.\,\ref{sect:context} in favor of their estimate based on the \emph{radiative ionization fraction}. While, by themselves, abundances do not favor mass loss or turbulence as the competing process, the agreement with the observationally determined mass loss rate favors mass loss. While there probably still remain some uncertainties in the observationally determined abundances, the disagreement between observations and calculations points to some weaknesses of the models. The first one may be related to the presence of Sirius\,B. Even though it is quite a wide pair, it had been suggested by \citet{RicherMiTu2000} that the disagreement with the C and N observations could be caused by the transfer of material from the former primary. This was further discussed by \citet{Landstreet2011}, who also suggested that it could simultaneously explain the difficulty with the B upper limit. Sodium could also be affected, just as it is affected in the globular cluster M4 \citep{MarinoViMietal2011}. However the extent of the mass transfer in Sirius remains uncertain.
Even if it is tempting to accept mass loss as the most important mechanism competing with diffusion in slowly rotating stars, it is also observed that abundance anomalies are much less important in rapidly rotating stars. Another mechanism linked to rotation must then be involved. Either rotation driven turbulence \citep{TalonRiMi2006} or meridional circulation \citep{CharbonneauMi91} could progressively reduce abundance anomalies as rotation increases. The weak dependence of abundance anomalies on the rotation rate could be due to the effects of rotation becoming larger than those of mass loss only as one approaches the 100\,km/s limit of $v \sin i$ \citep{Abt2000} for the Am star phenomenon. In relation to Fig.\,\ref{fig:eventails}, it was suggested in Sect.\,\ref{sec:SurfaceAbundances} that asteroseismology tests could distinguish between mass loss and turbulence as the competing mechanism for slowly rotating stars. While detailed evolutionary calculations with meridional circulation have not yet been carried out, one expects that laminar meridional circulation would lead to internal metal distributions, in 3 dimensions, similar to those that mass loss leads to, in 1 dimension, since both are advective and not diffusive processes. This opens the possibility of distinguishing between meridional circulation and rotation induced turbulence using asteroseismology. For simplicity, these calculations were carried out with undifferentiated mass loss throughout Sirius\,A's evolution, as seems appropriate for Am stars. However a 2.1\,\msol{} star starts its \MS{} at $\teff \sim 10500$\,K (see Fig.\,\ref{fig:HR}), where no H convection zone is present and which is probably within the HgMn domain. What would have been the mass loss rate then? The observation of Hg isotope anomalies on HgMn stars suggests that the mass loss would be differentiated (see \citealt{MichaudReCh74} and Sect.\,4 of \citealt{MichaudRi2008}).
How would this affect surface abundances during later evolutionary stages such as reached by Sirius\,A? \begin{acknowledgements} We thank Dr John Landstreet for very kindly communicating to us his results ahead of publication and for useful discussions. We thank Dr David Leckrone for a constructive criticism of the manuscript and useful suggestions that led to significant improvements. This research was partially supported at the Universit\'e de Montr\'eal by NSERC. We thank the R\'eseau qu\'eb\'ecois de calcul de haute performance (RQCHP) for providing us with the computational resources required for this work. \end{acknowledgements}
\section{Cross-section measurements} \label{app:CStables} Tables \ref{tab:xsection_systematics_eta0}-\ref{tab:xsection_systematics_eta4} list the values of the measured isolated prompt photon production cross-sections, for the \etaone, \etatwo, \etathree\ and \etafour\ regions, respectively. The various systematic uncertainties originating from the purity measurement, the photon selection and identification efficiency and the luminosity are shown. In addition, the correlated uncertainties between the efficiency and the purity determination are propagated as such and included separately ($\sigma_{corr}$). The total uncertainty is the combination of the statistical and systematic uncertainties (summed in quadrature), except for the uncertainty on the luminosity. \begin{table*} \caption{Measured isolated prompt photon cross-section for \etaonesh\ with statistical and systematic uncertainties. The total uncertainty includes both the statistical and all systematic uncertainties (summed in quadrature), except for the uncertainty on the luminosity.} \label{tab:xsection_systematics_eta0} \centering \footnotesize \begin{tabular}{rrr|rrrrrr|r} \hline\hline $E_T^\mathrm{min}$ & $E_T^\mathrm{max}$ & $d\sigma / dE_T$ & $\delta_\mathrm{stat}$ & $\delta_\mathrm{yield}$ & $\delta_{\mathrm{efficiency}}$& $\delta_{\mathrm{corr}}$ & $\delta_\mathrm{unfolding}$ & $\delta_\mathrm{tot}$ & $\delta_\mathrm{lumi}$ \\ $[\mathrm{GeV}]$ & $[\mathrm{GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$\\ \hline 45 & 55 & 83.3 & 0.5 & 4.8 & 3.3 & 3.4 & 2.5 & 7.2 & 2.8 \\ 55 & 70 & 32.7 & 0.3 & 1.8 & 1.2 & 1.2 & 1.0 & 2.7 & 1.1 \\ 70 & 85 & 12.3 & 0.2 & 0.6 & 0.4 & 0.4 & 0.4 & 0.9 & 0.4 \\ 85 & 100 & 5.3 & 0.1 & 0.2 & 0.2 & 0.2 & 0.2 & 0.4 & 0.2 \\ 100 & 125 & 2.2 & 0.05 & 0.09 & 0.08 & 0.07 & 0.07 & 0.2 & 0.07 \\ 125 & 150 & 0.80 & 0.03 & 0.03 & 0.03 & 0.02 & 0.03 & 
0.06 & 0.03 \\ 150 & 200 & 0.26 & 0.01 & 0.01 & 9\pcs{-3} & 7\pcs{-3} & 8\pcs{-3} & 0.02 & 9\pcs{-3} \\ 200 & 400 & 2.8\pcs{-2} & 2\pcs{-3} & 2\pcs{-3} & 1\pcs{-3} & 4\pcs{-4} & 8\pcs{-4} & 3\pcs{-3} & 9\pcs{-4} \\ \hline\hline \end{tabular} \end{table*} \begin{table*} \caption{Measured isolated prompt photon cross-section for \etatwo, uncertainties as in Table~\ref{tab:xsection_systematics_eta0}.} \label{tab:xsection_systematics_eta1} \centering \footnotesize \begin{tabular}{rrr|rrrrrr|r} \hline\hline $E_T^\mathrm{min}$ & $E_T^\mathrm{max}$ & $d\sigma/ dE_T$ & $\delta_\mathrm{stat}$ & $\delta_\mathrm{yield}$ & $\delta_{\mathrm{efficiency}}$& $\delta_{\mathrm{corr}}$ & $\delta_\mathrm{unfolding}$ & $\delta_\mathrm{tot}$ & $\delta_\mathrm{lumi}$ \\ $[\mathrm{GeV}]$ & $[\mathrm{GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$\\ \hline 45 & 55 & 99.0 & 0.7 & 8.1 & 4.4 & 3.8 & 3.0 & 10.4 & 3.4 \\ 55 & 70 & 38.9 & 0.3 & 3.0 & 1.7 & 1.2 & 1.2 & 3.9 & 1.3 \\ 70 & 85 & 14.9 & 0.2 & 1.1 & 0.7 & 0.4 & 0.5 & 1.4 & 0.5 \\ 85 & 100 & 6.3 & 0.1 & 0.4 & 0.3 & 0.1 & 0.2 & 0.6 & 0.2 \\ 100 & 125 & 2.7 & 0.06 & 0.2 & 0.1 & 0.06 & 0.08 & 0.2 & 0.09 \\ 125 & 150 & 1.0 & 0.03 & 0.06 & 0.04 & 0.02 & 0.03 & 0.1 & 0.03 \\ 150 & 200 & 0.29 & 0.01 & 0.02 & 0.01 & 7\pcs{-3} & 9\pcs{-3} & 0.03 & 0.01 \\ 200 & 400 & 3.2\pcs{-2} & 2\pcs{-3} & 3\pcs{-3} & 2\pcs{-3} & 9\pcs{-4} & 1\pcs{-3} & 4\pcs{-3} & 1\pcs{-3} \\ \hline\hline \end{tabular} \end{table*} \begin{table*} \caption{Measured isolated prompt photon cross-section for \etathree, uncertainties as in Table~\ref{tab:xsection_systematics_eta0}.} \label{tab:xsection_systematics_eta3} \centering \footnotesize \begin{tabular}{rrr|rrrrrr|r} \hline\hline $E_T^\mathrm{min}$ & $E_T^\mathrm{max}$ & $d\sigma / dE_T$ & $\delta_\mathrm{stat}$ & $\delta_\mathrm{yield}$ & $\delta_{\mathrm{efficiency}}$& 
$\delta_{\mathrm{corr}}$ & $\delta_\mathrm{unfolding}$ & $\delta_\mathrm{tot}$ & $\delta_\mathrm{lumi}$ \\ $[\mathrm{GeV}]$ & $[\mathrm{GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$\\ \hline 45 & 55 & 41.9 & 0.4 & 4.6 & 3.1 & 1.2 & 1.3 & 5.8 & 1.4 \\ 55 & 70 & 15.7 & 0.2 & 1.6 & 1.0 & 0.4 & 0.5 & 2 & 0.5 \\ 70 & 85 & 6.4 & 0.2 & 0.5 & 0.4 & 0.2 & 0.2 & 0.7 & 0.2 \\ 85 & 100 & 2.4 & 0.08 & 0.2 & 0.2 & 0.05 & 0.08 & 0.3 & 0.08 \\ 100 & 125 & 1.0 & 0.04 & 0.07 & 0.08 & 0.02 & 0.03 & 0.1 & 0.03 \\ 125 & 150 & 0.36 & 0.02 & 0.03 & 0.03 & 8\pcs{-3} & 0.01 & 0.05 & 0.01 \\ 150 & 200 & 0.11 & 9\pcs{-3} & 0.01 & 7\pcs{-3} & 3\pcs{-3} & 4\pcs{-3} & 0.02 & 4\pcs{-3} \\ 200 & 400 & 1.1\pcs{-2} & 1\pcs{-3} & 1\pcs{-3} & 8\pcs{-4} & 2\pcs{-4} & 3\pcs{-4} & 2\pcs{-3}& 4\pcs{-4} \\ \hline\hline \end{tabular} \end{table*} \begin{table*} \caption{Measured isolated prompt photon cross-section for \etafour, uncertainties as in Table~\ref{tab:xsection_systematics_eta0}.} \label{tab:xsection_systematics_eta4} \centering \footnotesize \begin{tabular}{rrr|rrrrrr|r} \hline\hline $E_T^\mathrm{min}$ & $E_T^\mathrm{max}$ & $d\sigma / dE_T$ & $\delta_\mathrm{stat}$ & $\delta_\mathrm{yield}$ & $\delta_{\mathrm{efficiency}}$& $\delta_{\mathrm{corr}}$ & $\delta_\mathrm{unfolding}$ & $\delta_\mathrm{tot}$ & $\delta_\mathrm{lumi}$ \\ $[\mathrm{GeV}]$ & $[\mathrm{GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$ & $[\mathrm{pb/GeV}]$\\ \hline 45 & 55 & 68.9 & 0.6 & 7.6 & 3.8 & 3.9 & 2.1 & 9.6 & 2.3 \\ 55 & 70 & 26.4 & 0.3 & 2.7 & 1.3 & 1.3 & 0.8 & 3.3 & 0.9 \\ 70 & 85 & 10.0 & 0.2 & 0.9 & 0.5 & 0.5 & 0.3 & 1.2 & 0.3 \\ 85 & 100 & 4.2 & 0.1 & 0.3 & 0.3 & 0.2 & 0.1 & 0.5 & 0.1 \\ 100 & 125 & 1.7 & 0.06 & 0.1 & 0.1 & 0.08 & 0.05 & 0.2 & 0.06 \\ 125 & 150 & 
0.55 & 0.03 & 0.03 & 0.03 & 0.02 & 0.02 & 0.06 & 0.02 \\ 150 & 200 & 0.17 & 0.01 & 0.01 & 0.01 & 6\pcs{-3} & 6\pcs{-3} & 0.02 & 6\pcs{-3} \\ 200 & 400 & 1.2\pcs{-2} & 1\pcs{-3} & 6\pcs{-4} & 3\pcs{-3} & 3\pcs{-4} & 4\pcs{-4} & 3\pcs{-3} & 4\pcs{-4} \\ \hline\hline \end{tabular} \end{table*} \onecolumn \clearpage \input{atlas_authlist} \end{document}
\section{Introduction} Since the inception of quantum mechanics (QM), it has remained a debatable question how our everyday world view of macrorealism can be reconciled with the quantum formalism. Historically, this question was first pointed out by Schr\"{o}dinger \cite{sch} through his famous cat experiment. Since then, quite a number of attempts have been made to pose the appropriate questions relevant to this issue and to answer them. One effective approach to this issue is to experimentally realize the quantum coherence of Schr\"{o}dinger cat-like states of large objects \cite{arndt}. Another approach within the formalism of QM is the decoherence program \cite{zur}. It explains how interaction between quantum systems and the environment leads to classical behavior, but does not by itself provide the desired `cut' (\emph{\`a la} Heisenberg \cite{He25}). It is also argued that even if the decoherence effect is made negligible, quantum behavior can disappear as an effect of coarse-grained measurements \cite{bruk}. Proposals have also been put forward \cite{ghi} to modify the dynamics of the standard formalism of QM, allowing a unified description of microscopic and macroscopic systems. However, the above mentioned attempts do not exactly address the fundamental question whether macrorealism is, in principle, compatible with the formalism of QM. Macrorealism is a classical world view asserting that the properties of macro-objects exist independently of and irrespective of one's observation. Motivated by Bell's theorem \cite{bell64}, in 1985, Leggett and Garg \cite{leggett85} formulated a class of inequalities based on the notions of macrorealism, which provides an elegant scheme for experimentally testing the compatibility between macrorealism and QM.
To be more specific, the notion of macrorealism consists of two main assumptions \cite{leggett85,leggett,A.leggett}, which are the following: \emph{Macrorealism per se (MRps):} If a macroscopic system has two or more macroscopically distinguishable ontic states available to it, then the system remains in one of those states at all instants of time. \emph{Non-invasive measurement (NIM):} The definite ontic state of the macrosystem is determined without affecting the state itself or its possible subsequent dynamics. It is reasonable to assume that the systems in our everyday world, in principle, obey the aforementioned assumptions of a macrorealistic theory. Based on these assumptions, the standard Leggett-Garg inequalities (LGIs) are derived. Such inequalities can be shown to be violated in certain circumstances, thereby implying that one or both of the assumptions of MRps and NIM are incompatible with the quantum statistics. In recent times, a flurry of theoretical studies on macrorealism and LGIs has been reported \cite{guhne,emary12,maroney14,kofler13,budroni15,budroni14,emary,halliwell16,kofler08,saha15,swati17,pan17}, and a number of experiments have been performed using various systems \cite{lambert,goggin11,knee12,laloy10,george13,knee16}. Let us encapsulate the simplest LG scenario. Consider that the measurement of a dichotomic observable $\hat{M}$ is performed at three different times $t_1$, $t_2$ and $t_3$ $(t_3 \geq t_2 \geq t_1)$. In the Heisenberg picture, this in turn implies the sequential measurement of the observables $\hat{M}_{1},\hat{M}_{2}$ and $\hat{M}_{3}$ corresponding to $t_1$, $t_2$ and $t_3$ respectively.
From the assumptions of MRps and NIM, one can derive a standard LGI given by \begin{equation} \label{eq1} K_{3}=\langle \hat{M_{1}} \hat{M_{2}}\rangle + \langle \hat{M_{2}} \hat{M_{3}}\rangle - \langle \hat{M_{1}} \hat{M_{3}}\rangle \leq {1} \end{equation} Here $\langle \hat{M_{1}} \hat{M_{2}}\rangle=\sum_{m_{1},m_{2}=\pm1} m_{1} m_{2}\, P(M_{1}^{m_{1}}, M_{2}^{m_{2}})$, and similarly for the other temporal correlation terms. By relabeling the measurement outcomes of each $M_i$ as $M_i = -M_i$ with $i = 1, 2,$ and $3$, three more standard LGIs can be obtained. Instead of three times, if the measurement of $M$ is performed $n$ times, then the standard LGI for the $n$-measurement LG string can be written as \begin{eqnarray} \label{eq2} K_{n}=\langle \hat{M}_{1} \hat{M}_{2}\rangle +...+\langle \hat{M}_{n-1} \hat{M}_{n}\rangle-\langle \hat{M}_{1} \hat{M}_{n}\rangle \end{eqnarray} The inequality (\ref{eq2}) is bounded as follows \cite{emary}. If $n$ is odd, $-n\leq K_{n}\leq n-2$ for $n \geq 3$, and if $n$ is even, $-(n-2)\leq K_{n}\leq n-2$ for $n \geq 4$. For $n=3$, one simply recovers inequality (\ref{eq1}). For a two-level system, the maximum quantum value of $K_{n}$ is $(K_{n})_{Q}^{max}=n\cos\frac{\pi}{n}$. For $n=3$, $(K_{3})_{Q}^{max}=3/2$. Thus for a three-time standard LG scenario involving a dichotomic observable, the temporal Tsirelson bound of $K_3$ is $3/2$. It is proved \cite{guhne} that this bound is independent of the system size. Within the standard framework of QM, the maximum violation of the CHSH inequality \cite{bell64} is restricted by the Tsirelson bound \cite{chsh}, which is significantly less than the algebraic maximum of the inequality. The algebraic maximum may be achieved in a post-quantum theory but not in QM. LGIs are often considered to be the temporal analog of Bell's inequality.
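Both the macrorealist bound of inequality (\ref{eq1}) and the two-level quantum maximum $n\cos(\pi/n)$ can be checked numerically. A brute-force sketch (our own illustration, not from the original derivation) enumerating all macrorealist value assignments:

```python
import math
from itertools import product

# Macrorealism assigns definite values M1, M2, M3 = +/-1 at the three
# measurement times, so K3 is bounded by its maximum over all eight
# joint assignments.
k3_values = [m1 * m2 + m2 * m3 - m1 * m3
             for m1, m2, m3 in product((+1, -1), repeat=3)]
k3_max = max(k3_values)   # the standard LGI bound, 1
k3_min = min(k3_values)   # the lower algebraic bound, -3

# Temporal Tsirelson bound for a two-level system: (K_n)_Q^max = n cos(pi/n).
k3_quantum_max = 3 * math.cos(math.pi / 3)   # 3/2 for n = 3
```

The enumeration confirms $K_3 \le 1$ for any macrorealist theory, while the quantum maximum $3\cos(\pi/3) = 3/2$ exceeds it, which is the temporal Tsirelson bound quoted above.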
However, it has been shown \cite{budroni14} that for degenerate dichotomic observables in a qutrit system, the quantum value of $K_3$ goes up to $2.21$ and can even reach the algebraic maximum of $3$ in the asymptotic limit of the system size. Such an amount of violation is achieved by invoking a degeneracy breaking projective measurement, which they termed the von Neumann rule. Recently, two of us have argued \cite{kumari} that such a violation of the temporal Tsirelson bound has no relevance to the usual violation of LGIs. The purpose of the present paper is to provide improved quantum violation of macrorealism for a qubit system. We argue that by keeping the assumptions of macrorealism intact, there is scope for formulating inequalities different from the standard LGIs. We note here an important observation: due to the sequential nature of the measurement, the LG scenario is more flexible than the CHSH one. Such flexibility allows us to formulate new variants of standard LGIs. For the simplest case of the three-time measurement scenario, we first formulate an interesting variant of LGI and show that our proposed inequality provides a considerably larger quantum violation compared to the standard LGIs. We then formulate more variants of standard LGIs by increasing the number of measurements $n$ and show that the quantum violation increases with $n$. For sufficiently large $n$, the quantum values of the variants of LGIs reach their algebraic maxima, even for a qubit system. Such variants of LGIs thus provide an improved test of macrorealism compared to standard LGIs. Further, in terms of no-disturbance (coined as no-signaling in time in the LG scenario), we discuss how the variants of LGIs are conceptually elegant and can be considered better candidates for experimentally testing macrorealism compared to standard LGIs. This paper is organized as follows. In Sec.\,II, we propose a variant of LGI for the three-time measurement scenario and demonstrate that it provides a larger quantum violation compared to the standard LGI.
By increasing the number of measurements ($n$), in Sec. III we formulate two more variants of LGIs. We show that for a qubit system the quantum violation of our variants of LGIs increases with $n$ and can even reach the algebraic maximum in the large-$n$ limit. In Sec. IV, we compare the variants of LGIs with the standard LGIs and the no-signaling-in-time conditions. We summarize our results in Sec. V. \section{Variants of LGIs in three-time measurement scenario} We start by noting that the standard LGIs are a particular class of inequalities but \emph{not} the unique one. The flexibility of the LG scenario allows us to formulate variants of LGIs different from the standard LGI given by Eq. (\ref{eq1}). We ensure that the assumptions of MRps and NIM used in the derivation of the standard LGI remain the same. Let us again consider the three-time LG scenario involving the measurement of dichotomic observables $\hat{M}_{1},\hat{M}_{2}$ and $\hat{M}_{3}$ in sequence. Now, instead of the three two-time correlation functions used in Eq.(\ref{eq1}), we consider a three-time correlation function $\langle \hat{M}_{1} \hat{M}_{2}\hat{M}_{3}\rangle$, a two-time function $\langle \hat{M}_{1}\hat{M}_{2}\rangle$ and finally $\langle\hat{M}_{3}\rangle$. Using them, we propose the inequality \begin{equation} \label{3time} K^3_{3}=\langle \hat{M}_{1} \hat{M}_{2} \hat{M}_{3}\rangle + \langle \hat{M}_{i} \hat{M}_{j} \rangle - \langle \hat{M}_{k} \rangle \leq {1} \end{equation} where $i,j \in \{1,2,3\}$ with $j>i$, and $k$ denotes the remaining index. We call these inequalities variants of LGIs. It is crucial to note again that the assumptions of MRps and NIM remain the same as in the derivation of the standard LGIs. The inequalities (\ref{3time}) are violated by QM.
To show this, we take the inequality with $i,j$ and $k$ chosen to be $1,2$ and $3$ respectively, and consider the qubit state \begin{equation} \label{qubit} |{\psi(t_1)}\rangle = \cos \theta |{0}\rangle + \exp(-i\phi) \sin \theta |{1}\rangle \end{equation} with $\theta \in [0,\pi]$ and $\phi \in [0,2\pi]$. The measurement observable at the initial time $t_{1}$ is taken to be the Pauli observable $\hat{\sigma}_{z}$. The unitary evolution is given by $U_{ij} = e^{-i \omega (t_{j}-t_{i})\sigma_x}$, where $\omega$ is the coupling constant. For simplicity, we consider $\tau=|t_{i+1}-t_{i}|$ and $g= \omega \tau$. The quantum mechanical expression of $K^3_{3}$ is given by \begin{eqnarray} \label{eq10} (K^3_{3})_{Q}&=& \cos2 g ( 4 \cos^{2} g \cos 2 \theta ) + \sin 4 g \sin 2 \theta \sin \phi \nonumber\\ &-& 2 \cos^{2} g \cos 2 \theta \end{eqnarray} which is state-dependent, in contrast to the quantum value of the standard LGI. For comparison with the standard LGIs, the quantum expression of $K_3$ is \begin{equation} \label{eq9} (K_{3})_{Q}=2\cos 2 g - \cos 4 g \end{equation} which is independent of the state. If the values of the relevant parameters are taken as $g = 1.72 $, $\theta= 2.04$ and $\phi=\pi/2$, the quantum value of $K^3_{3}$ is $1.93$, thereby violating the inequality (\ref{3time}). The maximum quantum value of $K^3_{3}$ can be shown to be $2$ if different coupling constants are allowed between the evolutions; for simplicity, we take the same coupling constant $g$ throughout. The quantum value of $K^3_{3}$ is then larger than $(K_{3})_{Q}^{max}=3/2$. The expressions $(K_{3})_{Q}$ and $(K^3_{3})_{Q}$ are plotted in Fig.(\ref{fig:01}). \begin{figure}[ht] \begin{minipage}[c]{0.55\textwidth} \includegraphics[width=1\textwidth]{K33} \end{minipage}\hfill \begin{minipage}[hc]{0.45\textwidth} \caption{The quantities $(K_{3})_{Q}$ and $(K_{3}^{3})_{Q}$ given by Eq.(\ref{eq9}) and Eq.(\ref{eq10}) respectively are plotted against $g$.
The values of the relevant parameters are $\theta= 2.04 $ and $\phi=\pi/2$. } \label{fig:01} \end{minipage} \end{figure} Thus, if a larger violation of an inequality is considered to be an indicator of more non-classicality, then the variant of LGI captures the notion of macrorealism better than the standard LGIs. \section{Variants of LGIs for $n$-time measurements} The above idea can be extended to the $n$-time measurement scenario with $n>3$. For example, if $n=4$, we can formulate a variant of LGI given by \begin{equation} \label{4timek} K^{3}_{4}=\langle \hat{M}_{1} \hat{M}_{2} \hat{M}_{3}\hat{M}_{4}\rangle + \langle \hat{M}_{1}\hat{M}_{2} \hat{M}_{3} \rangle - \langle \hat{M}_{4} \rangle\leq {1} \end{equation} This inequality belongs to the same class as (\ref{3time}). Three more inequalities of this class can be obtained by changing the positions of $\hat{M}_{1},\hat{M}_{2}, \hat{M}_{3}$ and $\hat{M}_{4} $ in the last two terms of the inequality (\ref{4timek}). Interestingly, for $n=4$, another variant of LGI can be proposed as \begin{equation} \label{4timel} \hat{L}^{3}_{4}=\langle \hat{M}_{1} \hat{M}_{2} \hat{M}_{3} \rangle + \langle \hat{M}_{2}\hat{M}_{3} \hat{M}_{4}\rangle - \langle \hat{M}_{1}\hat{M}_{4} \rangle\leq {1} \end{equation} Similar to the earlier case, three more inequalities can be obtained. If the number of measurements is increased further, one finds more variants of LGIs.
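As an independent cross-check of the three-time quantities, the sequential measurement statistics can be simulated directly. The sketch below is our own construction (not part of the original derivation): it performs projective $\hat{\sigma}_z$ measurements interleaved with the unitary $e^{-ig\sigma_x}$, scans a parameter grid at $\phi=\pi/2$, and confirms that $(K_3)_Q$ is capped at the temporal Tsirelson bound $3/2$ while $(K^3_3)_Q$ climbs to about $1.93$, consistent with the value quoted in the text (phase conventions, e.g. the sign of $\phi$, may differ from the original).

```python
import numpy as np

# Projectors onto the sigma_z eigenstates, labelled by the outcome +/-1
PROJ = {+1: np.diag([1.0, 0.0]).astype(complex),
        -1: np.diag([0.0, 1.0]).astype(complex)}

def U(g):
    """Inter-measurement unitary exp(-i g sigma_x)."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
    return np.cos(g) * np.eye(2) - 1j * np.sin(g) * sx

def rho0(theta, phi):
    """Initial qubit state cos(theta)|0> + exp(-i phi) sin(theta)|1>."""
    psi = np.array([np.cos(theta), np.exp(-1j * phi) * np.sin(theta)])
    return np.outer(psi, psi.conj())

def corr(rho, flags, g):
    """Expectation of the product of outcomes over the measured time slots.
    flags[t] = 1 means a projective sigma_z measurement at slot t; the state
    evolves by U(g) between consecutive slots (Lueders state update)."""
    if not flags:
        return 1.0
    if not flags[0]:
        return corr(U(g) @ rho @ U(g).conj().T, flags[1:], g)
    total = 0.0
    for m in (+1, -1):
        p = np.real(np.trace(PROJ[m] @ rho))
        if p < 1e-12:
            continue
        post = U(g) @ (PROJ[m] @ rho @ PROJ[m] / p) @ U(g).conj().T
        total += m * p * corr(post, flags[1:], g)
    return total

def k3(rho, g):    # standard LGI combination <M1M2> + <M2M3> - <M1M3>
    return corr(rho, [1, 1, 0], g) + corr(rho, [0, 1, 1], g) - corr(rho, [1, 0, 1], g)

def k33(rho, g):   # variant: <M1 M2 M3> + <M1 M2> - <M3>
    return corr(rho, [1, 1, 1], g) + corr(rho, [1, 1, 0], g) - corr(rho, [0, 0, 1], g)

best_k3 = best_k33 = -3.0
for g in np.linspace(0.0, np.pi, 61):
    for theta in np.linspace(0.0, np.pi, 61):
        rho = rho0(theta, np.pi / 2)
        best_k3 = max(best_k3, k3(rho, g))
        best_k33 = max(best_k33, k33(rho, g))

print(best_k3, best_k33)   # ~1.5 (Tsirelson-capped) vs ~1.93
```

The grid maximum of $(K_3)_Q$ sits exactly at the state-independent value $2\cos 2g-\cos 4g|_{g=\pi/6}=3/2$, while the variant exceeds it for equal couplings, in line with the discussion above.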
Now, generalizing the above formulation to the $n$-time measurement scenario, we propose the following two inequalities \begin{eqnarray} \label{ntimek} K^3_{n}=\langle \hat{M}_{1}\hat{M}_{2}...\hat{M}_{n}\rangle + \langle \hat{M}_{1}\hat{M}_{2}...\hat{M}_{n-1}\rangle- \langle \hat{M_{n}}\rangle \leq{1} \end{eqnarray} and \begin{eqnarray} \label{ntimel} \hat{L}^{3}_{n}&=&\langle \hat{M}_{1}\hat{M}_{2}\hat{M}_{3}\hat{M}_{4}...\hat{M}_{n-1}\rangle + \langle \hat{M}_{2}\hat{M}_{3}...\hat{M}_{n}\rangle\nonumber\\ &-& \langle \hat{M}_{1} \hat{M}_{n}\rangle \leq{1} \end{eqnarray} where $\langle \hat{M_{1}}...\hat{M_{n}}\rangle =\sum_{m_{1},...,m_{n}} m_{1}...m_{n} P(M_{1}^{m_{1}},..., M_{n}^{m_{n}})$, and similarly for the other correlations. While inequality (\ref{ntimek}) belongs to the class of (\ref{3time}) and (\ref{4timek}), the inequality (\ref{ntimel}) belongs to the other class, given by (\ref{4timel}). Both $n$-time inequalities are derived from the same assumptions of macrorealism. Next, we examine the quantum violation of inequalities (\ref{4timek}) and (\ref{4timel}) for the state given by Eq. (\ref{qubit}). The quantum mechanical expressions of $K^{3}_{4}$ and $L^{3}_{4}$ are respectively given by \begin{eqnarray} \label{4timekq} (K^3_{4})_{Q}&=& \frac{1}{2} \big( 1 + \cos4 g + 8 \cos 2 g \sin ^{2} 2 g \cos2 \theta \nonumber\\ &-& 2 \sin 6 g \sin 2\theta\sin\phi \big) \end{eqnarray} \begin{eqnarray} \label{4timelq} (L^3_{4})_{Q}&=& 2 \cos^{2} g \cos 2 g \cos 2\theta - \cos 6 g \nonumber\\ &+& \frac{1}{2}\sin 4 g \sin 2\theta \sin\phi \end{eqnarray} The value of $(K^3_{4})_{Q}$ is $2.12$ at $g = 1.24, \theta = 1.90$ and $\phi = \pi/2$, and that of $(L^3_{4})_{Q}$ is $2.03$ at $g = 0.42, \theta = 0.21$, and $\phi= \pi/2$. However, the above values of $(K^3_{4})_{Q}$ and $(L^3_{4})_{Q}$ are not the temporal Tsirelson bounds of (\ref{4timek}) and (\ref{4timel}); the exact bounds are not crucial for our present purpose.
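The quoted four-time values can be reproduced by brute-force simulation of four sequential projective measurements. This is our own sketch; since the sign convention for $\phi$ can differ between derivations, we maximize over $\phi=\pm\pi/2$ at the quoted $(g,\theta)$:

```python
import numpy as np

PROJ = {+1: np.diag([1.0, 0.0]).astype(complex),
        -1: np.diag([0.0, 1.0]).astype(complex)}

def U(g):
    sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
    return np.cos(g) * np.eye(2) - 1j * np.sin(g) * sx

def rho0(theta, phi):
    psi = np.array([np.cos(theta), np.exp(-1j * phi) * np.sin(theta)])
    return np.outer(psi, psi.conj())

def corr(rho, flags, g):
    """Sequential projective sigma_z measurements at the flagged time slots,
    with unitary U(g) between consecutive slots; returns <prod of outcomes>."""
    if not flags:
        return 1.0
    if not flags[0]:
        return corr(U(g) @ rho @ U(g).conj().T, flags[1:], g)
    total = 0.0
    for m in (+1, -1):
        p = np.real(np.trace(PROJ[m] @ rho))
        if p < 1e-12:
            continue
        post = U(g) @ (PROJ[m] @ rho @ PROJ[m] / p) @ U(g).conj().T
        total += m * p * corr(post, flags[1:], g)
    return total

def K34(rho, g):   # <M1M2M3M4> + <M1M2M3> - <M4>
    return corr(rho, [1, 1, 1, 1], g) + corr(rho, [1, 1, 1, 0], g) - corr(rho, [0, 0, 0, 1], g)

def L34(rho, g):   # <M1M2M3> + <M2M3M4> - <M1M4>
    return corr(rho, [1, 1, 1, 0], g) + corr(rho, [0, 1, 1, 1], g) - corr(rho, [1, 0, 0, 1], g)

k34 = max(K34(rho0(1.90, s * np.pi / 2), 1.24) for s in (+1, -1))
l34 = max(L34(rho0(0.21, s * np.pi / 2), 0.42) for s in (+1, -1))
print(k34, l34)   # close to the quoted 2.12 and 2.03
```

Both values land within rounding of the text's $2.12$ and $2.03$, and both exceed the macrorealist bound $1$ by more than the standard four-time LGI margin.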
Note that, for a qubit system, the maximum quantum value of the standard four-time LGI is $2\sqrt{2}$ and its macrorealist bound is 2. Thus, in the four-time measurement scenario, the difference between the quantum and macrorealist values is $0.82$. But in the case of our variants of LGIs we have $(K^3_{4})_{Q}- K^3_{4}=1.12$ and $(L^3_{4})_{Q}- L^3_{4}=1.03$. It can also be seen that $(K^3_{4})_{Q}>(K^3_{3})_{Q}>(K_{3})_{Q}$ and $(L^3_{4})_{Q}>(K_{3})_{Q}$. Thus, by increasing the number of measurements, the quantum violation of the variants of LGIs can be improved compared to the quantum violation of the standard three- or four-time LGIs. Further, we demonstrate that when $n$ is sufficiently large, the quantum values $(K^3_{n})_{Q}$ and $(L^3_{n})_{Q}$ reach the algebraic maxima of $K^3_{n}$ and $L^3_{n}$ respectively. For $n$-time sequential measurements, the calculation of the correlation functions in QM is a nontrivial task. To tackle this problem, we derive a compact formula for the $n$-time sequential correlation, given in Eq.(\ref{s8}) of Appendix A.
For the qubit state given by (\ref{qubit}), the quantum expression of $K^3_{n}$ for even $n$ is given by \begin{eqnarray} \label{eq11} (K^{3}_{n_{even}})_{Q}&=& (\cos2g)^{\frac{n}{2}} + (\cos2g)^{\frac{n}{2} -1 }\cos 2\theta - \big( \cos 2 (n-1)g\nonumber\\ & & \cos 2\theta + \sin 2 (n-1)g \sin 2\theta \sin\phi \big) \end{eqnarray} and for odd $n$ \begin{eqnarray} \label{eq14} (K^{3}_{n_{odd}})_{Q}&=& (\cos 2g)^{\frac{n-1}{2}} \cos 2\theta + (\cos 2g)^{\frac{n-1}{2}} - \big( \cos 2 (n-1)g\nonumber\\ & & \cos 2\theta + \sin 2 (n-1)g \sin 2\theta \sin\phi \big) \end{eqnarray} By considering $g = \frac{\pi}{2n}$, Eqs.(\ref{eq11}) and (\ref{eq14}) take the form \begin{eqnarray} \label{eq12} (K^{3}_{n_{even}})_{Q} &=& \big(\cos \frac{\pi}{n}\big)^{\frac{n}{2}} + \big(\cos \frac{\pi}{n} \big)^{(\frac{n}{2}-1)}\cos2\theta \nonumber\\ &+& \cos \frac{\pi}{n} \cos2\theta- \sin \frac{\pi}{n} \sin2\theta \sin\phi \end{eqnarray} and \begin{eqnarray} \label{eq15} (K^{3}_{n_{odd}})_{Q} &=& \big(\cos \frac{\pi}{n}\big)^{\frac{n-1}{2}} \cos2\theta + \big(\cos \frac{\pi}{n}\big)^{(\frac{n-1}{2})} \nonumber\\ &+& \cos \frac{\pi}{n} \cos2\theta- \sin \frac{\pi}{n}\sin2\theta\sin\phi \end{eqnarray} respectively. In the large $n$ limit, both of them reduce to \begin{eqnarray} \label{eq13} (K^{3}_{n_{even}})_{Q}=(K^{3}_{n_{odd}})_{Q}&\approx&1+2\cos2\theta \end{eqnarray} Thus, when $\theta\approx 0$, the quantities $(K^{3}_{n_{even}})_{Q} = (K^{3}_{n_{odd}})_{Q}\approx 3$, i.e., the algebraic maximum of the inequalities (\ref{ntimek}) and (\ref{ntimel}). Next, we calculate the quantum violation of the other variant of LGI given by (\ref{ntimel}) for the state in Eq.(\ref{qubit}).
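The approach to the algebraic maximum can be made explicit by evaluating the closed forms at the choice $g=\pi/2n$ and $\theta=0$, where the $\phi$-dependent terms vanish. A small numerical sketch of ours based on Eqs.(\ref{eq12}) and (\ref{eq15}):

```python
import numpy as np

def k3n(n):
    """(K^3_n)_Q at g = pi/(2n) and theta = 0 (the phi-terms drop out),
    using the even/odd closed forms quoted in the text."""
    c = np.cos(np.pi / n)
    if n % 2 == 0:
        k = n // 2
        return c**k + c**(k - 1) + c
    k = (n - 1) // 2
    return 2.0 * c**k + c

vals = [k3n(n) for n in (4, 10, 100, 1000)]
print(vals)   # increases monotonically toward the algebraic maximum 3
```

Since $(\cos \pi/n)^{n/2} \to 1$ as $n\to\infty$, every term tends to unity, so the sequence saturates the bound $3$ from below.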
The quantum expression of $L^{3}_{n}$ for even $n$ is given by \begin{eqnarray} \label{eq16} (L^{3}_{n_{even}})_{Q}&=& (\cos 2 g)^{\frac{n}{2} - 1} \cos 2\theta + (\cos 2 g)^{\frac{n}{2} - 1} \big(\cos 2 g \cos 2\theta \nonumber\\ &+& \sin 2 g \sin 2\theta \sin\phi \big) - \cos 2 (n-1)g \end{eqnarray} If $n$ is odd, we have \begin{eqnarray} \label{eq18} (L^{3}_{n_{odd}})_{Q}&=& (\cos 2g)^{\frac{n-1}{2}} + (\cos 2g)^{\frac{n-1}{2}} \nonumber\\ &-&\cos 2 (n-1)g \end{eqnarray} which is independent of the state. Similar to the earlier case, again taking $g = \frac{\pi}{2n}$, from Eqs. (\ref{eq16}) and (\ref{eq18}) we have \begin{eqnarray} \label{eq17} (L^{3}_{n_{even}})_{Q} &=& \big(\cos \frac{\pi}{n} \big)^{\frac{n}{2} - 1}\cos 2\theta + \cos\frac{\pi}{n}+\big(\cos \frac{\pi}{n} \big)^{(\frac{n}{2} - 1)} \nonumber\\ && \Big( \cos \frac{\pi}{n}\cos2\theta+\sin\frac{\pi}{n} \sin2\theta \sin\phi \Big) \end{eqnarray} and \begin{eqnarray} \label{eq19} (L^{3}_{n_{odd}})_{Q}&=& 2 \big(\cos \frac{\pi}{n} \big)^{\frac{n-1}{2}} +\cos\frac{\pi}{n} \end{eqnarray} For large $n$, the quantum value $(L^{3}_{n_{odd}})_{Q}$ approaches $3$ independently of the state, and the quantity $(L^{3}_{n_{even}})_{Q}$ approaches the algebraic maximum $3$ when $\theta\approx 0$. Eqs.(\ref{eq17}) and (\ref{eq19}) are plotted in Figure \ref{fig2} to demonstrate how the quantum values of $(L^{3}_{n_{odd}})_{Q}$ and $(L^{3}_{n_{even}})_{Q}$ approach the algebraic maximum as the number of measurements $n$ increases. \begin{figure}[ht] \begin{minipage}[c]{0.5\textwidth} \includegraphics[width=1\textwidth]{L16} \end{minipage}\hfill \begin{minipage}[hc]{0.43\textwidth} \caption{The quantities $(L^{3}_{n_{odd}})_{Q}$ and $(L^{3}_{n_{even}})_{Q}$ given by Eq.(\ref{eq19}) and Eq.(\ref{eq17}) respectively are plotted against the number of measurements $n$ by taking $\theta=0$.
Both quantities approach the algebraic maximum $3$ of the inequalities (\ref{ntimek}) and (\ref{ntimel}) for large $n$.} \label{fig2} \end{minipage} \end{figure} \section{Comparing variants of LGIs with other formulations of macrorealism} Fine's theorem \cite{fine} states that the CHSH inequalities provide the necessary and sufficient conditions for local realism. Since the standard LGIs are often considered to be the temporal analogue of the CHSH inequalities, one may expect that they also provide the necessary and sufficient condition for macrorealism. In recent work, however, Clemente and Kofler \cite{clemente} showed that no set of standard LGIs can provide the necessary and sufficient condition for macrorealism, whereas a suitable conjunction of no-signaling in time (NSIT) conditions does. In this connection, two of us \cite{swati17} have shown that the Wigner formulation of LGIs is stronger than the standard LGIs, but it also does not provide the necessary and sufficient condition for macrorealism. Against this backdrop, in this section we analyze the status of our variants of LGIs for the three-time measurement scenario. For this, let us first recall the connection between standard LGIs, NSIT conditions and macrorealism. The NSIT condition is the statistical version of the NIM condition. It is analogous to the no-signaling condition in Bell's theorem, although a violation of an NSIT condition does not imply any inconsistency with physical principles. It simply states that the probability of a measurement outcome remains unaffected by prior measurements. Clearly, the satisfaction of all NSIT conditions in any operational theory ensures the existence of a global joint probability distribution $P(M_{1}^{m_1},M_{2}^{m_2},M_{3}^{m_3})$, where $m_1,m_2,m_3=\pm1$, and in such a case no violation of any LGI can occur.
A two-time NSIT condition can be written as \begin{equation} NSIT_{(1)2}:P(M_{2}^{m_2})=\sum_{m_1} P_{12}(M^{m_1}_{1},M^{m_2}_{2}) \end{equation} which means that the probability $P(M_{2}^{m_2})$ is unaffected by the prior measurement of $M_{1}$. Similarly, a three-time NSIT condition is given by \begin{eqnarray} \nonumber NSIT_{(1)23}:P(M_{2}^{m_2},M_{3}^{m_3})&=&\sum_{m_1}P_{123}(M_{1}^{m_1},M_{2}^{m_2},M_{3}^{m_3})\\ \end{eqnarray} Here $P_{123}(M_{1}^{m_1},M_{2}^{m_2},M_{3}^{m_3})$ denotes the joint probability when all three measurements are performed. Clemente and Kofler \cite{clemente} have shown that a suitable conjunction of two-time and three-time NSIT conditions provides the necessary and sufficient condition for macrorealism, i.e., \begin{eqnarray} \label{nsit} NSIT_{(2)3} \wedge NSIT_{(1)23} \wedge NSIT_{1(2)3} \Leftrightarrow MR \end{eqnarray} where MR denotes macrorealism. We first show why the standard LGIs do not provide the necessary and sufficient condition for macrorealism. Such an argument was first initiated in \cite{maroney14} and discussed in detail in \cite{swati17}; to keep the present work self-contained, we encapsulate the essence of the argument. Let us consider the pairwise marginal statistics of the experimental arrangement in which all three measurements ($M_1$, $M_2$ and $M_3$) are performed, and introduce the following quantity \begin{eqnarray} \nonumber D_{1}(M_{2}^{m_2},M_{3}^{m_3})&=&P(M_{2}^{m_2},M_{3}^{m_3})\\ \label{d11} &-&\sum_{m_1} P_{123}(M_{1}^{m_1},M_{2}^{m_2},M_{3}^{m_3}) \end{eqnarray} which quantifies the amount of disturbance created by the measurement of $M_1$ at $t_1$ (in other words, the degree of violation of the NSIT condition) on the measurements of $M_2$ and $M_3$ at $t_2$ and $t_3$ respectively.
Similarly, \begin{eqnarray} \nonumber D_{2}(M_{1}^{m_1},M_{3}^{m_3})&=&P(M_{1}^{m_1},M_{3}^{m_3})\\ \label{d22} &-&\sum_{m_2} P_{123}(M_{1}^{m_1},M_{2}^{m_2},M_{3}^{m_3}) \end{eqnarray} \begin{eqnarray} \nonumber D_{3}(M_{1}^{m_1},M_{2}^{m_2})&=&P(M_{1}^{m_1},M_{2}^{m_2})\\ \label{d33} &-&\sum_{m_3} P_{123}(M_{1}^{m_1},M_{2}^{m_2},M_{3}^{m_3}) \end{eqnarray} Note that, since no information can travel backward in time, $D_{3}(M_{1}^{m_1},M_{2}^{m_2})=0$ in any physical theory. For two-time measurements we can define a similar quantity, for example, $D_{1}(M_{2}^{m_2})$. The standard LGIs are derived by assuming the satisfaction of all NSIT conditions. But in QM the NSIT conditions are, in general, not satisfied. This, in fact, is the reason for the violation of LGIs in QM. It is then straightforward to see that the difference between $K_3$ and $(K_3)_{123}$ plays an important role for the violation of the LGI. Clearly, if $K_{3}=(K_{3})_{123}$ is satisfied, the LGI will \textit{not} be violated. When all three measurements are performed for measuring each correlation, the expression of $K_{3}$ in inequality (\ref{eq1}) can be written as \begin{eqnarray} \nonumber (K_{3})_{123}&=& \langle M_{1}M_{2}\rangle_{123} +\langle M_{2}M_{3}\rangle_{123}-\langle M_{1}M_{3}\rangle_{123}\nonumber\\ \label{d27} &=& 1-4\alpha \end{eqnarray} where $\alpha =P(M_{1}^{+},M_{2}^{-},M_{3}^{+})+P(M_{1}^{-},M_{2}^{+},M_{3}^{-})$.
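These definitions are easy to probe numerically in the qubit model of Sec. II. The sketch below is our own construction (the parameter values are arbitrary): it builds the joint distribution $P_{123}$ from sequential $\hat{\sigma}_z$ measurements, verifies that $D_3$ vanishes identically, that $D_1$ is generically nonzero (a violation of $NSIT_{(1)23}$), and that the identity $(K_3)_{123}=1-4\alpha$ holds.

```python
import numpy as np

PROJ = {+1: np.diag([1.0, 0.0]).astype(complex),
        -1: np.diag([0.0, 1.0]).astype(complex)}

def U(g):
    sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
    return np.cos(g) * np.eye(2) - 1j * np.sin(g) * sx

def rho0(theta, phi):
    psi = np.array([np.cos(theta), np.exp(-1j * phi) * np.sin(theta)])
    return np.outer(psi, psi.conj())

def joint(rho, g, measured):
    """Joint outcome distribution over the measured subset of three time
    slots; unmeasured slots contribute free evolution only."""
    dists = {(): (1.0, rho)}
    for t in range(3):
        new = {}
        for hist, (p, r) in dists.items():
            if t in measured:
                for m in (+1, -1):
                    pm = np.real(np.trace(PROJ[m] @ r))
                    post = PROJ[m] @ r @ PROJ[m] / max(pm, 1e-300)
                    new[hist + (m,)] = (p * pm, U(g) @ post @ U(g).conj().T)
            else:
                new[hist] = (p, U(g) @ r @ U(g).conj().T)
        dists = new
    return {h: p for h, (p, r) in dists.items()}

rho, g = rho0(0.6, np.pi / 2), 0.5
P123 = joint(rho, g, {0, 1, 2})
P23 = joint(rho, g, {1, 2})
P12 = joint(rho, g, {0, 1})

# D3: marginalizing the *final* measurement never disturbs earlier statistics
D3 = {h: P12[h] - sum(P123[h + (m,)] for m in (+1, -1)) for h in P12}
# D1: measuring M1 *does* disturb the later joint statistics in QM
D1 = {h: P23[h] - sum(P123[(m,) + h] for m in (+1, -1)) for h in P23}

# algebraic identity (K3)_123 = 1 - 4*alpha
k3_123 = sum(p * (m1 * m2 + m2 * m3 - m1 * m3)
             for (m1, m2, m3), p in P123.items())
alpha = P123[(+1, -1, +1)] + P123[(-1, +1, -1)]

print(max(abs(v) for v in D3.values()))   # ~0: D3 vanishes
print(max(abs(v) for v in D1.values()))   # nonzero: NSIT_(1)23 violated
print(k3_123, 1 - 4 * alpha)              # equal
```

The vanishing of $D_3$ reflects no-signaling backward in time, while the nonzero $D_1$ is exactly the disturbance that makes an LGI violation possible.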
Using Eqs.(\ref{d11}) and (\ref{d22}) we can write \begin{eqnarray} \label{xx} && K_{3}-(K_{3})_{123}=\\ \nonumber &&\sum_{m_2 =m_3}D_{1}(M_{2}^{m_2},M_{3}^{m_3})-\sum_{m_1= m_3}D_{2}(M_{1}^{m_1},M_{3}^{m_3})\\ \nonumber &-&\sum_{m_2 \neq m_3} D_{1}(M_{2}^{m_2},M_{3}^{m_3})+\sum_{m_1\neq m_3}D_{2}(M_{1}^{m_1},M_{3}^{m_3}) \end{eqnarray} Since $\sum D_{1}(M_{2}^{m_2},M_{3}^{m_3})=0$, $\sum D_{2}(M_{1}^{m_1},M_{3}^{m_3}) =0$ and $ K_{3}\leq 1$, from Eq.(\ref{xx}) we obtain \begin{eqnarray} \nonumber &&2\sum_{m_2=m_3}D_{1}(M_{2}^{m_2},M_{3}^{m_3})-2\sum_{m_1=m_3}D_{2}(M_{1}^{m_1},M_{3}^{m_3})\\ &+&(K_{3})_{123}\leq 1 \end{eqnarray} Substituting the value of $(K_{3})_{123}$ from Eq.(\ref{d27}), we have \begin{eqnarray} \nonumber \sum_{m_2=m_3}D_{1}(M_{2}^{m_2},M_{3}^{m_3})-\sum_{m_1=m_3}D_{2}(M_{1}^{m_1},M_{3}^{m_3})\leq 2\alpha\\ \end{eqnarray} We have thus written the standard LGIs in terms of NSIT conditions. For the violation of the standard LGI in (\ref{eq1}), the relation \begin{eqnarray} \nonumber \sum_{m_2=m_3}D_{1}(M_{2}^{m_2},M_{3}^{m_3})-\sum_{m_1=m_3}D_{2}(M_{1}^{m_1},M_{3}^{m_3}) > 2\alpha\\ \label{lgsatis} \end{eqnarray} needs to be satisfied in QM. This implies that for a violation of the standard LGI, at least one of the two three-time NSIT conditions ($NSIT_{(1)23}$ and $NSIT_{1(2)3}$) is required to be violated. However, mere violations of the NSIT conditions do not guarantee the violation of the LGIs, which depends on the interplay between the violations of the two NSIT conditions and on the threshold value $2\alpha$. Thus, violations of the NSIT conditions are necessary for the violation of the LGI but not sufficient \cite{maroney14, swati17}. Next, we compare our variants of LGIs with the standard LGIs and the NSIT conditions. We have already seen that the violation of the variants of LGIs can be larger than that of the standard LGIs ($(K_{3}^{3})_{Q}>(K_{3})_{Q}^{max}$). Before writing the variant of LGI in terms of NSIT conditions, we note the following interesting points.
Let us consider one of the variants of LGIs for the three-time measurement scenario, \begin{equation} \label{3t} K^3_{3}=\langle \hat{M}_{1} \hat{M}_{2} \hat{M}_{3}\rangle + \langle \hat{M}_{1} \hat{M}_{2} \rangle - \langle \hat{M}_{3} \rangle \leq {1} \end{equation} Since $\langle \hat{M}_{1} \hat{M}_{2} \hat{M}_{3}\rangle=(\langle \hat{M}_{1} \hat{M}_{2} \hat{M}_{3}\rangle)_{123}$ and $\langle \hat{M}_{1} \hat{M}_{2} \rangle=(\langle \hat{M}_{1} \hat{M}_{2} \rangle)_{123}$, the disturbance can only come from the term $\langle \hat{M}_{3} \rangle $. Intuitively, one may then expect that whenever the quantity $D_{12}(M_{3}^{m_{3}})$, defined as \begin{eqnarray} \nonumber \label{d222} D_{12}(M_{3}^{m_{3}})=P(M_{3}^{m_{3}})-\sum_{m_{1},m_{2}} P_{123}(M_{1}^{m_{1}},M_{2}^{m_{2}},M_{3}^{m_{3}})\\ \end{eqnarray} is positive, the variant of LGI given by Eq.(\ref{3t}) is violated. One may thus infer that the NSIT condition $NSIT_{(12)3}$ provides the necessary and sufficient condition for the violation of the variant of LGI. But we shall shortly see that, as in the case of the standard LGI, $NSIT_{(12)3}$ provides the necessary but not the sufficient condition. Using the approach adopted for the standard LGIs, we express the variant of LGI given by (\ref{3t}) in terms of the NSIT condition. When all three measurements are performed, the expression of $K_{3}^{3}$ in inequality (\ref{3t}) becomes \begin{eqnarray} \nonumber (K_{3}^{3})_{123}&=& \langle \hat{M}_{1}\hat{M}_{2}\hat{M}_{3}\rangle_{123} +\langle \hat{M}_{1}\hat{M}_{2}\rangle_{123}-\langle \hat{M}_{3}\rangle_{123}\\ \label{beta} &=& 1-4\beta \end{eqnarray} where $\beta =P(M_{1}^{+},M_{2}^{-},M_{3}^{+})+P(M_{1}^{-},M_{2}^{+},M_{3}^{+})$.
Using Eq.(\ref{d222}), we can write \begin{eqnarray} K_{3}^{3}-(K_{3}^{3})_{123}=\sum_{m_{3}}D_{12}(M_{3}^{m_{3}}) \end{eqnarray} Since $ K_{3}^{3}\leq 1$, using Eq.(\ref{beta}) we obtain \begin{eqnarray} \sum_{m_{3}}D_{12}(M_{3}^{m_{3}})\leq 4\beta \end{eqnarray} For the violation of the variant inequality (\ref{3time}), the following relation needs to be satisfied in QM: \begin{eqnarray} \label{dis} \sum_{m_{3}}D_{12}(M_{3}^{m_{3}}) > 4\beta \end{eqnarray} which is the variant of LGI written in terms of the $NSIT_{(12)3}$ condition. It can be seen from (\ref{dis}) that mere violation of $NSIT_{(12)3}$ does not guarantee the violation of inequality (\ref{3t}); the value of $\sum_{m_{3}}D_{12}(M_{3}^{m_{3}})$ needs to be greater than the non-zero threshold value $4\beta$. Thus, the NSIT condition is necessary for the violation of the variant of LGI but not sufficient. Using a similar argument, we can derive the condition for the violation of inequality (\ref{ntimek}) for $n$ measurements in terms of NSIT conditions as \begin{eqnarray} \label{eqdn} \sum_{m_{n}}D_{1,2..(n-1)}(M_{n}^{m_{n}}) > 4\gamma \end{eqnarray} where $\gamma$ is a sum of $2^{n-2}$ joint probabilities of the form $P(M_{1}^{m_{1}},M_{2}^{m_{2}},\ldots,M_{n}^{m_{n}})$. The quantity $D_{1,2..(n-1)}(M_{n}^{m_{n}})$ denotes the amount of disturbance caused by the $n-1$ prior measurements. Intuitively, it increases with the number of measurements and becomes maximal when the quantum value of the inequality (\ref{eqdn}) reaches its algebraic maximum. \section{Summary and Discussion} The quantum violation of the standard LGIs for a dichotomic system is restricted by the temporal Tsirelson bound, which is significantly lower than the algebraic maximum. In this paper, we note the important observation that the standard LGIs are a class of inequalities but \emph{not} the unique one. There is scope for formulating new variants of inequalities based on the assumptions of MRps and NIM.
For the simplest case of the three-time measurement scenario, we first proposed new variants of LGIs which are different from the standard LGIs. For a qubit system, we demonstrated that such macrorealist inequalities provide a larger quantum violation than the standard LGIs. By increasing the number of measurements $n$, we proposed more variants of LGIs. We found that the quantum violation of the variants of LGIs increases with $n$. Interestingly, for a sufficiently large value of $n$, the quantum violations of the variants of LGIs reach their algebraic maximum. Thus, we obtained quantum violations of LGIs up to the algebraic maximum, even for a qubit system. Further, we have compared the variants of LGIs proposed in this paper with the standard LGIs and the NSIT conditions. \section*{Acknowledgements} AKP acknowledges the support from Ramanujan Fellowship research grant (SB/S2/RJN-083/2014). MQ acknowledges the Junior Research Fellowship from SERB project (ECR/2015/00026).
\section{Introduction} Phosphorus abundance is of considerable interest in the search for life forms on exoplanets. It is the backbone element in the DNA molecule, enabling chemical bonds among the myriad nucleotides that constitute the complex double-helical structure. However, ascertaining nucleosynthesis pathways and determining the actual abundance of phosphorus (Z = 15) is challenging because its abundance is much lower than that of the low-Z $\alpha$-elements, and orders of magnitude lower than those of the other five of the six most common elements that constitute the CHONPS-based life organisms on the Earth. The photospheric solar abundances, numerically relative to hydrogen, are (Asplund \etal 2009): C (2.7 $\times 10^{-4}$), N (6.8 $\times 10^{-5}$), O (4.9 $\times 10^{-4}$), P (2.6 $\times 10^{-7}$) and S (1.3 $\times 10^{-5}$). Phosphorus abundances have been measured from mid-infrared \p3 17.9 \mum observations of late stages of stellar remnants such as planetary nebulae (Pottasch \etal 2008, Pottasch and Bernard-Salas 2008, Otsuka \etal 2011) and from ground-based observations of FGK dwarf stars (Maas \etal 2017). Whereas phosphorus, with an odd atomic number Z=15, can be synthesized during the AGB phase of low-mass stars, it is thought to be mainly produced in evolutionary stages of massive stars, before and during supernova explosions, by neutron capture on silicon. An analysis using singly ionized {\sc P\,ii} lines found a P/Fe ratio in the young core-collapse SNR Cassiopeia A up to 100 times the Milky Way average (Koo \etal 2013). Phosphorus is chemically very reactive, so its low gas-phase abundance may also be difficult to determine due to dust and grain formation. Despite its astrophysical and increasing astrobiological importance, theoretical spectral analysis is hampered by the paucity of radiative and collisional atomic data for phosphorus ions.
It is surprising that very few data are available for the important low ionization stages of P-ions in stellar and nebular sources, relative to nearly all other first- and second-row elements (viz. Pradhan and Nahar 2011). Electron impact excitation cross sections for {\sc P\,i} have been calculated in the Born approximation for excitations from the ground $3p$ up to several $n\ell$ sub-orbitals (Ganas 1998). Elaborate Dirac R-Matrix calculations for photoionization of low-lying ground and metastable levels of {\sc P\,iii} have been done over a small photon energy range, with good agreement with experimental measurements (Wang \etal 2016). Recently, sophisticated Breit-Pauli R-Matrix (BPRM) calculations have been carried out for photoionization of a large number of {\sc P\,ii} levels using an 18-level coupled channel wavefunction expansion for {\sc P\,iii} (Nahar \etal 2017a,b,c); very good agreement was found with the experimental {\sc P\,ii} photoionization cross sections measured at the Berkeley {\it Advanced Light Source}, particularly for the detailed resonance structures in the near-threshold region. These earlier works form the basis for the calculations reported in this {\it Letter}. There are no other previous calculations for collisional excitation of low ionization stages of P-ions. We also develop an atomic physics framework for astrophysical spectral diagnostics in nebular environments as a function of temperature, density and ionization equilibrium, potentially leading to more accurate abundance determination. \section{Phosphorus abundance analysis} Previous works are based on observational analysis of relative intensities of phosphorus lines compared to those of other elements.
For example, nebular abundances have been determined in the planetary nebulae (PNe) NGC 3242 and NGC 6369 from mid-IR observations of the \p3 forbidden 17.9 \mum line using the {\it Infrared Spectrograph} aboard the {\it Spitzer Space Telescope} and the {\it Short Wavelength Spectrograph} on the {\it Infrared Space Observatory} (Pottasch and Bernard-Salas 2008), and in NGC 2392 (Pottasch \etal 2008). A comparison of abundances shows phosphorus underabundance in a number of PNe: by a factor of 5 relative to solar in NGC 3242, and by more than a factor of 3 in NGC 6369. The large discrepancies are attributed to dust formation (Pottasch and Bernard-Salas 2008). However, the results are model dependent since they entail atomic parameters and ionization fractions not known to high precision. For a single observed line the ion abundance may be derived from the measured intensity ratio under certain conditions. Relative to the recombination line H$\beta$, we may write (Pottasch and Beintema 1999) \be \frac{I_{ion}}{I_{H\beta}} = N_e \frac{N_{ion}}{N_{p+}} \frac{\lambda_{H\beta}}{\lambda_{ji}} \frac{A_{ji}}{\alpha_R(H\beta)} \left( \frac{N_j}{N_{ion}} \right) \ee \noindent where $N_{ion}$ is the ionic abundance, $N_j$ is the upper level population, $A_{ji}$ is the Einstein decay rate for the transition $j \rightarrow i$, and $\alpha_R$ is the H$\beta$-recombination coefficient. The present case of \p3 is similar to the well-known {\sc C\,ii} 157 \mum line emitted via the $2s^22p \ ^2P^o_{1/2-3/2}$ transition (Blum and Pradhan 1991). Theoretically, we write the line emissivity for the \p3 FIR transition formed with a given phosphorus abundance as (Pradhan and Nahar 2011) $$ \epsilon(17.9 \mu m) = \frac{h\nu A (^2P^o_{3/2} - ^2P^o_{1/2})}{4\pi} \times \frac{N(^2P^o_{3/2})}{\sum_i N_i({\sc P\,III})} \times \frac{n({\sc P\,III})}{n(P)}$$ \be \times \frac{n(P)}{n(H)} \times n(H) \ \ ergs/cm^3/sec. \ee \noindent The sum in the denominator on the RHS of Eq.
(2) refers to all levels included in the atomic model. Calculating the level populations requires rate coefficients for the contributing atomic processes, which may include recombination cascades, electron impact excitation and fluorescent excitation by an external radiation field. In addition, Eq. (2) also depends on the ionization balance of the states existing in the plasma, as well as on the elemental abundance itself -- the quantity to be determined. However, if the two closely-spaced levels are effectively de-coupled from other levels in the ion, then a simple expression gives the emissivity in terms of only the electron impact excitation rate coefficient and the transition energy $h\nu$, \be \epsilon (^2P^o_{3/2}-^2P^o_{1/2}) = N_e N_{ion} q(^2P^o_{3/2}-^2P^o_{1/2}) h\nu / 4\pi. \ee \noindent Eq. (3) implicitly assumes that all excitations to the upper level $^2P^o_{3/2}$ are followed by downward decay to the ground state $^2P^o_{1/2}$, leaving the temperature-dependent electron impact excitation rate coefficient $q$ as the only important quantity to be calculated. That, in turn, is related to the Maxwellian-averaged effective collision strength $\Upsilon_{ij}(T_e)$ as \be q_{ij}(T_e) = \frac{g_j}{g_i} q_{ji} e^{-E_{ij}/kT_e} = \frac{8.63 \times 10^{-6}}{g_i T_e^{1/2}} \Upsilon(T_e) e^{-E_{ij}/kT_e}, \ee \noindent and \be \Upsilon_{ij}(T_e) = \int_0^{\infty} \Omega_{ij} (E) \exp(-E/kT_e) d(E/kT_e), \ee \noindent where $E_{ij}$ is the energy difference and $\Omega_{ij}$ is the collision strength for the transition $i \rightarrow j$. The exponentially decaying Maxwellian factor implies that at low temperatures only the very low-energy $\Omega_{ij}(E)$ determines $\Upsilon(T_e)$. Furthermore, the detailed $\Omega(E)$ is generally a highly energy-dependent function due to autoionizing resonances, which leads to temperature sensitivity in the rate coefficient $q(T_e)$ via $\Upsilon(T_e)$, as in Eqs. (4-5).
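For orientation, Eqs. (3)-(5) can be turned into a quick numerical estimate. Using the transition energy $E_{ij}=0.0051$ Ry and the statistical weight $g_i=2$ of the $^2P^o_{1/2}$ lower level, together with an effective collision strength $\Upsilon \approx 2.0$ (the peak value found below in Sec. 4), the sketch that follows (our own back-of-the-envelope numbers, not a substitute for the full BPRM calculation) evaluates the excitation rate coefficient at $T_e = 10^4$ K and the Boltzmann suppression of the UV transitions at $E > 0.68$ Ry:

```python
import numpy as np

RY_IN_EV = 13.6057           # 1 Rydberg in eV
K_B_EV = 8.61733e-5          # Boltzmann constant in eV/K

def rate_coefficient(upsilon, g_lower, e_ij_ry, t_e):
    """Excitation rate coefficient q_ij (cm^3 s^-1) from Eq. (4):
    q_ij = 8.63e-6 / (g_i sqrt(T_e)) * Upsilon * exp(-E_ij / kT_e)."""
    boltz = np.exp(-e_ij_ry * RY_IN_EV / (K_B_EV * t_e))
    return 8.63e-6 / (g_lower * np.sqrt(t_e)) * upsilon * boltz

# 17.9 micron [P III] fine-structure transition at T_e = 1e4 K
q_fir = rate_coefficient(upsilon=2.0, g_lower=2, e_ij_ry=0.0051, t_e=1e4)

# Boltzmann suppression of the UV transitions (E > 0.68 Ry) at the same T_e
suppression = np.exp(-0.68 * RY_IN_EV / (K_B_EV * 1e4))

print(q_fir)        # ~8e-8 cm^3 s^-1
print(suppression)  # ~2e-5: UV excitation negligible, FIR transition decoupled
```

The suppression factor of order $10^{-5}$ makes quantitative the statement that the FIR transition is effectively de-coupled from the dipole-allowed UV transitions at nebular temperatures.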
\section{Theory and computations} A brief theoretical description of the calculations is given below. In particular, we describe the relativistic effects and the representation of the \eion system. \subsection{Relativistic fine structure} The relativistic Hamiltonian (in Rydberg units) in the Breit-Pauli R-matrix (BPRM) approximation is given by \begin{equation} \begin{array}{l} H_{N+1}^{\rm BP} = \\ \sum_{i=1}\sp{N+1}\left\{-\nabla_i\sp 2 - \frac{2Z}{r_i} + \sum_{j>i}\sp{N+1} \frac{2}{r_{ij}}\right\}+H_{N+1}^{\rm mass} + H_{N+1}^{\rm Dar} + H_{N+1}^{\rm so}. \end{array} \end{equation} where the last three terms are the one-body relativistic corrections of the Breit interaction, respectively: \begin{equation} \begin{array}{l} {\rm the~mass~correction~term},~H^{\rm mass} = -{\alpha^2\over 4}\sum_i{p_i^4},\\ {\rm the~Darwin~term},~H^{\rm Dar} = {Z\alpha^2 \over 4}\sum_i{\nabla^2({1 \over r_i})}, \\ {\rm the~spin-orbit~interaction~term},~H^{\rm so}= Z\alpha^2 \sum_i{1\over r_i^3} {\bf l_i.s_i}. \end{array} \end{equation} \subsection{Wavefunction representation and calculations} Based on the coupled channel approximation, the R-matrix method (Burke 2011) entails a wavefunction expansion of the \eion system in terms of the eigenfunctions of the target ion. In the present case we are interested in the low-lying FIR transition within the ground configuration $3s^23p$ and the next excited configuration $3s3p^2$. Therefore we confine ourselves to an accurate wavefunction representation for the first 18 levels dominated by the {\it spectroscopic} configurations $[1s^2,2s^2,2p^6] 3s^23p (^2P^o_{1/2,3/2}), 3s3p^2 (^4P_{1/2,3/2,5/2}, ^2D_{3/2,5/2}, ^2S_{1/2}, \\ ^2P_{1/2,3/2}), 3s^23d (^2D_{3/2,5/2}), 3s^24s (^2S_{1/2}), 3s^24p (^2P^o_{1/2,3/2}), \\ 3p^3 (^4S^o_{3/2},^2D^o_{3/2,5/2},^2P^o_{1/2,3/2})$. The atomic structure calculations using the code SUPERSTRUCTURE (Eissner \etal 1974) and the BPRM calculations are described in Nahar \etal (2017a).
The calculated and experimentally observed energies generally agree to within 5\% for all levels; the relatively small and sensitive fine structure splitting differs by 15\% (Nahar \etal 2017a). Fig.~1 presents the Grotrian energy level diagram of {\sc P\,iii}. As noted above, the ground state $3s^23p \ ^2P^o_{1/2,3/2}$ fine structure is well separated, by about 0.6 Ry or 7 eV, from the next excited $3s3p^2$ configuration terms and levels (Nahar \etal 2017a; see Pradhan and Nahar (2011) for a general description of atomic processes and calculations). By comparison, in {\sc C\,ii} it is less than 0.4 Ry. The approximation in Eq.~(3) has been utilized, assuming that the Boltzmann factor $\exp(-E_{ij}/kT)$ in Eq.~(4) effectively de-couples the electron impact excitation of the forbidden FIR transition from higher levels of the ion. For example, at $T_e = 10^4$K we have $\exp(-E/kT) \approx \exp(-16E)$ (with $E$ in Ry), and the value of $q(T_e)$ for allowed UV transitions is orders of magnitude lower compared to the FIR transition. Even though the calculated and experimental energies are close, a small difference in resonance positions relative to threshold can introduce a significant uncertainty in the effective collision strengths. The observed energies were therefore substituted for the theoretical ones in order to reproduce the threshold energies more accurately. This is of particular importance for excitation at low temperatures dominated by near-threshold resonances. \begin{figure} \centering \includegraphics[width=\columnwidth,keepaspectratio]{f1.eps} \caption{Energy diagram of {\sc P\,iii} showing the ground $3s^23p$ and the first excited configuration $3s3p^2$ levels. The energy separation of the ground state fine structure $^2P^o_{1/2,3/2}$ transition at 17.9 \mum is 0.0051 Ry; it is well separated from the dipole allowed UV transitions of the $^2P^o_{1/2,3/2} - ^2D, ^2S, ^2P$ multiplets between $915-1345$ \AA\ with E $>$ 0.68 Ry. 
\label{energies}} \end{figure} The BPRM collision strengths were computed up to 5 times the energy of the highest level in the atomic calculations, $3p^3 (^2P^o_{3/2})$ at 1.45 Ry. Particular care is taken to test and ensure convergence of collision strengths with respect to partial waves and energy resolution. Total \eion symmetries up to (LS)J$\pi$ with J $\leq 19.5$ were included in the calculations, though it was found that the collision strengths for forbidden transitions converged for J $\leq 7$. An energy mesh of $\Delta E \sim 10^{-5}$ Rydbergs was used to resolve the near-threshold resonances. The resonances were delineated in detail prior to averaging over the Maxwellian distribution. \section{Results} We describe the two main sets of results for the FIR and the UV transitions, as well as the diagnostic lines. Collision strengths have been computed for all 153 transitions among the 18 {\sc P\,iii} levels. Selected results are presented below; no previous data are available for comparison. \subsection{The forbidden 17.9 \mum transition} The calculated fine structure collision strength is shown in Fig.~2a, which exhibits considerable autoionizing resonance structures and energy dependence throughout the range up to the highest level of the {\sc P\,iii} ion included in the BPRM wavefunction expansion, $E(3p^3 \ ^2P^o_{3/2})$ = 1.45 Ry. The fall-off for E $>$ 1.0 Ry indicates that the collision strength is much lower at higher energies, and has converged for this forbidden transition. One particularly noteworthy feature is that the 17.9 \mum FIR transition is very strong, with large collision strengths and resonances just above the excitation threshold at E $\approx$ 0.1 Ry. That yields a maximum effective collision strength $\Upsilon(T_e) > 2.0$ between $10^3-10^4$K; by comparison the strong 157 \mum transition in {\sc C\,ii} has a value of $\sim$1.6 (Blum and Pradhan 1991). Consequently, the excitation rate coefficient and emissivity (Eqs. 
3, 4) would indicate strong observable intensity relative to FIR lines from other elements (viz. Pottasch and Bernard-Salas 2008). In addition, the energy dependence of $\Omega(E)$ in Fig.~2a leads to a variation of more than a factor of 3 in $\Upsilon(T_e)$ in Fig.~2b. Therefore, the intensity of the line is a sensitive indicator of temperature in the typical nebular range of $10^2 - 10^5$K, encompassing spectral formation in important sources such as PNe and SNRs. \begin{figure} \centering \includegraphics[width=\columnwidth,keepaspectratio]{f2a.eps} \includegraphics[width=\columnwidth,keepaspectratio]{f2b.eps} \caption{(a) Collision strength for the 17.9 \mum \p3 IR fine structure transition. High resolution at near-threshold energies is necessary for accuracy in rate coefficients at low temperatures. (b) Maxwellian averaged effective collision strengths $\Upsilon(T_e)$ (Eq. 5). There is a factor of 3 or more variation, broadly peaking at typical nebular temperatures $T_e > 10^3$K. \label{fir}} \end{figure} \subsection{Allowed UV transitions} There are a number of intercombination and dipole allowed E1 transitions between the odd parity ground state fine structure levels $3s^23p (^2P^o_{1/2-3/2})$ and the even parity $3s3p^2 (^4P_{1/2,3/2,5/2},^2D_{3/2,5/2}, ^2S_{1/2}, ^2P_{1/2,3/2})$ levels. However, the laboratory and theoretical radiative data for measured wavelengths and Einstein A-values available from the National Institute of Standards and Technology include only the three transitions in the $^2P^o_{1/2,3/2} - ^2D_{3/2,5/2}$ multiplet. Fig.~3a presents sample collision strengths for fine structure components of dipole transitions in the three allowed multiplets. The BPRM calculations again show resonance structures below the highest target ion threshold at 1.45 Ry due to low-$\ell$ partial waves included in the calculations, with $\ell_o \approx 10$. 
As these are E1 transitions, the collision strengths rise with increasing energy owing to divergent higher partial wave contributions $\ell > \ell_o$. The general form in the high energy region may be approximated by the Bethe formula $\Omega \sim a\,\ln E$, where $a$ is related to the dipole oscillator strength, treating high-$\ell$ collisions as radiatively induced (Pradhan and Nahar 2011). We therefore match the BPRM collision strengths at 1.45 Ry to the Bethe expression. While there may be some uncertainty in the vicinity of this energy region, the overall behaviour of the collision strengths in Fig.~3a appears to be correct (cf. Blum and Pradhan (1991) for {\sc C\,ii} collision strengths for similar transitions). The effective collision strengths $\Upsilon(T_e)$ in Fig.~3b show the expected rising behaviour with temperature, typical of allowed transitions. \begin{figure} \centering \includegraphics[width=\columnwidth,keepaspectratio]{f3a.eps} \includegraphics[width=\columnwidth,keepaspectratio]{f3b.eps} \caption{(a) Collision strengths $\Omega(E)$ for sample UV fine structure transitions from the ground level $^2P^o_{1/2} \rightarrow ^2D_{3/2}, ^2S_{1/2}, ^2P_{3/2}$. For E $>$ 1.5 Ry the Coulomb-Bethe form $\Omega(E) \sim \ln E$ is employed, typical of dipole allowed transitions at high energies and partial waves. (b) Maxwellian averaged effective collision strengths $\Upsilon(T_e)$ for the transitions in (a). \label{uv}} \end{figure} \subsection{Maxwellian averaged collision strengths} In Table~1 we present the effective collision strengths (Eq.~5) for the four FIR and UV transitions reported herein. The tabulation is carried out at a range of temperatures typical of nebular environments, $10^2-10^5$K. 
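The Maxwellian average of Eq.~(5) underlying Table~1 is a one-dimensional integral over $x = E/kT_e$. A minimal numerical sketch (with an invented, purely illustrative $\Omega(E)$, not the BPRM data) shows how a near-threshold resonance enhances $\Upsilon$ at low temperatures, as discussed above for the 17.9 \mum transition:

```python
import numpy as np

RY_IN_K = 157888.0  # 1 Rydberg expressed in Kelvin (approximate)

def upsilon(omega, T_e, xmax=50.0, n=200001):
    """Effective collision strength, Eq. (5):
    Upsilon(T) = int_0^inf Omega(E) exp(-E/kT) d(E/kT), with E in Ry."""
    x = np.linspace(0.0, xmax, n)   # x = E/kT
    kT = T_e / RY_IN_K              # kT in Ry
    f = omega(x * kT) * np.exp(-x)
    return float(np.sum((f[1:] + f[:-1]) * 0.5 * (x[1] - x[0])))  # trapezoid

# Sanity check: a constant Omega averages to itself.
flat = lambda E: 2.0 * np.ones_like(E)

# Illustrative near-threshold resonance (invented parameters) on a flat
# background; the Maxwellian factor weights it heavily at low T_e.
res = lambda E: 0.5 + 3.0 * np.exp(-((E - 0.02) / 0.01) ** 2)
```

With `flat`, `upsilon` returns 2.0 to high accuracy; with `res`, the value at $10^3$K exceeds that at $10^5$K, mirroring the low-temperature sensitivity to near-threshold resonances.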
It is striking how much stronger the forbidden FIR 17.9 \mum transition is than the allowed UV transitions; it dominates collisional excitation to all other levels by up to two orders of magnitude for temperatures between 100-10,000 K, although the values become comparable towards higher temperatures as shown in Fig.~3b. That further numerically supports the approximation that the FIR line intensity may be little affected by excitation to higher levels. \begin{table*} \begin{minipage}{188mm} \caption{Effective Maxwellian averaged collision strengths for FIR and UV transitions in {\sc P\,iii}} \begin{tabular}{cccccccccc} \hline $LogT$(K) & $^2P^o$ & $^2P^o-^2D$ & $^2P^o-^2S$ &$^2P^o-^2P$ & $LogT$(K) & $^2P^o$ & $^2P^o-^2D$ & $^2P^o-^2S$ &$^2P^o-^2P$\\ J-J' & 1/2-3/2 & 1/2-3/2 & 1/2-1/2 & 1/2-3/2 & J-J' & 1/2-3/2 & 1/2-3/2 & 1/2-1/2 & 1/2-3/2\\ $\lambda$ & 17.9 $ \mu$m & 1334.8 $\AA$ & 998.6 $\AA$ & 914.5 $\AA$ & $\lambda$ & 17.9 $ \mu$m & 1334.8 $\AA$ & 998.6 $\AA$ & 914.5 $\AA$ \\ \hline 2.0& 7.73(-1)& 1.20(-3)& 1.18(-3)& 1.13(-3)& 3.6& 2.19& 4.76(-2)& 4.68(-2)& 4.51(-2)\\ 2.1& 9.45(-1)& 1.50(-3)& 1.48(-3)& 1.43(-3)& 3.7& 2.16& 5.99(-2)& 5.89(-2)& 5.67(-2)\\ 2.2& 1.13& 1.90(-3)& 1.86(-3)& 1.79(-3)& 3.8& 2.10& 7.54(-2)& 7.41(-2)& 7.14(-2)\\ 2.3& 1.33& 2.39(-3)& 2.34(-3)& 2.26(-3)& 3.9& 2.04& 9.50(-2)& 9.33(-2)& 8.99(-2)\\ 2.4& 1.52& 3.00(-3)& 2.95(-3)& 2.84(-3)& 4.0& 1.97& 1.20(-1)& 1.18(-1)& 1.13(-1)\\ 2.5& 1.70& 3.78(-3)& 3.72(-3)& 3.58(-3)& 4.1& 1.90& 1.51(-1)& 1.48(-1)& 1.43(-1)\\ 2.6& 1.85& 4.76(-3)& 4.68(-3)& 4.51(-3)& 4.2& 1.83& 1.90(-1)& 1.86(-1)& 1.79(-1)\\ 2.7& 1.98& 5.99(-3)& 5.89(-3)& 5.67(-3)& 4.3& 1.77& 2.39(-1)& 2.34(-1)& 2.26(-1)\\ 2.8& 2.08& 7.54(-3)& 7.41(-3)& 7.14(-3)& 4.4& 1.73& 3.00(-1)& 2.93(-1)& 2.84(-1)\\ 2.9& 2.16& 9.50(-3)& 9.33(-3)& 8.99(-3)& 4.5& 1.69& 3.75(-1)& 3.63(-1)& 3.56(-1)\\ 3.0& 2.20& 1.20(-2)& 1.18(-2)& 1.13(-2)& 4.6& 1.65& 4.63(-1)& 4.43(-1)& 4.45(-1) \\ 3.1& 2.23& 1.51(-2)& 1.48(-2)& 1.43(-2)& 4.7& 1.60& 5.62(-1)& 5.32(-1)& 
5.55(-1) \\ 3.2& 2.24& 1.90(-2)& 1.86(-2)& 1.79(-2)& 4.8& 1.55& 6.70(-1)& 6.32(-1)& 6.91(-1) \\ 3.3& 2.25& 2.39(-2)& 2.34(-2)& 2.26(-2)& 4.9& 1.47& 7.87(-1)& 7.45(-1) & 8.64(-1) \\ 3.4& 2.24& 3.00(-2)& 2.95(-2)& 2.84(-2)& 5.0& 1.39& 9.19(-1)& 8.82(-1) & 1.09 \\ 3.5& 2.22& 3.78(-2)& 3.72(-2)& 3.58(-2)&&&&\\ \hline \end{tabular} \end{minipage} \end{table*} \subsection{Discussion} The results reported herein should enable spectral diagnostics of both the \p3 forbidden 17.9 \mum line and the UV transitions with a practically complete 18-level collisional-radiative atomic model. The FIR and UV lines cannot be observed with the same spectrometer, and their spectral formation may be governed by different physical conditions, as well as subject to extinction that is highly wavelength dependent and would differentially attenuate line intensities. Some temperature dependence may be deduced from the energy behaviour inherent in the collision strength data presented, and the derived line emissivities. Based on extensive benchmarking of R-matrix data with experiments, we estimate the accuracy of the effective collision strengths to be 10-20\%. We may calculate the nebular phosphorus abundance as outlined above (Eqs. 1-5), based on the \p3 17.9 \mum line intensity ratio relative to H$\beta$. We assume a temperature $10^4$K, density $10^4 \ cm^{-3}$, transition energy $h\nu$ = 0.069 eV, solar P-abundance, and ionic ratio P~III/P = 0.33. Using $\Upsilon(10^4K)$ from Table 1, the rate coefficient $q = 7.77 \times 10^{-8} \ cm^{3}/sec$ and $(4 \pi / N_p N_e) \epsilon(17.9 \mu m) = 7.33 \times 10^{-28} \ ergs \ cm^3/sec$. Nebular recombination H$\beta$ line intensities $(4\pi/N_pN_e) j(H\beta)$ are: $8.3 \times 10^{-26}$ (Case A) and $1.24 \times 10^{-25} \ ergs \ cm^3/sec$ (Case B). Therefore, $\epsilon(17.9\mu m)/H\beta \ = 8.8 \times 10^{-3}$ (Case A) and $5.9 \times 10^{-3}$ (Case B) respectively. 
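The abundance arithmetic above can be reproduced step by step. In the sketch below the solar P/H value of $2.6\times10^{-7}$ is an assumed round number (the other inputs are as quoted in the text); the resulting ratios agree with the quoted Case A and Case B values to within a few per cent, and the same rate coefficient recovers the critical density $\simeq 2\times10^4$ cm$^{-3}$ at which collisional excitation balances the M1 radiative decay.

```python
import math

K_BOLTZ_EV = 8.617333e-5      # Boltzmann constant, eV/K
ERG_PER_EV = 1.602177e-12

# Inputs quoted in the text; the solar P/H abundance is an assumed
# round value (~2.6e-7), and P III / P = 0.33 is the adopted ionic fraction.
T_e, ups, E_eV, g_i = 1.0e4, 1.97, 0.069, 2
abund_P, f_ion = 2.6e-7, 0.33

q = (8.63e-6 / (g_i * math.sqrt(T_e)) * ups
     * math.exp(-E_eV / (K_BOLTZ_EV * T_e)))          # cm^3/s
eps = q * E_eV * ERG_PER_EV * abund_P * f_ion          # (4pi/NpNe) emissivity

ratio_A = eps / 8.3e-26       # Case A H-beta; ~8.8e-3 quoted above
ratio_B = eps / 1.24e-25      # Case B H-beta; ~5.9e-3 quoted above

# Critical density: collisional excitation rate equals the M1 decay rate.
A_M1 = 1.57e-3                # s^-1 (NIST)
n_crit = A_M1 / q             # ~2.0e4 cm^-3
```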
These $\epsilon(17.9\mu m)/H\beta$ line ratios lie in the range observed in several PNe, but the P-abundances heretofore derived are a factor of 3-4 lower than solar (viz. Pottasch and Bernard-Salas 2008); the present work may yield higher abundances. Further refinements can be made by considering additional atomic processes such as level-specific (e~+~P~IV) $\rightarrow$ {\sc P\,iii} recombination-cascades, and fluorescent excitation from an external radiation field such as in PNe central stars with $T_{rad} \approx 80,000-120,000$K. A more elaborate calculation can be done using Eq.~(2) that would combine the collisional-radiative model with a photoionization model that describes P-ionization states more accurately than, say, the {\sc P\,iii}/P value of 0.33 assumed above. However, these improvements would require extensive new atomic calculations for photoionization and \eion recombination (e.g. Nahar \etal 2017). An interesting possibility is that of laser action in the 17.9 \mum line, similar to that explored for the {\sc C\,ii} 157 \mum transition (Peng and Pradhan 1994). Population inversion may occur owing to the extremely small magnetic dipole (M1) radiative decay rate A($^2P^o_{3/2}-^2P^o_{1/2}$) = 1.57$\times 10^{-3}/sec$ (NIST {\it Atomic Spectral Database}: www.nist.gov). Equating \ne $q$ = A, we obtain \ne = 2.0 $\times 10^4$ cm$^{-3}$. Therefore, at electron densities \ne $> 10^4$ cm$^{-3}$, electron impact excitation exceeds spontaneous decay, and population inversion and laser emission may occur in higher density SNRs or other sources. \subsection{Conclusion} Accurate collision strengths including fine structure with relativistic effects are computed for diagnostics of the \p3 forbidden FIR and allowed UV lines, to enable a more precise re-examination of the phosphorus abundance. The results show significant temperature dependence that should provide additional information on the physical environment and spectral formation. 
In particular, this work suggests searches for the \p3 FIR line using {\it Spitzer} IRS data and abundance determination. Further work is in progress on photoionization and collisional excitation of P-ions relevant to this investigation. All data are available from the authors and archived in S. N. Nahar's database NORAD at: www.astronomy.ohio-state.edu$\sim$nahar, and TIPTOPBase at the Opacity Project/Iron Project webpage: http://cdsweb.u-strasbg.fr/OP.htx. \section*{Acknowledgments} The computational work was carried out at the Ohio Supercomputer Center in Columbus Ohio. This work was partially supported by the Astronomy Division of the U.S. National Science Foundation (SNN and AKP), and from the Indo-US Science and Technology Forum and Science and Engineering Research Board, Government of India (RN). \label{lastpage} \frenchspacing \def\aa{{\it Astron. Astrophys.}\ } \def\aasup{{\it Astron. Astrophys. Suppl. Ser.}\ } \def\adndt{{\it Atom. data and Nucl. Data Tables.}\ } \def\aj{{\it Astron. J.}\ } \def\apj{{\it Astrophys. J.}\ } \def\apjs{{\it Astrophys. J. Supp. Ser.}\ } \def\apjl{{\it Astrophys. J. Lett.}\ } \def\baas{{\it Bull. Amer. Astron. Soc.}\ } \def\cpc{{\it Comput. Phys. Commun.}\ } \def\jpb{{\it J. Phys. B}\ } \def\jqsrt{{\it J. Quant. Spectrosc. Radiat. Transfer}\ } \def\mn{{\it Mon. Not. R. astr. Soc.}\ } \def\pasp{{\it Pub. Astron. Soc. Pacific}\ } \def\pra{{\it Phys. Rev. A}\ } \def\pr{{\it Phys. Rev.}\ } \def\prl{{\it Phys. Rev. Lett.}\ }
\section{Introduction} It often happens that some infinite groups exhibit a nice and simple behavior on the level of their finite subgroups. An amazing example of such a situation is given by the following result due to C.\,Jordan (see~\cite[Theorem~36.13]{Curtis-Reiner-1962}). \begin{theorem} \label{theorem:Jordan} There is a constant~\mbox{$J=J(n)$} such that for every finite subgroup~\mbox{$G\subset\operatorname{GL}_n(\mathbb{C})$} there exists a normal abelian subgroup~\mbox{$A\subset G$} of index at most $J$. \end{theorem} This motivates the following definition. \begin{definition}[{see \cite[Definition~2.1]{Popov2011}}] \label{definition:Jordan} A group~$\Gamma$ is called \emph{Jordan} (alternatively, we say that~$\Gamma$ \emph{has Jordan property}) if there is a constant~$J$ such that for every finite subgroup~\mbox{$G\subset\Gamma$} there exists a normal abelian subgroup $A\subset G$ of index at most~$J$. \end{definition} In other words, Theorem~\ref{theorem:Jordan} says that the group $\operatorname{GL}_n(\mathbb{C})$ is Jordan. The same applies to any linear algebraic group, since it can be realized as a subgroup of a general linear group. It was noticed by J.-P.\,Serre that Jordan property sometimes holds for groups of birational automorphisms. \begin{theorem}[{\cite[Theorem~5.3]{Serre2009}, \cite[Th\'eor\`eme~3.1]{Serre-2008-2009}}] \label{theorem:Serre} The group of birational automorphisms of $\mathbb{P}^2$ over the field $\mathbb{C}$ (or any other field of characteristic $0$) is Jordan. \end{theorem} Yu.\,Zarhin pointed out in~\cite{Zarhin10} that there are projective complex surfaces whose birational automorphism groups are not Jordan; they are birational to products $E\times\mathbb{P}^1$, where $E$ is an elliptic curve. The following result of V.\,Popov classifies projective surfaces with non-Jordan birational automorphism groups. 
\begin{theorem}[{\cite[Theorem~2.32]{Popov2011}}] \label{theorem:Popov} Let $X$ be a projective surface over $\mathbb{C}$. Then the group of birational automorphisms of $X$ is not Jordan if and only if~$X$ is birational to~\mbox{$E\times\mathbb{P}^1$}, where $E$ is an elliptic curve. \end{theorem} Automorphism groups having Jordan property were studied recently in many different contexts. Yu.\,Prokhorov and C.\,Shramov in \cite[Theorem~1.8]{ProkhorovShramov-RC} and \cite[Theorem~1.8]{Prokhorov-Shramov-2013} proved that this property holds for groups of birational selfmaps of rationally connected algebraic varieties, and some other algebraic varieties of arbitrary dimension. Actually, their results were initially obtained modulo a conjectural boundedness of terminal Fano varieties (see e.\,g.~\cite[Conjecture~1.7]{ProkhorovShramov-RC}), which was recently proved by C.\,Birkar in~\cite[Theorem~1.1]{Birkar}. Also Yu.\,Prokhorov and C.\,Shramov classified Jordan birational automorphism groups of algebraic threefolds in~\cite{ProkhorovShramov-dim3}. Some results about birational automorphisms of conic bundles were obtained by T.\,Bandman and Yu.\,Zarhin in~\cite{BandmanZarhin2015a}. For other results on Jordan birational automorphism groups see \cite{Prokhorov-Shramov-JCr3}, \cite{Prokhorov-Shramov-p-groups}, and \cite{Yasinsky2016a}. S.\,Meng and D.-Q.\,Zhang proved in \cite{MengZhang} that an automorphism group of any projective variety is Jordan. T.\,Bandman and Yu.\,Zarhin proved a similar result for automorphism groups of quasi-projective surfaces in~\cite{BandmanZarhin2015}, and also in some particular cases in arbitrary dimension in~\cite{BandmanZarhin2017}. For a survey of some other relevant results see~\cite{Popov-Jordan}. \'E.\,Ghys asked (following a more particular question posed earlier by W.\,Feit) whether a diffeomorphism group of a smooth compact manifold is always Jordan. 
Recently B.\,Csik\'os, L.\,Pyber, and E.\,Szab\'o in \cite{CsikosPyberSzabo} provided a counterexample following the method of~\cite{Zarhin10}; see also \cite{Riera-LieGroups} for a further development of this method, and \cite[Corollary~2]{Popov-Diff} for a non-compact counterexample. However, Jordan property holds for diffeomorphism groups in many cases; see \cite{Riera2016}, \cite{Riera-Spheres}, \cite{TurullRiera2015}, \cite{Riera-OddCohomology}, \cite{GuazziZimmermann}, \cite{Zimmermann-Survey}, \cite{Zimmermann-ConnectedSum}, \cite{Zimmermann2014}, \cite{Zimmermann-Isometries}, and references therein. Also there are results for groups of symplectomorphisms, see \cite{Riera-Symp} and \cite{Riera-HamSymp}. The goal of this paper is to generalize Theorem~\ref{theorem:Popov}, and to some extent the result of~\cite{MengZhang}, to a different setting, namely, to the case of compact complex surfaces (see~\S\ref{section:minimal-surfaces} below for basic definitions and background). We prove the following. \begin{theorem}\label{theorem:Aut} Let $X$ be a connected compact complex surface. Then the automorphism group of $X$ is Jordan. \end{theorem} One can also show (see~\cite[Theorem~1.3]{Riera-OddCohomology} or Theorem~\ref{theorem:Riera-generators} below) that the number of generators of any finite subgroup of an automorphism group of a compact complex surface~$X$, and actually of a diffeomorphism group of an arbitrary compact manifold, is bounded by a constant that depends only on~$X$. The main result of this paper is as follows. \begin{theorem} \label{theorem:main} Let $X$ be a connected compact complex surface. Then the group of birational automorphisms of $X$ is not Jordan if and only if~$X$ is birational to~\mbox{$E\times\mathbb{P}^1$}, where~$E$ is an elliptic curve. Moreover, there always exists a constant~\mbox{$R=R(X)$} such that every finite subgroup of the birational automorphism group of~$X$ is generated by at most~$R$ elements. 
\end{theorem} The plan of the paper is as follows. In \S\ref{section:Jordan} we collect some elementary facts about Jordan property, and other boundedness properties for subgroups. In \S\ref{section:minimal-surfaces} we recall the basic facts from the theory of compact complex surfaces, most importantly their Enriques--Kodaira classification. In \S\ref{section:automorphisms} we recall some important general facts concerning automorphisms of complex spaces. In \S\ref{section:class-VII} we study automorphism groups of non-projective surfaces with non-zero topological Euler characteristic; an important subclass of such surfaces is formed by minimal surfaces of class~VII with non-zero second Betti number (which are still not completely classified). In~\S\ref{section:Hopf} and~\S\ref{section:Inoue} we study automorphism groups of Hopf and Inoue surfaces, respectively; these are all possible minimal surfaces of class~VII with trivial second Betti number. In~\S\ref{section:Kodaira} we study automorphism groups of Kodaira surfaces. In \S\ref{section:non-negative} we study automorphism groups of other minimal surfaces of non-negative Kodaira dimension, and prove Theorems~\ref{theorem:Aut} and~\ref{theorem:main}. Finally, in Appendix~\ref{section:appendix} we collect some auxiliary group-theoretic results about infinite discrete groups and their automorphisms that we use in~\S\ref{section:Inoue} and~\S\ref{section:Kodaira}. Our general strategy is to consider the compact complex surfaces according to the Enriques--Kodaira classification. One feature of our proof that we find interesting to mention is that Inoue and Kodaira surfaces are treated by literally the same method, which is based on the fact that they are (diffeomorphic to) solvmanifolds (cf.~\cite[Theorem~1]{Hasegawa}), and for which we have never met a proper analog in the projective situation. It is possible that one can generalize this approach to higher dimensional solvmanifolds. 
Note also that some of our theorems follow from more general results of I.\,Mundet i Riera, cf. Theorems~\ref{theorem:class-VII} and~\ref{theorem:Riera} (and also the discussion in the end of~\S\ref{section:class-VII}). We are grateful to M.\,Brion, M.\,Finkelberg, S.\,Gorchinskiy, S.\,Nemirovski, D.\,Osipov, and M.\,Verbitsky for useful discussions. The final version of the paper was prepared during the authors' stay in the Max Planck Institute for Mathematics, Bonn. The authors thank this institute for hospitality and support. \section{Jordan property} \label{section:Jordan} In this section we recall some group-theoretic properties related to the Jordan property, and prove a couple of auxiliary results about them. \begin{definition} We say that a group $\Gamma$ \emph{has bounded finite subgroups} if there exists a constant $B=B(\Gamma)$ such that for any finite subgroup $G\subset\Gamma$ one has $|G|\leqslant B$. \end{definition} The following result is due to H.\,Minkowski (see e.\,g.~\cite[Theorem~5]{Serre2007} and~\mbox{\cite[\S4.3]{Serre2007}}). \begin{theorem} \label{theorem:Minkowski} For every $n$ the group $\operatorname{GL}_n(\mathbb{Q})$ has bounded finite subgroups. \end{theorem} \begin{definition}\label{definition:strongly-Jordan} We say that a group~$\Gamma$ is \emph{strongly Jordan} if it is Jordan, and there exists a constant $R=R(\Gamma)$ such that every finite subgroup in $\Gamma$ is generated by at most $R$ elements. \end{definition} Note that Definition~\ref{definition:strongly-Jordan} is equivalent to a similar definition in~\cite{BandmanZarhin2015}. An example of a strongly Jordan group is given by~$\operatorname{GL}_n(\mathbb{C})$. This follows from the fact that every finite abelian subgroup of $\operatorname{GL}_n(\mathbb{C})$ is conjugate to a group that consists of diagonal matrices. Note however that even the group $\mathbb{C}^*$ contains \emph{infinite} abelian subgroups of arbitrarily large rank. 
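Minkowski's theorem can be illustrated concretely for $n=2$: the finite subgroups of $\operatorname{GL}_2(\mathbb{Q})$ have order at most $12$, attained by a dihedral group realized already in $\operatorname{GL}_2(\mathbb{Z})$. The following short computation (an illustration only, with standard generators chosen by us) recovers this group by closing a generating set under matrix multiplication:

```python
# A has order 6 (characteristic polynomial x^2 - x + 1), B is an order-2
# reflection with B*A*B = A^(-1), so <A, B> is dihedral of order 12.
A = ((0, -1), (1, 1))
B = ((0, 1), (1, 0))

def mul(X, Y):
    """Multiply two 2x2 integer matrices stored as nested tuples."""
    return tuple(tuple(sum(X[i][k] * Y[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def closure(gens):
    """Close a finite set of matrices under multiplication."""
    elems, frontier = set(gens), set(gens)
    while frontier:
        new = {mul(x, y) for x in frontier for y in elems}
        new |= {mul(x, y) for x in elems for y in frontier}
        frontier = new - elems
        elems |= frontier
    return elems

G = closure({A, B})   # a group of order 12, the maximum for n = 2
```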
The following elementary result will be useful to study Jordan property. \begin{lemma} \label{lemma:group-theory} Let $$ 1\longrightarrow\Gamma'\longrightarrow\Gamma\longrightarrow\Gamma'' $$ be an exact sequence of groups. Then the following assertions hold. \begin{itemize} \item[(i)] If $\Gamma'$ is Jordan (respectively, strongly Jordan) and $\Gamma''$ has bounded finite subgroups, then~$\Gamma$ is Jordan (respectively, strongly Jordan). \item[(ii)] If $\Gamma'$ has bounded finite subgroups and $\Gamma''$ is strongly Jordan, then $\Gamma$ is strongly Jordan. \end{itemize} \end{lemma} \begin{proof} Assertion~(i) is obvious. For assertion~(ii) see~\cite[Lemma~2.8]{Prokhorov-Shramov-2013} or~\cite[Lemma~2.2]{BandmanZarhin2015}. \end{proof} It is easy to see that if~$\Gamma_1$ is a subgroup of finite index in~$\Gamma_2$, then~$\Gamma_2$ is Jordan (respectively, strongly Jordan) if and only if so is~$\Gamma_1$. At the same time Jordan property, as well as strong Jordan property, does not behave well with respect to quotients by infinite groups. Namely, a quotient of a strongly Jordan group by its subgroup may fail to be Jordan or to have all of its finite subgroups generated by a bounded number of elements. In spite of this we will be able to control the properties of some quotients by infinite groups that will be important for us. \begin{lemma}\label{lemma:diagonal-quotient-Z-1} Let $A$ be an abelian group whose torsion subgroup $A_{\mathrm{t}}$ is isomorphic to~\mbox{$(\mathbb{Q}/\mathbb{Z})^n$}, and let $\Lambda\subset A$ be a subgroup isomorphic to~$\mathbb{Z}^m$. Then the quotient group~\mbox{$\Gamma=A/\Lambda$} is strongly Jordan. \end{lemma} \begin{proof} The group $\Gamma$ is abelian and thus Jordan. Let $V\subset \Gamma$ be a finite subgroup and let~\mbox{$\tilde V\subset A$} be its preimage. 
Clearly, $\tilde V$ is finitely generated and can be decomposed into a direct product $\tilde V=\tilde V_{\mathrm{t}}\times \tilde V_{\mathrm{f}}$ of its torsion and torsion free parts. In particular, $\tilde{V}_{\mathrm f}$ is a free abelian group. Since $\tilde V_{\mathrm{f}}/(\tilde V_{\mathrm{f}}\cap \Lambda)$ is a finite group, one has $$ \operatorname{rk} \tilde V_{\mathrm{f}}=\operatorname{rk} (\tilde V_{\mathrm{f}}\cap \Lambda)\leqslant \operatorname{rk} \Lambda=m. $$ The group $\tilde V_{\mathrm{t}}$ is contained in $A_{\mathrm{t}}\cong (\mathbb{Q}/\mathbb{Z})^n$ and so it can be generated by $n$ elements. Thus~$\tilde V$ can be generated by $n+m$ elements, and the images of these elements in $\Gamma$ generate the subgroup~$V$. \end{proof} \begin{lemma}\label{lemma:central-subgroup} Let \begin{equation}\label{eq:central-subgroup} 1\longrightarrow\Gamma'\longrightarrow\Gamma\longrightarrow\Gamma'' \end{equation} be an exact sequence of groups. Suppose that $\Gamma'$ is central in $\Gamma$ (so that in particular~$\Gamma'$ is abelian) and there exists a constant $R$ such that every finite subgroup of $\Gamma'$ is generated by at most $R$ elements. Suppose also that there exists a constant $J$ such that for every finite subgroup $G\subset\Gamma''$ there is a cyclic subgroup $C\subset G$ of index at most $J$ (so that in particular $\Gamma''$ is strongly Jordan). Then the group $\Gamma$ is strongly Jordan. \end{lemma} \begin{proof} Let $G\subset\Gamma$ be a finite subgroup. The exact sequence~\eqref{eq:central-subgroup} induces an exact sequence of groups $$ 1\longrightarrow G'\longrightarrow G\longrightarrow G'', $$ where $G'$ is a subgroup of $\Gamma'$ (in particular, $G'$ is abelian), while $G''$ is a subgroup of~$\Gamma''$. There is a subgroup $\bar{G}\subset G$ of index at most $J$ such that $\bar{G}$ contains $G'$, and the quotient~$\bar{G}/G'$ is a cyclic group. 
To prove that the group $\Gamma$ is Jordan it is enough to check that $\bar{G}$ is an abelian group. The latter follows from the fact that $G'$ is a central subgroup of~$\bar{G}$. The assertion about the bounded number of generators is obvious. \end{proof} \begin{lemma}\label{lemma:GL-quotient-Z} Let $\Lambda$ be a finitely generated central subgroup of $\operatorname{GL}_2(\mathbb{C})$. Then the quotient group $\Gamma=\operatorname{GL}_2(\mathbb{C})/\Lambda$ is strongly Jordan. \end{lemma} \begin{proof} We have an exact sequence of groups $$ 1\longrightarrow \mathbb{C}^*/\Lambda\longrightarrow\Gamma\longrightarrow\operatorname{PGL}_2(\mathbb{C})\longrightarrow 1. $$ The group $\mathbb{C}^*/\Lambda$ is a central subgroup of $\Gamma$. Also, the group $\mathbb{C}^*/\Lambda$ is strongly Jordan by Lemma~\ref{lemma:diagonal-quotient-Z-1}. On the other hand, we know from the classification of finite subgroups of $\operatorname{PGL}_2(\mathbb{C})$ that every finite subgroup therein contains a cyclic subgroup of bounded index. Therefore, the assertion follows from Lemma~\ref{lemma:central-subgroup}. \end{proof} Most of the groups we will be working with in the remaining part of the paper will be strongly Jordan. However, we will only need to check Jordan property for them due to the following result. \begin{theorem}[{\cite[Theorem~1.3]{Riera-OddCohomology}}] \label{theorem:Riera-generators} For any compact manifold $X$ there is a constant~$R$ such that every finite group acting effectively by diffeomorphisms of~$X$ can be generated by at most~$R$ elements. \end{theorem} \section{Minimal surfaces} \label{section:minimal-surfaces} In this section we recall the basic properties of compact complex surfaces. Everything here (as well as in \S\ref{section:automorphisms} below) is well known to experts, but in some important cases we provide proofs for the reader's convenience. Starting from this point we will always assume that our complex surfaces are connected. 
Throughout this paper ${\mathscr{K}}_X$ denotes the canonical line bundle of a complex manifold~$X$. One has $\mathrm{c}_1({\mathscr{K}}_X)=-\mathrm{c}_1(X)$. Given a divisor $D$ on $X$, we will denote by $[D]$ the corresponding class in~\mbox{$H^2(X,\mathbb{Z})$}. \begin{definition} Let $X$ and $Y$ be compact complex surfaces. A proper holomorphic map~\mbox{$f\colon X\to Y$} is said to be a \textit{proper modification} if there are analytic subsets $Z_1\subsetneq X$ and $Z_2\subsetneq Y$ such that the restriction $f|_{X\setminus Z_1}\colon X\setminus Z_1\to Y\setminus Z_2$ is biholomorphic. A \textit{birational} (or \textit{bimeromorphic}) \textit{map} $X\dashrightarrow Y$ is an equivalence class of diagrams \[ \xymatrix{ &Z\ar[dr]^g\ar[dl]_f& \\ X\ar@{-->}[rr]&&Y } \] where $f$ and $g$ are proper modifications, modulo the natural equivalence relation. \end{definition} Birational maps from a given compact complex surface $X$ to itself form a group, which we will denote by~$\operatorname{Bir}(X)$. As usual, we say that two complex surfaces are birationally equivalent, or just birational, if there exists a birational map between them. \begin{remark} If $X$ and $Y$ are birationally equivalent compact complex surfaces, then the fields of meromorphic functions on $X$ and $Y$ are isomorphic. The converse is not true if the algebraic dimension of $X$ (and thus also of $Y$) is less than~$2$. \end{remark} A \emph{$(-1)$-curve} on a compact complex surface is a smooth rational curve with self-intersection equal to~$-1$. A compact complex surface is \emph{minimal} if it does not contain $(-1)$-curves. \begin{lemma}[see {\cite[\S\,IV.6]{BHPV-2004}}] \label{lemma:projective} Let $X$ be a compact complex surface. Suppose that there is a line bundle $\mathcal{L}$ on $X$ such that $\mathcal{L}^2>0$. Then $X$ is projective. \end{lemma} \begin{proposition}\label{proposition:Bir-vs-Aut} Let $X$ be a minimal surface. Suppose that $X$ is neither rational nor ruled. 
Then every birational map from an arbitrary compact complex surface $X'$ to $X$ is a proper modification. In particular, $X$ is the unique minimal model in its class of birational equivalence, and $\operatorname{Bir}(X)=\operatorname{Aut}(X)$. \end{proposition} \begin{proof} We may assume that $X$ is not projective, since otherwise the assertion is well known. Suppose that \[ \xymatrix{ &Z\ar[dr]^f\ar[dl]_g& \\ X'\ar@{-->}[rr]&&X } \] is a birational map that is not a proper modification. Then there exists a $(-1)$-curve $C$ contracted by $g$ but not contracted by $f$. Thus $C$ meets a fiber $f^{-1}(x)$ for some point~\mbox{$x\in X$}, since otherwise $X$ would contain a $(-1)$-curve. Contracting $(-1)$-curves in $f^{-1}(x)$ consecutively, we get a surface $S$ with a proper modification $h\colon Z\to S$, and a proper modification~\mbox{$t\colon S\to X$} such that $C_1=h(C)$ is a $(-1)$-curve and there exists another $(-1)$-curve $C_2$ meeting $C_1$ and contracted by $t$. If $C_1\cdot C_2>1$, then $(C_1+C_2)^2>0$ and the surface $S$ is projective by Lemma~\ref{lemma:projective}. Assume that $C_1\cdot C_2=1$. Then for $n\gg0$ we have \[ \mathrm{c}_1\left({\mathscr{K}}_S\otimes{\mathscr{O}}_S(-nC_1-nC_2)\right)^2=\mathrm{c}_1(S)^2+4n>0, \] so that the surface $S$ is again projective by Lemma~\ref{lemma:projective}. The obtained contradiction completes the proof. \end{proof} Given a compact complex surface $X$, we can consider its \emph{pluricanonical map}, which is the rational map given by a linear system $|{\mathscr{K}}_X^{\otimes m}|$ for $m\gg 0$. The dimension of its image is called the \emph{Kodaira dimension} of $X$ and is denoted by~$\varkappa(X)$; if the linear system $|{\mathscr{K}}_X^{\otimes m}|$ is empty for all~$m>0$, we put $\varkappa(X)=-\infty$. By $\mathrm{b}_i(X)$ we denote the $i$-th Betti number of~$X$. By $\mathrm{a}(X)$ we denote the algebraic dimension of~$X$, i.e. the transcendence degree of the field of meromorphic functions on~$X$. 
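To make the invariants just defined concrete, we record a standard example (well-known material, added here only for illustration and not needed in the sequel):

```latex
\begin{remark}
Let $X=\mathbb{C}^2/\Lambda$ be a complex torus. The canonical line bundle
${\mathscr{K}}_X$ is trivial, so the linear system $|{\mathscr{K}}_X^{\otimes m}|$
consists of a single point for every $m>0$; hence the pluricanonical map has
zero-dimensional image and $\varkappa(X)=0$. Furthermore, one has
$\mathrm{b}_1(X)=4$ and $\chi_{\mathrm{top}}(X)=0$, while the algebraic dimension
$\mathrm{a}(X)$ may equal $0$, $1$, or $2$ depending on the lattice~$\Lambda$.
This agrees with the corresponding row of the table in
Theorem~\ref{theorem:classification}.
\end{remark}
```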
\begin{theorem}[{see~\cite[Corollary~IV.6.5]{BHPV-2004}}] A compact complex surface $X$ is projective if and only if~\mbox{$\mathrm{a}(X)=2$}. \end{theorem} The following is the famous Enriques--Kodaira classification of compact complex surfaces, see e.g.~\cite[Chapter~VI]{BHPV-2004}. \begin{theorem}\label{theorem:classification} Let $X$ be a minimal compact complex surface. Then $X$ is of one of the following types. \par\medskip\noindent {\rm \setlength{\extrarowheight}{1pt} \newcommand{\heading}[1]{\multicolumn{1}{c|}{#1}} \newcommand{\headingl}[1]{\multicolumn{1}{c}{#1}} \begin{tabularx}{\textwidth}{c|l|X|X|X} $\varkappa(X)$&\heading{type} & \heading{$\mathrm{a}(X)$} & \heading{$\mathrm{b}_1(X)$} & \headingl{$\chi_{\mathrm{top}}(X)$} \\\hline \multirow3{*}{$-\infty$}&rational surfaces& $2$& $0$ &$3$, $4$ \\ & ruled surfaces of genus $g>0$& $2$ &$2g$&$4(1-g)$ \\ & surfaces of class VII& $0$, $1$&$1$ &$\geqslant 0$ \\\hline \multirow6{*}{$0$}& complex tori& $0$, $1$, $2$&$4$& $0$ \\ & K3 surfaces& $0$, $1$, $2$& $0$&$24$ \\ & Enriques surfaces& $2$& $0$&$12$ \\ &bielliptic surfaces & $2$& $2$& $0$ \\ &primary Kodaira surfaces& $1$&$3$& $0$ \\ &secondary Kodaira surfaces& $1$& $1$& $0$ \\\hline \multirow1{*}{$1$}&properly elliptic surfaces& $1$, $2$&& $\geqslant 0$ \\\hline \multirow1{*}{$2$}&surfaces of general type & $2$&$\equiv 0\mod 2$& $> 0$ \\ \end{tabularx} } \end{theorem} \section{Automorphisms} \label{section:automorphisms} In this section we recall some important general facts about automorphisms of complex spaces. Let $U$ be a reduced complex space, see e.g. \cite{Serre1956} or \cite{Malgrange1968} for a definition and basic properties. Recall that a complex space is called \emph{irreducible} if it cannot be represented as a union of two proper closed analytic subsets. We denote by $T_{P, U}$ the Zariski tangent space (see \cite[\S2]{Malgrange1968}) to $U$ at a point $P\in U$. \begin{proposition}[{cf. 
\cite[Lemma 2.4]{Bialynicki-Birula1973}, \cite[Lemma 4]{Popov-Jordan}}] \label{proposition:fixed-point} Let $U$ be an irreducible Hausdorff reduced complex space, and $\Gamma\subset\operatorname{Aut}(U)$ be a finite group. Suppose that $\Gamma$ has a fixed point $P$ on $U$. Then the natural representation $$ \Gamma\longrightarrow\operatorname{GL}\big(T_{P,U}\big) $$ is faithful. \end{proposition} \begin{proof} Assume the contrary. Let $\mathfrak m=\mathfrak m_{P,U}$ be the maximal ideal of the local ring ${\mathscr{O}}_{P,U}$. We claim that the exact sequence \[ 0\longrightarrow \mathfrak m^2 \overset{\nu}{\longrightarrow}\mathfrak m \overset{\varsigma}{\longrightarrow}\mathfrak m/\mathfrak m^2\overset{}{\longrightarrow} 0 \] of $\Gamma$-modules splits. Indeed, take elements $f_1,\dots,f_n\in \mathfrak m$ such that their images $\varsigma(f_i)$ generate $\mathfrak m/\mathfrak m^2$ and consider the vector space $W\subset \mathfrak m$ generated by all $g\cdot f_i$, $g\in \Gamma$. This space is finite-dimensional and $\Gamma$-invariant. Since $\Gamma$ is finite, the $\Gamma$-module $W$ is semisimple, and hence $\mathfrak m^2\cap W$ is a direct summand, i.e. $$ W=V\oplus \left(\mathfrak m^2\cap W\right) $$ as a $\Gamma$-module for some~$V$. Thus the restriction $\varsigma|_{V}\colon V\to \mathfrak m/\mathfrak m^2$ is an isomorphism. Therefore, one has \begin{equation} \label{equation-actionT} \mathfrak m= V\oplus\mathfrak m^2. \end{equation} It is clear that $T_{P,U}\cong V^\vee$, and so the action of $\Gamma$ on $V$ is not faithful. Let $\Gamma_0\subset\Gamma$ be the kernel of this action, and let $V^d\subset \mathfrak m$ be the subspace generated by all products of at most $d$ elements of $V$. We claim that \begin{equation} \label{equation-actionT-d} \mathfrak m^d=V^d+\mathfrak m^{d+1}. \end{equation} We prove this claim by induction on $d$. For $d=1$ it coincides with \eqref{equation-actionT}. Assume that this claim holds for some $d$. Take any element $f\in \mathfrak m^{d+1}$.
It can be written in the form \[ f= \sum f_iw_i,\quad f_i\in \mathfrak m^d,\quad w_i\in \mathfrak m. \] According to \eqref{equation-actionT} and \eqref{equation-actionT-d} we have \[ f_i=s_i+ h_i,\quad s_i\in V^d,\quad h_i\in \mathfrak m^{d+1}, \] \[ w_i=u_i+ v_i,\quad u_i\in V,\quad v_i\in \mathfrak m^{2}. \] Therefore, \[ f= \sum (s_i+ h_i)(u_i+ v_i)=\sum s_iu_i+\sum (s_iv_i+ h_iu_i+h_iv_i)\in V^{d+1}+ \mathfrak m^{d+2}. \] This proves \eqref{equation-actionT-d} for $d+1$. Therefore the restriction to $V^d$ of the natural map \[ \mathfrak m^d\longrightarrow \mathfrak m^d /\mathfrak m^{ d+1} \] is surjective. Hence $\Gamma_0$ acts trivially on $\mathfrak m^d/\mathfrak m^{ d+1}$ for any $d$. Take any element $f \in \mathfrak m$. By the above we have $$ f - g \cdot f \in \mathfrak m^{ d+1} $$ for every $g \in \Gamma_0$ and every $d>0$. On the other hand, one has $\cap \mathfrak m^d=0$ (see e.g. \cite[Corollary~10.18]{Atiyah-Macdonald-1969}). This implies that $f = g \cdot f$, i.e. $f$ is $\Gamma_0$-invariant. Thus $\Gamma_0$ acts trivially on $\mathfrak m$ and also on ${\mathscr{O}}_ {P,U}\cong\mathbb{C}\oplus\mathfrak m$. This means that the action of $\Gamma_0$ on the germs of holomorphic functions at $P$ is trivial. Let $U'$ be a sufficiently small $\Gamma_0$-invariant irreducible neighborhood of $P$. By definition of a reduced complex space, $U'$ is isomorphic to a subset in $\mathbb{C}^N$, and thus its points are separated by holomorphic functions. We claim that the action of $\Gamma_0$ on $U'$ is trivial. Indeed, choose a non-trivial element $g\in\Gamma_0$, and suppose that there is a point $P_1\in U'$ such that $P_2=g(P_1)$ is different from $P_1$. Let $f$ be a holomorphic function on $U'$ such that $f(P_1)\neq f(P_2)$. Then $g\cdot f\neq f$. However, the germs of $f$ and $g\cdot f$ at $P$ should be the same. Since $U'$ is irreducible, this gives a contradiction. 
Now let $U_0$ be the maximal open subset of $U$ such that $U_0$ contains~$P$, and the action of~$\Gamma_0$ on~$U_0$ is trivial; the above argument guarantees that $U_0$ is not empty. By assumption one has~\mbox{$U_0\neq U$}. Since $U$ is irreducible, this implies that there is a point $Q$ that is contained in the closure of $U_0$, but not in $U_0$ itself. If $Q$ is $\Gamma_0$-invariant, one can choose a $\Gamma_0$-invariant irreducible neighborhood $U_0'$ of $Q$ that is isomorphic to a subset of $\mathbb{C}^N$. This neighborhood contains an open subset of $\Gamma_0$-invariant points, which means that the action of~$\Gamma_0$ on the whole~$U_0'$ is trivial. The latter is impossible by construction of $U_0$. Thus for some element~\mbox{$g\in\Gamma_0$} one has $g(Q)\neq Q$. On the other hand, since $U$ is Hausdorff, the fixed point locus of $g$ is closed in $U$; it contains $U_0$, and hence also the point $Q$, so that $g(Q)=Q$. The obtained contradiction completes the proof. \end{proof} \begin{remark} One cannot drop the assumption that $U$ is irreducible in Proposition~\ref{proposition:fixed-point}. Indeed, the assertion fails for the variety given by equation~\mbox{$xy=0$} in~$\mathbb{A}^2$ with coordinates~$x$ and~$y$, the point $P$ with coordinates~\mbox{$x=1$} and~\mbox{$y=0$}, and the group $\Gamma\cong\mathbb{Z}/2\mathbb{Z}$ whose generator acts by~\mbox{$(x,y)\mapsto (x,-y)$}. Similarly, the assertion fails for the simplest example of a non-Hausdorff reduced complex space, namely, for two copies of $\mathbb{A}^1$ glued along the common open subset~\mbox{$\mathbb{A}^1\setminus\{0\}$}, and the natural involution acting on this space. \end{remark} \begin{remark} The following observation was pointed out to us by M.\,Brion. A crucial step in the proof of Proposition~\ref{proposition:fixed-point} is the fact that the $\Gamma$-orbit of a function from $\mathfrak m_{P,U}$ generates a finite-dimensional subspace in $\mathfrak m_{P,U}$. If $U$ is an algebraic variety, this holds under the weaker assumption that $\Gamma$ is a reductive group.
However, in the holomorphic setting this is not true any more. Indeed, let $U=\mathbb{A}^1$, and let the group $\Gamma\cong\mathbb{C}^*$ act on $U$ by scaling, so that the point $P=0$ is fixed by $\Gamma$. Let $f$ be a holomorphic function. Then the subspace of ${\mathscr{O}}_{P,U}$ generated by the $\Gamma$-orbit of $f$ is finite dimensional if and only if $f$ is a polynomial. We do not know if the assertion of Proposition~\ref{proposition:fixed-point} can be generalized to the case of reductive groups. \end{remark} Proposition~\ref{proposition:fixed-point} easily implies the following result. \begin{corollary} \label{corollary:fixed-point} Let $U$ be an irreducible Hausdorff reduced complex space, and~\mbox{$\Delta\subset\operatorname{Aut}(U)$} be a subgroup. Suppose that $\Delta$ has a fixed point $P$ on $U$, and let $$ \varsigma\colon\Delta\longrightarrow\operatorname{GL}\big(T_{P,U}\big) $$ be the natural representation. Suppose that there is a subgroup $\Gamma\subset\Delta$ of finite index such that the restriction $\varsigma\vert_{\Gamma}$ is an embedding. Then $\varsigma$ is an embedding as well. \end{corollary} \begin{proof} Let $\Delta_0\subset\Delta$ be the kernel of $\varsigma$. Since $[\Delta:\Gamma]<\infty$, we see that $\Delta_0$ is finite. Thus~$\Delta_0$ is trivial by Proposition~\ref{proposition:fixed-point}. \end{proof} Another application of Proposition~\ref{proposition:fixed-point} is as follows. \begin{lemma}\label{lemma:rational-curve} Let $X$ be a compact complex surface. Suppose that there is a finite non-empty $\operatorname{Aut}(X)$-invariant set $\mathcal{S}$ of curves on $X$ such that $\mathcal{S}$ does not contain smooth elliptic curves. Then the group $\operatorname{Aut}(X)$ is Jordan. \end{lemma} \begin{proof} Let $C$ be one of the curves from $\mathcal{S}$. Then the group $\operatorname{Aut}_C(X)$ of automorphisms of~$X$ that preserve the curve $C$ has finite index in $\operatorname{Aut}(X)$. 
Since $C$ is not a smooth elliptic curve, there is a constant $B=B(C)$ such that every finite subgroup of~\mbox{$\operatorname{Aut}_C(X)$} contains a subgroup of index at most $B$ that fixes some point on $C$. Indeed, if $C$ is singular, this is obvious; if $C$ is a smooth rational curve, this follows from the classification of finite subgroups of $\operatorname{Aut}(C)\cong\operatorname{PGL}_2(\mathbb{C})$; if $C$ is a smooth curve of genus at least~$2$, this follows from the Hurwitz bound. Now Proposition~\ref{proposition:fixed-point} implies that every finite subgroup of $\operatorname{Aut}_C(X)$ contains a subgroup of index at most $B$ that is embedded into $\operatorname{GL}_2(\mathbb{C})$. Therefore, the group $\operatorname{Aut}_C(X)$ is Jordan by Theorem~\ref{theorem:Jordan}, and hence the group $\operatorname{Aut}(X)$ is Jordan as well. \end{proof} \section{Non-projective surfaces with $\chi_{\mathrm{top}}(X)\neq 0$} \label{section:class-VII} In this section we will (mostly) work with non-projective compact complex surfaces $X$ with $\chi_{\mathrm{top}}(X)\neq 0$. In this case, by the Enriques--Kodaira classification (see Theorem \ref{theorem:classification}) one has $\chi_{\mathrm{top}}(X)> 0$. The main purpose of this section is to prove the following result. \begin{theorem}\label{theorem:class-VII} Let $X$ be a non-projective compact complex surface with $\chi_{\mathrm{top}}(X)\neq 0$. Then the group $\operatorname{Aut}(X)$ is Jordan. \end{theorem} Recall that an \emph{algebraic reduction} of a compact complex surface $X$ with $\mathrm{a}(X)=1$ is the morphism $\pi\colon X\to B$ given by the Stein factorization of the map $X\to \mathbb{P}^1$ defined by a non-constant meromorphic function. One can check that $\pi$ is an elliptic fibration, see \cite[Proposition~VI.5.1]{BHPV-2004}. \begin{lemma}\label{lemma-elliptic-curves} Let $X$ be a non-projective compact complex surface. 
If $X$ contains an irreducible curve $C$ which is not a smooth elliptic curve, then the group $\operatorname{Aut}(X)$ is Jordan. \end{lemma} \begin{proof} We claim that the surface $X$ contains at most a finite number of such curves. Indeed, if $\mathrm{a}(X)=0$, then $X$ contains only finitely many curves \cite[Theorem~IV.8.2]{BHPV-2004}. If $\mathrm{a}(X)=1$, then all curves on $X$ are contained in the fibers of the algebraic reduction by Lemma~\ref{lemma:projective}. The latter fibration is elliptic, so every non-elliptic curve is contained in one of its degenerate fibers. Now the assertion follows from Lemma~\ref{lemma:rational-curve}. \end{proof} \begin{lemma}\label{lemma:a-1} Let $X$ be a compact complex surface with $\chi_{\mathrm{top}}(X)\neq 0$. If $\mathrm{a}(X)=1$, then the group $\operatorname{Aut}(X)$ is Jordan. \end{lemma} \begin{proof} Let $\pi\colon X\to B$ be the algebraic reduction, so that $B$ is a smooth curve and $\pi$ is an elliptic fibration. Since $\chi_{\mathrm{top}}(X)\neq 0$, the fibration $\pi$ has at least one fiber $X_b$ such that $F=(X_b)_{\operatorname{red}}$ is not a smooth elliptic curve. So the group $\operatorname{Aut}(X)$ is Jordan by Lemma~\ref{lemma-elliptic-curves}. \end{proof} For every compact complex surface $X$, we denote by $\overline{\operatorname{Aut}}(X)$ the subgroup of $\operatorname{Aut}(X)$ that consists of all elements acting trivially on $H^*(X,\mathbb{Z})$. This is a normal subgroup of $\operatorname{Aut}(X)$, and the quotient group $\operatorname{Aut}(X)/\overline{\operatorname{Aut}}(X)$ has bounded finite subgroups by Theorem~\ref{theorem:Minkowski}. Thus Lemma~\ref{lemma:group-theory}(i) implies that the group $\operatorname{Aut}(X)$ is Jordan if and only if~\mbox{$\overline{\operatorname{Aut}}(X)$} is Jordan. \begin{lemma}\label{lemma-b2-fixed-points} Let $X$ be a compact complex surface. Suppose that every irreducible curve contained in $X$ is a smooth elliptic curve.
Let $g\in \overline{\operatorname{Aut}}(X)$ be an element of finite order, and $\Xi_0(g)$ be the set of all isolated fixed points of $g$. Then $$ |\Xi_0(g)|=\chi_{\mathrm{top}}(X). $$ \end{lemma} \begin{proof} The fixed locus $\Xi(g)$ of $g$ is a disjoint union $\Xi_0(g)\sqcup\Xi_1(g)$, where $\Xi_1(g)$ is of pure dimension $1$. Proposition~\ref{proposition:fixed-point} implies that every irreducible component of $\Xi_1(g)$ is its connected component. Thus every connected component of $\Xi_1(g)$ is a smooth elliptic curve, so that~\mbox{$\chi_{\mathrm{top}}(\Xi_1(g))=0$}. On the other hand, one has $$ \chi_{\mathrm{top}}(\Xi(g))=\chi_{\mathrm{top}}(X) $$ by the topological Lefschetz fixed point formula, and the assertion follows. \end{proof} \begin{lemma}\label{lemma-cyclic-subgroups} Let $X$ be a compact complex surface with $\chi_{\mathrm{top}}(X)\neq 0$. Suppose that every irreducible curve contained in $X$ is a smooth elliptic curve. Let $G\subset \overline{\operatorname{Aut}}(X)$ be a finite subgroup. If $G$ contains a non-trivial cyclic normal subgroup, then $G$ contains an abelian subgroup of index at most $12\chi_{\mathrm{top}}(X)$. \end{lemma} \begin{proof} Let $N\subset G$ be a non-trivial cyclic normal subgroup. By Lemma \ref{lemma-b2-fixed-points} the group $N$ has exactly $\chi_{\mathrm{top}}(X)>0$ isolated fixed points on~$X$ (and maybe also several curves that consist of fixed points). Since $N$ is normal in $G$, the group $G$ permutes these points. Thus there exists a subgroup of index at most $\chi_{\mathrm{top}}(X)$ in $G$ acting on $X$ with a fixed point. Now the assertion follows from Proposition~\ref{proposition:fixed-point} and the classification of finite subgroups of $\operatorname{GL}_2(\mathbb{C})$ (cf. \cite[Corollary 2.2.2]{Prokhorov-Shramov-JCr3}). \end{proof} \begin{lemma}\label{lemma:at-least-one-curve} Let $X$ be a compact complex surface with $\mathrm{a}(X)=0$ and $\chi_{\mathrm{top}}(X)\neq 0$. 
If $X$ contains at least one curve, then $\operatorname{Aut}(X)$ is Jordan. \end{lemma} \begin{proof} It is enough to prove that the group $\overline{\operatorname{Aut}}(X)$ is Jordan. The surface $X$ contains at most a finite number of curves by~\cite[Theorem~IV.8.2]{BHPV-2004}. By Lemma \ref{lemma-elliptic-curves} we may assume that all these curves are smooth and elliptic. Let $C_1,\dots, C_m$ be all curves on $X$, and let $\operatorname{Aut}^\sharp(X)\subset \overline{\operatorname{Aut}}(X)$ be the stabilizer of~$C_1$. Clearly, the subgroup $\operatorname{Aut}^\sharp(X)$ has index at most $m$ in $\overline{\operatorname{Aut}}(X)$, so it is sufficient to prove that $\operatorname{Aut}^\sharp(X)$ is Jordan. For any finite subgroup $G\subset \operatorname{Aut}^\sharp(X)$ we have an exact sequence \[ 1\longrightarrow\Gamma \longrightarrow G\longrightarrow\operatorname{Aut}(C_1), \] where $\Gamma$ is a finite cyclic group. If $\Gamma=\{1\}$, then $G$ is contained in $\operatorname{Aut}(C_1)$. Since $C_1$ is an elliptic curve, the group $G$ has an abelian subgroup of index at most $6$. If $\Gamma\neq \{1\}$, then $G$ has an abelian subgroup of index at most $12\chi_{\mathrm{top}}(X)$ by Lemma \ref{lemma-cyclic-subgroups}. Therefore, in both cases $G$ also has a normal abelian subgroup of bounded index. \end{proof} In the following lemmas we will deal with compact complex surfaces $X$ that contain no curves. In particular, this implies that $\mathrm{a}(X)=0$, and the action of any finite subgroup of $\operatorname{Aut}(X)$ is free in codimension one. \begin{lemma}\label{lemma:no-even-order} Let $X$ be a compact complex surface with $\chi_{\mathrm{top}}(X)\neq 0$. Suppose that $X$ contains no curves. Then the group $\overline{\operatorname{Aut}}(X)$ has no elements of even order. \end{lemma} \begin{proof} Let $g\in \overline{\operatorname{Aut}}(X)$ be an element of order $2$ (such elements always exist provided that there are elements of even order). 
First assume that $\varkappa(X)=-\infty$. We have $\mathrm{b}_1(X)=1$ and $\mathrm{b}_2(X)=\chi_{\mathrm{top}}(X)>0$ (see Theorem~\ref{theorem:classification}). Moreover, we know that $\mathrm{h}^{2,0}(X)=0$ because $\varkappa(X)=-\infty$. Hodge relations (see e.g. \cite[\S\,IV.2]{BHPV-2004}) give us \[ \mathrm{h}^{0,1}(X)=1,\quad \mathrm{h}^{1,0}(X)=0, \quad \text{and}\quad \mathrm{h}^{2,0}(X)=\mathrm{h}^{0,2}(X)=0. \] Therefore, one has $\chi({\mathscr{O}}_X)=0$. Since $g$ acts trivially on $H^*(X,\mathbb{Z})$, the holomorphic Lefschetz fixed point formula shows that $g$ has no fixed points. This contradicts Lemma~\ref{lemma-b2-fixed-points}. Now assume that $\varkappa(X)\geqslant 0$. Since $\mathrm{a}(X)=0$, this implies that $\varkappa(X)=0$ and $X$ is a K3 surface (see Theorem~\ref{theorem:classification}). Therefore, one has $\chi_{\mathrm{top}}(X)=24$ and $\chi({\mathscr{O}}_X)=2$. As above the holomorphic Lefschetz fixed point formula shows that $g$ has exactly $8$ fixed points. This contradicts Lemma \ref{lemma-b2-fixed-points}. \end{proof} \begin{lemma}\label{lemma-G-fixed-point} Let $X$ be a compact complex surface with $\chi_{\mathrm{top}}(X)\neq 0$. Suppose that $X$ contains no curves. Let $G\subset\overline{\operatorname{Aut}}(X)$ be a finite subgroup. Suppose that $G$ has a fixed point. Then $G$ is cyclic. \end{lemma} \begin{proof} Let $x\in X$ be a fixed point of $G$. By Proposition \ref{proposition:fixed-point} we have an embedding $$ G\subset \operatorname{GL}(T_{x, X})\cong\operatorname{GL}_2(\mathbb{C}). $$ Since the order of $G$ is odd by Lemma~\ref{lemma:no-even-order}, it must be abelian. Since the action is free in codimension one, the group $G$ is cyclic. \end{proof} \begin{lemma}\label{lemma:same-fixed-points} Let $X$ be a compact complex surface with $\chi_{\mathrm{top}}(X)\neq 0$. Suppose that $X$ contains no curves. Let $G\subset\overline{\operatorname{Aut}}(X)$ be a finite cyclic subgroup, and $g\in G$ be a non-trivial element. 
Then $g$ has the same set of fixed points as $G$. \end{lemma} \begin{proof} For an arbitrary element $h\in G$ denote by $\operatorname{Fix}(h)$ the fixed locus of $h$. By Lemma~\ref{lemma-b2-fixed-points} one has $$ |\operatorname{Fix}(h)|=\chi_{\mathrm{top}}(X) $$ for every non-trivial element $h$. Let $f$ be a generator of $G$. Then for some positive integer $n$ one has $f^n=g$, so that $$ \operatorname{Fix}(f)\subset\operatorname{Fix}(g). $$ Therefore, one has $\operatorname{Fix}(f)=\operatorname{Fix}(g)$, which means that every non-trivial element of $G$ has one and the same set of fixed points. \end{proof} \begin{lemma}\label{lemma-G-decomposition} Let $X$ be a compact complex surface with $\chi_{\mathrm{top}}(X)\neq 0$. Suppose that $X$ contains no curves. Then every finite subgroup $G\subset\overline{\operatorname{Aut}}(X)$ is a union $G=\bigcup_{i=1}^m G_i$ of cyclic subgroups such that $G_i\cap G_j=\{1\}$ for $i\neq j$. \end{lemma} \begin{proof} Choose some representation of $G$ as a union $G=\bigcup_{i=1}^m G_i$, where $G_i$ are cyclic groups that possibly have non-trivial intersections. Let $G_1$ and $G_2$ be subgroups such that $G_1\cap G_2\neq \{1\}$. Let $g\in G_1\cap G_2$ be a non-trivial element. By Lemma~\ref{lemma-b2-fixed-points} it has a fixed point, say $x$. By Lemma \ref{lemma-G-fixed-point} the stabilizer $G_x$ is a cyclic group. By Lemma~\ref{lemma:same-fixed-points} the groups $G_1$ and $G_2$ fix the point $x$, so that $G_1, G_2\subset G_x$. Replacing $G_1$ and $G_2$ by $G_x$, we proceed to construct the required system of subgroups by induction. \end{proof} \begin{lemma}\label{lemma:no-curves} Let $X$ be a compact complex surface with $\chi_{\mathrm{top}}(X)\neq 0$. Suppose that $X$ contains no curves. Then the group $\operatorname{Aut}(X)$ is Jordan. \end{lemma} \begin{proof} It is enough to prove that the group $\overline{\operatorname{Aut}}(X)$ is Jordan. Let $G\subset \overline{\operatorname{Aut}}(X)$ be a finite subgroup. 
Let $\Xi\subset X$ be the set of points with non-trivial stabilizers in $G$. By Lemma~\ref{lemma-G-decomposition} the group $G$ is a union $G=\bigcup_{i=1}^m G_i$ of cyclic subgroups such that~\mbox{$G_i\cap G_j=\{1\}$} for $i\neq j$. We claim that the stabilizer of any point $x\in \Xi$ is one of the groups $G_1,\dots, G_m$. Indeed, choose a point $x\in\Xi$, and let $G_x$ be its stabilizer. Let $g_x$ be a generator of $G_x$, and let $1\leqslant r\leqslant m$ be the index such that the group $G_r$ contains~$g_x$. Then $G_x\subset G_r$. Now Lemma~\ref{lemma:same-fixed-points} implies that $G_x=G_r$. By Lemma~\ref{lemma-b2-fixed-points} every non-trivial element of $G$ has exactly $\chi_{\mathrm{top}}(X)$ fixed points. The set $\Xi$ is a disjoint union of orbits of the group~$G$. Therefore, for some positive integers $k_i$ one has \[ |\Xi|= m\,\chi_{\mathrm{top}}(X)= \sum_{i=1}^m k_i [G: G_i]. \] Hence, for some $i$ we have $[G: G_i]\leqslant \chi_{\mathrm{top}}(X)$, i.e. $G$ contains a cyclic subgroup $G_i$ of index at most $\chi_{\mathrm{top}}(X)$. This implies that $G$ contains a normal abelian subgroup of bounded index. \end{proof} Now we are ready to prove Theorem~\ref{theorem:class-VII}. \begin{proof}[Proof of Theorem~\ref{theorem:class-VII}] If $\mathrm{a}(X)=1$, then the assertion follows from Lemma~\ref{lemma:a-1}. If~\mbox{$\mathrm{a}(X)=0$} and $X$ contains at least one curve, then the assertion follows from Lemma~\ref{lemma:at-least-one-curve}. Finally, if $X$ contains no curves, then the assertion follows from Lemma~\ref{lemma:no-curves}. \end{proof} An alternative way to prove Theorem~\ref{theorem:class-VII} is provided by the following more general result due to I.\,Mundet i Riera. Our proof of Theorem~\ref{theorem:class-VII} is a simplified version of the proof of this result given in~\cite{Riera2016}. \begin{theorem}[{\cite[Theorem~1.1]{Riera2016}}] \label{theorem:Riera} Let $X$ be a compact, orientable, connected four-dimensional smooth manifold with $\chi_{\mathrm{top}}(X)\neq 0$.
Then the group of diffeomorphisms of $X$ is Jordan. In particular, if $X$ is a compact complex surface with non-vanishing topological Euler characteristic, then the group $\operatorname{Aut}(X)$ is Jordan. \end{theorem} Note however that our proof of Theorem~\ref{theorem:class-VII} implies that for a compact complex surface~$X$ with $\chi_{\mathrm{top}}(X)\neq 0$ and containing no curves, there exists a constant $J$ such that for every finite subgroup $G\subset\operatorname{Aut}(X)$ there exists a normal \emph{cyclic} subgroup of index at most~$J$, while the results of \cite{Riera2016} provide only a normal abelian subgroup of bounded index generated by at most~$2$ elements. \section{Hopf surfaces} \label{section:Hopf} In this section we study automorphism groups of Hopf surfaces. Recall that a \emph{Hopf surface} $X$ is a compact complex surface whose universal cover is (analytically) isomorphic to $\mathbb{C}^2\setminus\{0\}$. Thus $X\cong \left(\mathbb{C}^2\setminus\{0\}\right)/\Gamma$, where $\Gamma\cong \pi_1(X)$ is a group acting freely on $\mathbb{C}^2\setminus\{0\}$. A Hopf surface $X$ is said to be \emph{primary} if~\mbox{$\pi_1(X)\cong \mathbb{Z}$}. One can show that a primary Hopf surface is isomorphic to a quotient $$ X(\alpha,\beta,\lambda,n)=\left(\mathbb{C}^2\setminus\{0\}\right)/\Lambda, $$ where $\Lambda\cong\mathbb{Z}$ is a group generated by the transformation \begin{equation}\label{eq:Hopf} (x,y)\mapsto (\alpha x+\lambda y^n, \beta y). \end{equation} Here $n$ is a positive integer, and $\alpha$ and $\beta$ are complex numbers satisfying $$ 0 < |\alpha|\leqslant |\beta|<1; $$ moreover, one has either $\lambda=0$ or $\alpha=\beta^n$ \cite[\S10]{Kodaira-structure-2}. A \emph{secondary} Hopf surface is a quotient of a primary Hopf surface by a free action of a finite group \cite[\S10]{Kodaira-structure-2}. Every Hopf surface contains a curve, see~\cite[Theorem~32]{Kodaira-structure-2}.
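The following classical example (standard material, not needed for the proofs below) illustrates the definition and explains why Hopf surfaces are not covered by Theorem~\ref{theorem:class-VII}:

```latex
\begin{remark}
Consider the primary Hopf surface $X=X(\alpha,\alpha,0,1)$, i.e. the quotient
of $\mathbb{C}^2\setminus\{0\}$ by the group generated by the contraction
$(x,y)\mapsto(\alpha x,\alpha y)$ with $0<|\alpha|<1$. This surface is
diffeomorphic to $S^1\times S^3$, so that
\[
\mathrm{b}_1(X)=1,\quad \mathrm{b}_2(X)=0,\quad \chi_{\mathrm{top}}(X)=0.
\]
In particular, the assumption $\chi_{\mathrm{top}}(X)\neq 0$ of
Theorem~\ref{theorem:class-VII} fails for~$X$, which is why Hopf surfaces
require a separate treatment.
\end{remark}
```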
Automorphisms of Hopf surfaces were studied in detail in \cite{Kato}, \cite{Kato1}, \cite{Namba}, and~\cite{Matumoto}. Our approach does not use these results. We will need the following easy observation. \begin{lemma}\label{lemma:centralizers} Let $$ M=\left( \begin{array}{cc} \alpha & \lambda\\ 0 & \beta \end{array} \right)\in\operatorname{GL}_2(\mathbb{C}) $$ be an upper triangular matrix, and $Z\subset\operatorname{GL}_2(\mathbb{C})$ be the centralizer of~$M$. The following assertions hold. \begin{enumerate} \item If $\alpha=\beta$ and $\lambda=0$, then $Z=\operatorname{GL}_2(\mathbb{C})$. \item If $\alpha\neq\beta$ and $\lambda=0$, then $Z\cong(\mathbb{C}^*)^2$. \item If $\alpha=\beta$ and $\lambda\neq 0$, then $Z\cong\mathbb{C}^*\times\mathbb{C}^+$. \end{enumerate} \end{lemma} \begin{proof} Simple linear algebra. \end{proof} \begin{lemma}\label{lemma:secondary-Hopf} Let $X$ be a Hopf surface. Then the group $\operatorname{Aut}(X)$ is Jordan. \end{lemma} \begin{proof} The non-compact surface $\mathbb{C}^2\setminus\{0\}$ is the universal cover of $X$; moreover, $X$ is obtained from~\mbox{$\mathbb{C}^2\setminus\{0\}$} as a quotient by a free action of some group $\Gamma$ that contains a normal subgroup $\Lambda\cong\mathbb{Z}$ of finite index. We may shrink $\Lambda$ if necessary and suppose that~$\Lambda$ is a characteristic subgroup of $\Gamma$; in fact, it is enough to replace $\Lambda$ by its subgroup of index~\mbox{$|\Gamma/\Lambda|$}. A generator of $\Lambda$ is given by formula~\eqref{eq:Hopf}. There is an exact sequence of groups $$ 1\longrightarrow \Gamma \longrightarrow \widetilde{\operatorname{Aut}}(X)\longrightarrow\operatorname{Aut}(X)\longrightarrow 1, $$ where $\widetilde{\operatorname{Aut}}(X)$ acts by automorphisms of $\mathbb{C}^2\setminus\{0\}$. By the Hartogs extension theorem the action of~\mbox{$\widetilde{\operatorname{Aut}}(X)$} extends to~$\mathbb{C}^2$ so that $\widetilde{\operatorname{Aut}}(X)$ fixes the origin $0\in\mathbb{C}^2$.
The generator of~$\Lambda$ is mapped by the natural homomorphism $$ \varsigma\colon\widetilde{\operatorname{Aut}} (X)\longrightarrow\operatorname{GL}\big(T_{0,\mathbb{C}^2}\big)\cong\operatorname{GL}_2(\mathbb{C}) $$ to the matrix \[ M=\begin{pmatrix} \alpha & \lambda \updelta_{1n} \\ 0& \beta \end{pmatrix}, \] where $\updelta_{1n}$ is the Kronecker delta, so that the off-diagonal entry vanishes unless $n=1$. Let $G\subset\operatorname{Aut}(X)$ be a finite subgroup, and $\tilde{G}$ be its preimage in $\widetilde{\operatorname{Aut}}(X)$. Thus, one has~\mbox{$G\cong\tilde G/\Gamma$}. By Corollary~\ref{corollary:fixed-point} we know that $\varsigma\vert_{\tilde{G}}$ is an embedding. Let $\Omega$ be the normalizer of $\varsigma(\Lambda)$ in $\operatorname{GL}_2(\mathbb{C})$. By construction $\varsigma(\tilde G)$ is contained in the normalizer of $\varsigma(\Gamma)$ in $\operatorname{GL}_2(\mathbb{C})$, which in turn is contained in $\Omega$ because $\Lambda$ is a characteristic subgroup of~$\Gamma$. We see that every finite subgroup of $\operatorname{Aut}(X)$ is contained in the group~\mbox{$\Omega/\varsigma(\Lambda)$}. Since~\mbox{$\varsigma(\Lambda)\cong\mathbb{Z}$}, the group $\Omega$ has a (normal) subgroup $\Omega'$ of index at most $2$ that coincides with the centralizer of the matrix $M$. It remains to check that the group $\Omega'/\varsigma(\Lambda)$ is Jordan. If $\lambda=0$ and $\alpha=\beta$, then this follows from Lemmas~\ref{lemma:centralizers}(i) and~\ref{lemma:GL-quotient-Z}. If either $\lambda=0$ and $\alpha\neq \beta$, or $\lambda\neq 0$ and $n\geqslant 2$, then this follows from Lemmas~\ref{lemma:centralizers}(ii) and~\ref{lemma:diagonal-quotient-Z-1}. If $\lambda\neq 0$ and $n=1$, then this follows from Lemmas~\ref{lemma:centralizers}(iii) and~\ref{lemma:diagonal-quotient-Z-1}. \end{proof} \begin{remark} Suppose that for a primary Hopf surface $X\cong X(\alpha,\beta,\lambda,n)$ one has $\lambda=0$ and $\alpha^k=\beta^l$ for some $k, l\in \mathbb{Z}$.
Then there is an elliptic fibration $$ X\cong\left(\mathbb{C}^2\setminus \{0\}\right)/\Lambda \to \mathbb{P}^1\cong\left(\mathbb{C}^2\setminus \{0\}\right)/\mathbb{C}^*, $$ and one has an exact sequence of groups $$ 1\longrightarrow E\longrightarrow \operatorname{Aut}(X)\longrightarrow \operatorname{PGL}_2(\mathbb{C}), $$ where $E$ is the group of points of the elliptic curve~$\mathbb{C}^*/\mathbb{Z}$. \end{remark} \section{Inoue surfaces} \label{section:Inoue} In this section we study automorphism groups of Inoue surfaces, and make some general conclusions about automorphisms of surfaces of class~VII. Inoue surfaces are quotients of $\mathbb{C}\times \mathbb H$, where $\mathbb H$ is the upper half-plane, by certain infinite discrete groups. They were introduced by M.\,Inoue \cite{Inoue1974}. These surfaces contain no curves and their invariants are as follows: \[ \mathrm{a}(X)=0,\quad \mathrm{b}_1(X)=1, \quad \mathrm{b}_2(X)=0,\quad \mathrm{h}^{1,0}(X)=0,\quad \mathrm{h}^{0,1}(X)=1. \] The following result shows the significance of Hopf and Inoue surfaces from the point of view of Enriques--Kodaira classification. \begin{theorem}[{see \cite{Bogomolov} and~\cite{Teleman-classification}}] \label{theorem:Bogomolov} Every minimal surface of class VII with vanishing second Betti number is either a Hopf surface or an Inoue surface. \end{theorem} \begin{lemma} \label{lemma-Inoue-quotient} Let $X$ be an Inoue surface, and $G\subset \operatorname{Aut}(X)$ be a finite subgroup. Then the action of $G$ on $X$ is free, and the quotient $\hat{X}=X/G$ is again an Inoue surface. \end{lemma} \begin{proof} Assume that the action of $G$ on $X$ is not free. To get a contradiction we may assume that $G$ is a cyclic group of prime order. Let $\updelta$ be its generator. Since $X$ contains no curves, the fixed point locus of $G$ consists of a finite number of points $P_1,\dots, P_n$. 
On the other hand, by the topological Lefschetz fixed point formula, one has \[ n=\operatorname{Lef}(\updelta)= 2 - 2\operatorname{tr}_{H^1(X,\mathbb{R})} \updelta^*>0. \] Hence the action of $\updelta^*$ on $H^1(X,\mathbb{R})\cong\mathbb{R}$ is not trivial. This is possible only if $\updelta$ is of order $2$ and $n=4$. Then $\hat{X}$ has exactly $4$ singular points which are Du Val of type~$A_1$. Let~\mbox{$Y\to \hat{X}$} be the minimal resolution of singularities. Then $$ \mathrm{c}_1(Y)^2=\mathrm{c}_1({\mathscr{K}}_{\hat{X}})^2=\frac 12 \mathrm{c}_1(X)^2=0, $$ and $\chi_{\mathrm{top}}(Y)=4+\chi_{\mathrm{top}}(\hat{X})=6$. This contradicts Noether's formula, which would give $$ \chi({\mathscr{O}}_Y)=\frac{1}{12}\left(\mathrm{c}_1(Y)^2+\chi_{\mathrm{top}}(Y)\right)=\frac{1}{2}\notin\mathbb{Z}, $$ see e.g.~\cite[\S\,I.5]{BHPV-2004}. Thus $X\to \hat X$ is an unramified finite cover. This implies that $\chi_{\mathrm{top}}(\hat{X})=0$. Furthermore, one has \[ \mathrm{b}_2(\hat X)=\operatorname{rk} H^2(X,\mathbb{Z})^G=0, \] and so $\mathrm{b}_1(\hat X)=1$. Therefore, $\hat X$ is a minimal surface of class VII (see Theorem~\xref{theorem:classification}). Clearly, $\hat X$ contains no curves. Thus by Theorem~\ref{theorem:Bogomolov} the surface $\hat X$ is either a Hopf surface or an Inoue surface. Since Hopf surfaces contain curves, we conclude that $\hat X$ is an Inoue surface. \end{proof} There are three types of Inoue surfaces: $\mathrm{S_M}$, $\mathrm{S^{(+)}}$ and $\mathrm{S^{(-)}}$.
They are distinguished by the type of their fundamental group $\Gamma=\pi_1(X)$, see~\cite{Inoue1974}: \par\bigskip\noindent {\rm \setlength{\extrarowheight}{3pt} \newcommand{\heading}[1]{\multicolumn{1}{c|}{#1}} \newcommand{\headingl}[1]{\multicolumn{1}{c}{#1}} \begin{tabularx}{0.97\textwidth}{p{0.06\textwidth}|p{0.1\textwidth}|p{0.7\textwidth}} type & \heading{generators} & \multicolumn{1}{Y}{relations} \\\Xhline{2\arrayrulewidth} $\mathrm{S_M}$ & $\updelta_1, \updelta_2, \updelta_3, \upgamma$ & $[\updelta_i,\updelta_j]=1$, $\upgamma\updelta_i\upgamma^{-1}=\updelta_1^{m_{1,i}}\updelta_2^{m_{2,i}}\updelta_3^{m_{3,i}}$, $(m_{j,i})\in \operatorname{SL}_3(\mathbb{Z})$ \\\hline $\mathrm{S^{(\pm)}}$ & $\updelta_1, \updelta_2, \updelta_3, \upgamma$ & $[\updelta_i,\updelta_3]=1$, $[\updelta_1,\updelta_2]=\updelta_3^{r}$, $\upgamma\updelta_i\upgamma^{-1}=\updelta_1^{m_{1,i}}\updelta_2^{m_{2,i}}\updelta_3^{p_i}$ for $i=1,2$, $\upgamma\updelta_3\upgamma^{-1}=\updelta_3^{\pm 1}$, $(m_{j,i})\in \operatorname{GL}_2(\mathbb{Z})$, $\det (m_{j,i})=\pm 1$ \end{tabularx} } \par\bigskip In the notation of Appendix~\ref{section:appendix} one has $\Gamma\cong\Gamma_0\rtimes\Gamma_1$, where $\Gamma_1\cong\mathbb{Z}$, while $\Gamma_0\cong\mathbb{Z}^3$ for Inoue surfaces of type $\mathrm{S_M}$, and $\Gamma_0\cong{\mathscr{H}}(r)$ for Inoue surfaces of types $\mathrm{S^{(\pm)}}$ (see~\S\ref{subsection:lattices-semi-direct} and~\S\ref{subsection:Heisenberg-semi-direct} for more details). In the former case the matrix $M\in\operatorname{SL}_3(\mathbb{Z})$ that defines the semi-direct product has eigenvalues $\upalpha$, $\upbeta$, and $\bar \upbeta$, where $\upalpha\in \mathbb{R}$, $\upalpha>1$, and~\mbox{$\upbeta\notin\mathbb{R}$}. 
In the latter case the matrix $M\in\operatorname{GL}_2(\mathbb{Z})$ that defines the action of~$\mathbb{Z}$ on~\mbox{${\mathscr{H}}(r)/\operatorname{z}({\mathscr{H}}(r))\cong\mathbb{Z}^2$} has real eigenvalues $\upalpha$ and $\upbeta$, where $\upalpha\upbeta=\pm1$ (depending on whether $\Gamma$ is of type~$\mathrm{S^{(+)}}$ or~$\mathrm{S^{(-)}}$), and both $\upalpha$ and $\upbeta$ are different from~$1$, see~\mbox{\cite[\S\S2--4]{Inoue1974}}. \begin{lemma}\label{lemma:Inoue-group-types} Let $\Gamma$ be a group of one of the types $\mathrm{S_M}$, $\mathrm{S^{(+)}}$, or $\mathrm{S^{(-)}}$. Then \begin{enumerate} \item $\Gamma$ is of type $\mathrm{S_M}$ if and only if $\Gamma$ contains a characteristic subgroup isomorphic to~$\mathbb{Z}^3$; \item $\Gamma$ is of type $\mathrm{S^{(+)}}$ if and only if $\Gamma$ contains no subgroups isomorphic to $\mathbb{Z}^3$ and~\mbox{$\operatorname{z}(\Gamma)\neq\{1\}$}; \item $\Gamma$ is of type $\mathrm{S^{(-)}}$ if and only if $\Gamma$ contains no subgroups isomorphic to $\mathbb{Z}^3$ and~\mbox{$\operatorname{z}(\Gamma)=\{1\}$}. \end{enumerate} \end{lemma} \begin{proof} This follows from Lemmas~\ref{lemma:lattice-exercise}(ii),(iii) and~\ref{lemma:Heisenberg-semi-direct-center}(ii) and Remark~\ref{remark:Heisenberg-semi-direct-no-pm-1}. \end{proof} \begin{corollary} \label{corollary-Inoue-quotient-types} Let $X$ be an Inoue surface, and $G\subset \operatorname{Aut}(X)$ be a finite subgroup. Then the action of $G$ on $X$ is free, and the following assertions hold. \begin{enumerate} \item If $X$ is of type $\mathrm{S_M}$, then so is $X/G$; \item If $X$ is of type $\mathrm{S^{(-)}}$, then so is $X/G$; \item If $X$ is of type $\mathrm{S^{(+)}}$, then $X/G$ is of type $\mathrm{S^{(+)}}$ or $\mathrm{S^{(-)}}$. \end{enumerate} \end{corollary} \begin{proof} Put $\hat{X}=X/G$. Then the action of $G$ on $X$ is free, and $X$ is an Inoue surface by Lemma~\ref{lemma-Inoue-quotient}. Put $\hat{\Gamma}=\pi_1(\hat{X})$. 
Then $\hat{\Gamma}$ is a group of one of the types $\mathrm{S_M}$, $\mathrm{S^{(+)}}$, or $\mathrm{S^{(-)}}$, and~\mbox{$\Gamma\subset\hat{\Gamma}$} is a normal subgroup of finite index. Now everything follows from Lemma~\ref{lemma:Inoue-group-types}. \end{proof} \begin{lemma}\label{lemma:Jordan-Inoue-M} Let $X$ be an Inoue surface of type $\mathrm{S_M}$. Then the group $\operatorname{Aut}(X)$ is Jordan. \end{lemma} \begin{proof} Let $G\subset \operatorname{Aut}(X)$ be a finite subgroup, and put $\hat{X}=X/G$. By Corollary~\ref{corollary-Inoue-quotient-types} the action of $G$ on $X$ is free, and $\hat{X}$ is also an Inoue surface of type $\mathrm{S_M}$. Put $\Gamma=\pi_1(X)$ and~\mbox{$\hat{\Gamma}=\pi_1(\hat{X})$}. Then $\Gamma$ is a normal subgroup of $\hat{\Gamma}$, and $\hat{\Gamma}/\Gamma\cong G$; moreover, both~$\Gamma$ and~$\hat{\Gamma}$ are semi-direct products as in~\S\ref{subsection:lattices-semi-direct}. Now it follows from Lemma~\ref{lemma:lattices-semi-direct} that there is a constant $\nu$ that depends only on $\Gamma$ (that is, only on $X$), such that $G$ has a normal abelian subgroup of index at most~$\nu$. \end{proof} \begin{lemma}\label{lemma:Jordan-Inoue-pm} Let $X$ be an Inoue surface of type $\mathrm{S^{(+)}}$ or $\mathrm{S^{(-)}}$. Then the group $\operatorname{Aut}(X)$ is Jordan. \end{lemma} \begin{proof} Let $G\subset \operatorname{Aut}(X)$ be a finite subgroup, and put $\hat{X}=X/G$. By Corollary~\ref{corollary-Inoue-quotient-types} the action of $G$ on $X$ is free, and $\hat{X}$ is also an Inoue surface of type $\mathrm{S^{(+)}}$ or $\mathrm{S^{(-)}}$. Put~\mbox{$\Gamma=\pi_1(X)$} and $\hat{\Gamma}=\pi_1(\hat{X})$. Then $\Gamma$ is a normal subgroup of $\hat{\Gamma}$, and $\hat{\Gamma}/\Gamma\cong G$; moreover, both~$\Gamma$ and~$\hat{\Gamma}$ are semi-direct products as in~\S\ref{subsection:Heisenberg-semi-direct}. 
Now it follows from Lemma~\ref{lemma:Heisenberg-semi-direct} that there is a constant $\nu$ that depends only on $\Gamma$ (that is, only on $X$), such that $G$ has a normal abelian subgroup of index at most~$\nu$. \end{proof} We summarize the results of Lemmas~\ref{lemma:Jordan-Inoue-M} and~\ref{lemma:Jordan-Inoue-pm} as follows. \begin{corollary}\label{corollary:Inoue} Let $X$ be an Inoue surface. Then the group $\operatorname{Aut}(X)$ is Jordan. \end{corollary} Finally, we put together the information about automorphisms of surfaces of class~VII. \begin{corollary}\label{corollary:class-VII} Let $X$ be a minimal surface of class VII. Then the group $\operatorname{Aut}(X)$ is Jordan. \end{corollary} \begin{proof} If the second Betti number $\mathrm{b}_2(X)$ vanishes, then $X$ is either a Hopf surface or an Inoue surface by Theorem~\ref{theorem:Bogomolov}. Thus the assertion follows from Lemma~\ref{lemma:secondary-Hopf} and Corollary~\ref{corollary:Inoue} in this case. If $\mathrm{b}_2(X)$ does not vanish, then the assertion follows from Theorem~\ref{theorem:class-VII}. \end{proof} \begin{remark} Besides Hopf surfaces, there are other classes of minimal compact complex surfaces of class~VII whose automorphism groups have been studied in detail. For instance, this is the case for the so-called hyperbolic and parabolic Inoue surfaces, see~\cite{Pinkham} and~\cite{Fujiki-ParabolicInoue}, respectively. Note that surfaces of both of these types have positive second Betti numbers (and thus they are not to be confused with the Inoue surfaces studied in this section). \end{remark} \section{Kodaira surfaces} \label{section:Kodaira} In this section we study automorphism groups of Kodaira surfaces. Our approach here is similar to the one used in~\S\ref{section:Inoue}. Recall (see e.g. \cite[\S\,V.5]{BHPV-2004}) that a Kodaira surface is a compact complex surface of Kodaira dimension $0$ with odd first Betti number.
There are two types of Kodaira surfaces: primary and secondary ones. A primary Kodaira surface is a compact complex surface with the following invariants \cite[\S6]{Kodaira-structure-1}: \[ {\mathscr{K}}_X \sim 0, \hspace{6pt} \mathrm{a}(X)=1,\hspace{6pt} \mathrm{b}_1(X)=3,\hspace{6pt} \mathrm{b}_2(X)=4,\hspace{6pt} \chi_{\mathrm{top}}(X)=0,\hspace{6pt} \mathrm{h}^{1}(X,{\mathscr{O}}_X)=2,\hspace{6pt} \mathrm{h}^{2}(X,{\mathscr{O}}_X)=1. \] Let $X$ be a primary Kodaira surface and let $\phi\colon X\to B$ be its algebraic reduction. Then $B$ is an elliptic curve and $\phi$ is a principal elliptic fibration \cite[\S6]{Kodaira-structure-1}, \cite[\S\,V.5]{BHPV-2004}. The universal cover of $X$ is isomorphic to $\mathbb{C}^2$, and the fundamental group $\Gamma=\pi_1(X)$ has the following presentation: \begin{equation} \label{equation-fundamenta-group-Kodaira-surface} \Gamma=\langle \updelta_1,\updelta_2,\updelta_3,\upgamma \mid [\updelta_1, \updelta_2]= \updelta_3^r,\ [\updelta_i, \updelta_3]=[\updelta_i,\upgamma]=1 \rangle, \end{equation} where $r$ is a positive integer \cite[\S6]{Kodaira-structure-1}. In the notation of Appendix~\ref{section:appendix} one has~\mbox{$\Gamma\cong{\mathscr{H}}(r)\times\mathbb{Z}$}. A secondary Kodaira surface is a quotient of a primary Kodaira surface by a free action of a finite cyclic group. The invariants of a secondary Kodaira surface are \cite[\S9]{Kodaira-structure-2}: \[ \mathrm{a}(X)=1,\hspace{6pt} \mathrm{b}_1(X)=1,\hspace{6pt} \mathrm{b}_2(X)=0,\hspace{6pt} \chi_{\mathrm{top}}(X)=0,\hspace{6pt} \mathrm{h}^{1}(X,{\mathscr{O}}_X)=1,\hspace{6pt} \mathrm{h}^{2}(X,{\mathscr{O}}_X)=0. \] For both types of Kodaira surfaces the algebraic reduction $\phi\colon X\to B$ is an $\operatorname{Aut}(X)$-equivariant elliptic fibration, so that in particular the group $\operatorname{Aut}(X)$ acts on the curve~$B$.
Denote by $\overline{\operatorname{Aut}}(X)\subset \operatorname{Aut} (X)$ the subgroup that consists of all elements acting trivially on $H^*(X,\mathbb{Z})$ and $H^*(B,\mathbb{Z})$. \begin{lemma}[{cf. Lemma~\ref{lemma-Inoue-quotient}}] \label{lemma:again-Kodaira} Let $X$ be a primary Kodaira surface, and $G\subset \overline{\operatorname{Aut}}(X)$ be a finite subgroup. Then the action of $G$ on $X$ is free, and the quotient $\hat{X}=X/G$ is again a primary Kodaira surface. \end{lemma} \begin{proof} The curve $B$ is elliptic and $G$ acts on it trivially on cohomology, hence by translations; thus the group $G$ acts on $B$ without fixed points. This means that there are no fibers of $\phi$ that consist of points fixed by $G$. On the other hand, every curve on $X$ is a fiber of $\phi\colon X\to B$ by Lemma~\ref{lemma:projective}. Hence there are no curves that consist of points fixed by $G$ on $X$ at all. Now the topological Lefschetz fixed point formula implies that $G$ has no fixed points on $B$ and on $X$. Therefore, $\hat{X}$ is a smooth surface, and the quotient morphism $X\to \hat{X}$ is unramified. Hence $\varkappa(\hat{X})=\varkappa(X)=0$. Moreover, we have \[ \mathrm{c}_1(\hat{X})^2=\mathrm{c}_1(X)^2=0. \] This means that the surface $\hat{X}$ is minimal. Since $G\subset \overline{\operatorname{Aut}}(X)$, we have \[ \mathrm{b}_1(\hat{X})=\mathrm{b}_1(X)=3. \] Therefore, $\hat{X}$ is a primary Kodaira surface by Theorem \ref{theorem:classification}. \end{proof} \begin{lemma} \label{lemma:primary-Kodaira-surface} Let $X$ be a primary Kodaira surface. Then the group $\operatorname{Aut}(X)$ is Jordan. \end{lemma} \begin{proof} Let $G\subset \operatorname{Aut}(X)$ be a finite subgroup. By Theorem~\ref{theorem:Minkowski} we can assume that~\mbox{$G\subset \overline{\operatorname{Aut}}(X)$}. Put $\hat{X}=X/G$. It follows from Lemma~\ref{lemma:again-Kodaira} that $G$ acts freely on~$X$, and $\hat{X}$ is a primary Kodaira surface. Put $\Gamma=\pi_1(X)$ and $\hat{\Gamma}=\pi_1(\hat{X})$.
Then $\Gamma$ is a normal subgroup of $\hat{\Gamma}$, and $\hat{\Gamma}/\Gamma\cong G$; moreover, both $\Gamma$ and $\hat{\Gamma}$ are as in~\S\ref{subsection:Heisenberg-direct}. Now it follows from Lemma~\ref{lemma:Heisenberg-direct} that there is a constant $r$ that depends only on $\Gamma$ (that is, only on~$X$), such that $G$ has a normal abelian subgroup of index at most~$r$. \end{proof} \begin{lemma} \label{lemma:secondary-Kodaira-surface} Let $X$ be a secondary Kodaira surface. Then the group $\operatorname{Aut}(X)$ is Jordan. \end{lemma} \begin{proof} Since $\mathrm{a}(X)=1$, the algebraic reduction is an $\operatorname{Aut}(X)$-equivariant elliptic fibration~\mbox{$\pi\colon X\to B$}. Thus there is an exact sequence of groups $$ 1\longrightarrow \operatorname{Aut}(X)_{\pi}\longrightarrow\operatorname{Aut}(X)\longrightarrow\Gamma\longrightarrow 1, $$ where the action of $\operatorname{Aut}(X)_{\pi}$ is fiberwise with respect to $\pi$, and $\Gamma$ is a subgroup of $\operatorname{Aut}(B)$. We claim that the group $\operatorname{Aut}(X)_{\pi}$ is Jordan. Indeed, if $H$ is a finite subgroup of~\mbox{$\operatorname{Aut}(X)_{\pi}$}, then $H$ acts faithfully on a typical fiber of~$\pi$, which is a smooth elliptic curve. This implies that $H$ has a normal abelian subgroup of index at most~$6$. Since \[ \mathrm{h}^0(X,\Omega_X)=\mathrm{b}_1(X)-\mathrm{h}^{1}(X,{\mathscr{O}}_X)= 0, \] the base curve $B$ is rational. Furthermore, one has \[ \chi({\mathscr{O}}_X)=\mathrm{h}^{0}(X,{\mathscr{O}}_X)-\mathrm{h}^{1}(X,{\mathscr{O}}_X)+\mathrm{h}^{2}(X,{\mathscr{O}}_X)=1-1+0=0. \] By the canonical bundle formula (see e.g. \cite[Theorem~V.12.1]{BHPV-2004}) we have \[ {\mathscr{K}}_X\sim\pi^*\left({\mathscr{K}}_B\otimes\mathcal{L}\right)\otimes{\mathscr{O}}_X\left(\sum (m_i-1) F_i\right), \] where $F_i$ are all (reduced) multiple fibers of $\pi$, the fiber $F_i$ has multiplicity $m_i$, and $\mathcal{L}$ is a line bundle of degree $\chi({\mathscr{O}}_X)=0$. 
Since $X$ has Kodaira dimension~$0$, we see that \begin{equation*} \sum (1-1/m_i)=2. \end{equation*} In particular, since every term $1-1/m_i$ lies in the interval $[\frac{1}{2},1)$, the number of multiple fibers equals either $3$ or $4$. This means that $\Gamma$ has a finite non-empty invariant subset in $B$ that consists of $3$ or $4$ points. Hence $\Gamma$ is finite, so that the assertion follows by Lemma~\ref{lemma:group-theory}(i). \end{proof} We summarize the results of Lemmas~\ref{lemma:primary-Kodaira-surface} and~\ref{lemma:secondary-Kodaira-surface} as follows. \begin{corollary}\label{corollary:Kodaira-surface} Let $X$ be a Kodaira surface. Then the group $\operatorname{Aut}(X)$ is Jordan. \end{corollary} An alternative way to prove the Jordan property for the automorphism group of a secondary Kodaira surface is to use the fact that its canonical cover is a primary Kodaira surface together with Lemma~\ref{lemma:group-theory}(ii) and Theorem~\ref{theorem:Riera-generators}. \section{Non-negative Kodaira dimension} \label{section:non-negative} In this section we study automorphism groups of surfaces of non-negative Kodaira dimension, and prove Theorems~\ref{theorem:Aut} and~\ref{theorem:main}. The case of Kodaira dimension $2$ is well known. \begin{theorem}\label{theorem:general-type} Let $X$ be a (minimal) surface of general type. Then the group $\operatorname{Aut}(X)$ is finite. \end{theorem} \begin{proof} The surface $X$ is projective, see Theorem~\ref{theorem:classification}. Thus the group $\operatorname{Aut}(X)$ is finite, see for instance~\cite{HMX} where a much more general result is established for varieties of general type of arbitrary dimension. \end{proof} Now we consider the case of Kodaira dimension $1$. \begin{lemma}[{cf. \cite[Lemma~3.3]{ProkhorovShramov-dim3}}] \label{lemma:Kodaira-dimension-1} Let $X$ be a minimal surface of Kodaira dimension~$1$. Then the group $\operatorname{Aut}(X)$ is Jordan.
\end{lemma} \begin{proof} Let $\phi\colon X\to B$ be the pluricanonical fibration, where $B$ is some (smooth) curve. It is equivariant with respect to the action of $\operatorname{Aut}(X)$. Thus there is an exact sequence of groups $$ 1\longrightarrow \operatorname{Aut}(X)_{\phi}\longrightarrow\operatorname{Aut}(X)\longrightarrow\Gamma\longrightarrow 1, $$ where the action of $\operatorname{Aut}(X)_{\phi}$ is fiberwise with respect to $\phi$, and $\Gamma$ is a subgroup of~\mbox{$\operatorname{Aut}(B)$}. As in the proof of Lemma~\ref{lemma:secondary-Kodaira-surface}, we see that the group $\operatorname{Aut}(X)_{\phi}$ is Jordan. Hence by Lemma~\ref{lemma:group-theory}(i) it is enough to check that $\Gamma$ has bounded finite subgroups. In particular, this holds if the genus of $B$ is at least~$2$, since the group $\operatorname{Aut}(B)$ is finite in this case. Thus we will assume that the genus of $B$ is at most~$1$. Suppose that $\phi$ has a fiber $F$ such that $F_{\operatorname{red}}$ is not a smooth elliptic curve. Then every irreducible component of $F$ is a rational curve, see e.g.~\mbox{\cite[\S\,V.7]{BHPV-2004}}. Hence Lemma~\ref{lemma:rational-curve} applied to the set of irreducible components of fibers of the morphism~$\phi$ shows that the group~\mbox{$\operatorname{Aut}(X)$} is Jordan. Therefore, we will assume that all (set-theoretic) fibers of $\phi$ are smooth elliptic curves. Then the topological Euler characteristic~\mbox{$\chi_{\mathrm{top}}(X)$} equals~$0$. By Noether's formula one has \[ \chi({\mathscr{O}}_X)=\frac{1}{12}\left(\mathrm{c}_1(X)^2+\chi_{\mathrm{top}}(X)\right)=0. \] By the canonical bundle formula we have \[ {\mathscr{K}}_X\sim\phi^*\left({\mathscr{K}}_B\otimes\mathcal{L}\right)\otimes{\mathscr{O}}_X\left(\sum (m_i-1) F_i\right), \] where $F_i$ are all (reduced) multiple fibers of $\phi$, the fiber $F_i$ has multiplicity $m_i$, and $\mathcal{L}$ is a line bundle of degree $\chi({\mathscr{O}}_X)=0$.
Since $X$ has Kodaira dimension~$1$, we see that \begin{equation}\label{eq:mult-fibers} 2g(B)-2+\sum (1-1/m_i)=\deg \left({\mathscr{K}}_B\otimes\mathcal{L}\right)+ \sum (1-1/m_i)> 0. \end{equation} Suppose that $B$ is an elliptic curve, so that $g(B)=1$. Then~\eqref{eq:mult-fibers} implies that $\phi$ has at least one multiple fiber. This means that $\Gamma$ has a finite non-empty invariant subset in $B$, so that $\Gamma$ is finite. Now suppose that $B$ is a rational curve, so that $g(B)=0$. Then~\eqref{eq:mult-fibers} implies that $\phi$ has at least three multiple fibers, cf. the proof of Lemma~\ref{lemma:secondary-Kodaira-surface}. This means that $\Gamma$ has a finite non-empty invariant subset in $B$ that consists of at least three points. Therefore, $\Gamma$ is finite in this case as well. \end{proof} Finally, we consider the case of Kodaira dimension $0$. The following result is well known. \begin{theorem}\label{theorem:torus} Let $X=\mathbb{C}^n/\Lambda$ be a complex torus. Then \begin{equation}\label{eq:complex-torus} \operatorname{Aut}(X)\cong\left( \mathbb{C}^n/\Lambda\right)\rtimes \Gamma, \end{equation} where $\Gamma$ is isomorphic to the stabilizer of the lattice $\Lambda$ in $\operatorname{GL}_n(\mathbb{C})$. \end{theorem} \begin{proof} The proof is standard, but we include it for the reader's convenience. Let $\Gamma$ be the stabilizer of the point $0\in X$. Then the decomposition~\eqref{eq:complex-torus} holds, and it remains to prove that $\Gamma$ is isomorphic to the stabilizer of the lattice $\Lambda$ in $\operatorname{GL}_n(\mathbb{C})$. Since $\mathbb{C}^n$ is the universal cover of~$X$, there is an embedding $\Gamma\hookrightarrow\operatorname{Aut}(\mathbb{C}^n)$, and there is a point in $\Lambda$ that is invariant with respect to $\Gamma$. We may assume that this is the origin in $\mathbb{C}^n$. Let $g$ be an element of $\Gamma$. One has $g(\Lambda)=\Lambda$. We claim that $g\in\operatorname{GL}_n(\mathbb{C})$.
Indeed, let~$\lambda$ be an arbitrary element of the lattice $\Lambda$. Consider a holomorphic map $$ f_{\lambda}\colon\mathbb{C}^n\to\mathbb{C}^n,\quad f_{\lambda}(z)=g(z+\lambda)-g(z). $$ One has $f_{\lambda}(z)\in\Lambda$ for every $z\in\mathbb{C}^n$. This means that $f_{\lambda}(z)$ is constant, so that all partial derivatives of $f_{\lambda}$ vanish. Hence the partial derivatives of $g(z)$ are periodic with respect to the lattice $\Lambda$. This in turn means that these partial derivatives are bounded and thus constant, so that~$g(z)$ is a linear function in~$z$. \end{proof} \begin{remark} A complete classification of finite groups that can act by automorphisms of a two-dimensional complex torus (preserving a point therein) was obtained in~\cite{Fujiki}. \end{remark} Theorem~\ref{theorem:torus} immediately implies the following result. \begin{corollary} \label{corollary:torus} Let $X$ be a complex torus. Then the group $\operatorname{Aut}(X)$ is Jordan. \end{corollary} \begin{proof} By Lemma~\ref{lemma:group-theory}(i) it is enough to check that in the notation of Theorem~\ref{theorem:torus} the group $\Gamma$ has bounded finite subgroups. Since $\Gamma$ is a subgroup in the automorphism group of $\Lambda$, the latter follows from Theorem~\ref{theorem:Minkowski}. \end{proof} \begin{lemma}\label{lemma:K3-Enriques} Let $X$ be either a $K3$ surface, or an Enriques surface. Then the group~\mbox{$\operatorname{Aut}(X)$} has bounded finite subgroups. \end{lemma} \begin{proof} Suppose that $X$ is a $K3$ surface. If $X$ is projective, then the assertion follows from \cite[Theorem~1.8(i)]{Prokhorov-Shramov-2013}. If $X$ is non-projective, then the assertion follows from Theorem~\ref{theorem:class-VII}, or from a stronger result of~\cite[Theorem~1.5]{Oguiso08}. Now suppose that $X$ is an Enriques surface. Then it is projective (see Theorem~\ref{theorem:classification}), so that the assertion again follows from~\cite[Theorem~1.8(i)]{Prokhorov-Shramov-2013}. 
\end{proof} Note that in the assumptions of Lemma~\ref{lemma:K3-Enriques}, the (weaker) assertion that the group~\mbox{$\operatorname{Aut}(X)$} is Jordan follows directly from Theorem~\ref{theorem:Riera}. \begin{lemma}\label{lemma:bielliptic} Let $X$ be a bielliptic surface. Then the group $\operatorname{Aut}(X)$ is Jordan. \end{lemma} \begin{proof} The surface $X$ is projective (see Theorem~\ref{theorem:classification}). Thus the assertion follows from Theorem~\ref{theorem:Popov} (or~\cite{BandmanZarhin2015}, or~\cite{MengZhang}, or \cite[Theorem~1.8(ii)]{Prokhorov-Shramov-2013}). \end{proof} \begin{corollary}\label{corollary:Kodaira-dimension-0} Let $X$ be a minimal surface of Kodaira dimension $0$. Then the group~\mbox{$\operatorname{Aut}(X)$} is Jordan. \end{corollary} \begin{proof} We know from Theorem~\ref{theorem:classification} that $X$ is either a complex torus, or a $K3$ surface, or an Enriques surface, or a bielliptic surface, or a Kodaira surface. If $X$ is a complex torus, then the assertion holds by Corollary~\ref{corollary:torus}. If $X$ is a $K3$ surface or an Enriques surface, then the assertion is implied by Lemma~\ref{lemma:K3-Enriques}. If $X$ is a bielliptic surface, then the assertion holds by Lemma~\ref{lemma:bielliptic}. If $X$ is a Kodaira surface, then the assertion holds by Corollary~\ref{corollary:Kodaira-surface}. \end{proof} \begin{proposition}\label{proposition:Aut-minimal} Let $X$ be a minimal surface. Then the group $\operatorname{Aut}(X)$ is Jordan. \end{proposition} \begin{proof} We check the possibilities for the birational type of $X$ listed in Theorem~\ref{theorem:classification} case by case. If $X$ is rational or ruled, then $X$ is projective (see Theorem~\ref{theorem:classification}), and thus the group $\operatorname{Aut}(X)$ is Jordan by~\cite[Corollary~1.6]{Zarhin2015} or \cite{MengZhang}. If $X$ is a surface of class~VII, then the group $\operatorname{Aut}(X)$ is Jordan by Corollary~\ref{corollary:class-VII}. 
If the Kodaira dimension of~$X$ is~$0$, then the group $\operatorname{Aut}(X)$ is Jordan by Corollary~\ref{corollary:Kodaira-dimension-0}. If the Kodaira dimension of~$X$ is~$1$, then the group $\operatorname{Aut}(X)$ is Jordan by Lemma~\ref{lemma:Kodaira-dimension-1}. Finally, if the Kodaira dimension of~$X$ is~$2$, then the group $\operatorname{Aut}(X)$ is finite by Theorem~\ref{theorem:general-type}. \end{proof} Now we are ready to prove Theorem~\ref{theorem:Aut}. \begin{proof}[Proof of Theorem~\textup{\ref{theorem:Aut}}] If $X$ is rational or ruled, then $X$ is projective (see Theorem~\ref{theorem:classification}), and thus the group $\operatorname{Aut}(X)$ is Jordan by~\cite{BandmanZarhin2015} or~\cite{MengZhang}. Otherwise Proposition~\ref{proposition:Bir-vs-Aut} implies that there is a unique minimal surface $X'$ birational to $X$, and $$ \operatorname{Aut}(X)\subset\operatorname{Bir}(X)\cong\operatorname{Bir}(X')=\operatorname{Aut}(X'). $$ Now the assertion follows from Proposition~\ref{proposition:Aut-minimal}. \end{proof} Finally, we are going to prove Theorem~\textup{\ref{theorem:main}}. \begin{proof}[Proof of Theorem~\textup{\ref{theorem:main}}] There always exists a minimal surface birational to a given one, so we may assume that $X$ is a minimal surface itself. If $X$ is rational, then the group $\operatorname{Bir}(X)$ is Jordan by Theorem~\ref{theorem:Serre}. Also, by \cite[Theorem~4.2]{ProkhorovShramov-RC} and Proposition~\ref{proposition:fixed-point} every finite subgroup of $\operatorname{Bir}(X)$ contains a subgroup of bounded index that can be embedded into $\operatorname{GL}_2(\mathbb{C})$. Hence every finite subgroup of~\mbox{$\operatorname{Bir}(X)$} can be generated by a bounded number of elements. If $X$ is ruled and non-rational, let $\phi\colon X\to B$ be the $\mathbb{P}^1$-fibration over a (smooth) curve. 
Since~$X$ is projective (see Theorem~\ref{theorem:classification}), the group $\operatorname{Bir}(X)$ is Jordan if and only if $B$ is not an elliptic curve by Theorem~\ref{theorem:Popov}. Moreover, we always have an exact sequence of groups $$ 1\to\operatorname{Bir}(X)_\phi\to \operatorname{Bir}(X)\to\operatorname{Aut}(B), $$ where the action of the subgroup $\operatorname{Bir}(X)_\phi$ is fiberwise with respect to $\phi$. In particular, the group~\mbox{$\operatorname{Bir}(X)_\phi$} acts faithfully on the schematic general fiber of $\phi$, which is a conic over the field~\mbox{$\mathbb{C}(B)$}. This implies that finite subgroups of $\operatorname{Bir}(X)_\phi$ are generated by a bounded number of elements. Also, finite subgroups of $\operatorname{Aut}(B)$ are generated by a bounded number of elements. Therefore, the same holds for finite subgroups of $\operatorname{Bir}(X)$ as well. In the remaining cases we have $\operatorname{Bir}(X)=\operatorname{Aut}(X)$ by Proposition~\ref{proposition:Bir-vs-Aut}, so the assertion follows from Proposition~\ref{proposition:Aut-minimal} and Theorem~\ref{theorem:Riera-generators}. \end{proof}
\section{Introduction} In Neutron Stars the elementary excitations of the matter affect the whole thermodynamics and long-term evolution of the star, because they are involved in the emission and scattering of neutrinos, and in the values of the heat content and transport coefficients. At not too high density it is expected that the main components of the matter are neutrons, protons, electrons and muons~\cite{shap}, and the spectral properties of these excitations can have a complex structure. Collective modes in asymmetric nuclear matter have been studied previously, e.g. in Refs.~\cite{Haensel-NPA301, Matera-PRC49, Greco-PRC67}. In the astrophysical context, a study of the collective excitations in normal neutron star matter on the basis of the relativistic mean field method has been presented in Ref.~\cite{Providencia-PRC74}. The spectral functions of the different components in normal neutron star matter have been calculated in Refs.~\cite{paper1,paper2} on the basis of non-relativistic Random Phase Approximation (RPA) for the nucleonic components and relativistic RPA for the leptonic components. Different models for the nuclear effective interaction were considered and a detailed comparison was made between some Skyrme forces and a microscopically derived interaction. This work was extended to superfluid proton matter in ref.~\cite{paper3}, but neglecting the proton-neutron coupling. The elementary excitations in superfluid neutron star matter have been studied by several authors~\cite{Reddy,Kundu,Leinson1,Armen,Leinson2,Vosk}. In ref.~\cite{paper3} it was shown that the proton superfluid presents a pseudo-Goldstone mode at low momentum, which can have a strong influence e.g. on neutrino emission~\cite{Yako,Reddy,Kundu,Leinson1,Armen,Leinson2,Vosk} or on the mean free path. More recently in refs.
\cite{Urban1,Thesis} the elementary excitations of superfluid neutron matter in the crust region were studied, with the possible inclusion of the coupling with the nuclear lattice in refs.~\cite{Chamel,Urban2,Thesis,Koby}. \par In this paper we focus on the region of homogeneous matter where proton superfluidity is expected to occur, while the neutron component is predicted to have a very small or vanishing pairing gap. This restricts the density region to between about saturation density and twice saturation density. This work extends the study of ref.~\cite{paper3} by including the neutron-proton interaction. An extensive study of the elementary excitations in presence of both proton and neutron superfluidity has been presented in ref.~\cite{Koby1}, where the hydrodynamics formalism was used with the inclusion of proton-neutron coupling. As in ref.~\cite{paper3} we formulate the general theoretical scheme within the generalized RPA, which is known to be a conserving approximation~\cite{BaymKad,Baym}, i.e. current is conserved locally and the related Generalized Ward's Identities (GWI)~\cite{Schriefferb} are fulfilled. One of the main goals of this work is the study of the mutual influence of protons and neutrons on the overall spectral functions. \par The plan of the paper is as follows. In Sec. \ref{sec:form} the formalism for the response function in the generalized RPA scheme is briefly sketched. In particular, the method used to estimate the effective nucleon-nucleon interaction microscopically is discussed. In Sec. \ref{sec:res} the results for the spectral function are presented, taking the proton pairing gap as a parameter. The role of the neutron-proton interaction is discussed in detail. In Sec. \ref{sec:conc} we summarize and draw the conclusions. Finally, in the Appendix additional details of the calculations are given. \section{Formalism.
\label{sec:form}} For future reference we sketch in this section the formalism of the generalized Random Phase Approximation (RPA). In a multi-component fermion system the equations for the generalized response functions $ \Pi $ can be written schematically \cite{paper3,Schriefferb} \begin{equation} \Pi_{ik}(t,t') \,=\, \Pi_{ik}^{0}(t,t') \,+\, \sum_{jl} \, \Pi_{ij}^{0}(t,\overline{t_1}) v_{j,l} \Pi_{lk}(\overline{t_1},t') \label{eq:RPA}\end{equation} \par\noindent where $ \, i,j, \cdots \,$ label the different components and the corresponding degrees of freedom, $\, v_{j,l}\, $ is the effective interaction between them and $\, \Pi^{0}\, $ is the free response function. The time variable with an overline is integrated over. Specifically, for the considered Neutron Star matter one has neutron, proton and electron components (neglecting muons), with the possibility of both particle-hole and pair excitations in the proton channel. If we take into account both proton and neutron pairing, in terms of creation and annihilation operators the indexes $\, i,j, \cdots $ run over the following configurations \begin{equation} \begin{array}{ll} \ &\ a^\dag(p)\,a(p) | \Psi_0>\,\,\ ,\ \,\, a^\dag(p)\,a^\dag(p) | \Psi_0>\,\,\ ,\ \,\, a(p)\,a(p) | \Psi_0> \\ \ &\ \\ \ &\ a^\dag(n)\,a(n) | \Psi_0>\,\,\ ,\ \,\, a^\dag(n)\,a^\dag(n) | \Psi_0>\,\,\ ,\ \,\, a(n)\,a(n) | \Psi_0> \\ \ &\ \\ \ &\ a^\dag(e)\,a(e) | \Psi_0> \end{array} \label{eq:conf}\end{equation} \par\noindent where the labels $\, n, p, e\, $ indicate neutrons, protons and electrons, respectively, and $ |\Psi_0> $ is the correlated ground state. If we call $ A_i $ the generic configuration, the response functions can be written \begin{equation} \Pi_{ik}(t,t') \,=\, - < \Psi_0 | T\{A_i^\dag(t) A_k^{\phantom{\dag}}(t')\} | \Psi_0 > \label{eq:Pi}\end{equation} \par\noindent where $ T $ is the usual fermion time ordering operator. The configurations (\ref{eq:conf}) correspond in fact to both density and pairing excitations.
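After Fourier transforming to fixed momentum transfer and frequency, Eq.~(\ref{eq:RPA}) becomes a linear matrix equation $\Pi = \Pi^0 + \Pi^0 v \Pi$ among the coupled channels, solved by a single matrix inversion, $\Pi = (1-\Pi^0 v)^{-1}\Pi^0$. The following minimal numerical sketch illustrates this structure only; the $2\times 2$ matrices and all numbers are illustrative placeholders, not the free responses and effective interactions used in this work.

```python
import numpy as np

def rpa_response(pi0, v):
    """Solve the RPA matrix equation Pi = Pi0 + Pi0 @ v @ Pi at fixed
    (q, omega), i.e. return Pi = (1 - Pi0 v)^{-1} Pi0."""
    pi0 = np.asarray(pi0, dtype=complex)
    v = np.asarray(v, dtype=complex)
    n = pi0.shape[0]
    # np.linalg.solve solves (1 - Pi0 v) X = Pi0 for X.
    return np.linalg.solve(np.eye(n) - pi0 @ v, pi0)

# Illustrative two-channel example: diagonal free responses (small
# imaginary parts mimic retarded prescriptions) and a symmetric
# coupling matrix v_{jl}; only the structure matters here.
pi0 = np.diag([-0.5 + 0.1j, -0.2 + 0.05j])
v = np.array([[0.3, 0.1],
              [0.1, 0.8]])
pi = rpa_response(pi0, v)
# Self-consistency: the solution satisfies the original RPA equation.
assert np.allclose(pi, pi0 + pi0 @ v @ pi)
```

A single solve yields all diagonal and off-diagonal elements $\Pi_{ik}$ at once, one column per choice of the inhomogeneous term.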
In agreement with (\ref{eq:conf}), in principle Eqs. (\ref{eq:RPA}) form a $ 7\times 7 $ system of coupled equations. However, as mentioned in the Introduction, neutrons are assumed to be in the normal state. Furthermore, it turns out \cite{paper3,Schrieffer} that one equation can be decoupled to a good approximation by taking suitable linear combinations of the pair-addition mode $ a^\dag(p)\, a^\dag(p) $ and the pair-removal mode $ a(p)\, a(p) $. In this way the system reduces to a $ 4\times 4 $ system of coupled equations. Details on the equations and their explicit analytic form are given in Appendix A. The system has to be solved for the response functions $\, \Pi_{ik}\, $, all of which can be obtained by selecting the inhomogeneous term in Eq. (\ref{eq:RPA}). More precisely, one has to select a given configuration, indicated by the right index $ k $, and solve the system for each choice of $ k $. In this way one gets all the diagonal and off-diagonal elements of $\, \Pi_{ik}\, $. \par One has to notice that the generalized RPA equations are valid in the collisionless regime, so that the only damping of the modes is the Landau damping, which is very effective above a certain momentum threshold. In particular, dissipation due to electron-electron collisions is neglected. This is justified if the electron mean free path is much larger than the typical wavelength of the mode. Under the physical conditions of Neutron Star matter, i.e. low temperature and density of the order of the saturation one, the electron mean free path was estimated in ref. \cite{Shtern}, where it was shown that the collisions are dominated by the exchange of transverse plasmon modes and that the mean free path extends to a macroscopic size, of the order of 10$^{-3}$ cm. This also indicates that the collision time is much longer than the characteristic period of the modes, and that the electron collisions are relevant only for macroscopic motion like viscous flow.
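The solution scheme just described amounts, in matrix form, to inverting $\Pi = (1-\Pi^0 v)^{-1}\Pi^0$ at each momentum and energy. The following sketch illustrates the inversion for a hypothetical $4\times 4$ system; all numerical values are toy placeholders, not the actual momentum- and energy-dependent $\Pi^0$ and $v_{ij}$ used in the paper.

```python
import numpy as np

def rpa_response(pi0_diag, v):
    """Solve Pi = Pi0 + Pi0 v Pi, i.e. Pi = (1 - Pi0 v)^{-1} Pi0,
    where Pi0 is the diagonal matrix of free response functions and
    v is the (symmetric) effective-interaction matrix."""
    pi0 = np.diag(pi0_diag).astype(complex)
    n = len(pi0_diag)
    return np.linalg.solve(np.eye(n) - pi0 @ v, pi0)

# Toy 4x4 input: proton particle-hole, proton pair mode, neutron
# particle-hole, electron particle-hole (illustrative values only).
pi0_diag = np.array([-0.5 - 0.1j, -0.3 - 0.05j, -2.0 - 0.4j, -0.8 - 0.02j])
v = np.array([[ 0.2, -0.6,  0.3,  0.1 ],
              [-0.6,  0.0,  0.0,  0.0 ],
              [ 0.3,  0.0,  0.1,  0.0 ],
              [ 0.1,  0.0,  0.0,  0.05]])

pi = rpa_response(pi0_diag, v)
# The strength function of component i is then read off from -Im Pi_ii.
strengths = -pi.diagonal().imag
```

Solving for the full matrix at once is equivalent to solving the system once per choice of the right index $k$, since each column of $\Pi$ corresponds to one inhomogeneous term.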
We therefore neglect in the sequel electron dissipation, and include only the Landau damping. \par For simplicity we use the free single-particle energy spectrum. The introduction of effective masses is trivial, and we do not expect it to change qualitatively the overall pattern of the results, although for quantitative results the effective mass is of course mandatory. The main input needed in Eq. (\ref{eq:RPA}) is then the effective interaction $ v_{ij} $. The proton pairing interaction strength $ U $ is very sensitive to many-body effects \cite{ppair} and it is quite challenging to estimate its size. We prefer to use the proton pairing gap as a parameter to be explored and to fix the pairing interaction consistently with the gap equation ( $ U > 0 $ ) \begin{eqnarray} \Delta &=& U\int{\frac{d^3 \mathbf{k}}{(2\pi)^3}\frac{\Delta}{2E_{{\rm k}}}} = U\int{\frac{d^3 \mathbf{k}}{(2\pi)^3} u_{{\rm k}}v_{{\rm k}}} \label{eq:gap}\end{eqnarray} \noindent The quasi-particle energy $ E $ and the coherence factors $ u, v $ have the standard form \begin{eqnarray} \begin{array}{ll} E_{{\rm k}} &\,=\, \sqrt{(\epsilon_{{\rm k}} - \mu)^2 + \Delta^2} \\ \ &\ \\ v_{{\rm k}}^2 &\,=\, \frac{1}{2} \big( 1 \,-\, \frac{\epsilon_{{\rm k}} - \mu}{E_{{\rm k}}} \big) \ \ \ \ \ , \ \ \ \ \ u_{{\rm k}}^2 \,=\, \frac{1}{2} \big( 1 \,+\, \frac{\epsilon_{{\rm k}} - \mu}{E_{{\rm k}}} \big) \\ \end{array} \label{eq:stand}\end{eqnarray} \noindent where $ \epsilon_{{\rm k}} $ is the proton kinetic energy and $ \mu $ the proton chemical potential. In the calculations the pairing gap $ \Delta $ is fixed at a given value and the effective interaction strength $ U $ is extracted from the gap equation (\ref{eq:gap}). Then the pairing interaction $ -U $ is inserted in the RPA equations (\ref{eq:RPA}). \par For the various particle-hole interactions we focus on the density response function, i.e.
the vector channel, and we follow the Landau monopolar approximation. The corresponding strength can be estimated on the basis of a realistic Skyrme interaction. It is also possible to adopt a microscopic many-body Equation of State (EOS) as an Energy Density Functional. In this approach, for Brueckner-Hartree-Fock (BHF) calculations the interaction strength can be obtained from the derivative with respect to the density $ \rho_j$ of the BHF potential energy $ V_i $, with $ i, j $ running over the proton and neutron components \begin{equation} \label{eq:vres-derU} v_{ij}=\left(\frac{\delta V_i}{\delta\rho_j}(k_{{\rm F}i},\rho_n,\rho_p)\right)_{k,\rho_i={\rm cst}}\;. \end{equation} \noindent Notice that in Eq. (\ref{eq:vres-derU}) the Fermi momenta $ k_{{\rm F}i} $ must be kept fixed in performing the derivative. How to do that in practice in the numerical calculations is explained in ref. \cite{paper1}. In the case of Skyrme forces the momentum dependence is analytic and the procedure is trivial. In any case no strong pairing effects are considered, i.e. the effective interactions are assumed to be independent of the pairing gap and are calculated for the normal system. \section{Results.\label{sec:res}} Before presenting the results obtained when all interactions are fully taken into account (namely nuclear pairing, Coulomb, and density-density nuclear interactions), it is instructive to discuss briefly the development of the overall structure of the excitation spectrum as the interactions are introduced. This helps to characterize the effects of each one of them. \par It is well known that in a neutral superfluid, with only the pairing interaction, there are two types of excitations. Below $ 2\Delta $ a sharp Goldstone mode is present.
It is a consequence of the breaking of gauge invariance that occurs in the ground state of a superfluid system, and its energy is linear in momentum for small momenta, with a velocity equal to $ v_F/\sqrt{3} $, where $ v_F $ is the Fermi velocity. Notice that a gauge transformation on the field operators is equivalent to a $ U(1) $ transformation on the order parameter \cite{Greiter}. At increasing momentum the energy spectrum deviates from linearity and approaches $ 2\Delta $ for large momenta \cite{paper2}. Above $ 2\Delta $ another excitation mode appears, usually indicated as the ``pair-breaking'' mode, because it corresponds indeed to the breaking of a Cooper pair. It is strongly damped, and it is reflected in a bump of the spectral function above $ 2\Delta $. The energy of this mode, after increasing with the momentum, also bends down towards $ 2\Delta $. At even higher momentum the spectral function has no structure and any excitation is overdamped \cite{paper2}. \par If we consider only the proton component and introduce the Coulomb interaction, in principle the Goldstone mode should disappear and be replaced by a proton plasma mode, which has a finite energy at zero momentum. However, it has been shown in ref. \cite{paper2} that the electrons fully screen the proton-proton Coulomb interaction, and a sound mode reappears below $ 2\Delta $. The (screened) Coulomb interaction however affects the sound velocity, which turns out to be about three times the Goldstone mode velocity. This mode can be considered a pseudo-Goldstone mode, since it is still below $ 2\Delta $ but with a velocity modified by the interaction. Details can be found in ref. \cite{paper2}, where the structure of the proton spectral function is discussed in detail.
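The scale of the electron screening can be checked with a back-of-the-envelope estimate of the Thomas-Fermi screening length $v_F/(\sqrt{3}\,\omega_p)$ for ultrarelativistic degenerate electrons ($\hbar = c = 1$, $v_F \simeq 1$). The proton (and hence electron) fractions below are the BHF values quoted in this work; everything else is a standard textbook estimate, not an output of the full calculation.

```python
import numpy as np

ALPHA = 1.0 / 137.036  # fine-structure constant

def screening_length_fm(rho_fm3, proton_fraction):
    """Electron Thomas-Fermi screening length (fm) for ultrarelativistic
    degenerate electrons: lambda = v_F / (sqrt(3) * omega_p) with v_F ~ 1
    and omega_p^2 = 4 pi alpha n_e / k_F (hbar = c = 1)."""
    n_e = rho_fm3 * proton_fraction           # electron density, fm^-3
    k_f = (3.0 * np.pi**2 * n_e) ** (1.0 / 3) # electron Fermi momentum, fm^-1
    omega_p = np.sqrt(4.0 * np.pi * ALPHA * n_e / k_f)
    return 1.0 / (np.sqrt(3.0) * omega_p)

lam_sat  = screening_length_fm(0.16, 0.037)  # saturation density, x_p = 3.7%
lam_2sat = screening_length_fm(0.32, 0.08)   # twice saturation, x_p ~ 8%
```

This simple estimate gives a screening length of order 18 fm at saturation density and of order 11 fm at twice saturation density, consistent with the values quoted in the text.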
Here we notice only that the electron Thomas-Fermi screening length $ v_F / (\sqrt{3}\,\omega_p) $, where $\omega_p$ is the electron plasma frequency and $v_F$ the electron Fermi velocity, is about 18 fm at saturation density with the calculated proton fraction, and about 11 fm at twice saturation density. This is more than one order of magnitude larger than the average interparticle distance. However, the Coulomb interaction is in any case screened, and this is enough to suppress the proton plasma mode. \par We now introduce the nuclear interaction, including the proton-neutron nuclear coupling. The proton fraction is taken from BHF calculations, which include three-body forces and correctly reproduce the phenomenological saturation point. The corresponding nuclear interaction strengths are calculated according to Eq. (\ref{eq:vres-derU}). The values of these physical parameters are reported in ref. \cite{paper1}. The comparison with Skyrme forces will be discussed later.\par The spectral or strength function $ S(q,\omega) $, at given momentum $ q $ and energy $ \omega $, is directly related to the response function $ \Pi $ \begin{equation} S_i(q,\omega) \,=\, -{\rm Im} \big( \Pi_{ii}(q,\omega) \big) \label{eq:strength}\end{equation} \noindent where $ {\rm Im} $ indicates the imaginary part and the index $ i $ runs over the particle-hole configurations of neutrons, protons and electrons. In Fig. 1a are reported the strength functions of the three components at the matter density $ \rho \,=\,$ 0.16 fm$^{-3}$, for which the proton fraction is 3.7\%, and for the momentum $ q \,=\, $0.025 fm$^{-1}$. The pairing gap is assumed to be 1 MeV \cite{ppair}. \begin{figure} \vskip -8 cm \includegraphics[bb= 300 0 470 790,angle=0,scale=0.55]{Fig1a.ps} \includegraphics[bb= 80 0 230 790,angle=0,scale=0.55]{Fig1b.ps} \vskip -1 cm \caption{(Color online) Spectral functions of neutrons (black thick line), protons (red thin line) and electrons (green dashed line).
In panel a) are reported the spectral functions when the Coulomb and nuclear interactions are introduced. For comparison, in panel b) are reported the spectral functions when the neutron-proton interaction is neglected. For a closer comparison, the neutron spectral function of the latter case is also reported in panel a) (blue dotted line). } \label{fig:Fig1} \end{figure} The neutron strength function (SF) is of course much larger than the proton one, due to the higher density of states, and for convenience it has been divided by 15. For comparison, in Fig. 1b are reported the SF when the neutron-proton coupling is switched off. A few comments are in order. The proton SF (red thin line) is characterized by the pseudo-Goldstone mode below 2$\Delta$ and a pair-breaking mode above 2$\Delta$. Both modes persist when the neutron-proton interaction is switched on. However, the presence of the neutron component suppresses the strength of the pseudo-Goldstone mode by about a factor of two. Furthermore, the pseudo-Goldstone mode acquires an additional width, which is a consequence of the large spread of the neutron SF (black thick line). The neutron-proton coupling is apparent from the dip present in the neutron SF around the energy of the proton pseudo-Goldstone mode, and from the shift to lower energy of the neutron SF. Notice that the neutron SF, without proton coupling, is smooth and unstructured. The reason is that the neutron effective particle-hole interaction is attractive in this density range, so that the possible sound mode lies inside the particle-hole continuum and is therefore overdamped (Landau damping). In order to facilitate the comparison, this uncoupled SF is also reported in Fig. 1a (blue dotted line). Finally, the electron SF (green dashed line) follows closely the proton one, which indicates the full electron screening of the proton-proton Coulomb interaction. \par At higher momenta the neutron-proton coupling is stronger, as is apparent in Figs.
2a,b, where the spectral functions are displayed at the two higher momenta $ q \,=\, 0.05,\, 0.075 $\, fm$^{-1}$. In fact the neutron spectral function is strongly modified by the coupling with the proton component. Not only is a dip present around the energy of the proton pseudo-Goldstone mode, but the whole structure of the spectral function is strongly modified, which is a little surprising in view of the smallness of the proton fraction. It is also important to notice that the electron spectral function no longer closely follows the proton spectral function, since at higher momenta the proton velocity starts to be comparable with the electron Fermi velocity. \begin{figure} \vskip -8 cm \includegraphics[bb= 300 0 470 790,angle=0,scale=0.55]{Fig2a.ps} \includegraphics[bb= 80 0 230 790,angle=0,scale=0.55]{Fig2b.ps} \vskip -1 cm \caption{(Color online) Spectral functions as in Fig. 1a, for two other momenta.} \label{fig:Fig2} \end{figure} \par It is interesting to compare the spectral functions in the case of proton superfluidity with the ones in the normal system. We then fix the pairing gap at $ \Delta \,=\, $0.02 MeV. For not too small momenta the system is expected to behave as a normal system. In Figs. 3a,b are reported the spectral functions for two values of the momentum. If one compares Fig. 3a with Fig. 1a one notices a few apparent differences. First of all, for the small value of the gap there is a sharp peak in the electron spectral function, corresponding to the electron plasmon excitation. This feature indicates that, as already noticed in ref. \cite{paper2}, the electron plasma mode is damped by the presence of a pairing gap. The damping is due to the opening of a new plasmon decay channel, namely the coupling with the pair-breaking mode through the Coulomb interaction. The peak in the proton spectral function looks similar. However, the nature of the corresponding excitation is different.
For this small value of the pairing gap the excitation energy is well above $ 2\Delta $ and the mode is a sound mode instead of a pseudo-Goldstone mode. However, the neutron-proton coupling appears quite strong also in this case. Of course no pair-breaking mode is present in this case, as one can see from the absence of any appreciable proton strength at higher energy, in comparison with the large strength that appears just above $ 2\Delta $ in the case of $ \Delta \,=\, 1 $ MeV, as depicted in Fig. 1a. Notice that the electron plasmon peak does not appear in Fig. 3b because, at the indicated momentum, it lies outside the considered energy range. In any case, at high enough momentum the electron plasmon is washed out by the Landau damping. \begin{figure} \vskip -8 cm \includegraphics[bb= 300 0 470 790,angle=0,scale=0.55]{Fig3a.ps} \includegraphics[bb= 80 0 230 790,angle=0,scale=0.55]{Fig3b.ps} \vskip -1 cm \caption{(Color online) Spectral functions for the small pairing gap $ \Delta \,=\, $0.002 MeV. Symbols as in Figs. 1,2.} \label{fig:Fig3} \end{figure} \par It can be instructive to locate the position of the peak in the proton spectral function as a function of momentum. To be quantitative, one can extract the position of the peak as the zero of the real part of the inverse response function, that is, the zero of the real part of the matrix of Eq. (\ref{eq:RPA}). For a sharp mode this is indeed the energy of the mode. Since the uncoupled neutron component does not support in this case any well defined mode, its effect is only to produce an additional width of the proton excitation mode. In Fig. \ref{fig:Fig4} the energy of the peak so defined is reported as a function of the momentum, for an uncoupled neutron-proton system and for the coupled one. In the case of the uncoupled system one observes the expected \cite{paper2} smooth increase of the energy towards twice the pairing gap ($ \Delta \,=\,$ 1\, MeV).
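The extraction of the peak position as the zero of the real part of the inverse response function can be illustrated on a toy resonance. The response below is a generic damped oscillator, not the actual RPA response of the paper; the values of $\omega_0$ and $\gamma$ are hypothetical.

```python
# Toy illustration: locate a mode as the zero of Re[Pi^{-1}(omega)].
# For Pi(omega) = 1 / (omega0^2 - omega^2 - i*gamma*omega) the real part
# of the inverse vanishes exactly at omega = omega0.

def inverse_response_re(omega, omega0=1.2, gamma=0.3):
    pi = 1.0 / (omega0**2 - omega**2 - 1j * gamma * omega)
    return (1.0 / pi).real

def find_mode(lo, hi, tol=1e-10):
    """Bisection on Re[Pi^{-1}](omega) = 0 inside the bracket [lo, hi]."""
    f_lo = inverse_response_re(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_mid = inverse_response_re(mid)
        if abs(f_mid) < tol or hi - lo < tol:
            return mid
        if f_lo * f_mid > 0:
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

omega_peak = find_mode(0.5, 2.0)
```

For a sharp mode this zero coincides with the mode energy; for a broad mode, as in the coupled case discussed here, it only tracks the approximate position of the maximum of the strength.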
At variance with this behavior, in the case of the coupled system the energy of the mode seems to saturate to a value which is a fraction of 2$\Delta$. It has to be stressed, however, that the width of the excitation mode is quite substantial and the position of the peak is largely undefined. \par \begin{center} \begin{figure} \vskip -7 cm \includegraphics[bb= 200 0 360 790,angle=0,scale=0.5]{Fig4.ps} \vskip -1. cm \caption{(Color online) Energy of the zero of the real part of the inverse of the response function at different momenta $ q $. The pairing gap is $ \Delta \,=\,$ 1 MeV. The red full line and full squares correspond to the case when the neutrons and the protons are uncoupled. The green dashed line and the full circles correspond to the case when the interaction between neutrons and protons is switched on.} \label{fig:Fig4} \end{figure} \end{center} \par It is interesting to compare these results, obtained within the BHF microscopic approach, with the ones that can be obtained with Skyrme functionals. In Fig. \ref{fig:Fig5} are reported the spectral functions for two Skyrme functionals, the SLy230a \cite{SLya} and the NRAPR \cite{NRAPR}. It is apparent that the neutron-proton coupling is substantially smaller in both cases with respect to the microscopic one. Notice anyhow that the SLy230a gives an attractive neutron-neutron interaction, while for the NRAPR it is repulsive. In the latter case, in fact, a well defined neutron peak is present in the spectral function. This shows that the spectral function is quite sensitive to the choice of the Skyrme functional.\par At twice saturation density the spectral function develops as illustrated in Fig. \ref{fig:Fig6} for the BHF estimate of the nucleon-nucleon interaction. In panel (a) the spectral function is reported for no neutron-proton coupling. At this density the neutron-neutron interaction is repulsive and a sharp delta-like peak is present for the neutron component.
The picture changes only slightly when the neutron-proton coupling is switched on, panel (b). The only effect is a small width acquired by the neutron peak; no other neutron-proton coupling effect is apparent. This means that the proton and neutron components are excited independently, despite the fact that the proton fraction has increased to about $ 8\% $. It turns out also that the two Skyrme functionals give a repulsive neutron-neutron effective interaction at this density, and the spectral functions are quite similar to the microscopic one. \begin{figure} \vskip -8 cm \includegraphics[bb= 300 0 470 790,angle=0,scale=0.55]{Fig5a.ps} \includegraphics[bb= 80 0 230 790,angle=0,scale=0.55]{Fig5b.ps} \vskip -1 cm \caption{(Color online) Spectral functions for the Skyrme force SLy230a, panel (a), and for the functional NRAPR, panel (b). The pairing gap is $ \Delta \,=\, $1 MeV. Symbols as in Figs. 1,2,3.} \label{fig:Fig5} \end{figure} \begin{figure} \vskip -8 cm \includegraphics[bb= 300 0 470 790,angle=0,scale=0.55]{Fig6a.ps} \includegraphics[bb= 80 0 230 790,angle=0,scale=0.55]{Fig6b.ps} \vskip -1 cm \caption{(Color online) Spectral function within the microscopic approach to the effective nuclear interaction without neutron-proton coupling at twice saturation density, panel (a). The same, but with the neutron-proton interaction included, panel (b). The pairing gap is $ \Delta \,=\, $1 MeV. Symbols as in Figs. 1,2,3.} \label{fig:Fig6} \end{figure} \par \section{Summary and conclusions. \label{sec:conc}} In the region below the crust of a neutron star the dense matter is expected to be composed mainly of neutrons, protons and electrons. The collective elementary excitations of the medium are determined in general by the coupling among the three components, and the corresponding spectral function has three components. We have studied the spectral function of the density excitations, separating the neutron, proton and electron strength functions.
We focused on the density region where protons are expected to be superfluid, while neutrons are expected to be normal or to have a very small pairing gap. The generalized RPA scheme in the Landau monopole approximation was used for the calculation of the density-density spectral function, which describes the response of each component to an external probe. A key quantity for this analysis is the nuclear interaction among the nucleons, which was derived by adopting the energy calculated in the Brueckner-Hartree-Fock approximation as an energy density functional for nuclear matter. The microscopic BHF calculation includes three-body forces and for symmetric matter it reproduces the phenomenological saturation point. Two Skyrme functionals were also considered for comparison. The inclusion of the electron-proton Coulomb interaction is essential, since the electron screening of the proton-proton Coulomb interaction strongly affects the excitation branch corresponding to the pseudo-Goldstone mode associated with the proton superfluidity. Although the proton fraction is only a few percent, the neutron component of the spectral function is substantially affected by the neutron-proton interaction, while the proton component still presents a peak at the position of the pseudo-Goldstone mode. At twice saturation density one observes an essential decoupling between neutrons and protons, both for the microscopic and for the Skyrme functionals. In this case all the considered interactions give a repulsive neutron-neutron effective interaction, and the neutron component of the spectral function displays a sharp peak corresponding to a well defined excitation mode. \par Finally, the calculations can be extended to the spin-density excitations, but in this case the modes should be much less collective, as expected from the small value of the corresponding Landau parameter in symmetric matter \cite{Dick,BF}.